Sample records for random parameters application

  1. Dynamic defense and network randomization for computer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Adrian R.; Stout, William M. S.; Hamlet, Jason R.

    The various technologies presented herein relate to determining that a network attack is taking place, and further to adjusting one or more network parameters such that the network becomes dynamically configured. A plurality of machine learning algorithms are configured to recognize an active attack pattern. Notification of the attack can be generated, and knowledge gained from the detected attack pattern can be utilized to improve the ability of the algorithms to detect subsequent attack vectors. Further, network settings and application communications can be dynamically randomized, wherein artificial diversity converts control systems into moving targets that help mitigate the early reconnaissance stages of an attack. An attack based upon a known static address of a critical infrastructure network device can be mitigated by the dynamic randomization. Network parameters that can be randomized include IP addresses, application port numbers, paths data packets navigate through the network, application randomization, etc.
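
    As a purely illustrative sketch of the dynamic randomization idea (not the patented system; the service names, address pool, and port range below are hypothetical), the following Python snippet re-draws an (IP address, port) assignment for a set of services on every randomization epoch, so that a previously scouted static address goes stale:

```python
import random

# Hypothetical moving-target sketch: periodically re-randomize the network
# parameters (IP address and port) assigned to each service.
PRIVATE_POOL = [f"10.0.{i}.{j}" for i in range(1, 5) for j in range(1, 255)]

def rerandomize(services, seed=None):
    """Assign each service a fresh (IP, port) pair drawn at random."""
    rng = random.Random(seed)
    addresses = rng.sample(PRIVATE_POOL, len(services))
    return {name: (addr, rng.randint(20000, 60000))
            for name, addr in zip(services, addresses)}

# The mapping changes on every re-randomization epoch.
print(rerandomize(["historian", "plc-gateway", "hmi"]))
print(rerandomize(["historian", "plc-gateway", "hmi"]))
```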

  2. Multi-parameter fiber optic sensors based on fiber random grating

    NASA Astrophysics Data System (ADS)

    Xu, Yanping; Zhang, Mingjiang; Lu, Ping; Mihailov, Stephen; Bao, Xiaoyi

    2017-04-01

    Two novel configurations of multi-parameter fiber-optic sensing systems based on the fiber random grating are reported. The fiber random grating is fabricated through femtosecond-laser-induced refractive index modification over a 10 cm standard telecom single-mode fiber. In one configuration, the reflective spectrum of the fiber random grating is directly detected and a wavelength-division spectral cross-correlation algorithm is adopted to extract the spectral shifts for simultaneous measurement of temperature, axial strain, and surrounding refractive index. In the other configuration, a random fiber ring laser is constructed by incorporating the random feedback from the random grating. Numerous polarization-dependent spectral filters are formed along the random grating and superimposed to provide multiple lasing lines with a high signal-to-noise ratio of up to 40 dB, which enables a high-fidelity multi-parameter sensing scheme by monitoring the spectral shifts of the lasing lines. Because no phase mask is needed for fabrication and the physical strength is high, the random-grating-based sensors are simpler and more compact, making them a potentially excellent alternative for liquid medical sample sensing in biomedical and biochemical applications.
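
    The wavelength-division spectral cross-correlation step can be illustrated with a short, self-contained sketch: the spectra below are synthetic Gaussian peaks rather than random-grating reflection spectra, and the grid spacing and shift are assumptions made only for the example.

```python
import numpy as np

# Estimate a spectral shift by cross-correlating a measured spectrum against
# a reference spectrum (toy Gaussian peaks in place of random-grating data).
wl = np.linspace(1540.0, 1560.0, 4001)          # wavelength grid, nm
ref = np.exp(-((wl - 1550.0) / 0.5) ** 2)       # reference spectrum
meas = np.exp(-((wl - 1550.12) / 0.5) ** 2)     # spectrum shifted by 0.12 nm

xcorr = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
lag = np.argmax(xcorr) - (len(wl) - 1)          # lag in samples
print(f"estimated shift: {lag * (wl[1] - wl[0]):.3f} nm")   # ~0.120 nm
```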

  3. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.

  4. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
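
    The report itself is not reproduced here, but a minimal sketch of a creeping random search, with the kind of step-size adaptation that speeds convergence, is easy to state; the cost function, step factors, and the two-parameter test problem below are illustrative assumptions, not Parrish's exact algorithm.

```python
import random

def creeping_random_search(cost, x0, step=0.5, grow=1.2, shrink=0.95,
                           iters=2000, seed=0):
    """Minimal creeping random search: perturb the current best point with a
    small random step; widen the step after a success, tighten it after a
    failure (a simple modification that speeds convergence)."""
    rng = random.Random(seed)
    best, best_cost = list(x0), cost(x0)
    for _ in range(iters):
        trial = [b + rng.uniform(-step, step) for b in best]
        c = cost(trial)
        if c < best_cost:
            best, best_cost, step = trial, c, step * grow
        else:
            step *= shrink
    return best, best_cost

# Toy two-parameter identification problem: recover (a, b) in y = a*x + b.
xs = [i * 0.1 for i in range(50)]
ys = [2.0 * x - 1.0 for x in xs]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in zip(xs, ys))
print(creeping_random_search(sse, [0.0, 0.0]))   # approaches (2.0, -1.0)
```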

  5. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  6. Estimation of Random Medium Parameters from 2D Post-Stack Seismic Data and Its Application in Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.

    2015-12-01

    Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as autocorrelation length, angle and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is strongly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor and so on. Moreover, errors in computing the autocorrelation for a finite, discrete model further reduce the accuracy. In order to improve the precision of RMP estimation, we design two improved approaches. First, we apply a region-growing algorithm, commonly used in image processing, to reduce the influence of noise in the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments show that the approach is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate RMP estimates to construct the initial model yields better inversion results, containing more details consistent with the actual subsurface medium.

  7. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
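
    A minimal sketch of the sampling scheme described above (one localized measurement per row: pick a random center pixel, then include nearby pixels with a probability that decays with distance) can be written directly; the Gaussian decay profile, image size, and scale parameter are assumptions for illustration, not the authors' exact protocol.

```python
import numpy as np

def localized_random_rows(n_rows, width, height, scale=3.0, seed=0):
    """Each measurement row samples a random center pixel and its neighbors,
    included with probability exp(-d^2 / scale^2)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    rows = np.zeros((n_rows, height * width))
    for i in range(n_rows):
        cy, cx = rng.integers(height), rng.integers(width)
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        mask = rng.random((height, width)) < np.exp(-d2 / scale ** 2)
        rows[i] = mask.ravel().astype(float)
    return rows

Phi = localized_random_rows(n_rows=256, width=32, height=32)
print(Phi.shape, Phi.sum(axis=1).mean())   # (256, 1024), avg pixels per row
```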

  8. Nimbus 6 Random Access Measurement System applications experiments

    NASA Technical Reports Server (NTRS)

    Cote, C. E. (Editor); Taylor, R. (Editor); Gilbert, E. (Editor)

    1982-01-01

    The advantages of a technique in which data collection platforms randomly transmit signals to a polar-orbiting satellite, thus eliminating satellite interrogation, are demonstrated in investigations of the atmosphere; oceanographic parameters; Arctic regions and ice conditions; navigation and position location; and data buoy development.

  9. From micro-correlations to macro-correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2016-11-15

    Random vectors with a symmetric correlation structure share a common value of pair-wise correlation between their different components. The symmetric correlation structure appears in a multitude of settings, e.g. mixture models. In a mixture model the components of the random vector are drawn independently from a general probability distribution that is determined by an underlying parameter, and the parameter itself is randomized. In this paper we study the overall correlation of high-dimensional random vectors with a symmetric correlation structure. Considering such a random vector, and terming its pair-wise correlation “micro-correlation”, we use an asymptotic analysis to derive the random vector’s “macro-correlation”: a score that takes values in the unit interval, and that quantifies the random vector’s overall correlation. The method of obtaining macro-correlations from micro-correlations is then applied to a diverse collection of frameworks that demonstrate the method’s wide applicability.

  10. On the existence, uniqueness, and asymptotic normality of a consistent solution of the likelihood equations for nonidentically distributed observations: Applications to missing data problems

    NASA Technical Reports Server (NTRS)

    Peters, C. (Principal Investigator)

    1980-01-01

    A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.

  11. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function of the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. First, it allows the probability density function of a parameter that cannot be measured directly, the tension parameter of the vocal folds, to be updated. Second, the likelihood function, which is generally predefined using a known pdf, is here constructed in a new and different manner, using the system itself.
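
    The Bayes update itself reduces to a small amount of code once a forward model is available. The sketch below uses a stand-in linear model for the fundamental frequency and a Gaussian scoring of the measurement (both assumptions made for illustration; the study uses a mechanical vocal-fold model and experimentally estimated pdfs):

```python
import numpy as np

rng = np.random.default_rng(1)

def fundamental_frequency(q, noise):
    return 100.0 + 80.0 * q + noise            # hypothetical forward model, Hz

q_grid = np.linspace(0.0, 1.0, 201)            # support of the uniform prior
dq = q_grid[1] - q_grid[0]
prior = np.ones_like(q_grid)

# Monte Carlo construction of the likelihood: simulate f0 realizations for
# each tension value q and score them against the measured frequency.
f0_measured = 155.0
likelihood = np.empty_like(q_grid)
for i, q in enumerate(q_grid):
    f0 = fundamental_frequency(q, rng.normal(0.0, 5.0, size=2000))
    likelihood[i] = np.mean(np.exp(-0.5 * ((f0_measured - f0) / 2.0) ** 2))

posterior = prior * likelihood
posterior /= posterior.sum() * dq              # normalized posterior pdf
print("posterior mean of q:", round(float((q_grid * posterior).sum() * dq), 3))
```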

  12. An uncertainty model of acoustic metamaterials with random parameters

    NASA Astrophysics Data System (ADS)

    He, Z. C.; Hu, J. Y.; Li, Eric

    2018-01-01

    Acoustic metamaterials (AMs) are man-made composite materials. However, the random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors which lead to the variance of the physical responses of AMs. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of physical responses of AMs with random parameters. Three types of physical responses including the band structure, mode shapes and frequency response function of AMs are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using the first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships of parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are validated by Monte Carlo method successfully.
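
    The change-of-variable step can be made concrete with a one-parameter example: if a response is linearized as y = y0 + g(x - x0) by the first-order Taylor expansion, its pdf is f_Y(y) = f_X(x0 + (y - y0)/g)/|g|. The numbers below are illustrative, not taken from the paper, and a Monte Carlo histogram is used only as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(0)
x0, sigma_x = 1.0, 0.05            # nominal parameter value and its spread
y0, g = 2.0, 3.0                   # nominal response and sensitivity dy/dx

f_x = lambda x: np.exp(-0.5 * ((x - x0) / sigma_x) ** 2) / (sigma_x * np.sqrt(2 * np.pi))
f_y = lambda y: f_x(x0 + (y - y0) / g) / abs(g)          # change of variable

# Monte Carlo check of the analytic pdf at one point of the response axis.
y_mc = y0 + g * (rng.normal(x0, sigma_x, 500000) - x0)
hist, edges = np.histogram(y_mc, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
k = np.argmin(np.abs(centers - 2.1))
print("analytic pdf at y=2.1:", round(f_y(2.1), 3), " MC estimate:", round(hist[k], 3))
```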

  13. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.

  14. Detecting phase transitions in a neural network and its application to classification of syndromes in traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Chen, J.; Xi, G.; Wang, W.

    2008-02-01

    Detecting phase transitions in neural networks (deterministic or random) is a challenging subject because phase transitions play a key role in human brain activity. In this paper, we numerically detect phase transitions in two types of random neural networks (RNNs) under suitable parameters.

  15. Economical analysis of saturation mutagenesis experiments

    PubMed Central

    Acevedo-Rocha, Carlos G.; Reetz, Manfred T.; Nov, Yuval

    2015-01-01

    Saturation mutagenesis is a powerful technique for engineering proteins, metabolic pathways and genomes. In spite of its numerous applications, creating high-quality saturation mutagenesis libraries remains a challenge, as various experimental parameters influence the resulting diversity in a complex manner. We explore various aspects of saturation mutagenesis library preparation from an economic perspective: we introduce a cheaper and faster control for assessing library quality based on liquid media; analyze the role of primer purity and supplier in libraries with and without redundancy; compare library quality, yield, randomization efficiency, and annealing bias using traditional and emergent randomization schemes based on mixtures of mutagenic primers; and establish a methodology for choosing the most cost-effective randomization scheme given the screening costs and other experimental parameters. We show that by carefully considering these parameters, laboratory expenses can be significantly reduced. PMID:26190439

  16. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
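
    The flavor of a Wang-Landau random walk over a single macroscopic order parameter can be shown in a few lines. The sketch below walks over the magnetization of N non-interacting spins, for which the exact density of states is the binomial coefficient, so the estimate can be checked; it is an assumption-laden miniature, not the constrained multivariable scheme or the spin-crossover model of the paper, and the histogram flatness check is omitted.

```python
import math, random

random.seed(2)
N = 20
spins = [random.choice([0, 1]) for _ in range(N)]
M = sum(spins)                     # macroscopic order parameter: # of up spins
log_g = [0.0] * (N + 1)            # running estimate of ln g(M)
ln_f = 1.0                         # modification factor

for stage in range(14):
    hist = [0] * (N + 1)
    for _ in range(100000):
        i = random.randrange(N)
        M_new = M + (1 - 2 * spins[i])                 # effect of flipping spin i
        if random.random() < math.exp(log_g[M] - log_g[M_new]):
            spins[i] ^= 1                              # accept the flip
            M = M_new
        log_g[M] += ln_f                               # update density of states
        hist[M] += 1
    ln_f /= 2.0

exact = [math.lgamma(N + 1) - math.lgamma(m + 1) - math.lgamma(N - m + 1)
         for m in range(N + 1)]
shift = log_g[N // 2] - exact[N // 2]
print([round(lg - shift - e, 2) for lg, e in zip(log_g, exact)])   # ≈ 0 everywhere
```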

  17. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    PubMed

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    A statistical model is proposed in this paper to describe the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable to the case when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria exhibit significant "active random movement", i.e. randomizing motion of non-thermal origin, for example, movement by means of flagella. The energy of this randomizing active self-motion is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
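
    The core of the hybrid optimization, minimizing the out-of-bag error over random forest parameters, can be sketched with scikit-learn; a plain random search stands in for the genetic, particle-swarm, and fish-swarm searches, the synthetic data replace the UCI sets, and the CURE-SMOTE resampling step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Tune random forest parameters by minimizing the out-of-bag (OOB) error.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
rng = np.random.default_rng(0)

best = None
for _ in range(20):
    params = {"n_estimators": int(rng.integers(50, 300)),
              "max_features": int(rng.integers(2, 20))}
    rf = RandomForestClassifier(oob_score=True, random_state=0, **params)
    rf.fit(X, y)
    oob_error = 1.0 - rf.oob_score_
    if best is None or oob_error < best[0]:
        best = (oob_error, params)

print("minimum OOB error:", round(best[0], 3), "with", best[1])
```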

  19. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will be valuable for future measurements of galaxy clustering.

  20. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.

  1. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function to correct bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimating selectivity parameters are discussed.

  2. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
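
    The traditional search-curve sampling referred to above admits a compact sketch: every parameter is driven along the curve x_i(s) = 1/2 + arcsin(sin(ω_i s))/π and main-effect partial variances are read off the Fourier spectrum of the output. The frequency set, harmonic order, and toy linear model below are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

omega = np.array([11, 21, 27, 35, 39])        # one driving frequency per parameter
n = 2 * 4 * omega.max() + 1                   # sample size for 4 harmonics
s = np.pi * (2 * np.arange(1, n + 1) - n - 1) / n

# Search-curve transformation to [0, 1] samples for each parameter.
X = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi

Y = X @ np.array([1.0, 0.5, 0.2, 0.1, 0.05])  # toy model: weighted sum

# Main-effect partial variance of parameter i from the harmonics of omega_i.
A = np.fft.rfft(Y) / n
total_var = Y.var()
for i, w in enumerate(omega):
    partial = 2 * sum(abs(A[p * w]) ** 2 for p in (1, 2, 3, 4))
    print(f"parameter {i}: main-effect sensitivity ≈ {partial / total_var:.2f}")
```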

  3. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  4. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  5. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.

  6. Compressed sensing: Radar signal detection and parameter measurement for EW applications

    NASA Astrophysics Data System (ADS)

    Rao, M. Sreenivasa; Naik, K. Krishna; Reddy, K. Maheshwara

    2016-09-01

    State-of-the-art system development is very much required for UAVs (unmanned aerial vehicles) and other airborne applications, where miniature, lightweight and low-power specifications are essential. Currently, airborne Electronic Warfare (EW) systems are developed with digital receiver technology using Nyquist sampling. The detection of radar signals and parameter measurement is a necessary requirement in EW digital receivers. The Random Modulator Pre-Integrator (RMPI) can be used for matched detection of signals using a smashed filter. RMPI hardware eliminates the high-sampling-rate analog-to-digital converter and reduces the number of samples using random sampling and detection of sparse orthonormal basis vectors. RMPI exploits the structural and geometrical properties of the signal, beyond traditional time- and frequency-domain analysis, for improved detection. The concept has been proved with the help of MATLAB and LabVIEW simulations.
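
    The measurement model behind a random modulator pre-integrator is compact enough to sketch: the Nyquist-rate signal is multiplied by a pseudorandom ±1 chipping sequence and integrated over blocks, so only a fraction of the samples is digitized. The signal, sizes, and chipping sequence below are illustrative, not the hardware described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 64                         # Nyquist samples vs. compressed samples
t = np.arange(n)
x = np.cos(2 * np.pi * 0.173 * t)       # frequency-sparse test signal

chips = rng.choice([-1.0, 1.0], size=n)        # random modulator
y = (chips * x).reshape(m, n // m).sum(axis=1) # block integration -> m samples

# Equivalent measurement matrix Phi (y = Phi @ x), usable with any standard
# sparse-recovery solver for reconstruction or smashed-filter detection.
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, i * (n // m):(i + 1) * (n // m)] = chips[i * (n // m):(i + 1) * (n // m)]
print(np.allclose(y, Phi @ x))          # True
```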

  7. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  8. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high-resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential of using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model biophysical parameters.
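
    The modelling workflow (spectral predictors in, biophysical parameter out, scored by R² and RMSE with variable importances) can be sketched with scikit-learn; the data below are simulated stand-ins, not the Landsat 8 scenes or in-situ maize measurements of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bands = rng.uniform(0.0, 0.4, size=(500, 6))          # 6 synthetic reflectances
ndvi = (bands[:, 4] - bands[:, 3]) / (bands[:, 4] + bands[:, 3] + 1e-6)
lai = 6.0 * np.clip(ndvi, 0.0, None) + rng.normal(0.0, 0.3, 500)   # LAI-like target

X_tr, X_te, y_tr, y_te = train_test_split(bands, lai, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R² =", round(r2_score(y_te, pred), 2),
      " RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 2))
print("band importances:", np.round(rf.feature_importances_, 2))
```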

  9. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  10. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
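
    The error-propagation application mentioned above needs only a few lines in practice: sample the measurements, push the samples through the nonlinear transformation, and read off the expectation and covariance matrix, with no Jacobian required. The polar-to-Cartesian transformation and the numbers below are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([100.0, np.radians(30.0)])         # measured distance and angle
cov = np.diag([0.5 ** 2, np.radians(0.2) ** 2])    # measurement covariance

samples = rng.multivariate_normal(mean, cov, size=100000)
xy = np.column_stack([samples[:, 0] * np.cos(samples[:, 1]),
                      samples[:, 0] * np.sin(samples[:, 1])])   # nonlinear transform

print("expectation:", xy.mean(axis=0))
print("covariance:\n", np.cov(xy, rowvar=False))
```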

  11. Multiobjective optimization in structural design with uncertain parameters and stochastic processes

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.

  12. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889

  13. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters

    PubMed Central

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830

  14. Computer-Aided Screening of Conjugated Polymers for Organic Solar Cell: Classification by Random Forest.

    PubMed

    Nagasawa, Shinji; Al-Naamani, Eman; Saeki, Akinori

    2018-05-17

    Owing to the diverse chemical structures, organic photovoltaic (OPV) applications with a bulk heterojunction framework have greatly evolved over the last two decades, which has produced numerous organic semiconductors exhibiting improved power conversion efficiencies (PCEs). Despite the recent fast progress in materials informatics and data science, data-driven molecular design of OPV materials remains challenging. We report a screening of conjugated molecules for polymer-fullerene OPV applications by supervised learning methods (artificial neural network (ANN) and random forest (RF)). Approximately 1000 experimental parameters including PCE, molecular weight, and electronic properties are manually collected from the literature and subjected to machine learning with digitized chemical structures. Contrary to the low correlation coefficient in ANN, RF yields an acceptable accuracy, which is twice that of random classification. We demonstrate the application of RF screening for the design, synthesis, and characterization of a conjugated polymer, which facilitates a rapid development of optoelectronic materials.

  15. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure of feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance comparing favorably with a much more complicated digital aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  16. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  17. Reference clock parameters for digital communications systems applications

    NASA Technical Reports Server (NTRS)

    Kartaschoff, P.

    1981-01-01

    The basic parameters relevant to the design of network timing systems describe the random and systematic time departures of the system elements, i.e., master (or reference) clocks, transmission links, and other clocks controlled over the links. The quantitative relations between these parameters were established and illustrated by means of numerical examples based on available measured data. The examples were limited to a simple PLL control system but the analysis can eventually be applied to more sophisticated systems at the cost of increased computational effort.

  18. Review of Random Phase Encoding in Volume Holographic Storage

    PubMed Central

    Su, Wei-Chia; Sun, Ching-Cherng

    2012-01-01

    Random phase encoding is a unique technique for volume hologram which can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  19. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  20. A system identification technique based on the random decrement signatures. Part 2: Experimental results

    NASA Technical Reports Server (NTRS)

    Bedewi, Nabih E.; Yang, Jackson C. S.

    1987-01-01

    Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.

  1. Studies of the DIII-D disruption database using Machine Learning algorithms

    NASA Astrophysics Data System (ADS)

    Rea, Cristina; Granetz, Robert; Meneghini, Orso

    2017-10-01

    A Random Forests machine learning algorithm, trained on a large database of both disruptive and non-disruptive DIII-D discharges, predicts disruptive behavior in DIII-D with about 90% accuracy. Several algorithms have been tested, and Random Forests was found to be superior in performance for this particular task. Over 40 plasma parameters are included in the database, with data for each of the parameters taken from 500k time slices. We focused on a subset of non-dimensional plasma parameters deemed to be good predictors based on physics considerations. Both binary (disruptive/non-disruptive) and multi-label (label based on the elapsed time before disruption) classification problems are investigated. The Random Forests algorithm provides insight on the available dataset by ranking the relative importance of the input features. It is found that q95 and the Greenwald density fraction (n/nG) are the most relevant parameters for discriminating between DIII-D disruptive and non-disruptive discharges. A comparison with the Gradient Boosted Trees algorithm is shown, and the first results from the application of regression algorithms are presented. Work supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0014264 and DE-FG02-95ER54309.

  2. Molybdenum disulfide and water interaction parameters

    NASA Astrophysics Data System (ADS)

    Heiranian, Mohammad; Wu, Yanbin; Aluru, Narayana R.

    2017-09-01

    Understanding the interaction between water and molybdenum disulfide (MoS2) is of crucial importance to investigate the physics of various applications involving MoS2 and water interfaces. An accurate force field is required to describe water and MoS2 interactions. In this work, water-MoS2 force field parameters are derived using the high-accuracy random phase approximation (RPA) method and validated by comparing to experiments. The parameters obtained from the RPA method result in water-MoS2 interface properties (solid-liquid work of adhesion) in good comparison to the experimental measurements. An accurate description of MoS2-water interaction will facilitate the study of MoS2 in applications such as DNA sequencing, sea water desalination, and power generation.

  3. Descriptive parameter for photon trajectories in a turbid medium

    NASA Astrophysics Data System (ADS)

    Gandjbakhche, Amir H.; Weiss, George H.

    2000-06-01

    In many applications of laser techniques for diagnostic or therapeutic purposes it is necessary to be able to characterize photon trajectories to know which parts of the tissue are being interrogated. In this paper, we consider the cw reflectance experiment on a semi-infinite medium with uniform optical parameters and having a planar interface. The analysis is carried out in terms of a continuous-time random walk and the relation between the occupancy of a plane parallel to the surface to the maximum depth reached by the random walker is studied. The first moment of the ratio of average depth to the average maximum depth yields information about the volume of tissue interrogated as well as giving some indication of the region of tissue that gets the most light. We have also calculated the standard deviation of this random variable. It is not large enough to qualitatively affect information contained in the first moment.

  4. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed lay halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they were seldom used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
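
    Two of the ingredients discussed above, the Box-Cox transform inside the utility and the simulation of choice probabilities under a random coefficient, are easy to sketch; the attribute values, coefficient distribution, and Box-Cox exponent below are illustrative assumptions rather than estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_cox(x, lam):
    return (x ** lam - 1.0) / lam if abs(lam) > 1e-8 else np.log(x)

time = np.array([30.0, 45.0])          # travel times of two alternatives, min
cost = np.array([4.0, 2.5])            # travel costs of two alternatives
lam = 0.6                              # Box-Cox exponent (estimated in practice)

# Mixed logit: the cost coefficient is random, beta_cost ~ Normal(-0.8, 0.3).
draws = rng.normal(-0.8, 0.3, size=5000)
V = -0.05 * box_cox(time, lam) + np.outer(draws, cost)   # utilities per draw
P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)     # logit within each draw
print("simulated choice probabilities:", P.mean(axis=0)) # average over draws
```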

  5. Random analysis of bearing capacity of square footing using the LAS procedure

    NASA Astrophysics Data System (ADS)

    Kawa, Marek; Puła, Wojciech; Suska, Michał

    2016-09-01

    In the present paper, a three-dimensional problem of bearing capacity of a square footing on a random soil medium is analyzed. The random fields of strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure used is re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of soil are random in the vertical direction only. Individual realizations of the bearing capacity boundary problem, with strength parameters of the medium defined by the above procedure, are solved using the FLAC3D software. The analysis is performed for two qualitatively different cases, namely for the purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square footing bearing capacity results have been obtained for the range of fluctuation scales from 0.5 m to 10 m. Each time 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of application of this function for reliability calculation has been presented in the final part of the paper.
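
    As a rough illustration of the Monte Carlo workflow described above (not the authors' implementation, which couples the LAS generator with FLAC3D), the sketch below draws one-dimensional, vertically correlated lognormal cohesion profiles from an assumed exponential covariance model and passes each realization to a placeholder bearing-capacity function; the field parameters and the crude bearing-capacity estimate are purely illustrative.

```python
import numpy as np

def lognormal_field_1d(n, dz, scale_fluct, mean_c, cov_c, rng):
    """1-D vertically correlated lognormal field for cohesion.

    A stand-in for the LAS generator used in the paper: the correlated
    Gaussian field is obtained here from a Cholesky factor of an assumed
    exponential (Markov) covariance with fluctuation scale `scale_fluct`.
    """
    z = np.arange(n) * dz
    rho = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / scale_fluct)
    sigma_ln = np.sqrt(np.log(1.0 + cov_c**2))           # lognormal parameters from mean and COV
    mu_ln = np.log(mean_c) - 0.5 * sigma_ln**2
    g = np.linalg.cholesky(rho + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
    return np.exp(mu_ln + sigma_ln * g)

def bearing_capacity(c_field):
    """Placeholder for the FLAC3D boundary-value solution (hypothetical).

    A crude N_c * c estimate over the top layers for a purely cohesive soil;
    the real study solves each realization with a 3-D numerical model.
    """
    return 6.17 * c_field[:5].mean()

rng = np.random.default_rng(1)
q_ult = [bearing_capacity(lognormal_field_1d(n=20, dz=0.25, scale_fluct=2.0,
                                             mean_c=50.0, cov_c=0.3, rng=rng))
         for _ in range(1000)]
print(np.mean(q_ult), np.std(q_ult))    # Monte Carlo mean and spread of the bearing capacity
```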

  6. Mixing rates and limit theorems for random intermittent maps

    NASA Astrophysics Data System (ADS)

    Bahsoun, Wael; Bose, Christopher

    2016-04-01

    We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {T_α} using, in general, the full parameter range 0 < α < ∞. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.

  7. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus attention on high-risk situations and develop safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among the specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases as crash counts are divided into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
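
    To make the zero-inflation idea concrete, the sketch below evaluates a univariate zero-inflated negative binomial probability mass function, which is only a single building block of the multivariate random-parameters model estimated in the paper; the parameter names and values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(y, p_zero, mu, alpha):
    """Zero-inflated negative binomial pmf (univariate building block).

    p_zero is the extra-zero (inflation) probability, mu the NB mean and
    alpha the overdispersion parameter; the multivariate random-parameters
    version in the paper additionally correlates these across crash types.
    """
    r = 1.0 / alpha                  # NB "size" parameter
    q = r / (r + mu)                 # success probability in scipy's parameterization
    base = nbinom.pmf(y, r, q)
    return np.where(y == 0, p_zero + (1 - p_zero) * base, (1 - p_zero) * base)

counts = np.arange(6)
print(zinb_pmf(counts, p_zero=0.3, mu=1.5, alpha=0.8))   # probabilities for 0..5 crashes
```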

  8. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  9. A Numerical Study of New Logistic Map

    NASA Astrophysics Data System (ADS)

    Khmou, Youssef

    In this paper, we propose a new logistic map based on the information entropy relation, and we study its bifurcation diagram in comparison with that of the standard logistic map. In the first part, we compare the diagram obtained by numerical simulations with that of the standard logistic map. It is found that the structures of both diagrams are similar, where the range of the growth parameter is restricted to the interval [0,e]. In the second part, we present an application of the proposed map in traffic flow using a macroscopic model. It is found that the bifurcation diagram is an exact model of Greenberg’s model of traffic flow, where the growth parameter corresponds to the optimal velocity and the random sequence corresponds to the density. In the last part, we present a second possible application of the proposed map, which consists of random number generation. The results of the analysis show that the excluded initial values of the sequences are (0,1).
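
    For readers who want the comparison baseline, the sketch below plots the bifurcation diagram of the standard logistic map only; the entropy-based map proposed in the paper is not specified in this record and is therefore not reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

# Bifurcation diagram of the standard logistic map x_{n+1} = r * x_n * (1 - x_n).
r_values = np.linspace(2.5, 4.0, 800)
x = 0.5 * np.ones_like(r_values)

for _ in range(500):          # discard transients
    x = r_values * x * (1.0 - x)

points_r, points_x = [], []
for _ in range(100):          # record the attractor
    x = r_values * x * (1.0 - x)
    points_r.append(r_values.copy())
    points_x.append(x.copy())

plt.plot(np.concatenate(points_r), np.concatenate(points_x), ',k', alpha=0.3)
plt.xlabel('growth parameter r')
plt.ylabel('x')
plt.title('Standard logistic map bifurcation diagram')
plt.show()
```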

  10. Novel core-shell (TiO2@Silica) nanoparticles for scattering medium in a random laser: higher efficiency, lower laser threshold and lower photodegradation.

    PubMed

    Jimenez-Villar, Ernesto; Mestre, Valdeci; de Oliveira, Paulo C; de Sá, Gilberto F

    2013-12-21

    There has been growing interest in scattering media in recent years, due to their potential applications as solar collectors, photocatalyzers, random lasers and other novel optical devices. Here, we have introduced a novel core-shell scattering medium for a random laser composed of TiO2@Silica nanoparticles. Higher efficiency, a lower laser threshold and a longer photobleaching lifetime were demonstrated in random lasers. This has introduced a new method or parameter (fraction of absorbed pumping), which opens a new avenue for characterizing and studying scattering media. Optical, chemical and colloidal stabilities were combined by coating a suitable silica shell onto TiO2 nanoparticles.

  11. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
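
    A minimal sketch of a probabilistic (random utility) discounting model of the kind described above is given below; the hyperbolic value function, the logistic choice rule whose sensitivity parameter stands in for choice variability, and all amounts, delays, and starting values are assumptions made for the example rather than the authors' exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def choice_log_lik(A_ss, D_ss, A_ll, D_ll, choose_ll, k, s):
    """Log-likelihood of a probabilistic hyperbolic-discounting model.

    Subjective values follow V = A / (1 + k*D); the probability of choosing
    the larger-later option is a logistic function of the value difference
    with sensitivity s, so 1/s plays the role of choice variability.
    """
    v_ss = A_ss / (1.0 + k * D_ss)
    v_ll = A_ll / (1.0 + k * D_ll)
    p_ll = 1.0 / (1.0 + np.exp(-s * (v_ll - v_ss)))
    p_ll = np.clip(p_ll, 1e-9, 1 - 1e-9)
    return np.sum(choose_ll * np.log(p_ll) + (1 - choose_ll) * np.log(1 - p_ll))

# Hypothetical intertemporal choices: $20 now versus $50 in D days
rng = np.random.default_rng(0)
D_ll = np.array([1, 7, 30, 90, 180, 365], dtype=float).repeat(20)
true_k, true_s = 0.02, 0.5
p = 1.0 / (1.0 + np.exp(-true_s * (50.0 / (1.0 + true_k * D_ll) - 20.0)))
choices = rng.binomial(1, p)

neg_ll = lambda th: -choice_log_lik(20.0, 0.0, 50.0, D_ll, choices,
                                    k=np.exp(th[0]), s=np.exp(th[1]))
fit = minimize(neg_ll, x0=[np.log(0.05), np.log(0.1)], method='Nelder-Mead')
print('k =', np.exp(fit.x[0]), ' s =', np.exp(fit.x[1]))   # recovered parameters
```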

  12. Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.

    2010-11-01

    Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two aerosol components: one composed only of spherical particles and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from the observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. It was shown that the algorithm developed provided reasonable results, consistent with the available independent information about the observed aerosol event.

  13. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme the characteristics of a “one-time pad” and markedly improves the security of the algorithm without a significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids the need to negotiate its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.

  14. A Study of a Standard BIT Circuit.

    DTIC Science & Technology

    1977-02-01

    IENDED BIT APPROACHES FOR QED MODULES AND APPLICATION OF THE ANALYTIC MEASURES 36 4.1 Built-In-Test for Memory Class Modules 37 4.1.1 Random Access... Implementation 68 4.1.5.5 Critical Parameters 68 4.1.5.6 QED Module Test Equipment Requirements 68 4.1.6 Application of Analytic Measures to the... Microprocessor BIT Techniques.. 121 4.2.9 Application of Analytic Measures to the Recommended BIT Approaches 125 4.2.10 Process Class BIT by Partial

  15. Finding Relevant Parameters for the Thin-film Photovoltaic Cells Production Process with the Application of Data Mining Methods.

    PubMed

    Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas

    2017-09-01

    A data mining approach is proposed as a useful tool for the analysis of the control parameters of the 3-stage CIGSe photovoltaic cell production process, in order to find the variables that are most relevant to cell electric parameters and efficiency. The analysed data set consists of stage duration times, heater power values as well as temperatures for the element sources and the substrate - there are 14 variables per sample in total. The most relevant variables of the process have been found based on the so-called random forest analysis with the application of the Boruta algorithm. 118 CIGSe samples, prepared at Institut des Matériaux Jean Rouxel, were analysed. The results are close to experimental knowledge on the CIGSe cell production process. They bring new evidence on the production parameters of new cells and inform further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
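
    The random-forest relevance idea can be sketched on synthetic data standing in for the 118 CIGSe samples; the snippet below ranks variables by permutation importance rather than running the Boruta shadow-feature procedure itself, and every variable effect is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a 118-sample, 14-variable process data set;
# only a few of the variables actually drive the simulated "efficiency".
rng = np.random.default_rng(0)
X = rng.normal(size=(118, 14))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=118)

forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X, y)

# Permutation importance as a simple relevance measure; the Boruta algorithm
# used in the paper additionally compares each variable against randomized
# "shadow" copies before declaring it relevant.
imp = permutation_importance(forest, X, y, n_repeats=30, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:5]:
    print(f"variable {i:2d}  importance {imp.importances_mean[i]:.3f}")
```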

  16. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors, and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
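
    One building block named above, proper orthogonal decomposition, can be sketched compactly; the snippet below is an illustrative stand-in (not the authors' GMsFE implementation) that extracts a reduced basis from a synthetic snapshot matrix via the singular value decomposition.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix.

    Columns of `snapshots` are solution samples at different random
    parameters; the returned columns span a low-dimensional reduced basis
    capturing the requested fraction of the snapshot energy.  The greedy
    sampling and GMsFE-specific steps of the paper are not reproduced here.
    """
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(cumulative, energy)) + 1
    return u[:, :rank], s

# Hypothetical snapshots: 200-dof solutions at 40 parameter samples,
# generated from three dominant spatial patterns plus small noise.
rng = np.random.default_rng(0)
modes = rng.normal(size=(200, 3))
coeffs = rng.normal(size=(3, 40))
snapshots = modes @ coeffs + 0.01 * rng.normal(size=(200, 40))

basis, sv = pod_basis(snapshots)
print("reduced basis dimension:", basis.shape[1])   # ~3 despite 40 snapshots
```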

  18. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  19. Comparison of intravenous labetalol and bupivacaine scalp block on the hemodynamic and entropy changes following skull pin application: A randomized, open label clinical trial.

    PubMed

    Bharne, Sidhesh; Bidkar, Prasanna Udupi; Badhe, Ashok Shankar; Parida, Satyen; Ramesh, Andi Sadayandi

    2016-01-01

    The application of skull pins in neurosurgical procedures is a highly noxious stimulus that causes hemodynamic changes and a rise in spectral entropy levels. We designed a study to compare intravenous (IV) labetalol and bupivacaine scalp block in blunting these changes. Sixty-six patients undergoing elective neurosurgical procedures were randomized into two groups, L (labetalol) and B (bupivacaine), of 33 each. After a standard induction sequence using fentanyl, propofol and vecuronium, patients were intubated. Baseline hemodynamic parameters and entropy levels were noted. Five minutes before application of the pins, group L patients received IV labetalol 0.25 mg/kg and group B patients received a scalp block with 30 ml of 0.25% bupivacaine. Following application of the pins, heart rate (HR), systolic arterial pressure (SAP), diastolic arterial pressure (DAP), mean arterial pressure (MAP), and response entropy (RE)/state entropy (SE) were noted at regular time points up to 5 min. The two groups were comparable with respect to their demographic characteristics. Baseline hemodynamic parameters and entropy levels were also similar. After pinning, the HR, SAP, DAP, MAP, and RE/SE all increased in both groups but were lower in the scalp block group patients. HR increased by 19.8% in group L and by 11% in group B. SAP increased by 11.9% in group L and remained unchanged in group B. DAP increased by 19.7% in group L and by 9.9% in group B, and MAP increased by 15.6% in group L and 5% in group B (P < 0.05). No adverse effects were noted. Scalp block with bupivacaine is more effective than IV labetalol in attenuating the rise in hemodynamic parameters and entropy changes following skull pin application.

  20. Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.

    PubMed

    Vickers, D; Smith, P

    1985-01-01

    In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.

  1. Optimization of the random multilayer structure to break the random-alloy limit of thermal conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Gu, Chongjie; Ruan, Xiulin, E-mail: ruan@purdue.edu

    2015-02-16

    A low lattice thermal conductivity (κ) is desired for thermoelectrics, and a highly anisotropic κ is essential for applications such as magnetic layers for heat-assisted magnetic recording, where a high cross-plane (perpendicular to layer) κ is needed to ensure fast writing while a low in-plane κ is required to avoid interaction between adjacent bits of data. In this work, we conduct molecular dynamics simulations to investigate the κ of superlattice (SL), random multilayer (RML) and alloy, and reveal that RML can have 1–2 orders of magnitude higher anisotropy in κ than SL and alloy. We systematically explore how the κ of SL, RML, and alloy changes relative to each other for different bond strength, interface roughness, atomic mass, and structure size, which provides guidance for choosing materials and structural parameters to build RMLs with optimal performance for specific applications.

  2. Set statistics in conductive bridge random access memory device with Cu/HfO{sub 2}/Pt structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Meiyun; Long, Shibing, E-mail: longshibing@ime.ac.cn; Wang, Guoming

    2014-11-10

    The switching parameter variation of resistive switching memory is one of the most important challenges in its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO{sub 2}/Pt structure. The experimental distributions of the set parameters in several off resistance ranges are shown to nicely fit a Weibull model. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the Quantum Point Contact electron transport model. Our work provides indications for the improvement of the switching uniformity.
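
    The Weibull analysis of set parameters can be illustrated with a short sketch; the synthetic set-voltage sample and its shape and scale values below are assumptions, and both a maximum-likelihood fit and the classical Weibull-plot slope are shown.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical set-voltage sample for one off-resistance range; the Weibull
# slope (shape parameter) is the quantity tracked against off resistance.
v_set = weibull_min.rvs(c=4.0, scale=0.6, size=300, random_state=0)

# Maximum-likelihood Weibull fit with the location fixed at zero
shape, loc, scale = weibull_min.fit(v_set, floc=0)
print(f"Weibull slope ~ {shape:.2f}, scale ~ {scale:.2f} V")

# Graphical check: ln(-ln(1 - F)) versus ln(V) should be close to a straight line
v_sorted = np.sort(v_set)
F = (np.arange(1, len(v_sorted) + 1) - 0.5) / len(v_sorted)   # median-rank CDF estimate
slope, intercept = np.polyfit(np.log(v_sorted), np.log(-np.log(1 - F)), 1)
print(f"Weibull-plot slope ~ {slope:.2f}")
```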

  3. Spatio-temporal modelling of wind speed variations and extremes in the Caribbean and the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Rychlik, Igor; Mao, Wengang

    2018-02-01

    The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.

  4. A syringe-sharing model for the spread of HIV: application to Omsk, Western Siberia.

    PubMed

    Artzrouni, Marc; Leonenko, Vasiliy N; Mara, Thierry A

    2017-03-01

    A system of two differential equations is used to model the transmission dynamics of human immunodeficiency virus between 'persons who inject drugs' (PWIDs) and their syringes. Our vector-borne disease model hinges on a metaphorical urn from which PWIDs draw syringes at random, which may or may not be infected and may or may not result in one of the two agents becoming infected. The model's parameters are estimated with data mostly from the city of Omsk in Western Siberia. A linear trend in PWID prevalence in Omsk could only be fitted by considering a time-dependent version of the model, captured through a secular decrease in the probability that PWIDs decide to share a syringe. A global sensitivity analysis is performed with 14 parameters considered as random variables in order to assess their impact on the average numbers infected over a 50-year projection. With obvious intervention implications, the drug injection rate and the probability of syringe-cleansing are the only parameters whose correlation coefficients with the numbers of infected PWIDs and infected syringes have an absolute value close to or larger than 0.40. © The authors 2015. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  5. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
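
    The two-step procedure described above (spectral shaping of white Gaussian noise followed by a single inverse-transform mapping to the target marginal) can be sketched as follows; the Gaussian-shaped spectral filter and the gamma target distribution are arbitrary choices for the example, and, as the paper notes, the power spectrum is only approximately preserved by the final mapping.

```python
import numpy as np
from scipy import stats

def colored_nongaussian(n, corr_len, target, seed=0):
    """Sample a stationary sequence with a target marginal and colored spectrum.

    Step 1: shape white Gaussian noise in the frequency domain (here with a
    Gaussian-shaped spectral filter of correlation length `corr_len`) to get
    a colored Gaussian sequence.  Step 2: map it through the Gaussian CDF and
    the target inverse CDF, i.e. a single application of the inverse transform.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    filt = np.exp(-0.5 * (freqs * corr_len) ** 2)             # spectral filter
    white = rng.standard_normal(n)
    colored = np.fft.irfft(np.fft.rfft(white) * filt, n)
    colored /= colored.std()                                   # unit variance
    u = stats.norm.cdf(colored)                                # uniform marginals
    return target.ppf(u)                                       # target marginals

x = colored_nongaussian(4096, corr_len=20.0, target=stats.gamma(a=2.0))
print(x.mean(), x.var())   # compare with gamma(a=2): mean 2, variance 2
```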

  6. Universal structures in some mean field spin glasses and an application

    NASA Astrophysics Data System (ADS)

    Bolthausen, Erwin; Kistler, Nicola

    2008-12-01

    We discuss a spin glass reminiscent of the random energy model (REM), which makes it possible, in particular, to recast the Parisi minimization into a more classical Gibbs variational principle, thereby shedding some light on the physical meaning of the order parameter of the Parisi theory. As an application, we study the impact of an extensive cavity field on Derrida's REM: Despite its simplicity, this model displays some interesting features such as ultrametricity and chaos in temperature.

  7. Neuromuscular Taping Application in Counter Movement Jump: Biomechanical Insight in a Group of Healthy Basketball Players.

    PubMed

    Marcolin, Giuseppe; Buriani, Alessandro; Giacomelli, Andrea; Blow, David; Grigoletto, Davide; Gesi, Marco

    2017-06-24

    Kinesiologic elastic tape is widely used for both clinical and sport applications although its efficacy in enhancing agonistic performance is still controversial. The aim of the study was to verify in a group of healthy basketball players whether a neuromuscular taping application (NMT) on ankle and knee joints could affect the kinematic and the kinetic parameters of the jump, either by enhancing or inhibiting the functional performance. Fourteen healthy male basketball players without any ongoing pathologies at upper limbs, lower limbs and trunk volunteered for the study. They randomly performed 2 sets of 5 counter movement jumps (CMJ) with and without application of Kinesiologic tape. The best 3 jumps of each set were considered for the analysis. The kinematic parameters analyzed were: knees' maximal flexion and ankles' maximal dorsiflexion during the push off phase, jump height and take off velocity. Vertical ground reaction force and maximal power expressed in the push off phase of the jump were also investigated. The NMT application in both knees and ankles showed no statistically significant differences in the kinematic and kinetic parameters and did not interfere with the CMJ performance. Bilateral NMT application in the group of healthy male basketball players did not change kinematic and kinetic jump parameters, thus suggesting that its routine use should have no negative effect on functional performance. Similarly, the combined application of the tape on both knees and ankles did not affect jump performance in either way.

  8. Neuromuscular Taping Application in Counter Movement Jump: Biomechanical Insight in a Group of Healthy Basketball Players

    PubMed Central

    Marcolin, Giuseppe; Buriani, Alessandro; Giacomelli, Andrea; Blow, David; Grigoletto, Davide; Gesi, Marco

    2017-01-01

    Kinesiologic elastic tape is widely used for both clinical and sport applications although its efficacy in enhancing agonistic performance is still controversial. The aim of the study was to verify in a group of healthy basketball players whether a neuromuscular taping application (NMT) on ankle and knee joints could affect the kinematic and the kinetic parameters of the jump, either by enhancing or inhibiting the functional performance. Fourteen healthy male basketball players without any ongoing pathologies at upper limbs, lower limbs and trunk volunteered for the study. They randomly performed 2 sets of 5 counter movement jumps (CMJ) with and without application of Kinesiologic tape. The best 3 jumps of each set were considered for the analysis. The kinematic parameters analyzed were: knees' maximal flexion and ankles' maximal dorsiflexion during the push off phase, jump height and take off velocity. Vertical ground reaction force and maximal power expressed in the push off phase of the jump were also investigated. The NMT application in both knees and ankles showed no statistically significant differences in the kinematic and kinetic parameters and did not interfere with the CMJ performance. Bilateral NMT application in the group of healthy male basketball players did not change kinematic and kinetic jump parameters, thus suggesting that its routine use should have no negative effect on functional performance. Similarly, the combined application of the tape on both knees and ankles did not affect jump performance in either way. PMID:28713536

  9. Effects of Reiki on Post-cesarean Delivery Pain, Anxiety, and Hemodynamic Parameters: A Randomized, Controlled Clinical Trial.

    PubMed

    Midilli, Tulay Sagkal; Eser, Ismet

    2015-06-01

    The aim of this study was to investigate the effect of Reiki on pain, anxiety, and hemodynamic parameters on postoperative days 1 and 2 in patients who had undergone cesarean delivery. The design of this study was a randomized, controlled clinical trial. The study took place between February and July 2011 in the Obstetrical Unit at Odemis Public Hospital in Izmir, Turkey. Ninety patients equalized by age and number of births were randomly assigned to either a Reiki group or a control group (a rest without treatment). Treatment was applied to both groups in the first 24 and 48 hours after delivery, for a total of 30 minutes, to 10 identified regions of the body for 3 minutes each. Reiki was applied for 2 days once a day (in the first 24 and 48 hours) within 4-8 hours of the administration of a standard analgesic, which was administered intravenously by a nurse. A visual analog scale and the State Anxiety Inventory were used to measure pain and anxiety. Hemodynamic parameters, including blood pressure (systolic and diastolic), pulse and breathing rates, and analgesic requirements also were recorded. Statistically significant differences in pain intensity (p = .000), anxiety value (p = .000), and breathing rate (p = .000) measured over time were found between the two groups. There was a statistically significant difference between the two groups in the time (p = .000) and number (p = .000) of analgesics needed after Reiki application and a rest without treatment. Results showed that Reiki application reduced the intensity of pain, the value of anxiety, and the breathing rate, as well as the need for and number of analgesics. However, it did not affect blood pressure or pulse rate. Reiki application as a nursing intervention is recommended as a pain and anxiety-relieving method in women after cesarean delivery. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  10. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.

  11. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of the random multiplicative process, a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
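
    A minimal simulation sketch of this mechanism is given below, assuming the common ACD(1) recursion psi_t = omega + a*x_{t-1} with x_t = psi_t*eps_t and unit-mean exponential innovations (the paper's exact specification may differ); the Hill estimate of the tail exponent is only a crude diagnostic.

```python
import numpy as np

# Minimal ACD(1) simulation under the assumed recursion; because
# x_t = eps_t * (omega + a * x_{t-1}) is a random multiplicative (Kesten-type)
# process, its stationary distribution develops a power-law tail.
rng = np.random.default_rng(0)
omega, a, n = 0.1, 0.9, 200_000

x = np.empty(n)
x[0] = 1.0
eps = rng.exponential(1.0, n)    # unit-mean innovations
for t in range(1, n):
    x[t] = (omega + a * x[t - 1]) * eps[t]

# Crude tail-exponent estimate (Hill estimator over the largest 1% of values)
tail = np.sort(x)[-n // 100:]
hill = 1.0 / np.mean(np.log(tail / tail[0]))
print("estimated tail exponent:", hill)
```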

  12. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process.

    PubMed

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of the random multiplicative process, a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.

  13. Modal identification of structures from the responses and random decrement signatures

    NASA Technical Reports Server (NTRS)

    Brahim, S. R.; Goglia, G. L.

    1977-01-01

    The theory and application of a method which utilizes the free response of a structure to determine its vibration parameters is described. The time-domain free response is digitized and used in a digital computer program to determine the number of modes excited, the natural frequencies, the damping factors, and the modal vectors. The technique is applied to a complex generalized payload model previously tested using the sine sweep method and analyzed by NASTRAN. Ten modes of the payload model are identified. In case the free decay response is not readily available, an algorithm is developed to obtain the free responses of a structure from its random responses, due to some unknown or known random input or inputs, using the random decrement technique without changing the time correlation between signals. The algorithm is tested using random responses from a generalized payload model and from the space shuttle model.
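
    A compact sketch of the random decrement idea is given below; the trigger condition (level up-crossing), the segment length, and the synthetic single-mode response generated by an AR(2) recursion are all assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def random_decrement(y, trigger, seg_len):
    """Random decrement signature of a stationary random response record.

    Each time the record up-crosses the trigger level, the next `seg_len`
    samples are collected; their average suppresses the zero-mean random
    part and approximates the free-decay response, from which natural
    frequency and damping can then be identified.
    """
    starts = np.where((y[:-1] < trigger) & (y[1:] >= trigger))[0] + 1
    starts = starts[starts + seg_len <= len(y)]
    segments = np.stack([y[s:s + seg_len] for s in starts])
    return segments.mean(axis=0), len(starts)

# Hypothetical randomly excited single-mode response: an AR(2) recursion
# whose impulse response is a damped sinusoid (2 Hz, 2% damping, dt = 10 ms).
rng = np.random.default_rng(0)
dt, fn, zeta, n = 0.01, 2.0, 0.02, 100_000
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
r = np.exp(-zeta * 2 * np.pi * fn * dt)
y = np.zeros(n)
e = rng.standard_normal(n)
for i in range(2, n):
    y[i] = 2 * r * np.cos(wd * dt) * y[i - 1] - r**2 * y[i - 2] + e[i]

signature, count = random_decrement(y, trigger=y.std(), seg_len=500)
print("segments averaged:", count, " first samples:", signature[:3])
```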

  14. Analysis of stationary and dynamic factors affecting highway accident occurrence: A dynamic correlated grouped random parameters binary logit approach.

    PubMed

    Fountas, Grigorios; Sarwar, Md Tawfiq; Anastasopoulos, Panagiotis Ch; Blatt, Alan; Majka, Kevin

    2018-04-01

    Traditional accident analysis typically explores non-time-varying (stationary) factors that affect accident occurrence on roadway segments. However, the impact of time-varying (dynamic) factors is not thoroughly investigated. This paper seeks to simultaneously identify pre-crash stationary and dynamic factors of accident occurrence, while accounting for unobserved heterogeneity. Using highly disaggregate information for the potential dynamic factors, and aggregate data for the traditional stationary elements, a dynamic binary random parameters (mixed) logit framework is employed. With this approach, the dynamic nature of weather-related, and driving- and pavement-condition information is jointly investigated with traditional roadway geometric and traffic characteristics. To additionally account for the combined effect of the dynamic and stationary factors on the accident occurrence, the developed random parameters logit framework allows for possible correlations among the random parameters. The analysis is based on crash and non-crash observations between 2011 and 2013, drawn from urban and rural highway segments in the state of Washington. The findings show that the proposed methodological framework can account for both stationary and dynamic factors affecting accident occurrence probabilities, for panel effects, for unobserved heterogeneity through the use of random parameters, and for possible correlation among the latter. The comparative evaluation among the correlated grouped random parameters, the uncorrelated random parameters logit models, and their fixed parameters logit counterpart, demonstrate the potential of the random parameters modeling, in general, and the benefits of the correlated grouped random parameters approach, specifically, in terms of statistical fit and explanatory power. Published by Elsevier Ltd.

  15. GLRT-based array receivers for the detection of a known signal with unknown parameters corrupted by noncircular interferences

    NASA Astrophysics Data System (ADS)

    Chevalier, Pascal; Oukaci, Abdelkader; Delmas, Jean-Pierre

    2011-12-01

    The detection of a known signal with unknown parameters in the presence of noise plus interferences (called total noise) whose covariance matrix is unknown is an important problem which has received much attention these last decades for applications such as radar, satellite localization or time acquisition in radio communications. However, most of the available receivers assume a second order (SO) circular (or proper) total noise and become suboptimal in the presence of SO noncircular (or improper) interferences, potentially present in the previous applications. The scarce available receivers which take the potential SO noncircularity of the total noise into account have been developed under the restrictive condition of a known signal with known parameters or under the assumption of a random signal. For this reason, following a generalized likelihood ratio test (GLRT) approach, the purpose of this paper is to introduce and to analyze the performance of different array receivers for the detection of a known signal, with different sets of unknown parameters, corrupted by an unknown noncircular total noise. To simplify the study, we limit the analysis to rectilinear known useful signals for which the baseband signal is real, which concerns many applications.

  16. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    A complex structure of soil and its random character are reasons why soil modeling is a cumbersome task. Heterogeneity of soil has to be considered even within a homogenous layer of soil. Therefore an estimation of shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standards (Eurocode 7) do not present any explicit method for the evaluation of characteristic values of soil parameters; only general guidelines can be found on how these values should be estimated. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Therefore some approaches to the estimation of characteristic values of soil properties were compared by evaluating the values of the reliability index β which can be achieved by applying each of them. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method concerning the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of a foundation in conjunction with the design approaches DA defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows a random character of soil parameters to be considered within a homogenous layer of soil. For this purpose a soil property is considered as a separate random variable in every element of a mesh in the finite element method, with a proper correlation structure between points of a given area. RFEM was applied to estimate which theoretical probability distribution fits the empirical probability distribution of bearing capacity based on 3000 realizations. The assessed probability distribution was applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil. Hence the friction angle and cohesion were defined as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied in the case of cohesion. Other properties - Young's modulus, Poisson's ratio and unit weight - were assumed to be deterministic values because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.

  17. Uncertainty in eddy covariance measurements and its application to physiological models

    Treesearch

    D.Y. Hollinger; A.D. Richardson; A.D. Richardson

    2005-01-01

    Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...

  18. Cooperation evolution in random multiplicative environments

    NASA Astrophysics Data System (ADS)

    Yaari, G.; Solomon, S.

    2010-02-01

    Most real life systems have a random component: the multitude of endogenous and exogenous factors influencing them result in stochastic fluctuations of the parameters determining their dynamics. These empirical systems are in many cases subject to noise of multiplicative nature. The special properties of multiplicative noise as opposed to additive noise have been noticed for a long while. Even though apparently and formally the difference between free additive vs. multiplicative random walks consists in just a move from normal to log-normal distributions, in practice the implications are much more far reaching. While in an additive context the emergence and survival of cooperation requires special conditions (especially some level of reward, punishment, reciprocity), we find that in the multiplicative random context the emergence of cooperation is much more natural and effective. We study the various implications of this observation and its applications in various contexts.
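
    The practical gap between additive and multiplicative noise mentioned above can be seen in a few lines of simulation; the two-point step distribution below is an arbitrary illustrative choice.

```python
import numpy as np

# Additive versus multiplicative random walks built from the same step factors.
# The multiplicative walk becomes log-normally distributed, and its typical
# (median) outcome is governed by the mean log-factor, which is negative here
# even though the arithmetic mean factor (1.05) exceeds one.
rng = np.random.default_rng(0)
n_walkers, n_steps = 10_000, 200
steps = rng.choice([0.6, 1.5], size=(n_walkers, n_steps))    # mean factor 1.05

additive = 1.0 + (steps - 1.05).sum(axis=1)                  # zero-mean additive analogue
multiplicative = steps.prod(axis=1)

print("additive:        mean %8.2f  median %8.2f" % (additive.mean(), np.median(additive)))
print("multiplicative:  mean %8.2f  median %.3g" % (multiplicative.mean(), np.median(multiplicative)))
# The median of the multiplicative walk collapses because
# 0.5 * (log 0.6 + log 1.5) = 0.5 * log 0.9 < 0.
```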

  19. The timbre model

    NASA Astrophysics Data System (ADS)

    Jensen, Kristoffer

    2002-11-01

    A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set, fixed size, and separates the sounds in dimensions akin to the timbre dimensions as proposed in timbre research. The analysis of the model parameters is fully documented, and it proposes, in particular, a method for the estimation of the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, and the inharmonicity and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes, and also for additive noises). Some of the applications include synthesis, where a real-time application is being developed with an intuitive GUI, classification, and search of sounds based on the content of the sounds, and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal A/S, some timbre perception research, then present the timbre model, show the validity for individual music instrument sounds, and finally introduce some expression additions to the model.

  20. Quantum walks with tuneable self-avoidance in one dimension

    PubMed Central

    Camilleri, Elizabeth; Rohde, Peter P.; Twamley, Jason

    2014-01-01

    Quantum walks exhibit many unique characteristics compared to classical random walks. In the classical setting, self-avoiding random walks have been studied as a variation on the usual classical random walk. Here the walker has memory of its previous locations and preferentially avoids stepping back to locations where it has previously resided. Classical self-avoiding random walks have found numerous algorithmic applications, most notably in the modelling of protein folding. We consider the analogous problem in the quantum setting – a quantum walk in one dimension with tunable levels of self-avoidance. We complement a quantum walk with a memory register that records where the walker has previously resided. The walker is then able to avoid returning back to previously visited sites or apply more general memory conditioned operations to control the walk. We characterise this walk by examining the variance of the walker's distribution against time, the standard metric for quantifying how quantum or classical a walk is. We parameterise the strength of the memory recording and the strength of the memory back-action on the walker, and investigate their effect on the dynamics of the walk. We find that by manipulating these parameters, which dictate the degree of self-avoidance, the walk can be made to reproduce ideal quantum or classical random walk statistics, or a plethora of more elaborate diffusive phenomena. In some parameter regimes we observe a close correspondence between classical self-avoiding random walks and the quantum self-avoiding walk. PMID:24762398

  1. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.

  2. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.

  3. On the strength of random fiber networks

    NASA Astrophysics Data System (ADS)

    Deogekar, S.; Picu, R. C.

    2018-07-01

    Damage accumulation and failure in random fiber networks is of importance in a variety of applications, from design of synthetic materials, such as paper and non-wovens, to accidental tearing of biological tissues. In this work we study these processes using three-dimensional models of athermal fiber networks, focusing attention on the modes of failure and on the relationship between network strength and network structural parameters. We consider network failure at small and large strains associated with the rupture of inter-fiber bonds. It is observed that the strength increases linearly with the network volume fraction and with the bond strength, while the stretch at peak stress is inversely related to these two parameters. A small fraction of the bonds rupture before peak stress and this fraction increases with increasing failure stretch. Rendering the bond strength stochastic causes a reduction of the network strength. However, heterogeneity retards damage localization and increases the stretch at peak stress, therefore promoting ductility.

  4. A comparison of two experimental design approaches in applying conjoint analysis in patient-centered outcomes research: a randomized trial.

    PubMed

    Kinter, Elizabeth T; Prior, Thomas J; Carswell, Christopher I; Bridges, John F P

    2012-01-01

    While the application of conjoint analysis and discrete-choice experiments in health is now widely accepted, a healthy debate exists around competing approaches to experimental design. There remains, however, a paucity of experimental evidence comparing competing design approaches and their impact on the application of these methods in patient-centered outcomes research. Our objectives were to directly compare the choice-model parameters and predictions of an orthogonal and a D-efficient experimental design using a randomized trial (i.e., an experiment on experiments) within an application of conjoint analysis studying patient-centered outcomes among outpatients diagnosed with schizophrenia in Germany. Outpatients diagnosed with schizophrenia were surveyed and randomized to receive choice tasks developed using either an orthogonal or a D-efficient experimental design. The choice tasks elicited judgments from the respondents as to which of two patient profiles (varying across seven outcome and process attributes) was preferable from their own perspective. The results from the two survey designs were analyzed using the multinomial logit model, and the resulting parameter estimates and their robust standard errors were compared across the two arms of the study (i.e., the orthogonal and D-efficient designs). The predictive performances of the two resulting models were also compared by computing the percentage of survey responses classified correctly, and the potential for variation in scale between the two designs was tested statistically and explored graphically. The results of the two models were statistically identical. No difference was found using an overall chi-squared test of equality for the seven parameters (p = 0.69) or via uncorrected pairwise comparisons of the parameter estimates (p-values ranged from 0.30 to 0.98). The D-efficient design resulted in directionally smaller standard errors for six of the seven parameters, of which only two were statistically significant, and no difference was found in the observed D-efficiencies of their standard errors (p = 0.62). The D-efficient design resulted in poorer predictive performance, but this was not significant (p = 0.73); there was some evidence that the parameters of the D-efficient design were biased marginally towards the null. While no statistical difference in scale was detected between the two designs (p = 0.74), the D-efficient design had a higher relative scale (1.06). This could be observed when the parameters were explored graphically, as the D-efficient parameters were lower. Our results indicate that the orthogonal and D-efficient experimental designs produced statistically equivalent results. That said, we have identified several qualitative differences that might have been statistically detectable in a larger sample. While more comparative studies focused on the statistical efficiency of competing design strategies are needed, a more pressing research problem is to document the impact of the experimental design on respondent efficiency.

  5. Stochastic theory of polarized light in nonlinear birefringent media: An application to optical rotation

    NASA Astrophysics Data System (ADS)

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2018-05-01

    A stochastic theory is developed for light transmitted through optical media exhibiting linear and nonlinear birefringence. The starting point is the two-component nonlinear Schrödinger equation (NLSE). On the basis of a “soliton” ansatz for the NLSE, the evolution equation for the Stokes parameters is derived; taking account of the randomness and dissipation inherent in the birefringent media, it turns out to be a Langevin equation. The Langevin equation is converted to the Fokker-Planck (FP) equation for the probability distribution by employing functional-integral techniques under the assumption of Gaussian white noise for the random fluctuations. The specific application considered is optical rotation, which is described by the ellipticity (the third component of the Stokes parameters) alone: (i) an asymptotic analysis of the functional integral yields the transition rate on the Poincaré sphere; (ii) the FP equation is analyzed in the strong-coupling approximation, from which diffusive behavior is obtained for linear and nonlinear birefringence. These results provide a basis for statistical analysis of polarization phenomena in nonlinear birefringent media.

  6. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
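
    A minimal simulated-annealing sketch in the spirit of the inversion described above follows; the forward model, the two source parameters, the data, and the cooling schedule are all invented for illustration.

    ```python
    # Simulated annealing for a toy source-parameter inversion (illustrative only).
    import math
    import random

    rng = random.Random(0)

    def forward(params, x):
        # Hypothetical forward model: predicted displacement from two source parameters.
        depth, slip = params
        return [slip * xi / (xi ** 2 + depth ** 2) for xi in x]

    x_obs = [0.5 * i for i in range(1, 21)]
    d_obs = forward((3.0, 2.0), x_obs)                     # synthetic "observed" data

    def misfit(params):
        pred = forward(params, x_obs)
        return sum((p - d) ** 2 for p, d in zip(pred, d_obs))

    # Random perturbations; uphill moves are accepted with probability exp(-delta/T),
    # so the search can escape local minima before the temperature cools.
    current, best = (1.0, 1.0), (1.0, 1.0)
    T = 1.0
    for step in range(5000):
        T *= 0.999                                         # geometric cooling
        candidate = tuple(max(0.1, c + rng.gauss(0, 0.1)) for c in current)
        delta = misfit(candidate) - misfit(current)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            current = candidate
            if misfit(current) < misfit(best):
                best = current
    print("recovered (depth, slip) ~", tuple(round(b, 2) for b in best))
    ```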

  7. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  8. GuiTope: an application for mapping random-sequence peptides to protein sequences.

    PubMed

    Halperin, Rebecca F; Stafford, Phillip; Emery, Jack S; Navalkar, Krupa Arun; Johnston, Stephen Albert

    2012-01-03

    Random-sequence peptide libraries are a commonly used tool to identify novel ligands for binding antibodies, other proteins, and small molecules. It is often of interest to compare the selected peptide sequences to the natural protein binding partners to infer the exact binding site or the importance of particular residues. The ability to search a set of sequences for similarity to a set of peptides may sometimes enable the prediction of an antibody epitope or a novel binding partner. We have developed a software application designed specifically for this task. GuiTope provides a graphical user interface for aligning peptide sequences to protein sequences. All alignment parameters are accessible to the user including the ability to specify the amino acid frequency in the peptide library; these frequencies often differ significantly from those assumed by popular alignment programs. It also includes a novel feature to align di-peptide inversions, which we have found improves the accuracy of antibody epitope prediction from peptide microarray data and shows utility in analyzing phage display datasets. Finally, GuiTope can randomly select peptides from a given library to estimate a null distribution of scores and calculate statistical significance. GuiTope provides a convenient method for comparing selected peptide sequences to protein sequences, including flexible alignment parameters, novel alignment features, ability to search a database, and statistical significance of results. The software is available as an executable (for PC) at http://www.immunosignature.com/software and ongoing updates and source code will be available at sourceforge.net.

  9. Application of lifting wavelet and random forest in compound fault diagnosis of gearbox

    NASA Astrophysics Data System (ADS)

    Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi

    2018-03-01

    Compound-fault characteristic signals from the gearbox of an armored vehicle are weak and the fault types are difficult to identify. To address this, a fault diagnosis method based on the lifting wavelet and random forest is proposed. First, the method uses the lifting wavelet transform to decompose the original vibration signal over multiple layers, and reconstructs the low-frequency and high-frequency components obtained by the decomposition to get multiple component signals. Then, time-domain feature parameters are computed for each component signal to form multiple feature vectors, which are input into the random forest pattern-recognition classifier to determine the compound fault type. Finally, the method is verified on a variety of compound fault data from the gearbox fault analog test platform; the results show that the recognition accuracy of the fault diagnosis method combining the lifting wavelet and the random forest is up to 99.99%.
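
    A minimal illustrative sketch of this kind of pipeline follows, using synthetic signals and an ordinary discrete wavelet decomposition (pywt) in place of the paper's lifting wavelet on measured gearbox vibration; the fault classes and features are made up.

    ```python
    # Wavelet decomposition + time-domain features + random-forest classifier (toy pipeline).
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def synthetic_signal(fault_type, n=1024):
        t = np.linspace(0, 1, n)
        base = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=n)
        if fault_type == 1:                                   # hypothetical gear fault: periodic impulses
            base[::64] += 2.0
        elif fault_type == 2:                                 # hypothetical bearing fault: high-frequency tone
            base += 0.8 * np.sin(2 * np.pi * 300 * t)
        return base

    def features(signal):
        # Multi-level wavelet decomposition, then simple time-domain statistics per component.
        coeffs = pywt.wavedec(signal, 'db4', level=3)
        feats = []
        for c in coeffs:
            feats += [np.mean(np.abs(c)), np.std(c), np.max(np.abs(c)), np.mean(c ** 2)]
        return feats

    X = np.array([features(synthetic_signal(k % 3)) for k in range(300)])
    y = np.array([k % 3 for k in range(300)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```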

  10. Random walk study of electron motion in helium in crossed electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1972-01-01

    Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.

  11. A review of emerging non-volatile memory (NVM) technologies and applications

    NASA Astrophysics Data System (ADS)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  12. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (20th) Held in Vienna, Virginia on 29 November-1 December 1988

    DTIC Science & Technology

    1988-12-01

    PERFORMANCE IN REAL TIME* Dr. James A. Barnes, Austron, Boulder, Co. Abstract: Kalman filters and ARIMA models provide optimum control and evaluation tech...estimates of the model parameters (e.g., the phi's and theta's for an ARIMA model). These model parameters are often evaluated in a batch mode on a...random walk FM, and linear frequency drift. In ARIMA models, this is equivalent to an ARIMA (0,2,2) with a non-zero average second difference. Using

  13. Kinematic Methods of Designing Free Form Shells

    NASA Astrophysics Data System (ADS)

    Korotkiy, V. A.; Khmarova, L. I.

    2017-11-01

    The geometrical shell model is formed from the stated requirements, expressed through surface parameters. The shell is modelled using the kinematic method, in which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on second-order curves with variable eccentricity as the form-making element. Additional guiding ruled surfaces are used to control the form of the designed surface. The authors also developed a software application that plots a second-order curve specified by a random set of five coplanar points and tangents.
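
    One building block mentioned above can be sketched directly: a second-order curve (conic) through five coplanar points, recovered as the null space of the 5x6 incidence matrix. Tangent constraints and the kinematic surface generation itself are not shown; the sample points are chosen to lie on the unit circle so the expected result is known.

    ```python
    # Conic through five coplanar points via the null space of the incidence matrix.
    import numpy as np

    def conic_through_points(pts):
        """Return coefficients (A, B, C, D, E, F) of Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0."""
        rows = [[x * x, x * y, y * y, x, y, 1.0] for x, y in pts]
        _, _, vt = np.linalg.svd(np.array(rows))
        return vt[-1]                       # right singular vector for the smallest singular value

    pts = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0), (-1.0, 0.0), (0.8, 0.6)]   # five points on the unit circle
    coeffs = conic_through_points(pts)
    print(np.round(coeffs / coeffs[0], 3))  # expect proportions of x^2 + y^2 - 1 = 0
    ```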

  14. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP, and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose major challenges to data processing. The challenge is not limited to optical sensors: infrared sounders and radar imagers have also increased in spectral, spatial, and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust, and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture, and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
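
    A minimal sketch of random Fourier features, one of the speed-up strategies named above, follows: the RBF kernel is approximated by an explicit feature map so a linear ridge solver can stand in for a full Gaussian process. The data, length-scale, and regularization below are synthetic placeholders, not IASI or Seviri/MSG retrievals.

    ```python
    # Random Fourier features approximating an RBF kernel, followed by ridge regression.
    import numpy as np

    rng = np.random.default_rng(0)

    def rff(X, n_features, lengthscale):
        """Map X to random Fourier features whose inner products approximate the RBF kernel."""
        d = X.shape[1]
        W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
        b = rng.uniform(0.0, 2 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # Synthetic regression task standing in for an atmospheric-parameter retrieval.
    X = rng.uniform(-3, 3, size=(5000, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=len(X))

    Z = rff(X, n_features=300, lengthscale=1.0)
    # Ridge regression in the feature space (cost is linear in the number of samples).
    lam = 1e-3
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    print("training RMSE:", np.sqrt(np.mean((Z @ w - y) ** 2)))
    ```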

  15. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.

  16. Applications of a general random-walk theory for confined diffusion.

    PubMed

    Calvo-Muñoz, Elisa M; Selvan, Myvizhi Esai; Xiong, Ruichang; Ojha, Madhusudan; Keffer, David J; Nicholson, Donald M; Egami, Takeshi

    2011-01-01

    A general random walk theory for diffusion in the presence of nanoscale confinement is developed and applied. The random-walk theory contains two parameters describing confinement: a cage size and a cage-to-cage hopping probability. The theory captures the correct nonlinear dependence of the mean square displacement (MSD) on observation time for intermediate times. Because of its simplicity, the theory also has modest computational requirements and is thus able to simulate systems with very low diffusivities for sufficiently long time to reach the infinite-time-limit regime where the Einstein relation can be used to extract the self-diffusivity. The theory is applied to three practical cases in which the degree of order in confinement varies. The three systems include diffusion of (i) polyatomic molecules in metal organic frameworks, (ii) water in proton exchange membranes, and (iii) liquid and glassy iron. For all three cases, the comparison between theory and the results of molecular dynamics (MD) simulations indicates that the theory can describe the observed diffusion behavior with a small fraction of the computational expense. The confined-random-walk theory fit to the MSDs of very short MD simulations is capable of accurately reproducing the MSDs of much longer MD simulations. Furthermore, the values of the parameter for cage size correspond to the physical dimensions of the systems and the cage-to-cage hopping probability corresponds to the activation barrier for diffusion, indicating that the two parameters in the theory are not simply fitted values but correspond to real properties of the physical system.
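
    A minimal sketch of a confined random walk with the two parameters named above (a cage size and a cage-to-cage hopping probability) follows; the step sizes and rates are illustrative, not fitted to any of the three systems studied.

    ```python
    # Confined 1D random walk: free diffusion inside a cage plus rare cage-to-cage hops.
    import numpy as np

    rng = np.random.default_rng(0)

    def confined_walk(steps, cage_size, hop_prob):
        """Small free steps inside a cage, with probability hop_prob of jumping to a neighbouring cage."""
        cage, offset = 0, 0.0
        traj = np.empty(steps)
        for t in range(steps):
            if rng.random() < hop_prob:
                cage += rng.choice((-1, 1))                 # rare cage-to-cage hop
            else:
                offset = float(np.clip(offset + rng.normal(0.0, 0.1), -cage_size / 2, cage_size / 2))
            traj[t] = cage * cage_size + offset
        return traj

    # Mean square displacement averaged over walkers: a plateau on the cage scale,
    # crossing over to linear (Einstein) growth set by the hopping probability.
    walks = np.array([confined_walk(2000, cage_size=1.0, hop_prob=0.01) for _ in range(100)])
    for lag in (10, 100, 1000):
        msd = np.mean((walks[:, lag:] - walks[:, :-lag]) ** 2)
        print(f"lag {lag:4d}: MSD ~ {msd:.3f}")
    ```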

  17. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application that characterizes spatial random fields from direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers and submits serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster shows how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (roughly 1,200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from about 1,200 hours to 60 hours).

  18. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters for random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, is reliably robust to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm in MATLAB. Validation and verification of the algorithm were performed with a 4D-NCAT phantom containing spherical lung lesions with diameters from 20 to 90 mm (in increments of 10 mm) and tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels at different maximum Standardized Uptake Value (SUVmax) percentages: at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. To investigate algorithm performance on clinical data, 19 patients with lung tumors were also studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentations showed that the best results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation support its application for radiation treatment planning and diagnosis.

  19. Human intrabony defect regeneration with micro-grafts containing dental pulp stem cells: A randomized controlled clinical trial.

    PubMed

    Ferrarotti, Francesco; Romano, Federica; Gamba, Mara Noemi; Quirico, Andrea; Giraudi, Marta; Audagna, Martina; Aimetti, Mario

    2018-05-19

    The goal of this study was to evaluate whether dental pulp stem cells (DPSCs) delivered into intrabony defects in a collagen scaffold would enhance the clinical and radiographic parameters of periodontal regeneration. In this randomized controlled trial, 29 chronic periodontitis patients presenting one deep intrabony defect and requiring extraction of one vital tooth were consecutively enrolled. Defects were randomly assigned to the test or control treatment, both of which used a minimally invasive surgical technique. The dental pulp of the extracted tooth was mechanically dissociated to obtain micro-grafts rich in autologous DPSCs. Test sites (n=15) were filled with micro-grafts seeded onto a collagen sponge, whereas control sites (n=14) were filled with collagen sponge alone. Clinical and radiographic parameters were recorded at baseline and at 6 and 12 months postoperatively. Test sites exhibited significantly greater PD reduction (4.9 mm versus 3.4 mm), CAL gain (4.5 versus 2.9 mm), and bone defect fill (3.9 versus 1.6 mm) than controls. Moreover, residual PD < 5 mm (93% versus 50%) and CAL gain ≥ 4 mm (73% versus 29%) were significantly more frequent in the test group. Application of DPSCs significantly improved the clinical parameters of periodontal regeneration one year after treatment.

  20. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe several alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.

  1. Effects of Density Fluctuations on Weakly Nonlinear Alfven Waves: An IST Perspective

    NASA Astrophysics Data System (ADS)

    Hamilton, R.; Hadley, N.

    2012-12-01

    The effects of random density fluctuations on oblique, 1D, weakly nonlinear Alfven waves are examined through a numerical study of an analytical model developed by Ruderman [M.S. Ruderman, Phys. Plasmas, 9 (7), pp. 2940-2945, (2002).]. Consistent with Ruderman's application to the one-parameter dark soliton, the effects on one-parameter bright and dark solitons, the two-parameter soliton, and pairs of one-parameter solitons were similar to those of Ohmic dissipation found by Hamilton et al. [R. Hamilton, D. Peterson, and S. Libby, J. Geophys. Res 114, A03104,doi:10.1029/2008JA013582 (2009).] It was found, in all cases where bright or two-parameter solitons are present initially, that the effect of density fluctuations is the eventual damping of such compressive wave forms and the formation of a train of dark solitons, or magnetic depressions.

  2. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four random tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-defined weighting between path and computational cost. However, the optimum settings of these tuning parameters vary with the application. Instead of trial and error, this paper presents an optimization method for tuning the parameters of differential evolution in UAV path planning. The parameters this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between path and computational cost and to converge within the minimum number of generations required for the user's requirements. In conclusion, the proposed optimization of the differential evolution tuning parameters for UAV path planning expedites and improves the final output path and computational cost. PMID:26943630
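
    A minimal sketch of the four tuning parameters discussed above follows, using SciPy's differential evolution on a toy two-dimensional waypoint problem; the cost function and obstacle are invented stand-ins for the paper's UAV planner and its path/computational-cost weighting.

    ```python
    # Differential evolution with explicit tuning parameters on a toy waypoint problem.
    import numpy as np
    from scipy.optimize import differential_evolution

    obstacle = np.array([5.0, 5.0])
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])

    def path_cost(waypoint):
        """Path length through one intermediate waypoint plus a penalty near the obstacle."""
        w = np.asarray(waypoint)
        length = np.linalg.norm(w - start) + np.linalg.norm(goal - w)
        penalty = 50.0 * np.exp(-np.linalg.norm(w - obstacle) ** 2)
        return length + penalty

    result = differential_evolution(
        path_cost,
        bounds=[(0, 10), (0, 10)],
        popsize=20,          # population size
        mutation=0.7,        # differential weight F
        recombination=0.8,   # crossover probability CR
        maxiter=100,         # generation number
        seed=1,
    )
    print("best waypoint:", np.round(result.x, 2), "cost:", round(result.fun, 3))
    ```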

  3. Scattering Models and Basic Experiments in the Microwave Regime

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Blanchard, A. J. (Principal Investigator)

    1985-01-01

    The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which accounts for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) a comparison between theoretical models and measurements or numerical simulation.

  4. Deep Learning for Magnetic Resonance Fingerprinting: A New Approach for Predicting Quantitative Parameter Values from Time Series.

    PubMed

    Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas

    2017-01-01

    The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF a non-steady state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle we show that the neural network implicitly encodes the dictionary and can replace the matching process.

  5. Photonic generation of polarization-resolved wideband chaos with time-delay concealment in three-cascaded vertical-cavity surface-emitting lasers.

    PubMed

    Liu, Huijie; Li, Nianqiang; Zhao, Qingchun

    2015-05-10

    Optical chaos generated by chaotic lasers has been widely used in several important applications, such as chaos-based communications and high-speed random-number generators. However, these applications are susceptible to degradation by the presence of a time-delay (TD) signature that can be identified from the chaotic output. Here we propose to achieve concealment of the TD signature, along with enhancement of the chaos bandwidth, in three-cascaded vertical-cavity surface-emitting lasers (VCSELs). The cascaded system is composed of an external-cavity master VCSEL, a solitary intermediate VCSEL, and a solitary slave VCSEL. By mapping the evolution of the TD signature and chaos bandwidth in the parameter space of injection strength and frequency detuning, photonic generation of polarization-resolved wideband chaos with TD concealment is numerically demonstrated for wide regions of the injection parameters.

  6. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show that this method yields better solutions (in terms of resolution) to the particular problem than a standard analog program, and demonstrate the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
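
    A minimal sketch of a creeping random search (sequential random perturbation) of the kind referred to above follows, applied to a toy bounded quadratic; the objective is an invented stand-in for the beam-transport resolution and the step-shrinking rule is illustrative.

    ```python
    # Creeping random search: accept only improving random perturbations within bounds.
    import random

    rng = random.Random(0)

    def objective(p):
        # Hypothetical stand-in for beam resolution as a function of two magnet settings.
        x, y = p
        return (x - 1.2) ** 2 + 2.0 * (y + 0.4) ** 2

    bounds = [(-2.0, 2.0), (-2.0, 2.0)]        # parameter constraints keep solutions feasible
    current = [0.0, 0.0]
    step = 0.5
    for it in range(2000):
        trial = [min(max(c + rng.gauss(0, step), lo), hi)
                 for c, (lo, hi) in zip(current, bounds)]
        if objective(trial) < objective(current):
            current = trial                    # creep: accept only improvements
        else:
            step *= 0.999                      # slowly shrink the perturbation size
    print("best parameters ~", [round(c, 3) for c in current])
    ```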

  7. Random forests in non-invasive sensorimotor rhythm brain-computer interfaces: a practical and convenient non-linear classifier.

    PubMed

    Steyrl, David; Scherer, Reinhold; Faller, Josef; Müller-Putz, Gernot R

    2016-02-01

    There is general agreement in the brain-computer interface (BCI) community that although non-linear classifiers can provide better results in some cases, linear classifiers are preferable, particularly because non-linear classifiers often involve a number of parameters that must be carefully chosen. However, new non-linear classifiers have been developed over the last decade. One of them is the random forest (RF) classifier. Although popular in other fields of science, RFs are not common in BCI research. In this work, we address three open questions regarding RFs in sensorimotor rhythm (SMR) BCIs: parametrization, online applicability, and performance compared to regularized linear discriminant analysis (LDA). We found that the performance of RF is constant over a large range of parameter values. We demonstrate - for the first time - that RFs are applicable online in SMR-BCIs. Further, we show in an offline BCI simulation that RFs statistically significantly outperform regularized LDA by about 3%. These results confirm that RFs are practical and convenient non-linear classifiers for SMR-BCIs. Taking into account further properties of RFs, such as independence from feature distributions, maximum margin behavior, multiclass and advanced data mining capabilities, we argue that RFs should be taken into consideration for future BCIs.

  8. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not fully reproducible by another "equivalent" sample and setup). The stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
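
    A minimal multilevel Monte Carlo sketch follows; the level hierarchy, the toy quantity of interest, and the sample counts are illustrative, not the pore-scale solvers or workflow described above.

    ```python
    # Multilevel Monte Carlo: many cheap coarse samples, few expensive fine samples,
    # combined through the telescoping sum of level corrections.
    import numpy as np

    rng = np.random.default_rng(0)

    def Q(sample, level):
        """Hypothetical quantity of interest evaluated on a grid of 2**(level+3) cells."""
        n = 2 ** (level + 3)
        x = np.linspace(0, 1, n)
        field = 1.0 + 0.3 * np.sin(2 * np.pi * x + sample)     # toy random medium
        return 1.0 / np.mean(1.0 / field)                       # e.g. a harmonic-mean "permeability"

    levels, n_samples = 4, [4000, 1000, 250, 60]                # fewer samples on finer levels
    estimate = 0.0
    for level in range(levels):
        draws = rng.uniform(0, 2 * np.pi, n_samples[level])
        if level == 0:
            corrections = [Q(s, 0) for s in draws]
        else:
            # Same random input on both levels, so the correction has small variance.
            corrections = [Q(s, level) - Q(s, level - 1) for s in draws]
        estimate += np.mean(corrections)
    print("MLMC estimate of E[Q] ~", round(estimate, 4))
    ```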

  9. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with several covariates. There are three main tasks in this paper: the parameter estimation procedure, a simulation study, and implementation of the model on real data. For the parameter estimation procedure, the concepts of threshold, nested random effects, and the computational algorithm are described. The simulated data are built for 3 conditions to assess the effect of different parameter values of the random effect distributions. The last task is the implementation of the model on poverty data from 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of malnutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the simulation results, the ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) scales are used. They show that the province parameters have the highest bias but the most stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the model implementation on the data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  10. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas, M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
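
    A minimal sketch of a discrete multiplicative random cascade in one dimension follows; the log-normal generator and its single variability parameter are illustrative and do not reproduce the paper's estimated generator distribution or its dependence on the large-scale average rain rate.

    ```python
    # Discrete multiplicative random cascade with a mean-one log-normal generator.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_cascade(n_levels, beta):
        """Split each cell in two at every level, multiplying by independent
        mean-one log-normal generators whose variability is set by beta."""
        field = np.array([1.0])
        sigma = np.sqrt(np.log(1.0 + beta))
        for _ in range(n_levels):
            w = rng.lognormal(mean=-sigma ** 2 / 2, sigma=sigma, size=2 * len(field))
            field = np.repeat(field, 2) * w
        return field

    field = random_cascade(n_levels=10, beta=0.5)        # 1024 cells
    print("mean (should stay near 1):", round(float(field.mean()), 3))
    print("second moment (grows with beta and depth):", round(float((field ** 2).mean()), 3))
    ```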

  11. Accelerated lifetime test of vibration isolator made of Metal Rubber material

    NASA Astrophysics Data System (ADS)

    Ao, Hongrui; Ma, Yong; Wang, Xianbiao; Chen, Jianye; Jiang, Hongyuan

    2017-01-01

    Metal Rubber (MR) is a material with nonlinear damping characteristics used in aerospace, the petrochemical industry, and related fields. Studying the lifetime of MR material is essential for its application in engineering. Based on the dynamic characteristics of MR, accelerated lifetime experiments were conducted on vibration isolators made of MR working under random vibration load. The effects of the structural parameters of the MR components on the lifetime of the isolators were studied and modelled with curves fitted to the degradation data. Lifetime prediction methods were proposed based on these models.

  12. Online Tools for Uncovering Data Quality (DQ) Issues in Satellite-Based Global Precipitation Products

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Heo, Gil

    2015-01-01

    Data quality (DQ) has many attributes or facets (e.g., errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio, etc.). Sources of DQ issues can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Open questions include how to obtain DQ information quickly and easily, especially quantified information in a region of interest (from existing parameters such as random error, from the literature, from do-it-yourself analysis, etc.), and how to apply that knowledge in research and applications. Here, we focus on online systems for integration of products and parameters, visualization and analysis, as well as investigation and extraction of DQ information.

  13. Sectoral transitions - modeling the development from agrarian to service economies

    NASA Astrophysics Data System (ADS)

    Lutz, Raphael; Spies, Michael; Reusser, Dominik E.; Kropp, Jürgen P.; Rybski, Diego

    2013-04-01

    We consider the sectoral composition of a country's GDP, i.e. the partitioning into agrarian, industrial, and service sectors. Exploring a simple system of differential equations, we characterise the transfer of GDP shares between the sectors in the course of economic development. The model fits the majority of countries with 4 country-specific parameters. Relating the agrarian to the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow 3 of 8 characteristic paths. The types are not random but show distinct geographic and development patterns.

  14. Joint min-max distribution and Edwards-Anderson's order parameter of the circular 1/f-noise model

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyu; Le Doussal, Pierre

    2016-05-01

    We calculate the joint min-max distribution and the Edwards-Anderson's order parameter for the circular model of 1/f-noise. Both quantities, as well as generalisations, are obtained exactly by combining the freezing-duality conjecture and Jack-polynomial techniques. Numerical checks come with significantly improved control of finite-size effects in the glassy phase, and the results convincingly validate the freezing-duality conjecture. Application to diffusive dynamics is discussed. We also provide a formula for the pre-factor ratio of the joint/marginal Carpentier-Le Doussal tail for minimum/maximum which applies to any logarithmic random energy model.

  15. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.

  16. Communication theory of quantum systems. Ph.D. Thesis, 1970

    NASA Technical Reports Server (NTRS)

    Yuen, H. P. H.

    1971-01-01

    Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.

  17. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both, continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  18. Multiple Imputation for Incomplete Data in Epidemiologic Studies

    PubMed Central

    Harel, Ofer; Mitchell, Emily M; Perkins, Neil J; Cole, Stephen R; Tchetgen Tchetgen, Eric J; Sun, BaoLuo; Schisterman, Enrique F

    2018-01-01

    Epidemiologic studies are frequently susceptible to missing information. Omitting observations with missing variables remains a common strategy in epidemiologic studies, yet this simple approach can often severely bias parameter estimates of interest if the values are not missing completely at random. Even when missingness is completely random, complete-case analysis can reduce the efficiency of estimated parameters, because large amounts of available data are simply tossed out with the incomplete observations. Alternative methods for mitigating the influence of missing information, such as multiple imputation, are becoming increasingly popular strategies for retaining all available information, reducing potential bias, and improving efficiency in parameter estimation. In this paper, we describe the theoretical underpinnings of multiple imputation, and we illustrate the application of this method as part of a collaborative challenge to assess the performance of various techniques for dealing with missing data (Am J Epidemiol. 2018;187(3):568–575). We detail the steps necessary to perform multiple imputation on a subset of data from the Collaborative Perinatal Project (1959–1974), where the goal is to estimate the odds of spontaneous abortion associated with smoking during pregnancy. PMID:29165547
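
    A minimal multiple-imputation sketch with synthetic data follows: a covariate missing at random is imputed m times, the analysis model is fit to each completed dataset, and the estimates are pooled with Rubin's rules. The toy linear model, the missingness mechanism, and the simplified (non-posterior-draw) imputation model are illustrative, not the logistic model or Collaborative Perinatal Project analysis discussed above.

    ```python
    # Multiple imputation with pooling by Rubin's rules (toy example).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 500, 20

    # Synthetic data: outcome y depends on x; x is missing for ~30% of records, depending on z.
    z = rng.normal(size=n)
    x = 0.8 * z + rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(size=n)
    missing = rng.random(n) < 0.15 + 0.3 * (z > 0)
    x_obs = np.where(missing, np.nan, x)

    # Imputation model: regress x on z among complete cases, then draw imputations with noise.
    beta_imp = np.polyfit(z[~missing], x_obs[~missing], 1)
    resid_sd = np.std(x_obs[~missing] - np.polyval(beta_imp, z[~missing]))

    estimates, variances = [], []
    for _ in range(m):
        x_fill = x_obs.copy()
        x_fill[missing] = np.polyval(beta_imp, z[missing]) + rng.normal(0, resid_sd, missing.sum())
        X = np.column_stack([np.ones(n), x_fill])
        coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        estimates.append(coef[1])
        variances.append(cov[1, 1])

    # Rubin's rules: pooled estimate, within- and between-imputation variance.
    q_bar = np.mean(estimates)
    within, between = np.mean(variances), np.var(estimates, ddof=1)
    total_var = within + (1 + 1 / m) * between
    print(f"pooled slope = {q_bar:.3f}  (SE = {np.sqrt(total_var):.3f}, true value 2.0)")
    ```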

  19. Infrared Extinction Performance of Randomly Oriented Microbial-Clustered Agglomerate Materials.

    PubMed

    Li, Le; Hu, Yihua; Gu, Youlin; Zhao, Xinying; Xu, Shilong; Yu, Lei; Zheng, Zhi Ming; Wang, Peng

    2017-11-01

    In this study, the spatial structure of randomly distributed clusters of fungi An0429 spores was simulated using a cluster aggregation (CCA) model, and the single-scattering parameters of fungi An0429 spores were calculated using the discrete dipole approximation (DDA) method. The transmittance of 10.6 µm infrared (IR) light through the aggregated fungi An0429 spore swarm was simulated using the Monte Carlo method. Several parameters that affect the transmittance of 10.6 µm IR light were discussed, such as the number and radius of the original fungi An0429 spores, the porosity of the aggregated spores, and the density of aggregated spores in the aerosol formation area. Finally, the transmittances of microbial materials of different qualities were measured on the dynamic test platform. The simulation results showed that the parameters analyzed are closely connected with the extinction performance of the fungi An0429 spores. By controlling the values of the influencing factors, the transmittance can be kept below a given threshold to meet the attenuation requirement of an application. In addition, the experimental results showed that the Monte Carlo method reflects well the attenuation law of IR light in fungi An0429 spore agglomerate swarms.

  20. Recognition and characterization of hierarchical interstellar structure. II - Structure tree statistics

    NASA Technical Reports Server (NTRS)

    Houlahan, Padraig; Scalo, John

    1992-01-01

    A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.

  1. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

    Multivariable search techniques are applied to a particular class of airfoil optimization problems. These are the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single parameter perturbation methods, organized search such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. Airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiencies of the techniques are compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free stream Mach numbers.

  2. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

    I present a parametric, bijective transformation to generate heavy-tail versions of arbitrary random variables. The tail behavior of this heavy-tail Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X Gaussian it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
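
    A minimal sketch of the Gaussian special case follows, assuming the Tukey-h form Y = U·exp(δU²/2) for standard normal U (the case the abstract says the transformation reduces to) and its inverse via the Lambert W function; for the full method (location and scale, estimation, other input distributions), see the paper and the LambertW R package.

    ```python
    # Generate a heavy-tailed variable from a Gaussian and "Gaussianize" it back with Lambert W.
    import numpy as np
    from scipy.special import lambertw

    rng = np.random.default_rng(0)
    delta = 0.1                                      # tail parameter; delta = 0 would leave u unchanged

    u = rng.standard_normal(100_000)
    y = u * np.exp(delta * u ** 2 / 2)               # heavy-tailed version of u (Tukey-h form)

    # Inverse transformation: u = sign(y) * sqrt(W(delta * y^2) / delta)
    u_back = np.sign(y) * np.sqrt(np.real(lambertw(delta * y ** 2)) / delta)

    kurt = np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3
    print(f"sample excess kurtosis of y: {kurt:.2f} (0 for a Gaussian)")
    print(f"max |u_back - u|: {np.max(np.abs(u_back - u)):.2e}")
    ```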

  3. An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.

    PubMed

    Bradley, Stuart

    2015-11-20

    Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
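
    A hedged Monte Carlo sketch of this geometry (the paper derives closed-form results; here the actuator firings are assumed to form a rectangular lattice of hit points with spacing s across-track and d along-track, in units of the target diameter):

    ```python
    # Monte Carlo estimate of the success rate: a randomly placed circular target of
    # diameter D = 1 is "hit" if its centre lies within D/2 of some firing point.
    # The rate then depends only on the ratios s/D and d/D.
    import numpy as np

    def success_rate(s_over_D, d_over_D, n=200_000, rng=None):
        rng = rng or np.random.default_rng(1)
        s, d = s_over_D, d_over_D            # work in units of the target diameter
        x = rng.uniform(0.0, s, n)           # target centre within one lattice cell
        y = rng.uniform(0.0, d, n)
        # distance to the nearest of the four surrounding firing points
        dx = np.minimum(x, s - x)
        dy = np.minimum(y, d - y)
        return np.mean(np.hypot(dx, dy) <= 0.5)

    print(success_rate(1.0, 1.0))   # dense firing pattern -> high success (~0.785)
    print(success_rate(3.0, 3.0))   # sparse firing pattern -> low success
    ```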

  4. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR

    PubMed Central

    Krůček, Martin; Vrška, Tomáš; Král, Kamil

    2017-01-01

    Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source, platform-independent software application with an easy-to-use graphical user interface and a compilation of algorithms focused on the forest environment and the extraction of tree parameters. The current version (0.42) extracts important parameters of forest structure from terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles, or detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their real arrangement in 3D space. 3D Forest also includes an original algorithm for automatic tree segmentation and crown segmentation. Comparison with field measurements showed no significant difference in measuring DBH or tree height using 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved to be sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167

  5. Generalized Smooth Transition Map Between Tent and Logistic Maps

    NASA Astrophysics Data System (ADS)

    Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.

    There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not sufficient. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which produces intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations such as the effects of scaling, the system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
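
    The generalized map itself is defined in the paper; the sketch below only covers the two special cases it bridges and the finite-time maximum-Lyapunov-exponent estimate commonly used to characterize such maps:

    ```python
    # Tent and logistic maps (the two special cases) and a finite-time estimate of the
    # maximum Lyapunov exponent as the orbit average of log|f'(x_n)|.
    import numpy as np

    def tent(x, mu=1.99):
        return mu * np.minimum(x, 1.0 - x)

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    def lyapunov(fmap, dfmap, x0=0.3, n=100_000, burn=1_000):
        x = x0
        for _ in range(burn):          # discard the transient
            x = fmap(x)
        acc = 0.0
        for _ in range(n):
            acc += np.log(abs(dfmap(x)) + 1e-300)
            x = fmap(x)
        return acc / n

    print(lyapunov(logistic, lambda x: 4.0 - 8.0 * x))               # ~ ln 2 for r = 4
    print(lyapunov(tent, lambda x: np.where(x < 0.5, 1.99, -1.99)))  # ~ ln 1.99
    ```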

  6. Hexagonal boron nitride and water interaction parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yanbin; Aluru, Narayana R., E-mail: aluru@illinois.edu; Wagner, Lucas K.

    2016-04-28

    The study of hexagonal boron nitride (hBN) in microfluidic and nanofluidic applications at the atomic level requires accurate force field parameters to describe the water-hBN interaction. In this work, we begin with benchmark quality first principles quantum Monte Carlo calculations on the interaction energy between water and hBN, which are used to validate random phase approximation (RPA) calculations. We then proceed with RPA to derive force field parameters, which are used to simulate water contact angle on bulk hBN, attaining a value within the experimental uncertainties. This paper demonstrates that end-to-end multiscale modeling, starting at detailed many-body quantum mechanics and ending with macroscopic properties, with the approximations controlled along the way, is feasible for these systems.

  7. A perturbation method to the tent map based on Lyapunov exponent and its application

    NASA Astrophysics Data System (ADS)

    Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu

    2015-10-01

    Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic function, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves good cryptographic properties of a pseudo-random sequence. As a result, it weakens the strong correlation caused by finite precision and effectively compensates for the dynamics degradation of digital chaotic systems. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
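
    A hedged sketch of the feedback idea (the skew-tent form, Chebyshev order, and feedback window below are illustrative assumptions, not the paper's exact choices):

    ```python
    # Tent-map output is post-processed by a Chebyshev map; when the Chebyshev output
    # falls inside a chosen window it replaces the tent-map parameter.
    import numpy as np

    def perturbed_tent_sequence(n, x0=0.37, p0=0.61, cheb_order=4, window=(0.2, 0.8)):
        x, p = x0, p0
        out = np.empty(n)
        for i in range(n):
            x = x / p if x < p else (1.0 - x) / (1.0 - p)   # skew tent map
            out[i] = x
            # Chebyshev post-processing on [-1, 1]
            y = np.cos(cheb_order * np.arccos(np.clip(2.0 * x - 1.0, -1.0, 1.0)))
            z = 0.5 * (y + 1.0)                 # map Chebyshev output back to (0, 1)
            if window[0] < z < window[1]:       # feedback: perturb the tent parameter
                p = z
        return out

    seq = perturbed_tent_sequence(10_000)
    print(seq.mean(), seq.std())
    ```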

  8. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    NASA Astrophysics Data System (ADS)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  9. Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load

    NASA Astrophysics Data System (ADS)

    Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.

    2017-12-01

    Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control devices, has previously been proposed in the literature for reducing vibration in wind turbines and several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white-noise spectrum. In this paper, a global direct search optimization algorithm to reduce the vibration of a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic loads derived from different wind power spectral densities. A verification is made considering the analytical solution of an undamped primary system under white-noise excitation by comparing with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.

  10. The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study

    ERIC Educational Resources Information Center

    Dong, Nianbo; Lipsey, Mark

    2010-01-01

    This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…

  11. Simulation of foulant bioparticle topography based on Gaussian process and its implications for interface behavior research

    NASA Astrophysics Data System (ADS)

    Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang

    2018-03-01

    Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A survey of the literature indicated a lack of effective methods for simulating randomly rough bioparticle surfaces. In this study, a new method which combines a Gaussian distribution, the Fourier transform, the spectrum method, and a coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and root-mean-square roughness (σ) parameters. The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The simplicity and suitability of the new method point towards potential applications in interface behavior and membrane fouling research.
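
    A hedged sketch of the Fourier-filtering step for a Gaussian-correlated rough surface with RMS height σ and correlation length l (the numerical values are illustrative, and the paper's coordinate transformation onto the bioparticle is omitted):

    ```python
    # Random surface with Gaussian height statistics and a Gaussian autocorrelation,
    # built by filtering white noise in Fourier space (the "spectrum method").
    import numpy as np

    def gaussian_rough_surface(n=256, size=5.0, sigma=0.05, corr_len=0.5, seed=0):
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((n, n))              # uncorrelated Gaussian heights
        x = np.arange(n) * (size / n) - size / 2.0
        X, Y = np.meshgrid(x, x)
        # Gaussian correlation filter; the convolution is a product in Fourier space
        filt = np.exp(-(X**2 + Y**2) / (corr_len**2 / 2.0))
        z = np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(filt))).real
        return z * sigma / z.std()                       # rescale to the target RMS height

    z = gaussian_rough_surface()
    print(z.shape, z.std())
    ```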

  12. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it treats random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexity and uncertainty.

  13. Electronic and transport properties of BCN alloy nanoribbons

    NASA Astrophysics Data System (ADS)

    Darvishi Gilan, Mahdi; Chegel, Raad

    2018-03-01

    The dependence of the electronic and transport properties of boron carbonitride (BCN) alloy nanoribbons on the carbon (C) concentration has been investigated using the surface Green's function technique and a random Hamiltonian model with random hopping parameters including first and second nearest neighbors. Our calculations indicate that substituting boron (nitrogen) sites with carbon atoms induces a new band close to the conduction (valence) band, and the carbon atoms behave like donor (acceptor) dopants. Also, when both nitrogen and boron sites are substituted randomly by carbon atoms, new bands are induced close to both the valence and conduction bands. The band gap decreases with C substitution and the number of charge carriers increases at low bias voltage. Far from the Fermi level, in the higher range of energy, the transmission coefficient and current of the system are reduced by increasing the C concentration. Based on our results, tuning the electronic and transport properties of BCN alloy nanoribbons by random carbon dopants could be applied to the design of nanoelectronic devices.

  14. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    NASA Astrophysics Data System (ADS)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.

  15. Unifying model for random matrix theory in arbitrary space dimensions

    NASA Astrophysics Data System (ADS)

    Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio

    2018-03-01

    A sparse random block matrix model suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdos-Renyi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.
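
    A hedged numerical check of one statement in the abstract, namely that the Marchenko-Pastur law describes the bulk spectrum of a large Wishart-type matrix (the paper's sparse block Hessian model itself is not reproduced here):

    ```python
    # Compare the empirical eigenvalue density of a sample covariance matrix with the
    # Marchenko-Pastur density for aspect ratio q = p/n.
    import numpy as np

    n, p = 2000, 1000
    q = p / n
    X = np.random.default_rng(0).standard_normal((n, p))
    eigs = np.linalg.eigvalsh(X.T @ X / n)            # sample covariance eigenvalues

    lam_minus, lam_plus = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
    x = np.linspace(lam_minus + 1e-6, lam_plus - 1e-6, 400)
    mp_density = np.sqrt((lam_plus - x) * (x - lam_minus)) / (2 * np.pi * q * x)

    hist, edges = np.histogram(eigs, bins=60, density=True)
    i = np.searchsorted(edges, 1.0) - 1               # bin containing lambda = 1
    print("empirical vs Marchenko-Pastur density near the bulk centre:",
          hist[i], np.interp(1.0, x, mp_density))
    ```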

  16. Hierarchical models and the analysis of bird survey information

    USGS Publications Warehouse

    Sauer, J.R.; Link, W.A.

    2003-01-01

    Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
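
    A hedged, synthetic-data sketch of the hierarchical idea (a normal-normal model fitted by Gibbs sampling rather than the paper's full MCMC for Breeding Bird Survey trends), including one derived statistic of the kind described, the number of species with increasing populations:

    ```python
    # Each species' trend estimate b[i] has known sampling variance v[i]; species trends
    # theta[i] ~ N(mu, tau^2). Derived statistics are computed from posterior draws.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 28
    true_theta = rng.normal(-0.5, 1.0, n)           # synthetic "true" trends
    v = rng.uniform(0.1, 1.0, n)                    # known sampling variances
    b = rng.normal(true_theta, np.sqrt(v))          # observed trend estimates

    mu, tau2 = 0.0, 1.0
    a0, b0, s0 = 0.001, 0.001, 1e3                  # weak priors on tau^2 and mu
    draws_pos = []
    for it in range(5000):
        # species-level trends
        prec = 1.0 / v + 1.0 / tau2
        theta = rng.normal((b / v + mu / tau2) / prec, np.sqrt(1.0 / prec))
        # hyper-mean
        prec_mu = n / tau2 + 1.0 / s0**2
        mu = rng.normal(theta.sum() / tau2 / prec_mu, np.sqrt(1.0 / prec_mu))
        # hyper-variance (inverse-gamma full conditional)
        tau2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((theta - mu)**2)))
        if it >= 1000:                              # discard burn-in
            draws_pos.append(np.sum(theta > 0))

    print("posterior mean number of increasing species:", np.mean(draws_pos))
    ```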

  17. Efficacy and tolerability assessment of a topical formulation containing copper sulfate and hypericum perforatum on patients with herpes skin lesions: a comparative, randomized controlled trial.

    PubMed

    Clewell, Amy; Barnes, Matt; Endres, John R; Ahmed, Mansoor; Ghambeer, Daljit K S

    2012-02-01

    Topical Acyclovir has moderate efficacy on recurrent HSV symptoms, requiring repeat applications for several days. Topical Dynamiclear, which requires only a single-dose application, may provide a more effective and convenient treatment option for the symptomatic management of HSV. The study assessed the comparative efficacy and tolerability of a single-use topical formulation containing copper sulfate pentahydrate and Hypericum perforatum, marketed as Dynamiclear™, against a standard topical 5% Acyclovir cream. A prospective, randomized, multi-centered, comparative, open-label clinical study was conducted. A total of 149 participants between 18 and 55 years of age with active HSV-1 and HSV-2 lesions were recruited for the 14-day clinical trial. Participants were randomized into two groups: A (n=61), those receiving the Dynamiclear formulation, and B (n=59), those receiving 5% Acyclovir. Efficacy parameters were assessed via physical examination at baseline (day 1) and on days 2, 3, 8, and 14. Laboratory safety tests were conducted at baseline and on day 14. Use of the Dynamiclear formulation was found to have no significant adverse effects and was well tolerated by participants. All hematological and biochemical markers were within the normal range for the Dynamiclear group. Statistically, the odds of being affected by a burning and stinging sensation were 1.9 times greater in the Acyclovir group than in the Dynamiclear group. Similarly, the odds of being affected by symptoms of acute pain, erythema, and vesiculation were 1.8, 2.4, and 4.4 times higher in the Acyclovir group than in the Dynamiclear group. The Dynamiclear formulation was well tolerated, and efficacy was demonstrated in a number of measured parameters that are helpful in the symptomatic management of HSV-1 and HSV-2 lesions in adult patients. Remarkably, the effects seen with this product came from a single application.

  18. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
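
    A hedged toy version of the regression step (synthetic data; the leading term of the Ogata-Banks solution stands in for the paper's finite-difference forward model, and only velocity and dispersion coefficient are estimated):

    ```python
    # Fit a synthetic breakthrough curve at a fixed observation point by nonlinear
    # weighted least squares for the velocity v and dispersion coefficient D.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    x_obs, c0 = 10.0, 1.0                        # observation point and inlet concentration

    def breakthrough(t, v, D):
        """Leading term of the Ogata-Banks solution for continuous injection at x = 0."""
        return 0.5 * c0 * erfc((x_obs - v * t) / (2.0 * np.sqrt(D * t)))

    rng = np.random.default_rng(3)
    t = np.linspace(0.5, 40.0, 60)
    c_meas = breakthrough(t, v=0.5, D=0.05) + rng.normal(0.0, 0.02, t.size)  # noisy data

    popt, pcov = curve_fit(breakthrough, t, c_meas, p0=[1.0, 0.1],
                           bounds=([0.01, 1e-3], [5.0, 1.0]))
    print("v, D estimates:", popt)
    print("linearized standard errors:", np.sqrt(np.diag(pcov)))
    ```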

  19. Stochastic mechanical model of vocal folds for producing jitter and for identifying pathologies through real voices.

    PubMed

    Cataldo, E; Soize, C

    2018-06-06

    Jitter, in voice production applications, is a random phenomenon characterized by the deviation of the glottal cycle length with respect to a mean value. Its study can help in identifying pathologies related to the vocal folds according to the values obtained through the different ways of measuring it. This paper proposes a stochastic model, with three control parameters, to generate jitter based on a deterministic one-mass model for the dynamics of the vocal folds, and identifies the parameters of the stochastic model from experimentally obtained real voice signals. To solve the corresponding stochastic inverse problem, the cost function used is based on the distance between the probability density functions of the random variables associated with the fundamental frequencies obtained from the experimental and simulated voices, and also on the distance between features extracted from the simulated and experimental voice signals to calculate jitter. The results obtained show that the proposed model is valid, and samples of voices are synthesized using the identified parameters for normal and pathological cases. The adopted strategy is also novel, chiefly because a solution could actually be obtained. A further novelty, in addition to the use of three parameters to construct the jitter model, is the discussion of a parameter related to the bandwidth of the power spectral density function of the stochastic process as a measure of the quality of the generated signal. A study of the influence of all the main parameters is also performed. The identification of the model parameters for pathological cases is perhaps the most interesting of the novelties introduced by the paper. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    PubMed

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  1. Elephant random walks and their connection to Pólya-type urns

    NASA Astrophysics Data System (ADS)

    Baur, Erich; Bertoin, Jean

    2016-11-01

    In this paper, we explain the connection between the elephant random walk (ERW) and an urn model à la Pólya and derive functional limit theorems for the former. The ERW model was introduced in [Phys. Rev. E 70, 045101 (2004), 10.1103/PhysRevE.70.045101] to study memory effects in a highly non-Markovian setting. More specifically, the ERW is a one-dimensional discrete-time random walk with complete memory of its past. The influence of the memory is measured in terms of a memory parameter p between zero and one. In recent years, considerable effort has been undertaken to understand the large-scale behavior of the ERW, depending on the choice of p. Here, we use known results on urns to explicitly solve the ERW in all memory regimes. The method works as well for ERWs in higher dimensions and is widely applicable to related models.
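
    A hedged simulation sketch of the ERW itself (the first-step convention is assumed symmetric here; the urn-based analysis of the paper is not reproduced):

    ```python
    # Elephant random walk: at each step the walker recalls one uniformly chosen past
    # step and repeats it with probability p or reverses it with probability 1 - p.
    import numpy as np

    def elephant_walk(n_steps, p, rng):
        steps = np.empty(n_steps, dtype=int)
        steps[0] = rng.choice([-1, 1])                      # symmetric first step (assumption)
        for n in range(1, n_steps):
            remembered = steps[rng.integers(0, n)]          # uniform memory of the past
            steps[n] = remembered if rng.random() < p else -remembered
        return np.cumsum(steps)

    rng = np.random.default_rng(7)
    for p in (0.25, 0.5, 0.85):                             # diffusive vs superdiffusive regimes
        finals = np.array([elephant_walk(2000, p, rng)[-1] for _ in range(200)])
        print(f"p={p}: mean displacement {finals.mean():7.1f}, "
              f"RMS {np.sqrt((finals**2).mean()):7.1f}")
    ```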

  2. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  3. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates' distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to the fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Study of pseudo noise CW diode laser for ranging applications

    NASA Technical Reports Server (NTRS)

    Lee, Hyo S.; Ramaswami, Ravi

    1992-01-01

    A new Pseudo Random Noise (PN) modulated CW diode laser radar system is being developed for real-time ranging of targets at both close and large distances (greater than 10 km) to satisfy a wide range of applications: from robotics to future space applications. Results from computer modeling and statistical analysis, along with some preliminary data obtained from a prototype system, are presented. The received signal is averaged for a short time to recover the target response function. It is found that even with uncooperative targets, based on the design parameters used (200-mW laser and 20-cm receiver), accurate ranging is possible up to about 15 km, beyond which the signal-to-noise ratio (SNR) becomes too small for real-time analog detection.
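
    A hedged sketch of the PN ranging principle (random ±1 chips stand in for the actual PN code, and all numbers are illustrative rather than the 200-mW/20-cm system parameters): the delayed, noisy return is cross-correlated with the transmitted code, and the correlation peak gives the round-trip delay and hence the range.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    chip_rate = 10e6                      # 10 Mchip/s, i.e. 100 ns per chip (assumed)
    n_chips = 4096
    code = rng.choice([-1.0, 1.0], n_chips)

    true_delay_chips = 137                # unknown to the receiver
    echo = 0.1 * np.roll(code, true_delay_chips) + rng.normal(0.0, 1.0, n_chips)  # weak, noisy return

    # circular cross-correlation via FFT; the peak location is the delay estimate
    corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
    est_delay_chips = int(np.argmax(corr))

    c = 3.0e8
    range_m = c * (est_delay_chips / chip_rate) / 2.0
    print("estimated delay:", est_delay_chips, "chips -> range ≈", range_m, "m")
    ```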

  5. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.

  6. Characterizing the development of sectoral gross domestic product composition.

    PubMed

    Lutz, Raphael; Spies, Michael; Reusser, Dominik E; Kropp, Jürgen P; Rybski, Diego

    2013-07-01

    We consider the sectoral composition of a country's gross domestic product (GDP), i.e., the partitioning into agrarian, industrial, and service sectors. Exploring a simple system of differential equations, we characterize the transfer of GDP shares between the sectors in the course of economic development. The model fits for the majority of countries providing four country-specific parameters. Relating the agrarian with the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow three of eight characteristic paths. The types are not random but show distinct geographic and development patterns.

  7. Characterizing the development of sectoral gross domestic product composition

    NASA Astrophysics Data System (ADS)

    Lutz, Raphael; Spies, Michael; Reusser, Dominik E.; Kropp, Jürgen P.; Rybski, Diego

    2013-07-01

    We consider the sectoral composition of a country's gross domestic product (GDP), i.e., the partitioning into agrarian, industrial, and service sectors. Exploring a simple system of differential equations, we characterize the transfer of GDP shares between the sectors in the course of economic development. The model fits for the majority of countries providing four country-specific parameters. Relating the agrarian with the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow three of eight characteristic paths. The types are not random but show distinct geographic and development patterns.

  8. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  9. The concordance index C and the Mann-Whitney parameter Pr(X>Y) with randomly censored data.

    PubMed

    Koziol, James A; Jia, Zhenyu

    2009-06-01

    Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann-Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, it estimates a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann-Whitney parameter, which is recommended in the setting of random censorship.
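
    A small sketch of the uncensored relationship the abstract starts from: without censoring, the c-index reduces to the Mann-Whitney estimate of Pr(X>Y) (the distributions below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    x = rng.exponential(2.0, 300)       # e.g. survival times in group X (mean 2)
    y = rng.exponential(1.0, 250)       # survival times in group Y (mean 1)

    # Mann-Whitney estimate of Pr(X > Y), counting ties as 1/2
    gt = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    p_hat = gt + 0.5 * ties
    print("estimated Pr(X > Y):", p_hat)   # true value here is 2/(2+1) = 2/3
    ```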

  10. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-positive patients.

    PubMed

    Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J

    2016-04-30

    Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  11. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  12. A random forest algorithm for nowcasting of intense precipitation events

    NASA Astrophysics Data System (ADS)

    Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh

    2017-09-01

    Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption whereas the upper frequency band relates to oxygen absorption, and hence they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather conditions ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources and can be utilized in other forecasting problems.
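
    A hedged, synthetic-data sketch of the classification setup (14 brightness-temperature channels as features, SMOTE balancing, 10-fold cross-validation); scikit-learn and imbalanced-learn are assumed available, and the synthetic labels are purely illustrative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    n, n_channels = 2000, 14
    X = rng.normal(size=(n, n_channels))                 # stand-in for brightness temperatures
    # rare "convective" class, loosely tied to the first 7 channels (illustrative only)
    y = (X[:, :7].mean(axis=1) + 0.5 * rng.normal(size=n) > 1.0).astype(int)

    pipe = Pipeline([("smote", SMOTE(random_state=0)),   # balance classes inside each fold
                     ("rf", RandomForestClassifier(n_estimators=200, random_state=0))])
    scores = cross_val_score(pipe, X, y, cv=10, scoring="recall")  # ~ probability of detection
    print("mean POD over 10 folds:", scores.mean())
    ```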

  13. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  14. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of the soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe obtained with the three approaches are the same, while the standard deviations are different. The distributions of the standard deviation differ considerably at different radial locations, and the larger standard deviations occur mainly in the phase-change area. The results computed with the random-variable and stochastic-process methods differ considerably from the measured data, while those computed with the random-field method agree well with the measurements. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  15. Obstetric gel shortens second stage of labor and prevents perineal trauma in nulliparous women: a randomized controlled trial on labor facilitation.

    PubMed

    Schaub, Andreas F; Litschgi, Mario; Hoesli, Irene; Holzgreve, Wolfgang; Bleul, Ulrich; Geissbühler, Verena

    2008-01-01

    To determine whether the obstetric gel shortens the second stage of labor and exerts a protective effect on the perineum. A total of 251 nulliparous women with singleton low-risk pregnancies in vertex position at term were recruited. A total of 228 eligible women were randomly assigned to Group A, without obstetric gel use, or to Group B, obstetric gel use, i.e., intermittent application into the birth canal during vaginal examinations, starting at the early first stage of labor (prior to 4 cm dilation) and ending with delivery. A total of 183 cases were analyzed. For vaginal deliveries without interventions, such as C-section, vaginal operative procedure or Kristeller maneuver, obstetric gel use significantly shortened the second stage of labor by 26 min (30%) (P=0.026), and significantly reduced perineal tears (P=0.024). First stage of labor and total labor duration were also shortened, but not significantly. Results did not show a significant change in secondary outcome parameters, such as intervention rates or maternal and newborn outcomes. No side effects were observed with obstetric gel use. Systematic vaginal application of obstetric gel showed a significant reduction in the second stage of labor and a significant increase in perineal integrity. Future studies should further investigate the effect on intervention rates and maternal and neonatal outcome parameters.

  16. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm - adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem - protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4Å and 1.5Å from the native constraints with noise-free and 10% Gaussian white noise, respectively.
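
    For context, a sketch of the baseline firefly algorithm that AdaFa modifies (the distance-based absorption coefficient, gray coefficient, and dynamic randomization strategies of the paper are not reproduced; a simple decay of the randomization parameter is used instead), shown minimizing the sphere function:

    ```python
    import numpy as np

    def firefly_minimize(f, dim=10, n_fireflies=25, n_iter=200,
                         alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_fireflies, dim))
        fitness = np.array([f(xi) for xi in x])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if fitness[j] < fitness[i]:              # j is brighter (lower cost)
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                        x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                        x[i] = np.clip(x[i], lo, hi)
                        fitness[i] = f(x[i])
            alpha *= 0.98                                    # simple, non-adaptive randomization decay
        best = np.argmin(fitness)
        return x[best], fitness[best]

    sphere = lambda v: float(np.sum(v ** 2))
    xbest, fbest = firefly_minimize(sphere)
    print("best cost:", fbest)
    ```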

  17. Adaptive Firefly Algorithm: Parameter Analysis and its Application

    PubMed Central

    Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm — adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem — protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4Å and 1.5Å from the native constraints with noise-free and 10% Gaussian white noise, respectively. PMID:25397812

  18. Application of an extended random-phase approximation to giant resonances in light-, medium-, and heavy-mass nuclei

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Krewald, S.; Reinhard, P.-G.

    2016-09-01

    We present results of the time blocking approximation (TBA) for giant resonances in light-, medium-, and heavy-mass nuclei. The TBA is an extension of the widely used random-phase approximation (RPA) adding complex configurations by coupling to phonon excitations. A new method for handling the single-particle continuum is developed and applied in the present calculations. We investigate in detail the dependence of the numerical results on the size of the single-particle space and the number of phonons as well as on nuclear matter properties. Our approach is self-consistent, based on an energy-density functional of Skyrme type where we used seven different parameter sets. The numerical results are compared with experimental data.

  19. Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li

    2016-01-01

    The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing the host size parameter and/or the number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, but then the large refractive-index contrast necessitates inclusion size parameters of the order of a few tenths. Irrespective of the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
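
    A small sketch of the Maxwell-Garnett mixing rule used as the effective-medium reference: the effective permittivity of a host of permittivity eps_m containing a volume fraction f of small inclusions of permittivity eps_i (the refractive-index values below are assumed, illustrative numbers, not those of the paper's dust/hematite computations):

    ```python
    import numpy as np

    def maxwell_garnett(eps_m, eps_i, f):
        """Maxwell-Garnett effective permittivity for inclusion volume fraction f."""
        num = eps_i + 2.0 * eps_m + 2.0 * f * (eps_i - eps_m)
        den = eps_i + 2.0 * eps_m - f * (eps_i - eps_m)
        return eps_m * num / den

    m_host = 1.53 + 0.008j          # illustrative dust-like refractive index (assumed)
    m_incl = 2.5 + 0.5j             # illustrative hematite-like refractive index (assumed)
    eps_eff = maxwell_garnett(m_host**2, m_incl**2, f=0.1)
    print("effective refractive index:", np.sqrt(eps_eff))
    ```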

  20. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)

  1. Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters

    NASA Astrophysics Data System (ADS)

    Royev, B.; Vinokur, A.; Kulikov, G.

    2018-04-01

    Transient oscillations in mechanical automatic-control systems with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method was applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed depending on the amplitude and frequency of the external influence. The obtained results make it evident that the dynamic phenomena occurring in systems with random parameters under external influence are complex, and their study requires further investigation.

  2. Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations.

    PubMed

    Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas

    2005-08-01

    The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.

  3. Application of genetic algorithms to tuning fuzzy control systems

    NASA Technical Reports Server (NTRS)

    Espy, Todd; Vombrack, Endre; Aldridge, Jack

    1993-01-01

    Real number genetic algorithms (GA) were applied for tuning fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller that controls a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions both when the algorithm started with randomly generated functions and with the best manually-tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.

  4. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

    Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The pseudo-excitation method (PEM) is an effective method for the calculation of random vibration. Because of the complicated and coupled random vibration during rocket or shuttle launch, a new staged white-noise mathematical model is deduced according to the practical launch environment. This model is applied with the PEM to the specific structure of a time-of-flight counter (ToFC). The power-spectral-density responses and the relevant dynamic characteristic parameters of the ToFC are obtained in terms of the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments are conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without any damage, and the key dynamical parameters of the ToFC are obtained. The revised PEM agrees with the random vibration experiments in dynamical parameters and responses, as proved by the comparative results. The maximum error is within 9%. The reasons for the errors are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures in the process of rocket or shuttle launch.
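
    A hedged single-degree-of-freedom illustration of the pseudo-excitation idea (parameter values are illustrative, not those of the ToFC structure): for a stationary random load with power spectral density S_xx(ω), the pseudo excitation sqrt(S_xx(ω))·exp(iωt) is applied as a harmonic load, and the squared magnitude of the harmonic response gives the response PSD directly.

    ```python
    import numpy as np

    m, c, k = 1.0, 8.0, 4.0e4                 # mass, damping, stiffness of a 1-DOF model
    w = np.linspace(1.0, 600.0, 2000)         # angular frequency grid (rad/s)
    S_xx = np.full_like(w, 1.0e-3)            # flat (white-noise-like) excitation PSD

    H = 1.0 / (k - m * w**2 + 1j * c * w)     # receptance of the oscillator
    pseudo_response = H * np.sqrt(S_xx)       # response to the pseudo excitation
    S_yy = np.abs(pseudo_response) ** 2       # response PSD, identical to |H|^2 * S_xx

    var_y = np.trapz(S_yy, w)                 # mean-square response from the PSD
    print("RMS displacement response:", np.sqrt(var_y))
    ```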

  5. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is carried by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  6. Microstructure and Mechanical Property of Glutaraldehyde-Treated Porcine Pulmonary Ligament.

    PubMed

    Chen, Huan; Zhao, Xuefeng; Berwick, Zachary C; Krieger, Joshua F; Chambers, Sean; Kassab, Ghassan S

    2016-06-01

    There is a significant need for fixed biological tissues with desired structural and material constituents for tissue engineering applications. Here, we introduce the lung ligament as a fixed biological material that may have clinical utility for tissue engineering. To characterize the lung tissue for potential clinical applications, we studied glutaraldehyde-treated porcine pulmonary ligament (n = 11) with multiphoton microscopy (MPM) and conducted biaxial planar experiments to characterize the mechanical property of the tissue. The MPM imaging revealed that there are generally two families of collagen fibers distributed in two distinct layers: The first family largely aligns along the longitudinal direction with a mean angle of θ = 10.7 ± 9.3 deg, while the second one exhibits a random distribution with a mean θ = 36.6 ± 27.4 deg. Elastin fibers appear in some intermediate sublayers with a random orientation distribution with a mean θ = 39.6 ± 23 deg. Based on the microstructural observation, a microstructure-based constitutive law was proposed to model the elastic property of the tissue. The material parameters were identified by fitting the model to the biaxial stress-strain data of specimens, and good fitting quality was achieved. The parameter e0 (which denotes the strain beyond which the collagen can withstand tension) of glutaraldehyde-treated tissues demonstrated low variability, implying a relatively consistent collagen undulation in different samples, while the stiffness parameters for elastin and collagen fibers showed relatively greater variability. The fixed tissues presented a smaller e0 than that of fresh specimens, confirming that glutaraldehyde crosslinking increases the mechanical strength of collagen-based biomaterials. The present study sheds light on the biomechanics of glutaraldehyde-treated porcine pulmonary ligament that may be a candidate for tissue engineering.

  7. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
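
    For illustration, the sketch below recovers beta shape parameters from (minimum, most likely, maximum) using the common PERT assumptions mean = (a + 4m + b)/6 and standard deviation = (b − a)/6, i.e. one of the "fraction of the range" conventions the abstract refers to; it is not the NASA Glenn method of reference 1, and the temperature bounds are hypothetical.

```python
from math import sqrt

def beta_from_three_points(a, m, b):
    """Method-of-moments beta parameters from (min, most likely, max).

    Uses the common PERT assumptions mean = (a + 4m + b)/6 and
    std = (b - a)/6; shown only as an illustration of the idea.
    """
    mu = (a + 4.0 * m + b) / 6.0
    sigma = (b - a) / 6.0
    # rescale the mean and variance to the unit interval [0, 1]
    mbar = (mu - a) / (b - a)
    v = (sigma / (b - a)) ** 2
    common = mbar * (1.0 - mbar) / v - 1.0
    alpha = mbar * common
    beta = (1.0 - mbar) * common
    return alpha, beta, mu, sigma

# Example: hypothetical turbine inlet temperature bounds in kelvin.
a, m, b = 1400.0, 1500.0, 1650.0
alpha, beta, mu, sigma = beta_from_three_points(a, m, b)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, mean = {mu:.1f} K, std = {sigma:.1f} K")
```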

  8. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.

  9. A closed-form solution to tensor voting: theory and applications.

    PubMed

    Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard

    2012-08-01

    We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.

  10. Application of Fractal theory for crash rate prediction: Insights from random parameters and latent class tobit models.

    PubMed

    Chand, Sai; Dixit, Vinayak V

    2018-03-01

    The repercussions from congestion and accidents on major highways can have significant negative impacts on the economy and environment. It is a primary objective of transport authorities to minimize the likelihood of these phenomena taking place, to improve safety and overall network performance. In this study, we use the Hurst Exponent metric from Fractal Theory as a congestion indicator for crash-rate modeling. We analyze one month of traffic speed data at several monitor sites along the M4 motorway in Sydney, Australia and assess congestion patterns with the Hurst Exponent of speed (H_speed). Random Parameters and Latent Class Tobit models were estimated to examine the effect of congestion on historical crash rates, while accounting for unobserved heterogeneity. Using a latent class modeling approach, the motorway sections were probabilistically classified into two segments, based on the presence of entry and exit ramps. This will allow transportation agencies to implement appropriate safety/traffic countermeasures when addressing accident hotspots or inadequately managed sections of motorway. Copyright © 2017 Elsevier Ltd. All rights reserved.
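
    A minimal rescaled-range (R/S) estimate of the Hurst exponent from a one-dimensional series is sketched below; the window sizes and the synthetic "speed" data are illustrative and unrelated to the M4 dataset or the models estimated in the article.

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())          # cumulative deviation from the window mean
            r = dev.max() - dev.min()              # range of the cumulated series
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)        # log(R/S) ~ H * log(n)
    return slope

# Toy usage on a synthetic "speed" series (white noise should give H close to 0.5).
rng = np.random.default_rng(42)
speeds = 80 + 5 * rng.standard_normal(2048)
print(f"estimated Hurst exponent: {hurst_rs(speeds):.2f}")
```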

  11. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
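
    The generator itself is a one-line recurrence, x_{n+1} = (a·x_n + c) mod m. The sketch below is a Python transcription using the well-known Park-Miller "minimal standard" constants, chosen only for illustration; they are not necessarily the parameters selected by the RANDOM program for its target microcomputer.

```python
class LCG:
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.

    The defaults are the widely used Park-Miller 'minimal standard' values,
    used here purely as an example parameter set.
    """
    def __init__(self, seed=12345, a=16807, c=0, m=2**31 - 1):
        self.state, self.a, self.c, self.m = seed, a, c, m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self):
        """Uniform variate in (0, 1)."""
        return self.next_int() / self.m

gen = LCG(seed=2024)
print([round(gen.next_float(), 4) for _ in range(5)])
```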

  12. Quantitative sensory testing response patterns to capsaicin- and ultraviolet-B–induced local skin hypersensitization in healthy subjects: a machine-learned analysis

    PubMed Central

    Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred

    2018-01-01

    Abstract The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
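
    A compact stand-in for the supervised step (a random forest followed by ranking of feature importances, loosely mirroring the ABC selection) is sketched below on synthetic data with the same shape as the QST matrix; the injected group effects are invented and only mimic the kind of pattern the study describes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the 246 x 10 QST matrix: three conditions
# (baseline / UV-B / capsaicin) and 10 made-up "QST parameters".
rng = np.random.default_rng(0)
n_per_class, n_params = 82, 10
X = rng.standard_normal((3 * n_per_class, n_params))
y = np.repeat([0, 1, 2], n_per_class)
X[y == 1, 0] -= 1.0   # pretend parameter 0 (e.g. a heat pain threshold) shifts after UV-B
X[y == 2, 0] -= 0.8   # ... and after capsaicin
X[y == 1, 3] -= 0.5   # pretend parameter 3 (e.g. cold sensitivity) is UV-B specific

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

# Rank parameters by impurity-based importance (a crude stand-in for ABC analysis).
order = np.argsort(forest.feature_importances_)[::-1]
for i in order[:3]:
    print(f"parameter {i}: importance {forest.feature_importances_[i]:.3f}")
```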

  13. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.

  14. The design of optimum remote-sensing instruments

    NASA Technical Reports Server (NTRS)

    Peckham, G. E.; Flower, D. A.

    1983-01-01

    Remote-sensing instruments allow values for certain properties of a target to be retrieved from measurements of radiation emitted, reflected or transmitted by the target. The retrieval accuracy is affected by random variations in the many target properties which affect the measurements. A method is described, by which statistical properties of the target and theoretical models of its electromagnetic behavior can be used to choose values for the instrument parameters which maximize the retrieval accuracy. The technique is applicable to a wide range of remote-sensing instruments.

  15. VizieR Online Data Catalog: TROY project. I. (Lillo-Box+, 2018)

    NASA Astrophysics Data System (ADS)

    Lillo-Box, J.; Barrado, D.; Figueira, P.; Leleu, A.; Santos, N. C.; Correia, A. C. M.; Robutel, P.; Faria, J. P.

    2017-11-01

    tablea4.dat: Posterior confidence intervals of the parameters explored to fit the radial velocity data according to equation 9 in the paper. tablea6.dat: Maximum mass of possible trojan bodies for the six tested models assuming their presence. We present the 95% confidence intervals of the mass computed from random samplings of the radial velocity semi-amplitude K2, the inclination i, the eccentricity e (when applicable), and the stellar mass obtained from the literature. (2 data files).

  16. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulation random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to .05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  17. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.
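
    The inversion-by-learning idea can be sketched with a standard backpropagation MLP standing in for the fast-learning network (the FL architecture itself is not reproduced here); the forward "scattering model", parameter ranges and noise level below are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical forward "scattering model": backscatter at a few incidence angles
# as a smooth, invented function of (permittivity, correlation length, rms height).
def forward(params, angles=np.deg2rad([20, 30, 40, 50])):
    eps, corr, rms = params.T
    return (np.log10(eps)[:, None]
            + np.outer(rms, np.cos(angles))
            - np.outer(corr, np.sin(angles)))

params = rng.uniform([3.0, 0.05, 0.2], [30.0, 0.5, 3.0], size=(5000, 3))
sigma0 = forward(params) + 0.02 * rng.standard_normal((5000, 4))

# Train the inverse mapping: backscatter -> surface parameters.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(sigma0[:4000], params[:4000])
pred = net.predict(sigma0[4000:])
rmse = np.sqrt(np.mean((pred - params[4000:]) ** 2, axis=0))
print("RMS inversion error (permittivity, corr. length, rms height):", rmse.round(3))
```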

  18. A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.

    2018-06-01

    We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular, no expansivity or mixing properties are required.

  19. Intervention-Based Stochastic Disease Eradication

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira

    2013-03-01

    Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204

  20. A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.

    2018-01-01

    We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular, no expansivity or mixing properties are required.

  1. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    NASA Astrophysics Data System (ADS)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of the dynamic response are determined by the random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.

  2. Applications of polarization speckle in skin cancer detection and monitoring

    NASA Astrophysics Data System (ADS)

    Lee, Tim K.; Tchvialeva, Lioudmila; Phillips, Jamie; Louie, Daniel C.; Zhao, Jianhua; Wang, Wei; Lui, Harvey; Kalia, Sunil

    2018-01-01

    Polarization speckle is a rapidly developing field. Unlike laser speckle, polarization speckle consists of stochastic interference patterns with spatially random polarizations, amplitudes and phases. We have been working in this exciting research field, developing techniques to generate polarization patterns from skin. We hypothesize that polarization speckle patterns could be used in biomedical applications, especially for detecting and monitoring skin cancers, the most common neoplasms in white populations around the world. This paper describes our effort in developing two polarization speckle devices. One of them captures the Stokes parameters S0 and S1 simultaneously, and the other captures all four Stokes parameters S0, S1, S2, and S3 in one shot, within milliseconds. Hence, these two devices could be used in medical clinics to assess skin conditions in vivo. In order to validate our hypothesis, we conducted a series of three clinical studies. These are early pilot studies, and the results suggest that the devices have potential to detect and monitor skin cancers.

  3. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks †

    PubMed Central

    Sampangi, Raghav V.; Sampalli, Srinivas

    2015-01-01

    Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, as it requires fewer resources when implemented, and it offers high security through increased unpredictability owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis. PMID:26389899

  4. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks.

    PubMed

    Sampangi, Raghav V; Sampalli, Srinivas

    2015-09-15

    Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, as it requires fewer resources when implemented, and it offers high security through increased unpredictability owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.

  5. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate of its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate of beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  6. Assessing the importance of self-regulating mechanisms in diamondback moth population dynamics: application of discrete mathematical models.

    PubMed

    Nedorezov, Lev V; Löhr, Bernhard L; Sadykova, Dinara L

    2008-10-07

    The applicability of discrete mathematical models for the description of diamondback moth (DBM) (Plutella xylostella L.) population dynamics was investigated. The parameter values for several well-known discrete time models (Skellam, Moran-Ricker, Hassell, Maynard Smith-Slatkin, and discrete logistic models) were estimated for an experimental time series from a highland cabbage-growing area in eastern Kenya. For all sets of parameters, boundaries of confidence domains were determined. Maximum calculated birth rates varied between 1.086 and 1.359 when empirical values were used for parameter estimation. After fitting of the models to the empirical trajectory, all birth rate values were considerably higher (1.742-3.526). The carrying capacity was determined to be between 13.0 and 39.9 DBM/plant; after fitting of the models these values declined to 6.48-9.3, all well within the range encountered empirically. The application of the Durbin-Watson criterion for comparison of theoretical and experimental population trajectories produced negative correlations with all models. A test of residual value groupings for randomness showed that their distribution is non-stochastic. In consequence, we conclude that DBM dynamics cannot be explained as a result of intra-population self-regulative mechanisms only (=by any of the models tested) and that more comprehensive models are required for the explanation of DBM population dynamics.
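
    As an illustration of fitting one of the candidate maps, the sketch below estimates Moran-Ricker parameters in N_{t+1} = N_t·exp(r(1 − N_t/K)) from a synthetic series by nonlinear least squares and then counts sign runs in the residuals, in the spirit of the article's randomness check; the data are invented and stand in for the Kenyan counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker_step(n_t, r, K):
    """Moran-Ricker map: N_{t+1} = N_t * exp(r * (1 - N_t / K))."""
    return n_t * np.exp(r * (1.0 - n_t / K))

# Synthetic counts standing in for DBM-per-plant data (not the Kenyan series).
rng = np.random.default_rng(3)
true_r, true_K = 1.1, 8.0
counts = [2.0]
for _ in range(40):
    counts.append(ricker_step(counts[-1], true_r, true_K) * rng.lognormal(0, 0.15))
counts = np.array(counts)

# Fit N_{t+1} against N_t with nonlinear least squares.
n_t, n_next = counts[:-1], counts[1:]
(r_hat, K_hat), cov = curve_fit(ricker_step, n_t, n_next, p0=(0.5, 5.0))
print(f"estimated intrinsic rate r = {r_hat:.2f}, carrying capacity K = {K_hat:.2f}")

# Crude randomness check on the residuals: count runs of equal sign.
residuals = n_next - ricker_step(n_t, r_hat, K_hat)
signs = np.sign(residuals)
runs = 1 + np.sum(signs[1:] != signs[:-1])
print(f"number of sign runs in the residuals: {runs} (out of {len(residuals)} points)")
```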

  7. Effects of microcurrent application alone or in combination with topical Hypericum perforatum L. and Arnica montana L. on surgically induced wound healing in Wistar rats.

    PubMed

    Castro, Fabiene C B; Magre, Amanda; Cherpinski, Ricardo; Zelante, Paulo M; Neves, Lia M G; Esquisatto, Marcelo A M; Mendonça, Fernanda A S; Santos, Gláucia M T

    2012-07-01

    This study evaluated the wound healing activity of microcurrent application alone or in combination with topical Hypericum perforatum L. and Arnica montana L. on surgically induced skin incisions on the back of Wistar rats. The animals were randomly divided into six groups: (1) no intervention (control group); (2) microcurrent application (10 μA/2 min); (3) topical application of gel containing H. perforatum; (4) topical application of H. perforatum gel and microcurrent (10 μA/2 min); (5) topical application of gel containing A. montana; (6) topical application of A. montana gel and microcurrent (10 μA/2 min). Tissue samples were obtained on the 2nd, 6th and 10th days after injury and submitted to structural and morphometric analysis. Differences in wound healing were observed between treatments when compared to the control group. Microcurrent application alone or combined with H. perforatum gel or A. montana gel exerted significant effects on wound healing in this experimental model in all of the study parameters (P<0.05) when compared to the control group, with positive effects seen in newly formed tissue, the number of newly formed blood vessels and the percentage of mature collagen fibers. The morphometric data confirmed the structural findings. In conclusion, application of H. perforatum or A. montana was effective in experimental wound healing when compared to the control, but significant differences in the parameters studied were only observed when these treatments were combined with microcurrent application. Copyright © 2012 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  8. Validating the random search model for two targets of different difficulty.

    PubMed

    Chan, Alan H S; Yu, Ruifeng

    2010-02-01

    A random visual search model was fitted to 1,788 search times obtained from a nonidentical double-target search task. Thirty Hong Kong Chinese (13 men, 17 women) ages 18 to 33 years (M = 23, SD = 6.8) took part in the experiment voluntarily. The overall adequacy and prediction accuracy of the model for various search time parameters (mean and median search times and response times) for both individual and pooled data show that search strategy may reasonably be inferred from search time distributions. The results also suggested the general applicability of the random search model for describing the search behavior of a large number of participants performing the type of search used here, as well as the practical feasibility of its application for determination of stopping policy for optimization of an inspection system design. Although the data generally conformed to the model, the search for the more difficult target was faster than expected. The more difficult target was usually detected after the easier target, and it is suggested that some degree of memory-guided searching may have been used for the second target. Some abnormally long search times were observed, and it is possible that these might have been due to the characteristics of visual lobes, nonoptimum interfixation distances and inappropriate overlapping of lobes, as has been previously reported.

  9. Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction

    NASA Astrophysics Data System (ADS)

    Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo

    2014-12-01

    To achieve a higher level of seismic random noise suppression, the Radon transform has been adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies involved performing TFPF in full-aperture Radon domain, including linear Radon and parabolic Radon. Although the superiority of this method to the conventional TFPF has been tested through processing on synthetic seismic models and field seismic data, there are still some limitations in the method. Both full-aperture linear Radon and parabolic Radon are applicable and effective for some relatively simple situations (e.g., curve reflection events with regular geometry) but inapplicable for complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the application of the Radon transform must be applied. It would serve the filter method better by adapting the transform to the local character of the data variations. In this article, we propose an idea that adopts the local Radon transform referred to as piecewise full-aperture Radon to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated situations of seismic data.

  10. Statistics of biospeckles with application to diagnostics of periodontitis

    NASA Astrophysics Data System (ADS)

    Starukhin, Pavel Y.; Kharish, Natalia A.; Sedykh, Alexey V.; Ulyanov, Sergey S.; Lepilin, Alexander V.; Tuchin, Valery V.

    1999-04-01

    Results of Monte-Carlo simulations of Doppler shift are presented for a model of a random medium that contains moving particles. Single-layered and two-layered configurations of the medium are considered. The Doppler shift of the frequency of laser light is investigated as a function of parameters such as the absorption coefficient, scattering coefficient, and thickness of the medium. The possibility of applying speckle interferometry for diagnostics in dentistry has been analyzed, and the problem of standardization of the measuring procedure has been studied. The deviation of the output characteristics of a Doppler system for blood microcirculation measurements has been investigated, and the dependence of the form of the Doppler spectrum on the number of speckles integrated by the aperture has been studied in experiments in vivo.

  11. Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis

    NASA Astrophysics Data System (ADS)

    Szafran, J.; Kamiński, M.

    2017-02-01

    The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of the traditional FEM is also shown using the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean values enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.
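
    The response-function idea can be sketched with a closed-form beam deflection standing in for the finite-element run: the random parameter is varied around its mean in several numerical tests, a polynomial response function is fitted by least squares, and probabilistic moments of the response are then computed (here by sampling the cheap fitted polynomial rather than analytically, as the article does); all numbers below are illustrative, not the footbridge model.

```python
import numpy as np

# Stand-in "FEM solve": mid-span deflection of a simply supported beam,
# u(E) = 5*q*L^4 / (384*E*I).  In the SFEM setting this would be a full
# finite-element run of the footbridge model.
q, L, I = 5.0e3, 20.0, 8.0e-3          # load [N/m], span [m], second moment of area [m^4]
E_mean, E_cov = 210.0e9, 0.08          # mean Young modulus [Pa] and coefficient of variation

def deflection(E):
    return 5.0 * q * L**4 / (384.0 * E * I)

# 1) several numerical tests with the random parameter varied around its mean value,
#    expressed through the normalised variable e = E / E_mean for good conditioning
e_trials = np.linspace(0.85, 1.15, 9)
u_trials = deflection(e_trials * E_mean)

# 2) least-squares polynomial response function u(e)
response = np.poly1d(np.polyfit(e_trials, u_trials, deg=3))

# 3) probabilistic moments of the response via the fitted response function
rng = np.random.default_rng(7)
e_samples = rng.normal(1.0, E_cov, size=200_000)
u_samples = response(e_samples)
print(f"mean deflection {u_samples.mean() * 1e3:.2f} mm, "
      f"standard deviation {u_samples.std() * 1e3:.2f} mm")
```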

  12. Effect of cinnamon on glucose control and lipid parameters.

    PubMed

    Baker, William L; Gutierrez-Williams, Gabriela; White, C Michael; Kluger, Jeffrey; Coleman, Craig I

    2008-01-01

    To perform a meta-analysis of randomized controlled trials of cinnamon to better characterize its impact on glucose and plasma lipids. A systematic literature search through July 2007 was conducted to identify randomized placebo-controlled trials of cinnamon that reported data on A1C, fasting blood glucose (FBG), or lipid parameters. The mean change in each study end point from baseline was treated as a continuous variable, and the weighted mean difference was calculated as the difference between the mean value in the treatment and control groups. A random-effects model was used. Five prospective randomized controlled trials (n = 282) were identified. Upon meta-analysis, the use of cinnamon did not significantly alter A1C, FBG, or lipid parameters. Subgroup and sensitivity analyses did not significantly change the results. Cinnamon does not appear to improve A1C, FBG, or lipid parameters in patients with type 1 or type 2 diabetes.

  13. Remote Sensing/gis Integration for Site Planning and Resource Management

    NASA Technical Reports Server (NTRS)

    Fellows, J. D.

    1982-01-01

    The development of an interactive/batch gridded information system (array of cells georeferenced to USGS quad sheets) and interfacing application programs (e.g., hydrologic models) is discussed. This system allows non-programmer users to request any data set(s) stored in the data base by inputting any random polygon's (watershed, political zone) boundary points. The data base information contained within this polygon can be used to produce maps, statistics, and define model parameters for the area. Present/proposed conditions for the area may be compared by inputting future usage (land cover, soils, slope, etc.). This system, known as the Hydrologic Analysis Program (HAP), is especially effective in the real time analysis of proposed land cover changes on runoff hydrographs and graphics/statistics resource inventories of random study area/watersheds.

  14. Tensor of effective susceptibility in random magnetic composites: Application to two-dimensional and three-dimensional cases

    NASA Astrophysics Data System (ADS)

    Posnansky, Oleg P.

    2018-05-01

    The measurement of dynamic magnetic susceptibility by nuclear magnetic resonance is used to reveal information about the internal structure of various magnetoactive composites. The response of such a material to the applied external static and time-varying magnetic fields encodes intrinsic dynamic correlations and depends on links between the macroscopic effective susceptibility and the structure on the microscopic scale. In the current work we carried out a computational analysis of the frequency-dependent dynamic magnetic susceptibility and demonstrated its dependence on the microscopic architectural elements while also considering Euclidean dimensionality. The proposed numerical method is efficient in the simulation of nuclear magnetic resonance experiments in two- and three-dimensional random magnetic media by choosing and modeling the influence of the concentration of components and internal hierarchical characteristics of physical parameters.

  15. Diffusion amid random overlapping obstacles: Similarities, invariants, approximations

    PubMed Central

    Novak, Igor L.; Gao, Fei; Kraikivski, Pavel; Slepchenko, Boris M.

    2011-01-01

    Efficient and accurate numerical techniques are used to examine similarities of effective diffusion in a void between random overlapping obstacles: essential invariance of effective diffusion coefficients (Deff) with respect to obstacle shapes and applicability of a two-parameter power law over nearly entire range of excluded volume fractions (ϕ), except for a small vicinity of a percolation threshold. It is shown that while neither of the properties is exact, deviations from them are remarkably small. This allows for quick estimation of void percolation thresholds and approximate reconstruction of Deff (ϕ) for obstacles of any given shape. In 3D, the similarities of effective diffusion yield a simple multiplication “rule” that provides a fast means of estimating Deff for a mixture of overlapping obstacles of different shapes with comparable sizes. PMID:21513372

  16. Problems inherent in using aircraft for radio oceanography studies

    NASA Technical Reports Server (NTRS)

    Walsh, E. J.

    1977-01-01

    Some of the disadvantages relating to altitude stability and proximity to the ocean are described for radio oceanography studies using aircraft. The random oscillatory motion introduced by the autopilot in maintaining aircraft altitude requires a more sophisticated range tracker for a radar altimeter than would be required in a satellite application. One-dimensional simulations of the sea surface (long-crested waves) are performed using both the JONSWAP spectrum and the Pierson-Moskowitz spectrum. The results of the simulation indicate that care must be taken in trying to experimentally verify instrument measurement accuracy. Because of the relatively few wavelengths examined from an aircraft due to proximity to the ocean and low velocity compared to a satellite, the random variation in the sea surface parameters being measured can far exceed an instrument's ability to measure them.

  17. Three-dimensional information hierarchical encryption based on computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Kong, Dezhao; Shen, Xueju; Cao, Liangcai; Zhang, Hao; Zong, Song; Jin, Guofan

    2016-12-01

    A novel approach for encrypting three-dimensional (3-D) scene information hierarchically based on computer-generated holograms (CGHs) is proposed. The CGHs of the layer-oriented 3-D scene information are produced by angular-spectrum propagation algorithm at different depths. All the CGHs are then modulated by different chaotic random phase masks generated by the logistic map. Hierarchical encryption encoding is applied when all the CGHs are accumulated one by one, and the reconstructed volume of the 3-D scene information depends on permissions of different users. The chaotic random phase masks could be encoded into several parameters of the chaotic sequences to simplify the transmission and preservation of the keys. Optical experiments verify the proposed method and numerical simulations show the high key sensitivity, high security, and application flexibility of the method.
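
    A minimal sketch of the chaotic-mask step is given below: a logistic-map sequence is turned into a random phase mask that is fully reproducible from two key parameters (the control parameter and the initial value), which is the point of encoding the masks into a few chaotic-sequence parameters. The array size and values are illustrative, and the angular-spectrum propagation of the CGHs is omitted.

```python
import numpy as np

def logistic_phase_mask(shape, r=3.99, x0=0.61, burn_in=1000):
    """Chaotic random phase mask exp(i*2*pi*x_n) generated by the logistic map.

    The whole mask is reproducible from the two keys (r, x0).
    """
    n = shape[0] * shape[1]
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.exp(1j * 2.0 * np.pi * seq.reshape(shape))

# Modulate a (stand-in) hologram layer with the chaotic mask and "decrypt"
# it again by multiplying with the conjugate mask.
layer = np.ones((64, 64), dtype=complex)      # placeholder CGH of one depth layer
mask = logistic_phase_mask(layer.shape, r=3.99, x0=0.61)
encrypted = layer * mask
recovered = encrypted * np.conj(mask)
print("max reconstruction error:", np.abs(recovered - layer).max())
```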

  18. L-hop percolation on networks with arbitrary degree distributions and its applications

    NASA Astrophysics Data System (ADS)

    Shang, Yilun; Luo, Weiliang; Xu, Shouhuai

    2011-09-01

    Site percolation has been used to help understand analytically the robustness of complex networks in the presence of random node deletion (or failure). In this paper we move a further step beyond random node deletion by considering that a node can be deleted because it is chosen or because it is within some L-hop distance of a chosen node. Using the generating functions approach, we present analytic results on the percolation threshold as well as the mean size, and size distribution, of nongiant components of complex networks under such operations. The introduction of parameter L is both conceptually interesting because it accommodates a sort of nonindependent node deletion, which is often difficult to tackle analytically, and practically interesting because it offers useful insights for cybersecurity (such as botnet defense).
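
    A small Monte-Carlo sketch that complements the generating-function analysis: choose a fraction of nodes at random, delete each chosen node together with its L-hop neighbourhood, and measure the surviving giant component. The Erdős-Rényi graph and parameter values below are stand-ins, not the degree distributions analysed in the paper.

```python
import random
import networkx as nx

def l_hop_deletion(G, fraction_chosen, L):
    """Delete each chosen node together with everything within L hops of it."""
    chosen = random.sample(list(G.nodes), int(fraction_chosen * G.number_of_nodes()))
    to_remove = set()
    for v in chosen:
        # nodes at distance <= L from v (the dict includes v itself at distance 0)
        to_remove.update(nx.single_source_shortest_path_length(G, v, cutoff=L))
    H = G.copy()
    H.remove_nodes_from(to_remove)
    return H

def giant_fraction(H, n_original):
    """Size of the largest connected component relative to the original network."""
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / n_original

random.seed(1)
n = 20_000
G = nx.fast_gnp_random_graph(n, 4.0 / n, seed=1)   # Poisson degrees, mean degree ~ 4
for L in (0, 1, 2):
    H = l_hop_deletion(G, fraction_chosen=0.02, L=L)
    print(f"L = {L}: surviving giant-component fraction {giant_fraction(H, n):.3f}")
```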

  19. Scaling of Directed Dynamical Small-World Networks with Random Responses

    NASA Astrophysics Data System (ADS)

    Zhu, Chen-Ping; Xiong, Shi-Jie; Tian, Ying-Jie; Li, Nan; Jiang, Ke-Sheng

    2004-05-01

    A dynamical model of small-world networks, with directed links which describe various correlations in social and natural phenomena, is presented. Random responses of sites to the input message are introduced to simulate real systems. The interplay of these ingredients results in the collective dynamical evolution of a spinlike variable S(t) of the whole network. The global average spreading length and the average spreading time are found to scale as p^{-α} ln N with different exponents. Meanwhile, S(t) behaves in a duple scaling form for N ≫ N*: S ∼ f(p^{-β} q^{γ} t̃), where p and q are rewiring and external parameters, α, β, and γ are scaling exponents, and f(t̃) is a universal function. Possible applications of the model are discussed.

  20. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinite second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve the location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than the one using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.

  1. Hemodynamic effects of nitroglycerin ointment in emergency department patients.

    PubMed

    Mumma, Bryn E; Dhingra, Kapil R; Kurlinkus, Charley; Diercks, Deborah B

    2014-08-01

    Nitroglycerin ointment is commonly used in the treatment of emergency department (ED) patients with suspected acute heart failure (AHF) or suspected acute coronary syndrome (ACS), but its hemodynamic effects in this population are not well described. Our objective was to assess the effect of nitroglycerin ointment on mean arterial pressure (MAP) and systemic vascular resistance (SVR) in ED patients receiving nitroglycerin. We hypothesized that nitroglycerin ointment would result in a reduction of MAP and SVR in the acute treatment of patients. We conducted a prospective, observational pilot study in a convenience sample of adult patients from a single ED who were treated with nitroglycerin ointment. Impedance cardiography was used to measure MAP, SVR, cardiac output (CO), stroke volume (SV), and thoracic fluid content (TFC) at baseline and at 30, 60, and 120 min after application of nitroglycerin ointment. Mixed effects regression models with random slope and random intercept were used to analyze changes in hemodynamic parameters from baseline to 30, 60, and 120 min after adjusting for age, sex, and final ED diagnosis of AHF. Sixty-four subjects with mean age of 55 years (interquartile range, 48-67 years) were enrolled; 59% were male. In the adjusted analysis, MAP and TFC decreased after application of nitroglycerin ointment (p=0.001 and p=0.043, respectively). Cardiac index, CO, SVR, and SV showed no change (p=0.113, p=0.085, p=0.570, and p=0.076, respectively) over time. Among ED patients who are treated with nitroglycerin ointment, MAP and TFC decrease over time. However, other hemodynamic parameters do not change after application of nitroglycerin ointment in these patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Hemodynamic Effects of Nitroglycerin Ointment in Emergency Department Patients

    PubMed Central

    Mumma, Bryn E.; Dhingra, Kapil R.; Kurlinkus, Charley; Diercks, Deborah B.

    2014-01-01

    Background Nitroglycerin ointment is commonly used in the treatment of emergency department (ED) patients with suspected acute heart failure (AHF) or suspected acute coronary syndrome (ACS), but its hemodynamic effects in this population are not well described. Objectives Our objective was to assess effect of nitroglycerin ointment on mean arterial pressure (MAP) and systemic vascular resistance (SVR) in ED patients receiving nitroglycerin. We hypothesized that nitroglycerin ointment would result in a reduction of MAP and SVR in the acute treatment of patients. Methods We conducted a prospective, observational pilot study in a convenience sample of adult patients from a single ED who were treated with nitroglycerin ointment. Impedance cardiography was used to measure MAP, SVR, cardiac output (CO), stroke volume (SV), and thoracic fluid content (TFC) at baseline and at 30, 60, and 120 minutes following application of nitroglycerin ointment. Mixed effects regression models with random slope and random intercept were used to analyze changes in hemodynamic parameters from baseline to 30, 60, and 120 minutes after adjusting for age, sex, and final ED diagnosis of AHF. Results Sixty-four subjects with mean age 55 years (IQR 48-67) were enrolled; 59% were male. In the adjusted analysis, MAP and TFC decreased following application of nitroglycerin ointment (p=0.001 and p=0.043, respectively). CI, CO, SVR, and SV showed no change (p=0.113, p=0.085, p=0.570, and p=0.076, respectively) over time. Conclusions Among ED patients who are treated with nitroglycerin ointment, MAP and TFC decrease over time. However, other hemodynamic parameters do not change following application of nitroglycerin ointment in these patients. PMID:24698507

  3. (Un)Natural Disasters: The Electoral Cycle Outweighs the Hydrologic Cycle in Drought Declaration in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2016-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP, and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose big challenges to data processing. The challenge is not limited to optical sensors but extends to infrared sounders and radar images, which have also increased in spectral, spatial, and temporal resolution. In addition, extremely large remote sensing data archives have already been collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust, and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian process (GP) methods that can be trained on problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture, and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
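
    For context on the random Fourier feature strategy mentioned above, the following sketch states the standard shift-invariant kernel approximation in generic notation; D, ω_i, and b_i are generic symbols, not quantities defined in the abstract:

```latex
k(\mathbf{x},\mathbf{x}') \;\approx\; \frac{1}{D}\sum_{i=1}^{D} z_i(\mathbf{x})\,z_i(\mathbf{x}'),
\qquad
z_i(\mathbf{x}) = \sqrt{2}\,\cos\!\left(\boldsymbol{\omega}_i^{\top}\mathbf{x} + b_i\right),
\qquad
\boldsymbol{\omega}_i \sim p(\boldsymbol{\omega}),\;\; b_i \sim \mathcal{U}[0,2\pi),
```

    where p(ω) is the spectral density of the kernel (a Gaussian for the RBF kernel). Fitting a linear model on the D random features replaces the cubic-in-n kernel solve with one that scales linearly in the number of instances, which is what makes training on millions of samples feasible.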

  4. Testing statistical self-similarity in the topology of river networks

    USGS Publications Warehouse

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  5. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As is usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly reduce sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is then likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  6. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.
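
    As a hedged illustration of the survival-forest idea discussed above (not the analysis of the breast cancer trial itself), the sketch below fits a random survival forest to simulated right-censored data and evaluates it with the concordance index; it assumes the scikit-survival package is available, and all variable names and data are synthetic placeholders.

```python
# Minimal sketch: random survival forest on simulated right-censored data.
# Assumes the scikit-survival package (sksurv) is installed; data and names are illustrative.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
n, p = 300, 5
X = rng.normal(size=(n, p))                          # covariates (e.g., clinical factors)
true_time = rng.exponential(scale=np.exp(X[:, 0]))   # event times depend on the first covariate
cens_time = rng.exponential(scale=2.0, size=n)       # independent censoring times
time = np.minimum(true_time, cens_time)
event = true_time <= cens_time                       # True = event observed, False = censored

y = Surv.from_arrays(event=event, time=time)         # structured array expected by sksurv

train, test = np.arange(200), np.arange(200, n)
rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X[train], y[train])

risk = rsf.predict(X[test])                          # higher value = higher predicted risk
cindex = concordance_index_censored(event[test], time[test], risk)[0]
print(f"test concordance index: {cindex:.3f}")
```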

  7. Crossover ensembles of random matrices and skew-orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in

    2011-08-15

    Highlights: We study crossover ensembles of the Jacobi family of random matrices. We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. We use the method of skew-orthogonal polynomials and quaternion determinants. We prove universality of spectral correlations in crossover ensembles. We discuss applications to quantum conductance and communication theory problems. Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E, 79, 2009, p. 026211) we considered the Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for the jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
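
    The following toy simulation is not part of the cited work; it illustrates one conventional numerical realization of an orthogonal-unitary crossover, in which a real symmetric matrix is perturbed by an imaginary antisymmetric component whose strength λ plays the role of the symmetry-breaking parameter, and the mean nearest-neighbour spacing ratio of the spectrum is tracked as λ grows from the GOE-like to the GUE-like regime.

```python
# Toy orthogonal-unitary crossover ensemble (illustration only, not from the cited paper).
# H(lam) = S + 1j*lam*A is Hermitian when S is real symmetric and A is real antisymmetric;
# lam = 0 gives GOE-like statistics, lam of order 1 gives GUE-like statistics.
import numpy as np

rng = np.random.default_rng(1)
N, samples = 200, 50

def mean_spacing_ratio(lam):
    ratios = []
    for _ in range(samples):
        g = rng.normal(size=(N, N))
        S = (g + g.T) / np.sqrt(2.0)            # real symmetric part
        a = rng.normal(size=(N, N))
        A = (a - a.T) / np.sqrt(2.0)            # real antisymmetric part
        H = S + 1j * lam * A                    # Hermitian crossover matrix
        ev = np.linalg.eigvalsh(H)
        s = np.diff(ev[N // 4: 3 * N // 4])     # level spacings in the bulk of the spectrum
        r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
        ratios.append(r.mean())
    return np.mean(ratios)

for lam in (0.0, 0.1, 0.5, 1.0):
    print(f"lambda = {lam:4.1f}  mean spacing ratio = {mean_spacing_ratio(lam):.3f}")
```

    The printed spacing ratio interpolates between its GOE and GUE values as the symmetry-breaking strength increases, which is the qualitative behaviour the crossover theory describes quantitatively.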

  8. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    PubMed

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa⁻¹ in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  9. Dental wax decreases calculus accumulation in small dogs.

    PubMed

    Smith, Mark M; Smithson, Christopher W

    2014-01-01

    A dental wax was evaluated after unilateral application in 20 client-owned, mixed and purebred small dogs using a clean, split-mouth study model. All dogs had clinical signs of periodontal disease including plaque, calculus, and/or gingivitis. The wax was randomly applied to the teeth of one side of the mouth daily for 30 days while the contralateral side received no treatment. Owner parameters evaluated included compliance and a subjective assessment of ease of wax application. Gingivitis, plaque, and calculus accumulation were scored at the end of the study period. Owners considered the wax easy to apply in all dogs. Compliance with no missed application days was achieved in 8 dogs. The number of missed application days had no effect on wax efficacy. There was no significant difference in gingivitis or plaque accumulation scores when comparing treated and untreated sides. Calculus accumulation scores were significantly less (22.1%) for teeth receiving the dental wax.

  10. Enhancement of the efficiency of the automatic control system to control the thermal load of steam boilers fired with fuels of several types

    NASA Astrophysics Data System (ADS)

    Ismatkhodzhaev, S. K.; Kuzishchin, V. F.

    2017-05-01

    An automatic control system (ACS) for the thermal load of a drum-type boiler under random fluctuations in the blast-furnace and coke-oven gas consumption rates, with the control action applied to the natural gas consumption, is considered. The system provides for the use of a compensator for the basic disturbance, the blast-furnace gas consumption rate. To enhance the performance of the system, it is proposed to use more accurate second-order delay models of the channels of the controlled object, to calculate the controller parameters by frequency methods, and to determine the structure and parameters of the compensator with allowance for the statistical characteristics of the disturbances and with the use of simulation. The statistical characteristics of the random blast-furnace gas consumption signal based on experimental data are provided. The random signal is presented in the form of low-frequency (LF) and high-frequency (HF) components. Models of the correlation functions and spectral densities are developed. The article presents the results of calculating the optimal settings of the control loop with the controlled variable in the form of the "heat" signal with the restricted frequency variation index, using three variants of the control performance criteria, viz., the linear and quadratic integral indices under step disturbance and the control error variance under random disturbance by the blast-furnace gas consumption rate. It is recommended to select a compensator designed as a series connection of two parts, one of which corresponds to the operator inverse to the transfer function of the PI controller, i.e., a real differentiating element. This facilitates the realization of the second part of the compensator by the invariance condition, similar to transmitting the compensating signal to the object input. The results of simulation under random disturbance by the blast-furnace gas consumption are reported. Recommendations are made on the structure and parameters of the shaping filters for modeling the LF and HF components of the random signal. The results of the research may find application in systems that control thermal processes with compensation of basic disturbances, in particular in boilers fired with accompanying gases.

  11. Study on Nonlinear Vibration Analysis of Gear System with Random Parameters

    NASA Astrophysics Data System (ADS)

    Tong, Cao; Liu, Xiaoyuan; Fan, Li

    2018-03-01

    In order to study the dynamic characteristics of a gear nonlinear vibration system and the influence of random parameters, a nonlinear stochastic vibration model of a 3-DOF gear system is first established based on Newton's law, and the random response of the gear vibration is simulated by a stepwise integration method. Secondly, the influence of stochastic parameters such as meshing damping, tooth-side gap, and excitation frequency on the dynamic response of the gear nonlinear system is analyzed using stability analysis tools such as bifurcation diagrams and the Lyapunov exponent method. The analysis shows that the stochastic process cannot be neglected, as it can cause random bifurcation and chaos in the system response. This study provides an important reference for vibration engineering designers.
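
    The abstract does not give the governing equations, so the sketch below uses a generic single-degree-of-freedom oscillator with a tooth-side gap (backlash) nonlinearity as a stand-in, integrated step by step and sampled once per forcing period; the model form and every parameter value are illustrative assumptions, not the 3-DOF system of the study.

```python
# Illustrative frequency scan for a 1-DOF oscillator with backlash (gap) nonlinearity.
# The model and all parameter values are generic placeholders, not those of the cited study.
import numpy as np

def restoring(x, k=1.0, gap=0.5):
    # piecewise-linear mesh stiffness with a dead zone of half-width `gap`
    if x > gap:
        return k * (x - gap)
    if x < -gap:
        return k * (x + gap)
    return 0.0

def poincare_points(omega, zeta=0.05, force=0.3, n_periods=400, steps=200):
    dt = 2.0 * np.pi / omega / steps
    x, v = 0.0, 0.0
    pts = []
    for period in range(n_periods):
        for i in range(steps):
            t = (period * steps + i) * dt
            # semi-implicit Euler step of x'' + 2*zeta*x' + f(x) = force*cos(omega*t)
            a = force * np.cos(omega * t) - 2.0 * zeta * v - restoring(x)
            v += a * dt
            x += v * dt
        if period >= n_periods - 50:          # keep post-transient Poincare samples only
            pts.append(x)
    return pts

# A near-zero span of the Poincare samples suggests a period-1 response;
# a large span suggests a multi-period or chaotic response at that excitation frequency.
for omega in np.linspace(0.5, 2.0, 7):
    pts = poincare_points(omega)
    print(f"omega = {omega:4.2f}  Poincare section span = {max(pts) - min(pts):.4f}")
```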

  12. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    NASA Astrophysics Data System (ADS)

    Girija, R.; Singh, Hukum

    2018-06-01

    In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used on its own is not reliable and DRPE does not provide non-linearity, our system unites the effectiveness of the S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in our DRPE system. The randomness and scalability achieved by the applied technique are additional features of the proposed solution. The strength of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, and linear and differential approximation probabilities. S-Boxes convey non-linearity to cryptosystems, which is a significant parameter and very essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis, SVD analysis, etc. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.
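
    The random S-Box construction of the paper is not reproduced here, but the classical double random phase encoding step it builds on can be sketched in a few lines of NumPy; the image and the two phase keys below are synthetic placeholders.

```python
# Sketch of classical double random phase encoding (DRPE); the random-S-Box stage of the
# cited cryptosystem is not reproduced. Image and phase keys are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)
img = rng.random((64, 64))                             # placeholder grayscale image in [0, 1]

phase1 = np.exp(2j * np.pi * rng.random(img.shape))    # input-plane random phase mask (key 1)
phase2 = np.exp(2j * np.pi * rng.random(img.shape))    # Fourier-plane random phase mask (key 2)

# Encryption: mask in the spatial domain, transform, mask in the Fourier domain, transform back.
cipher = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

# Decryption: undo the Fourier-plane mask, transform back, undo the input-plane mask.
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1)

print("max reconstruction error:", np.max(np.abs(recovered.real - img)))
```

    Reversing the two masks in the decryption line recovers the image to machine precision, which the printed error confirms; the S-Box stage of the paper adds the non-linearity that this linear two-mask scheme lacks.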

  13. Laser backscattered from partially convex targets of large sizes in random media for E-wave polarization.

    PubMed

    El-Ocla, Hosam

    2006-08-01

    The characteristics of the radar cross section (RCS) of partially convex targets with large sizes, up to five wavelengths, in free space and random media are studied. The nature of the incident wave is an important factor in remote sensing and radar detection applications. I investigate the effects of beam wave incidence on the behavior of the RCS, drawing on the method I used in a previous study on plane-wave incidence. A beam wave can be considered a plane wave if the target size is smaller than the beam width. Therefore, to have a beam wave with a limited spot on the target, the target size should be larger than the beam width (assuming E-wave polarization of the incident wave). The effects of the target configuration, the random medium parameters, and the beam width on the laser RCS and on the enhancement of the radar cross section are numerically analyzed, showing the possibility of exerting some control over radar detection through beam wave incidence.

  14. Application of a computerized vibroacoustic data bank for random vibration criteria development

    NASA Technical Reports Server (NTRS)

    Ferebee, R. C.

    1982-01-01

    A computerized data bank system was developed for utilization of large amounts of vibration and acoustic data to formulate component random vibration design and test criteria. This system consists of a computer, graphics tablets, and a dry silver hard copier which are all desk top type hardware and occupy minimal space. Currently, the data bank contains data from the Saturn 5 and Titan 3 flight and static test programs. The vibration and acoustic data are stored in the form of power spectral density and one third octave band plots over the frequency range from 20 to 2000 Hz. The data were stored by digitizing each spectral plot by tracing with the graphics tablet. The digitized data were statistically analyzed, and the resulting 97.5 percent confidence levels were stored on tape along with the appropriate structural parameters. Standard extrapolation procedures were programmed for prediction of component random vibration test criteria for new launch vehicle and payload configurations. A user's manual is included to guide potential users through the programs.

  15. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  16. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  17. A Note on Parameters of Random Substitutions by γ-Diagonal Matrices

    NASA Astrophysics Data System (ADS)

    Kang, Ju-Sung

    Random substitution is a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and three parameters used in random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present some simulations concerning the theoretical result.
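
    The exact error analysis of the note is not reproduced, but the way the three parameters enter can be illustrated with a small sketch: records are perturbed through a γ-diagonal transition matrix of size N, and the original category frequencies of the n records are recovered by inverting that matrix; all distributions and values below are invented.

```python
# Illustrative random substitution with a gamma-diagonal transition matrix and a
# naive inversion-based reconstruction of category frequencies (placeholder values).
import numpy as np

rng = np.random.default_rng(7)
N, n, gamma = 5, 20000, 0.6          # categories, data records, privacy assurance metric

# Transition matrix: keep the true category with probability gamma, otherwise substitute uniformly.
P = np.full((N, N), (1.0 - gamma) / (N - 1))
np.fill_diagonal(P, gamma)

true = rng.choice(N, size=n, p=[0.4, 0.25, 0.15, 0.15, 0.05])   # original records
perturbed = np.array([rng.choice(N, p=P[x]) for x in true])     # randomized release

obs_freq = np.bincount(perturbed, minlength=N) / n
est_freq = np.linalg.solve(P.T, obs_freq)        # invert the substitution to estimate frequencies
true_freq = np.bincount(true, minlength=N) / n

print("true     :", np.round(true_freq, 3))
print("estimated:", np.round(est_freq, 3))
```

    The estimation error of this reconstruction shrinks as n grows and as γ approaches 1, and it worsens as N grows, which is the qualitative trade-off the note quantifies exactly.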

  18. Modelling the minislump spread of superplasticized PPC paste using RLS with the application of Random Kitchen sink

    NASA Astrophysics Data System (ADS)

    Sathyan, Dhanya; Anand, K. B.; Jose, Chinnu; Aravind, N. R.

    2018-02-01

    Superplasticizers (SPs) are added to concrete to improve its workability without changing the water-cement ratio. The properties of fresh concrete are mainly governed by the cement paste, which in turn depends on the dispersion of the cement particles. The cement-dispersing ability of an SP depends on its dosage and family. The mini-slump spread diameter obtained with different dosages and families of SP is taken as the measure of the workability of the cement paste and is used to characterize its rheological properties. The main purposes of this study are to measure the dispersive ability of different families of SP by conducting mini-slump tests and to model the mini-slump spread diameter of superplasticized Portland Pozzolana Cement (PPC) paste using a regularized least squares (RLS) approach together with the Random Kitchen Sink (RKS) algorithm. To prepare the test and training data for the model, 287 different mixes were prepared in the laboratory at a water-cement ratio of 0.37 using four locally available brands of PPC and SPs belonging to four different families. The water content, cement weight, and amount of SP (treated as seven separate inputs based on family and brand) were the input parameters, and the mini-slump spread diameter was the output parameter of the model. The predicted and measured spread diameters were compared and validated. From this study it was observed that the model could effectively predict the mini-slump spread of cement paste.
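
    A minimal sketch of the Random Kitchen Sink plus regularized least squares pipeline on synthetic data follows; the input dimension, number of random features, bandwidth, and ridge parameter are placeholders and do not correspond to the mix variables or settings of the study.

```python
# Sketch of Random Kitchen Sink (random Fourier features) + regularized least squares (RLS)
# on synthetic data; dimensions and values are placeholders, not the paste-mix inputs.
import numpy as np

rng = np.random.default_rng(3)
n, d, D, lam, sigma = 500, 9, 300, 1e-2, 1.0     # samples, inputs, random features, ridge, bandwidth

X = rng.random((n, d))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n)   # toy target

# Random Kitchen Sink feature map approximating an RBF kernel.
W = rng.normal(scale=1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Z = features(X[:400])                              # training block
# RLS: solve (Z^T Z + lam*I) w = Z^T y in closed form.
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y[:400])

pred = features(X[400:]) @ w
rmse = np.sqrt(np.mean((pred - y[400:]) ** 2))
print(f"hold-out RMSE: {rmse:.4f}")
```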

  19. Optimization of Ferroelectric Ceramics by Design at the Microstructure Level

    NASA Astrophysics Data System (ADS)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-05-01

    Ferroelectric materials show remarkable physical behaviors that make them essential for many devices and have been extensively studied for applications such as nonvolatile random access memory (NvRAM) and high-speed random access memories. Although ferroelectric ceramics (polycrystals) are easy to manufacture and to modify compositionally and represent the widest application area of these materials, computational and theoretical studies are sparse for many reasons, including the large number of constituent atoms. Macroscopic properties of ferroelectric polycrystals are dominated by inhomogeneities at the crystallographic domain/grain level. The orientation of grains/domains is critical to the electromechanical response of both single-crystalline and polycrystalline materials. Polycrystalline materials have the potential to exhibit better performance at the macroscopic scale through design of the domain/grain configuration at the domain-size scale. This suggests that piezoelectric properties can be optimized by a proper choice of the parameters that control the distribution of grain orientations. Nevertheless, this choice is complicated, and it is impossible to analyze all possible combinations of the distribution parameters or the angles themselves. Hence we have implemented the stochastic optimization technique of simulated annealing combined with homogenization for the optimization problem. The mathematical homogenization theory of a piezoelectric medium is implemented in the finite element method (FEM) by solving the coupled equilibrium electrical and mechanical fields. This implementation enables the study of the dependence of the macroscopic electromechanical properties of typical crystalline and polycrystalline ferroelectric ceramics on the grain orientation.
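
    The actual objective in this work comes from FEM homogenization of the coupled piezoelectric equations, which is too heavy to reproduce here; the sketch below therefore only illustrates the simulated annealing loop over grain orientation angles against a toy surrogate objective, so the objective function and every numerical value are placeholders.

```python
# Simulated annealing over grain orientation angles with a stand-in objective function.
# The true objective in the cited work (homogenized piezoelectric coefficients from FEM)
# is replaced by a toy surrogate; every parameter value is a placeholder.
import numpy as np

rng = np.random.default_rng(11)
n_grains = 32

def objective(angles):
    # Toy surrogate: reward alignment of grain c-axes with the poling direction (angle = 0).
    return np.mean(np.cos(angles) ** 2)

angles = rng.uniform(0.0, np.pi, size=n_grains)        # initial random texture
best = angles.copy()
T, cooling = 1.0, 0.995

for step in range(20000):
    cand = angles.copy()
    i = rng.integers(n_grains)
    cand[i] = (cand[i] + rng.normal(scale=0.3)) % np.pi   # perturb one orientation
    delta = objective(cand) - objective(angles)
    if delta > 0 or rng.random() < np.exp(delta / T):     # Metropolis acceptance rule
        angles = cand
        if objective(angles) > objective(best):
            best = angles.copy()
    T *= cooling                                          # geometric cooling schedule

print(f"surrogate objective of best texture: {objective(best):.4f}")
```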

  20. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
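
    A compact numerical sketch of the 'closed form' multivariate TLS solution via the singular-value decomposition follows, applied to synthetic matrices rather than the cadastral coordinate sets of the case study; the matrix sizes and noise level are illustrative.

```python
# Multivariate total least squares via SVD on synthetic data (sizes are illustrative).
# Model: Y - E_Y = (X - E_X) @ Xi, with random errors in both X and Y.
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 200, 3, 2                            # observations, columns of X, columns of Y

Xi_true = rng.normal(size=(p, q))
X_clean = rng.normal(size=(n, p))
Y_clean = X_clean @ Xi_true
X = X_clean + 0.02 * rng.normal(size=(n, p))   # both observation matrices are perturbed
Y = Y_clean + 0.02 * rng.normal(size=(n, q))

# Closed-form TLS: SVD of the augmented matrix [X Y]; the right singular vectors
# belonging to the q smallest singular values give Xi = -V12 @ inv(V22).
_, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
V = Vt.T
V12, V22 = V[:p, p:], V[p:, p:]
Xi_tls = -V12 @ np.linalg.inv(V22)

print("max |Xi_tls - Xi_true| =", np.max(np.abs(Xi_tls - Xi_true)))
```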

  1. Deducing Electronic Unit Internal Response During a Vibration Test Using a Lumped Parameter Modeling Approach

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2014-01-01

    During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles Equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of the PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement to deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal test vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
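
    As a hedged illustration of the lumped-parameter idea (not the author's actual tool), the sketch below builds a two-mass chassis/board chain, drives the base with a flat acceleration PSD over the 20-2000 Hz band, and integrates the board response PSD to obtain its RMS acceleration; all masses, frequencies, damping ratios, and the input level are invented.

```python
# Two-DOF chassis/board lumped-parameter model under base excitation with a flat
# acceleration PSD; all numerical values are invented for illustration only.
import numpy as np

m1, m2 = 5.0, 0.2            # chassis and board masses [kg]
f1, f2 = 120.0, 400.0        # uncoupled natural frequencies [Hz]
z1, z2 = 0.05, 0.03          # damping ratios
k1 = m1 * (2 * np.pi * f1) ** 2
k2 = m2 * (2 * np.pi * f2) ** 2
c1 = 2 * z1 * np.sqrt(k1 * m1)
c2 = 2 * z2 * np.sqrt(k2 * m2)

M = np.array([[m1, 0.0], [0.0, m2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])

freqs = np.linspace(20.0, 2000.0, 4000)           # test band [Hz]
Sbase = 0.04 * np.ones_like(freqs)                # base acceleration PSD [g^2/Hz]

board_psd = np.empty_like(freqs)
for i, f in enumerate(freqs):
    w = 2 * np.pi * f
    # Relative coordinates z = x - x_base obey  M z'' + C z' + K z = -M*1*a_base,
    # so H = z/a_base and the absolute board acceleration transmissibility is |1 - w^2 H_board|.
    H = np.linalg.solve(K - w ** 2 * M + 1j * w * C, -M @ np.ones(2))
    T_board = abs(1.0 - w ** 2 * H[1])
    board_psd[i] = T_board ** 2 * Sbase[i]

df = freqs[1] - freqs[0]
grms_base = np.sqrt(np.sum(Sbase) * df)
grms_board = np.sqrt(np.sum(board_psd) * df)
print(f"base input: {grms_base:.2f} Grms   predicted board response: {grms_board:.2f} Grms")
```

    Comparing the board Grms from such a coupled model with a single-DOF Miles-type estimate is one way to quantify the over- or under-prediction discussed above.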

  2. Efficacy of different therapy regimes of low-power laser in painful osteoarthritis of the knee: a double-blind and randomized-controlled trial.

    PubMed

    Gur, Ali; Cosut, Abdulkadir; Sarac, Aysegul Jale; Cevik, Remzi; Nas, Kemal; Uyar, Asur

    2003-01-01

    A prospective, double-blind, randomized, and controlled trial was conducted in patients with knee osteoarthritis (OA) to evaluate the efficacy of infrared low-power gallium-arsenide (Ga-As) laser therapy (LPLT) and to compare two different laser therapy regimes. Ninety patients were randomly assigned to three treatment groups by one of the nontreating authors by drawing 1 of 90 envelopes labeled 'A' (Group I: actual LPLT consisting of 5 minutes, 3 J total dose + exercise; 30 patients), 'B' (Group II: actual LPLT consisting of 3 minutes, 2 J total dose + exercise; 30 patients), or 'C' (Group III: placebo laser + exercise; 30 patients). All patients received a total of 10 treatments, and the exercise therapy program was continued during the study (14 weeks). Subjects, physician, and data analysts were unaware of the code for active or placebo laser until the data analysis was complete. All patients were evaluated with respect to pain, degree of active knee flexion, duration of morning stiffness, painless walking distance and duration, and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) at weeks 0, 6, 10, and 14. Statistically significant improvements were seen in all parameters, such as pain, function, and quality of life (QoL) measures, in the post-therapy period compared with pre-therapy in both active laser groups (P < 0.01). Improvements in all parameters in Group I, and in parameters such as pain and WOMAC in Group II, were statistically significant when compared with the placebo laser group (P < 0.05). Our study demonstrated that applying LPLT at different doses and durations did not affect the results and that both therapy regimes were safe and effective in the treatment of knee OA. Copyright 2003 Wiley-Liss, Inc.

  3. Capacity-optimized mp2 audio watermarking

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Dittmann, Jana

    2003-06-01

    Today a number of audio watermarking algorithms have been proposed, some of them of a quality that makes them suitable for commercial applications. The focus of most of these algorithms is copyright protection. Therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified that stress other parameters, like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group contains either more even or more uneven scale factors. A group with more uneven scale factors carries a 1, a group with more even scale factors a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing the perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
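
    The sketch below is a simplified stand-in for the embedding rule described above: it operates on a plain array of integer "scale factors" rather than on a real mp2 bitstream, and the grouping key and group size are placeholders.

```python
# Simplified parity-majority embedding on an array of integer "scale factors";
# this is not an mp2 bitstream parser, and key/group size are placeholder values.
import numpy as np

def embed(scale_factors, bits, key=1234, group_size=8):
    sf = np.array(scale_factors, dtype=int)
    order = np.random.default_rng(key).permutation(len(sf))   # pseudo-random grouping pattern
    for g, bit in enumerate(bits):
        idx = order[g * group_size:(g + 1) * group_size]
        want_odd = bit == 1
        while True:
            n_odd = int(np.sum(sf[idx] % 2))
            if (n_odd > group_size - n_odd) == want_odd:       # desired majority reached
                break
            # add 1 to one member whose parity works against the desired majority
            wrong = idx[(sf[idx] % 2 == 1) != want_odd][0]
            sf[wrong] += 1
    return sf

def extract(scale_factors, n_bits, key=1234, group_size=8):
    sf = np.asarray(scale_factors)
    order = np.random.default_rng(key).permutation(len(sf))
    bits = []
    for g in range(n_bits):
        idx = order[g * group_size:(g + 1) * group_size]
        n_odd = int(np.sum(sf[idx] % 2))
        bits.append(1 if n_odd > group_size - n_odd else 0)    # same majority rule as embedding
    return bits

rng = np.random.default_rng(0)
sf = rng.integers(0, 60, size=160)           # stand-in scale factors
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(sf, payload)
print("recovered:", extract(marked, len(payload)))
```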

  4. Intensity moments by path integral techniques for wave propagation through random media, with application to sound in the ocean

    NASA Technical Reports Server (NTRS)

    Bernstein, D. R.; Dashen, R.; Flatte, S. M.

    1983-01-01

    A theory is developed which describes intensity moments for wave propagation through random media. It is shown using the path integral technique that these moments are significantly different from those of a Rayleigh distribution in certain asymptotic regions. The path integral approach is extended to inhomogeneous, anisotropic media possessing a strong deterministic velocity profile. The behavior of the corrections to Rayleigh statistics is examined, and it is shown that the important characteristics can be attributed to a local micropath focusing function. The correction factor gamma is a micropath focusing parameter defined in terms of medium fluctuations. The value of gamma is calculated for three ocean acoustic experiments, using internal waves as the medium fluctuations. It is found that all three experiments show excellent agreement as to the relative values of the intensity moments. The full curved ray is found to yield results that are significantly different from the straight-line approximations. It is noted that these methods are applicable to a variety of experimental situations, including atmospheric optics and radio waves through plasmas.

  5. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

    Therapeutic trials of disease-modifying agents on Alzheimer's disease (AD) require novel designs and analyses involving switch of treatments for at least a portion of subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and time of treatment switch as well as optimum statistical tests of treatment efficacy for clinical trials of disease-modifying agents on AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents on AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for these in the delayed treatment arm or early withdrawal arm and incorporates the potential correlation on the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch of such trials as well as optimum test statistic for the treatment efficacy are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is half way through the trial. With the optimum test statistic for the treatment efficacy and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed treatment or early withdrawal arm, and the placebo arm. The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of treatment switch is on the rate of cognitive change. Crucial design parameters for the clinical trials of disease-modifying agents on AD can be optimally chosen. Government and industry officials as well as academia researchers should consider the optimum use of the clinical trials design for disease-modifying agents on AD in their effort to search for the treatments with the potential to modify the underlying pathophysiology of AD.

  6. Clustering of time-course gene expression profiles using normal mixture models with autoregressive random effects

    PubMed Central

    2012-01-01

    Background Time-course gene expression data such as yeast cell cycle data may be periodically expressed. To cluster such data, currently used Fourier series approximations of periodic gene expressions have been found not to be sufficiently adequate to model the complexity of the time-course data, partly due to their ignoring the dependence between the expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of available models in the literature and propose a new mixture model with autoregressive random effects of the first order for the clustering of time-course gene-expression profiles. Some simulations and real examples are given to demonstrate the usefulness of the proposed models. Results We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models to provide more reliable and robust clustering of time-course data. Our model provides superior results when genetic profiles are correlated. It also gives comparable results when the correlation between the gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. Conclusions Our new model under our extension of the EMMIX-WIRE procedure is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for the correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters that enables improved modelling and consequent clustering of time-course data. PMID:23151154

  7. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R′G′B′ color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled color space are converted by the RPMPFRFT with three different transform pairs, respectively. In contrast to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of the sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT transform are regarded as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming them into the three RGB components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
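
    As a small illustration of how a chaotic logistic map is commonly turned into a permutation key for scrambling image sections (the paper couples two maps; only one is shown), the sketch below ranks a chaotic sequence to obtain a permutation and its inverse; the map parameter, seed, and block count are placeholders.

```python
# Generating a scrambling permutation from a chaotic logistic map (single map shown;
# the cited scheme couples two maps). Map parameter, seed, and block count are placeholders.
import numpy as np

def logistic_permutation(n_blocks, x0=0.3141, r=3.99, burn_in=200):
    x = x0
    for _ in range(burn_in):            # discard transient iterations
        x = r * x * (1.0 - x)
    samples = np.empty(n_blocks)
    for i in range(n_blocks):
        x = r * x * (1.0 - x)
        samples[i] = x
    return np.argsort(samples)          # ranking the chaotic sequence yields a permutation

perm = logistic_permutation(16)
inverse = np.argsort(perm)              # inverse permutation needed for decryption

blocks = np.arange(16)                  # stand-in for image sections
scrambled = blocks[perm]
restored = scrambled[inverse]
print("permutation:", perm)
print("restored equals original:", np.array_equal(restored, blocks))
```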

  8. Nonlinear consolidation in randomly heterogeneous highly compressible aquitards

    NASA Astrophysics Data System (ADS)

    Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.

    2018-05-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity ( K), compression index ( C c), void ratio ( e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.

  9. Effect of multiplicative noise on stationary stochastic process

    NASA Astrophysics Data System (ADS)

    Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.

    2018-03-01

    An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.

  10. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand on the possible inconsistencies in the covariance data, concentrating on the positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand on optimization of the implementation of the methods itself. The methods have been applied in the program ENDSAM, written in the Fortran language, which from a file from a nuclear data library of a chosen isotope in ENDF-6 format produces an arbitrary number of new files in ENDF-6 format which contain values of random samples of resonance parameters (in accordance with corresponding covariance matrices) in places of original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: reads resonance parameters and their covariance data from nuclear data library, checks whether the covariance data is consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters, however the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
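
    A generic sketch of the two steps the abstract emphasizes follows: checking a covariance matrix for positive semi-definiteness (with a simple eigenvalue-clipping repair) and drawing correlated samples around nominal values; the matrix and parameter values are invented, and the lognormal step is only one possible way to keep inherently positive parameters positive.

```python
# Generic correlated sampling of resonance-like parameters: check the covariance matrix for
# positive semi-definiteness, repair it if needed, and draw correlated samples.
# The covariance values are invented; the lognormal step is one choice for positive parameters.
import numpy as np

mean = np.array([1.5e-3, 4.2e-2, 6.0e-1])         # nominal parameter values (placeholders)
corr = np.array([[1.00, 0.60, -0.20],
                 [0.60, 1.00,  0.10],
                 [-0.20, 0.10, 1.00]])
rel_std = np.array([0.05, 0.10, 0.02])            # relative standard deviations
cov = corr * np.outer(rel_std, rel_std)           # covariance of relative deviations

# Consistency check and repair: clip any (numerically) negative eigenvalues to zero.
w, V = np.linalg.eigh(cov)
if (w < -1e-12).any():
    print("warning: covariance not positive semi-definite, clipping negative eigenvalues")
w = np.clip(w, 0.0, None)
L = V @ np.diag(np.sqrt(w))                       # factor such that L @ L.T equals repaired cov

rng = np.random.default_rng(2024)
z = rng.standard_normal((10000, len(mean)))
rel = z @ L.T                                      # correlated relative deviations
samples = mean * np.exp(rel - 0.5 * np.diag(cov))  # lognormal keeps the parameters positive

print("sample means :", samples.mean(axis=0))
print("sample corr  :\n", np.round(np.corrcoef(samples, rowvar=False), 2))
```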

  11. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties. These can be measured by field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. On the contrary, the variability is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. The preceding spatial variations shall be managed by engineering models to accomplish reliable designing of structures and infrastructures. Matheron (1962) introduced geostatistics as the most comprehensive tool to manage spatial correlation of parameter measures used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) developed the first pioneering attempts to describe and manage the inherent variability in geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed the Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through the spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Afterwards, further studies collected worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large spatial variability of soils and rocks measured worldwide, together with the other uncertainties introduced by testing devices and engineering models, heavily affects the reliability of geotechnical design. So far, several methods have been provided to deal with the preceding sources of uncertainty in engineering design models (e.g. First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, the efforts in this field have been focusing on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take into account the spatial variability as an additional physical variable. References Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. 
Seepage beneath water retaining structures founded on spatially random soil, Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W H Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.

  12. Nondestructive, fast, and cost-effective image processing method for roughness measurement of randomly rough metallic surfaces.

    PubMed

    Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen

    2018-06-01

    In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters such as image resolution and filtering approach for elimination of the long wavelength surface undulations on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both the stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution+cutoff. In these conditions, the best and worst correlation coefficients (R 2 ) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that the image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to the stylus profilometry, particularly in online applications.
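
    The image-acquisition and calibration pipeline of the paper is not reproduced, but the final roughness-parameter step can be sketched: a synthetic profile is split into waviness and roughness with a Gaussian convolution filter and the common amplitude parameters Ra and Rq are computed; the profile, sampling step, cutoff, and the particular Gaussian-width convention are assumptions.

```python
# Computing Ra and Rq from a synthetic profile after Gaussian detrending (waviness removal).
# Profile, sampling step, cutoff, and Gaussian-width convention are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
dx = 1.0e-3                                   # sampling step [mm]
x = np.arange(0.0, 8.0, dx)                   # 8 mm evaluation length
profile = 0.5 * np.sin(2 * np.pi * x / 4.0) + 0.05 * rng.standard_normal(x.size)  # waviness + roughness [um]

cutoff = 0.8                                  # waviness cutoff length [mm]
sigma = cutoff / (2.0 * np.pi) * np.sqrt(2.0 * np.log(2.0)) / dx   # one common convention (assumed)
t = np.arange(-4 * sigma, 4 * sigma + 1)
kernel = np.exp(-0.5 * (t / sigma) ** 2)
kernel /= kernel.sum()

waviness = np.convolve(profile, kernel, mode="same")   # long-wavelength mean line
roughness = profile - waviness                         # short-wavelength residual

Ra = np.mean(np.abs(roughness))               # arithmetic mean deviation
Rq = np.sqrt(np.mean(roughness ** 2))         # root-mean-square deviation
print(f"Ra = {Ra:.4f} um   Rq = {Rq:.4f} um")
```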

  13. Application of quadratic optimization to supersonic inlet control

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1971-01-01

    The application of linear stochastic optimal control theory to the design of the control system for the air intake (inlet) of a supersonic air-breathing propulsion system is discussed. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant control systems are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain the best linear controller that minimizes the nonquadratic performance index. The two systems are compared on the basis of unstart prevention, control effort requirements, and sensitivity to parameter variations.

  14. White Gaussian Noise - Models for Engineers

    NASA Astrophysics Data System (ADS)

    Jondral, Friedrich K.

    2018-04-01

    This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i.e., the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectral densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio, the most important quality measure used in communications, control, and measurement technology, is then carefully introduced. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels, together with the optimum maximum likelihood (ML) detector, is considered in a demonstrative and intuitive way.
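
    A compact illustration of the transmission experiment described above follows: 4-QAM symbols are sent over an additive WGN channel and detected with the minimum-distance rule, which is the ML detector for equiprobable symbols in WGN; the constellation size, SNR value, and symbol count are placeholders.

```python
# 4-QAM over an additive white Gaussian noise channel with minimum-distance (ML) detection.
# Constellation size, SNR value, and symbol count are placeholder choices.
import numpy as np

rng = np.random.default_rng(17)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2.0)  # unit-energy 4-QAM

n_symbols = 100_000
tx_idx = rng.integers(len(constellation), size=n_symbols)
tx = constellation[tx_idx]

snr_db = 7.0
noise_var = 10.0 ** (-snr_db / 10.0)               # Es/N0 with unit symbol energy
noise = np.sqrt(noise_var / 2.0) * (rng.standard_normal(n_symbols)
                                    + 1j * rng.standard_normal(n_symbols))
rx = tx + noise                                    # AWGN channel

# ML detection for equiprobable symbols in WGN reduces to nearest-constellation-point search.
dist = np.abs(rx[:, None] - constellation[None, :])
rx_idx = np.argmin(dist, axis=1)

ser = np.mean(rx_idx != tx_idx)
print(f"symbol error rate at Es/N0 = {snr_db} dB: {ser:.4f}")
```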

  15. Detection of mitotic nuclei in breast histopathology images using localized ACM and Random Kitchen Sink based classifier.

    PubMed

    Beevi, K Sabeena; Nair, Madhu S; Bindu, G R

    2016-08-01

    An exact measure of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving mitotic detection accuracy through careful design of segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images is carried out by a Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle the diffused intensities present along object boundaries. Further, the application of Random Kitchen Sink (RKS), a new optimal machine learning algorithm capable of classifying strongly non-linear data, shows improved classification performance. The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for the MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision, and a 96% F-score.

  16. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among the different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been studied less. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations unless a local calibration is used to improve their performance, which can be a crucial drawback where local data for calibration are scarce. Application of heuristic methods can therefore be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually show higher variation magnitudes than the other meteorological parameters, coupling a wavelet transform with the heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.

  17. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    PubMed

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
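
    The core of the sensitivity analysis, multiplying imputed values by a factor k and re-estimating the treatment effect, can be sketched as follows. For brevity, a single mean imputation stands in for multilevel multiple imputation, and the data, dropout rate, and k grid are hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)

      # toy example: continuous outcome, two arms, some values missing (hypothetical data)
      n = 200
      arm = rng.integers(0, 2, n)                       # 0 = control, 1 = treatment
      y = 1.0 + 0.5 * arm + rng.normal(0, 1, n)         # true treatment effect = 0.5
      missing = rng.random(n) < 0.25                    # 25% dropout

      # stand-in for multilevel multiple imputation: impute from observed arm means
      y_imp = y.copy()
      for a in (0, 1):
          m = arm == a
          y_imp[m & missing] = y[m & ~missing].mean()

      # sensitivity analysis: scale imputed values by k and re-estimate the effect
      for k in (0.8, 0.9, 1.0, 1.1, 1.2):
          y_k = y_imp.copy()
          y_k[missing] *= k
          effect = y_k[arm == 1].mean() - y_k[arm == 0].mean()
          print(f"k = {k:.1f}: estimated treatment effect = {effect:.3f}")

    Increasing k until the inference about the treatment effect changes gives the tipping point discussed in the abstract.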

  18. A New Method of Random Environmental Walking for Assessing Behavioral Preferences for Different Lighting Applications

    PubMed Central

    Patching, Geoffrey R.; Rahm, Johan; Jansson, Märit; Johansson, Maria

    2017-01-01

    Accurate assessment of people’s preferences for different outdoor lighting applications is increasingly considered important in the development of new urban environments. Here a new method of random environmental walking is proposed to complement current methods of assessing urban lighting applications, such as self-report questionnaires. The procedure involves participants repeatedly walking between different lighting applications by random selection of a lighting application and preferred choice or by random selection of a lighting application alone. In this manner, participants are exposed to all lighting applications of interest more than once and participants’ preferences for the different lighting applications are reflected in the number of times they walk to each lighting application. On the basis of an initial simulation study, to explore the feasibility of this approach, a comprehensive field test was undertaken. The field test included random environmental walking and collection of participants’ subjective ratings of perceived pleasantness (PP), perceived quality, perceived strength, and perceived flicker of four lighting applications. The results indicate that random environmental walking can reveal participants’ preferences for different lighting applications that, in the present study, conformed to participants’ ratings of PP and perceived quality of the lighting applications. As a complement to subjectively stated environmental preferences, random environmental walking has the potential to expose behavioral preferences for different lighting applications. PMID:28337163

  19. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  20. Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes

    NASA Astrophysics Data System (ADS)

    Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.

    2017-11-01

    Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
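
    The random-error propagation described here can be illustrated with a Monte Carlo sketch: perturb pH and DIC by their assumed random errors and recompute pCO2 from a simplified freshwater carbonate speciation. The equilibrium constants are rough values near 25 degC and the sample values are hypothetical, so the numbers only indicate the approach, not the study's results.

      import numpy as np

      rng = np.random.default_rng(4)

      def pco2_from_ph_dic(ph, dic_umol_per_l, k1=10**-6.35, k2=10**-10.33, k0=10**-1.47):
          """Approximate freshwater pCO2 (uatm) from pH and DIC; the equilibrium constants
          are illustrative values near 25 degC, not a full temperature/ionic-strength model."""
          h = 10.0 ** (-ph)
          dic = dic_umol_per_l * 1e-6                      # mol/L
          co2_aq = dic * h**2 / (h**2 + k1 * h + k1 * k2)  # carbonate speciation
          return co2_aq / k0 * 1e6                         # atm -> uatm

      # nominal observation and assumed random (measurement) errors
      ph0, dic0 = 7.8, 800.0              # hypothetical lake sample
      sd_ph, sd_dic = 0.02, 0.01 * dic0   # ~2% relative errors, per the abstract's magnitudes

      # Monte Carlo propagation of the random input errors onto calculated pCO2
      n = 20_000
      ph = rng.normal(ph0, sd_ph, n)
      dic = rng.normal(dic0, sd_dic, n)
      pco2 = pco2_from_ph_dic(ph, dic)

      print(f"pCO2 = {pco2.mean():.0f} uatm, random error = +/-{pco2.std():.0f} uatm "
            f"({100 * pco2.std() / pco2.mean():.1f}% of the mean)")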

  1. Smartwatch feedback device for high-quality chest compressions by a single rescuer during infant cardiac arrest: a randomized, controlled simulation study.

    PubMed

    Lee, Juncheol; Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon; Ahn, Chiwon; Shin, Hyungoo; Kang, Hyunggoo; Lim, Tae Ho

    2018-02-12

    According to the guidelines, rescuers should provide chest compressions (CC) ∼1.5 inches (40 mm) deep for infants. Feedback devices could help rescuers perform CC with adequate rates (CCR) and depths (CCD). However, there is no CC feedback device for infant cardiopulmonary resuscitation (CPR). We suggest a smartwatch-based CC feedback application for infant CPR. We created a smartwatch-based CC feedback application that provides feedback on CCD and CCR by colour and text for infant CPR. To evaluate the application, 30 participants were divided randomly into two groups on the basis of whether CC was performed with or without the assistance of the smartwatch application. Both groups performed continuous CC-only CPR for 2 min on an infant mannequin placed on a firm table. We collected CC parameters from the mannequin, including the proportion of correct depth, CCR, CCD and the proportion of correct decompression depth. Demographics between the two groups were not significantly different. The median (interquartile range) proportion of correct depth was 99 (97-100) with feedback compared with 83 (58-97) without feedback (P=0.002). The CCR and proportion of correct decompression depth were not significantly different between the two groups (P=0.482 and 0.089). The CCD of the feedback group was significantly deeper than that of the control group [feedback vs. control: 41.2 (39.8-41.7) mm vs. 38.6 (36.1-39.6) mm; P=0.004]. Rescuers who receive feedback of CC parameters from a smartwatch could perform adequate CC during infant CPR.

  2. Analysis of fluctuations in semiconductor devices

    NASA Astrophysics Data System (ADS)

    Andrei, Petru

    The random nature of ion implantation and diffusion processes as well as inevitable tolerances in fabrication result in random fluctuations of doping concentrations and oxide thickness in semiconductor devices. These fluctuations are especially pronounced in ultrasmall (nanoscale) semiconductor devices, where the spatial scale of doping and oxide thickness variations becomes comparable with the geometric dimensions of the devices. In the dissertation, the effects of these fluctuations on device characteristics are analyzed by using a new technique for the analysis of random doping and oxide thickness induced fluctuations. This technique is universal in nature in the sense that it is applicable to any transport model (drift-diffusion, semiclassical transport, quantum transport, etc.) and it can be naturally extended to take into account random fluctuations of the oxide (trapped) charges and channel length. The technique is based on linearization of the transport equations with respect to the fluctuating quantities. It is computationally much (a few orders of magnitude) more efficient than the traditional Monte-Carlo approach and it yields information on the sensitivity of fluctuations of parameters of interest (e.g. threshold voltage, small-signal parameters, cut-off frequencies, etc.) to the locations of doping and oxide thickness fluctuations. For this reason, it can be very instrumental in the design of fluctuation-resistant structures of semiconductor devices. Quantum mechanical effects are taken into account by using the density-gradient model as well as through self-consistent Poisson-Schrödinger computations. Special attention is paid to presenting the technique in a form that is suitable for implementation on commercial device simulators. The numerical implementation of the technique is discussed in detail and numerous computational results are presented and compared with those previously published in the literature.

  3. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  4. A Comparative Study on Ni-Based Coatings Prepared by HVAF, HVOF, and APS Methods for Corrosion Protection Applications

    NASA Astrophysics Data System (ADS)

    Sadeghimeresht, E.; Markocsan, N.; Nylén, P.

    2016-12-01

    Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure are highly dependent on it. In the present work, a process-microstructure-properties-performance correlation study was performed in order to determine the main characteristics and corrosion performance of coatings produced by different thermal spray techniques such as high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings, which were compared with HVAF-sprayed coatings deposited with randomly selected process parameters. As the HVAF process presented the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings in terms of low porosity and high corrosion resistance. The Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and a feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and a feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.

  5. Band Structure Engineering of Cs2AgBiBr6 Perovskite through Order-Disordered Transition: A First-Principle Study.

    PubMed

    Yang, Jingxiu; Zhang, Peng; Wei, Su-Huai

    2018-01-04

    Cs2AgBiBr6 was proposed as one of the inorganic, stable, and nontoxic replacements of the methylammonium lead halides (CH3NH3PbI3, which is currently considered as one of the most promising light-harvesting materials for solar cells). However, the wide indirect band gap of Cs2AgBiBr6 suggests that its application in photovoltaics is limited. Using the first-principle calculation, we show that by controlling the ordering parameter at the mixed sublattice, the band gap of Cs2AgBiBr6 can vary continuously from a wide indirect band gap of 1.93 eV for the fully ordered double-perovskite structure to a small pseudodirect band gap of 0.44 eV for the fully random alloy. Therefore, one can achieve better light absorption simply by controlling the growth temperature and thus the ordering parameters and band gaps. We also show that controlled doping in Cs2AgBiBr6 can change the energy difference between ordered and disordered Cs2AgBiBr6, thus providing further control of the ordering parameters and the band gaps. Our study, therefore, provides a novel approach to carry out band structure engineering in the mixed perovskites for optoelectronic applications.

  6. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
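
    One concrete instance of random probing is the Rademacher-vector estimate of the Hessian diagonal, which needs only Hessian-times-model products. In the sketch below an explicit symmetric matrix stands in for the matrix-free Hessian of a waveform-inversion problem; the paper's proxies (resolution lengths, point-spread volumes) are richer than this simple diagonal estimate.

      import numpy as np

      rng = np.random.default_rng(5)

      # stand-in "Hessian": in full-waveform inversion this would be a black-box
      # operator evaluated by adjoint simulations, not an explicit matrix.
      n = 200
      A = rng.normal(size=(n, n))
      H = A @ A.T / n + np.diag(np.linspace(1.0, 3.0, n))   # symmetric positive definite

      def hessian_apply(m):
          return H @ m                                      # placeholder for H * model

      # random probing: correlate H*x with the random test models x
      n_probes = 50
      diag_est = np.zeros(n)
      for _ in range(n_probes):
          x = rng.choice([-1.0, 1.0], size=n)               # Rademacher test model
          diag_est += x * hessian_apply(x)                  # elementwise correlation
      diag_est /= n_probes

      print("max relative error of estimated Hessian diagonal:",
            np.max(np.abs(diag_est - np.diag(H)) / np.diag(H)))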

  7. The impact of higher-order ionospheric effects on estimated tropospheric parameters in Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Zus, F.; Deng, Z.; Wickert, J.

    2017-08-01

    The impact of higher-order ionospheric effects on the estimated station coordinates and clocks in Global Navigation Satellite System (GNSS) Precise Point Positioning (PPP) is well documented in literature. Simulation studies reveal that higher-order ionospheric effects have a significant impact on the estimated tropospheric parameters as well. In particular, the tropospheric north-gradient component is most affected for low-latitude and midlatitude stations around noon. In a practical example we select a few hundred stations randomly distributed over the globe, in March 2012 (medium solar activity), and apply/do not apply ionospheric corrections in PPP. We compare the two sets of tropospheric parameters (ionospheric corrections applied/not applied) and find an overall good agreement with the prediction from the simulation study. The comparison of the tropospheric parameters with the tropospheric parameters derived from the ERA-Interim global atmospheric reanalysis shows that ionospheric corrections must be consistently applied in PPP and the orbit and clock generation. The inconsistent application results in an artificial station displacement which is accompanied by an artificial "tilting" of the troposphere. This finding is relevant in particular for those who consider advanced GNSS tropospheric products for meteorological studies.

  8. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
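
    The covariance-function machinery behind such a random regression model can be sketched directly: evaluate Legendre polynomials on standardized days in milk (DIM) and map a coefficient (co)variance matrix onto variances and correlations along the lactation trajectory. The coefficient matrix below is hypothetical, chosen only to be positive definite.

      import numpy as np
      from numpy.polynomial import legendre

      # days in milk standardized to [-1, 1], as required by Legendre polynomials
      dim = np.arange(5, 306)
      t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

      order = 3                                       # third-order fit (4 coefficients)
      # design matrix Phi[i, j] = P_j(t_i)
      Phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[j]) for j in range(order + 1)])

      # hypothetical (co)variance matrix of the additive genetic regression coefficients
      K = np.array([[ 3.2, 0.5, -0.2, 0.1],
                    [ 0.5, 1.1,  0.2, 0.0],
                    [-0.2, 0.2,  0.6, 0.1],
                    [ 0.1, 0.0,  0.1, 0.3]])

      # covariance function over the trajectory: G(t1, t2) = phi(t1)' K phi(t2)
      G = Phi @ K @ Phi.T
      var = np.diag(G)                                # genetic variance by day in milk
      corr = G / np.sqrt(np.outer(var, var))          # genetic correlations between test days

      print("genetic variance at DIM 5, 155, 305:", var[[0, 150, 300]].round(2))
      print("genetic correlation DIM 5 vs 305:", corr[0, 300].round(2))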

  9. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.

  10. Rheology and fluid mechanics of a hyper-concentrated biomass suspension

    NASA Astrophysics Data System (ADS)

    Botto, Lorenzo; Xu, Xiao

    2013-11-01

    The production of bioethanol from biomass material originating from energy crops requires mixing of highly concentrated suspensions, which are composed of millimetre-sized lignocellulosic fibers. In these applications, the solid concentration is typically extremely high. Owing to the large particle porosity, for a solid mass concentration slightly larger than 10%, the dispersed solid phase can fill the available space almost completely. To extract input parameters for simulations, we have carried out rheological measurements of a lignocellulosic suspension of Miscanthus, a fast-growing plant, for particle concentrations close to maximum random packing. We find that in this regime the rheometric curves exhibit features similar to those observed in model ``gravitational suspensions,'' including viscoplastic behaviour, strong shear-banding, non-continuum effects, and a marked influence of the particle weight. In the talk, these aspects will be examined in some detail, and differences between Miscanthus and corn stover, currently the most industrially relevant biomass substrate, briefly discussed. We will also comment on values of the Reynolds and Oldroyd numbers found in biofuel applications, and the flow patterns expected for these parameter values.

  11. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    PubMed

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
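
    A simple moment-based version of such a single alignment parameter can be computed from the centered FFT magnitude via its second-moment (inertia) tensor; this is one plausible, fitting-free construction, not necessarily the exact definition of R used by the authors.

      import numpy as np

      rng = np.random.default_rng(6)

      def alignment_anisotropy(img):
          """Moment-based anisotropy of the centered FFT magnitude:
          ~0 for isotropic (randomized fibers), ~1 for perfect alignment."""
          F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
          ny, nx = F.shape
          y, x = np.mgrid[0:ny, 0:nx]
          y = y - ny / 2.0
          x = x - nx / 2.0
          w = F / F.sum()
          # second-moment (inertia) tensor of the spectral energy
          mxx = np.sum(w * x * x)
          myy = np.sum(w * y * y)
          mxy = np.sum(w * x * y)
          evals = np.linalg.eigvalsh(np.array([[mxx, mxy], [mxy, myy]]))
          return (evals[1] - evals[0]) / (evals[1] + evals[0])

      # synthetic test images: aligned stripes vs. random noise
      yy, xx = np.mgrid[0:128, 0:128]
      aligned = np.sin(2 * np.pi * xx / 8.0)            # "fibers" along the y axis
      random_img = rng.normal(size=(128, 128))

      print("aligned  R =", round(alignment_anisotropy(aligned), 3))
      print("random   R =", round(alignment_anisotropy(random_img), 3))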

  12. Conservation of Shannon's redundancy for proteins. [information theory applied to amino acid sequences

    NASA Technical Reports Server (NTRS)

    Gatlin, L. L.

    1974-01-01

    Concepts of information theory are applied to examine various proteins in terms of their redundancy in natural organisms such as animals and plants. The Monte Carlo method is used to derive information parameters for random protein sequences. Real protein sequence parameters are compared with the standard parameters of protein sequences having a specific length. The tendency of a chain to contain some amino acids more frequently than others and the tendency of a chain to contain certain amino acid pairs more frequently than other pairs are used as randomness measures of individual protein sequences. Non-periodic proteins are generally found to have random Shannon redundancies except in cases of constraints due to short chain length and genetic codes. Redundant characteristics of highly periodic proteins are discussed. A degree of periodicity parameter is derived.
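
    The first-order part of this analysis reduces to computing the Shannon entropy of amino-acid frequencies and the corresponding redundancy, compared against Monte Carlo random sequences of the same length. The sequence below is a hypothetical stand-in for a real protein.

      import numpy as np

      rng = np.random.default_rng(7)
      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def shannon_entropy(seq):
          """First-order Shannon entropy (bits/residue) from amino-acid frequencies."""
          counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      def redundancy(seq):
          return 1.0 - shannon_entropy(seq) / np.log2(len(AMINO_ACIDS))

      # hypothetical short sequence standing in for a real protein
      protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

      # Monte Carlo reference: redundancy of random sequences of the same length
      n_mc = 2000
      random_r = [redundancy("".join(rng.choice(list(AMINO_ACIDS), size=len(protein))))
                  for _ in range(n_mc)]

      print(f"protein redundancy        : {redundancy(protein):.3f}")
      print(f"random sequences (mean+sd): {np.mean(random_r):.3f} +/- {np.std(random_r):.3f}")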

  13. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees.

    PubMed

    Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard

    2010-03-31

    Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood based models of sequence evolution, which opens the possibility of further improvements.

  14. Quantitative sensory testing response patterns to capsaicin- and ultraviolet-B-induced local skin hypersensitization in healthy subjects: a machine-learned analysis.

    PubMed

    Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G; Ultsch, Alfred

    2017-08-16

    The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models.

  15. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
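
    For the specific parameter setting mentioned above, the estimator reduces to the DER_SNR recipe: a median of third-order differences scaled to be unbiased for Gaussian noise. The sketch below applies it to a synthetic spectrum; the scaling constant follows the published DER_SNR formula and should be checked against the paper for other settings.

      import numpy as np

      rng = np.random.default_rng(8)

      def der_snr_noise(flux):
          """Noise estimate for a well-sampled, smooth signal plus independent Gaussian noise,
          following the DER_SNR recipe (median of a third-order difference, scaled for normality)."""
          f = np.asarray(flux, dtype=float)
          diff = 2.0 * f[2:-2] - f[:-4] - f[4:]
          return 1.482602 / np.sqrt(6.0) * np.median(np.abs(diff))

      # synthetic "spectrum": smooth signal plus known Gaussian noise
      x = np.linspace(0, 10, 5000)
      signal = 100 + 20 * np.exp(-0.5 * ((x - 5) / 0.8) ** 2)
      sigma_true = 2.5
      flux = signal + rng.normal(0, sigma_true, x.size)

      print(f"true sigma = {sigma_true}, estimated sigma = {der_snr_noise(flux):.3f}")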

  16. The Shock Pulse Index and Its Application in the Fault Diagnosis of Rolling Element Bearings

    PubMed Central

    Sun, Peng; Liao, Yuhe; Lin, Jin

    2017-01-01

    The properties of the time domain parameters of vibration signals have been extensively studied for the fault diagnosis of rolling element bearings (REBs). Parameters like kurtosis and Envelope Harmonic-to-Noise Ratio are the most widely applied in this field and some important progress has been made. However, since only one-sided information is contained in these parameters, problems still exist in practice when the signals collected are of complicated structure and/or contaminated by strong background noises. A new parameter, named Shock Pulse Index (SPI), is proposed in this paper. It integrates the mutual advantages of both the parameters mentioned above and can help effectively identify fault-related impulse components under conditions of interference of strong background noises, unrelated harmonic components and random impulses. The SPI optimizes the parameters of Maximum Correlated Kurtosis Deconvolution (MCKD), which is used to filter the signals under consideration. Finally, the transient information of interest contained in the filtered signal can be highlighted through demodulation with the Teager Energy Operator (TEO). Fault-related impulse components can therefore be extracted accurately. Simulations show the SPI can correctly indicate the fault impulses under the influence of strong background noises, other harmonic components and aperiodic impulse and experiment analyses verify the effectiveness and correctness of the proposed method. PMID:28282883
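
    The final demodulation step relies on the discrete Teager Energy Operator, psi[n] = x[n]^2 - x[n-1]*x[n+1]. The sketch below applies it to a synthetic bearing-like signal (decaying resonances repeating at a hypothetical fault frequency in noise) and reads the repetition frequency from the spectrum of the TEO output; the MCKD pre-filtering stage guided by the SPI is omitted here.

      import numpy as np

      rng = np.random.default_rng(9)

      def teager_energy(x):
          """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
          x = np.asarray(x, dtype=float)
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      # synthetic vibration: periodic fault impulses (decaying resonances) in noise
      fs, T = 12_000, 1.0
      t = np.arange(0, T, 1 / fs)
      fault_period = 1 / 105.0                          # hypothetical fault frequency of 105 Hz
      impulses = np.zeros_like(t)
      for t0 in np.arange(0, T, fault_period):
          m = t >= t0
          impulses[m] += np.exp(-800 * (t[m] - t0)) * np.sin(2 * np.pi * 3000 * (t[m] - t0))
      signal = impulses + 0.3 * rng.normal(size=t.size)

      # TEO demodulation highlights the impulse envelope; its spectrum shows the fault frequency
      env = teager_energy(signal)
      spec = np.abs(np.fft.rfft(env - env.mean()))
      freqs = np.fft.rfftfreq(env.size, 1 / fs)
      print("dominant low-frequency peak at %.1f Hz" % freqs[1:600][np.argmax(spec[1:600])])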

  17. AGARD Flight Test Instrumentation Series. Volume 14. The Analysis of Random Data

    DTIC Science & Technology

    1981-11-01

    ... obtained at arbitrary times during a number of flights. No constraints have been placed upon the controlling parameters, so that the process is non-stationary ... a "noisy" environment controlling a non-linear system (the aircraft) using a redundant net of control parameters ... when aircraft were flown manually ... Case 2, Non-Stationary Measurements: when the RMS value of a random signal varies with parameters which cannot be controlled, then the method ...

  18. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  19. Fitting parametric random effects models in very large data sets with application to VHA national data

    PubMed Central

    2012-01-01

    Background: With the current focus on personalized medicine, patient/subject level inference is often of key interest in translational research. As a result, random effects models (REM) are becoming popular for patient level inference. However, for very large data sets that are characterized by large sample size, it can be difficult to fit REM using commonly available statistical software such as SAS, since they require inordinate amounts of computer time and memory allocations beyond what are available, preventing model convergence. For example, in a retrospective cohort study of over 800,000 Veterans with type 2 diabetes with longitudinal data over 5 years, fitting REM via generalized linear mixed modeling using currently available standard procedures in SAS (e.g. PROC GLIMMIX) was very difficult, and the same problems exist in Stata's gllamm or R's lme packages. Thus, this study proposes and assesses the performance of a meta regression approach and makes comparisons with methods based on sampling of the full data. Data: We use both simulated and real data from a national cohort of Veterans with type 2 diabetes (n=890,394) which was created by linking multiple patient and administrative files, resulting in a cohort with longitudinal data collected over 5 years. Methods and results: The outcome of interest was mean annual HbA1c measured over a 5-year period. Using this outcome, we compared parameter estimates from the proposed random effects meta regression (REMR) with estimates based on simple random sampling and VISN (Veterans Integrated Service Networks) based stratified sampling of the full data. Our results indicate that REMR provides parameter estimates that are less likely to be biased, with tighter confidence intervals, when the VISN level estimates are homogenous. Conclusion: When the interest is to fit REM in repeated measures data with very large sample size, REMR can be used as a good alternative. It leads to reasonable inference for both Gaussian and non-Gaussian responses if parameter estimates are homogeneous across VISNs. PMID:23095325

  20. Building on crossvalidation for increasing the quality of geostatistical modeling

    USGS Publications Warehouse

    Olea, R.A.

    2012-01-01

    The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have critical influence on the results. The importance of finding ways to compare the methods and set parameters to obtain results that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide sensitivity to the semivariogram lacking in crossvalidation of kriging errors and are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the applications of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between a reasonable starting model and the solutions obtained according to the new criteria. © 2011 US Government.

  1. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  2. Effects of the Topical Application of Hydroalcoholic Leaf Extract of Oncidium flexuosum Sims. (Orchidaceae) and Microcurrent on the Healing of Wounds Surgically Induced in Wistar Rats

    PubMed Central

    de Gaspi, Fernanda Oliveira de G.; Foglio, Mary Ann; de Carvalho, João Ernesto; Santos, Gláucia Maria T.; Testa, Milene; Passarini, José Roberto; de Moraes, Cristiano Pedroso; Esquisatto, Marcelo A. Marreto; Mendonça, Josué S.; Mendonça, Fernanda A. Sampaio

    2011-01-01

    This study evaluated the wound healing activity of hydroalcoholic leaf extract of Oncidium flexuosum Sims. (Orchidaceae), an important native plant of Brazil, combined or not with microcurrent stimulation. Wistar rats were randomly divided into four groups of nine animals: control (C), topical application of the extract (OF), treated with a microcurrent (10 μA/2 min) (MC), and topical application of the extract plus microcurrent (OF + MC). Tissue samples were obtained 2, 6, and 10 days after injury and submitted to structural and morphometric analysis. The simultaneous application of OF + MC was found to be highly effective in terms of the parameters analyzed (P < .05), with positive effects on the area of newly formed tissue, number of fibroblasts, number of newly formed blood vessels, and epithelial thickness. Morphometric data confirmed the structural findings. The O. flexuosum leaf extract contains active compounds that speed the healing process, especially when applied simultaneously with microcurrent stimulation. PMID:21716707

  3. The glassy random laser: replica symmetry breaking in the intensity fluctuations of emission spectra

    PubMed Central

    Antenucci, Fabrizio; Crisanti, Andrea; Leuzzi, Luca

    2015-01-01

    The behavior of a newly introduced overlap parameter, measuring the correlation between intensity fluctuations of waves in random media, is analyzed in different physical regimes, with varying amounts of disorder and non-linearity. This order parameter makes it possible to identify the laser transition in random media and describes its possible glassy nature in terms of emission spectra data, the only data so far accessible in random laser measurements. The theoretical analysis is performed in terms of the complex spherical spin-glass model, a statistical mechanical model describing the onset and the behavior of random lasers in open cavities. Replica Symmetry Breaking theory makes it possible to discern different kinds of randomness in the high pumping regime, including the most complex and intriguing glassy randomness. The outcome of the theoretical study is finally compared to recent intensity fluctuation overlap measurements, demonstrating the validity of the theory and providing a straightforward interpretation of qualitatively different spectral behaviors in different random lasers. PMID:26616194

  4. Various Attractors, Coexisting Attractors and Antimonotonicity in a Simple Fourth-Order Memristive Twin-T Oscillator

    NASA Astrophysics Data System (ADS)

    Zhou, Ling; Wang, Chunhua; Zhang, Xin; Yao, Wei

    By replacing the resistor in a Twin-T network with a generalized flux-controlled memristor, this paper proposes a simple fourth-order memristive Twin-T oscillator. Rich dynamical behaviors can be observed in the dynamical system. The most striking feature is that this system has various periodic orbits and various chaotic attractors generated by adjusting parameter b. At the same time, coexisting attractors and antimonotonicity are also detected (especially, two full Feigenbaum remerging trees in series are observed in such autonomous chaotic systems). Their dynamical features are analyzed by phase portraits, Lyapunov exponents, bifurcation diagrams and basin of attraction. Moreover, hardware experiments on a breadboard are carried out. Experimental measurements are in accordance with the simulation results. Finally, a multi-channel random bit generator is designed for encryption applications. Numerical results illustrate the usefulness of the random bit generator.

  5. Role of protein fluctuation correlations in electron transfer in photosynthetic complexes.

    PubMed

    Nesterov, Alexander I; Berman, Gennady P

    2015-04-01

    We consider the dependence of the electron transfer in photosynthetic complexes on correlation properties of random fluctuations of the protein environment. The electron subsystem is modeled by a finite network of connected electron (exciton) sites. The fluctuations of the protein environment are modeled by random telegraph processes, which act either collectively (correlated) or independently (uncorrelated) on the electron sites. We derived an exact closed system of first-order linear differential equations with constant coefficients, for the average density matrix elements and for their first moments. Under some conditions, we obtained analytic expressions for the electron transfer rates and found the range of parameters for their applicability by comparing with the exact numerical simulations. We also compared the correlated and uncorrelated regimes and demonstrated numerically that the uncorrelated fluctuations of the protein environment can, under some conditions, either increase or decrease the electron transfer rates.
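
    The noise model itself is easy to reproduce: a symmetric random telegraph process either shared by all sites (correlated regime) or drawn independently per site (uncorrelated regime). The sketch below only generates the fluctuating site energies and verifies their cross-site correlation; the switching rate, amplitude, and static energies are illustrative values, and the density-matrix dynamics are not included.

      import numpy as np

      rng = np.random.default_rng(10)

      def random_telegraph(n_steps, dt, rate, rng):
          """Symmetric two-state (+1/-1) random telegraph process with switching rate `rate`."""
          s = np.empty(n_steps)
          s[0] = rng.choice([-1.0, 1.0])
          flips = rng.random(n_steps - 1) < rate * dt     # flip probability per step
          s[1:] = s[0] * np.cumprod(np.where(flips, -1.0, 1.0))
          return s

      n_sites, n_steps, dt, rate = 4, 20_000, 1e-3, 5.0

      # correlated regime: one telegraph process acts collectively on all sites
      collective = np.tile(random_telegraph(n_steps, dt, rate, rng), (n_sites, 1))

      # uncorrelated regime: each site sees an independent telegraph process
      independent = np.array([random_telegraph(n_steps, dt, rate, rng) for _ in range(n_sites)])

      # site energies fluctuate around their static values (arbitrary units)
      static_energies = np.array([0.0, 0.2, 0.5, 0.9])[:, None]
      amplitude = 0.1
      energies_corr = static_energies + amplitude * collective
      energies_uncorr = static_energies + amplitude * independent

      # cross-site correlation of the fluctuations distinguishes the two regimes
      print("correlated case,   corr(site0, site1):",
            round(np.corrcoef(energies_corr[0], energies_corr[1])[0, 1], 3))
      print("uncorrelated case, corr(site0, site1):",
            round(np.corrcoef(energies_uncorr[0], energies_uncorr[1])[0, 1], 3))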

  6. Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.

    PubMed

    Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H

    2010-02-01

    Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group, number of daily milkings and age of cow at calving as covariate (linear and quadratic effect). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Exploring activity-driven network with biased walks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Wu, Ding Juan; Lv, Fang; Su, Meng Long

    We investigate the concurrent dynamics of biased random walks and the activity-driven network, where the preferential transition probability is expressed in terms of an edge-weighting parameter. We also obtain analytical expressions for the stationary distribution and the coverage function in directed and undirected networks, all of which depend on the weight parameter. By appropriately adjusting this parameter, a more effective search strategy can be obtained compared with the unbiased random walk, whether in directed or undirected networks, since network weights play a significant role in the diffusion process.
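
    A small simulation of such an edge-weighted bias, with transition probabilities proportional to w**alpha on a synthetic weighted network (a static stand-in for one snapshot of an activity-driven network), illustrates how coverage depends on the weight parameter.

      import numpy as np

      rng = np.random.default_rng(11)

      def biased_walk_coverage(W, alpha, n_steps, rng):
          """Coverage (fraction of distinct nodes visited) for a walk whose transition
          probability from node i to neighbour j is proportional to W[i, j]**alpha."""
          n = W.shape[0]
          node = rng.integers(n)
          visited = {node}
          for _ in range(n_steps):
              nbrs = np.flatnonzero(W[node])
              w = W[node, nbrs] ** alpha            # edge-weighting bias
              node = rng.choice(nbrs, p=w / w.sum())
              visited.add(node)
          return len(visited) / n

      # synthetic weighted undirected network (hypothetical stand-in)
      n = 200
      W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)
      W = np.triu(W, 1)
      W = W + W.T
      for i in np.where(W.sum(axis=1) == 0)[0]:     # give isolated nodes one edge
          j = (i + 1) % n
          W[i, j] = W[j, i] = rng.random()

      for alpha in (-1.0, 0.0, 1.0):                # alpha = 0 recovers the unbiased walk
          cov = np.mean([biased_walk_coverage(W, alpha, 1_000, rng) for _ in range(5)])
          print(f"alpha = {alpha:+.1f}: mean coverage after 1000 steps = {cov:.2f}")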

  8. Random elements on lattices: Review and statistical applications

    NASA Astrophysics Data System (ADS)

    Potocký, Rastislav; Villarroel, Claudia Navarro; Sepúlveda, Maritza; Luna, Guillermo; Stehlík, Milan

    2017-07-01

    We discuss important contributions to random elements on lattices, considering both algebraic and probabilistic properties. Several applications and concepts are discussed, e.g., positive dependence, random walks and distributions on lattices, super-lattices, and learning. An application to Chilean ecology is given.

  9. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm.

    In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.

    The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms.

    Novel features of the RBSA algorithm include:
    1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored;
    2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and
    3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
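
    The conventional SA loop described above (random start, temperature-controlled acceptance of worse moves, shrinking search region) can be sketched as follows; the recursive branching and per-parameter trust regions of RBSA are not reproduced here, and the objective function and schedule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(12)

      def objective(x):
          """Toy multimodal objective (Rastrigin); the global minimum is 0 at the origin."""
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      def simulated_annealing(objective, bounds, n_iter=20_000, t0=5.0, rng=rng):
          lo, hi = bounds
          x = rng.uniform(lo, hi)                    # random starting configuration
          fx = objective(x)
          best_x, best_f = x.copy(), fx
          for k in range(n_iter):
              frac = 1.0 - k / n_iter
              temp = t0 * frac                       # annealing schedule
              radius = 0.5 * (hi - lo) * frac        # search region shrinks over time
              cand = np.clip(x + rng.uniform(-radius, radius), lo, hi)
              fc = objective(cand)
              # accept better moves always, worse moves with temperature-dependent probability
              if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-12)):
                  x, fx = cand, fc
                  if fx < best_f:
                      best_x, best_f = x.copy(), fx
          return best_x, best_f

      bounds = (np.full(4, -5.12), np.full(4, 5.12))
      x_best, f_best = simulated_annealing(objective, bounds)
      print("best objective found:", round(f_best, 4))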

  10. Substituting whole grains for refined grains in a 6-week randomized trial favorably affects energy balance parameters in healthy men and post-menopausal women

    USDA-ARS?s Scientific Manuscript database

    Background: The effect of whole grains on the regulation of energy balance remains controversial. Objective: To determine the effects of substituting whole grains for refined grains, independent of body weight change, on energy metabolism parameters and glycemic control. Design: A randomized, con...

  11. The Application of Vector Diffraction to the Scalar Anomalous Diffraction Approximation of van de Hulst.

    DTIC Science & Technology

    1987-05-01

    plvdl ’ , systems, Applied Optics, 16( 12), 1181 - iAQ. Champion, 7. ’., G . H . Meeten and ". enir, ’ " . , ’ particles in the colloidal state, ourna: ,f...he her, 4a London, Faraday Transactions * - Champion, J. V. , G . H . Meeten , 1. 4. U 0 1n and -arp. Optical extinction of randomly ,oriented in, T...distance into the (poly) dispersion. We can find T -from f du fdy fda G (u,y,$) Qi(u,y,S) n(u) g (y) h (B) (15) where u is some parameter specifying

  12. Bi-stability resistant to fluctuations

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2017-12-01

    We study a simple micro-mechanical device that does not lose its snap-through behavior in an environment dominated by fluctuations. The main idea is to have several degrees of freedom that can cooperatively resist the de-synchronizing effect of random perturbations. As an inspiration we use the power stroke machinery of skeletal muscles, which ensures at sub-micron scales and finite temperatures a swift recovery of an abruptly applied slack. In addition to hypersensitive response at finite temperatures, our prototypical Brownian snap spring also exhibits criticality at special values of parameters which is another potentially interesting property for micro-scale engineering applications.

  13. The effect of Laser and taping on pain, functional status and quality of life in patients with fibromyalgia syndrome: A placebo- randomized controlled clinical trial.

    PubMed

    Vayvay, Emre Serdar; Tok, Damla; Turgut, Elif; Tunay, Volga Bayrakci

    2016-01-01

    Conservative treatments have proved effective for controlling pain and optimizing function in fibromyalgia; however, scientific evidence is needed to guide better clinical application across the various physiotherapy modalities. The aim of this study was to investigate the effects of Laser and taping applications on pain, flexibility, anxiety, depression, functional status and quality of life in patients with fibromyalgia syndrome. Forty-five female patients with fibromyalgia syndrome were included in the study and randomly allocated into three treatment groups: Laser (n = 15), placebo Laser (n = 15), and taping (n = 15). A visual analogue scale for pain intensity, trunk flexibility, the Fibromyalgia Impact Questionnaire for functional status, the Short Form 36 Questionnaire for quality of life and health status, and the Beck Depression Inventory for anxiety level were evaluated before and after the three-week interventions. Pain severity during activity (p = 0.028) and anxiety level (p = 0.01) decreased, and general health status and quality of life improved (p = 0.01), in the Laser group, whereas trunk flexibility increased in flexion (p = 0.03) and extension (p = 0.02) in the taping group. After the interventions, pain severity at night decreased in all groups (Laser, p = 0.04; placebo Laser, p = 0.001; taping, p = 0.01) and functional status improved (Laser, p = 0.001; placebo Laser, p = 0.001; taping, p = 0.01). Kinesiotape application had a similar effect on these parameters in FMS patients, so this method could be preferred instead of Laser application in rehabilitation programs.

  14. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths. The strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross validation measures. References [1] T. Hofmann, B. Schölkopf, A.J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton (2012). Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77. Springer, Berlin-Heidelberg.

  15. Effect of low-level laser therapy on pain and perineal healing after episiotomy: A triple-blind randomized controlled trial.

    PubMed

    Alvarenga, Marina B; de Oliveira, Sonia Maria Junqueira Vasconcellos; Francisco, Adriana A; da Silva, Flora Maria B; Sousa, Marcelo; Nobre, Moacyr Roberto

    2017-02-01

    Episiotomy is associated with perineal pain and healing complications. Low-level laser therapy (LLLT) reduces pain and inflammation and stimulates the healing process. This study aimed to assess the effect of LLLT on pain and perineal healing after an episiotomy. A randomized, triple-blind, parallel clinical trial was conducted with 54 postpartum women who had a spontaneous birth with a right mediolateral episiotomy. The women were randomized into two groups: the experimental group (LLLT applications, n = 29) or the placebo group (simulated LLLT applications, n = 25). Three sessions of real or sham irradiation were performed: the first at 6-10 hours after normal birth, and the 2nd and 3rd applications at 20-24 hours and 40-48 hours after the first session, respectively. Perineal pain was recorded using a numeric scale ranging from 0 to 10 (0 = absence and 10 = worst pain). Perineal healing was assessed using the redness, oedema, ecchymosis, discharge, and approximation (REEDA) scale. Both groups were assessed four times: in each of the three LLLT sessions and at 7-10 days after normal birth. The groups were compared using Student's t, Mann-Whitney, and Chi-square tests. There was no significant difference between the groups regarding perineal healing after LLLT. The perineal pain scores were statistically higher in the experimental group in the first assessment and after the third LLLT. There was no significant difference between the groups in the perineal pain scores 7-10 days after normal birth. The use of LLLT does not provide any benefit for treating postpartum perineal trauma with this specific protocol and these parameters. Lasers Surg. Med. 49:181-188, 2017. © 2016 Wiley Periodicals, Inc.

  16. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  17. Influence of Choice of Null Network on Small-World Parameters of Structural Correlation Networks

    PubMed Central

    Hosseini, S. M. Hadi; Kesler, Shelli R.

    2013-01-01

    In recent years, coordinated variations in brain morphology (e.g., volume, thickness) have been employed as a measure of structural association between brain regions to infer large-scale structural correlation networks. Recent evidence suggests that brain networks constructed in this manner are inherently more clustered than random networks of the same size and degree. Thus, null networks constructed by randomizing topology are not a good choice for benchmarking small-world parameters of these networks. In the present report, we investigated the influence of choice of null networks on small-world parameters of gray matter correlation networks in healthy individuals and survivors of acute lymphoblastic leukemia. Three types of null networks were studied: 1) networks constructed by topology randomization (TOP), 2) networks matched to the distributional properties of the observed covariance matrix (HQS), and 3) networks generated from correlation of randomized input data (COR). The results revealed that the choice of null network not only influences the estimated small-world parameters, it also influences the results of between-group differences in small-world parameters. In addition, at higher network densities, the choice of null network influences the direction of group differences in network measures. Our data suggest that the choice of null network is quite crucial for interpretation of group differences in small-world parameters of structural correlation networks. We argue that none of the available null models is perfect for estimation of small-world parameters for correlation networks and the relative strengths and weaknesses of the selected model should be carefully considered with respect to obtained network measures. PMID:23840672

  18. New Estimates of Design Parameters for Clustered Randomization Studies: Findings from North Carolina and Florida. Working Paper 43

    ERIC Educational Resources Information Center

    Xu, Zeyu; Nichols, Austin

    2010-01-01

    The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…

  19. The effect of visceral osteopathic manual therapy applications on pain, quality of life and function in patients with chronic nonspecific low back pain.

    PubMed

    Tamer, Seval; Öz, Müzeyyen; Ülger, Özlem

    2017-01-01

    The efficacy of osteopathic manual therapy (OMT) applications in chronic nonspecific low back pain (LBP) has been demonstrated. However, visceral applications, which are an important part of OMT techniques, have not been included in those studies. The study's objective was to determine the effect of OMT including visceral applications on function and quality of life (QoL) in patients with chronic nonspecific LBP. The study was designed with a simple method of block randomization. Thirty-nine patients with chronic nonspecific LBP were included in the study. The OMT group consisted of 19 patients to whom OMT and exercise methods were applied. The visceral osteopathic manual therapy (vOMT) group consisted of 20 patients who received visceral applications in addition to the applications carried out in the other group. Ten sessions were performed over a two-week period. Pain (VAS), function (Oswestry Index) and QoL (SF-36) assessments were carried out before the treatment and in the sixth week of treatment. Both treatments were found to be effective for pain and function, as well as for the physical function, pain, general health and social function sub-parameters of QoL. vOMT was effective on all QoL sub-parameters (p < 0.05). Comparing the groups, the energy and physical limitation QoL scores were higher in the vOMT group (p < 0.05). Visceral applications in patients with non-specific LBP gave positive results together with OMT and exercise methods. We believe that visceral fascial limitations, which we think cause restriction and pain in the lumbar segment, should be taken into consideration.

  20. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of regression coefficient lost its property of minimum variance in the class of linear and unbiased estimators. Weighted least squares estimation are often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that factors affecting paddy production are mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage with amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle and the interaction between pest and disease and NPK fertilizer application cycle.
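
    As a minimal numerical sketch of the weighted least squares idea described above, the snippet below fits a degree-2 polynomial with inverse-variance weights via the weighted normal equations. The data, weights and variable names are illustrative and are not taken from the paddy study.

```python
import numpy as np

# Minimal sketch of weighted least squares for a quadratic (polynomial) model,
# assuming the error variance grows with the predictor; values are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
sigma = 0.2 * x                                  # non-constant error standard deviation
y = 2.0 + 1.5 * x - 0.1 * x**2 + rng.normal(0.0, sigma)

X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix for a degree-2 polynomial
w = 1.0 / sigma**2                               # weights = inverse variances
W = np.diag(w)

# Weighted normal equations: beta = (X' W X)^{-1} X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("WLS coefficients (intercept, x, x^2):", beta)
```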

  1. Uncertainty in dual permeability model parameters for structured soils.

    PubMed

    Arora, B; Mohanty, B P; McGuire, J T

    2012-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.

  2. Uncertainty in dual permeability model parameters for structured soils

    NASA Astrophysics Data System (ADS)

    Arora, B.; Mohanty, B. P.; McGuire, J. T.

    2012-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.

  3. Calibration of Discrete Random Walk (DRW) Model via G.I Taylor's Dispersion Theory

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Aliseda, Alberto

    2012-11-01

    Prediction of particle dispersion in turbulent flows is still an important challenge with many applications to environmental, as well as industrial, fluid mechanics. Several models of dispersion have been developed to predict particle trajectories and their relative velocities, in combination with a RANS-based simulation of the background flow. The interaction of the particles with the velocity fluctuations at different turbulent scales represents a significant difficulty in generalizing the models to the wide range of flows where they are used. We focus our attention on the Discrete Random Walk (DRW) model applied to flow in a channel, particularly to the selection of eddies lifetimes as realizations of a Poisson distribution with a mean value proportional to κ / ɛ . We present a general method to determine the constant of this proportionality by matching the DRW model dispersion predictions for fluid element and particle dispersion to G.I Taylor's classical dispersion theory. This model parameter is critical to the magnitude of predicted dispersion. A case study of its influence on sedimentation of suspended particles in a tidal channel with an array of Marine Hydrokinetic (MHK) turbines highlights the dependency of results on this time scale parameter. Support from US DOE through the Northwest National Marine Renewable Energy Center, a UW-OSU partnership.
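
    The following is a minimal sketch of the DRW dispersion step described above, assuming a calibration constant C_T multiplying k/eps for the mean eddy lifetime and drawing lifetimes from an exponential distribution; the flow values and the constant are illustrative, not the calibrated values from the study.

```python
import numpy as np

# Minimal Discrete Random Walk (DRW) dispersion sketch: each particle keeps a
# velocity fluctuation for one eddy lifetime, then samples a new one.
rng = np.random.default_rng(1)

k, eps = 0.05, 0.01                  # turbulent kinetic energy and dissipation (assumed)
C_T = 0.3                            # proportionality constant to be calibrated against Taylor theory
u_rms = np.sqrt(2.0 * k / 3.0)       # rms of one velocity component
dt, n_steps, n_particles = 1e-3, 5000, 500

x = np.zeros(n_particles)
v = rng.normal(0.0, u_rms, n_particles)                 # current eddy velocity fluctuation
t_left = rng.exponential(C_T * k / eps, n_particles)    # remaining eddy lifetime

for _ in range(n_steps):
    x += v * dt
    t_left -= dt
    expired = t_left <= 0.0
    # particles whose eddy expired sample a new fluctuation and a new lifetime
    v[expired] = rng.normal(0.0, u_rms, expired.sum())
    t_left[expired] = rng.exponential(C_T * k / eps, expired.sum())

print("dispersion (variance of particle position):", x.var())
```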

  4. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277

  5. Two injection digital block versus single subcutaneous palmar injection block for finger lacerations.

    PubMed

    Okur, O M; Şener, A; Kavakli, H Ş; Çelik, G K; Doğan, N Ö; Içme, F; Günaydin, G P

    2017-12-01

    We aimed to compare two digital nerve block techniques in patients with traumatic digital lacerations. This was a prospective randomized controlled study conducted in the emergency department of a university-based training and research hospital. Randomization was achieved by sealed envelopes. Half of the patients were randomized to the traditional (two-injection) digital nerve block technique, while the single-injection digital nerve block technique was applied to the other half. Pain scores due to anesthetic infiltration and suturing, onset time of total anesthesia, and the need for an additional rescue injection were the parameters evaluated in both groups. An epinephrine-added lidocaine hydrochloride preparation was used for the anesthetic application. A visual analog scale was used for the evaluation of pain scores. Outcomes were compared using the Mann-Whitney U test and Student's t-test. Fifty emergency department patients ≥18 years requiring digital nerve block were enrolled in the study. Mean age of the patients was 33 (range 19-86) and 39 (78%) were male. No statistically significant difference was found between the two groups in terms of our main parameters: anesthesia pain score, suturing pain score, onset time of total anesthesia, and rescue injection need. The single-injection volar digital nerve block technique is a suitable alternative for digital anesthesia in emergency departments.

  6. Effect of surgical periodontal treatment associated to antimicrobial photodynamic therapy on chronic periodontitis: A randomized controlled clinical trial.

    PubMed

    Martins, Sérgio H L; Novaes, Arthur B; Taba, Mario; Palioto, Daniela B; Messora, Michel R; Reino, Danilo M; Souza, Sérgio L S

    2017-07-01

    This randomized controlled clinical trial evaluated the effects of an adjunctive single application of antimicrobial photodynamic therapy (aPDT) in Surgical Periodontal Treatment (ST) in patients with severe chronic periodontitis (SCP). In a split-mouth design, 20 patients with SCP were treated with aPDT+ST (Test Group, TG) or ST only (Control Group, CG). aPDT was applied in a single episode, using a diode laser and a phenothiazine photosensitizer. All patients were monitored until 90 days after surgical therapy. Levels of 40 subgingival species were measured by checkerboard DNA-DNA hybridization at baseline, 60 and 150 days. Clinical and microbiological parameters were evaluated. In deep periodontal pockets (probing pocket depth, PPD ≥5 mm), the Test Group presented a significantly greater decrease in PPD than the Control Group at 90 days after surgical therapy (p < .05). The Test Group also demonstrated significantly fewer red-complex periodontal pathogens (Treponema denticola) (p < .05). A single episode of aPDT used as an adjunct to open flap debridement of the root surface in the surgical treatment of SCP: i) significantly improved clinical periodontal parameters; ii) eliminated red-complex periodontal pathogens more effectively (NCT02734784). © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
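
    A schematic of the perturbation procedure described above: bias one optical input channel by a known relative amount, rerun the retrieval, and fit the (roughly linear) response of a retrieved parameter. The retrieval function below is a toy linear stand-in, used only so the loop is runnable; it is not the regularized inversion from the paper.

```python
import numpy as np

# Systematic-error sensitivity loop: perturb one optical channel, re-retrieve,
# and estimate the linear dependence of the retrieved parameter on the bias.
rng = np.random.default_rng(2)
A = rng.normal(size=(1, 5))                 # toy linear "retrieval" operator (stand-in)

def retrieve_effective_radius(optical_data):
    return float(A @ optical_data)          # placeholder for the full regularized inversion

baseline = np.array([1.0, 0.8, 0.5, 0.3, 0.2])   # e.g. 3 backscatter + 2 extinction channels
r_ref = retrieve_effective_radius(baseline)

biases = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])
deviations = []
for b in biases:
    perturbed = baseline.copy()
    perturbed[0] *= 1.0 + b                 # bias the first (e.g. 355 nm backscatter) channel
    deviations.append(retrieve_effective_radius(perturbed) - r_ref)

slope = np.polyfit(biases, deviations, 1)[0]
print("linear sensitivity of retrieved parameter to the induced bias:", slope)
```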

  8. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams’ series parameters obtained from the measured components of the stress field. Also, a criterion for choosing the optimal number of terms in the truncated Williams’ series, so that the parameters are derived with minimal errors, is proposed. The method was used for the evaluation of the Williams’ parameters obtained from data measured by the digital image correlation technique in testing of a three-point bending specimen.

  9. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts first, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling gives the possibility to consider uncertainty of the mass-transfer parameters as well as other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in the view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible. Identifying associated migration risk and potential safety concerns in the early stage of packaging development is possible. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
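
    A minimal Monte Carlo sketch of the probabilistic migration idea: sample the uncertain diffusion coefficient and layer thickness from assumed distributions and propagate them through a simplified square-root-of-time migration estimate. The distributions, numerical values and migration formula are illustrative assumptions, not the software tool described in the abstract.

```python
import numpy as np

# Propagate uncertain mass-transfer inputs through a simplified migration estimate.
rng = np.random.default_rng(3)
n = 100_000

c_p0 = 200.0                                   # migrant concentration in the polymer, mg/kg (assumed)
rho_p = 1.0e3                                  # polymer density, kg/m^3 (assumed)
t = 10 * 24 * 3600.0                           # contact time: 10 days in seconds

D = rng.lognormal(mean=np.log(1e-14), sigma=1.0, size=n)        # diffusion coefficient, m^2/s
L = rng.normal(loc=50e-6, scale=5e-6, size=n).clip(min=10e-6)   # layer thickness, m

# simplified square-root-of-time estimate of migrated mass per contact area (mg/m^2),
# capped by the total amount available in the layer
m_area = np.minimum(2.0 * c_p0 * rho_p * np.sqrt(D * t / np.pi),
                    c_p0 * rho_p * L)

print("median migration [mg/m^2]:", np.median(m_area))
print("95th percentile [mg/m^2]:", np.percentile(m_area, 95))
```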

  10. Hydrologic Model Selection using Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Marshall, L.; Sharma, A.; Nott, D.

    2002-12-01

    Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm which has characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each such class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance, as well as the simplicity of the MCMC sampling framework that has the ability to bring such approaches within the reach of the applied hydrological community.
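
    The sketch below illustrates the basic posterior-sampling idea with a plain random-walk Metropolis sampler on a toy one-parameter model; the adaptive sampler and the AWBM rainfall-runoff model from the abstract are not reproduced.

```python
import numpy as np

# Random-walk Metropolis for Bayesian inference of a single parameter 'a'
# in a toy model y = a * x with Gaussian errors and a flat prior.
rng = np.random.default_rng(4)

x = np.linspace(0, 10, 50)
a_true, sigma = 1.8, 0.5
y = a_true * x + rng.normal(0, sigma, x.size)

def log_posterior(a):
    resid = y - a * x
    return -0.5 * np.sum(resid**2) / sigma**2

a_cur, logp_cur = 1.0, log_posterior(1.0)
samples = []
for _ in range(20000):
    a_prop = a_cur + rng.normal(0, 0.05)          # random-walk proposal
    logp_prop = log_posterior(a_prop)
    if np.log(rng.uniform()) < logp_prop - logp_cur:
        a_cur, logp_cur = a_prop, logp_prop       # accept
    samples.append(a_cur)

samples = np.array(samples[5000:])                # discard burn-in
print("posterior mean and 95% interval:", samples.mean(),
      np.percentile(samples, [2.5, 97.5]))
```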

  11. Preference heterogeneity in a count data model of demand for off-highway vehicle recreation

    Treesearch

    Thomas P Holmes; Jeffrey E Englin

    2010-01-01

    This paper examines heterogeneity in the preferences for OHV recreation by applying the random parameters Poisson model to a data set of off-highway vehicle (OHV) users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...

  12. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte-Carlo method (method of statistical trials) as an application for meteor observation processing was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that, if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. At the same time, the theoretical possibility appears to refine the most important parameter - the geocentric velocity of the meteor - which has the strongest influence on the precision of the calculated meteor heliocentric orbit elements. In the classical approach the velocity vector was calculated in two stages: first, the vector direction was calculated as the vector product of the poles of the meteor trajectory great circles obtained from the two observational points; then the absolute value of the velocity was calculated independently from each observational point, and one of them was selected, for some reason, as the final value. In the given method we propose to obtain the statistical distribution of the velocity absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We suppose that such an approach will substantially increase the precision of the meteor velocity calculation and remove subjective inaccuracies.
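
    A small sketch of the proposed combination step: the velocity densities obtained from the two observation points are multiplied (their "intersection") and renormalized to give a single distribution for the geocentric velocity. Gaussian shapes and the numerical values are assumptions made only for illustration.

```python
import numpy as np

# Combine two Monte Carlo velocity distributions by multiplying their densities.
v = np.linspace(50.0, 70.0, 2001)                 # km/s grid
dv = v[1] - v[0]

def gaussian(v, mu, sd):
    return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

p1 = gaussian(v, 59.2, 1.5)                       # station 1 Monte Carlo result (assumed)
p2 = gaussian(v, 60.4, 1.0)                       # station 2 Monte Carlo result (assumed)

p12 = p1 * p2
p12 /= p12.sum() * dv                             # normalize the combined density

mean = (v * p12).sum() * dv
std = np.sqrt(((v - mean) ** 2 * p12).sum() * dv)
print(f"combined geocentric velocity: {mean:.2f} +/- {std:.2f} km/s")
```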

  13. Disruption Warning Database Development and Exploratory Machine Learning Studies on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Montes, Kevin; Rea, Cristina; Granetz, Robert

    2017-10-01

    A database of about 1800 shots from the 2015 campaign on the Alcator C-Mod tokamak is assembled, including disruptive and non-disruptive discharges. The database consists of 40 relevant plasma parameters with data taken from 160k time slices. In order to investigate the possibility of developing a robust disruption prediction algorithm that is tokamak-independent, we focused machine learning studies on a subset of dimensionless parameters such as βp, n /nG , etc. The Random Forests machine learning algorithm provides insight on the available data set by ranking the relative importance of the input features. Its application on the C-Mod database, however, reveals that virtually no one parameter has more importance than any other, and that its classification algorithm has a low rate of successfully predicted samples, as well as poor false positive and false negative rates. Comparing the analysis of this algorithm on the C-Mod database with its application to a similar database on DIII-D, we conclude that disruption prediction may not be feasible on C-Mod. This conclusion is supported by empirical observations that most C-Mod disruptions are caused by radiative collapse due to molybdenum from the first wall, which happens on just a 1-2ms timescale. Supported by the US Dept. of Energy under DE-FC02-99ER54512 and DE-FC02-04ER54698.
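
    A minimal sketch of the Random Forests workflow described: train a classifier on dimensionless plasma parameters and inspect the relative feature importances. The data below are synthetic stand-ins (and the feature names beyond beta_p and n/n_G are assumed), not the C-Mod database.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a random forest on synthetic "plasma parameters" and rank feature importances.
rng = np.random.default_rng(5)
n = 5000
X = rng.normal(size=(n, 4))                       # stand-ins for beta_p, n/n_G, li, q95 (assumed)
# synthetic label loosely tied to the second feature so importances are non-trivial
y = (X[:, 1] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

for name, imp in zip(["beta_p", "n/n_G", "li", "q95"], clf.feature_importances_):
    print(f"{name:7s} importance: {imp:.3f}")
```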

  14. Random field theory to interpret the spatial variability of lacustrine soils

    NASA Astrophysics Data System (ADS)

    Russo, Savino; Vessia, Giovanna

    2015-04-01

    Lacustrine soils are Quaternary soils, dated from the Pleistocene to the Holocene, generated in low-energy depositional environments and characterized by mixtures of clays, sands and silts with alternations of finer and coarser grain-size layers. They are often encountered at shallow depth, filling several tens of meters of tectonic or erosive basins typically located in internal Apennine areas. The lacustrine deposits are often locally interbedded with detritic soils resulting from the failure of surrounding reliefs. Their heterogeneous lithology is associated with high spatial variability of physical and mechanical properties along both horizontal and vertical directions. The deterministic approach is still commonly adopted to accomplish the mechanical characterization of these heterogeneous soils, where undisturbed sampling is practically not feasible (if the incoherent fraction is prevalent) or not spatially representative (if the cohesive fraction prevails). The deterministic approach consists of performing in situ tests, like Standard Penetration Tests (SPT) or Cone Penetration Tests (CPT), and deriving design parameters through "expert judgment" interpretation of the measured profiles. These readings of tip and lateral resistance (Rp and RL, respectively) are almost continuous but highly variable in terms of soil classification according to Schmertmann (1978). Thus, neglecting the spatial variability is not the best strategy for estimating spatially representative values of the physical and mechanical parameters of lacustrine soils to be used in engineering applications. Hereafter, a method to derive the spatial variability structure of the aforementioned measurement profiles is presented. It is based on the theory of Random Fields (Vanmarcke 1984) applied to vertical Rp readings from mechanical CPTs. The proposed method relies on the application of regression analysis, by which the spatial mean trend and the fluctuations about this trend are derived. Moreover, the scale of fluctuation is calculated to measure the maximum length beyond which measurements are independent. The spatial mean trend can be used to identify "quasi-homogeneous" soil layers, where the standard deviation and the scale of fluctuation can be calculated. In this study, five Rp profiles performed in the lacustrine deposits of the high River Pescara Valley have been analyzed. There, silty clay deposits with thickness ranging from a few meters to about 60 m, locally rich in sands and peats, are investigated. Vertical trends of the Rp profiles have been derived to be converted into mean trends of design parameters. Furthermore, the variability structure derived from the Rp readings can be propagated to the design parameters to calculate the "characteristic values" requested by the European building codes. References Schmertmann J.H. 1978. Guidelines for Cone Penetration Test, Performance and Design. Report No. FHWA-TS-78-209, U.S. Department of Transportation, Washington, D.C., pp. 145. Vanmarcke E.H. 1984. Random Fields, Analysis and Synthesis. Cambridge (USA): MIT Press.
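
    A minimal sketch of the Random Field treatment described above for a synthetic Rp profile: fit a spatial mean trend by regression, compute the fluctuations about it, and estimate the scale of fluctuation from the sample autocorrelation (integrated up to its first zero crossing, in the spirit of Vanmarcke 1984). All values are illustrative.

```python
import numpy as np

# Detrend a synthetic tip-resistance profile and estimate its scale of fluctuation.
rng = np.random.default_rng(6)
dz = 0.02                                          # sampling interval, m
z = np.arange(0.0, 20.0, dz)

rp = 2.0 + 0.15 * z + np.convolve(rng.normal(0, 0.4, z.size),
                                  np.ones(25) / 25, mode="same")  # correlated noise

trend = np.polyval(np.polyfit(z, rp, 1), z)        # linear spatial mean trend by regression
fluct = rp - trend                                 # fluctuations about the trend

rho = np.correlate(fluct, fluct, mode="full")[fluct.size - 1:]
rho /= rho[0]                                      # sample autocorrelation function
first_zero = np.argmax(rho <= 0.0)                 # index of the first zero crossing
scale_of_fluctuation = 2.0 * dz * np.sum(rho[:first_zero])

print("standard deviation of fluctuations:", fluct.std())
print("scale of fluctuation (m):", scale_of_fluctuation)
```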

  15. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
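
    As one concrete example of the "randomness extraction" step mentioned above, the classical von Neumann extractor removes bias from a raw bit stream by keeping only unequal bit pairs; this is a generic illustration, not necessarily the distillation scheme used by the authors.

```python
import numpy as np

# Von Neumann extractor applied to a biased raw bit stream (e.g. digitized noise).
rng = np.random.default_rng(7)
raw_bits = (rng.random(100_000) < 0.7).astype(int)    # biased raw source (70% ones)

pairs = raw_bits[: raw_bits.size // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]                      # discard 00 and 11 pairs
extracted = pairs[keep, 0]                             # map 01 -> 0, 10 -> 1

print("raw bias      :", raw_bits.mean())
print("extracted bias:", extracted.mean(), "from", extracted.size, "bits")
```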

  16. Influence of ionospheric disturbances onto long-baseline relative positioning in kinematic mode

    NASA Astrophysics Data System (ADS)

    Wezka, Kinga; Herrera, Ivan; Cokrlic, Marija; Galas, Roman

    2013-04-01

    Ionospheric disturbances are fast, random variations in the ionosphere that are difficult to detect and model. Some strong disturbances can cause, among other effects, interruption of the GNSS signal or even loss of signal lock. These phenomena are especially harmful for kinematic real-time applications, where system availability is one of the most important parameters influencing positioning reliability. Our investigations were conducted using long time series of GNSS observations gathered at high latitude, where ionospheric disturbances occur more frequently. A selected processing strategy was used to monitor ionospheric signatures in the time series of the coordinates. The quality of the input data and of the processing results was examined and described by a set of proposed parameters. Variations in the coordinates were compared with available information about the state of the ionosphere derived from the Neustrelitz TEC Model (NTCM) and with the time series of raw observations. Some selected parameters were also calculated with the "iono-tools" module of the TUB-NavSolutions software developed by the Precise Navigation and Positioning Group at Technische Universitaet Berlin. The paper presents the very first results of an evaluation of the robustness of positioning algorithms with respect to ionospheric anomalies, using the NTCM model and our calculated ionospheric parameters.

  17. Truly random number generation: an example

    NASA Astrophysics Data System (ADS)

    Frauchiger, Daniela; Renner, Renato

    2013-10-01

    Randomness is crucial for a variety of applications, ranging from gambling to computer simulations, and from cryptography to statistics. However, many of the currently used methods for generating randomness do not meet the criteria that are necessary for these applications to work properly and safely. A common problem is that a sequence of numbers may look random but nevertheless not be truly random. In fact, the sequence may pass all standard statistical tests and yet be perfectly predictable. This renders it useless for many applications. For example, in cryptography, the predictability of a "randomly" chosen password is obviously undesirable. Here, we review a recently developed approach to generating true, and hence unpredictable, randomness.

  18. Evaluation of a Biostimulant (Pepton) Based in Enzymatic Hydrolyzed Animal Protein in Comparison to Seaweed Extracts on Root Development, Vegetative Growth, Flowering, and Yield of Gold Cherry Tomatoes Grown under Low Stress Ambient Field Conditions

    PubMed Central

    Polo, Javier; Mata, Pedro

    2018-01-01

    The objectives of this experiment were to determine the effects of different application rates of an enzyme-hydrolyzed animal protein biostimulant (Pepton) compared to a standard application rate of a biostimulant derived from seaweed extract (Acadian) on plant growth parameters and yield of gold cherry tomatoes (Solanum lycopersicum L.). Biostimulant treatments were applied starting at 15 days after transplant and every 2 weeks thereafter for a total of 5 applications. One treatment group received no biostimulant (Control). Three treatment groups (Pepton-2, Pepton-3, Pepton-4) received Pepton at different application rates equivalent to 2, 3, or 4 kg/ha applied by foliar spray (first 2 applications) and by irrigation (last 3 applications). Another treatment group (Acadian) received Acadian at 1.5 L/ha by irrigation for all five applications. All groups received the regular fertilizer application for this crop at the transplantation, flowering, and fruiting periods. There were four plots per treatment group. Each plot had a surface area of 21 m2 and consisted of two rows that were 7 m long and 1.5 m wide. Plant height, stem diameter, distance from head to bouquet flowering, fruit set distance between the entire cluster and cluster flowering fruit set, leaf length, and number of leaves per plant were recorded for 20 plants (5 plants per plot) at 56 and 61 days after the first application. Root length and diameter of cherry tomatoes were determined at harvest from 20 randomly selected plants. Harvest yield per plot was registered and production per hectare was calculated. Both biostimulants improved (P < 0.05) all vegetative parameters compared with the control group. There was a positive linear (P < 0.001) effect of Pepton application rate for all parameters. The calculated yield was 7.8 and 1 ton/ha greater, representing 27% and 2.9% higher production, for Pepton applied at 4 kg/ha compared to the control and to Acadian, respectively. In conclusion, Pepton was effective in improving the yield of gold cherry tomatoes under the low-stress ambient growing conditions of this experiment. Short-chain peptides present in Pepton probably act on endogenous hormones and metabolic mediators, which could explain the results obtained in this study. PMID:29403513

  19. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma with applications for the Micro-wave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width) are calculated.

  20. Static and low frequency noise characterization of ultra-thin body InAs MOSFETs

    NASA Astrophysics Data System (ADS)

    Karatsori, T. A.; Pastorek, M.; Theodorou, C. G.; Fadjie, A.; Wichmann, N.; Desplanque, L.; Wallart, X.; Bollaert, S.; Dimitriadis, C. A.; Ghibaudo, G.

    2018-05-01

    A complete static and low frequency noise characterization of ultra-thin body InAs MOSFETs is presented. Characterization techniques, such as the well-known Y-function method established for Si MOSFETs, are applied in order to extract the electrical parameters and study the behavior of these research grade devices. Additionally, the Lambert-W function parameter extraction methodology valid from weak to strong inversion is also used in order to verify its applicability in these experimental level devices. Moreover, a low-frequency noise characterization of the UTB InAs MOSFETs is presented, revealing carrier trapping/detrapping in slow oxide traps and remote Coulomb scattering as origin of 1/f noise, which allowed for the extraction of the oxide trap areal density. Finally, Lorentzian-like noise is also observed in the sub-micron area devices and attributed to both Random Telegraph Noise from oxide individual traps and g-r noise from the semiconductor interface.

  1. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    1993-04-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  2. Static terrestrial laser scanning of juvenile understory trees for field phenotyping

    NASA Astrophysics Data System (ADS)

    Wang, Huanhuan; Lin, Yi

    2014-11-01

    This study attempted to apply the cutting-edge 3D remote sensing technique of static terrestrial laser scanning (TLS) to parametric 3D reconstruction of juvenile understory trees. The test data were collected with a Leica HDS6100 TLS system in a single-scan mode. The geometrical structures of juvenile understory trees are extracted by model fitting. Cones are used to model trunks and branches. Principal component analysis (PCA) is adopted to calculate their major axes. Coordinate transformation and orthogonal projection are used to estimate the parameters of the cones. Then, AutoCAD is utilized to simulate the morphological characteristics of the understory trees, and to add secondary branches and leaves in a random way. Comparison of the reference and estimated values gives the regression equation and shows that the proposed parameter-extraction algorithm is credible. The results have basically verified the applicability of TLS for field phenotyping of juvenile understory trees.

  3. $n$ -Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional ([Formula: see text]) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the [Formula: see text] Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates [Formula: see text] Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of [Formula: see text] Cat maps.
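
    For orientation, the classical two-dimensional Arnold Cat map iterated on a discrete grid is sketched below as a pseudorandom sequence generator; the paper's n-dimensional Laplace-expansion construction and its extra control parameters are not reproduced here.

```python
import numpy as np

# Iterate the classical 2-D cat map on a discrete state space and emit one coordinate.
N = 2**16                                  # modulus of the discrete state space
A = np.array([[1, 1],
              [1, 2]], dtype=np.int64)     # classical cat matrix, determinant = 1

state = np.array([12345, 6789], dtype=np.int64)
out = []
for _ in range(10):
    state = (A @ state) % N                # one cat-map iteration
    out.append(int(state[0]))              # use one coordinate as the output stream

print(out)
```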

  4. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  5. Bonded-cell model for particle fracture.

    PubMed

    Nguyen, Duc-Hanh; Azéma, Emilien; Sornay, Philippe; Radjai, Farhang

    2015-02-01

    Particle degradation and fracture play an important role in natural granular flows and in many applications of granular materials. We analyze the fracture properties of two-dimensional disklike particles modeled as aggregates of rigid cells bonded along their sides by a cohesive Mohr-Coulomb law and simulated by the contact dynamics method. We show that the compressive strength scales with tensile strength between cells but depends also on the friction coefficient and a parameter describing cell shape distribution. The statistical scatter of compressive strength is well described by the Weibull distribution function with a shape parameter varying from 6 to 10 depending on cell shape distribution. We show that this distribution may be understood in terms of percolating critical intercellular contacts. We propose a random-walk model of critical contacts that leads to particle size dependence of the compressive strength in good agreement with our simulation data.
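
    A small sketch of the Weibull description of compressive strength mentioned above: fit a Weibull distribution to a set of particle strengths and read off the shape parameter (the abstract reports shape values of roughly 6 to 10). The strengths below are synthetic draws, not simulation output.

```python
from scipy import stats

# Fit a Weibull distribution (location fixed at 0) to simulated compressive strengths.
strengths = stats.weibull_min.rvs(c=8.0, scale=1.0, size=500, random_state=8)

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
print(f"fitted Weibull shape (modulus): {shape:.2f}, scale: {scale:.3f}")
```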

  6. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  7. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of implementing an acceleration of more than 10 times faster than the hydrologic model without compromising the predictive accuracy.
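
    A minimal one-dimensional polynomial chaos sketch: approximate a toy model response in probabilists' Hermite polynomials of a standard normal input by regression, then recover the output mean and variance from the coefficients. This only illustrates the PCE idea; the paper's two-stage factorial PCE ensemble and the HYMOD model are not reproduced.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(9)

def model(xi):                         # toy response of a standard normal input (assumed)
    return np.exp(0.3 * xi) + 0.1 * xi**2

deg = 4
xi = rng.standard_normal(2000)
y = model(xi)

Psi = hermevander(xi, deg)             # basis He_0..He_deg evaluated at the samples
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # regression estimate of PCE coefficients

# orthogonality of He_k under the standard normal weight: E[He_j He_k] = k! delta_jk
mean_pce = coef[0]
var_pce = sum(coef[k]**2 * factorial(k) for k in range(1, deg + 1))
print("PCE mean/variance:", mean_pce, var_pce)
print("MC  mean/variance:", y.mean(), y.var())
```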

  8. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule, which starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It is a realization of Bayes' rule, which gives an approximation to the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters to calculate the posterior probability. How the priors will influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test will first be carried out to study how accurate the snow emission models and how explicit the snow priors need to be to keep the SWE error within a certain amount. Synthetic TB simulated from the measured snow properties plus a 2-K observation error will be used for this purpose. The test aims to provide guidance on the MCMC application under different circumstances. Later, the method will be used for the snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using the measured TB from ground-based radiometers at different bands. Based on the previous work, the error in these practical cases will be studied, and the error sources will be separated and quantified.

  9. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    PubMed Central

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-wan; Xie, Neng-gang

    2016-01-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network, were analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed. PMID:27845430

  10. Visible and near infrared spectroscopy coupled to random forest to quantify some soil quality parameters

    NASA Astrophysics Data System (ADS)

    de Santana, Felipe Bachion; de Souza, André Marcelo; Poppi, Ronei Jesus

    2018-02-01

    This study evaluates the use of visible and near infrared spectroscopy (Vis-NIRS) combined with multivariate regression based on random forest to quantify some soil quality parameters. The parameters analyzed were soil cation exchange capacity (CEC), sum of exchange bases (SB), organic matter (OM), clay and sand present in the soils of several regions of Brazil. Current methods for evaluating these parameters are laborious, time-consuming and require various wet analytical methods that are not adequate for use in precision agriculture, where faster and automatic responses are required. The random forest regression models were statistically better than PLS regression models for CEC, OM, clay and sand, demonstrating resistance to overfitting, attenuating the effect of outlier samples and indicating the most important variables for the model. The methodology demonstrates the potential of Vis-NIRS as an alternative for the determination of CEC, SB, OM, sand and clay, making it possible to develop a fast and automatic analytical procedure.
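
    Not the study's data or pipeline; a minimal scikit-learn sketch of random forest regression on spectra, with synthetic stand-in "spectra" and a synthetic CEC target, illustrating the fit, hold-out scoring, and variable-importance ranking mentioned above.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Synthetic stand-in data: 300 soil samples x 500 spectral bands, with CEC
    # loosely tied to a few bands (the paper uses measured Vis-NIR spectra).
    X = rng.normal(size=(300, 500))
    cec = 10.0 + 3.0 * X[:, 50] - 2.0 * X[:, 200] + rng.normal(0.0, 0.5, 300)

    X_train, X_test, y_train, y_test = train_test_split(X, cec, random_state=0)

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)

    print("R2 on held-out samples:", round(model.score(X_test, y_test), 3))
    # Variable importances point to the most informative wavelengths.
    print("top bands:", np.argsort(model.feature_importances_)[::-1][:5])
    ```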

  11. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    NASA Astrophysics Data System (ADS)

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-Wan; Xie, Neng-Gang

    2016-11-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network, were analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed.

  12. The Impact of Two Workplace-Based Health Risk Appraisal Interventions on Employee Lifestyle Parameters, Mental Health and Work Ability: Results of a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Addley, K.; Boyd, S.; Kerr, R.; McQuillan, P.; Houdmont, J.; McCrory, M.

    2014-01-01

    Health risk appraisals (HRA) are a common type of workplace health promotion programme offered by American employers. In the United Kingdom, evidence of their effectiveness for promoting health behaviour change remains inconclusive. This randomized controlled trial examined the effects of two HRA interventions on lifestyle parameters, mental…

  13. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
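
    As a quick numerical illustration of the parameter reduction quoted above, the sketch below evaluates both expressions for the eight-trait (k = 8) beef cattle application; the choices of m are arbitrary examples.

    ```python
    # Dispersion parameters: full covariance vs. reduced-rank model with m principal components.
    k = 8                                  # number of genetic effects (traits)
    full = k * (k + 1) // 2                # 36 parameters for the full covariance matrix
    for m in (1, 2, 3, 4):
        reduced = m * (2 * k - m + 1) // 2
        print(f"m = {m}: {reduced} parameters instead of {full}")
    ```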

  14. Improvement of antioxidant and defense properties of Tomato (var. Pusa Rohini) by application of bioaugmented compost

    PubMed Central

    Verma, Shikha; Sharma, Anamika; Kumar, Raj; Kaur, Charanjit; Arora, Anju; Shah, Raghubir; Nain, Lata

    2014-01-01

    Nutrient management practices play a significant role in improving the nutritional quality of tomato. The present study deals with the evaluation of compost prepared using Effective Microorganisms (EM) on antioxidant and defense enzyme activities of tomato (Lycopersicon esculentum). A field experiment with five treatments (control, chemical fertilizer and EM compost alone and in combination) was conducted in a randomized block design. An increment of 31.83% in tomato yield was recorded with the combined use of EM compost and half the recommended dose of chemical fertilizers (N50P30K25 + EM compost at the rate of 5 t ha−1). Similarly, fruit quality was improved in terms of lycopene content (35.52%), antioxidant activity (24–63%) and defense enzyme activities (11–54%) in tomatoes in this treatment as compared to the application of the recommended dose of fertilizers. Soil microbiological parameters also exhibited an increase of 7–31% in enzyme activities in this treatment. Significant correlations between fruit quality parameters and soil microbiological activities reveal the positive impact of EM compost, which may be adopted as an eco-friendly strategy for the production of high quality edible products. PMID:25972746

  15. Application of tooth brushing behavior to active rest.

    PubMed

    Sadachi, Hidetoshi; Murakami, Yoshinori; Tonomura, Manabu; Yada, Yukihiro; Simoyama, Ichiro

    2010-01-01

    We evaluated the usefulness of tooth brushing with toothpaste as active rest using the flicker value as a physiological parameter and a subjective questionnaire as a psychological parameter. Seventeen healthy, right-handed subjects (12 males and 5 females) aged 22.5 +/- 1.5 yr (mean +/- standard deviation) were randomly divided into tooth brushing with toothpaste (N=9) and non-tooth brushing groups (N=8). The subjects performed a serial calculation task for 20 min using personal computers. Subsequently, the tooth brushing group brushed their teeth, and the flicker value and mood were compared before and after the tooth brushing. The flicker value significantly increased in the tooth brushing group compared with the non-tooth brushing group (p<0.05). Concerning the mood, in the tooth brushing group, the incidence of a "feeling of being refreshed" significantly increased (p<0.05), that of "concentration power" or a "feeling of clear-headedness" tended to increase (p<0.1), and that of "lassitude" or "sleepiness" significantly decreased (p<0.01). Somatosensory stimulation and intraoral tactile stimulation during tooth brushing activated cerebral activity, producing refreshing effects. These results suggest the applicability of tooth brushing to active rest.

  16. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven to be effective in determining the difference between the original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
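
    Not the authors' test suite; a small sketch of the root mean square error measure itself, computed between a plain image and a stand-in "encrypted" image (uniform random pixels used purely as a placeholder for an S-box-encrypted image).

    ```python
    import numpy as np

    def rmse(plain, cipher):
        """Root mean square error between a plain image and its encrypted version."""
        plain = plain.astype(np.float64)
        cipher = cipher.astype(np.float64)
        return np.sqrt(np.mean((plain - cipher) ** 2))

    rng = np.random.default_rng(2)
    plain = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)    # toy 8-bit image
    cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # stand-in ciphertext

    print(f"RMSE = {rmse(plain, cipher):.2f}")  # larger values indicate stronger scrambling
    ```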

  17. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to some physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of the objective function and gradient, involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
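
    Not the authors' scalable solver; a small sketch of the Hutchinson-type randomized trace estimator that underlies many randomized log-determinant schemes, since the D-optimality objective involves log det A = tr(log A). The dense matrix logarithm below is only to keep the toy self-contained; large-scale codes would apply log(A) to probe vectors matrix-free.

    ```python
    import numpy as np
    from scipy.linalg import logm

    rng = np.random.default_rng(3)

    # Small symmetric positive definite stand-in for an information matrix.
    n = 200
    B = rng.normal(size=(n, n))
    A = np.eye(n) + 0.1 * (B @ B.T) / n

    L = np.real(logm(A))       # matrix logarithm (dense, fine at this toy size)
    exact = np.trace(L)        # log det A = tr(log A)

    # Hutchinson estimator: average z^T log(A) z over random +/-1 probe vectors.
    num_probes = 50
    samples = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        samples.append(z @ L @ z)
    estimate = float(np.mean(samples))

    print(f"exact log det  = {exact:.4f}")
    print(f"randomized est = {estimate:.4f} (+/- {np.std(samples) / np.sqrt(num_probes):.4f})")
    ```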

  18. Suitability of Smartphone Inertial Sensors for Real-Time Biofeedback Applications.

    PubMed

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-02-27

    This article studies the suitability of smartphones with built-in inertial sensors for biofeedback applications. Biofeedback systems use various sensors to measure body functions and parameters. These sensor data are analyzed, and the results are communicated back to the user, who then tries to act on the feedback signals. Smartphone inertial sensors can be used to capture body movements in biomechanical biofeedback systems. These sensors exhibit various inaccuracies that induce significant angular and positional errors. We studied deterministic and random errors of smartphone accelerometers and gyroscopes, primarily focusing on their biases. Based on extensive measurements, we determined accelerometer and gyroscope noise models and bias variation ranges. Then, we compiled a table of predicted positional and angular errors under various biofeedback system operation conditions. We suggest several bias compensation options that are suitable for various examples of use in real-time biofeedback applications. Measurements within the developed experimental biofeedback application show that under certain conditions, even uncompensated sensors can be used for real-time biofeedback. For general use, especially for more demanding biofeedback applications, sensor biases should be compensated. We are convinced that real-time biofeedback systems based on smartphone inertial sensors are applicable to many similar examples in sports, healthcare, and other areas.

  19. Suitability of Smartphone Inertial Sensors for Real-Time Biofeedback Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    This article studies the suitability of smartphones with built-in inertial sensors for biofeedback applications. Biofeedback systems use various sensors to measure body functions and parameters. These sensor data are analyzed, and the results are communicated back to the user, who then tries to act on the feedback signals. Smartphone inertial sensors can be used to capture body movements in biomechanical biofeedback systems. These sensors exhibit various inaccuracies that induce significant angular and positional errors. We studied deterministic and random errors of smartphone accelerometers and gyroscopes, primarily focusing on their biases. Based on extensive measurements, we determined accelerometer and gyroscope noise models and bias variation ranges. Then, we compiled a table of predicted positional and angular errors under various biofeedback system operation conditions. We suggest several bias compensation options that are suitable for various examples of use in real-time biofeedback applications. Measurements within the developed experimental biofeedback application show that under certain conditions, even uncompensated sensors can be used for real-time biofeedback. For general use, especially for more demanding biofeedback applications, sensor biases should be compensated. We are convinced that real-time biofeedback systems based on smartphone inertial sensors are applicable to many similar examples in sports, healthcare, and other areas. PMID:26927125

  20. A system performance throughput model applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

    As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues related to total system validation are addressed. An evaluative throughput model is presented which can be used to generate a human operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events. The first of these calls for a detailed task analysis while the second is for a contingency event assessment. The last two required input parameters involve actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found using digital simulations and identical, representative, random data to yield the smallest variance.

  1. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform comprises four basic parameters: gyrator transform order, type and level of mother wavelet, and position of different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool, which can be used in various optical information processing applications, including image encryption and image compression. This tool can also be applied to secure color, multispectral, and three-dimensional images.

  2. Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
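
    Not the filter from the paper (which estimates eight coefficients for a two-bearing rotor); a minimal joint state-and-parameter extended Kalman filter sketch for a single-degree-of-freedom mass-spring-damper stand-in, estimating one stiffness and one damping coefficient from noisy displacement. All masses, true coefficients, noise covariances, and tuning values are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    m, k_true, c_true = 1.0, 400.0, 2.0   # stand-in mass, stiffness, damping
    dt, steps = 1e-3, 8000

    def force(t):
        return 5.0 * np.sin(15.0 * t)     # imbalance-like harmonic excitation

    # Simulate the "true" response and noisy displacement measurements.
    q, v, meas = 0.0, 0.0, []
    for i in range(steps):
        a = (force(i * dt) - k_true * q - c_true * v) / m
        q, v = q + dt * v, v + dt * a
        meas.append(q + rng.normal(0.0, 1e-4))

    # Extended Kalman filter on the augmented state [q, v, k, c].
    x = np.array([0.0, 0.0, 200.0, 1.0])      # deliberately wrong initial k and c
    P = np.diag([1e-6, 1e-6, 1e4, 1.0])
    Q = np.diag([1e-10, 1e-8, 1e-2, 1e-4])    # process noise (user-tuned parameters)
    R = (1e-4) ** 2                           # measurement noise variance
    H = np.array([[1.0, 0.0, 0.0, 0.0]])      # we measure displacement only

    for i, z in enumerate(meas):
        qe, ve, ke, ce = x
        # Predict with the Euler-discretized model; k and c follow a random walk.
        a = (force(i * dt) - ke * qe - ce * ve) / m
        x = np.array([qe + dt * ve, ve + dt * a, ke, ce])
        F = np.eye(4) + dt * np.array([[0.0, 1.0, 0.0, 0.0],
                                       [-ke / m, -ce / m, -qe / m, -ve / m],
                                       [0.0, 0.0, 0.0, 0.0],
                                       [0.0, 0.0, 0.0, 0.0]])
        P = F @ P @ F.T + Q
        # Update with the noisy displacement measurement.
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(4) - K @ H) @ P

    print(f"estimated k = {x[2]:.1f} (true {k_true}), c = {x[3]:.2f} (true {c_true})")
    ```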

  3. On some dynamical chameleon systems

    NASA Astrophysics Data System (ADS)

    Burkin, I. M.; Kuznetsova, O. I.

    2018-03-01

    It is now well known that dynamical systems can be categorized into systems with self-excited attractors and systems with hidden attractors. A self-excited attractor has a basin of attraction that is associated with an unstable equilibrium, while a hidden attractor has a basin of attraction that does not intersect with small neighborhoods of any equilibrium points. Hidden attractors play an important role in engineering applications because they allow unexpected and potentially disastrous responses to perturbations in a structure like a bridge or an airplane wing. In addition, complex behaviors of chaotic systems have been applied in various areas, from image watermarking, audio encryption schemes, asymmetric color pathological image encryption, and chaotic masking communication to random number generation. Recently, researchers have discovered the so-called “chameleon systems”. These systems were so named because they demonstrate self-excited or hidden oscillations depending on the value of their parameters. The present paper offers a simple algorithm for synthesizing one-parameter chameleon systems. The authors trace the evolution of the Lyapunov exponents and the Kaplan-Yorke dimension of such systems as the parameter changes.

  4. Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes

    NASA Astrophysics Data System (ADS)

    Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew

    2018-03-01

    We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the developed method to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6 °N, 37.4 °E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ2 hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov Chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.

  5. Bayesian random-effect model for predicting outcome fraught with heterogeneity--an illustration with episodes of 44 patients with intractable epilepsy.

    PubMed

    Yen, A M-F; Liou, H-H; Lin, H-L; Chen, T H-H

    2006-01-01

    The study aimed to develop a predictive model to deal with data fraught with heterogeneity that cannot be explained by sampling variation or measured covariates. The random-effect Poisson regression model was first proposed to deal with over-dispersion for data fraught with heterogeneity after making allowance for measured covariates. A Bayesian acyclic graphic model in conjunction with the Markov Chain Monte Carlo (MCMC) technique was then applied to estimate the parameters of both the relevant covariates and the random effect. A predictive distribution was then generated to compare the predicted with the observed for the Bayesian model with and without the random effect. Data from repeated measurement of episodes among 44 patients with intractable epilepsy were used as an illustration. The application of Poisson regression to the epilepsy data without taking heterogeneity into account yielded a large value of heterogeneity (heterogeneity factor = 17.90, deviance = 1485, degrees of freedom (df) = 83). After taking the random effect into account, the heterogeneity factor was greatly reduced (heterogeneity factor = 0.52, deviance = 42.5, df = 81). The Pearson chi2 values for the comparison between the expected and observed seizure frequencies at two and three months were 34.27 (p = 1.00) for the model with the random effect and 1799.90 (p < 0.0001) for the model without, respectively. The Bayesian acyclic model using the MCMC method was demonstrated to have great potential for disease prediction when data show over-dispersion attributable either to correlated responses or to subject-to-subject variability.
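
    Not the paper's Bayesian acyclic graph model; a quick frequentist sketch of the over-dispersion diagnostic quoted above (heterogeneity factor = deviance / degrees of freedom), fitting a plain Poisson GLM with statsmodels to synthetic repeated counts that contain an unmodelled subject-level random effect. All data-generating values are assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)

    # Synthetic stand-in data: repeated episode counts per subject, with a
    # subject-specific random effect that the plain Poisson model ignores.
    n_subjects, n_visits = 44, 3
    subject_effect = rng.normal(0.0, 0.8, n_subjects)
    age = rng.uniform(20, 60, n_subjects)

    rows, counts = [], []
    for i in range(n_subjects):
        for _ in range(n_visits):
            mu = np.exp(1.0 + 0.01 * (age[i] - 40.0) + subject_effect[i])
            counts.append(rng.poisson(mu))
            rows.append([1.0, age[i]])          # intercept + one measured covariate

    X, y = np.asarray(rows), np.asarray(counts)

    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    heterogeneity = fit.deviance / fit.df_resid
    print(f"heterogeneity factor = {heterogeneity:.2f} (values >> 1 signal over-dispersion)")
    ```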

  6. Double-blind comparison of two types of benzocaine lozenges for the treatment of acute pharyngitis.

    PubMed

    Busch, Regina; Graubaum, Hans-Joachim; Grünwald, Jörg; Schmidt, Mathias

    2010-01-01

    In a reference-controlled double-blind trial in patients with acute pharyngitis the effects of a newly developed lozenge containing 8 mg of benzocaine (p-aminobenzoic acid ethyl ester, CAS 94-09-7) were compared with those of an identically dosed commercial pastille. 246 patients were randomized to receive either the lozenges (group A, n = 123) or the pastilles (group B, n = 123). Each patient took a total of six doses within 12 h according to the double-dummy principle, with each single dose spaced by 2 h. The primary parameter was the assessment of the responder rate with ≥ 50 % pain relief within 15 min post application. Further parameters included the relative relief of pain in the course of the study and the tolerability of the formulation. After application of the first unit the comparison of groups yielded very similar and statistically not differing results for efficacy in both groups, with responder rates of 25.2 % and 22.0 % in groups A and B, respectively. One adverse drug reaction was observed in group B (burning and tingling feeling on the tongue), which, however, did not lead to discontinuation of study participation. In all other cases tolerability was stated to be "good to very good". The application of the benzocaine lozenges was statistically non-inferior to the use of the pastilles.

  7. Measuring order in disordered systems and disorder in ordered systems: Random matrix theory for isotropic and nematic liquid crystals and its perspective on pseudo-nematic domains

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Stratt, Richard M.

    2018-05-01

    Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
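
    Not the random-matrix analysis of the paper; a small Monte Carlo sketch of the finite-size baseline it compares against, computing the nematic order-parameter tensor Q for N isotropically oriented unit vectors and the spread of its largest eigenvalue over many realizations. Fluctuations well above this purely random baseline are the kind of signal attributed to pseudo-nematic domains.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def largest_q_eigenvalue(n_molecules):
        """Largest eigenvalue of Q = <(3/2) u u^T - (1/2) I> for isotropic orientations."""
        u = rng.normal(size=(n_molecules, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        q = 1.5 * (u.T @ u) / n_molecules - 0.5 * np.eye(3)
        return np.linalg.eigvalsh(q)[-1]

    # In a truly disordered system the apparent order parameter decays roughly as N**-0.5.
    for n in (100, 1000, 10000):
        s = [largest_q_eigenvalue(n) for _ in range(200)]
        print(f"N = {n:5d}: <S> = {np.mean(s):.3f} +/- {np.std(s):.3f}")
    ```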

  8. Time evolution of strategic and non-strategic 2-party competitions

    NASA Astrophysics Data System (ADS)

    Shanahan, Linda Lee

    The study of the nature of conflict and competition and its many manifestations---military, social, environmental, biological---has enjoyed a long history and garnered the attention of researchers in many disciplines. It will no doubt continue to do so. That the topic is of interest to some in the physics community has to do with the critical role physicists have shouldered in furthering knowledge in every sphere with reference to behavior observed in nature. The techniques, in the case of this research, have been rooted in statistical physics and the science of probability. Our tools include the use of cellular automata and random number generators in an agent-based modeling approach. In this work, we first examine a type of "conflict" model where two parties vie for the same resources with no apparent strategy or intelligence, their interactions devolving to random encounters. Analytical results for the time evolution of the model are presented with multiple examples. What at first encounter seems a trivial formulation is found to be a model with rich possibilities for adaptation to far more interesting and potentially relevant scenarios. An example of one such possibility---random events punctuated by correlated non-random ones---is included. We then turn our attention to a different conflict scenario, one in which one party acts with no strategy and in a random manner while the other receives intelligence, makes decisions, and acts with a specific purpose. We develop a set of parameters and examine several examples for insight into the model behavior in different regions of the parameter space, finding both intuitive and non-intuitive results. Of particular interest is the role of the so-called "intelligence" in determining the outcome of a conflict. We consider two applications for which specific conditions are imposed on the parameters. First, can an invader beginning in a single cell or site and utilizing a search and deploy strategy gain territory in an environment defined by constant exposure to random attacks? What magnitude of defense is sufficient to eliminate or contain such growth, and what role does the quantity and quality of available information play? Second, we build on the idea of a single intruder to include a look at a scenario where a single intruder or a small group of intruders invades or attacks a space which may have significant restrictions (such as walls or other inaccessible spaces). The importance of information and strategy emerges in keeping with intuitive expectations. Additional derivations are provided in the appendix, along with the MATLAB codes for the models. References are relegated to the end of the thesis.

  9. Uncertainty characterization of HOAPS 3.3 latent heat-flux-related parameters

    NASA Astrophysics Data System (ADS)

    Liman, Julian; Schröder, Marc; Fennig, Karsten; Andersson, Axel; Hollmann, Rainer

    2018-03-01

    Latent heat flux (LHF) is one of the main contributors to the global energy budget. As the density of in situ LHF measurements over the global oceans is generally poor, the potential of remotely sensed LHF for meteorological applications is enormous. However, to date none of the available satellite products have included estimates of systematic, random, and sampling uncertainties, all of which are essential for assessing their quality. Here, the challenge is taken on by matching LHF-related pixel-level data of the Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite (HOAPS) climatology (version 3.3) to in situ measurements originating from a high-quality data archive of buoys and selected ships. Assuming the ground reference to be bias-free, this allows for deriving instantaneous systematic uncertainties as a function of four atmospheric predictor variables. The approach is regionally independent and therefore overcomes the issue of sparse in situ data densities over large oceanic areas. Likewise, random uncertainties are derived, which include not only a retrieval component but also contributions from in situ measurement noise and the collocation procedure. A recently published random uncertainty decomposition approach is applied to isolate the random retrieval uncertainty of all LHF-related HOAPS parameters. It makes use of two combinations of independent data triplets of both satellite and in situ data, which are analysed in terms of their pairwise variances of differences. Instantaneous uncertainties are finally aggregated, allowing for uncertainty characterizations on monthly to multi-annual timescales. Results show that systematic LHF uncertainties range between 15 and 50 W m-2 with a global mean of 25 W m-2. Local maxima are mainly found over the subtropical ocean basins as well as along the western boundary currents. Investigations indicate that contributions from qa (U) to the overall LHF uncertainty are on the order of 60 % (25 %). From an instantaneous point of view, random retrieval uncertainties are specifically large over the subtropics with a global average of 37 W m-2. In a climatological sense, their magnitudes become negligible, as do respective sampling uncertainties. Regional and seasonal analyses suggest that largest total LHF uncertainties are seen over the Gulf Stream and the Indian monsoon region during boreal winter. In light of the uncertainty measures, the observed continuous global mean LHF increase up to 2009 needs to be treated with caution. The demonstrated approach can easily be transferred to other satellite retrievals, which increases the significance of the present work.

  10. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study, whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99 % quantiles of some transfer functions.

  11. Random close packing of polydisperse jammed emulsions

    NASA Astrophysics Data System (ADS)

    Brujic, Jasna

    2010-03-01

    Packing problems are everywhere, ranging from oil extraction through porous rocks to grain storage in silos and the compaction of pharmaceutical powders into tablets. At a given density, particulate systems pack into a mechanically stable and amorphous jammed state. Theoretical frameworks have proposed a connection between this jammed state and the glass transition, a thermodynamics of jamming, as well as geometric modeling of random packings. Nevertheless, a simple underlying mechanism for the random assembly of athermal particles, analogous to crystalline ordering, remains unknown. Here we use 3D measurements of polydisperse packings of emulsion droplets to build a simple statistical model in which the complexity of the global packing is distilled into a local stochastic process. From the perspective of a single particle the packing problem is reduced to the random formation of nearest neighbors, followed by a choice of contacts among them. The two key parameters in the model, the available space around a particle and the ratio of contacts to neighbors, are directly obtained from experiments. Remarkably, we demonstrate that this ``granocentric'' view captures the properties of the polydisperse emulsion packing, ranging from the microscopic distributions of nearest neighbors and contacts to local density fluctuations and all the way to the global packing density. Further applications to monodisperse and bidisperse systems quantitatively agree with previously measured trends in global density. This model therefore reveals a general principle of organization for random packing and lays the foundations for a theory of jammed matter.

  12. Investigation of the contextual interference effect in the manipulation of the motor parameter of over-all force.

    PubMed

    Goodwin, J E; Meeuwsen, H J

    1996-12-01

    This investigation examined the contextual interference effect when manipulating over-all force in a golf-putting task. Undergraduate women (N = 30) were randomly assigned to a Random, Blocked-Random, or Blocked practice condition and practiced golf putting from distances of 2.43 m, 3.95 m, and 5.47 m during acquisition. Subjects in the Random condition practiced trials in a quasirandom sequence and those in the Blocked-Random condition practiced trials initially in a blocked sequence with the remainder of the trials practiced in a quasirandom sequence. In the Blocked condition subjects practiced trials in a blocked sequence. A 24-hr. transfer test consisted of 30 trials with 10 trials each from 1.67 m, 3.19 m, and 6.23 m. Transfer scores supported the Magill and Hall (1990) hypothesis that, when task variations involve learning parameters of a generalized motor program, the benefit of random practice over blocked practice would not be found.

  13. Does postural control improve following application of transcutaneous electrical nerve stimulation in diabetic peripheral neuropathic patients? A randomized placebo control trial.

    PubMed

    Saadat, Z; Rojhani-Shirazi, Z; Abbasi, L

    2017-12-01

    Peripheral neuropathy is the most common problem of diabetes. Neuropathy leads to lower extremity somatosensory deficits and postural instability in these patients. However, there is insufficient evidence on improving postural control in these patients. To investigate the effects of transcutaneous electrical nerve stimulation (TENS) on postural control in patients with diabetic neuropathy. Twenty-eight patients with diabetic neuropathy (40-55 Y/O) participated in this RCT study. Fourteen patients in the case group received TENS, and sham TENS was used for the control group. A force plate platform was used to extract sway velocity and COP displacement parameters for postural control evaluation. The mean sway velocity and center of pressure displacement along the mediolateral and anteroposterior axes were not significantly different between the two groups after TENS application (p>0.05). Application of 5 min of high-frequency TENS on the knee joint could not improve postural control in patients with diabetic neuropathy. Copyright © 2017. Published by Elsevier Ltd.

  14. Performance Evaluation of an Enhanced Uplink 3.5G System for Mobile Healthcare Applications.

    PubMed

    Komnakos, Dimitris; Vouyioukas, Demosthenes; Maglogiannis, Ilias; Constantinou, Philip

    2008-01-01

    The present paper studies the prospects and the performance of a forthcoming high-speed third generation (3.5G) networking technology, called enhanced uplink, for delivering mobile health (m-health) applications. The performance of 3.5G networks is a critical factor for successful development of m-health services perceived by end users. In this paper, we propose a methodology for performance assessment based on the joint uplink transmission of voice, real-time video, biological data (such as electrocardiogram, vital signals, and heart sounds), and healthcare records file transfer. Various scenarios were considered in terms of real-time, nonreal-time, and emergency applications in random locations, where no other system but 3.5G is available. The accomplishment of quality of service (QoS) was explored through a step-by-step improvement of enhanced uplink system's parameters, attributing the network system for the best performance in the context of the desired m-health services.

  15. Performance Evaluation of an Enhanced Uplink 3.5G System for Mobile Healthcare Applications

    PubMed Central

    Komnakos, Dimitris; Vouyioukas, Demosthenes; Maglogiannis, Ilias; Constantinou, Philip

    2008-01-01

    The present paper studies the prospects and the performance of a forthcoming high-speed third generation (3.5G) networking technology, called enhanced uplink, for delivering mobile health (m-health) applications. The performance of 3.5G networks is a critical factor for successful development of m-health services perceived by end users. In this paper, we propose a methodology for performance assessment based on the joint uplink transmission of voice, real-time video, biological data (such as electrocardiogram, vital signals, and heart sounds), and healthcare records file transfer. Various scenarios were considered in terms of real-time, nonreal-time, and emergency applications in random locations, where no other system but 3.5G is available. The accomplishment of quality of service (QoS) was explored through a step-by-step improvement of enhanced uplink system's parameters, attributing the network system for the best performance in the context of the desired m-health services. PMID:19132096

  16. Feasibility and safety of xenon compared with sevoflurane anaesthesia in coronary surgical patients: a randomized controlled pilot study.

    PubMed

    Stoppe, C; Fahlenkamp, A V; Rex, S; Veeck, N C; Gozdowsky, S C; Schälte, G; Autschbach, R; Rossaint, R; Coburn, M

    2013-09-01

    To date, only limited data exist about the use of xenon as an anaesthetic agent in patients undergoing cardiac surgery. The favourable cardio- and neuroprotective properties of xenon might attenuate postoperative complications, improve outcome, and reduce the incidence of delirium. Thus, the aims of this study were to investigate the feasibility and safety of balanced xenon anaesthesia in patients undergoing cardiac surgery and to gather pilot data for a future randomized multicentre study. Thirty patients undergoing elective coronary artery bypass grafting were enrolled in this randomized, single-blind controlled trial. They were randomized to receive balanced general anaesthesia with either xenon (45-50 vol%) or sevoflurane (1-1.4 vol%). The primary outcome was the occurrence of adverse events (AEs). Secondary outcome parameters were feasibility criteria (bispectral index, perioperative haemodynamic, and respiratory profile) and safety parameters (dosage of study treatments, renal function, intraoperative blood loss, need for inotropic support, regional cerebral tissue oxygenation). Furthermore, at predefined time points, systemic and pulmonary haemodynamics were assessed by the use of a pulmonary artery catheter. There were no patient characteristic differences between the groups. Patients undergoing xenon anaesthesia did not differ with respect to the incidence of AE (6 vs 8, P=0.464) compared with the sevoflurane group. No differences were detected regarding secondary feasibility and safety criteria. The haemodynamic and respiratory profile was comparable between the treatment groups. Balanced xenon anaesthesia is feasible and safe compared with sevoflurane anaesthesia in patients undergoing coronary artery bypass surgery. Acronym CARDIAX: A pre- and post-coronary artery bypass graft implantation disposed application of xenon. Clinical trial registration ClinicalTrials.gov: NCT01285271; EudraCT-number: 2010-023942-63. Approved by the ethics committee 'Ethik-Kommission an der Medizinischen Fakultät der Rheinisch-Westfälischen Technischen Hochschule Aachen (RWTH Aachen)': EK-218/10.

  17. Effects of live sax music on various physiological parameters, pain level, and mood level in cancer patients: a randomized controlled trial.

    PubMed

    Burrai, Francesco; Micheluzzi, Valentina; Bugani, Valentina

    2014-01-01

    Few randomized controlled trial studies have focused on the effect of music in cancer patients, and there are no randomized controlled trials on the effects of live music with saxophone in cancer patients. To determine the effects of live saxophone music on various physiological parameters, pain level, and mood level. A randomized controlled trial study. 52 cancer patients were randomized to a control group (n = 26) or an experimental group (n = 26) whose members received 30 minutes of live music therapy with saxophone. Systolic and diastolic blood pressure, pulse rate, glycemia, oxygen saturation, pain level, and mood level were measured before and after the live music performance. There was a statistically significant difference between the groups for oxygen saturation (p = 0.003) and mood level (p = 0.001). Live music performed with a saxophone could be introduced in oncology care to improve the oxygen saturation and mood in cancer patients.

  18. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
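
    Not the dynamical system studied in the paper; a one-dimensional sketch of a Legendre polynomial chaos expansion for a simple response of a single uncertain parameter distributed uniformly on [-1, 1], with coefficients obtained by Gauss-Legendre quadrature and the mean and variance recovered from the coefficients and checked against Monte Carlo. The response function is an arbitrary stand-in.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(6)

    def response(xi):
        # Stand-in model output as a function of the uncertain parameter xi.
        return np.exp(0.5 * xi) + 0.3 * xi ** 2

    order = 8
    nodes, weights = legendre.leggauss(32)        # Gauss-Legendre rule on [-1, 1]

    # PCE coefficients: c_k = (2k + 1)/2 * integral of response(x) * P_k(x) dx.
    coeffs = []
    for k in range(order + 1):
        pk = legendre.legval(nodes, [0.0] * k + [1.0])
        coeffs.append((2 * k + 1) / 2.0 * np.sum(weights * response(nodes) * pk))
    coeffs = np.array(coeffs)

    # Post-processing: for uniform xi, E[P_j P_k] = delta_jk / (2k + 1).
    mean_pce = coeffs[0]
    var_pce = np.sum(coeffs[1:] ** 2 / (2.0 * np.arange(1, order + 1) + 1.0))

    mc = response(rng.uniform(-1.0, 1.0, 200000))
    print(f"mean: PCE {mean_pce:.5f} vs MC {mc.mean():.5f}")
    print(f"var : PCE {var_pce:.5f} vs MC {mc.var():.5f}")
    ```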

  19. A theoretical approach to quantify the effect of random cracks on rock deformation in uniaxial compression

    NASA Astrophysics Data System (ADS)

    Zhou, Shuwei; Xia, Caichu; Zhou, Yu

    2018-06-01

    Cracks have a significant effect on the uniaxial compression of rocks. Thus, a theoretical analytical approach was proposed to assess the effects of randomly distributed cracks on the effective Young’s modulus during the uniaxial compression of rocks. Each stage of the rock failure during uniaxial compression was analyzed and classified. The analytical approach for the effective Young’s modulus of a rock with only a single crack was derived while considering the three crack states under stress, namely, opening, closure-sliding, and closure-nonsliding. The rock was then assumed to have many cracks with randomly distributed directions, and the effect of crack shape and number during each stage of the uniaxial compression on the effective Young’s modulus was considered. Thus, the approach for the effective Young’s modulus was used to obtain the whole stress-strain process of uniaxial compression. Afterward, the proposed approach was employed to analyze the effects of related parameters on the whole stress-strain curve. The proposed approach was eventually compared with some existing rock tests to validate its applicability and feasibility. The proposed approach has clear physical meaning and shows favorable agreement with the rock test results.

  20. Divergence instability of pipes conveying fluid with uncertain flow velocity

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh

    2018-02-01

    This article deals with the investigation of the probabilistic stability of pipes conveying fluid with stochastic flow velocity in the time domain. In fact, this study focuses on the effects of flow velocity randomness on the stability of pipes conveying fluid, while most research efforts have only focused on the influences of deterministic parameters on system stability. The Euler-Bernoulli beam and plug flow theory are employed to model the pipe structure and internal flow, respectively. In addition, the flow velocity is considered as a stationary random process with Gaussian distribution. Afterwards, the stochastic averaging method and Routh's stability criterion are used to investigate the stability conditions of the system. Consequently, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases with increasing power spectral density (PSD) of the random velocity. Moreover, as the PSD increases from zero, the effects of boundary condition type and the presence of an elastic foundation are diminished, while the influences of viscoelastic damping and mass ratio can increase. Finally, to make the study more applicable, regression analysis is utilized to develop design equations and facilitate further analyses for design purposes.

  1. Wave-induced fluid flow in random porous media: Attenuation and dispersion of elastic waves

    NASA Astrophysics Data System (ADS)

    Müller, Tobias M.; Gurevich, Boris

    2005-05-01

    A detailed analysis of the relationship between elastic waves in inhomogeneous, porous media and the effect of wave-induced fluid flow is presented. Based on the results of the poroelastic first-order statistical smoothing approximation applied to Biot's equations of poroelasticity, a model for elastic wave attenuation and dispersion due to wave-induced fluid flow in 3-D randomly inhomogeneous poroelastic media is developed. Attenuation and dispersion depend on linear combinations of the spatial correlations of the fluctuating poroelastic parameters. The observed frequency dependence is typical for a relaxation phenomenon. Further, the analytic properties of attenuation and dispersion are analyzed. It is shown that the low-frequency asymptote of the attenuation coefficient of a plane compressional wave is proportional to the square of frequency. At high frequencies the attenuation coefficient becomes proportional to the square root of frequency. A comparison with the 1-D theory shows that attenuation is of the same order but slightly larger in 3-D random media. Several modeling choices of the approach including the effect of cross correlations between fluid and solid phase properties are demonstrated. The potential application of the results to real porous materials is discussed.

  2. The Bayesian group lasso for confounded spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize the predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  3. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  4. Effects of manual lymph drainage on cardiac autonomic tone in healthy subjects.

    PubMed

    Kim, Sung-Joong; Kwon, Oh-Yun; Yi, Chung-Hwi

    2009-01-01

    This study was designed to investigate the effects of manual lymph drainage on the cardiac autonomic tone. Thirty-two healthy male subjects were randomly assigned to manual lymph drainage (MLD) (experimental) and rest (control) groups. Electrocardiogram (ECG) parameters were recorded with bipolar electrocardiography using standard limb lead positions. The pressure-pain threshold (PPT) was quantitatively measured using an algometer. Heart rate variability differed significantly between the experimental and control groups (p < 0.05), but the PPT in the upper trapezius muscle did not (p > 0.05). These findings indicate that the application of MLD was effective in reducing the activity of the sympathetic nervous system.

  5. Processing and properties of Pb(Mg(1/3)Nb(2/3))O3--PbTiO3 thin films by pulsed laser deposition

    NASA Astrophysics Data System (ADS)

    Tantigate, C.; Lee, J.; Safari, A.

    1995-03-01

    The objectives of this study were to prepare in situ Pb(Mg(1/3)Nb(2/3))O3 (PMN) and PMN-PT thin films by pulsed laser deposition and to investigate the electrical features of thin films for possible dynamic random access memory (DRAM) and microactuator applications. The impact of processing parameters such as composition, substrate temperature, and oxygen pressure on perovskite phase formation and dielectric characteristics was reported. It was found that the highest dielectric constant, measured at room temperature and 10 kHz, was attained from the PMN with 99% perovskite.

  6. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    NASA Technical Reports Server (NTRS)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs of a network simulator to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
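
    Not the deadlock simulator or the covering-array generator used in the study; a small sketch that measures how many of all 2-way parameter-value combinations a batch of randomly generated configurations actually exercises, which is the coverage comparison the abstract describes. The configuration space below is hypothetical.

    ```python
    import itertools
    import random

    random.seed(7)

    # Hypothetical simulator configuration space: parameter name -> possible values.
    params = {
        "topology": ["ring", "star", "mesh"],
        "buffer_size": [1, 4, 16],
        "routing": ["static", "adaptive"],
        "load": ["low", "medium", "high"],
    }
    names = list(params)

    # Every 2-way parameter-value combination that t-way (t = 2) testing must cover.
    all_pairs = {
        (a, va, b, vb)
        for a, b in itertools.combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }

    def random_test_coverage(num_tests):
        covered = set()
        for _ in range(num_tests):
            test = {name: random.choice(values) for name, values in params.items()}
            for a, b in itertools.combinations(names, 2):
                covered.add((a, test[a], b, test[b]))
        return len(covered) / len(all_pairs)

    for n in (5, 10, 20, 40):
        print(f"{n:3d} random tests cover {random_test_coverage(n):.0%} of all 2-way combinations")
    ```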

  7. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine.

    PubMed

    Howard, Jeremy T; Ashwell, Melissa S; Baynes, Ronald E; Brooks, James D; Yeatts, James L; Maltecca, Christian

    2018-01-01

    In livestock, the regulation of drugs used to treat animals has received increased attention, and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetic (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and a batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared with estimates obtained from PK parameters. Across both the PK-parameter and concentration-across-time models, a moderate heritability was estimated, and the model that utilized the plasma drug concentration across time resulted in estimates with a smaller standard error. Overall, a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics.
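
    As a purely illustrative sketch of the random regression idea above (not the authors' estimation code; sampling times, basis order, and variance components are assumed values), the snippet below builds a Legendre-polynomial design over standardized time and draws sire-specific deviations from a fixed population concentration curve.

    ```python
    # Minimal sketch: Legendre-polynomial random regression with simulated sire deviations.
    import numpy as np

    rng = np.random.default_rng(1)
    t_hours = np.linspace(0, 48, 11)                    # sampling times over 48 h (assumed)
    t_std = 2 * (t_hours - t_hours.min()) / (t_hours.max() - t_hours.min()) - 1  # map to [-1, 1]

    order = 2                                            # quadratic Legendre basis (assumed)
    Z = np.polynomial.legendre.legvander(t_std, order)   # n_times x (order+1) design matrix

    beta_pop = np.array([2.0, -1.0, 0.3])                # fixed population curve coefficients (illustrative)
    n_sires = 5
    sire_sd = np.array([0.4, 0.1, 0.05])                 # SDs of random regression coefficients (illustrative)

    for s in range(n_sires):
        u_s = rng.normal(0.0, sire_sd)                   # sire-specific deviation (intercept, slope, curvature)
        conc = Z @ (beta_pop + u_s)                      # expected trajectory for this sire
        print(f"sire {s}: trajectory from {conc[0]:.2f} to {conc[-1]:.2f}")
    ```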

  8. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

    PubMed Central

    Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian

    2018-01-01

    In livestock, the regulation of drugs used to treat animals has received increased attention, and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetic (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and a batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared with estimates obtained from PK parameters. Across both the PK-parameter and concentration-across-time models, a moderate heritability was estimated, and the model that utilized the plasma drug concentration across time resulted in estimates with a smaller standard error. Overall, a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics. PMID:29487615

  9. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart

    2010-05-28

    The dynamic characterization of lightweight structures is particularly complex, as the weight of sensors and instrumentation (cables, mounting of exciters, etc.) can distort the results. Varying mass loading or constraint effects between partial measurements may introduce errors in the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamic parameters. Typically these errors remain limited to a few percent. Inconsistent data sets, however, can result in major processing errors, with all related consequences for applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with the final aim of comparing modal model quality. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and with the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed, together with real-life measurement problems related to the different set-up conditions. Finally, an analysis of the estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameter observation as well as in uncertain, non-deterministic numerical model updating.

  10. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    NASA Astrophysics Data System (ADS)

    Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman

    2010-05-01

    The dynamic characterization of lightweight structures is particularly complex, as the weight of sensors and instrumentation (cables, mounting of exciters, etc.) can distort the results. Varying mass loading or constraint effects between partial measurements may introduce errors in the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamic parameters. Typically these errors remain limited to a few percent. Inconsistent data sets, however, can result in major processing errors, with all related consequences for applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with the final aim of comparing modal model quality. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and with the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed, together with real-life measurement problems related to the different set-up conditions. Finally, an analysis of the estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameter observation as well as in uncertain, non-deterministic numerical model updating.

  11. Analysis of random drop for gateway congestion control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hashem, Emam Salaheddin

    1989-01-01

    Lately, the growing demand on the Internet has prompted the need for more effective congestion control policies. Currently no gateway policy is used to relieve and signal congestion, which leads to unfair service to individual users and a degradation of overall network performance. Network simulation was used to illustrate the character of Internet congestion and its causes. A newly proposed gateway congestion control policy, called Random Drop, was considered as a promising solution to this pressing problem. Random Drop relieves resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped. The random choice should result in a drop distribution proportional to the bandwidth distribution among all contending TCP connections, thus applying the necessary fairness. Nonetheless, the simulation experiments demonstrate several shortcomings of this policy. Because Random Drop is a congestion control policy that is not applied until congestion has already occurred, it usually results in a high drop rate that hurts too many connections, including well-behaved ones. Even though the number of packets dropped differs from one connection to another depending on the buffer utilization upon overflow, the TCP recovery overhead is high enough to neutralize these differences, causing unfair congestion penalties. Besides, the drop distribution itself is an inaccurate representation of the average bandwidth distribution, missing much important information about the bandwidth utilization between buffer overflow events. A modification of Random Drop to perform congestion avoidance by applying the policy early was also proposed. Early Random Drop has the advantage of avoiding the high drop rate of buffer overflow. The early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively, algorithms for the dynamic adjustment of the parameters of Early Random Drop to suit the current network load must still be developed.
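
    The following toy Python sketch contrasts the two policies described above; the queue sizes, threshold, and drop probability are invented for illustration and are not taken from the thesis.

    ```python
    # Toy sketch of Random Drop (evict a random queued packet on overflow) versus
    # Early Random Drop (probabilistic drop of arrivals once occupancy passes a threshold).
    import random

    def enqueue_random_drop(queue, pkt, capacity):
        """Plain Random Drop: on overflow, evict a randomly chosen queued packet."""
        if len(queue) >= capacity:
            del queue[random.randrange(len(queue))]
        queue.append(pkt)

    def enqueue_early_random_drop(queue, pkt, capacity, threshold=0.7, drop_prob=0.1):
        """Early Random Drop: drop arrivals probabilistically before overflow occurs."""
        if len(queue) >= capacity:
            return False                              # hard overflow: arrival is lost
        if len(queue) >= threshold * capacity and random.random() < drop_prob:
            return False                              # early, probabilistic drop
        queue.append(pkt)
        return True

    random.seed(42)
    q_rd, q_erd, early_drops = [], [], 0
    for pkt in range(200):
        enqueue_random_drop(q_rd, pkt, capacity=20)
        if not enqueue_early_random_drop(q_erd, pkt, capacity=20):
            early_drops += 1
        if pkt % 2 and q_rd:                          # serve one packet every other arrival
            q_rd.pop(0)
        if pkt % 2 and q_erd:
            q_erd.pop(0)
    print(len(q_rd), len(q_erd), early_drops)
    ```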

  12. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287

  13. Different mechanisms for the short-term effects of real versus sham transcutaneous electrical nerve stimulation (TENS) in patients with chronic pain: a pilot study.

    PubMed

    Oosterhof, Jan; Wilder-Smith, Oliver H; Oostendorp, Rob A; Crul, Ben J

    2012-01-01

    Transcutaneous electrical nerve stimulation (TENS) has existed since the early 1970s. However, randomized placebo controlled studies show inconclusive results in the treatment of chronic pain. These results could be explained by assuming that TENS elicits a placebo response. However, in animal research TENS has been found to decrease hyperalgesia, which contradicts this assumption. The aim of this study is to use quantitative sensory testing to explore changes in pain processing during sham versus real TENS in patients with chronic pain. Patients with chronic pain (N = 20) were randomly allocated to real TENS or sham TENS application. Electrical pain thresholds (EPTs) were determined inside and outside the segment stimulated, before and after the first 20 minutes of the intervention, and after a period of 10 days of daily real/sham TENS application. Pain relief did not differ significantly for real versus sham TENS. However, by comparing time courses of EPTs, it was found that EPT values outside the segment of stimulation increased for sham TENS, whereas for real TENS these values decreased. There were, however, no differences for EPT measurements inside the segment stimulated. These results illustrate the importance of including mechanism-reflecting parameters in addition to symptoms when conducting pain research.

  14. Multilayer Markov Random Field models for change detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane

    2015-09-01

    In this paper, we give a comparative study of three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of the focused model family and we set them against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class comparison vs. direct approaches, the usage of training data, various targeted application fields and different ways of Ground Truth generation, meanwhile informing the reader of the roles in which Multilayer MRFs can be efficiently applied. On the other hand, we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions if one uses them as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.

  15. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown to be effective in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
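
    The sketch below illustrates the two resampling schemes named above (random division into disjoint subsets versus random selection with replacement) using a toy 1-nearest-neighbour subclassifier on synthetic data; it is not the IT-CAD system and all sizes are assumptions.

    ```python
    # Hedged sketch: building an ensemble by random division vs. random selection.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5)); y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic cases

    def one_nn_predict(train_X, train_y, query):
        return train_y[np.argmin(np.linalg.norm(train_X - query, axis=1))]

    def build_ensemble(X, y, n_members=10, scheme="division"):
        members = []
        if scheme == "division":                       # split cases into disjoint subsets
            idx = rng.permutation(len(X))
            for part in np.array_split(idx, n_members):
                members.append((X[part], y[part]))
        else:                                          # "selection": resampling with replacement
            for _ in range(n_members):
                part = rng.integers(0, len(X), size=len(X) // n_members)
                members.append((X[part], y[part]))
        return members

    def ensemble_predict(members, query):
        votes = [one_nn_predict(mx, my, query) for mx, my in members]
        return int(np.mean(votes) >= 0.5)              # majority vote over subclassifiers

    members = build_ensemble(X, y, scheme="selection")
    print(ensemble_predict(members, rng.normal(size=5)))
    ```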

  16. Superslow relaxation in identical phase oscillators with random and frustrated interactions

    NASA Astrophysics Data System (ADS)

    Daido, H.

    2018-04-01

    This paper is concerned with the relaxation dynamics of a large population of identical phase oscillators, each of which interacts with all the others through random couplings whose parameters obey the same Gaussian distribution with the average equal to zero and are mutually independent. The results obtained by numerical simulation suggest that for the infinite-size system, the absolute value of Kuramoto's order parameter exhibits superslow relaxation, i.e., 1/ln t as time t increases. Moreover, the statistics on both the transient time T for the system to reach a fixed point and the absolute value of Kuramoto's order parameter at t = T are also presented together with their distribution densities over many realizations of the coupling parameters.
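
    A minimal numerical sketch of this model class is given below, assuming a 1/sqrt(N) normalization of the Gaussian couplings and simple Euler integration; it only illustrates how the magnitude of Kuramoto's order parameter can be tracked over time, not the paper's results.

    ```python
    # Identical phase oscillators with independent zero-mean Gaussian couplings;
    # the magnitude of Kuramoto's order parameter |r(t)| is printed as the system relaxes.
    import numpy as np

    rng = np.random.default_rng(3)
    N, dt, steps = 200, 0.05, 4000
    K = rng.normal(0.0, 1.0, size=(N, N)) / np.sqrt(N)   # random, generally frustrated couplings
    np.fill_diagonal(K, 0.0)
    theta = rng.uniform(0, 2 * np.pi, N)

    for t in range(steps):
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * coupling                     # identical oscillators: no natural-frequency term
        if t % 1000 == 0:
            r = abs(np.exp(1j * theta).mean())            # Kuramoto order parameter magnitude
            print(f"t = {t * dt:6.1f}  |r| = {r:.3f}")
    ```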

  17. Quantitative analysis of random migration of cells using time-lapse video microscopy.

    PubMed

    Jain, Prachi; Worthylake, Rebecca A; Alahari, Suresh K

    2012-05-13

    Cell migration is a dynamic process, which is important for embryonic development, tissue repair, immune system function, and tumor invasion (1, 2). During directional migration, cells move rapidly in response to an extracellular chemotactic signal, or in response to intrinsic cues (3) provided by the basic motility machinery. Random migration occurs when a cell possesses low intrinsic directionality, allowing the cell to explore its local environment. Cell migration is a complex process; in the initial response, the cell undergoes polarization and extends protrusions in the direction of migration (2). Traditional methods to measure migration, such as the Boyden chamber assay, are easy ways to measure chemotaxis in vitro but only capture migration as an endpoint result. This approach neither allows measurement of individual migration parameters nor visualization of the morphological changes that a cell undergoes during migration. Here, we present a method that allows us to monitor migrating cells in real time using time-lapse video microscopy. Since cell migration and invasion are hallmarks of cancer, this method will be applicable to studying cancer cell migration and invasion in vitro. Random migration of platelets has been considered one of the parameters of platelet function (4), hence this method could also be helpful in studying platelet functions. This assay has the advantage of being rapid, reliable, and reproducible, and does not require optimization of cell numbers. In order to maintain physiologically suitable conditions for the cells, the microscope is equipped with a CO(2) supply and a temperature thermostat. Cell movement is monitored by taking pictures at regular intervals using a camera fitted to the microscope. Cell migration can be quantified by measuring average speed and average displacement, calculated with Slidebook software.
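
    The following short sketch shows how the two read-outs mentioned above, average speed and net displacement, can be computed from tracked cell coordinates; the track, frame interval, and units are invented for illustration and the calculation is independent of any particular acquisition software.

    ```python
    # Average speed, net displacement, and directionality from a tracked cell position series.
    import numpy as np

    interval_min = 10.0                                   # minutes between frames (assumed)
    track = np.array([[0.0, 0.0], [3.1, 1.2], [5.0, 4.4], [4.2, 7.9], [8.5, 9.1]])  # x, y in µm

    steps = np.diff(track, axis=0)                        # frame-to-frame displacement vectors
    path_length = np.linalg.norm(steps, axis=1).sum()     # total distance travelled
    net_displacement = np.linalg.norm(track[-1] - track[0])

    avg_speed = path_length / (interval_min * (len(track) - 1))   # µm per minute
    directionality = net_displacement / path_length               # 1 = straight, near 0 = random wandering
    print(f"speed {avg_speed:.2f} µm/min, displacement {net_displacement:.1f} µm, "
          f"directionality {directionality:.2f}")
    ```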

  18. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
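
    As a rough illustration of how input variability propagates to a response statistic (a plain Monte Carlo sketch, not the fast-probability-integration method used in NESSUS), the snippet below pushes random thickness and modulus values through a toy cantilever tip-deflection formula with assumed dimensions and loads.

    ```python
    # Monte Carlo propagation of random inputs through a toy tip-deflection formula.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    L, P = 0.05, 50.0                                     # blade length [m] and tip load [N] (assumed)
    E = rng.normal(200e9, 10e9, n)                        # modulus of elasticity [Pa]
    t = rng.normal(2e-3, 0.1e-3, n)                       # blade thickness [m]
    b = 0.02                                              # width [m], held fixed

    I = b * t**3 / 12.0                                   # second moment of area
    tip = P * L**3 / (3.0 * E * I)                        # tip deflection [m]

    print(f"mean tip deflection {tip.mean()*1e3:.3f} mm, CoV {tip.std()/tip.mean():.2%}")
    # crude sensitivity check: correlation of each random input with the response
    for name, x in [("E", E), ("thickness", t)]:
        print(name, "corr with deflection:", np.corrcoef(x, tip)[0, 1].round(2))
    ```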

  19. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f0 x^(-2), with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed-form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has a density that decays with an x^(-2) power law. Various parameter limits are established.
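
    The stated tail behavior is easy to check numerically: for a density decaying like x^(-2), the exceedance probability P(|Q| > x) should scale like 1/x. The sketch below samples the quotient of two correlated normals with assumed means, variances, and correlation.

    ```python
    # Numerical check of the x^(-2) tail of a quotient of correlated normals (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(11)
    n = 2_000_000
    rho = 0.3                                              # correlation between numerator and denominator
    cov = [[1.0, rho], [rho, 1.0]]
    num, den = rng.multivariate_normal([0.5, 1.0], cov, size=n).T
    q = num / den

    for x in (5, 10, 20, 40):
        tail = np.mean(np.abs(q) > x)
        # for an x^(-2) density the exceedance probability should scale like 1/x,
        # so tail * x should be roughly constant across thresholds
        print(f"P(|Q| > {x:>2}) = {tail:.5f}, tail * x = {tail * x:.3f}")
    ```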

  20. An overview of reliability assessment and control for design of civil engineering structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, R.V. Jr.; Grigoriadis, K.M.; Bergman, L.A.

    1998-06-01

    Random variations, whether they occur in the input signal or the system parameters, are phenomena that occur in nearly all engineering systems of interest. As a result, nondeterministic modeling techniques must somehow account for these variations to ensure validity of the solution. As might be expected, this is a difficult proposition and the focus of many current research efforts. Controlling seismically excited structures is one pertinent application of nondeterministic analysis and is the subject of the work presented herein. This overview paper is organized into two sections. First, techniques to assess system reliability, in a context familiar to civil engineers, are discussed. Second, and as a consequence of the first, active control methods that ensure good performance in this random environment are presented. It is the hope of the authors that these discussions will ignite further interest in the area of reliability assessment and design of controlled civil engineering structures.

  1. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  2. Semiconducting double-dot exchange-only qubit dynamics in the presence of magnetic and charge noises

    NASA Astrophysics Data System (ADS)

    Ferraro, E.; Fanciulli, M.; De Michielis, M.

    2018-06-01

    The effects of magnetic and charge noise on the dynamical evolution of the double-dot exchange-only qubit (DEOQ) are theoretically investigated. The DEOQ, consisting of three electrons arranged in an electrostatically defined double quantum dot, deserves special interest for quantum computation applications. Its advantages are in terms of fabrication, control and manipulation, in view of implementing fast single- and two-qubit operations through electrical tuning alone. The presence of environmental noise due to nuclear spins and charge traps, in addition to fluctuations in the applied magnetic field and charge fluctuations on the electrostatic gates adopted to confine the electrons, is taken into account by including random magnetic field and random coupling terms in the Hamiltonian. The behavior of the return probability as a function of time for initial conditions of interest is presented. Moreover, through an envelope-fitting procedure on the return probabilities, coherence times are extracted when model parameters take values achievable experimentally in semiconducting devices.

  3. Heritability estimations for inner muscular fat in Hereford cattle using random regressions

    USDA-ARS?s Scientific Manuscript database

    Random regressions make it possible to obtain genetic predictions and parameter estimates across a gradient of environments, allowing a more accurate and beneficial use of animals as breeders in specific environments. The objective of this study was to use random regression models to estimate heritabil...

  4. Robust electroencephalogram phase estimation with applications in brain-computer interface systems.

    PubMed

    Seraj, Esmaeil; Sameni, Reza

    2017-03-01

    In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously associated with the brain response are systematic side effects of the methods used for EEG phase calculation, especially during low analytical amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and minor changes of the filter parameters, and reduces the effect of spurious EEG phase jumps, which do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest neighbors and random forest classifiers, over a standard BCI dataset. The average performance was improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with 0.01 and 0.03 p-values, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
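
    The ensemble idea can be sketched as follows, with the zero-pole perturbation approximated here by jittering the band edges of a Butterworth filter; the filter order, jitter size, and test signal are assumptions, not the authors' settings.

    ```python
    # Monte Carlo ensemble of slightly perturbed narrow-band filters, with circular averaging
    # of the analytic (Hilbert) phase across the ensemble.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    rng = np.random.default_rng(5)
    fs, T = 250.0, 4.0                                     # sampling rate [Hz], duration [s]
    t = np.arange(0, T, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.standard_normal(t.size)   # noisy 10 Hz "alpha" signal

    def analytic_phase(x, low, high):
        b, a = butter(4, [low, high], btype="band", fs=fs)
        return np.angle(hilbert(filtfilt(b, a, x)))

    ensemble = []
    for _ in range(50):                                    # ensemble of randomly perturbed filters
        jitter = rng.normal(0.0, 0.2, size=2)              # small random shift of the band edges [Hz]
        ensemble.append(analytic_phase(eeg, 8.0 + jitter[0], 12.0 + jitter[1]))

    # circular (phasor) averaging across the ensemble gives the robust phase estimate
    robust_phase = np.angle(np.mean(np.exp(1j * np.array(ensemble)), axis=0))
    print(robust_phase[:5])
    ```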

  5. Dirty two-band superconductivity with interband pairing order

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhiro; Sasaki, Akihiro; Golubov, Alexander A.

    2018-04-01

    We study theoretically the effects of random nonmagnetic impurities on the superconducting transition temperature T c in a two-band superconductor characterized by an equal-time s-wave interband pairing order parameter. Because of the two-band degree of freedom, it is possible to define a spin-triplet s-wave pairing order parameter as well as a spin-singlet s-wave order parameter. The former belongs to odd-band-parity symmetry class, whereas the latter belongs to even-band-parity symmetry class. In a spin-singlet superconductor, T c is insensitive to the impurity concentration when we estimate the self-energy due to the random impurity potential within the Born approximation. On the other hand in a spin-triplet superconductor, T c decreases with the increase of the impurity concentration. We conclude that Cooper pairs belonging to odd-band-parity symmetry class are fragile under the random impurity potential even though they have s-wave pairing symmetry.

  6. GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL

    NASA Technical Reports Server (NTRS)

    Johnson, D.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite-measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric-exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36-bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8-bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.

  7. Geometric Modeling of Inclusions as Ellipsoids

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J.

    2008-01-01

    Nonmetallic inclusions in gas turbine disk alloys can have a significant detrimental impact on fatigue life. Because large inclusions that lead to anomalously low lives occur infrequently, probabilistic approaches can be utilized to avoid the excessively conservative assumption of lifing to a large inclusion in a high-stress location. A prerequisite to modeling the impact of inclusions on the fatigue life distribution is a characterization of the inclusion occurrence rate and size distribution. To help facilitate this process, a geometric simulation of the inclusions was devised. To make the simulation problem tractable, the irregularly sized and shaped inclusions were modeled as arbitrarily oriented ellipsoids with three independently dimensioned axes. Random orientation of the ellipsoid is accomplished through a series of three orthogonal rotations of axes. In this report, a set of mathematical models for the following parameters is described: the intercepted area of a randomly sectioned ellipsoid, the dimensions and orientation of the intercepted ellipse, the area of a randomly oriented sectioned ellipse, the depth and width of a randomly oriented sectioned ellipse, and the projected area of a randomly oriented ellipsoid. These parameters are necessary to determine an inclusion's potential to develop a propagating fatigue crack. Without these mathematical models, computationally expensive search algorithms would be required to compute these parameters.

  8. Analytical solution for haze values of aluminium-induced texture (AIT) glass superstrates for a-Si:H solar cells.

    PubMed

    Sahraei, Nasim; Forberich, Karen; Venkataraj, Selvaraj; Aberle, Armin G; Peters, Marius

    2014-01-13

    Light scattering at randomly textured interfaces is essential to improve the absorption of thin-film silicon solar cells. Aluminium-induced texture (AIT) glass provides suitable scattering for amorphous silicon (a-Si:H) solar cells. The scattering properties of textured surfaces are usually characterised by two properties: the angularly resolved intensity distribution and the haze. However, we find that the commonly used haze equations cannot accurately describe the experimentally observed spectral dependence of the haze of AIT glass. This is particularly the case for surface morphologies with a large rms roughness and small lateral feature sizes. In this paper we present an improved method for haze calculation, based on the power spectral density (PSD) function of the randomly textured surface. To better reproduce the measured haze characteristics, we suggest two improvements: i) inclusion of the average lateral feature size of the textured surface into the haze calculation, and ii) considering the opening angle of the haze measurement. We show that with these two improvements an accurate prediction of the haze of AIT glass is possible. Furthermore, we use the new equation to define optimum morphology parameters for AIT glass to be used for a-Si:H solar cell applications. The autocorrelation length is identified as the critical parameter. For the investigated a-Si:H solar cells, the optimum autocorrelation length is shown to be 320 nm.

  9. Single-vehicle crashes along rural mountainous highways in Malaysia: An application of random parameters negative binomial model.

    PubMed

    Rusli, Rusdi; Haque, Md Mazharul; King, Mark; Voon, Wong Shaw

    2017-05-01

    Mountainous highways are generally associated with a complex driving environment because of constrained road geometries, limited cross-section elements, inappropriate roadside features, and adverse weather conditions. As a result, single-vehicle (SV) crashes are overrepresented along mountainous roads, particularly in developing countries, but little is known about the roadway geometric, traffic and weather factors contributing to these SV crashes. As such, the main objective of the present study is to investigate SV crashes using detailed data obtained from a rigorous site survey and existing databases. The final dataset included a total of 56 variables representing road geometries including horizontal and vertical alignment, traffic characteristics, real-time weather conditions, cross-sectional elements, roadside features, and spatial characteristics. To account for structured heterogeneities resulting from multiple observations within a site and other unobserved heterogeneities, the study applied a random parameters negative binomial model. Results suggest that rainfall during the crash is positively associated with SV crashes, but real-time visibility is negatively associated. The presence of a road shoulder, particularly a bitumen shoulder or wider shoulders, along mountainous highways is associated with fewer SV crashes. While speeding along downgrade slopes increases the likelihood of SV crashes, proper delineation decreases the likelihood. Findings of this study have significant implications for designing safer highways in mountainous areas, particularly in the context of a developing country. Copyright © 2017 Elsevier Ltd. All rights reserved.
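
    The snippet below is a simulation sketch of what a random parameters negative binomial specification implies, namely a site-specific coefficient drawn from a distribution followed by overdispersed counts; the covariates, coefficient values, and dispersion parameter are invented and it is not the estimation code used in the study.

    ```python
    # Random-parameters negative binomial data-generating process (illustrative values only).
    import numpy as np

    rng = np.random.default_rng(2018)
    n_sites = 500
    log_aadt = rng.normal(8.0, 0.5, n_sites)               # log traffic volume
    rain_days = rng.poisson(40, n_sites)                    # exposure to rainfall (days/yr)

    beta_aadt = rng.normal(0.8, 0.15, n_sites)              # random parameter: varies across sites
    beta_rain = 0.01                                        # fixed parameter
    mu = np.exp(-6.0 + beta_aadt * log_aadt + beta_rain * rain_days)   # expected crash frequency

    r = 1.5                                                 # NB dispersion (size) parameter
    crashes = rng.negative_binomial(r, r / (r + mu))        # overdispersed counts with mean mu

    print("mean crashes per site:", crashes.mean().round(2),
          "variance:", crashes.var().round(2))
    ```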

  10. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
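
    A toy simulation of the limited-time-resolution effect is sketched below: closed sojourns shorter than a detection limit xi are missed, so neighbouring open periods merge into longer observed xi-open-times. The sojourn-time means and xi are assumed values, and the exponential case shown is only the simplest member of the model class discussed.

    ```python
    # Merging open periods separated by unresolvable closed gaps (shorter than xi).
    import numpy as np

    rng = np.random.default_rng(4)
    mean_open, mean_closed, xi = 2.0, 1.0, 0.3             # ms; values are illustrative
    n_events = 50_000

    opens = rng.exponential(mean_open, n_events)
    closeds = rng.exponential(mean_closed, n_events)

    observed, current = [], opens[0]
    for o, c in zip(opens[1:], closeds[:-1]):
        if c < xi:                                          # gap too short to resolve: merge
            current += c + o
        else:
            observed.append(current)
            current = o
    observed.append(current)
    observed = np.array(observed)

    print(f"true mean open time {opens.mean():.2f} ms, "
          f"observed xi-open-time mean {observed.mean():.2f} ms")
    ```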

  11. A systematic review of randomized controlled trials using music therapy for children.

    PubMed

    Mrázová, Marcela; Celec, Peter

    2010-10-01

    Music therapy is a promising approach widening the potential applications of psychotherapy. Music influences both psychologic and physiologic parameters, and children are especially responsive to this form of therapy. Many aspects of its mechanisms of action remain to be elucidated, underscoring the need for evidence-based medicine (EBM) for the clinical use of music therapy. This review seeks to highlight some of the issues of music therapy research and to initiate a discussion about the need for international multicenter cooperation to bring scientifically sound evidence of the benefits of music therapy in pediatric patients. Scientific bibliographic databases were searched for randomized controlled trials on the use of music therapy for children. Identified articles were evaluated according to criteria for scientific quality. Twenty-eight studies were identified. Most of the trials were limited by the number of participants, and some trials showed the need to improve the design of control groups. Indeed, the novelty of this area of study has produced a large number of different studies (with variability in diagnoses, interventions, control groups, duration, and/or outcome parameters), and there is a need for a more homogeneous and systematic approach. Available studies highlight the need to address reproducibility issues. This analysis identifies the need for a subsequent series of clinical studies on the efficacy of music in the pediatric population, with more focus on eligibility criteria with respect to EBM and reproducibility.

  12. Effect of a single session of muscle-biased therapy on pain sensitivity: a systematic review and meta-analysis of randomized controlled trials

    PubMed Central

    Gay, Charles W; Alappattu, Meryl J; Coronado, Rogelio A; Horn, Maggie E; Bishop, Mark D

    2013-01-01

    Background Muscle-biased therapies (MBT) are commonly used to treat pain, yet several reviews suggest evidence for the clinical effectiveness of these therapies is lacking. Inadequate treatment parameters have been suggested to account for inconsistent effects across studies. Pain sensitivity may serve as an intermediate physiologic endpoint helping to establish optimal MBT treatment parameters. The purpose of this review was to summarize the current literature investigating the short-term effect of a single dose of MBT on pain sensitivity in both healthy and clinical populations, with particular attention to specific MBT parameters of intensity and duration. Methods A systematic search for articles meeting our prespecified criteria was conducted using Cumulative Index to Nursing and Allied Health Literature (CINAHL) and MEDLINE from the inception of each database until July 2012, in accordance with guidelines from the Preferred Reporting Items for Systematic reviews and Meta-Analysis. Relevant characteristics from studies included type, intensity, and duration of MBT and whether short-term changes in pain sensitivity and clinical pain were noted with MBT application. Study results were pooled using a random-effects model to estimate the overall effect size of a single dose of MBT on pain sensitivity as well as the effect of MBT, dependent on comparison group and population type. Results Reports from 24 randomized controlled trials (23 articles) were included, representing 36 MBT treatment arms and 29 comparative groups, where 10 groups received active agents, 11 received sham/inert treatments, and eight received no treatment. MBT demonstrated a favorable and consistent ability to modulate pain sensitivity. Short-term modulation of pain sensitivity was associated with short-term beneficial effects on clinical pain. Intensity of MBT, but not duration, was linked with change in pain sensitivity. A meta-analysis was conducted on 17 studies that assessed the effect of MBT on pressure pain thresholds. The results suggest that MBT had a favorable effect on pressure pain thresholds when compared with no-treatment and sham/inert groups, and effects comparable with those of other active treatments. Conclusion The evidence supports the use of pain sensitivity measures by future research to help elucidate optimal therapeutic parameters for MBT as an intermediate physiologic marker. PMID:23403507

  13. A semi-automatic method for analysis of landscape elements using Shuttle Radar Topography Mission and Landsat ETM+ data

    NASA Astrophysics Data System (ADS)

    Ehsani, Amir Houshang; Quiel, Friedrich

    2009-02-01

    In this paper, we demonstrate artificial neural networks—self-organizing map (SOM)—as a semi-automatic method for extraction and analysis of landscape elements in the man and biosphere reserve "Eastern Carpathians". The Shuttle Radar Topography Mission (SRTM) collected data to produce generally available digital elevation models (DEM). Together with Landsat Thematic Mapper data, this provides a unique, consistent and nearly worldwide data set. To integrate the DEM with Landsat data, it was re-projected from geographic coordinates to UTM with 28.5 m spatial resolution using cubic convolution interpolation. To provide quantitative morphometric parameters, first-order (slope) and second-order derivatives of the DEM—minimum curvature, maximum curvature and cross-sectional curvature—were calculated by fitting a bivariate quadratic surface with a window size of 9×9 pixels. These surface curvatures are strongly related to landform features and geomorphological processes. Four morphometric parameters and seven Landsat-enhanced thematic mapper (ETM+) bands were used as input for the SOM algorithm. Once the network weights have been randomly initialized, different learning parameter sets, e.g. initial radius, final radius and number of iterations, were investigated. An optimal SOM with 20 classes using 1000 iterations and a final neighborhood radius of 0.05 provided a low average quantization error of 0.3394 and was used for further analysis. The effect of randomization of initial weights for optimal SOM was also studied. Feature space analysis, three-dimensional inspection and auxiliary data facilitated the assignment of semantic meaning to the output classes in terms of landform, based on morphometric analysis, and land use, based on spectral properties. Results were displayed as thematic map of landscape elements according to form, cover and slope. Spectral and morphometric signature analysis with corresponding zoom samples superimposed by contour lines were compared in detail to clarify the role of morphometric parameters to separate landscape elements. The results revealed the efficiency of SOM to integrate SRTM and Landsat data in landscape analysis. Despite the stochastic nature of SOM, the results in this particular study are not sensitive to randomization of initial weight vectors if many iterations are used. This procedure is reproducible for the same application with consistent results.
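
    A minimal self-organizing map in plain NumPy is sketched below to show how a stacked morphometric-plus-spectral feature vector could be clustered into 20 classes; the 1-D output lattice, random stand-in data, and learning-rate schedule are assumptions and this is not the software used in the study.

    ```python
    # Minimal 1-D self-organizing map over stand-in "4 morphometric + 7 ETM+ band" features.
    import numpy as np

    rng = np.random.default_rng(9)
    n_pixels, n_features = 5000, 11                         # e.g. 4 morphometric + 7 ETM+ bands
    data = rng.normal(size=(n_pixels, n_features))          # stand-in for the real stacked image

    n_nodes, iterations = 20, 1000                          # 20 output classes, 1000 iterations
    weights = rng.normal(size=(n_nodes, n_features))
    radius0, radius_final, lr0 = 5.0, 0.05, 0.5

    for it in range(iterations):
        x = data[rng.integers(n_pixels)]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # best-matching unit
        frac = it / iterations
        radius = radius0 * (radius_final / radius0) ** frac          # shrinking neighbourhood
        lr = lr0 * (0.01 / lr0) ** frac                              # decaying learning rate
        dist = np.abs(np.arange(n_nodes) - bmu)                      # distance on a 1-D output lattice
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))
        weights += lr * h[:, None] * (x - weights)

    labels = np.argmin(np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2), axis=1)
    print("pixels per class:", np.bincount(labels, minlength=n_nodes))
    ```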

  14. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.

  15. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  16. Changes in urination according to the sound of running water using a mobile phone application.

    PubMed

    Kwon, Whi-An; Kim, Sung Han; Kim, Sohee; Joung, Jae Young; Chung, Jinsoo; Lee, Kang Hyun; Lee, Sang-Jin; Seo, Ho Kyung

    2015-01-01

    The sound of running water (SRW) has been effectively used for toilet training during toddlerhood. However, the effect of SRW on voiding functions in adult males with lower urinary tract symptoms (LUTS) has not been evaluated. To determine the effect of SRW on urination in male patients with LUTS, multiple voiding parameters from uroflowmetry with postvoid residual urine (PVR) were assessed according to the presence of SRW played by a mobile application. Eighteen consecutive male patients with LUTS were prospectively enrolled between March and April 2014. Uroflowmetry with PVR, measured by a bladder scan, was randomly performed once weekly for two consecutive weeks with and without SRW in a completely sealed room, after a pre-check bladder scan confirmed a volume of more than 150 cc. SRW consisted of river-water sounds from among the relaxing melodies in a smartphone application. The mean age of the enrolled patients and their mean International Prostate Symptom Score (IPSS) were 58.9 ± 7.7 years (range: 46-70) and 13.1 ± 5.9, respectively. No patient had been prescribed any medications, including alpha-blockers or anti-muscarinic agents, in the last 3 months. There was a significant increase in mean peak flow rate (PFR) with SRW in comparison to without SRW (15.7 mL/s vs. 12.3 mL/s, respectively, p = 0.0125). However, there were no differences in other uroflowmetric parameters, including PVR. The study showed that SRW from a mobile phone application may be helpful in facilitating voiding function by increasing PFR in male LUTS patients.

  17. Validation of Fourier analysis of videokeratographic data.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Ditzel, Fienke; Tsaragli, Efi; Georgatzoglou, Kimonas; Siganos, Haralampos; Kozobolis, Vassilios

    2017-06-15

    The aim was to assess the repeatability of Fourier transform analysis of videokeratographic data using Pentacam in normal (CG), keratoconic (KC) and post-CXL (CXL) corneas. This was a prospective, clinic-based, observational study. One randomly selected eye from all study participants was included in the analysis: 62 normal eyes (CG group), 33 keratoconus eyes (KC group), while 34 eyes, which had already received CXL treatment, formed the CXL group. Fourier analysis of keratometric data was obtained using Pentacam by two different operators within each of two sessions. Precision, repeatability, and the Intraclass Correlation Coefficient (ICC) were calculated to evaluate intrasession and intersession repeatability for the following parameters: Spherical Component (SphRmin, SphEcc), Maximum Decentration (Max Dec), Regular Astigmatism, and Irregularity (Irr). Bland-Altman analysis was used for assessing interobserver repeatability. All parameters were found to be repeatable, reliable and reproducible in all groups. The best intrasession and intersession repeatability and reliability were detected for the SphRmin, SphEcc and Max Dec parameters for both operators using the ICC (intrasession: ICC > 98%, intersession: ICC > 94.7%) and the within-subject standard deviation. The best precision and lowest range of agreement were found for the SphRmin parameter (CG: 0.05, KC: 0.16, and CXL: 0.2) in all groups, while the lowest repeatability, reliability and reproducibility were detected for the Irr parameter. The Pentacam system provides accurate measurements of Fourier transform keratometric data. A single Pentacam scan will be sufficient for most clinical applications.

  18. Stochastic seismic response of building with super-elastic damper

    NASA Astrophysics Data System (ADS)

    Gur, Sourav; Mishra, Sudib Kumar; Roy, Koushik

    2016-05-01

    Hysteretic yield dampers are widely employed for seismic vibration control of buildings. An improved version of such a damper has been proposed recently by exploiting the superelastic force-deformation characteristics of Shape-Memory Alloy (SMA). Although a number of studies have illustrated the performance of such dampers, a precise estimate of the optimal parameters and performance, along with a comparison with the conventional yield damper, is lacking. Presently, the optimal parameters for the superelastic damper are proposed by conducting systematic design optimization, in which the stochastic response serves as the objective function, evaluated through nonlinear random vibration analysis. These optimal parameters can be employed to establish an initial design for the SMA damper. Further, a comparison among the optimal responses is also presented in order to assess the improvement that can be achieved by the superelastic damper over the yield damper. The consistency of the improvements is also checked by considering the anticipated variation in the system parameters as well as the seismic loading condition. In spite of the improved performance of the superelastic damper, the available variants of SMA are quite expensive, which limits their applicability. However, recently developed ferrous SMAs are expected to offer even better performance along with improved cost effectiveness, which can be studied through a life-cycle cost analysis in future work.

  19. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

    Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e., sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g., evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
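
    The sharp-recovery idea can be illustrated with a toy compressed-sensing example (not the cited reconstruction method): a signal that is sparse in a cosine basis is observed at a few random times and recovered greedily by orthogonal matching pursuit.

    ```python
    # Toy compressed sensing: sparse-in-cosine-basis signal, random time samples, OMP recovery.
    import numpy as np

    rng = np.random.default_rng(6)
    n, k, m = 256, 3, 40                                     # signal length, sparsity, measurements
    Psi = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)   # DCT-like basis
    Psi /= np.linalg.norm(Psi, axis=0)                       # normalize columns (atoms)
    coeffs = np.zeros(n); coeffs[rng.choice(n, k, replace=False)] = rng.normal(3, 1, k)
    signal = Psi @ coeffs

    rows = rng.choice(n, m, replace=False)                   # a few random sampling times
    y, A = signal[rows], Psi[rows, :]

    # orthogonal matching pursuit: greedily pick atoms most correlated with the residual
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol

    recovered = np.zeros(n); recovered[support] = sol
    print("max reconstruction error:", np.max(np.abs(Psi @ recovered - signal)).round(6))
    ```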

  20. Estimation for the Rasch Model When Both Ability and Difficulty Parameters are Random.

    DTIC Science & Technology

    1987-02-01

    Office of Naval Research. The authors would also like to thank Hsin Ying Lin for performing the computations of the third section and the reviewers of an...MODEL WHEN BOTH ABILITY AND DIFFICULTY PARAMETERS ARE RANDOM Steven E. Rigdon and Robert K. Tsutakawa Mathematical Sciences Technical Report No...13, NR 150-535 with the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research. Approved for public release

  1. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  2. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to meet both requirements simultaneously. However, the randomness in CS under-sampling trajectories designed using the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually requires multiple adjustments of the Probability Density Function (PDF) parameters, and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared with the conventional VD design.
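
    A minimal sketch of the sampling idea described above: fully sample a central low-frequency block of k-space and apply variable-density random under-sampling only to the high-frequency region. The line counts and the PDF shape are illustrative assumptions, not the paper's actual design.

```python
# Toy 1D phase-encode mask: fully sampled centre plus variable-density random
# sampling of the high frequencies. All sizes and the PDF exponent are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_lines, center_lines, target_lines = 256, 32, 80   # phase-encode lines

mask = np.zeros(n_lines, dtype=bool)
center = slice(n_lines // 2 - center_lines // 2, n_lines // 2 + center_lines // 2)
mask[center] = True                                  # low frequencies: fully sampled

# Variable-density PDF over the remaining (high-frequency) lines: density decays
# with distance from the k-space centre.
k = np.arange(n_lines) - n_lines / 2
pdf = (1.0 - np.abs(k) / (n_lines / 2)) ** 4
pdf[mask] = 0.0
pdf /= pdf.sum()

extra = target_lines - center_lines
picked = rng.choice(n_lines, size=extra, replace=False, p=pdf)
mask[picked] = True

print("acceleration factor:", n_lines / mask.sum())
```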

  3. Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.

    PubMed

    Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai

    2018-02-01

    The majority of past road safety studies focused on open road segments, while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of traffic patterns, tunnel design, and pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate part of the otherwise unobserved heterogeneity to the individual model parameters but was also able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  5. Effect of sexual intercourse on the absorption of levonorgestrel after vaginal administration of 0.75 mg in Carraguard® gel: a randomized, cross-over, pharmacokinetic study☆

    PubMed Central

    Brache, Vivian; Croxatto, Horacio; Kumar, Narender; Sitruk-Ware, Regine; Cochón, Leila; Schiappacasse, Veronica; Sivin, Irving; Muñoz, Carla; Maguire, Robin; Faundes, Anibal

    2010-01-01

    Background The Population Council studied a pre-coital contraceptive microbicide vaginal product containing levonorgestrel (LNG) as the active component and Carraguard® gel as a vehicle (Carra/LNG gel) for couples who engage in occasional unplanned intercourse. The objective of this study was to evaluate the effect of sexual intercourse after vaginal application of Carra/LNG gel on serum levels of LNG in women and to assess LNG absorption by the male partner. Study Design This was a randomized, cross-over, pharmacokinetic study including an abstinence arm and an arm in which couples engaged in sexual intercourse between 2 and 4 h after gel application. In each study arm, each woman received a single application of Carra/LNG gel (0.75 mg in 4 mL gel) followed by serial blood samples taken at 0, 1, 2, 4, 8, 24 and 48 h after gel application for LNG measurements. In the intercourse arm, LNG was measured in blood samples taken from the male partner before intercourse and at 4, 8 and 24 h after gel application in the female partner. Results Time-concentration curves for serum LNG levels showed a mean Cmax of 7.8±5.5 and 8.3±5.7 nmol/L, a mean Tmax of 6.2±5.9 and 7.5±5.7 h, and comparable areas under the curve for the intercourse and abstinence arms, respectively. Pharmacokinetic parameters presented large variability between subjects, but excellent reproducibility within each subject. LNG was undetectable in 10 out of 12 male partners. Conclusion Sexual intercourse does not appear to interfere with vaginal absorption of LNG after application of a Carra/LNG gel. A vaginal pre-coital contraceptive gel is feasible. PMID:19135574

  6. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
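
    The following toy Python sketch conveys the flavour of the analytical point above in the simplest possible setting: a linear Kalman filter where, at each step, either the state component with the largest prior variance (targeted) or a randomly chosen component is observed. It is a schematic illustration under assumed dynamics, not the paper's LETKF experiments.

```python
# Toy comparison of targeted versus randomly located observations in a linear
# Kalman filter; dynamics, noise levels and dimensions are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
n, steps, r = 5, 200, 0.1 ** 2
F = 0.9 * np.eye(n) + 0.03 * rng.standard_normal((n, n))   # stable linear dynamics
Q = 0.05 * np.eye(n)                                        # process noise covariance

def run(targeted: bool) -> float:
    x_true, x_est, P = rng.standard_normal(n), np.zeros(n), np.eye(n)
    err = 0.0
    for _ in range(steps):
        # Forecast truth, estimate and covariance.
        x_true = F @ x_true + rng.multivariate_normal(np.zeros(n), Q)
        x_est, P = F @ x_est, F @ P @ F.T + Q
        # Observation location: largest prior variance (targeted) or random.
        i = int(np.argmax(np.diag(P))) if targeted else int(rng.integers(n))
        H = np.zeros((1, n)); H[0, i] = 1.0
        y = H @ x_true + rng.normal(0.0, np.sqrt(r), 1)
        # Standard Kalman update.
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x_est = x_est + (K @ (y - H @ x_est)).ravel()
        P = (np.eye(n) - K @ H) @ P
        err += np.linalg.norm(x_est - x_true)
    return err / steps

print("mean state error, targeted observations:", round(run(True), 3))
print("mean state error, random observations  :", round(run(False), 3))
```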

  7. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  8. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.

  9. Robot-assisted gait training in multiple sclerosis patients: a randomized trial.

    PubMed

    Schwartz, Isabella; Sajin, Anna; Moreh, Elior; Fisher, Iris; Neeb, Martin; Forest, Adina; Vaknin-Dembinsky, Adi; Karusis, Dimitrios; Meiner, Zeev

    2012-06-01

    Preservation of locomotor activity in multiple sclerosis (MS) patients is of utmost importance. Robotic-assisted body weight-supported treadmill training is a promising method to improve gait functions in neurologically impaired patients, although its effectiveness in MS patients is still unknown. To compare the effectiveness of robot-assisted gait training (RAGT) with that of conventional walking treatment (CWT) on gait and generalized functions in a group of stable MS patients. A prospective randomized controlled trial of 12 sessions of RAGT or CWT in MS patients with EDSS scores of 5-7. Primary outcome measures were gait parameters and the secondary outcomes were functional and quality-of-life parameters. All tests were performed at baseline and at 3 and 6 months post-treatment by a blinded rater. Fifteen and 17 patients were randomly allocated to RAGT and CWT, respectively. Both groups were comparable at baseline in all parameters. Although some gait parameters improved significantly from baseline following the treatment at each time point, there was no difference between the groups. Both FIM and EDSS scores improved significantly post-treatment with no difference between the groups. At 6 months, most gait and functional parameters had returned to baseline. Robot-assisted gait training is feasible and safe and may be an effective additional therapeutic option in MS patients with severe walking disabilities.

  10. The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling

    NASA Technical Reports Server (NTRS)

    Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.

    2013-01-01

    The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.

  11. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  12. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e., the optimal product volume that should be purchased, and that the inventory level followed the given set point.

  13. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip on the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.

  14. What is the ideal dose and power output of low-level laser therapy (810 nm) on muscle performance and post-exercise recovery? Study protocol for a double-blind, randomized, placebo-controlled trial.

    PubMed

    de Oliveira, Adriano Rodrigues; Vanin, Adriane Aver; De Marchi, Thiago; Antonialli, Fernanda Colella; Grandinetti, Vanessa dos Santos; de Paiva, Paulo Roberto Vicente; Albuquerque Pontes, Gianna Móes; Santos, Larissa Aline; Aleixo Junior, Ivo de Oliveira; de Carvalho, Paulo de Tarso Camillo; Bjordal, Jan Magnus; Leal-Junior, Ernesto Cesar Pinto

    2014-02-27

    Recent studies involving phototherapy applied prior to exercise have demonstrated positive results regarding the attenuation of muscle fatigue and the expression of biochemical markers associated with recovery. However, a number of factors remain unknown, such as the ideal dose and application parameters, mechanisms of action and long-term effects on muscle recovery. The aims of the proposed project are to evaluate the long-term effects of low-level laser therapy on post-exercise musculoskeletal recovery and to identify the best dose and application power/irradiation time. A double-blind, randomized, placebo-controlled clinical trial will be conducted. After fulfilling the eligibility criteria, 28 high-performance athletes will be allocated to four groups of seven volunteers each. In phase 1, the laser power will be 200 mW and different doses will be tested: Group A (2 J), Group B (6 J), Group C (10 J) and Group D (0 J). In phase 2, the best dose obtained in phase 1 will be used with the same distribution of the volunteers, but with different powers: Group A (100 mW), Group B (200 mW), Group C (400 mW) and Group D (0 mW). The isokinetic test will be performed based on maximum voluntary contraction prior to the application of the laser and after the eccentric contraction protocol, which will also be performed using the isokinetic dynamometer. The following variables related to physical performance will be analyzed: peak torque/maximum voluntary contraction, delayed onset muscle soreness (algometer), and biochemical markers of muscle damage, inflammation and oxidative stress. Our intention is to determine optimal laser therapy application parameters capable of slowing down the physiological muscle fatigue process, reducing injuries or micro-injuries in skeletal muscle stemming from physical exertion and accelerating post-exercise muscle recovery. We believe that, unlike drug therapy, LLLT has a biphasic dose-response pattern. The protocol for this study is registered with the Protocol Registry System, ClinicalTrials.gov identifier NCT01844271.

  15. Random parameter models of interstate crash frequencies by severity, number of vehicles involved, collision and location type.

    PubMed

    Venkataraman, Narayan; Ulfarsson, Gudmundur F; Shankar, Venky N

    2013-10-01

    A nine-year (1999-2007) continuous panel of crash histories on interstates in Washington State, USA, was used to estimate random parameter negative binomial (RPNB) models for various aggregations of crashes. A total of 21 different models were assessed in terms of four ways to aggregate crashes, by: (a) severity, (b) number of vehicles involved, (c) crash type, and by (d) location characteristics. The models within these aggregations include specifications for all severities (property damage only, possible injury, evident injury, disabling injury, and fatality), number of vehicles involved (one-vehicle to five-or-more-vehicle), crash type (sideswipe, same direction, overturn, head-on, fixed object, rear-end, and other), and location types (urban interchange, rural interchange, urban non-interchange, rural non-interchange). A total of 1153 directional road segments comprising the seven Washington State interstates were analyzed, yielding statistical models of crash frequency based on 10,377 observations. These results suggest that in general there was a significant improvement in log-likelihood when using RPNB compared to a fixed parameter negative binomial baseline model. Heterogeneity effects are most noticeable for lighting type, road curvature, and traffic volume (ADT). Median lighting or right-side lighting is linked to increased crash frequencies in many models for more than half of the road segments compared to both-sides lighting. Both-sides lighting thereby appears to generally lead to a safety improvement. Traffic volume has a random parameter but the effect is always toward increasing crash frequencies, as expected. However, the fact that the effect is random shows that the effect of traffic volume on crash frequency is complex and varies by road segment. The number of lanes has a random parameter effect only in the interchange type models. The results show that road segment-specific insights into crash frequency occurrence can lead to improved design policy and project prioritization. Copyright © 2013 Elsevier Ltd. All rights reserved.
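
    To make the random-parameters structure concrete, the sketch below simulates crash counts from a negative binomial model in which each road segment draws its own coefficient on log traffic volume; all numbers are invented for illustration, and actual estimation (as in the study) would use simulated maximum likelihood rather than this forward simulation.

```python
# Hedged sketch of a random-parameters negative binomial (RPNB) data-generating
# process: segment-specific slopes on log(ADT), then NB counts via a gamma-Poisson mixture.
import numpy as np

rng = np.random.default_rng(4)
n_segments, n_years = 200, 9
alpha = 0.5                                   # NB over-dispersion parameter (assumed)

log_adt = rng.normal(10.0, 0.6, n_segments)   # log traffic volume per segment
beta0, beta_mean, beta_sd = -8.0, 0.9, 0.15   # intercept; random-slope distribution
beta_seg = rng.normal(beta_mean, beta_sd, n_segments)   # segment-specific slopes

mu = np.exp(beta0 + beta_seg * log_adt)       # expected annual crashes per segment

# Negative binomial via a gamma-Poisson mixture: y | g ~ Poisson(mu * g), g ~ Gamma.
g = rng.gamma(shape=1.0 / alpha, scale=alpha, size=(n_years, n_segments))
crashes = rng.poisson(mu * g)

print("mean annual crashes, first 5 segments:", crashes.mean(axis=0)[:5].round(2))
# Estimating (beta_mean, beta_sd, alpha) in practice uses simulated maximum
# likelihood over draws of the random parameters, as in RPNB software.
```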

  16. The effect of environment and task on gait parameters after stroke: A randomized comparison of measurement conditions.

    PubMed

    Lord, Susan E; Rochester, Lynn; Weatherall, Mark; McPherson, Kathryn M; McNaughton, Harry K

    2006-07-01

    To assess the effect of environment and a secondary task on gait parameters in community ambulant stroke survivors and to assess the contribution of clinical symptoms to gait performance. A 3x3 randomized factorial design with 2 main factors: task (no task, motor task, cognitive task) and environment (clinic, suburban street, shopping mall). Subjects were assessed in 1 of 3 settings: 2 in the community (a suburban street and a shopping mall) and 1 clinical environment. Twenty-seven people with stroke (mean age, 61+/-11.6y; mean time since stroke onset, 45.8+/-34.2mo), living at home, were recruited from community stroke groups and from a local rehabilitation unit. Selection criteria included the following: ability to give informed consent, unilateral first-ever or recurrent stroke at least 6 months previously, walking independently in the community, a gait speed between 24 and 50 m/min, Mini-Mental State Examination score of 24 or higher, and no severe comorbidity. Not applicable. Gait speed (in m/min), cadence, and step length were assessed by using an accelerometer with adjustable thresholds. Clinical measures hypothesized to influence gait parameters in community environments were also assessed, including fatigue, anxiety and depression, and attentional deficit. Twenty-seven people with a mean baseline gait speed of 42.2+/-5.9 m/min were randomly allocated to 1 of 9 conditions in which the setting and distraction were manipulated. Analysis of variance showed a significant main effect for environment (P = .046) but not for task (P = .37). The interaction between task and environment was not significant (P = .73). Adjusting for baseline gait speed, people walked on average 8.8 m/min faster in the clinic (95% confidence interval, 0.3-17.3 m/min) than in the mall. Scores for fatigue, anxiety and depression, and attentional deficit were higher than normative values but did not influence gait performance. This study suggests that people with chronic stroke cope well with the challenges of varied environments and can maintain their gait speed while performing a secondary task. Despite moderate levels of gait impairment, gait automaticity may be restored over time to a functional level.

  17. Achieving a strongly negative scattering asymmetry factor in random media composed of dual-dipolar particles

    NASA Astrophysics Data System (ADS)

    Wang, B. X.; Zhao, C. Y.

    2018-02-01

    Understanding radiative transfer in random media like micro- or nanoporous and particulate materials, allows people to manipulate the scattering and absorption of radiation, as well as opens new possibilities in applications such as imaging through turbid media, photovoltaics, and radiative cooling. A strong-backscattering phase function, i.e., a negative scattering asymmetry parameter g , is of great interest, which can possibly lead to unusual radiative transport phenomena, for instance, Anderson localization of light. Here we demonstrate that by utilizing the structural correlations and second Kerker condition for a disordered medium composed of randomly distributed silicon nanoparticles, a strongly negative scattering asymmetry factor (g ˜-0.5 ) for multiple light scattering can be realized in the near infrared. Based on the multipole expansion of Foldy-Lax equations and quasicrystalline approximation (QCA), we have rigorously derived analytical expressions for the effective propagation constant and scattering phase function for a random system containing spherical particles, by taking the effect of structural correlations into account. We show that as the concentration of scattering particles rises, the backscattering is also enhanced. Moreover, in this circumstance, the transport mean free path is largely reduced and even becomes smaller than that predicted by independent scattering approximation. We further explore the dependent scattering effects, including the modification of electric and magnetic dipole excitations and far-field interference effect, both induced and influenced by the structural correlations, for volume fraction of particles up to fv˜0.25 . Our results have profound implications in harnessing micro- or nanoscale radiative transfer through random media.

  18. A nonlinear HP-type complementary resistive switch

    NASA Astrophysics Data System (ADS)

    Radtke, Paul K.; Schimansky-Geier, Lutz

    2016-05-01

    Resistive Switching (RS) is the change in resistance of a dielectric under the influence of an external current or electric field. This change is non-volatile, and it is the basis of both the memristor and resistive random access memory. In the latter, high integration densities favor the anti-serial combination of two RS elements into a single cell, termed the complementary resistive switch (CRS). Motivated by the irregular shape of the filament protruding into the device, we suggest a nonlinearity in the resistance-interpolation function, characterized by a single parameter p, thereby expanding upon the original HP memristor. We numerically simulate and analytically solve this model. Further, the nonlinearity allows for its application to the CRS.
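
    For intuition, here is a small sketch of an HP-type memristor state equation combined with a nonlinear resistance-interpolation function; the power-law form R(w) = R_off + (R_on - R_off)·w^p and every parameter value are assumptions made for illustration, not the interpolation function proposed in the paper.

```python
# HP-style memristor with a hypothetical nonlinear interpolation R(w); the drift
# constant lumps mu_v * R_on / D**2 and is chosen only to make the dynamics visible.
import numpy as np

R_on, R_off = 100.0, 16e3        # on/off resistances (ohm), illustrative
k_drift = 1.0e5                  # lumped drift constant (1/(A*s)), illustrative
dt, steps, freq = 1e-5, 20000, 50.0

def simulate(p: float) -> np.ndarray:
    w = 0.1                                        # normalised state in [0, 1]
    current = np.empty(steps)
    for k in range(steps):
        v = np.sin(2 * np.pi * freq * k * dt)      # sinusoidal drive voltage
        R = R_off + (R_on - R_off) * w ** p        # nonlinear interpolation (assumed)
        i = v / R
        w = float(np.clip(w + dt * k_drift * i, 0.0, 1.0))   # HP-style state update
        current[k] = i
    return current

i_lin, i_nonlin = simulate(1.0), simulate(2.0)
print("peak current, p = 1 vs p = 2:", i_lin.max(), i_nonlin.max())
```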

  19. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    NASA Astrophysics Data System (ADS)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.

  20. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  1. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models are evaluated by comparing their prediction of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.

  2. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  3. Nanophase and Composite Optical Materials

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This talk will focus on accomplishments, current developments, and future directions of our work on composite optical materials for microgravity science and space exploration. This research spans the order parameter from quasi-fractal structures such as sol-gels and other aggregated or porous media, to statistically random cluster media such as metal colloids, to highly ordered materials such as layered media and photonic bandgap materials. The common focus is on flexible materials that can be used to produce composite or artificial materials with superior optical properties that could not be achieved with homogeneous materials. Applications of this work to NASA exploration goals such as terraforming, biosensors, solar sails, solar cells, and vehicle health monitoring, will be discussed.

  4. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE PAGES

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    2018-02-21

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.

  5. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.

  6. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
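
    The following sketch illustrates the general Monte Carlo logic behind such a probabilistic design assessment, with an invented limit state and invented distributions standing in for the smart-wing model; it only conveys the underlying idea of estimating a failure probability and ranking random variables by sensitivity.

```python
# Generic Monte Carlo reliability sketch: estimate a failure probability and rank
# random variables by a simple sensitivity measure. Limit state and distributions
# are illustrative stand-ins, not the smart composite wing model.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical random variables (units are arbitrary).
stiffness = rng.normal(135.0, 8.0, n)     # fiber-dominated stiffness
thickness = rng.normal(0.125, 0.006, n)   # ply thickness
load      = rng.normal(32.0, 5.0, n)      # random impact load

# Hypothetical limit state: failure when the load exceeds the capacity.
capacity = 2.8 * stiffness * thickness
failed = load > capacity
print("estimated failure probability:", failed.mean())

# Crude sensitivity factors: correlation of each input with the safety margin.
margin = capacity - load
for name, x in [("stiffness", stiffness), ("thickness", thickness), ("load", load)]:
    print(f"sensitivity ({name}):", round(float(np.corrcoef(x, margin)[0, 1]), 3))
```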

  7. Design Parameters for Subwavelength Transparent Conductive Nanolattices

    DOE PAGES

    Diaz Leon, Juan J.; Feigenbaum, Eyal; Kobayashi, Nobuhiko P.; ...

    2017-09-29

    Recent advancements with the directed assembly of block copolymers have enabled the fabrication over cm2 areas of highly ordered metal nanowire meshes, or nanolattices, which are of significant interest as transparent electrodes. Compared to randomly dispersed metal nanowire networks that have long been considered the most promising next-generation transparent electrode material, such ordered nanolattices represent a new design paradigm that is yet to be optimized. Here in this paper, through optical and electrical simulations, we explore the potential design parameters for such nanolattices as transparent conductive electrodes, elucidating relationships between the nanowire dimensions, defects, and the nanolattices’ conductivity and transmissivity. We find that having an ordered nanowire network significantly decreases the length of nanowires required to attain both high transmissivity and high conductivity, and we quantify the network’s tolerance to defects in relation to other design constraints. Furthermore, we explore how both optical and electrical anisotropy can be introduced to such nanolattices, opening an even broader materials design space and possible set of applications.

  8. Blending Determinism with Evolutionary Computing: Applications to the Calculation of the Molecular Electronic Structure of Polythiophene.

    PubMed

    Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P

    2010-03-09

    A density matrix based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme which is modified by blending it with a deterministic method based on a trial single-particle density matrix P^(0)(R) for the guessed structural parameters (R), which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters (R) defining the polythiophene chain undergo mutation. The scale (λ) of the transformation is optimized by making the energy E(λ) stationary with respect to λ. The robustness and the performance levels of variants of the algorithm are analyzed and compared with those of other derivative-free methods. The method is further tested successfully with optimization of the geometry of bipolaron-doped long PT chains.
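
    As a schematic of the "random mutation hill climbing" loop referred to above, the snippet below minimises a toy stand-in energy over a vector of structural parameters; the deterministic density-matrix step that the paper blends in is only indicated by a comment, and the objective is not a quantum-chemical Hamiltonian.

```python
# Schematic random-mutation hill climbing (RMHC) with a toy objective function.
import numpy as np

rng = np.random.default_rng(6)

def energy(r: np.ndarray) -> float:
    """Toy 'energy' of a set of structural parameters (minimum near r = 1.4)."""
    return float(np.sum((r - 1.4) ** 2) + 0.1 * np.sum(np.cos(8 * r)))

r = rng.uniform(1.0, 2.0, size=10)          # guessed structural parameters
e = energy(r)
for step in range(20000):
    trial = r.copy()
    i = rng.integers(r.size)                # mutate one randomly chosen parameter
    trial[i] += rng.normal(0.0, 0.02)
    e_trial = energy(trial)
    if e_trial < e:                         # hill climbing: accept only improvements
        r, e = trial, e_trial

print("final energy:", round(e, 4))
# In the paper's scheme, each accepted mutation would additionally trigger a
# deterministic density-matrix update (a unitary step generated by H(R)) before
# the next mutation is attempted; that step is omitted in this toy sketch.
```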

  9. Application of validation data for assessing spatial interpolation methods for 8-h ozone or other sparsely monitored constituents.

    PubMed

    Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat

    2013-07-01

    The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
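
    A compact sketch of the interpolation approach singled out above: ordinary kriging with an exponential covariance in which only the range parameter is tuned, here by leave-one-out cross-validation on synthetic station data (the paper distributes an R script; this Python version is only illustrative).

```python
# Ordinary kriging with an exponential covariance C(h) = exp(-h / range); the
# range is selected by leave-one-out cross-validation. Station data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 30
xy = rng.uniform(0, 50, size=(n, 2))                         # station locations (km)
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(n)   # synthetic 8-h ozone

def ok_predict(x0: np.ndarray, xy: np.ndarray, z: np.ndarray, rng_param: float) -> float:
    """Ordinary kriging prediction at x0 for a given range parameter."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    C = np.exp(-d / rng_param)
    c0 = np.exp(-np.linalg.norm(xy - x0, axis=1) / rng_param)
    # Ordinary kriging system with the unbiasedness (Lagrange) constraint.
    A = np.block([[C, np.ones((len(z), 1))], [np.ones((1, len(z))), np.zeros((1, 1))]])
    b = np.concatenate([c0, [1.0]])
    w = np.linalg.solve(A, b)[:-1]
    return float(w @ z)

# Calibrate the range by leave-one-out cross-validation over a small grid.
best = None
for r in [2.0, 5.0, 10.0, 20.0, 40.0]:
    errs = [z[i] - ok_predict(xy[i], np.delete(xy, i, 0), np.delete(z, i), r)
            for i in range(n)]
    rmse = float(np.sqrt(np.mean(np.square(errs))))
    best = min(best, (rmse, r)) if best else (rmse, r)

print("selected range (km) and LOO RMSE:", best[1], round(best[0], 3))
```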

  10. Modified Hartree-Fock-Bogoliubov theory at finite temperature

    NASA Astrophysics Data System (ADS)

    Dinh Dang, Nguyen; Arima, Akito

    2003-07-01

    The modified Hartree-Fock-Bogoliubov (MHFB) theory at finite temperature is derived, which conserves the unitarity relation of the particle-density matrix. This is achieved by constructing a modified-quasiparticle-density matrix, where the fluctuation of the quasiparticle number is microscopically built in. This matrix can be directly obtained from the usual quasiparticle-density matrix by applying the secondary Bogoliubov transformation, which includes the quasiparticle-occupation number. It is shown that, in the limit of constant pairing parameter, the MHFB theory yields the previously obtained modified BCS (MBCS) equations. It is also proved that the modified quasiparticle-random-phase approximation, which is based on the MBCS quasiparticle excitations, conserves the Ikeda sum rule. The numerical calculations of the pairing gap, heat capacity, level density, and level-density parameter within the MBCS theory are carried out for 120Sn. The results show that the superfluid-normal phase transition is completely washed out. The applicability of the MBCS up to a temperature as high as T˜5 MeV is analyzed in detail.

  11. Optimal design of earth-moving machine elements with cusp catastrophe theory application

    NASA Astrophysics Data System (ADS)

    Pitukhin, A. V.; Skobtsov, I. G.

    2017-10-01

    This paper deals with the solution of the optimal design problem for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. The first part of the paper presents a brief description of catastrophe theory, considers the cusp catastrophe, and treats the control parameters as Gaussian stochastic quantities. The statement of the optimal design problem is given in the second part of the paper. It includes the choice of the objective function and independent design variables, and the establishment of system limits. The objective function is defined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. An algorithm of the random search method with interval reduction, subject to side and functional constraints, is given in the last part of the paper. This approach to the optimal design problem can be applied to choose rational ROPS parameters, increasing safety and reducing production and operating expenses.
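
    A minimal sketch of a random search with interval reduction of the kind mentioned above, applied to a stand-in mean-total-cost function under simple bound (side) constraints; the cost model and all numbers are assumptions for illustration only.

```python
# Random search with interval reduction under bound ("side") constraints; the
# objective is a toy stand-in, not the ROPS cost model.
import numpy as np

rng = np.random.default_rng(8)

def mean_total_cost(x: float) -> float:
    """Toy objective: initial cost grows with x, failure cost decays with x."""
    return 2.0 * x + 50.0 * np.exp(-1.5 * x)

lo, hi = 0.5, 10.0                  # side constraints on the design variable
best_x, best_c = None, np.inf
for stage in range(8):              # successively shrink the search interval
    xs = rng.uniform(lo, hi, size=40)
    cs = np.array([mean_total_cost(x) for x in xs])
    i = int(np.argmin(cs))
    if cs[i] < best_c:
        best_x, best_c = float(xs[i]), float(cs[i])
    half = 0.3 * (hi - lo)          # interval reduction around the incumbent
    lo, hi = max(0.5, best_x - half), min(10.0, best_x + half)

print("design variable:", round(best_x, 3), "mean total cost:", round(best_c, 3))
```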

  12. Laser-Aided Directed Energy Deposition of Steel Powder over Flat Surfaces and Edges.

    PubMed

    Caiazzo, Fabrizia; Alfieri, Vittorio

    2018-03-16

    In the framework of Additive Manufacturing of metals, Directed Energy Deposition of steel powder over flat surfaces and edges has been investigated in this paper. The aims are the repair and overhaul of actual, worn-out, high price sensitive metal components. A full-factorial experimental plan has been arranged, the results have been discussed in terms of geometry, microhardness and thermal affection as functions of the main governing parameters, laser power, scanning speed and mass flow rate; dilution and catching efficiency have been evaluated as well to compare quality and effectiveness of the process under conditions of both flat and edge depositions. Convincing results are presented to give grounds for shifting the process to actual applications: namely, no cracks or pores have been found in random cross-sections of the samples in the processing window. Interestingly an effect of the scanning conditions has been proven on the resulting hardness in the fusion zone; therefore, the mechanical characteristics are expected to depend on the processing parameters.

  13. Agronomic, chemical and genetic profiles of hot peppers (Capsicum annuum ssp.).

    PubMed

    De Masi, Luigi; Siviero, Pietro; Castaldo, Domenico; Cautela, Domenico; Esposito, Castrese; Laratta, Bruna

    2007-08-01

    A study on the morphology, productive yield, main quality parameters and genetic variability of eight landraces of hot pepper (Capsicum annuum ssp.) from Southern Italy has been performed. Morphological characters of berries and productivity values were evaluated by agronomic analyses. Chemical and genetic investigations were performed by HPLC and random amplified polymorphic DNA (RAPD)-PCR, respectively. In particular, carotenoid and capsaicinoid (pungency) contents were considered as the main quality parameters of hot pepper. For the eight selected samples, genetic similarity values were calculated from the generated RAPD fragments and a dendrogram of genetic similarity was constructed. All eight landraces exhibited characteristic RAPD patterns that allowed their characterization. Agro-morphological and chemical determinations were found to be adequate for selection, but they proved useful only for plants grown under the same environmental conditions. RAPD application may provide a more reliable approach based on DNA identification. The results of our study led to the identification of three noteworthy populations, suitable for processing, which fitted into different clusters of the dendrogram.

  14. Design Parameters for Subwavelength Transparent Conductive Nanolattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz Leon, Juan J.; Feigenbaum, Eyal; Kobayashi, Nobuhiko P.

    Recent advancements with the directed assembly of block copolymers have enabled the fabrication over cm2 areas of highly ordered metal nanowire meshes, or nanolattices, which are of significant interest as transparent electrodes. Compared to randomly dispersed metal nanowire networks that have long been considered the most promising next-generation transparent electrode material, such ordered nanolattices represent a new design paradigm that is yet to be optimized. Here in this paper, through optical and electrical simulations, we explore the potential design parameters for such nanolattices as transparent conductive electrodes, elucidating relationships between the nanowire dimensions, defects, and the nanolattices’ conductivity and transmissivity. We find that having an ordered nanowire network significantly decreases the length of nanowires required to attain both high transmissivity and high conductivity, and we quantify the network’s tolerance to defects in relation to other design constraints. Furthermore, we explore how both optical and electrical anisotropy can be introduced to such nanolattices, opening an even broader materials design space and possible set of applications.

  15. Study of wavelet packet energy entropy for emotion classification in speech and glottal signals

    NASA Astrophysics Data System (ADS)

    He, Ling; Lech, Margaret; Zhang, Jing; Ren, Xiaomei; Deng, Lihua

    2013-07-01

    Automatic speech emotion recognition has important applications in human-machine communication. The majority of current research in this area is focused on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates the energy entropy of values within selected Wavelet Packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set annotated with three different emotions (angry, neutral and soft) and the Berlin Emotional Speech (BES) database annotated with seven different emotions (angry, bored, disgust, fear, happy, sad and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
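
    To make the proposed feature concrete, the sketch below computes a wavelet-packet energy-entropy vector for a synthetic frame using PyWavelets; the wavelet, decomposition level and entropy definition are illustrative choices rather than the exact settings of the study.

```python
# Wavelet-packet energy-entropy features on a synthetic frame (illustrative settings).
import numpy as np
import pywt

rng = np.random.default_rng(9)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * rng.standard_normal(t.size)  # toy "speech"

level = 4
wp = pywt.WaveletPacket(data=frame, wavelet="db4", mode="symmetric", maxlevel=level)
bands = wp.get_level(level, order="natural")              # 2**level frequency bands

# Energy entropy per band: normalise the band's sample energies into a
# probability vector and take its Shannon entropy.
features = []
for node in bands:
    e = node.data ** 2
    p = e / (e.sum() + 1e-12)
    features.append(float(-(p * np.log2(p + 1e-12)).sum()))

print("wavelet packet energy entropies (first 4 bands):", np.round(features[:4], 2))
```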

  16. Laser-Aided Directed Energy Deposition of Steel Powder over Flat Surfaces and Edges

    PubMed Central

    2018-01-01

    In the framework of Additive Manufacturing of metals, Directed Energy Deposition of steel powder over flat surfaces and edges has been investigated in this paper. The aims are the repair and overhaul of actual, worn-out, high price sensitive metal components. A full-factorial experimental plan has been arranged, the results have been discussed in terms of geometry, microhardness and thermal affection as functions of the main governing parameters, laser power, scanning speed and mass flow rate; dilution and catching efficiency have been evaluated as well to compare quality and effectiveness of the process under conditions of both flat and edge depositions. Convincing results are presented to give grounds for shifting the process to actual applications: namely, no cracks or pores have been found in random cross-sections of the samples in the processing window. Interestingly an effect of the scanning conditions has been proven on the resulting hardness in the fusion zone; therefore, the mechanical characteristics are expected to depend on the processing parameters. PMID:29547571

  17. Detecting labor using graph theory on connectivity matrices of uterine EMG.

    PubMed

    Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C

    2015-08-01

    Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate whether the application of graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods, we first applied them to synthetic graphs, where we detected some differences in the parameter results and changes in the graph model from pregnancy-like graphs to labor-like graphs. Then, we applied the same methods to real signals. We obtained the best differentiation between pregnancy and labor through the same parameters. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world like.
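
    The sketch below shows the general recipe implied above: turn a multi-channel correlation (connectivity) matrix into a thresholded graph and compute a few graph-theory parameters with NetworkX; the synthetic signals and the threshold are illustrative, not the uterine EMG processing performed in the paper.

```python
# From a connectivity matrix to graph-theory parameters (synthetic data, toy threshold).
import numpy as np
import networkx as nx

rng = np.random.default_rng(10)
n_electrodes = 16
signals = rng.standard_normal((n_electrodes, 2000))
signals[1:] += 0.4 * signals[0]                     # inject shared activity

conn = np.abs(np.corrcoef(signals))                 # connectivity matrix
np.fill_diagonal(conn, 0.0)

threshold = 0.2
G = nx.from_numpy_array((conn > threshold).astype(int))

print("density            :", round(nx.density(G), 3))
print("clustering coeff.  :", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("char. path length  :", round(nx.average_shortest_path_length(G), 3))
```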

  18. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. So Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides a performance improvement over MLE in this application.

  19. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
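
    A small simulation in the spirit of the study, assuming invented effect sizes, cluster counts and ICC: generate a cluster randomized trial with a binary cluster-level covariate (which may end up imbalanced across arms) and fit the covariate-adjusted linear mixed model with statsmodels.

```python
# Simulated cluster randomized trial with a binary cluster-level covariate,
# analysed with an adjusted linear mixed model; all numbers are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_clusters, cluster_size = 12, 20
icc, treat_effect, cov_effect = 0.1, 0.3, 0.5

sigma_u = np.sqrt(icc)                      # between-cluster SD (total variance = 1)
sigma_e = np.sqrt(1 - icc)                  # within-cluster SD

rows = []
treat = np.array([1] * (n_clusters // 2) + [0] * (n_clusters // 2))
covariate = rng.binomial(1, 0.5, n_clusters)        # may be imbalanced across arms
u = rng.normal(0, sigma_u, n_clusters)
for c in range(n_clusters):
    y = (treat_effect * treat[c] + cov_effect * covariate[c] + u[c]
         + rng.normal(0, sigma_e, cluster_size))
    rows += [{"cluster": c, "treat": treat[c], "x": covariate[c], "y": v} for v in y]
data = pd.DataFrame(rows)

model = smf.mixedlm("y ~ treat + x", data, groups=data["cluster"]).fit()
print(model.params[["treat", "x"]])
print("covariate imbalance (mean x, treated vs control):",
      data.groupby("treat")["x"].mean().round(2).to_dict())
```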

  20. New criteria for isotropic and textured metals

    NASA Astrophysics Data System (ADS)

    Cazacu, Oana

    2018-05-01

    In this paper an isotropic criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a unique parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and pure shear. If this parameter is zero, the von Mises yield criterion is recovered; if α is positive the yield surface is interior to the von Mises yield surface, whereas when α is negative the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered cubic (FCC) polycrystalline metallic materials shows that this new criterion captures well the numerical yield points. Furthermore, the criterion reproduces well yielding under combined tension-shear loadings for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. This new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for the identification of the material parameters is outlined. Illustration of the predictive capabilities of the new orthotropic criterion is given through comparison between the model predictions and data on aluminum sheet samples.
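
    For orientation, a generic criterion of the type described (homogeneous in the deviatoric invariants, odd in J3, and reducing to von Mises when the extra parameter vanishes) can be written as below; this is an illustrative form only and is not claimed to be the exact expression proposed in the paper.

      f(J_2, J_3) = J_2^{3/2} - \alpha\, J_3 = \tau_Y^{3}

    Here τ_Y denotes the yield stress in pure shear; setting α = 0 gives J_2^{1/2} = τ_Y, i.e. the von Mises criterion, while a nonzero α pulls the surface inside or outside the von Mises cylinder depending on its sign.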

  1. Predicting acidification recovery at the Hubbard Brook Experimental Forest, New Hampshire: evaluation of four models.

    PubMed

    Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin

    2010-12-01

    The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.

  2. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that weights the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave equation is penalised so heavily that it effectively becomes a hard constraint in the inversion. If it is too small, the solution is poorly constrained, as the objective essentially penalises the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when the trade-off parameter in the latter is appropriately scaled, when it is adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. Thus this work contributes a framework for a more automated application of WRI.
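
    Schematically, with u the reconstructed wavefield, m the model, P the sampling operator, d the observed data and A(m)u = q the wave equation, the additive and multiplicative formulations contrast roughly as follows; the precise normalisation and weighting used in the paper may differ.

      J_{\mathrm{add}}(m, u)  = \| P u - d \|^{2} + \lambda^{2}\, \| A(m)\, u - q \|^{2}
      J_{\mathrm{mult}}(m, u) = \| P u - d \|^{2} \cdot \| A(m)\, u - q \|^{2}

    In the multiplicative form the relative weighting of the two terms adjusts itself as their values change during the iterations, which removes the need to pick λ by hand.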

  3. Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
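
    A minimal sketch of joint state/parameter estimation with an extended Kalman filter in the spirit described above: the unknown coefficients are appended to the state vector and treated as slowly varying states. The toy one-degree-of-freedom rotor, the two-parameter set (a single stiffness and damping pair) and all noise levels are placeholders, not the eight-coefficient bearing model of the paper.

      # Sketch: augmented-state EKF for a toy 1-DOF rotor; illustrative values only.
      import numpy as np

      def rotor_step(x, theta, dt):
          """x = [displacement, velocity]; theta = [stiffness k, damping c] (unit mass)."""
          disp, vel = x
          k, c = theta
          acc = -k * disp - c * vel
          return np.array([disp + dt * vel, vel + dt * acc])

      def numerical_jacobian(fun, x, eps=1e-6):
          fx = fun(x)
          J = np.zeros((x.size, x.size))
          for i in range(x.size):
              dx = np.zeros(x.size)
              dx[i] = eps
              J[:, i] = (fun(x + dx) - fx) / eps
          return J

      def ekf_step(z, xa, P, dt, Q, R, beta=1.0):
          """xa = augmented state [disp, vel, k, c]; z = noisy displacement measurement.
          beta < 1 gives a first-order Gauss-Markov parameter model; beta = 1 is the
          random-walk limit used in this sketch."""
          predict = lambda v: np.concatenate([rotor_step(v[:2], v[2:], dt), beta * v[2:]])
          x_pred = predict(xa)
          F = numerical_jacobian(predict, xa)
          P_pred = F @ P @ F.T + Q
          H = np.array([[1.0, 0.0, 0.0, 0.0]])        # only displacement is measured
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T / S
          xa_new = x_pred + (K * (z - x_pred[0])).ravel()
          P_new = (np.eye(4) - K @ H) @ P_pred
          return xa_new, P_new

      # Illustrative run: true stiffness 1.0 and damping 0.1, filter started off-target.
      rng = np.random.default_rng(0)
      dt, k_true, c_true = 1e-3, 1.0, 0.1
      x_true = np.array([0.1, 0.0])
      xa, P = np.array([0.1, 0.0, 0.5, 0.3]), np.eye(4) * 0.1
      Q, R = np.diag([1e-8, 1e-8, 1e-6, 1e-6]), np.array([[1e-4]])
      for _ in range(5000):
          x_true = rotor_step(x_true, (k_true, c_true), dt)
          z = x_true[0] + rng.normal(0.0, 1e-2)
          xa, P = ekf_step(z, xa, P, dt, Q, R)
      print(xa[2:])   # estimates of k and c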

  4. Low-level laser therapy of myofascial pain syndromes of patients with osteoarthritis of knee and hip joints

    NASA Astrophysics Data System (ADS)

    Gasparyan, Levon V.

    2001-04-01

    The purpose of the given research is the comparison of the efficiency of conventional treatment of myofascial pain syndromes in patients with osteoarthritis (OA) of hip and knee joints and of therapy with additional application of low level laser therapy (LLLT), under dynamic control of the clinical picture, rheovasographic and electromyographic examinations, and parameters of lipid peroxidation. The investigation was made on 143 patients with OA of hip and knee joints. Patients were randomized into 2 groups: the basic group included 91 patients receiving conventional therapy with a course of LLLT; the control group included 52 patients receiving conventional treatment only. Transcutaneous (λ = 890 nm, peak output power 5 W, frequency 80 - 3000 Hz) and intravenous (λ = 633 nm, output 2 mW in the vein) laser irradiation were used for LLLT. The study showed that the clinical efficiency of LLLT combined with conventional treatment of myofascial pain syndromes in patients with OA is connected with attenuation of the pain syndrome, normalization of the parameters of the myofascial syndrome, normalization of vascular tension and of the parameters of the rheographic curves, as well as with activation of the antioxidant protection system.

  5. Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix

    NASA Astrophysics Data System (ADS)

    Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.

    2007-05-01

    Polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization properties of the incident light to the Stokes vector of the scattered light. Measurement of all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications, it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, that is, diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium, like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for measurement of the polarization parameters (Δ, d and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.
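
    For reference, the basic relation and the polar (Lu-Chipman-type) factorization that underlie this kind of analysis can be sketched as follows; the paper works with the 3 x 3 linear-polarization submatrix, so the details of its decomposition differ from this generic 4 x 4 form.

      S_{\mathrm{out}} = M\, S_{\mathrm{in}}, \qquad M = M_{\Delta}\, M_{R}\, M_{D}

    where M_Δ, M_R and M_D carry the depolarization Δ, the retardance δ and the diattenuation d, respectively.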

  6. Under What Circumstances Does External Knowledge about the Correlation Structure Improve Power in Cluster Randomized Designs?

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2014-01-01

    Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…

  7. Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.

  8. Radiation-related quality of life parameters after targeted intraoperative radiotherapy versus whole breast radiotherapy in patients with breast cancer: results from the randomized phase III trial TARGIT-A.

    PubMed

    Welzel, Grit; Boch, Angela; Sperk, Elena; Hofmann, Frank; Kraus-Tiefenbacher, Uta; Gerhardt, Axel; Suetterlin, Marc; Wenz, Frederik

    2013-01-07

    Intraoperative radiotherapy (IORT) is a new treatment approach for early stage breast cancer. This study reports on the effects of IORT on radiation-related quality of life (QoL) parameters. Two hundred and thirty women with stage I-III breast cancer (age, 31 to 84 years) were entered into the study. A single-center subgroup of 87 women from the two arms of the randomized phase III trial TARGIT-A (TARGeted Intra-operative radioTherapy versus whole breast radiotherapy for breast cancer) was analyzed. Furthermore, results were compared to non-randomized control groups: n = 90 receiving IORT as a tumor bed boost followed by external beam whole breast radiotherapy (EBRT) outside of TARGIT-A (IORT-boost), and n = 53 treated with EBRT followed by an external-beam boost (EBRT-boost). QoL was collected using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaires C30 (QLQ-C30) and BR23 (QLQ-BR23). The mean follow-up period in the TARGIT-A groups was 32 versus 39 months in the non-randomized control groups. Patients receiving IORT alone reported less general pain (21.3 points), fewer breast (7.0 points) and arm (15.1 points) symptoms, and better role functioning (78.7 points) than patients receiving EBRT (40.9; 19.0; 32.8; and 60.5 points, respectively, P < 0.01). Patients receiving IORT alone also had fewer breast symptoms than TARGIT-A patients receiving IORT followed by EBRT for high risk features on final pathology (IORT-EBRT; 7.0 versus 29.7 points, P < 0.01). There were no significant differences between TARGIT-A patients receiving IORT-EBRT compared to non-randomized IORT-boost or EBRT-boost patients and patients receiving EBRT without a boost. In the randomized setting, important radiation-related QoL parameters after IORT were superior to EBRT. Non-randomized comparisons showed equivalent parameters in the IORT-EBRT group and the control groups.

  9. Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.

    PubMed

    Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena

    2017-06-01

    Temporal changes in the magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
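
    A heavily simplified illustration of the underlying idea: scan the cumulative sum of deviations of study effects from the pooled mean and calibrate the maximum of that path by resampling. The statistic below ignores study weights and the estimation of τ², both of which the paper's procedure handles, so it only sketches the mechanics.

      # Sketch: retrospective CUSUM-type scan for a shift in a sequence of study
      # effect sizes, calibrated by resampling. Simplified: equal study weights,
      # no explicit tau^2 estimation.
      import numpy as np

      def cusum_stat(effects):
          resid = effects - effects.mean()
          path = np.cumsum(resid) / (effects.std(ddof=1) * np.sqrt(len(effects)))
          return np.max(np.abs(path))

      def resampled_pvalue(effects, n_rep=2000, seed=0):
          rng = np.random.default_rng(seed)
          observed = cusum_stat(effects)
          null = [cusum_stat(rng.choice(effects, size=len(effects), replace=True))
                  for _ in range(n_rep)]
          return float(np.mean(np.array(null) >= observed))

      # Toy example: the underlying mean effect shifts upward after study 15.
      rng = np.random.default_rng(42)
      effects = np.concatenate([rng.normal(0.2, 0.15, 15), rng.normal(0.5, 0.15, 10)])
      print(cusum_stat(effects), resampled_pvalue(effects))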

  10. Prebiotics, Prosynbiotics and Synbiotics: Can They Reduce Plasma Oxidative Stress Parameters? A Systematic Review.

    PubMed

    Salehi-Abargouei, Amin; Ghiasvand, Reza; Hariri, Mitra

    2017-03-01

    This study assessed the effectiveness of prebiotics, probiotics and synbiotics in reducing serum oxidative stress parameters. PubMed/Medline, Ovid, Google Scholar, ISI Web of Science and SCOPUS were searched up to September 2016. English-language randomized clinical trials reporting the effect of prebiotic, probiotic or synbiotic interventions on serum oxidative stress parameters in human adults were included. Twenty-one randomized clinical trials met the inclusion criteria for the systematic review. Two studies investigated prebiotics, four studies synbiotics and fifteen studies probiotics. According to our systematic review, prebiotics could decrease malondialdehyde and increase superoxide dismutase, but the evidence is not sufficient. In comparison with fructo-oligosaccharide, inulin is much more useful for oxidative stress reduction. Using probiotics with dairy products could reduce oxidative stress significantly, but probiotics in supplement form did not have any effect on oxidative stress. There is limited but supportive evidence that prebiotics, probiotics and synbiotics are effective for reducing oxidative stress parameters. Further randomized clinical trials with a longer duration of intervention, especially in populations with increased oxidative stress, are needed to provide more definitive results before any recommendation for clinical use of these interventions.

  11. Effect of aromatherapy massage on anxiety, depression, and physiologic parameters in older patients with the acute coronary syndrome: A randomized clinical trial.

    PubMed

    Bahrami, Tahereh; Rejeh, Nahid; Heravi-Karimooi, Majideh; Vaismoradi, Mojtaba; Tadrisi, Seyed Davood; Sieloff, Christina

    2017-12-01

    This study aimed to investigate the effect of aromatherapy massage on anxiety, depression, and physiologic parameters in older patients with acute coronary syndrome. This randomized controlled trial was conducted on 90 older women with acute coronary syndrome. The participants were randomly assigned into the intervention and control groups (n = 45). The intervention group received reflexology with lavender essential oil, but the control group only received routine care. Physiologic parameters and the levels of anxiety and depression in the hospital were evaluated using a checklist and the Hospital Anxiety and Depression Scale, respectively, before and immediately after the intervention. Significant differences in the levels of anxiety and depression were reported between the groups after the intervention. The analysis of physiological parameters revealed a statistically significant reduction (P < .05) in systolic blood pressure, diastolic blood pressure, mean arterial pressure, and heart rate. However, no significant difference was observed in the respiratory rate. Aromatherapy massage can be considered by clinical nurses an efficient therapy for alleviating psychological and physiological responses among older women suffering from acute coronary syndrome. © 2017 John Wiley & Sons Australia, Ltd.

  12. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663

  13. Probabilistic Design Methodology and its Application to the Design of an Umbilical Retract Mechanism

    NASA Technical Reports Server (NTRS)

    Onyebueke, Landon; Ameye, Olusesan

    2002-01-01

    A lot has been learned from past experience with structural and machine element failures. The understanding of failure modes and the application of an appropriate design analysis method can lead to improved structural and machine element safety as well as serviceability. To apply Probabilistic Design Methodology (PDM), all uncertainties are modeled as random variables with selected distribution types, means, and standard deviations. It is quite difficult to achieve a robust design without considering the randomness of the design parameters, which is what happens when the Deterministic Design Approach is used. The US Navy has a fleet of submarine-launched ballistic missiles. An umbilical plug joins the missile to the submarine in order to provide electrical and cooling water connections. As the missile leaves the submarine, an umbilical retract mechanism retracts the umbilical plug clear of the advancing missile after disengagement during launch and restrains the plug in the retracted position. The design of the current retract mechanism in use was based on the deterministic approach, which puts emphasis on factor of safety. A new umbilical retract mechanism that is simpler in design, lighter in weight, more reliable, easier to adjust, and more cost effective has become desirable, since this will increase the performance and efficiency of the system. This paper reports on a recent project performed at Tennessee State University for the US Navy that involved the application of PDM to the design of an umbilical retract mechanism. This paper demonstrates how the use of PDM led to the minimization of weight and cost, and the maximization of reliability and performance.

  14. Atomic Layer Deposited Oxide-Based Nanocomposite Structures with Embedded CoPtx Nanocrystals for Resistive Random Access Memory Applications.

    PubMed

    Wang, Lai-Guo; Cao, Zheng-Yi; Qian, Xu; Zhu, Lin; Cui, Da-Peng; Li, Ai-Dong; Wu, Di

    2017-02-22

    Al2O3- or HfO2-based nanocomposite structures with embedded CoPtx nanocrystals (NCs) on TiN-coated Si substrates have been prepared by a combination of thermal atomic layer deposition (ALD) and plasma-enhanced ALD for resistive random access memory (RRAM) applications. The impact of CoPtx NCs and their average size/density on the resistive switching properties has been explored. Compared to the control sample without CoPtx NCs, ALD-derived Pt/oxide/100 cycle-CoPtx NCs/TiN/SiO2/Si exhibits a typical bipolar, reliable, and reproducible resistive switching behavior, such as a sharp distribution of RRAM parameters, smaller set/reset voltages, a stable resistance ratio (≥10²) of OFF/ON states, better switching endurance up to 10⁴ cycles, and longer data retention over 10⁵ s. A possible resistive switching mechanism based on nanocomposite structures of oxide/CoPtx NCs has been proposed. The dominant conduction mechanisms in the low- and high-resistance states of oxide-based device units with embedded CoPtx NCs are Ohmic behavior and space-charge-limited current, respectively. The insertion of CoPtx NCs can effectively improve the formation of conducting filaments due to the CoPtx NC-enhanced electric field intensity. Besides excellent resistive switching performance, the nanocomposite structures also simultaneously present ferromagnetic properties. This work provides a flexible pathway, combining PEALD and TALD compatible with state-of-the-art Si-based technology, for multifunctional electronic device applications containing RRAM.

  15. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.

  16. Contact stiffness of regularly patterned multi-asperity interfaces

    NASA Astrophysics Data System (ADS)

    Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian

    2018-02-01

    Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.

  17. A chi-square goodness-of-fit test for non-identically distributed random variables: with application to empirical Bayes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, W.J.; Cox, D.D.; Martz, H.F.

    1997-12-01

    When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
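
    A rough sketch of how such a check can be set up for the binomial/beta case with unequal sample sizes: fit the beta prior by maximising the marginal (beta-binomial) likelihood, transform each observation through its fitted prior-predictive distribution, and compare the binned transforms to uniformity with a chi-square statistic. The binning device below (a randomized probability integral transform) is a generic construction for illustration, not the specific test statistic developed in the paper.

      # Sketch: chi-square goodness-of-fit check of a beta prior in an
      # empirical-Bayes binomial setting with unequal sample sizes n_i.
      import numpy as np
      from scipy import optimize, stats

      def neg_marginal_loglik(params, x, n):
          a, b = np.exp(params)                       # keep a, b positive
          return -np.sum(stats.betabinom.logpmf(x, n, a, b))

      def chi2_gof(x, n, n_bins=5, seed=0):
          rng = np.random.default_rng(seed)
          res = optimize.minimize(neg_marginal_loglik, x0=[0.0, 0.0], args=(x, n))
          a, b = np.exp(res.x)
          # randomized PIT: uniform on [F(x-1), F(x)] under the fitted prior-predictive
          lo = stats.betabinom.cdf(x - 1, n, a, b)
          hi = stats.betabinom.cdf(x, n, a, b)
          u = lo + rng.uniform(size=len(x)) * (hi - lo)
          observed, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))
          expected = np.full(n_bins, len(x) / n_bins)
          chi2 = np.sum((observed - expected) ** 2 / expected)
          dof = n_bins - 1 - 2                        # two estimated prior parameters
          return chi2, stats.chi2.sf(chi2, dof), (a, b)

      # Toy data: failure counts with unequal demand counts per plant (invented).
      rng = np.random.default_rng(1)
      n = rng.integers(20, 200, size=30)
      p = rng.beta(2.0, 50.0, size=30)
      x = rng.binomial(n, p)
      print(chi2_gof(x, n))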

  18. Insights into the latent multinomial model through mark-resight data on female grizzly bears with cubs-of-the-year

    USGS Publications Warehouse

    Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.

    2013-01-01

    Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.

  19. Applications of Random Differential Equations to Engineering Science. Wave Propagation in Turbulent Media and Random Linear Hyperbolic Systems.

    DTIC Science & Technology

    1981-11-10

    1976), 745-754. 4. (with W. C. Tam) Periodic and traveling wave solutions to Volterra - Lotka equation with diffusion. Bull. Math. Biol. 38 (1976), 643...with applications [17,19,20). (5) A general method for reconstructing the mutual coherent function of a static or moving source from the random

  20. A Bayesian ridge regression analysis of congestion's impact on urban expressway safety.

    PubMed

    Shi, Qi; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2016-03-01

    With the rapid growth of traffic in urban areas, concerns about congestion and traffic safety have been heightened. This study leveraged both the Automatic Vehicle Identification (AVI) system and the Microwave Vehicle Detection System (MVDS) installed on an expressway in Central Florida to explore how congestion impacts crash occurrence in urban areas. Multiple congestion measures from the two systems were developed. To ensure more precise estimates of congestion's effects, the traffic data were aggregated into peak and non-peak hours. Multicollinearity among traffic parameters was examined. The results showed the presence of multicollinearity, especially during peak hours. As a response, ridge regression was introduced to cope with this issue. Poisson models with uncorrelated random effects, correlated random effects, and both correlated random effects and random parameters were constructed within the Bayesian framework. It was proven that correlated random effects could significantly enhance model performance. The random parameters model has similar goodness-of-fit compared with the model with only correlated random effects. However, by accounting for the unobserved heterogeneity, more variables were found to be significantly related to crash frequency. The models indicated that congestion increased crash frequency during peak hours, while during non-peak hours it was not a major crash contributing factor. Using the random parameter model, the three congestion measures were compared. It was found that all congestion indicators had similar effects, while the Congestion Index (CI) derived from MVDS data was a better congestion indicator for safety analysis. Also, analyses showed that the segments with higher congestion intensity could increase not only property damage only (PDO) crashes but also more severe crashes. In addition, the issues regarding the necessity to incorporate a specific congestion indicator for congestion's effects on safety and to address the multicollinearity between explanatory variables were also discussed. By including a specific congestion indicator, the model performance significantly improved. When comparing models with and without ridge regression, the magnitude of the coefficients was altered in the presence of multicollinearity. These conclusions suggest that the use of an appropriate congestion measure and consideration of multicollinearity among the variables would improve the models and our understanding of the effects of congestion on traffic safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
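
    As a toy illustration of why ridge-type shrinkage helps here, the snippet below fits ordinary least squares and a ridge regression to two nearly collinear "congestion" predictors; it is a plain Gaussian regression for exposition, not the Bayesian Poisson random-effects models estimated in the study, and all variable names and values are invented.

      # Sketch: ridge regularization stabilizes coefficients under multicollinearity.
      import numpy as np
      from sklearn.linear_model import LinearRegression, Ridge

      rng = np.random.default_rng(5)
      n = 300
      congestion_a = rng.normal(size=n)
      congestion_b = congestion_a + rng.normal(scale=0.05, size=n)   # nearly collinear
      X = np.column_stack([congestion_a, congestion_b])
      y = 0.5 * congestion_a + 0.5 * congestion_b + rng.normal(scale=1.0, size=n)

      print(LinearRegression().fit(X, y).coef_)   # can be unstable, opposite-signed
      print(Ridge(alpha=10.0).fit(X, y).coef_)    # shrunk toward a stable, shared effect

    With strongly collinear predictors the unpenalized coefficients can take large, opposite-signed values from sample to sample, whereas the penalized fit spreads the common effect more stably across the two variables.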

  1. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    PubMed

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
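
    A small sketch of the data-generating side of such a simulation together with a quick overdispersion check, assuming statsmodels for the naive binomial GLM; the OLRE and Beta-Binomial fits themselves (done in the paper with mixed-model machinery) are not reproduced here, and all values are illustrative.

      # Sketch: Beta-Binomial overdispersion, detected via a plain binomial GLM's
      # dispersion statistic (the groundwork for choosing between an OLRE and a
      # Beta-Binomial model). Illustrative values only.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_obs, trials, rho = 200, 30, 0.1           # rho controls the overdispersion
      x = rng.normal(size=n_obs)
      mu = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x)))
      # Beta-Binomial mixture: per-observation success probability drawn from a Beta
      a = mu * (1.0 - rho) / rho
      b = (1.0 - mu) * (1.0 - rho) / rho
      p = rng.beta(a, b)
      y = rng.binomial(trials, p)

      X = sm.add_constant(x)
      fit = sm.GLM(np.column_stack([y, trials - y]), X,
                   family=sm.families.Binomial()).fit()
      dispersion = fit.pearson_chi2 / fit.df_resid
      print(fit.params, dispersion)               # dispersion >> 1 signals overdispersion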

  2. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
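
    For known category probabilities the quantity defined above can be written in closed form, which makes a quick numerical check straightforward; the naive Monte Carlo estimate below is only a stand-in for intuition and is not the U-statistic analysed in the paper.

      # Sketch: dissimilarity of two discrete distributions P and Q -- the probability
      # that one draw from P falls in a category unseen in n draws from Q.
      import numpy as np

      def dissimilarity(p, q, n):
          """Exact value for known category probabilities p and q."""
          return float(np.sum(p * (1.0 - q) ** n))

      def naive_estimate(sample_p, sample_q, n, reps=5000, seed=3):
          """Crude Monte Carlo stand-in using observed samples from the two 'urns'."""
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(reps):
              draw = rng.choice(sample_p)
              seen = set(rng.choice(sample_q, size=n, replace=True))
              hits += draw not in seen
          return hits / reps

      p = np.array([0.5, 0.3, 0.1, 0.1])
      q = np.array([0.6, 0.3, 0.05, 0.05])
      rng = np.random.default_rng(1)
      print(dissimilarity(p, q, n=10))
      print(naive_estimate(rng.choice(4, 2000, p=p), rng.choice(4, 2000, p=q), n=10))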

  3. Effect of Subgingivally Delivered 10% Emblica officinalis Gel as an Adjunct to Scaling and Root Planing in the Treatment of Chronic Periodontitis - A Randomized Placebo-controlled Clinical Trial.

    PubMed

    Grover, Shilpa; Tewari, Shikha; Sharma, Rajinder K; Singh, Gajendra; Yadav, Aparna; Naula, Satish C

    2016-06-01

    Emblica officinalis fruit possesses varied medicinal properties, including cytoprotective, antimicrobial, antioxidant, antiresorptive and anti-inflammatory activity. The present study aimed to investigate the effect of subgingival application of an indigenously prepared E. officinalis (Amla) sustained-release gel adjunctive to scaling and root planing (SRP) on chronic periodontitis. Forty-six patients (528 sites) were randomly assigned to a control group (23; 264 sites): SRP + placebo gel, and a test group (23; 264 sites): SRP + 10% E. officinalis gel application. Periodontal parameters (plaque index, gingival index, probing pocket depth (PPD), clinical attachment level (CAL) and modified sulcus bleeding index (mSBI)) were assessed at baseline and at 2 and 3 months post-therapy. Forty patients (470 sites) completed the trial. When test and control sites were compared, significantly greater reductions in mean PPD, mSBI, number of sites with PPD = 5-6 mm, PPD ≥ 7 mm and CAL ≥ 6 mm, and greater CAL gain, were achieved in test sites at 2 and 3 months post-therapy (p < 0.05). Locally delivered 10% E. officinalis sustained-release gel used as an adjunct to SRP may be more effective in reducing inflammation and periodontal destruction in patients with chronic periodontitis when compared with SRP alone. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Experiments of 10 Gbit/sec quantum stream cipher applicable to optical Ethernet and optical satellite link

    NASA Astrophysics Data System (ADS)

    Hirota, Osamu; Ohhata, Kenichi; Honda, Makoto; Akutsu, Shigeto; Doi, Yoshifumi; Harasawa, Katsuyoshi; Yamashita, Kiichi

    2009-08-01

    The security issue for the next generation optical network, which realizes a "Cloud Computing System Service with data center", is an urgent problem. In such a network, encryption at the physical layer, which provides super security and small delay, should be employed. It must, however, provide very high speed encryption because the basic link is operated at 2.5 Gbit/sec or 10 Gbit/sec. The quantum stream cipher by the Yuen-2000 protocol (Y-00) is a completely new type of random cipher, the so-called Gauss-Yuen random cipher, which can break the Shannon limit for the symmetric key cipher. We develop such a cipher with a good balance of security, speed and cost performance. In the SPIE conference on quantum communication and quantum imaging V, we reported a demonstration of a 2.5 Gbit/sec system for the commercial link and proposed how to improve it to 10 Gbit/sec. This paper reports a demonstration of the Y-00 cipher system which works at 10 Gbit/sec. A transmission test in a laboratory is carried out to obtain basic data on what parameters are important for operation in real commercial networks. In addition, we give some theoretical results on the security. It is clarified that the necessary condition to break the Shannon limit indeed requires a quantum phenomenon, and that a fully information-theoretically secure system is available in the satellite link application.

  5. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    PubMed

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.

  6. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are the ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model.

  7. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are the ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  8. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
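
    A compact sketch of the regression step described above, assuming scikit-learn; the synthetic error model and the three features (measured depth, amplitude, radial position) are invented stand-ins for the paper's tailored feature vector.

      # Sketch: learning a per-pixel distance correction with a random forest
      # regressor. The synthetic 'camera' error model and features are placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 20000
      true_dist = rng.uniform(0.5, 5.0, n)              # metres
      amplitude = rng.uniform(0.05, 1.0, n)             # surface reflectivity proxy
      radial = rng.uniform(0.0, 1.0, n)                 # normalized distance from image centre
      # toy systematic error: wiggles with distance, worse for dark and peripheral pixels
      error = (0.02 * np.sin(4.0 * true_dist) + 0.03 * (1.0 - amplitude)
               + 0.01 * radial + rng.normal(0.0, 0.005, n))
      measured = true_dist + error

      features = np.column_stack([measured, amplitude, radial])
      forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=20, random_state=0)
      forest.fit(features[:15000], error[:15000])       # train on calibration pixels

      pred_error = forest.predict(features[15000:])
      corrected = measured[15000:] - pred_error
      print(np.std(measured[15000:] - true_dist[15000:]),
            np.std(corrected - true_dist[15000:]))      # residual error before vs after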

  9. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: Random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196

  10. Sunspot random walk and 22-year variation

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua

    2012-01-01

    We examine two stochastic models for consistency with observed long-term secular trends in sunspot number and a faint, but semi-persistent, 22-yr signal: (1) a null hypothesis, a simple one-parameter random-walk model of sunspot-number cycle-to-cycle change, and, (2) an alternative hypothesis, a two-parameter random-walk model with an imposed 22-yr alternating amplitude. The observed secular trend in sunspots, seen from solar cycle 5 to 23, would not be an unlikely result of the accumulation of multiple random-walk steps. Statistical tests show that a 22-yr signal can be resolved in historical sunspot data; that is, the probability is low that it would be realized from random data. On the other hand, the 22-yr signal has a small amplitude compared to random variation, and so it has a relatively small effect on sunspot predictions. Many published predictions for cycle 24 sunspots fall within the dispersion of previous cycle-to-cycle sunspot differences. The probability is low that the Sun will, with the accumulation of random steps over the next few cycles, walk down to a Dalton-like minimum. Our models support published interpretations of sunspot secular variation and 22-yr variation resulting from cycle-to-cycle accumulation of dynamo-generated magnetic energy.
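
    A toy rendering of the two competing models described above, useful only to fix ideas; the step variance, the 22-yr term amplitude and the starting level are arbitrary illustrative numbers, not the values fitted in the paper.

      # Sketch: a plain random walk in cycle-to-cycle sunspot amplitude versus a
      # walk with an imposed 22-yr (alternating) term. Illustrative values only.
      import numpy as np

      rng = np.random.default_rng(7)
      n_cycles, sigma, alt_amp, start = 30, 25.0, 10.0, 120.0

      steps = rng.normal(0.0, sigma, n_cycles)
      null_model = start + np.cumsum(steps)                     # one-parameter walk
      alternating = alt_amp * (-1.0) ** np.arange(n_cycles)     # 22-yr signature
      alt_model = start + np.cumsum(steps + alternating)        # two-parameter walk

      print(null_model[-5:])
      print(alt_model[-5:])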

  11. A New Metamodeling Approach for Time-dependent Reliability of Dynamic Systems with Random Parameters Excited by Input Random Processes

    DTIC Science & Technology

    2014-04-09

    Authors: Igor Baseski, Dorin Drignei, Zissimos P. Mourelatos, Monica Majcher (Oakland University, Rochester, MI 48309); contract number W56HZV-04-2-0001. (Report documentation fragments only; no abstract text is available in this record.)

  12. Effects of laryngeal manual therapy (LMT) and transcutaneous electrical nerve stimulation (TENS) in vocal folds diadochokinesis of dysphonic women: a randomized clinical trial.

    PubMed

    Siqueira, Larissa Thaís Donalonso; Silverio, Kelly Cristina Alves; Brasolotto, Alcione Ghedini; Guirro, Rinaldo Roberto de Jesus; Carneiro, Christiano Giácomo; Behlau, Mara

    2017-05-15

    To verify and compare the effect of transcutaneous electrical nerve stimulation (TENS) and laryngeal manual therapy (LMT) on laryngeal diadochokinesis (DDK) of dysphonic women. Twenty women with bilateral vocal nodules participated and were equally divided into: LMT Group - LMT application; TENS Group - TENS application; both groups received 12 sessions of treatment, twice a week, with a duration of 20 minutes each, applied by the same therapist. The women were evaluated as to laryngeal DDK at three moments: diagnostic, pre-treatment, and post-treatment, which produced three groups of measurements. The DDK recording was performed with intersected repetition of vowels /a/ and / i/. The analysis of vowels was performed by the program Motor Speech Profile Advanced (MSP)-KayPentax. The DDK parameters of the three evaluations were compared by means of the paired t-test (p≤0.05). The measurements of laryngeal DDK parameters were similar in the phase without treatment, indicating no individual variability over time. There was no change with respect to the speed of DDK after intervention, but after LMT, DDK of the vowel /i/ was more stable in terms of the duration of the emissions and intensity of emissions repeated. These results show improved coordination of vocal folds movement during phonation. There were no changes in the DDK parameters following TENS. LMT provides greater regularity of movement during laryngeal diadochokinesis in dysphonic women, which extends knowledge on the effect of rebalancing the larynx muscles during phonation, although TENS does not impact laryngeal diadochokinesis.

  13. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  14. Applying petrophysical models to radar travel time and electrical resistivity tomograms: Resolution-dependent limitations

    USGS Publications Warehouse

    Day-Lewis, F. D.; Singha, K.; Binley, A.M.

    2005-01-01

    Geophysical imaging has traditionally provided qualitative information about geologic structure; however, there is increasing interest in using petrophysical models to convert tomograms to quantitative estimates of hydrogeologic, mechanical, or geochemical parameters of interest (e.g., permeability, porosity, water content, and salinity). Unfortunately, petrophysical estimation based on tomograms is complicated by limited and variable image resolution, which depends on (1) measurement physics (e.g., electrical conduction or electromagnetic wave propagation), (2) parameterization and regularization, (3) measurement error, and (4) spatial variability. We present a framework to predict how core-scale relations between geophysical properties and hydrologic parameters are altered by the inversion, which produces smoothly varying pixel-scale estimates. We refer to this loss of information as "correlation loss." Our approach upscales the core-scale relation to the pixel scale using the model resolution matrix from the inversion, random field averaging, and spatial statistics of the geophysical property. Synthetic examples evaluate the utility of radar travel time tomography (RTT) and electrical-resistivity tomography (ERT) for estimating water content. This work provides (1) a framework to assess tomograms for geologic parameter estimation and (2) insights into the different patterns of correlation loss for ERT and RTT. Whereas ERT generally performs better near boreholes, RTT performs better in the interwell region. Application of petrophysical models to the tomograms in our examples would yield misleading estimates of water content. Although the examples presented illustrate the problem of correlation loss in the context of near-surface geophysical imaging, our results have clear implications for quantitative analysis of tomograms for diverse geoscience applications. Copyright 2005 by the American Geophysical Union.

  15. Therapeutic exercise for rotator cuff tendinopathy: a systematic review of contextual factors and prescription parameters.

    PubMed

    Littlewood, Chris; Malliaras, Peter; Chance-Larsen, Ken

    2015-06-01

    Exercise is widely regarded as an effective intervention for symptomatic rotator cuff tendinopathy but the prescription is diverse and the important components of such programmes are not well understood. The objective of this study was to systematically review the contextual factors and prescription parameters of published exercise programmes for rotator cuff tendinopathy, to generate recommendations based on current evidence. An electronic search of AMED, CiNAHL, CENTRAL, MEDLINE, PEDro and SPORTDiscus was undertaken from their inception to June 2014 and supplemented by hand searching. Eligible studies included randomized controlled trials evaluating the effectiveness of exercise in participants with rotator cuff tendinopathy. Included studies were appraised using the Cochrane risk of bias tool and synthesized narratively. Fourteen studies were included, and suggested that exercise programmes are widely applicable and can be successfully designed by physiotherapists with varying experience; whether the exercise is completed at home or within a clinic setting does not appear to matter and neither does pain production or pain avoidance during exercise; inclusion of some level of resistance does seem to matter although the optimal level is unclear, the optimal number of repetitions is also unclear but higher repetitions might confer superior outcomes; three sets of exercise are preferable to two or one set but the optimal frequency is unknown; most programmes should demonstrate clinically significant outcomes by 12 weeks. This systematic review has offered preliminary guidance in relation to contextual factors and prescription parameters to aid development and application of exercise programmes for rotator cuff tendinopathy.

  16. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.

  17. Effects of data structure on the estimation of covariance functions to describe genotype by environment interactions in a reaction norm model

    PubMed Central

    Calus, Mario PL; Bijma, Piter; Veerkamp, Roel F

    2004-01-01

    Covariance functions have been proposed to predict breeding values and genetic (co)variances as a function of phenotypic within herd-year averages (environmental parameters) to include genotype by environment interaction. The objective of this paper was to investigate the influence of the definition of environmental parameters and non-random use of sires on expected breeding values and estimated genetic variances across environments. Breeding values were simulated as a linear function of simulated herd effects. The definition of environmental parameters hardly influenced the results. In situations with random use of sires, estimated genetic correlations between the trait expressed in different environments were 0.93, 0.93 and 0.97, whereas the simulated value was 0.89, and estimated genetic variances deviated by up to 30% from the simulated values. Non-random use of sires, poor genetic connectedness and small herd size had a large impact on the estimated covariance functions, expected breeding values and calculated environmental parameters. Estimated genetic correlations between a trait expressed in different environments were biased upwards, and breeding values were more biased when genetic connectedness became poorer and herd composition more diverse. The best possible solution at this stage is to use environmental parameters combining large numbers of animals per herd, while losing some information on genotype by environment interaction in the data. PMID:15339629

  18. Comparison of Random Forest and Parametric Imputation Models for Imputing Missing Data Using MICE: A CALIBER Study

    PubMed Central

    Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-01-01

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914

  19. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
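
    As a concrete illustration of the comparison described above, the sketch below sets up a random-forest-based chained-equations imputation next to the default parametric one using scikit-learn's IterativeImputer. This is only a minimal stand-in: the study itself used MICE with a random forest method on the CALIBER data, whereas the data, estimator settings, and missingness mechanism here are assumed for demonstration.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    x1 = rng.normal(size=n)
    x2 = x1 ** 2 + rng.normal(scale=0.5, size=n)   # nonlinear dependence a parametric model may miss
    X = np.column_stack([x1, x2])
    X[rng.random(n) < 0.3, 1] = np.nan             # make x2 roughly 30% missing at random

    # Random-forest-based chained-equations imputation (a single imputed data set shown).
    rf_imputed = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=100, random_state=0),
        max_iter=10, random_state=0).fit_transform(X)

    # Default parametric (Bayesian ridge) chained equations, for comparison.
    param_imputed = IterativeImputer(sample_posterior=True, max_iter=10,
                                     random_state=0).fit_transform(X)
    ```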

  20. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.

  1. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
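
    A minimal numerical sketch of the weight-adjustment idea discussed above: Horvitz-Thompson weights from known phase-two sampling fractions are calibrated so that the weighted total of an auxiliary variable matches its known cohort total, which reduces design-based variance when the auxiliary variable is correlated with the quantity being totalled. The cohort, strata, and single-variable linear calibration below are illustrative assumptions, not the survey-package implementation used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical finite cohort (phase one) with an auxiliary variable known for every member.
    N = 10_000
    aux = rng.normal(size=N)                         # auxiliary variable correlated with the target
    y = 2.0 + 1.5 * aux + rng.normal(size=N)         # contribution known only for phase-two subjects

    # Phase two: Bernoulli sampling within two strata with known sampling fractions.
    strata = (aux > 0).astype(int)
    frac = np.where(strata == 1, 0.5, 0.1)
    sel = rng.random(N) < frac
    w = 1.0 / frac[sel]                              # Horvitz-Thompson weights

    ht_total = np.sum(w * y[sel])                    # plain HT estimate of the cohort total of y

    # Linear calibration: adjust weights so the weighted total of aux equals its known cohort total.
    x = aux[sel]
    lam = (aux.sum() - np.sum(w * x)) / np.sum(w * x * x)
    w_cal = w * (1.0 + lam * x)
    cal_total = np.sum(w_cal * y[sel])

    print(f"true {y.sum():.0f}  HT {ht_total:.0f}  calibrated {cal_total:.0f}")
    ```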

  2. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  3. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Coupling of light into the fundamental diffusion mode of a scattering medium (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ojambati, Oluwafemi S.; Yılmaz, Hasan; Lagendijk, Ad; Mosk, Allard P.; Vos, Willem L.

    2016-03-01

    The diffusion equation describes the energy density inside a scattering medium such as biological tissues and paint [1]. The solution of the diffusion equation is a sum over a complete set of eigensolutions that shows a characteristic linear decrease with depth in the medium. It would be of particular interest to launch energy into the fundamental eigensolution, as this opens the opportunity to achieve a much greater internal energy density. For applications in optics, an enhanced energy density is vital for solid-state lighting, light harvesting in solar cells, low-threshold random lasers, and biomedical optics. Here we demonstrate the first ever selective coupling of optical energy into a diffusion eigensolution of a scattering medium of zinc oxide (ZnO) paint. To this end, we exploit wavefront shaping to selectively couple energy into the fundamental diffusion mode, employing fluorescence of nanoparticles randomly positioned inside the medium as a probe of the energy density. We observe an enhanced fluorescence in the case of optimized incident wavefronts, and the enhancement increases with sample thickness, a typical mesoscopic control parameter. We successfully interpret our result by invoking the fundamental eigensolution of the diffusion equation, and we obtain excellent agreement with our observations, even in the absence of adjustable parameters [2]. References [1] R. Pierrat, P. Ambichl, S. Gigan, A. Haber, R. Carminati, and R. Rotter, Proc. Natl. Acad. Sci. U.S.A. 111, 17765 (2014). [2] O. S. Ojambati, H. Yilmaz, A. Lagendijk, A. P. Mosk, and W. L. Vos, arXiv:1505.08103.

  5. Performance in population models for count data, part II: a new SAEM algorithm

    PubMed Central

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Mixed-effects analysis of count data from clinical trials has recently become widely used. However, algorithms available for the parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009). PMID:19680795

  6. Trans-scleral selective laser trabeculoplasty (SLT) without a gonioscopy lens (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Belkin, Michael; Geffen, Noa; Goldenfeld, Modi; Ofir, Shay; Belkin, Avner; Assia, Ehud

    2016-03-01

    Developing a one-second automatic glaucoma treatment using trans-scleral laser trabeculoplasty (LTP) without a gonioscopy lens. Purpose: Developing an LTP device for delivering multiple simultaneous trans-scleral applications of low-energy laser irradiation to the trabecular meshwork (TM) for reducing intraocular pressure (IOP). Methods: Proof of concept: A randomized, masked, controlled trial was performed on open-angle glaucoma patients. The control group underwent conventional SLT (100 laser spots through a gonioscope over 360 degrees directly on the TM). The trial group underwent irradiation by the same laser at the same irradiation parameters on the sclera overlying the TM. Topical glaucoma therapy was not changed during the 12-month trial. Feasibility trial: Using optimized laser parameters, 60 discrete applications were administered at similar locations on patients' sclera. Results: Proof of concept: In the trans-scleral group (N=15), IOP decreased from 20.21 mmHg before treatment to 16.00 mmHg (27.1%) at one year. The corresponding numbers for the control group (n=15) were 21.14 mmHg and 14.30 mmHg (23.4%). There was no statistical difference between the two groups in IOP reduction. The complication rate was significantly higher in the control group. Trial 2: IOP was reduced from an average of 25.3 mmHg to 19.3 mmHg (23.7%) in the 11 patients. Conclusions: Laser coherency, lost in tissue transmission, is not required for the therapeutic effect. The new method will possibly enable treatment of angle-closure glaucoma as well as simultaneous application of all laser spots to the sclera. When used conjointly with target acquisition, it will make automatic glaucoma treatment feasible in less than one second.

  7. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
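
    The sketch below illustrates the general idea of the methodology described above: an adaptive (creeping) random search over the adjustable parameters, where each candidate is scored by a Monte Carlo estimate of the probability that the performance criterion is met despite uncertainty in the nonadjustable parameters. The objective function, uncertainty model, and step-adaptation rule are hypothetical stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def criteria_met(adjustable, nonadjustable):
        """Hypothetical design criterion on a performance index (stand-in for the real system)."""
        j1, j2 = adjustable
        k = nonadjustable
        return (j1 - 1.0) ** 2 + (j2 + 0.5) ** 2 + 0.3 * k < 1.0

    def success_probability(adjustable, n_mc=200):
        """Monte Carlo estimate of P(criteria met) over the uncertain nonadjustable parameter."""
        return np.mean([criteria_met(adjustable, k) for k in rng.normal(0.0, 1.0, size=n_mc)])

    best_x = np.zeros(2)                 # adjustable parameters
    best_p = success_probability(best_x)
    step = 1.0
    for _ in range(500):                 # creeping random search with adaptive step size
        cand = best_x + rng.normal(scale=step, size=2)
        p = success_probability(cand)
        if p > best_p:
            best_x, best_p = cand, p
            step = min(step * 1.2, 2.0)    # expand the step after a success
        else:
            step = max(step * 0.95, 0.05)  # contract the step after a failure
    print(best_x, best_p)
    ```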

  8. Estimating daily time series of streamflow using hydrological model calibrated based on satellite observations of river water surface width: Toward real world applications.

    PubMed

    Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan

    2015-05-01

    Lacking observation data for calibration constrains applications of hydrological models to estimate daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making possible the tracking of streamflow from space. In this study, a method calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulation with satellite observations. The uncertainty band of streamflow simulation can span most of 10-year average monthly observed streamflow for moderate and high flow conditions. Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for applications in the real world are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
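
    A minimal sketch of the GLUE workflow summarized above: sample many random parameter sets, score each simulation against (here synthetic) satellite-derived river widths with a Nash-Sutcliffe-type likelihood measure, retain the behavioral sets, and form an uncertainty band from their simulations. The stand-in model, forcing, and behavioral threshold are assumptions for illustration, not the hydrological model used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def run_model(params, forcing):
        """Stand-in for the hydrological model, returning simulated river width per time step."""
        a, b = params
        return a * forcing ** b

    forcing = rng.gamma(2.0, 2.0, size=120)                       # hypothetical forcing series
    width_obs = run_model((1.3, 0.45), forcing) * rng.lognormal(0.0, 0.1, size=120)

    # GLUE: sample parameter sets, score against observed widths, keep the "behavioral" ones.
    n_sets = 10_000
    params = np.column_stack([rng.uniform(0.5, 2.5, n_sets), rng.uniform(0.1, 0.9, n_sets)])
    score = np.empty(n_sets)
    for i, p in enumerate(params):
        sim = run_model(p, forcing)
        score[i] = 1.0 - np.sum((sim - width_obs) ** 2) / np.sum((width_obs - width_obs.mean()) ** 2)

    behavioral = params[score > 0.7]                              # threshold is a modelling choice
    sims = np.array([run_model(p, forcing) for p in behavioral])
    band = np.percentile(sims, [5, 50, 95], axis=0)               # uncertainty band and median simulation
    print(f"{len(behavioral)} behavioral parameter sets retained")
    ```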

  9. Effect of foliar application of chitosan and salicylic acid on the growth of soybean (Glycine max (L.) Merr.) varieties

    NASA Astrophysics Data System (ADS)

    Hasanah, Y.; Sembiring, M.

    2018-02-01

    Elicitors such as chitosan and salicylic acid could be used not only to increase the isoflavone concentration of soybean seeds, but also to increase growth and seed yield. The objective of the present study was to determine the effects of foliar application of elicitor compounds (i.e. chitosan and salicylic acid) on the growth of two soybean varieties under dry land conditions. The experimental design was a randomized block design with 2 factors and 3 replications. The first factor was soybean variety (Wilis and Devon). The second factor was foliar application of elicitors, consisting of: no elicitor; chitosan at V4 (four trifoliate leaves fully developed); chitosan at R3 (early podding); chitosan at V4 and R3; salicylic acid at V4; salicylic acid at R3; and salicylic acid at V4 and R3. Parameters observed were plant height at 2-7 weeks after planting (WAP), shoot dry weight and root dry weight. The results suggest that the Wilis variety had a greater plant height at 7 WAP than Devon. The foliar application of chitosan increased plant height at 7 WAP, shoot dry weight and root dry weight. The foliar application of chitosan at V4 and R3 on the Devon variety increased shoot dry weight.

  10. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.

  11. Randomized placebo controlled trial of furosemide on subjective perception of dyspnoea in patients with pulmonary oedema because of hypertensive crisis.

    PubMed

    Holzer-Richling, Nina; Holzer, Michael; Herkner, Harald; Riedmüller, Eva; Havel, Christof; Kaff, Alfred; Malzer, Reinhard; Schreiber, Wolfgang

    2011-06-01

    To compare the administration of furosemide with placebo on the subjective perception of dyspnoea in patients with acute pulmonary oedema because of hypertensive crisis. Design: Randomized, controlled and double-blinded clinical trial. Setting: Municipal emergency medical service system and university-based emergency department. Patients: Fifty-nine patients with pulmonary oedema because of hypertensive crisis. In addition to administration of oxygen, morphine-hydrochloride and urapidil until the systolic blood pressure was below 160 mmHg, the patients were randomized to receive a furosemide 80 mg IV bolus (furosemide group) or saline placebo (placebo group). The primary outcome was the subjective perception of dyspnoea as measured with a modified BORG scale at one hour after randomization. Secondary outcome parameters were the subjective perception of dyspnoea as measured with a modified BORG scale and a visual analogue scale at 2, 3 and 6 h after randomization; the course of the systolic arterial pressure and peripheral oxygen saturation; and lactate at admission and at 6 h after admission. In 25 patients in the furosemide group and in 28 patients in the placebo group, a BORG score could be obtained. There was no statistically significant difference in the severity of dyspnoea at one hour after randomization (P=0.40). The median BORG score at 1 h after randomization in the furosemide group was 3 (IQR 2 to 4) compared to 3 (IQR 2 to 7) in the placebo group (P=0.40). Those patients who were randomized to the placebo group needed higher doses of urapidil at 20 min after randomization. There were no significant differences in the rate of adverse events, nonfatal cardiac arrests or death between the two groups. The subjective perception of dyspnoea in patients with hypertensive pulmonary oedema was not influenced by the application of a loop diuretic. Therefore, additional furosemide therapy needs to be scrutinized in the therapy of these patients. © 2010 The Authors. European Journal of Clinical Investigation © 2010 Stichting European Society for Clinical Investigation Journal Foundation.

  12. Effects of random initial conditions on the dynamical scaling behaviors of a fixed-energy Manna sandpile model in one dimension

    NASA Astrophysics Data System (ADS)

    Kwon, Sungchul; Kim, Jin Min

    2015-01-01

    For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ, random initial distributions of particles lead to the domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of an order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
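
    For readers unfamiliar with the model, the sketch below simulates one common variant of the one-dimensional fixed-energy Manna model from a random initial condition: sites holding two or more particles are active, and every particle on an active site hops to a randomly chosen nearest neighbour under parallel updating, conserving the total density ρ. The lattice size, density, and toppling-rule variant are illustrative choices; the critical density ρc itself must be located numerically.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    L_sites, rho = 1000, 0.92                 # lattice size and conserved density (near, not at, criticality)
    n = rng.multinomial(int(rho * L_sites), np.ones(L_sites) / L_sites)  # random initial placement

    for _ in range(10_000):
        active = np.flatnonzero(n >= 2)       # active sites hold at least two particles
        if active.size == 0:                  # absorbing state reached
            break
        new = n.copy()
        for site in active:                   # parallel update: all particles on an active site hop
            k = n[site]
            new[site] -= k
            hops = rng.integers(0, 2, size=k) * 2 - 1        # left or right with equal probability
            np.add.at(new, (site + hops) % L_sites, 1)
        n = new

    print("activity (order parameter):", np.mean(n >= 2))
    ```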

  13. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10 bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.

  14. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
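
    In the spirit of the class exercise mentioned at the end of the abstract, the sketch below is a self-contained analog Monte Carlo estimate of transmission through a one-dimensional slab, touching random sampling of free flights, collision physics, and tally statistics. The cross sections and slab thickness are arbitrary illustrative values, and this simplified one-dimensional treatment is not MCNP.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    sigma_t, sigma_s = 1.0, 0.6        # total and scattering macroscopic cross sections (1/cm), illustrative
    thickness = 5.0                    # slab thickness (cm)
    n_hist = 100_000

    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                                      # start on the left face, moving right
        while True:
            x += mu * (-np.log(rng.random()) / sigma_t)       # sample the free flight from exp(-sigma_t s)
            if x < 0.0:                                       # leaked back out of the near face
                break
            if x > thickness:                                 # transmitted through the slab
                transmitted += 1
                break
            if rng.random() > sigma_s / sigma_t:              # absorbed with probability sigma_a / sigma_t
                break
            mu = 2.0 * rng.random() - 1.0                     # isotropic scatter: new direction cosine

    p = transmitted / n_hist
    print(f"transmission = {p:.4f} +/- {np.sqrt(p * (1 - p) / n_hist):.4f}")
    ```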

  15. Complete convergence of randomly weighted END sequences and its application.

    PubMed

    Li, Penghua; Li, Xiaoqin; Wu, Kehan

    2017-01-01

    We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.

  16. A stylistic classification of Russian-language texts based on the random walk model

    NASA Astrophysics Data System (ADS)

    Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.

    2017-09-01

    A formal approach to text analysis based on the random walk model is suggested. The frequencies and reciprocal positions of the vowel letters are matched up by a process of quasi-particle migration. A statistically significant difference in the migration parameters is found between texts of different functional styles, demonstrating that texts can be classified with the suggested method. Five groups of texts are singled out that can be distinguished from one another by the parameters of the quasi-particle migration process.

  17. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications ranging from simulations and gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest-energy vacuum state, which cannot be correlated to any other state. The simplicity of our source and its verifiably unique randomness are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.

  18. Fisher information at the edge of chaos in random Boolean networks.

    PubMed

    Wang, X Rosalind; Lizier, Joseph T; Prokopenko, Mikhail

    2011-01-01

    We study the order-chaos phase transition in random Boolean networks (RBNs), which have been used as models of gene regulatory networks. In particular we seek to characterize the phase diagram in information-theoretic terms, focusing on the effect of the control parameters (activity level and connectivity). Fisher information, which measures how much system dynamics can reveal about the control parameters, offers a natural interpretation of the phase diagram in RBNs. We report that this measure is maximized near the order-chaos phase transitions in RBNs, since this is the region where the system is most sensitive to its parameters. Furthermore, we use this study of RBNs to clarify the relationship between Shannon and Fisher information measures.
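
    A small simulation sketch of the system described above: a random Boolean network with N nodes, in-degree K, and activity bias p, probed by Derrida-style damage spreading. Whether a single-node perturbation dies out or persists distinguishes the ordered from the chaotic phase; the annealed-approximation transition line 2Kp(1-p) = 1 is the standard result for this ensemble. Network size, K, p, and run length are illustrative choices, and this is not the Fisher-information calculation of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    N, K, p = 500, 3, 0.7                                     # nodes, in-degree (connectivity), activity bias
    inputs = rng.integers(0, N, size=(N, K))                  # random wiring
    tables = (rng.random((N, 2 ** K)) < p).astype(np.int8)    # random Boolean functions with P(output=1)=p

    def step(state):
        idx = np.zeros(N, dtype=int)
        for k in range(K):                                    # encode each node's K input bits as a table index
            idx = (idx << 1) | state[inputs[:, k]]
        return tables[np.arange(N), idx]

    # Derrida-style damage spreading: flip one node and track the normalized Hamming distance.
    s = rng.integers(0, 2, size=N).astype(np.int8)
    s_pert = s.copy()
    s_pert[0] ^= 1
    for _ in range(50):
        s, s_pert = step(s), step(s_pert)
    print("Hamming distance after 50 steps:", np.mean(s != s_pert))
    # The distance decays to zero in the ordered phase and stays finite in the chaotic phase.
    ```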

  19. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  20. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    NASA Astrophysics Data System (ADS)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous automated methods have been proposed for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach achieved state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.

  1. GRAM 88 - 4D GLOBAL REFERENCE ATMOSPHERE MODEL-1988

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications are global circulation and diffusion studies, and also the generation of profiles for comparison with other atmospheric measurement techniques such as satellite-measured temperature profiles and infrasonic measurement of wind profiles. GRAM-88 is the latest version of the software GRAM. The software GRAM-88 contains a number of changes that have improved the model statistics, in particular the small-scale density perturbation statistics. It also corrected a low-latitude grid problem as well as the SCIDAT data base. Furthermore, GRAM-88 now uses the U.S. Standard Atmosphere 1976 as a comparison standard rather than the US62 used in other versions. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The Jacchia (1970) model simulates the high atmospheric region above 115km. The Jacchia program sections are in separate subroutines so that other thermospheric-exospheric models could easily be adapted if required for special applications. The improved code eliminated the calculation of geostrophic winds above 125 km altitude from the model. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical modification of the latitude dependent empirical model of Groves (1971). A fairing technique between 90km and 115km accomplishes a smooth transition between the modified Groves values and the Jacchia values. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. GRAM-88 incorporates a hydrostatic/gas law check in the 0-30 km altitude range to flag and change any bad data points. Between 5km and 30km, an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The GRAM-88 program is for batch execution on the IBM 3084. It is written in STANDARD FORTRAN 77 under the MVS/XA operating system. The IBM DISPLA graphics routines are necessary for graphical output. The program was developed in 1988.

  2. Anterior inferior plating versus superior plating for clavicle fracture: a meta-analysis.

    PubMed

    Ai, Jie; Kan, Shun-Li; Li, Hai-Liang; Xu, Hong; Liu, Yang; Ning, Guang-Zhi; Feng, Shi-Qing

    2017-04-18

    The position of plate fixation for clavicle fracture remains controversial. Our objective was to perform a comprehensive review of the literature and quantify the surgical parameters and clinical indexes of anterior inferior plating versus superior plating for clavicle fracture. PubMed, EMBASE, and the Cochrane Library were searched for randomized and non-randomized studies that compared anterior inferior plating with superior plating for clavicle fracture. The relative risk or standardized mean difference with 95% confidence interval was calculated using either a fixed- or random-effects model. Four randomized controlled trials and eight observational studies were identified that compared the surgical parameters and clinical indexes. For the surgical parameters, the anterior inferior plating group was better than the superior plating group in operation time and blood loss (P < 0.05). Furthermore, in terms of clinical indexes, anterior inferior plating was superior to superior plating in reducing the union time, and the two kinds of plate fixation were comparable in Constant score and the rates of infection, nonunion, and complications (P > 0.05). Based on the current evidence, anterior inferior plating may reduce blood loss and the operation and union time, but no differences were observed between the two groups in Constant score or the rates of infection, nonunion, and complications. Given that some of the studies are of low quality, more high-quality randomized controlled trials should be conducted to further verify the findings.
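
    For readers who want to see the pooling step referred to above, the sketch below computes a random-effects (DerSimonian-Laird) pooled relative risk from per-study event counts, alongside the fixed-effect inverse-variance estimate. The event counts are hypothetical placeholders, not the data of the included trials.

    ```python
    import numpy as np

    # Hypothetical per-study event counts: (events, total) for anterior-inferior vs. superior plating.
    studies = [((2, 40), (5, 42)), ((1, 35), (3, 33)), ((4, 60), (6, 58)), ((0, 25), (2, 27))]

    log_rr, var = [], []
    for (e1, n1), (e2, n2) in studies:
        a, b = e1 + 0.5, n1 - e1 + 0.5            # 0.5 continuity correction
        c, d = e2 + 0.5, n2 - e2 + 0.5
        log_rr.append(np.log((a / (a + b)) / (c / (c + d))))
        var.append(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    log_rr, var = np.array(log_rr), np.array(var)

    w = 1 / var                                    # fixed-effect inverse-variance weights
    fixed = np.sum(w * log_rr) / np.sum(w)

    Q = np.sum(w * (log_rr - fixed) ** 2)          # DerSimonian-Laird between-study variance
    tau2 = max(0.0, (Q - (len(studies) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci_lo, ci_hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    print(f"random-effects pooled RR = {np.exp(pooled):.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f})")
    ```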

  3. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys’ prior distribution is a kind of non-informative prior distribution, used when information about the parameter is not available. The non-informative Jeffreys’ prior distribution is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys’ prior distribution. Based on the results and discussion, the parameter estimates of β and Σ were obtained from the expected values of the corresponding marginal posterior distributions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
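
    A minimal sketch of the kind of sampler described above: a two-block Gibbs sampler for a multivariate regression, alternating between the matrix-normal full conditional of the coefficients and the inverse-Wishart full conditional of the error covariance under a non-informative Jeffreys-type prior. The simulated data, degrees-of-freedom convention, and burn-in length are assumptions for illustration, not the derivation in the paper.

    ```python
    import numpy as np
    from numpy.linalg import cholesky, inv
    from scipy.stats import invwishart

    rng = np.random.default_rng(7)

    # Hypothetical data: n observations, q predictors (including intercept), m responses.
    n, q, m = 200, 3, 2
    X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])
    Y = X @ rng.normal(size=(q, m)) + rng.normal(scale=0.5, size=(n, m))

    XtX_inv = inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y
    A = cholesky(XtX_inv)

    Sigma = np.eye(m)
    draws = []
    for it in range(3000):
        # beta | Sigma, Y: matrix normal with mean B_hat, row covariance (X'X)^-1, column covariance Sigma.
        Z = rng.normal(size=(q, m))
        B = B_hat + A @ Z @ cholesky(Sigma).T
        # Sigma | beta, Y: inverse-Wishart with df = n and scale (Y - XB)'(Y - XB)
        # (degrees-of-freedom convention under a Jeffreys-type prior; assumed in this sketch).
        E = Y - X @ B
        Sigma = invwishart.rvs(df=n, scale=E.T @ E)
        if it >= 500:                              # discard burn-in
            draws.append(B)

    B_post_mean = np.mean(draws, axis=0)           # posterior-mean estimate of the coefficients
    ```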

  4. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
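
    The sample-size inflation mentioned above is often summarized by a design effect; the sketch below applies the standard approximation DEFF = 1 + (m - 1) x ICC to an SRS-based LQAS sample size. The baseline size, cluster size, and intra-cluster correlation are hypothetical inputs, and this shortcut is not the exact non-parametric procedure developed in the paper.

    ```python
    import math

    def lqas_cluster_size(n_srs, cluster_size, icc):
        """Inflate an SRS-based LQAS sample size by the design effect for two-stage cluster sampling.

        n_srs        -- sample size meeting the classification error targets under simple random sampling
        cluster_size -- average number of children sampled per cluster (e.g. per village)
        icc          -- assumed intra-cluster correlation of the binary indicator
        """
        deff = 1.0 + (cluster_size - 1.0) * icc    # standard design-effect approximation
        n = math.ceil(n_srs * deff)
        return n, math.ceil(n / cluster_size)

    # Example with hypothetical inputs: SRS design of 192 children, 10 per village, ICC of 0.05.
    n, clusters = lqas_cluster_size(192, 10, 0.05)
    print(f"inflated sample size {n}, spread over {clusters} villages")
    ```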

  5. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-04-01

    We provide a new quasi-analytical method to compute the topographic influence on the effective albedo of complex topography as required for meteorological, land-surface or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain averages of direct, diffuse and terrain radiation and the sky view factor. Domain averaged quantities are related to a type of level-crossing probability of the random field which is approximated by longstanding results developed for acoustic scattering at ocean boundaries. This allows us to express all non-local horizon effects in terms of a local terrain parameter, namely the mean squared slope. Emerging integrals are computed numerically and fit formulas are given for practical purposes. As an implication of our approach we provide an expression for the effective albedo of complex terrain in terms of the sun elevation angle, mean squared slope, the area averaged surface albedo, and the direct-to-diffuse ratio of solar radiation. As an application, we compute the effective albedo for the Swiss Alps and discuss possible generalizations of the method.

  6. Monte Carlo simulation of reflection spectra of random multilayer media strongly scattering and absorbing light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meglinskii, I V

    2001-12-31

    The reflection spectra of a multilayer random medium - the human skin - strongly scattering and absorbing light are numerically simulated. The propagation of light in the medium and the absorption spectra are simulated by the stochastic Monte Carlo method, which combines schemes for calculating real photon trajectories with the statistical weight method. The model takes into account the inhomogeneous spatial distribution of blood vessels, water, and melanin, the degree of blood oxygenation, and the hematocrit index. The attenuation of the incident radiation caused by reflection and refraction at Fresnel boundaries of layers inside the medium is also considered. The simulated reflection spectra are compared with the experimental reflection spectra of the human skin. It is shown that the set of parameters used to describe the optical properties of skin layers and their possible variations, despite being far from complete, is nevertheless sufficient for the simulation of the reflection spectra of the human skin and their quantitative analysis. (laser applications and other topics in quantum electronics)

  7. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
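
    For orientation, the sketch below shows a bare-bones Hamiltonian Monte Carlo transition with leapfrog integration on a toy Gaussian target; in the surrogate approach described above, the expensive gradient of the log target would be replaced by the gradient of a cheap random-basis surrogate fitted to earlier evaluations. Step size, trajectory length, and the target itself are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def U(q):                      # negative log target density: standard 2-D Gaussian as a stand-in
        return 0.5 * np.sum(q ** 2)

    def grad_U(q):                 # in the surrogate approach this gradient would come from the surrogate
        return q

    def hmc_step(q, eps=0.1, n_leap=20):
        """One Hamiltonian Monte Carlo transition using leapfrog integration."""
        p = rng.normal(size=q.shape)                       # resample the momentum
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * eps * grad_U(q_new)                 # half step for momentum
        for _ in range(n_leap - 1):
            q_new += eps * p_new                           # full step for position
            p_new -= eps * grad_U(q_new)                   # full step for momentum
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)                 # final half step
        dH = (U(q_new) + 0.5 * np.sum(p_new ** 2)) - (U(q) + 0.5 * np.sum(p ** 2))
        return q_new if np.log(rng.random()) < -dH else q  # Metropolis accept/reject

    q = np.zeros(2)
    samples = []
    for _ in range(5000):
        q = hmc_step(q)
        samples.append(q)
    ```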

  8. Temporal coherence of the acoustic field forward propagated through a continental shelf with random internal waves.

    PubMed

    Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C

    2013-11-01

    An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrow-band low-frequency acoustic field fluctuations after propagating through a continental-shelf waveguide is shown to decay with range as a power law with exponent -1/2 beyond roughly 1 km and to decrease with increasing internal wave energy, consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction possible by stationary averaging.

  9. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Quantum Coherence and Random Fields at Mesoscopic Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenbaum, Thomas F.

    2016-03-01

    We seek to explore and exploit model, disordered and geometrically frustrated magnets where coherent spin clusters stably detach themselves from their surroundings, leading to extreme sensitivity to finite frequency excitations and the ability to encode information. Global changes in either the spin concentration or the quantum tunneling probability via the application of an external magnetic field can tune the relative weights of quantum entanglement and random field effects on the mesoscopic scale. These same parameters can be harnessed to manipulate domain wall dynamics in the ferromagnetic state, with technological possibilities for magnetic information storage. Finally, extensions from quantum ferromagnets to antiferromagnets promise new insights into the physics of quantum fluctuations and effective dimensional reduction. A combination of ac susceptometry, dc magnetometry, noise measurements, hole burning, non-linear Fano experiments, and neutron diffraction as functions of temperature, magnetic field, frequency, excitation amplitude, dipole concentration, and disorder address issues of stability, overlap, coherence, and control. We have been especially interested in probing the evolution of the local order in the progression from spin liquid to spin glass to long-range-ordered magnet.

  11. Random Forest as an Imputation Method for Education and Psychology Research: Its Impact on Item Fit and Difficulty of the Rasch Model

    ERIC Educational Resources Information Center

    Golino, Hudson F.; Gomes, Cristiano M. A.

    2016-01-01

    This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
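
    For orientation, a hedged scikit-learn sketch of random-forest-based (missForest-style) imputation follows; the abstract's two tuning parameters correspond to n_estimators (number of trees) and max_features (number of predictors tried per split). The data and settings are hypothetical, not the paper's experimental conditions:

      # Hedged scikit-learn sketch of random-forest (missForest-style) imputation;
      # the data and settings are hypothetical, not the paper's 50 experimental
      # conditions. n_estimators = number of trees, max_features = predictors per split.
      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))             # hypothetical item scores
      X[rng.random(X.shape) < 0.10] = np.nan    # ~10% values missing at random

      imputer = IterativeImputer(
          estimator=RandomForestRegressor(n_estimators=100, max_features="sqrt",
                                          random_state=0),
          max_iter=10,
          random_state=0,
      )
      X_imputed = imputer.fit_transform(X)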

  12. Conflicting results between randomized trials and observational studies on the impact of proton pump inhibitors on cardiovascular events when coadministered with dual antiplatelet therapy: systematic review.

    PubMed

    Melloni, Chiara; Washam, Jeffrey B; Jones, W Schuyler; Halim, Sharif A; Hasselblad, Victor; Mayer, Stephanie B; Heidenfelder, Brooke L; Dolor, Rowena J

    2015-01-01

    Discordant results have been reported on the effects of concomitant use of proton pump inhibitors (PPIs) and dual antiplatelet therapy (DAPT) for cardiovascular outcomes. We conducted a systematic review comparing the effectiveness and safety of concomitant use of PPIs and DAPT in the postdischarge treatment of unstable angina/non-ST-segment-elevation myocardial infarction patients. We searched for clinical studies in MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews, from 1995 to 2012. Reviewers screened and extracted data, assessed applicability and quality, and graded the strength of evidence. We performed meta-analyses of direct comparisons when outcomes and follow-up periods were comparable. Thirty-five studies were eligible. Five (4 randomized controlled trials and 1 observational) assessed the effect of omeprazole when added to DAPT; the other 30 (observational) assessed the effect of PPIs as a class when compared with no PPIs. Random-effects meta-analyses of the studies assessing PPIs as a class consistently reported higher event rates in patients receiving PPIs for various clinical outcomes at 1 year (composite ischemic end points, all-cause mortality, nonfatal MI, stroke, revascularization, and stent thrombosis). However, the results from randomized controlled trials evaluating omeprazole compared with placebo showed no difference in ischemic outcomes, despite a reduction in upper gastrointestinal bleeding with omeprazole. Large, well-conducted observational studies of PPIs and randomized controlled trials of omeprazole seem to provide conflicting results for the effect of PPIs on cardiovascular outcomes when coadministered with DAPT. Prospective trials that directly compare pharmacodynamic parameters and clinical events among specific PPI agents in patients with unstable angina/non-ST-segment-elevation myocardial infarction treated with DAPT are warranted. © 2015 American Heart Association, Inc.
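
    As background on the pooling step named in the abstract, here is a minimal DerSimonian-Laird random-effects sketch; the effect sizes (e.g. log odds ratios) and variances below are hypothetical and are not data extracted from the review:

      # Minimal DerSimonian-Laird random-effects pooling sketch; the effect sizes
      # and variances are hypothetical, not data from the review.
      import numpy as np

      def dersimonian_laird(effects, variances):
          """Pool study effects with a random-effects (DerSimonian-Laird) model."""
          y, v = np.asarray(effects, float), np.asarray(variances, float)
          w = 1.0 / v
          y_fe = np.sum(w * y) / np.sum(w)                  # fixed-effect mean
          q = np.sum(w * (y - y_fe) ** 2)                   # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
          w_re = 1.0 / (v + tau2)
          pooled = np.sum(w_re * y) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          return pooled, se, tau2

      pooled, se, tau2 = dersimonian_laird([0.25, 0.10, 0.35], [0.04, 0.02, 0.05])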

  13. Repetitive extracorporeal shock wave applications are superior in inducing angiogenesis after full thickness burn compared to single application.

    PubMed

    Goertz, O; von der Lohe, L; Lauer, H; Khosrawipour, T; Ring, A; Daigeler, A; Lehnhardt, M; Kolbenschlag, J

    2014-11-01

    Burn wounds remain a challenge due to subsequent wound infection and septicemia, which can be prevented by accelerating wound healing. The aim of the study was to analyze microcirculation and leukocyte-endothelium interaction, with particular focus on angiogenesis, after full-thickness burn using three different repetition schedules of low-energy shock waves. Full-thickness burns were inflicted on the ears of hairless mice (n=44; area: 1.6±0.05 mm2 (mean±SEM)). Mice were randomized into four groups: the control group received a burn injury but no shock waves; group A received ESWA (0.03 mJ/mm2) on day one after burn injury; group B received shock waves on days one and three; group C received ESWA on days one, three, and seven. Intravital fluorescent microscopy was used to assess microcirculatory parameters, angiogenesis, and leukocyte interaction. Values were obtained before the burn (baseline), immediately after it, and on days 1, 3, 7 and 12 after the burn. Shock-wave-treated groups showed significantly accelerated angiogenesis compared to the control group. The non-perfused area (NPA), regarded as a parameter for angiogenesis, showed the following values on day 12: 2.7±0.4% (group A, p=0.001), 1.4±0.5% (group B, p<0.001), 1.0±0.3% (group C, p<0.001), and 6.1±0.9% (control group). Edema formation was positively correlated with the number of shock wave applications on day 12: group A: 173.2±9.8%, group B: 184.2±6.6%, group C: 201.1±6.9%, p=0.009 vs. control: 162.3±8.7% (all data: mean±SEM). According to these data, shock waves positively impact the wound healing process following burn injury, and angiogenesis was significantly enhanced after shock wave application in all three treatment groups compared to the control group. Within the ESWA groups, double application yielded better results than single application, and triple application yielded better results than either. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  14. Reconstruction and Applications of Collective Storylines from Web Photo Collections

    DTIC Science & Technology

    2013-09-01

    a random surfer model as follows: \alpha = \min\big( \pi_G(s^*)\, q(s^*, s_{t-1}) / [\pi_G(s_{t-1})\, q(s_{t-1}, s^*)],\ 1 \big), where q(i, j) = \lambda \tilde{w}_{ij} + (1 - \lambda)\, \pi_G(j) (Eq. 3.3). In Eq. (3.3), the … probability \alpha in Eq. (3.3), where \tilde{w}_{ij} is the element (i, j) of \tilde{G}. We repeat this process until the desired number of training samples is selected. … the exponential of a linear summation of the functions f^l_j of the covariates x_j with a parameter vector \theta^l = (\theta^l_1, \cdots, \theta^l_J): \log \lambda_l(t_i \mid \theta^l) = \sum_{j=1}^{J} \theta^l_j f^l_j
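
    Reading this snippet as a Metropolis-Hastings step with a random-surfer proposal (an interpretation, not stated explicitly in the record), a minimal sketch follows; the row-stochastic similarity matrix W, target scores pi, and mixing weight lam are illustrative assumptions:

      # Hedged reading of the snippet: a Metropolis-Hastings step with a
      # random-surfer proposal q(i, j) = lam * W[i, j] + (1 - lam) * pi[j].
      # W, pi and lam are assumptions made for illustration only.
      import numpy as np

      def sample_states(W, pi, lam=0.85, n_samples=100, seed=0):
          """Draw states with stationary preference pi using a random-surfer proposal."""
          rng = np.random.default_rng(seed)
          W, pi = np.asarray(W, float), np.asarray(pi, float)
          q = lambda i, j: lam * W[i, j] + (1.0 - lam) * pi[j]
          s_prev, samples = rng.integers(len(pi)), []
          while len(samples) < n_samples:
              probs = lam * W[s_prev] + (1.0 - lam) * pi      # propose from q(s_prev, .)
              s_new = rng.choice(len(pi), p=probs / probs.sum())
              alpha = min(pi[s_new] * q(s_new, s_prev)
                          / (pi[s_prev] * q(s_prev, s_new)), 1.0)
              if rng.random() < alpha:                        # accept or keep current state
                  s_prev = s_new
              samples.append(s_prev)
          return samples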

  15. Universal dispersion model for characterization of optical thin films over wide spectral range: Application to magnesium fluoride

    NASA Astrophysics Data System (ADS)

    Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan

    2017-11-01

    Optical characterization of magnesium fluoride thin films is performed in a wide spectral range from far infrared to extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects, i.e., random roughness of the upper boundary and a defect transition layer at the lower boundary, are taken into account. An extension of the universal dispersion model, in which the excitonic contributions are expressed as linear combinations of Gaussian and truncated Lorentzian terms, is introduced. The spectral dependencies of the optical constants are presented in graphical form and as the complete set of dispersion parameters, which allows tabulated optical constants to be generated with the required range and step using a simple utility in the newAD2 software package.
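
    As a simplified illustration of the kind of line-shape combination described (using a plain rather than truncated Lorentzian), with hypothetical band parameters:

      # Simplified illustration of an excitonic band built as a linear combination
      # of Gaussian and Lorentzian line shapes (the paper uses truncated Lorentzian
      # terms); all band parameters below are hypothetical.
      import numpy as np

      def gaussian(E, E0, width, amp):
          return amp * np.exp(-0.5 * ((E - E0) / width) ** 2)

      def lorentzian(E, E0, width, amp):
          return amp * width ** 2 / ((E - E0) ** 2 + width ** 2)

      E = np.linspace(10.0, 13.0, 500)    # photon energy (eV)
      band = 0.7 * gaussian(E, 11.2, 0.15, 1.0) + 0.3 * lorentzian(E, 11.2, 0.10, 1.0)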

  16. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results quite different from the corresponding Gaussian applications, showing that arguing by Gaussian analogy would lead to error.

  17. Statistical properties of exciton fine structure splitting and polarization angles in quantum dot ensembles

    NASA Astrophysics Data System (ADS)

    Gong, Ming; Hofer, B.; Zallo, E.; Trotta, R.; Luo, Jun-Wei; Schmidt, O. G.; Zhang, Chuanwei

    2014-05-01

    We develop an effective model to describe the statistical properties of exciton fine structure splitting (FSS) and polarization angle in quantum dot ensembles (QDEs) using only a few symmetry-related parameters. The connection between the effective model and random matrix theory is established. The effective model is verified both theoretically and experimentally using several rather different types of QDEs, each of which contains hundreds to thousands of QDs. The model naturally addresses three fundamental issues regarding the FSS and polarization angles of QDEs that are frequently encountered in both theory and experiment. The answers to these fundamental questions yield an approach to characterize the optical properties of QDEs. Potential applications of the effective model are also discussed.

  18. Studies of silicon pn junction solar cells

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Neugroschel, A.

    1977-01-01

    Modifications of the basic Shockley equations that result from the random and nonrandom spatial variations of the chemical composition of a semiconductor were developed. These modifications underlie the existence of the extensive emitter recombination current that limits the open-circuit voltage of solar cells. The measurement of parameters, namely series resistance and the base diffusion length, is discussed. Two methods are presented for establishing the energy bandgap narrowing in the heavily doped emitter region. Corrections that can be important in the application of one of these methods to small test cells are examined. Oxide-charge-induced high-low-junction emitter (OCI-HLE) test cells, which exhibit considerably higher open-circuit voltage than previously seen in n-on-p solar cells, are described.

  19. Development of program package for investigation and modeling of carbon nanostructures in diamond like carbon films with the help of Raman scattering and infrared absorption spectra line resolving

    NASA Astrophysics Data System (ADS)

    Hayrapetyan, David B.; Hovhannisyan, Levon; Mantashyan, Paytsar A.

    2013-04-01

    The analysis of complex spectra is a topical problem in modern science. This work is devoted to the creation of a software package that reads spectra in different formats, possesses a dynamic knowledge database with a self-learning mechanism, and performs automated analysis of composite spectra based on the knowledge database by applying certain algorithms. Hyper-spherical random search algorithms, gradient algorithms, and genetic search algorithms were used as the search engines of the software package. Raman and IR spectra of diamond-like carbon (DLC) samples were analyzed with the developed program. After processing the data, the program immediately displays all the calculated parameters of the DLC.
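
    A minimal sketch of a hyper-spherical random search for resolving a spectrum into Gaussian lines is shown below, illustrating the idea only; the package's actual algorithms and knowledge database are not reproduced, and all function and parameter names are hypothetical:

      # Minimal sketch of a hyper-spherical random search that fits a spectrum with
      # a sum of Gaussian lines: trial parameter vectors are drawn on a slowly
      # shrinking hypersphere around the current best fit. Names are hypothetical.
      import numpy as np

      def gaussian_lines(x, params):
          """Sum of Gaussian lines; params = [center, width, amplitude] per line."""
          y = np.zeros_like(x)
          for c, w, a in np.asarray(params, float).reshape(-1, 3):
              y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
          return y

      def hyperspherical_search(x, spectrum, p0, radius=1.0, n_iter=2000,
                                shrink=0.999, seed=0):
          rng = np.random.default_rng(seed)
          best = np.asarray(p0, float)
          best_err = np.sum((gaussian_lines(x, best) - spectrum) ** 2)
          for _ in range(n_iter):
              step = rng.normal(size=best.size)
              step *= radius / np.linalg.norm(step)   # random point on a hypersphere
              trial = best + step
              err = np.sum((gaussian_lines(x, trial) - spectrum) ** 2)
              if err < best_err:                      # keep the better parameter vector
                  best, best_err = trial, err
              radius *= shrink                        # contract the search radius
          return best, best_err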

  20. [Azilsartan Medoxomil Capabilities in Arterial Hypertension and Obesity].

    PubMed

    Vasyuk, Y A; Shupenina, E Y; Nesvetov, V V; Nesterova, E A; Golubkova, E I

    2016-12-01

    Arterial hypertension (AH) is one of the most common cardiovascular diseases. Angiotensin II (AT II), a hormone of the renin-angiotensin-aldosterone system, exerts its negative effects through AT1 receptors, which are the point of application of angiotensin receptor blockers (ARBs). Owing to differing AT1 receptor dissociation properties, some ARBs are more effective than others. Multiple multicenter randomized and observational studies support the effectiveness and safety of azilsartan medoxomil in patients with grade 1-2 AH. Several preclinical studies have shown additional properties of azilsartan, including increased insulin sensitivity and cardio- and nephroprotection in obesity. In our clinical case we show the positive influence of azilsartan medoxomil on clinic and ambulatory blood pressure, 24-hour aortic stiffness parameters, and longitudinal left ventricular strain in a patient with AH and obesity.
