Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2003-01-01
The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability-based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.
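A minimal sketch of the inner-loop most-probable-point (MPP) search mentioned above, assuming a standard first-order reliability (FORM) setup with an illustrative limit-state function in standard normal space (the wing constraints themselves are not reproduced):

```python
# FORM-style MPP search: find the point on the failure surface g(u) = 0
# closest to the origin in standard normal space; the reliability index
# is beta = ||u*|| and the first-order failure probability is Phi(-beta).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    # Illustrative limit-state function; in the paper's setting this
    # would be an active design constraint mapped through the input
    # distributions of the design variables.
    return 3.0 - u[0] - 0.5 * u[1] ** 2

res = minimize(lambda u: u @ u,            # squared distance to origin
               x0=np.zeros(2),
               constraints=[{"type": "eq", "fun": g}])
beta = np.sqrt(res.fun)                    # reliability index
print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")
```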
Design for robustness of unique, multi-component engineering systems
NASA Astrophysics Data System (ADS)
Shelton, Kenneth A.
2007-12-01
The purpose of this research is to advance the science of conceptual designing for robustness in unique, multi-component engineering systems. Robustness is herein defined as the ability of an engineering system to operate within a desired performance range even if the actual configuration has differences from specifications within specified tolerances. These differences are caused by three sources, namely manufacturing errors, system degradation (operational wear and tear), and parts availability. Unique, multi-component engineering systems are defined as systems produced in unique or very small production numbers. They typically have design and manufacturing costs on the order of billions of dollars, and have multiple, competing performance objectives. Design time for these systems must be minimized due to competition, high manpower costs, long manufacturing times, technology obsolescence, and limited available manpower expertise. Most importantly, design mistakes cannot be easily corrected after the systems are operational. For all these reasons, robustness of these systems is absolutely critical. This research examines the space satellite industry in particular. Although inherent robustness assurance is absolutely critical, it is difficult to achieve in practice. The current state of the art for robustness in the industry is to overdesign components and subsystems with redundancy and margin. The shortfall is that it is not known if the added margins were either necessary or sufficient given the risk management preferences of the designer or engineering system customer. To address this shortcoming, new assessment criteria to evaluate robustness in design concepts have been developed. The criteria are comprised of the "Value Distance", addressing manufacturing errors and system degradation, and "Component Distance", addressing parts availability. They are based on an evolutionary computation format that uses a string of alleles to describe the components in the design concept. These allele values are unitless themselves, but map to both configuration descriptions and attribute values. The Value Distance and Component Distance are metrics that measure the relative differences between two design concepts using the allele values, and all differences in a population of design concepts are calculated relative to a reference design, called the "base design". The base design is the top-ranked member of the population in weighted terms of robustness and performance. Robustness is determined based on the change in multi-objective performance as Value Distance and Component Distance (and thus differences in design) increases. It is assessed as acceptable if differences in design configurations up to specified tolerances result in performance changes that remain within a specified performance range. The design configuration difference tolerances and performance range together define the designer's risk management preferences for the final design concepts. Additionally, a complementary visualization capability was developed, called the "Design Solution Topography". This concept allows the visualization of a population of design concepts, and is a 3-axis plot where each point represents an entire design concept. The axes are the Value Distance, Component Distance and Performance Objective. 
The key benefit of the Design Solution Topography is that it allows the designer to visually identify and interpret the overall robustness of the current population of design concepts for a particular performance objective. In a multi-objective problem, each performance objective has its own Design Solution Topography view. These new concepts are implemented in an evolutionary computation-based conceptual designing method called the "Design for Robustness Method" that produces robust design concepts. The design procedures associated with this method enable designers to evaluate and ensure robustness in selected designs that also perform within a desired performance range. The method uses an evolutionary computation-based procedure to generate populations of large numbers of alternative design concepts, which are assessed for robustness using the Value Distance, Component Distance and Design Solution Topography procedures. The Design for Robustness Method provides a working conceptual designing structure in which to implement and gain the benefits of these new concepts. In the included experiments, the method was used on several mathematical examples to demonstrate feasibility, which showed favorable results as compared to existing known methods. Furthermore, it was tested on a real-world satellite conceptual designing problem to illustrate the applicability and benefits to industry. Risk management insights were demonstrated for the robustness-related issues of manufacturing errors, operational degradation, parts availability, and impacts based on selections of particular types of components.
Robustness results in LQG based multivariable control designs
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.; Sandell, N. R., Jr.; Athans, M.
1980-01-01
The robustness of control systems with respect to model uncertainty is considered using simple frequency domain criteria. Results are derived under a common framework in which the minimum singular value of the return difference transfer matrix is the key quantity. In particular, the LQ and LQG robustness results are discussed.
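The key quantity above can be computed directly; a hedged sketch that sweeps the minimum singular value of the return difference matrix I + L(jω) across frequency, for an illustrative 2x2 loop transfer matrix (not one from the paper):

```python
import numpy as np

def L(s):
    # Illustrative 2x2 loop transfer matrix.
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 1.0), 2.0 / (s * (s + 3.0))]])

freqs = np.logspace(-2, 2, 400)
sig_min = [np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False)[-1]
           for w in freqs]
# Values of sigma_min near zero flag frequencies with poor robustness.
print(f"min over frequency of sigma_min(I + L(jw)) = {min(sig_min):.3f}")
```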
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
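One such robust estimator is the Theil-Sen slope; a minimal sketch on made-up weekly CBM scores with a single extreme value (scipy's theilslopes is real, the data are illustrative):

```python
import numpy as np
from scipy import stats

weeks = np.arange(10.0)
scores = 20 + 1.5 * weeks + np.random.default_rng(0).normal(0, 1, 10)
scores[7] = 5.0                                 # one extreme value

ols = stats.linregress(weeks, scores)
ts_slope = stats.theilslopes(scores, weeks)[0]  # median of pairwise slopes
print(f"OLS slope: {ols.slope:.2f}, Theil-Sen slope: {ts_slope:.2f}")
```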
NASA Technical Reports Server (NTRS)
Nissim, E.
1989-01-01
The aerodynamic energy method is used in this paper to synthesize control laws for NASA's Drone for Aerodynamic and Structural Testing-Aerodynamic Research Wing 1 (DAST-ARW1) mathematical model. The performance of these control laws in terms of closed-loop flutter dynamic pressure, control surface activity, and robustness is compared against other control laws that appear in the literature and relate to the same model. A control law synthesis technique that makes use of the return difference singular values is developed in this paper. It is based on the aerodynamic energy approach and is shown to yield results superior to those given in the literature and based on optimal control theory. Nyquist plots are presented together with a short discussion regarding the relative merits of the minimum singular value as a measure of robustness, compared with the more traditional measure of robustness involving phase and gain margins.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Homeostatic enhancement of active mechanotransduction
NASA Astrophysics Data System (ADS)
Milewski, Andrew; O'Maoiléidigh, Dáibhid; Hudspeth, A. J.
2018-05-01
Our sense of hearing boasts exquisite sensitivity to periodic signals. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. As a result, small changes in these values could compromise the ability of the mechanosensory hair cells to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system employs a homeostatic mechanism that ensures the robustness of its operation to variation in parameter values. Through analytical techniques and computer simulations we investigate whether a homeostatic mechanism renders the hair bundle's signal-detection ability more robust to alterations in experimentally accessible parameters. When homeostasis is enforced, the range of values for which the bundle's sensitivity exceeds a threshold can increase by more than an order of magnitude. The robustness of cochlear function based on somatic motility or hair bundle motility may be achieved by employing the approach we describe here.
Liu, Jiaen; Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; He, Bin
2014-01-01
Purpose: To develop high-resolution electrical properties tomography (EPT) methods and investigate a gradient-based EPT (gEPT) approach which aims to reconstruct the electrical properties (EP), including conductivity and permittivity, of an imaged sample from experimentally measured B1 maps, with improved boundary reconstruction and robustness against measurement noise. Theory and Methods: Using a multi-channel transmit/receive stripline head coil, with acquired B1 maps for each coil element, and assuming a negligible Bz component compared to the transverse B1 components, a theory describing the relationship between the B1 field, EP values, and their spatial gradients is proposed. The final EP images were obtained through spatial integration over the reconstructed EP gradient. Numerical simulation, physical phantom, and in vivo human experiments at 7 T were conducted to evaluate the performance of the proposed methods. Results: Reconstruction results were compared with target EP values in both simulations and phantom experiments. Human experimental results were compared with EP values in the literature. Satisfactory agreement was observed, with improved boundary reconstruction. Importantly, the proposed gEPT method proved to be more robust against noise than previously described non-gradient-based EPT approaches. Conclusion: The proposed gEPT approach holds promise to improve EP mapping quality by recovering boundary information and enhancing robustness against noise. PMID:25213371
Bayes factors based on robust TDT-type tests for family trio design.
Yuan, Min; Pan, Xiaoqing; Yang, Yaning
2015-06-01
The adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust-efficient association tests for case-parent family trio data. Both tests incorporate information from the common genetic models, including recessive, additive and dominant models, and are efficient in power and robust to genetic model specifications. The aTDT uses information on departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, while the MAX3 test is defined as the maximum of the absolute values of the three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, the aTDT-based Bayes factor, the MAX3-based Bayes factor and Bayes model averaging (BMA), for association analysis with the case-parent trio design. The asymptotic distributions of the aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and these Bayes factors are robust to genetic model specifications, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on the aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on the aTDT (BMA) is more powerful. Analysis of simulated rheumatoid arthritis (RA) data from Genetic Analysis Workshop 15 (GAW15) is presented to illustrate applications of the proposed methods.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. Especially, it is experimentally proved that the proposed method is more efficient for processing higher-dimensional data with good robustness, stability, and superior time performance.
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than the differences between bin values which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
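A sketch of the bin-ratio idea only (not the paper's exact BRD formula): comparing histograms through pairwise bin ratios makes the distance invariant to a global rescaling of either histogram, which is what gives robustness to normalization:

```python
import numpy as np

def ratio_matrix(h, eps=1e-12):
    h = np.asarray(h, dtype=float) + eps      # guard empty bins
    return h[:, None] / h[None, :]            # R[i, j] = h_i / h_j

def bin_ratio_distance(h1, h2):
    return np.abs(ratio_matrix(h1) - ratio_matrix(h2)).mean()

a = np.array([4.0, 2.0, 1.0, 1.0])
print(bin_ratio_distance(a, 3.0 * a))          # ~0: rescaling-invariant
print(bin_ratio_distance(a, a[::-1].copy()))   # > 0: different shape
```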
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
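A minimal sketch of the two stages described above, with illustrative thresholds: finite-difference curvature screens out occluded boundary points, and a least-squares conic fit of the survivors yields the ellipse center:

```python
import numpy as np

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def fit_ellipse_center(x, y):
    # Least-squares conic a*x^2 + b*xy + c*y^2 + d*x + e*y = 1;
    # the center is where the conic's gradient vanishes.
    A = np.column_stack([x**2, x * y, y**2, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = 10 + 5 * np.cos(t), 8 + 4 * np.sin(t)     # synthetic pupil boundary
k = curvature(x, y)
keep = np.abs(k - np.median(k)) < 3 * np.std(k)  # illustrative threshold
print(fit_ellipse_center(x[keep], y[keep]))      # ~ (10, 8)
```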
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, such as (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural feature of lung nodules. Nineteen lung nodules from our lung cancer screening program were identified by a CAD tool, which provided contours. The radiomic features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
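A sketch of a GLCM entropy computation of the kind the study varies, using scikit-image; the robustness ratio at the end is an assumed illustration, not the paper's robustness index:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(img, levels=32):
    q = np.digitize(img, np.linspace(img.min(), img.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0.0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
roi = rng.normal(0, 1, (64, 64))                           # stand-in nodule ROI
e_std = glcm_entropy(roi)
e_low = glcm_entropy(roi + rng.normal(0, 0.5, roi.shape))  # noisier dose level
print(f"standard: {e_std:.2f}, low-dose: {e_low:.2f}, "
      f"ratio: {min(e_std, e_low) / max(e_std, e_low):.2f}")
```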
Standard and Robust Methods in Regression Imputation
ERIC Educational Resources Information Center
Moraveji, Behjat; Jafarian, Koorosh
2014-01-01
The aim of this paper is to provide an introduction of new imputation algorithms for estimating missing values from official statistics in larger data sets of data pre-processing, or outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
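The first-order moment propagation is compact enough to sketch: finite-difference sensitivity derivatives carry input means and variances through the output function, and the probabilistic constraint becomes a mean-plus-k-sigma bound (the function below is a stand-in for the CFD code):

```python
import numpy as np
from scipy.stats import norm

def f(x):                                   # surrogate for a CFD output
    return x[0] ** 2 + 3.0 * x[1]

mu = np.array([1.0, 2.0])                   # input means
sigma = np.array([0.05, 0.10])              # independent normal inputs

h = 1e-6
grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h)
                 for e in np.eye(2)])       # central-difference derivatives
mean_f = f(mu)                              # first-order mean
sd_f = np.sqrt(np.sum((grad * sigma) ** 2)) # first-order standard deviation

# Constraint f <= c satisfied with target probability 99.9%:
c = mean_f + norm.ppf(0.999) * sd_f
print(f"E[f] = {mean_f:.3f}, sd = {sd_f:.3f}, required bound c >= {c:.3f}")
```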
A comparative robustness evaluation of feedforward neurofilters
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter
1993-01-01
A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.
A Weak Value Based QKD Protocol Robust Against Detector Attacks
NASA Astrophysics Data System (ADS)
Troupe, James
2015-03-01
We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control, and should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.
Using Quotitive Division Problems to Promote Place-Value Understanding
ERIC Educational Resources Information Center
Bicknell, Brenda; Young-Loveridge, Jenny; Simpson, Jackie
2017-01-01
A robust understanding of place value is essential. Using a problem-based approach set within meaningful contexts, students' attention may be drawn to the multiplicative structure of place value. By using quotitive division problems through a concrete-representational-abstract lesson structure, this study showed a powerful strengthening of Year 3…
Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.
Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H
2016-10-01
Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b-values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to the state-of-the-art methods, the median target registration error (TRE) stayed below the voxel size even for high b-values (3000 s·mm⁻² and higher) and low SNR conditions. We also demonstrate the increased precision in diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on an in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.
2016-02-01
In addition, the parser updates some parameters based on uncertainties. For example, Analytica was very slow to update Pk values based on...moderate range. The additional security environments helped to fill gaps in lower severity. Weapons Effectiveness Pk values were modified to account for two...project is to help improve the value and character of defense resource planning in an era of growing uncertainty and complex strategic challenges
DOT National Transportation Integrated Search
2012-11-01
New methods are proposed for mitigating risk in hazardous materials (hazmat) transportation, based on Conditional : Value-at-Risk (CVaR) measure, on time-dependent vehicular networks. While the CVaR risk measure has been : popularly used in financial...
Extended robust support vector machine based on financial risk minimization.
Takeda, Akiko; Fujiwara, Shuhei; Kanamori, Takafumi
2014-11-01
Financial risk measures have been used recently in machine learning. For example, the ν-support vector machine (ν-SVM) minimizes the conditional value at risk (CVaR) of the margin distribution. The measure is popular in finance because of its subadditivity property, but it is very sensitive to a few outliers in the tail of the distribution. We propose a new classification method, the extended robust SVM (ER-SVM), which minimizes an intermediate risk measure between the CVaR and the value at risk (VaR), with the expectation that the resulting model becomes less sensitive than the ν-SVM to outliers. We can regard the ER-SVM as an extension of the robust SVM, which uses a truncated hinge loss. Numerical experiments imply the ER-SVM's possibility of achieving a better prediction performance with proper parameter setting.
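A sketch of the two risk measures the ER-SVM interpolates between, on an illustrative margin sample; a few gross outliers drag CVaR (the tail mean) far more than VaR (the tail quantile):

```python
import numpy as np

rng = np.random.default_rng(0)
margins = rng.normal(1.0, 1.0, 1000)
margins[:5] -= 15.0                         # a few extreme outliers
losses = -margins                           # loss = negative margin

alpha = 0.1                                 # worst 10% of the distribution
var = np.quantile(losses, 1 - alpha)        # value at risk
cvar = losses[losses >= var].mean()         # conditional value at risk
print(f"VaR = {var:.2f}, CVaR = {cvar:.2f}")
```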
Progress in multirate digital control system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new methodology for multirate sampled-data control design is described, based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.
Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2
ERIC Educational Resources Information Center
Croft, Michelle; Buddin, Richard
2014-01-01
As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
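For reference, the standard Ogden strain-energy function in principal stretches that such implementations evaluate (the paper's symbolic derivation machinery is not reproduced here):

```latex
W(\lambda_1,\lambda_2,\lambda_3)
  = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p}
         + \lambda_3^{\alpha_p} - 3 \right)
```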
A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach
Kuan, Pei Fen; Huang, Bo
2013-01-01
This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
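The pooling step is short enough to sketch: each design's one-sided p-value becomes a z-score, the z-scores are combined with weights (square-root-of-sample-size weights are one common choice, assumed here), and the result maps back to a p-value:

```python
import numpy as np
from scipy.stats import norm

def pooled_p(p1, p2, n1, n2):
    w1, w2 = np.sqrt(n1), np.sqrt(n2)
    z = (w1 * norm.isf(p1) + w2 * norm.isf(p2)) / np.sqrt(w1**2 + w2**2)
    return norm.sf(z)                       # pooled one-sided p-value

# e.g. p from the matched-pairs part and p from the unmatched part:
print(f"pooled p = {pooled_p(0.03, 0.20, n1=25, n2=40):.4f}")
```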
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm. The CMVE software is available upon request from the authors.
Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin
2016-01-25
To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by the Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods including multivariate linear regression, least squares support vector machine, and the plain partial least squares regression. Both R² and the root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model that was constructed based on the dataset including outliers. Meanwhile, the RMSECV calculated for the models constructed by the other methods was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with the RPLS model constructed using e-tongue data.
Selck, David A; Karymov, Mikhail A; Sun, Bing; Ismagilov, Rustem F
2013-11-19
Quantitative bioanalytical measurements are commonly performed in a kinetic format and are known to not be robust to perturbation that affects the kinetics itself or the measurement of kinetics. We hypothesized that the same measurements performed in a "digital" (single-molecule) format would show increased robustness to such perturbations. Here, we investigated the robustness of an amplification reaction (reverse-transcription loop-mediated amplification, RT-LAMP) in the context of fluctuations in temperature and time when this reaction is used for quantitative measurements of HIV-1 RNA molecules under limited-resource settings (LRS). The digital format that counts molecules using dRT-LAMP chemistry detected a 2-fold change in concentration of HIV-1 RNA despite a 6 °C temperature variation (p-value = 6.7 × 10⁻⁷), whereas the traditional kinetic (real-time) format did not (p-value = 0.25). Digital analysis was also robust to a 20 min change in reaction time, to poor imaging conditions obtained with a consumer cell-phone camera, and to automated cloud-based processing of these images (R² = 0.9997 vs true counts over a 100-fold dynamic range). Fluorescent output of multiplexed PCR amplification could also be imaged with the cell phone camera using flash as the excitation source. Many nonlinear amplification schemes based on organic, inorganic, and biochemical reactions have been developed, but their robustness is not well understood. This work implies that these chemistries may be significantly more robust in the digital, rather than kinetic, format. It also calls for theoretical studies to predict robustness of these chemistries and, more generally, to design robust reaction architectures. The SlipChip that we used here and other digital microfluidic technologies already exist to enable testing of these predictions. Such work may lead to identification or creation of robust amplification chemistries that enable rapid and precise quantitative molecular measurements under LRS. Furthermore, it may provide more general principles describing robustness of chemical and biological networks in digital formats.
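The robustness argument rests on how a digital readout is computed: concentration comes from the fraction of positive partitions through Poisson statistics rather than from reaction kinetics. A minimal sketch with illustrative numbers:

```python
import numpy as np

def copies_per_partition(k_positive, n_partitions):
    # P(partition negative) = exp(-lam)  =>  lam = -ln(1 - k/n)
    return -np.log(1.0 - k_positive / n_partitions)

n = 1280
for k in (300, 600):                        # observed positive-well counts
    print(f"{k}/{n} positive -> {copies_per_partition(k, n):.3f} copies/partition")
# The count k, unlike an amplification curve, is unchanged by modest
# temperature or reaction-time shifts as long as positives stay positive.
```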
Determination of perpendicular magnetic anisotropy based on the magnetic droplet nucleation
NASA Astrophysics Data System (ADS)
Nishimura, Tomoe; Kim, Duck-Ho; Okuno, Takaya; Hirata, Yuushou; Futakawa, Yasuhiro; Yoshikawa, Hiroki; Kim, Sanghoon; Tsukamoto, Arata; Shiota, Yoichi; Moriyama, Takahiro; Ono, Teruo
2018-05-01
We propose an alternative method of determining the magnetic anisotropy field μ0HK in ferro-/ferrimagnets. On the basis of the droplet nucleation model, there exists a linear relation between the domain-wall (DW) energy density and the in-plane magnetic field. We find that the slope is simply represented by μ0HK and the Dzyaloshinskii–Moriya interaction (DMI). By measuring the in-plane magnetic field dependence of the coercive field, which closely corresponds to the DW energy density, a robust value for μ0HK can be quantified. This robust value can be used to determine μ0HK over a wide range of values, overcoming the limitations caused by the small strength of the external magnetic field typically used in experiments.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
Info-gap robust-satisficing model of foraging behavior: do foragers optimize or satisfice?
Carmel, Yohay; Ben-Haim, Yakov
2005-11-01
In this note we compare two mathematical models of foraging that reflect two competing theories of animal behavior: optimizing and robust satisficing. The optimal-foraging model is based on the marginal value theorem (MVT). The robust-satisficing model developed here is an application of info-gap decision theory. The info-gap robust-satisficing model relates to the same circumstances described by the MVT. We show how these two alternatives translate into specific predictions that at some points are quite disparate. We test these alternative predictions against available data collected in numerous field studies with a large number of species from diverse taxonomic groups. We show that a large majority of studies appear to support the robust-satisficing model and reject the optimal-foraging model.
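For concreteness, the marginal value theorem's optimality condition, with patch gain g(t) and travel time τ between patches (the robust-satisficing alternative instead accepts any residence time whose rate is no worse than a required fraction of this optimum):

```latex
g'(t^{*}) \;=\; \frac{g(t^{*})}{t^{*} + \tau}
```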
NASA Technical Reports Server (NTRS)
Nissim, Eli
1990-01-01
The aerodynamic energy method is used to synthesize control laws for NASA's drone for aerodynamic and structural testing-aerodynamic research wing 1 (DAST-ARW1) mathematical model. The performance of these control laws in terms of closed-loop flutter dynamic pressure, control surface activity, and robustness is compared with other control laws that relate to the same model. A control law synthesis technique that makes use of the return difference singular values is developed. It is based on the aerodynamic energy approach and is shown to yield results that are superior to those results given in the literature and are based on optimal control theory. Nyquist plots are presented, together with a short discussion regarding the relative merits of the minimum singular value as a measure of robustness as compared with the more traditional measure involving phase and gain margins.
Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach
NASA Technical Reports Server (NTRS)
Pei, Jing; Newsome, Jerry R.
2015-01-01
Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variation would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H∞ and μ are applicable to MIMO systems but have not been adopted as standard practice within the launch vehicle controls community. This paper took advantage of a simple singular-value-based MIMO stability margin evaluation method based on work done by Mukhopadhyay and Newsom and applied it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that can be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.
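The link back to classical margins can be made explicit with the standard singular-value bounds (valid for 0 < α < 1): if σ_min(I + L(jω)) ≥ α at all frequencies, then

```latex
\text{GM} \in \left[ \tfrac{1}{1+\alpha},\; \tfrac{1}{1-\alpha} \right],
\qquad
\text{PM} \;\geq\; 2 \sin^{-1}\!\left( \tfrac{\alpha}{2} \right)
```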
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
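A sketch of the construction: the left null space of the measurement matrix, obtained from a single SVD, supplies parity (redundancy) relations whose residuals vanish for a healthy system; the matrix here is illustrative:

```python
import numpy as np

H = np.array([[1.0, 0.0],                   # 4 sensors observing
              [0.0, 1.0],                   # 2 physical states
              [1.0, 1.0],
              [2.0, -1.0]])
U, s, Vt = np.linalg.svd(H)
parity = U[:, H.shape[1]:].T                # rows span the left null space of H

y_ok = H @ np.array([0.3, -0.7])            # consistent measurements
y_bad = y_ok + np.array([0.0, 0.0, 0.5, 0.0])  # bias fault on sensor 3
print(np.round(parity @ y_ok, 6))           # ~0 residuals
print(np.round(parity @ y_bad, 6))          # nonzero: fault flagged
```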
Modern CACSD using the Robust-Control Toolbox
NASA Technical Reports Server (NTRS)
Chiang, Richard Y.; Safonov, Michael G.
1989-01-01
The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools like singular values and structured singular values, robust synthesis tools like continuous/discrete H2/H∞ synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H∞ loop-shaping and large space structure model reduction.
Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy.
Yin, Xiuxing; Pan, Li
2018-01-01
A direct adaptive robust tracking control is proposed for trajectory tracking of a 6 DOF industrial robot in the presence of parametric uncertainties, external disturbances and uncertain nonlinearities. The controller is designed based on the dynamic characteristics in the working space of the end-effector of the 6 DOF robot. The controller includes a robust control term and a model compensation term that is developed directly based on the input reference or desired motion trajectory. A projection-type parametric adaptation law is also designed to compensate for parametric estimation errors for the adaptive robust control. The feasibility and effectiveness of the proposed direct adaptive robust control law and the associated projection-type parametric adaptation law have been comparatively evaluated on two 6 DOF industrial robots. The test results demonstrate that the proposed control can be employed to better maintain the desired trajectory tracking even in the presence of large parametric uncertainties and external disturbances as compared with a PD controller and a nonlinear controller. The parametric estimates also eventually converge to the real values along with the convergence of tracking errors, which further validates the effectiveness of the proposed parametric adaptation law.
Factors influencing the robustness of P-value measurements in CT texture prognosis studies
NASA Astrophysics Data System (ADS)
McQuaid, Sarah; Scuffham, James; Alobaidli, Sheaka; Prakash, Vineet; Ezhil, Veni; Nisbet, Andrew; South, Christopher; Evans, Philip
2017-07-01
Several studies have recently reported on the value of CT texture analysis in predicting survival, although the topic remains controversial, with further validation needed in order to consolidate the evidence base. The aim of this study was to investigate the effect of varying the input parameters in the Kaplan-Meier analysis, to determine whether the resulting P-value can be considered a robust indicator of the parameter's prognostic potential. A retrospective analysis of the CT-based normalised entropy of 51 patients with lung cancer was performed and overall survival data for these patients were collected. A normalised entropy cut-off was chosen to split the patient cohort into two groups and log-rank testing was performed to assess the survival difference of the two groups. This was repeated for varying normalised entropy cut-offs and varying follow-up periods. Our findings were also compared with previously published results to assess the robustness of this parameter in a multi-centre patient cohort. The P-value was found to be highly sensitive to the choice of cut-off value, with small changes in cut-off producing substantial changes in P. The P-value was also sensitive to follow-up period, with particularly noisy results at short follow-up periods. Under conditions matched to previously published results, a P-value of 0.162 was obtained. Survival analysis results can be highly sensitive to the choice of texture cut-off value used to dichotomise patients, which should be taken into account when performing such studies to avoid reporting false-positive results. Short follow-up periods also produce unstable results and should therefore be avoided to ensure the results produced are reproducible. Previously published findings that indicated the prognostic value of normalised entropy were not replicated here, but further studies with larger patient numbers would be required to determine the cause of the different outcomes.
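A sketch of the sensitivity check itself, on synthetic data (the third-party lifelines package supplies the log-rank test): slide the dichotomizing cut-off and watch the p-value move:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
entropy = rng.normal(0.5, 0.1, 51)          # stand-in texture values
time = rng.exponential(24, 51)              # follow-up (months)
event = rng.random(51) < 0.7                # death observed

for q in (0.3, 0.4, 0.5, 0.6, 0.7):
    cut = np.quantile(entropy, q)
    hi = entropy > cut
    r = logrank_test(time[hi], time[~hi],
                     event_observed_A=event[hi],
                     event_observed_B=event[~hi])
    print(f"cut-off at quantile {q:.1f}: p = {r.p_value:.3f}")
```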
NASA Astrophysics Data System (ADS)
Friedel, M. J.; Daughney, C.
2016-12-01
The development of a successful surface-groundwater management strategy depends on the quality of data provided for analysis. This study evaluates the statistical robustness when using a modified self-organizing map (MSOM) technique to estimate missing values for three hypersurface models: synoptic groundwater-surface water hydrochemistry, time-series of groundwater-surface water hydrochemistry, and mixed-survey (combination of groundwater-surface water hydrochemistry and lithologies) hydrostratigraphic unit data. These models of increasing complexity are developed and validated based on observations from the Southland region of New Zealand. In each case, the estimation method is sufficiently robust to cope with groundwater-surface water hydrochemistry vagaries due to sample size and extreme data insufficiency, even when >80% of the data are missing. The estimation of surface water hydrochemistry time series values enabled the evaluation of seasonal variation, and the imputation of lithologies facilitated the evaluation of hydrostratigraphic controls on groundwater-surface water interaction. The robust statistical results for groundwater-surface water models of increasing data complexity provide justification to apply the MSOM technique in other regions of New Zealand and abroad.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
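A sketch of the weighted-capacity-factor style of approximation referred to above, with made-up hourly data: weight the PV capacity factor by the hours that matter most for reliability (here, simply the highest-load hours):

```python
import numpy as np

rng = np.random.default_rng(2)
load = 800 + 200 * rng.random(8760)               # hourly system load (MW)
cf = np.clip(rng.normal(0.25, 0.2, 8760), 0, 1)   # hourly PV capacity factor

top = np.argsort(load)[-100:]               # 100 highest-load hours
weights = load[top] / load[top].sum()       # load-proportional weights
print(f"approximate capacity value: {np.sum(weights * cf[top]):.1%}")
```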
NASA Astrophysics Data System (ADS)
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it will produce a model that cannot represent most of the data. This calls for a regression method that is robust against outliers. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
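A minimal sketch of MCD-based robust regression using scikit-learn's MinCovDet: fit a robust covariance to the joint (X, y) data and read the coefficients off the partitioned covariance, β = Σ_xx⁻¹ Σ_xy (data are made up, with gross outliers):

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
X = rng.normal(0, 1, (200, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.3, 200)
y[:10] += 25.0                                # gross outliers

Z = np.column_stack([X, y])
S = MinCovDet(random_state=0).fit(Z).covariance_
beta = np.linalg.solve(S[:2, :2], S[:2, 2])   # Sigma_xx^{-1} Sigma_xy
print(np.round(beta, 2))                      # close to [2.0, -1.0]
```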
Robust reinforcement learning.
Morimoto, Jun; Doya, Kenji
2005-02-01
This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H(infinity) control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H(infinity) control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
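The min-max character of the solution can be sketched on a finite MDP with value iteration; this is only a discrete analogue under assumed dynamics, not the letter's online actor-critic algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)
nS, nU, nW, gamma, eta = 20, 3, 3, 0.95, 1.0
P = rng.dirichlet(np.ones(nS), size=(nS, nU, nW))  # P[s, u, w] -> dist over s'
R = rng.normal(size=(nS, nU))                      # reward for (state, control)
w_cost = np.array([0.0, 0.5, 1.0])                 # squared disturbance norms

V = np.zeros(nS)
for _ in range(1000):
    # The disturber is charged eta*|w|^2, so it only injects disturbance
    # when the damage it causes outweighs the charge (the H-infinity trade-off).
    Q = R[:, :, None] + eta * w_cost + gamma * P @ V
    V_new = Q.min(axis=2).max(axis=1)              # max over u of min over w
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
policy = (R[:, :, None] + eta * w_cost + gamma * P @ V).min(axis=2).argmax(axis=1)
print("robust policy:", policy)
```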
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
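The robustness measure itself is easy to compute numerically. A hedged sketch for a made-up 2x2 plant, with a finite-difference gradient standing in for the paper's analytical singular-value gradients:

```python
import numpy as np

def sigma_min(k, w):
    """Minimum singular value of the return difference I + G(jw)K."""
    s = 1j * w
    G = np.array([[1/(s + 1),   0.2/(s + 2)],
                  [0.1/(s + 3), 1/(s + 0.5)]])   # illustrative 2x2 plant
    K = k * np.eye(2)                            # one scalar design variable
    return np.linalg.svd(np.eye(2) + G @ K, compute_uv=False).min()

freqs = np.logspace(-2, 2, 200)
k0, dk = 2.0, 1e-6
worst = min(sigma_min(k0, w) for w in freqs)
grad = (min(sigma_min(k0 + dk, w) for w in freqs) - worst) / dk
print(f"min-over-frequency sigma_min = {worst:.4f}, d/dk ~ {grad:.4f}")
```

An optimizer would push `k` along such gradients to raise the worst-case minimum singular value, which is the essence of the approach described above.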
Impacting Early Childhood Teachers' Understanding of the Complexities of Place Value
ERIC Educational Resources Information Center
Cady, Jo Ann; Hopkins, Theresa M.; Price, Jamie
2014-01-01
In order to help children gain a more robust understanding of place value, teachers must understand the connections and relationships among the related concepts as well as possess knowledge of how children learn early number concepts. Unfortunately, teachers' familiarity with the base-ten number system and/or lack of an understanding of…
Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22
ERIC Educational Resources Information Center
Braun, Henry; Qu, Yanxuan
2008-01-01
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
A robust watermarking scheme using lifting wavelet transform and singular value decomposition
NASA Astrophysics Data System (ADS)
Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh
2017-01-01
The present paper proposes a robust image watermarking scheme using lifting wavelet transform (LWT) and singular value decomposition (SVD). Second level LWT is applied on host/cover image to decompose into different subbands. SVD is used to obtain singular values of watermark image and then these singular values are updated with the singular values of LH2 subband. The algorithm is tested on a number of benchmark images and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme is performed with other existing schemes and observed that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
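A sketch of the embedding step, substituting pywt's standard DWT for the paper's lifting implementation; the host and watermark images are synthetic, `alpha` is a hypothetical embedding strength, and the cH2 detail subband stands in for the paper's LH2 (subband naming conventions vary between implementations):

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
host = rng.random((64, 64))
mark = rng.random((16, 16))          # sized to match the level-2 subband

coeffs = pywt.wavedec2(host, 'haar', level=2)
cA2, (cH2, cV2, cD2), detail1 = coeffs
U, S, Vt = np.linalg.svd(cH2)
Sw = np.linalg.svd(mark, compute_uv=False)
alpha = 0.05                         # embedding strength (assumed)
cH2_marked = U @ np.diag(S + alpha * Sw) @ Vt   # update subband singular values
watermarked = pywt.waverec2([cA2, (cH2_marked, cV2, cD2), detail1], 'haar')
print("max pixel change:", np.abs(watermarked - host).max())
```

Extraction would reverse these steps using the stored `U`, `Vt` and the original singular values.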
NASA Astrophysics Data System (ADS)
Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang
2018-05-01
Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of the selection of an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to put feature information of a transient flaw echo signal in local field, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
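A numpy sketch of the indicator on a simulated noisy A-scan; folding each window into a small matrix is a simplification of the paper's segment construction:

```python
import numpy as np

def msi(signal, win=64, hop=16, rows=8):
    """Maximum singular value per overlapping short-time segment."""
    out = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win].reshape(rows, -1)
        out.append(np.linalg.svd(seg, compute_uv=False)[0])
    return np.array(out)

rng = np.random.default_rng(5)
t = np.arange(2048)
noise = rng.normal(scale=0.3, size=t.size)
echo = np.exp(-((t - 1200) / 40.0)**2) * np.sin(0.6 * t)   # simulated flaw echo
indicator = msi(noise + echo)
print("peak MSI near sample", indicator.argmax() * 16)
```

Because only the largest singular value of each small segment matrix is needed, the per-window cost stays low, which is what makes the short-time formulation attractive for real-time use.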
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
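The core first-order moment approximation, checked against Monte Carlo as in the paper, can be sketched for a toy output function standing in for the CFD code:

```python
import numpy as np

def f(x):                       # stand-in for the CFD output functional
    return x[0]**2 + np.sin(x[1]) + x[0] * x[2]

mu = np.array([1.0, 0.5, 2.0])          # input mean values
sig = np.array([0.05, 0.02, 0.1])       # input standard deviations

eps = 1e-6                              # central-difference sensitivities
grad = np.array([(f(mu + eps * np.eye(3)[i]) - f(mu - eps * np.eye(3)[i])) / (2 * eps)
                 for i in range(3)])
var_first_order = np.sum((grad * sig)**2)   # independent normal inputs

rng = np.random.default_rng(6)
samples = rng.normal(mu, sig, size=(100_000, 3))
print(f"first-order var: {var_first_order:.5f}  Monte Carlo var: {f(samples.T).var():.5f}")
```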
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
NASA Technical Reports Server (NTRS)
Joshi, S. M.; Armstrong, E. S.; Sundararajan, N.
1986-01-01
The problem of synthesizing a robust controller is considered for a large, flexible space-based antenna by using the linear-quadratic-Gaussian (LQG)/loop transfer recovery (LTR) method. The study is based on a finite-element model of the 122-m hoop/column antenna, which consists of three rigid-body rotational modes and the first 10 elastic modes. A robust compensator design for achieving the required performance bandwidth in the presence of modeling uncertainties is obtained using the LQG/LTR method for loop-shaping in the frequency domain. Different sensor actuator locations are analyzed in terms of the pole/zero locations of the multivariable systems and possible best locations are indicated. The computations are performed by using the LQG design package ORACLS augmented with frequency domain singular value analysis software.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
Robust Regression through Robust Covariances.
1985-01-01
[The indexed text for this record is fragmentary OCR excerpts rather than an abstract; the recoverable content concerns influence-function analysis (Hampel, 1974; Huber, 1981) and the asymptotic population values of the proposed robust covariance-based regression estimators.]
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates the performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine the performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
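One of the unscaled variants can be sketched as follows: a componentwise Hodges-Lehmann shift statistic with a permutation p-value. Combining the components by the Euclidean norm is an illustrative choice, not necessarily the authors':

```python
import numpy as np

def hl_shift(x, y):
    """Componentwise Hodges-Lehmann shift: the median over all pairwise
    differences x_i - y_j, taken per coordinate."""
    diffs = x[:, None, :] - y[None, :, :]
    return np.median(diffs.reshape(-1, x.shape[1]), axis=0)

def perm_pvalue(x, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    z = np.vstack([x, y])
    stat = np.linalg.norm(hl_shift(x, y))
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(z))            # relabel group membership
        hits += np.linalg.norm(hl_shift(z[idx[:len(x)]], z[idx[len(x):]])) >= stat
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, size=(30, 3))           # intervention group (synthetic)
y = rng.normal(0.4, 1.0, size=(30, 3))           # shifted control group
print("permutation p =", perm_pvalue(x, y))
```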
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves ROC curves, AUC values, and background-anomaly separation superior to those of other state-of-the-art anomaly detection methods, and is easy to implement in practice.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
GMR-based PhC biosensor: FOM analysis and experimental studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syamprasad, Jagadeesh; Narayanan, Roshni; Joseph, Joby
2014-02-20
Guided-mode resonance (GMR) based photonic crystal biosensors have many potential applications. In this work, we aim to improve their figure of merit (FOM) toward an optimum level through design and fabrication techniques. A robust and low-cost alternative to current biosensors is also explored through this research.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, S. M.
1991-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, explicit forms for the corresponding material tangent stiffness tensors are developed, and these are valid for the entire deformation range; i.e., with both distinct as well as repeated principal-stretch values. Throughout the analysis the various implications of the underlying property of separability of the strain-energy functions are exploited, thus leading to compact final forms of the tensor expressions. In particular, this facilitated the treatment of complex cases of uncoupled volumetric/deviatoric formulations for incompressible materials. The forms derived are also amenable for use with symbolic-manipulation packages for systematic code generation.
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping
2014-09-01
This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can suppress the faulty effects such that the actual system outputs remain at the pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are also summarized. The detailed design process of the ToMFIRs is presented and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
A bi-objective model for robust yard allocation scheduling for outbound containers
NASA Astrophysics Data System (ADS)
Liu, Changchun; Zhang, Canrong; Zheng, Li
2017-01-01
This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly including the arrival and operation time of calling vessels. Based on the time buffer inserting method, a bi-objective model is constructed to minimize the total operational cost and to maximize the robustness of fighting against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n-steps ahead with the uncertainty factors set as their respective expected values; in the second stage, based on the solutions obtained in the first stage and with consideration of uncertainty factors, a neighbourhood search heuristic is employed to generate robust solutions that can fight better against the fluctuation of uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.
Competence-Based Approach in Value Chain Processes
NASA Astrophysics Data System (ADS)
Azevedo, Rodrigo Cambiaghi; D'Amours, Sophie; Rönnqvist, Mikael
There is a gap between competence theory and value chain processes frameworks. While individually considered as core elements in contemporary management thinking, the integration of the two concepts is still lacking. We claim that this integration would allow for the development of more robust business models by structuring value chain activities around aspects such as capabilities and skills, as well as individual and organizational knowledge. In this context, the objective of this article is to reduce this gap and consequently open a field for further improvements of value chain processes frameworks.
Robust power detector for wideband signals among many single tone signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musgrove, Cameron H.; Thompson, Douglas
Various technologies for isolating a signal of interest from signals received contemporaneously by an antenna are described herein. A time period for which a signal of interest is present in a second signal can be identified based upon ratios of values of the second signal to the mean value of the second signal. When the ratio of the value of the second signal at a particular time to the mean of the second signal exceeds a threshold value, the signal of interest is considered to be present in the second signal.
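A minimal sketch of the described test with synthetic signals; the threshold value is a placeholder, not one from the patent:

```python
import numpy as np

rng = np.random.default_rng(8)
tones = np.sin(0.3 * np.arange(4096))**2            # narrowband background power
burst = np.zeros(4096); burst[1500:1700] = 5.0      # wideband signal of interest
power = tones + burst + 0.1 * rng.random(4096)

ratio = power / power.mean()      # ratio of each value to the signal's mean
threshold = 3.0                   # hypothetical detection threshold
present = np.flatnonzero(ratio > threshold)
print("signal of interest present in samples", present[0], "to", present[-1])
```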
A Statistical Analysis of Brain Morphology Using Wild Bootstrapping
Ibrahim, Joseph G.; Tang, Niansheng; Rowe, Daniel B.; Hao, Xuejun; Bansal, Ravi; Peterson, Bradley S.
2008-01-01
Methods for the analysis of brain morphology, including voxel-based morphology and surface-based morphometries, have been used to detect associations between brain structure and covariates of interest, such as diagnosis, severity of disease, age, IQ, and genotype. The statistical analysis of morphometric measures usually involves two statistical procedures: 1) invoking a statistical model at each voxel (or point) on the surface of the brain or brain subregion, followed by mapping test statistics (e.g., t test) or their associated p values at each of those voxels; 2) correction for the multiple statistical tests conducted across all voxels on the surface of the brain region under investigation. We propose the use of new statistical methods for each of these procedures. We first use a heteroscedastic linear model to test the associations between the morphological measures at each voxel on the surface of the specified subregion (e.g., cortical or subcortical surfaces) and the covariates of interest. Moreover, we develop a robust test procedure that is based on a resampling method, called wild bootstrapping. This procedure assesses the statistical significance of the associations between a measure of given brain structure and the covariates of interest. The value of this robust test procedure lies in its computational simplicity and in its applicability to a wide range of imaging data, including data from both anatomical and functional magnetic resonance imaging (fMRI). Simulation studies demonstrate that this robust test procedure can accurately control the family-wise error rate. We demonstrate the application of this robust test procedure to the detection of statistically significant differences in the morphology of the hippocampus over time across gender groups in a large sample of healthy subjects. PMID:17649909
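The wild-bootstrap step can be sketched for a single regression coefficient; residuals from the null fit are re-signed with Rademacher weights, which preserves each observation's own noise scale (synthetic data, not the morphometric pipeline):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n) * (1 + np.abs(x))   # heteroscedastic noise

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]           # full-model fit
resid0 = y - y.mean()                                 # residuals under H0: slope = 0

B, hits = 2000, 0
for _ in range(B):
    ystar = y.mean() + resid0 * rng.choice([-1.0, 1.0], n)   # wild resample
    bstar = np.linalg.lstsq(X, ystar, rcond=None)[0]
    hits += abs(bstar[1]) >= abs(beta[1])
print("wild bootstrap p =", (hits + 1) / (B + 1))
```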
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
Modeling and analyzing cascading dynamics of the Internet based on local congestion information
NASA Astrophysics Data System (ADS)
Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang
2018-06-01
Cascading failure has already become one of the vital issues in network science. By considering realistic network operational settings, we propose the congestion function to represent the congested extent of a node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate the cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value that makes the network achieve a maximum level of robustness. The robustness of the network has a positive correlation with the tolerance parameter, but it has a negative correlation with the packet generation rate. In addition, there exists a threshold of the attacking proportion of nodes that makes the network achieve the lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we found that an increase of the time delay will decrease the robustness of the network rapidly. The findings of the paper will be useful for enhancing the robustness of the Internet in the future.
NASA Astrophysics Data System (ADS)
McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.
2018-02-01
Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
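Two of the metric families the framework distinguishes, absolute worst-case performance and regret, can be sketched on a synthetic performance matrix to show how their rankings may disagree:

```python
import numpy as np

rng = np.random.default_rng(10)
perf = rng.uniform(50, 100, size=(4, 20))   # perf[alternative, scenario], higher is better

maximin = perf.min(axis=1)                  # absolute, risk-averse metric
regret = perf.max(axis=0) - perf            # shortfall vs the best per scenario
minimax_regret = regret.max(axis=1)

print("ranking by maximin:       ", np.argsort(-maximin))
print("ranking by minimax regret:", np.argsort(minimax_regret))
```

With some draws the two orderings differ, which is exactly the ranking instability the article's conceptual framework sets out to explain.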
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newpower, M; Ge, S; Mohan, R
Purpose: To report an approach to quantify the normal tissue sparing for 4D robustly-optimized versus PTV-optimized IMPT plans. Methods: We generated two sets of 90 DVHs from a patient's 10-phase 4D CT set; one by conventional PTV-based optimization done in the Eclipse treatment planning system, and the other by an in-house robust optimization algorithm. The 90 DVHs were created for the following scenarios in each of the ten phases of the 4DCT: ± 5mm shift along x, y, z; ± 3.5% range uncertainty and a nominal scenario. A Matlab function written by Gay and Niemierko was modified to calculate EUD for each DVH for the following structures: esophagus, heart, ipsilateral lung and spinal cord. An F-test determined whether or not the variances of each structure's DVHs were statistically different. Then a t-test determined if the average EUDs for each optimization algorithm were statistically significantly different. Results: T-test results showed each structure had a statistically significant difference in average EUD when comparing robust optimization versus PTV-based optimization. Under robust optimization all structures except the spinal cord received lower EUDs than PTV-based optimization. Using robust optimization the average EUDs decreased 1.45% for the esophagus, 1.54% for the heart and 5.45% for the ipsilateral lung. The average EUD to the spinal cord increased 24.86% but was still well below tolerance. Conclusion: This work has helped quantify a qualitative relationship noted earlier in our work: that robust optimization leads to plans with greater normal tissue sparing compared to PTV-based optimization. Except in the case of the spinal cord all structures received a lower EUD under robust optimization and these results are statistically significant. While the average EUD to the spinal cord increased to 25.06 Gy under robust optimization it is still well under the TD50 value of 66.5 Gy from Emami et al. Supported in part by the NCI U19 CA021239.
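The EUD referred to above follows the generalised-EUD formula EUD = (Σ v_i D_i^a)^(1/a); a hedged numpy sketch with a synthetic differential DVH (the exponent values are illustrative, not the study's):

```python
import numpy as np

def eud(doses, volumes, a):
    """Generalised EUD from a differential DVH; v_i is the fractional
    volume of the structure receiving dose D_i."""
    v = np.asarray(volumes, dtype=float)
    v /= v.sum()
    return float((v * np.asarray(doses, dtype=float)**a).sum() ** (1.0 / a))

doses = np.linspace(0, 60, 61)                    # Gy bins (synthetic DVH)
volumes = np.exp(-0.5 * ((doses - 20) / 8.0)**2)  # hypothetical volume histogram
print(f"EUD, a=1 (mean dose):     {eud(doses, volumes, 1):.1f} Gy")
print(f"EUD, a=12 (serial organ): {eud(doses, volumes, 12):.1f} Gy")
```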
Vector autoregressive models: A Gini approach
NASA Astrophysics Data System (ADS)
Mussard, Stéphane; Ndiaye, Oumar Hamady
2018-02-01
In this paper, it is proven that the usual VAR models may be estimated in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers. As a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. The inference about the estimators is made with the ℓ1 norm. Also, impulse response functions and Gini decompositions for forecast errors are introduced. Finally, Granger's causality tests are properly derived based on U-statistics.
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
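In the same spirit, a small sympy sketch of symbolic stress derivation and automatic Fortran emission for a single separable Ogden-type energy term; the energy form is simplified here and the incompressibility pressure term is omitted:

```python
import sympy as sp

l1, l2, l3, mu, alpha = sp.symbols('lambda1 lambda2 lambda3 mu alpha', positive=True)
W = (mu / alpha) * (l1**alpha + l2**alpha + l3**alpha - 3)  # one Ogden-type term

# Energetic part of the principal stresses: lambda_i * dW/dlambda_i
stresses = [sp.simplify(li * sp.diff(W, li)) for li in (l1, l2, l3)]
print(stresses[0])                      # -> mu*lambda1**alpha

# Emit compilable code, mirroring the automatic FORTRAN generation above.
print(sp.fcode(stresses[0], standard=95))
```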
Robust interferometry against imperfections based on weak value amplification
NASA Astrophysics Data System (ADS)
Fang, Chen; Huang, Jing-Zheng; Zeng, Guihua
2018-06-01
Optical interferometry has been widely used in various high-precision applications. In practice, the precision of an interferometer is usually limited by various technical noises. To suppress such noises, we propose a scheme that combines weak measurement with standard interferometry. The proposed scheme dramatically outperforms the standard interferometry in the signal-to-noise ratio and the robustness against noises caused by the optical elements' reflections and the offset fluctuation between two paths. A proof-of-principle experiment is demonstrated to validate the amplification theory.
Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E
2018-07-01
In this paper, the boundedness and robust stability for a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the Homomorphic mapping theorem, the Lyapunov method and inequality techniques, a sufficient condition guaranteeing the boundedness of the networks and the existence, uniqueness and global robust stability of the equilibrium point is derived for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued LMI, which can be calculated numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result. Copyright © 2018 Elsevier Ltd. All rights reserved.
Robust inference under the beta regression model with application to health care studies.
Ghosh, Abhik
2017-01-01
Data on rates, percentages, or proportions arise frequently in many different applied disciplines like medical biology, health care, psychology, and several others. In this paper, we develop a robust inference procedure for the beta regression model, which is used to describe such response variables taking values in (0, 1) through some related explanatory variables. In relation to the beta regression model, the issue of robustness has been largely ignored in the literature so far. The existing maximum likelihood-based inference has a serious lack of robustness against outliers in data and generates drastically different (erroneous) inferences in the presence of data contamination. Here, we develop the robust minimum density power divergence estimator and a class of robust Wald-type tests for the beta regression model along with several applications. We derive their asymptotic properties and describe their robustness theoretically through the influence function analyses. Finite sample performances of the proposed estimators and tests are examined through suitable simulation studies and real data applications in the context of health care and psychology. Although we primarily focus on the beta regression models with a fixed dispersion parameter, some indications are also provided for extension to the variable dispersion beta regression models with an application.
Robust Short-Lag Spatial Coherence Imaging.
Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju
2018-03-01
Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
Artificial Potential Field Controllers for Robust Communications in a Network of Swarm Robots
2005-05-18
[The indexed text for this record is fragmentary excerpts rather than an abstract; the recoverable content covers an algorithm for generating a feasible set of priority vectors and a serial-line timing-file scheme that kept the Linux PC and the robot base station synchronized.]
NASA Astrophysics Data System (ADS)
Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah
2017-08-01
Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. This situation becomes more severe in the presence of outliers, which produce fatter tails in the error distribution than the normal distribution. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is expected to be affected by the presence of outliers due to some assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method, with modifications to the inverse matrix and the estimated response value, is introduced. The performance of the proposed method is discussed and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression and robust ridge regression based on GM-estimates. This study finds that the proposed method is able to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
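One member of this family can be sketched as ridge regression with Huber-type IRLS weights; the weighting scheme and tuning constants below are illustrative assumptions, not the authors' exact estimator:

```python
import numpy as np

def robust_ridge(X, y, k=0.5, c=1.345, iters=20):
    """Ridge regression with Huber weights re-estimated by IRLS."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)   # plain ridge start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12              # robust scale (MAD)
        w = np.clip(c * s / (np.abs(r) + 1e-12), None, 1.0)    # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw + k * np.eye(p), Xw.T @ y)
    return beta

rng = np.random.default_rng(11)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=100)   # near-collinear column
y = X @ np.array([1.0, -2.0, 1.0]) + rng.normal(size=100)
y[:5] += 20.0                                     # vertical outliers
print(robust_ridge(X, y))
```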
REGRESSION MODELS OF RESIDENTIAL EXPOSURE TO CHLORPYRIFOS AND DIAZINON
This study examines the ability of regression models to predict residential exposures to chlorpyrifos and diazinon, based on the information from the NHEXAS-AZ database. The robust method was used to generate "fill-in" values for samples that are below the detection limit...
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech, applying wavelet packet decomposition to obtain the speech components. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through quantification of the feature matrix using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms. It is able to resist the attack of common background noise. Also, the algorithm is highly efficient in terms of arithmetic, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for high heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong
2014-01-01
The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Seeing that the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented. It combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the coding and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.
Angelis, Aris; Kanavos, Panos
2016-05-01
In recent years, multiple criteria decision analysis (MCDA) has emerged as a likely alternative to address shortcomings in health technology assessment (HTA) by offering a more holistic perspective to value assessment and acting as an alternative priority setting tool. In this paper, we argue that MCDA needs to subscribe to robust methodological processes related to the selection of objectives, criteria and attributes in order to be meaningful in the context of healthcare decision making and fulfil its role in value-based assessment (VBA). We propose a methodological process, based on multi-attribute value theory (MAVT) methods comprising five distinct phases, outline the stages involved in each phase and discuss their relevance in the HTA process. Importantly, criteria and attributes need to satisfy a set of desired properties, otherwise the outcome of the analysis can produce spurious results and misleading recommendations. Assuming the methodological process we propose is adhered to, the application of MCDA presents three very distinct advantages to decision makers in the context of HTA and VBA: first, it acts as an instrument for eliciting preferences on the performance of alternative options across a wider set of explicit criteria, leading to a more complete assessment of value; second, it allows the elicitation of preferences across the criteria themselves to reflect differences in their relative importance; and, third, the entire process of preference elicitation can be informed by direct stakeholder engagement, and can therefore reflect their own preferences. All features are fully transparent and facilitate decision making.
Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle
NASA Technical Reports Server (NTRS)
Adams, Richard J.
1993-01-01
High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamics inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
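A hedged sketch of the approach with generic two-sample statistics standing in for the trial statistics (the paper's setting is survival trials with the logrank test among the candidates; plain location tests are used here for brevity):

```python
import numpy as np
from scipy import stats

def min_p(x, y):
    """Smallest p-value over a pre-specified battery of two-sample tests."""
    return min(stats.ttest_ind(x, y).pvalue,
               stats.mannwhitneyu(x, y).pvalue,
               stats.ks_2samp(x, y).pvalue)

rng = np.random.default_rng(12)
x, y = rng.normal(0.0, 1.0, 60), rng.normal(0.5, 1.0, 60)
obs = min_p(x, y)

# Calibrate the minimum p-value by permutation so the type I error
# rate stays at its designated value despite taking a minimum.
z = np.concatenate([x, y]); B, hits = 1000, 0
for _ in range(B):
    idx = rng.permutation(z.size)
    hits += min_p(z[idx[:60]], z[idx[60:]]) <= obs
print(f"min-p = {obs:.4f}, permutation-adjusted p = {(hits + 1) / (B + 1):.4f}")
```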
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to both problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
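A minimal reading of the PCA step, sketched in Python: given a matrix of several robustness metrics evaluated over many realizations of one failure level, the first principal component supplies the weights that combine them into an R*-value, and stacking these over failure levels yields the surface. The random data and normalization details below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def r_star_values(metric_matrix):
    """Combine several robustness metrics into a single R*-value per
    realization via the first principal component (a PCA-based weighting).
    metric_matrix: (n_realizations, n_metrics) at one failure percentage."""
    X = np.asarray(metric_matrix, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # make metrics dimensionless
    _, _, vt = np.linalg.svd(X, full_matrices=False)    # rows of vt = principal axes
    return X @ vt[0]                                    # scores along the first axis

# Robustness surface: one row of R*-values per failure percentage
# (hypothetical random data in place of measured metric values).
rng = np.random.default_rng(1)
surface = np.array([r_star_values(rng.normal(size=(20, 5))) for _ in range(10)])
```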
On the contributions of topological features to transcriptional regulatory network robustness
2012-01-01
Background Because biological networks exhibit a high degree of robustness, a systemic understanding of their architecture and function requires an appraisal of the network design principles that confer robustness. In this project, we conduct a computational study of the contribution of three degree-based topological properties (transcription factor-target ratio, degree distribution, cross-talk suppression) and their combinations on the robustness of transcriptional regulatory networks. We seek to quantify the relative degree of robustness conferred by each property (and combination) and also to determine the extent to which these properties alone can explain the robustness observed in transcriptional networks. Results To study individual properties and their combinations, we generated synthetic, random networks that retained one or more of the three properties with values derived from either the yeast or E. coli gene regulatory networks. Robustness of these networks was estimated through simulation. Our results indicate that the combination of the three properties we considered explains the majority of the structural robustness observed in the real transcriptional networks. Surprisingly, scale-free degree distribution is, overall, a minor contributor to robustness. Instead, most robustness is gained through topological features that limit the complexity of the overall network and increase the transcription factor subnetwork sparsity. Conclusions Our work demonstrates that (i) different types of robustness are implemented by different topological aspects of the network and (ii) size and sparsity of the transcription factor subnetwork play an important role for robustness induction. Our results are conserved across yeast and E. coli, which suggests that the design principles examined are present within an array of living systems. PMID:23194062
ERIC Educational Resources Information Center
Wilcox, Rand R.; Serang, Sarfaraz
2017-01-01
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Bao, Yihai; Main, Joseph A; Noh, Sam-Young
2017-08-01
A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness.
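The proposed robustness metric reduces to a simple worst-case ratio over scenarios; a sketch with hypothetical pushdown capacities (the numbers are illustrative only, not from the paper):

```python
def robustness_metric(ultimate_capacities, service_loads):
    """Minimum, over all column-removal scenarios, of the ultimate capacity
    under sudden column loss normalized by the applicable service-level
    gravity load; values above 1 indicate reserve capacity."""
    return min(cap / load for cap, load in zip(ultimate_capacities, service_loads))

# Hypothetical pushdown results (kN) for three removal scenarios:
print(robustness_metric([5200.0, 4700.0, 6100.0], [2600.0, 2500.0, 2700.0]))  # 1.88
```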
Robust calibration of an optical-lattice depth based on a phase shift
NASA Astrophysics Data System (ADS)
Cabrera-Gutiérrez, C.; Michon, E.; Brunaud, V.; Kawalec, T.; Fortun, A.; Arnal, M.; Billy, J.; Guéry-Odelin, D.
2018-04-01
We report on a method to calibrate the depth of an optical lattice. It consists of triggering the intrasite dipole mode of the cloud by a sudden phase shift. The corresponding oscillatory motion is directly related to the interband frequencies over a large range of lattice depths. Remarkably, for a moderate displacement, a single frequency dominates the oscillation of the zeroth and first orders of the interference pattern observed after a sufficiently long time of flight. The method is robust against atom-atom interactions and against the exact value of the weak external confinement superimposed on the optical lattice.
High-Throughput RNA Interference Screening: Tricks of the Trade
Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile
2016-01-01
The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we got CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
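The quality metrics quoted above follow their standard definitions; a brief sketch of how the Z′ factor and the coefficient of variation are computed from positive- and negative-control wells:

```python
import numpy as np

def z_prime(pos, neg):
    """Z' factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|;
    values of 0.5 or higher are conventionally acceptable for HTS."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

def cv_percent(x):
    """Coefficient of variation in percent."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()
```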
Linear, multivariable robust control with a mu perspective
NASA Technical Reports Server (NTRS)
Packard, Andy; Doyle, John; Balas, Gary
1993-01-01
The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, makes it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains makes it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
Probability-based hazard avoidance guidance for planetary landing
NASA Astrophysics Data System (ADS)
Yuan, Xu; Yu, Zhengshi; Cui, Pingyuan; Xu, Rui; Zhu, Shengying; Cao, Menglong; Luan, Enjie
2018-03-01
Future landing and sample return missions on planets and small bodies will seek landing sites with high scientific value, which may be located in hazardous terrains. Autonomous landing in such hazardous terrains and highly uncertain planetary environments is particularly challenging. Onboard hazard avoidance ability is indispensable, and the algorithms must be robust to uncertainties. In this paper, a novel probability-based hazard avoidance guidance method is developed for landing in hazardous terrains on planets or small bodies. By regarding the lander state as probabilistic, the proposed guidance algorithm exploits information on the uncertainty of lander position and calculates the probability of collision with each hazard. The collision probability serves as an accurate safety index, which quantifies the impact of uncertainties on the lander safety. Based on the collision probability evaluation, the state uncertainty of the lander is explicitly taken into account in the derivation of the hazard avoidance guidance law, which contributes to enhancing the robustness to the uncertain dynamics of planetary landing. The proposed probability-based method derives fully analytic expressions and does not require off-line trajectory generation. Therefore, it is appropriate for real-time implementation. The performance of the probability-based guidance law is investigated via a set of simulations, and the effectiveness and robustness under uncertainties are demonstrated.
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
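A least trimmed squares location estimate, the core of such filters, can be sketched as follows. This is a simplified constant-signal version (the paper's filters also fit richer local models), and the window and trimming parameters are illustrative assumptions:

```python
import numpy as np

def lts_location(window, h):
    """Least-trimmed-squares estimate of a constant signal level: the mean
    of the contiguous h-subset of sorted samples with the smallest sum of
    squared deviations from its own mean (outliers fall outside the subset)."""
    v = np.sort(np.ravel(window))
    best_mean, best_cost = v[:h].mean(), np.inf
    for i in range(len(v) - h + 1):
        sub = v[i:i + h]
        cost = ((sub - sub.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_mean = cost, sub.mean()
    return best_mean

def lts_filter(img, win=3, trim=0.5):
    """Slide a win x win window over the image; trim sets the rejected fraction."""
    pad = win // 2
    h = max(2, int(trim * win * win))
    padded = np.pad(np.asarray(img, float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = lts_location(padded[r:r + win, c:c + win], h)
    return out
```

Note that, as the abstract states, no scale parameters or context-dependent thresholds appear: the trimming alone rejects both impulsive outliers and samples from a second population in the window.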
2011-01-01
Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo
2015-11-20
This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. To this end, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.
Robust PLS approach for KPI-related prediction and diagnosis against outliers and missing data
NASA Astrophysics Data System (ADS)
Yin, Shen; Wang, Guang; Yang, Xu
2014-07-01
In practical industrial applications, key performance indicator (KPI)-related prediction and diagnosis are quite important for product quality and economic benefits. To meet these requirements, many advanced prediction and monitoring approaches have been developed, which can be classified into model-based or data-driven techniques. Among these approaches, partial least squares (PLS) is one of the most popular data-driven methods due to its simplicity and easy implementation in large-scale industrial processes. As PLS is totally based on the measured process data, the characteristics of the process data are critical for its success. Outliers and missing values are two common characteristics of the measured data which can severely affect the effectiveness of PLS. To ensure the applicability of PLS in practical industrial applications, this paper introduces a robust version of PLS to deal with outliers and missing values simultaneously. The effectiveness of the proposed method is finally demonstrated by the application results of KPI-related prediction and diagnosis on an industrial benchmark of the Tennessee Eastman process.
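As a much simpler stand-in for the paper's robust PLS (explicitly not the authors' algorithm), one can illustrate the two data problems it targets with median imputation of missing entries and MAD-based winsorization of outliers ahead of an ordinary PLS fit; `X_raw` and `Y_raw` are hypothetical process measurements and KPIs:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def clean(X):
    """Column-wise median imputation of NaNs, then MAD-based winsorization
    (clipping values beyond ~3 robust standard deviations)."""
    X = np.array(X, dtype=float)
    for j in range(X.shape[1]):
        col = X[:, j]
        med = np.nanmedian(col)
        col[np.isnan(col)] = med
        sd = 1.4826 * np.median(np.abs(col - med)) + 1e-12  # robust scale estimate
        X[:, j] = np.clip(col, med - 3 * sd, med + 3 * sd)
    return X

# Hypothetical data with one missing entry and one gross outlier.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(50, 6)); X_raw[3, 2] = np.nan; X_raw[7, 0] = 40.0
Y_raw = rng.normal(size=(50, 1))
pls = PLSRegression(n_components=2).fit(clean(X_raw), Y_raw)
```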
Robustness analysis of multirate and periodically time varying systems
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely considered the distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure the distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and stored ownership shares. 200 different medical images of 5 types are collected as the testing data and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average value of false negative rates by using DRZW is only 1.75% under 20 common attacks with different parameters.
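The share logic can be illustrated with a heavily simplified sketch: block-mean binary features stand in for the paper's CLBP features, and the (2,2) visual cryptography is reduced to an XOR construction. Both substitutions are assumptions made for brevity, not the paper's scheme.

```python
import numpy as np

def block_features(img, grid=8):
    """Stand-in content feature: binarized block means (the paper uses CLBP)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    means = np.array([[img[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                       for j in range(grid)] for i in range(grid)])
    return (means > means.mean()).astype(np.uint8)

def make_ownership_share(img, watermark_bits):
    """The host image is never modified: the ownership share binds the
    watermark to the image's features and is stored externally."""
    return block_features(img) ^ watermark_bits

def recover_watermark(query_img, ownership_share):
    """Stacking (here, XOR) the query's master share with the stored share."""
    return block_features(query_img) ^ ownership_share
```

If the query image's features match the registered image's, the recovered bits equal the watermark; feature differences between distinct images corrupt the recovery, which is what provides distinguishability.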
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
An all-water-based system for robust superhydrophobic surfaces.
Liu, Mingming; Hou, Yuanyuan; Li, Jing; Tie, Lu; Guo, Zhiguang
2018-06-01
Superhydrophobic surfaces with micro-/nanohierarchical structures are mechanically weak. Generally, organic solvents are used to dissolve or disperse organic adhesives and modifiers to enhance the mechanical strength of superhydrophobic surfaces. In this work, an all-water-based spraying solution is developed for the preparation of robust superhydrophobic surfaces, which contains ZnO nanoparticles, aluminum phosphate as an inorganic adhesive, and polytetrafluoroethylene with low surface energy. The all-water-based system is appreciated for low price and less pollution. Importantly, the prepared superhydrophobic surfaces are durable enough against various harsh conditions (such as UV irradiation for 12 h, pH values from 1 to 13, and temperatures from -10 to 300 °C for 12 h) and physical damages (including sandpaper abrasion and sand impact tests for 50 cycles). In addition, the obtained interfacial materials show promise for practical applications such as anti-icing and oil-water separation. Copyright © 2018 Elsevier Inc. All rights reserved.
Watermarking scheme based on singular value decomposition and homomorphic transform
NASA Astrophysics Data System (ADS)
Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu
2017-10-01
A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray scale image by inserting an invisible eight-bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is ensured by visual inspection and high values of NCC and MSSIM of extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
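A sketch of the SVD-domain embedding idea. The homomorphic step (embedding in the log-domain reflectance component) is omitted here, and the additive rule on singular values is one common variant rather than necessarily the paper's exact scheme:

```python
import numpy as np

def embed_watermark(host, mark, alpha=0.05):
    """Perturb the host's singular values: S' = S + alpha * S_w.
    Returns the watermarked image and the original S, kept as side
    information for semi-blind extraction."""
    U, S, Vt = np.linalg.svd(np.asarray(host, float), full_matrices=False)
    Sw = np.linalg.svd(np.asarray(mark, float), compute_uv=False)
    S2 = S.copy()
    k = min(len(S), len(Sw))
    S2[:k] += alpha * Sw[:k]
    return U @ np.diag(S2) @ Vt, S

def extract_mark_singular_values(marked, S_host, alpha=0.05):
    """Recover the watermark's singular values from the marked image."""
    Sm = np.linalg.svd(np.asarray(marked, float), compute_uv=False)
    return (Sm - S_host) / alpha
```

Since both S and alpha·S_w are sorted in decreasing order, their sum remains sorted, so the singular values of the reconstructed image are exactly S', making the extraction step exact in the absence of attacks.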
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained-input coupled with the inability to identify accurately the uncertainties motivates the design of stabilizing controller based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to the constrained optimal control problem with appropriately selecting value functions for the nominal system. Distinct from typical action-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee the uncertain nonlinear system to be stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
Robust local search for spacecraft operations using adaptive noise
NASA Technical Reports Server (NTRS)
Fukunaga, Alex S.; Rabideau, Gregg; Chien, Steve
2004-01-01
Randomization is a standard technique for improving the performance of local search algorithms for constraint satisfaction. However, it is well known that local search algorithms are sensitive to the noise values selected. We investigate the use of an adaptive noise mechanism in an iterative repair-based planner/scheduler for spacecraft operations. Preliminary results indicate that adaptive noise makes the use of randomized repair moves safe and robust; that is, using adaptive noise makes it possible to consistently achieve performance comparable with the best tuned noise setting without the need for manually tuning the noise parameter.
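The adaptive-noise idea can be sketched as follows. The update rule shown (raise noise on stagnation, decay it on improvement, in the spirit of Hoos' adaptive WalkSAT rule) is an assumption, since the abstract does not specify the planner's exact mechanism; `conflicts` and `repair` are hypothetical user-supplied hooks:

```python
import random

def adaptive_noise_repair(initial, conflicts, repair, max_steps=10000,
                          stall_limit=50, phi=0.2):
    """Iterative repair with adaptive noise. `noise` is the probability of
    taking a randomized (non-greedy) repair move; it rises when the search
    stagnates and decays whenever the conflict count improves."""
    state, noise = initial, 0.0
    best, last_improved = conflicts(state), 0
    for step in range(max_steps):
        if best == 0:
            break
        greedy = random.random() >= noise
        state = repair(state, greedy=greedy)
        c = conflicts(state)
        if c < best:
            best, last_improved = c, step
            noise -= 2.0 * phi * noise            # decay noise on improvement
        elif step - last_improved > stall_limit:
            noise += phi * (1.0 - noise)          # stagnation: add noise
            last_improved = step
    return state
```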
Improving the quality of the NHS workforce through values and competency-based selection.
McGuire, Clare; Rankin, Jean; Matthews, Lynsay; Cerinus, Marie; Zaveri, Swati
2016-07-01
Robust selection processes are essential to ensure the best and most appropriate candidates for nursing, midwifery and allied health professional (NMAHP) positions are appointed, and subsequently enhance patient care. This article reports on a study that explored interviewers' and interviewees' experiences of using values and competency-based interview (VCBI) methods for NMAHPs. Results suggest that this resource could have a positive effect on the quality of the NMAHP workforce, and therefore on patient care. This method of selection could be used in other practice areas in health care, and refinement of the resource should focus on supporting interview panels to develop their VCBI skills and experience.
Flood risk assessment and robust management under deep uncertainty: Application to Dhaka City
NASA Astrophysics Data System (ADS)
Mojtahed, Vahid; Gain, Animesh Kumar; Giupponi, Carlo
2014-05-01
Socio-economic and climatic changes have been the main drivers of uncertainty in environmental risk assessment, and in flood risk assessment in particular. The level of future uncertainty that researchers face when dealing with forward-looking problems focused on climate change is known as deep uncertainty (also known as Knightian uncertainty): nobody has experienced those changes before, our knowledge is limited to the extent that we have no notion of probabilities, and consolidated risk management approaches therefore have limited potential. Deep uncertainty refers to circumstances that analysts and experts do not know, or on which parties to decision making cannot agree: (i) the appropriate models describing the interactions among system variables, (ii) the probability distributions to represent uncertainty about key parameters in the models, and (iii) how to value the desirability of alternative outcomes. The need thus emerges to assist policy-makers by providing them not with a single, optimal solution to the problem at hand, such as crisp estimates of the costs of damages from the natural hazards considered, but with ranges of possible future costs, based on the outcomes of ensembles of assessment models and sets of plausible scenarios. Accordingly, we need to substitute robustness for optimality as a decision criterion. Under conditions of deep uncertainty, decision-makers have no statistical or mathematical basis to identify optimal solutions; instead they should prefer to implement "robust" decisions that perform relatively well over all conceivable outcomes across unknown future scenarios. Under deep uncertainty, analysts cannot employ probability theory or other statistics usually derived from observed historical data, and we therefore turn to non-statistical measures such as scenario analysis. We construct several plausible scenarios, each a full description of what may happen in the future, based on a meaningful synthesis of parameter values with control of their correlations to maintain internal consistency. This paper aims at incorporating a set of data mining and sampling tools to assess the uncertainty of model outputs under future climatic and socio-economic changes for Dhaka City, and at providing a decision support system for robust flood management and mitigation policies. After constructing an uncertainty matrix to identify the main sources of uncertainty for Dhaka City, we identify several hazard and vulnerability maps based on future climatic and socio-economic scenarios. The vulnerability of each flood management alternative under different sets of scenarios is determined, and finally the robustness of each plausible solution is defined based on the above assessment.
Fiber optic sensor for continuous health monitoring in CFRP composite materials
NASA Astrophysics Data System (ADS)
Rippert, Laurent; Papy, Jean-Michel; Wevers, Martine; Van Huffel, Sabine
2002-07-01
An intensity modulated sensor, based on the microbending concept, has been incorporated in laminates produced from a C/epoxy prepreg. Pencil lead break tests (Hsu-Nielsen sources) and tensile tests have been performed on this material. In this research study, fibre optic sensors are shown to offer an alternative to the robust piezoelectric transducers used for Acoustic Emission (AE) monitoring. The main emphasis has been put on the use of advanced signal processing techniques based on time-frequency analysis. The signal's Short Time Fourier Transform (STFT) has been computed and several robust noise reduction algorithms, such as Wiener adaptive filtering, improved spectral subtraction filtering, and Singular Value Decomposition (SVD)-based filtering, have been applied. An energy- and frequency-based detection criterion is put forward to detect transient signals that can be correlated with Modal Acoustic Emission (MAE) results and thus damage in the composite material. There is a strong indication that time-frequency analysis and the Hankel Total Least Squares (HTLS) method can also be used for damage characterization. This study shows that the signal from a quite simple microbend optical sensor contains information on the elastic energy released whenever damage is being introduced in the host material by mechanical loading. Robust algorithms can be used to retrieve and analyze this information.
Beyond singular values and loop shapes
NASA Technical Reports Server (NTRS)
Stein, G.
1985-01-01
The status of singular value loop-shaping as a design paradigm for multivariable feedback systems is reviewed. It is shown that this paradigm is an effective design tool whenever the problem specifications are spatially round. The tool can be arbitrarily conservative, however, when they are not. This happens because singular value conditions for robust performance are not tight (necessary and sufficient) and can severely overstate actual requirements. An alternate paradigm is discussed which overcomes these limitations. The alternative includes a more general problem formulation, a new matrix function mu, and tight conditions for both robust stability and robust performance. The state of the art currently permits analysis of feedback systems within this new paradigm. Synthesis remains a subject of research.
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-03-09
This paper deals with the joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated to capture the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed to eliminate the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.
Including robustness in multi-criteria optimization for intensity-modulated proton therapy
NASA Astrophysics Data System (ADS)
Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David
2012-02-01
We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for each Pareto optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface possible to the planner. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively seeking for a most-preferred one from the solution Pareto surface.
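The robustified objective described above amounts to a worst-case reduction over pre-computed scenario dose-influence matrices; a toy sketch (the matrix sizes, random data, and quadratic objective are illustrative assumptions, not the paper's clinical setup):

```python
import numpy as np

def robustified_objective(x, dose_matrices, objective):
    """Worst-case objective over the nominal and error scenarios:
    f_rob(x) = max_s f(D_s @ x), with one pre-computed dose-influence
    matrix D_s per scenario (setup shifts, range over/undershoot)."""
    return max(objective(D @ x) for D in dose_matrices)

# Hypothetical toy use: mean squared deviation from a prescription d_p.
d_p = np.full(100, 60.0)
obj = lambda dose: float(((dose - d_p) ** 2).mean())
rng = np.random.default_rng(0)
D_scenarios = [rng.random((100, 30)) for _ in range(4)]  # 4 scenarios, 30 pencil beams
x = np.ones(30)                                          # beam intensities
print(robustified_objective(x, D_scenarios, obj))
```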
Truong, Cong-Doan; Kwon, Yung-Keun
2017-12-21
Biological networks consisting of molecular components and interactions are represented by a graph model. There have been some studies based on that model to analyze the relationship between structural characteristics and dynamical behaviors in signaling networks. However, little attention has been paid to changes of modularity and robustness in mutant networks. In this paper, we investigated the changes of modularity and robustness caused by edge-removal mutations in three signaling networks. We first observed that both the modularity and robustness increased on average in the mutant networks under edge-removal mutations. However, the modularity change was negatively correlated with the robustness change. This implies that it is unlikely that both the modularity and the robustness values simultaneously increase under edge-removal mutations. Another interesting finding is that the modularity change was positively correlated with the degree, the number of feedback loops, and the edge betweenness of the removed edges, whereas the robustness change was negatively correlated with them. We note that these results were consistently observed in randomly structured networks. Additionally, we identified two groups of genes which are incident to the highly-modularity-increasing and the highly-robustness-decreasing edges with respect to the edge-removal mutations, respectively, and observed that they are likely to be central, forming a connected component of considerably large size. The gene-ontology enrichment of each of these gene groups was significantly different from that of the rest of the genes. Finally, we showed that the highly-robustness-decreasing edges can be promising edgetic drug targets, which validates the usefulness of our analysis. Taken together, the analysis of changes of robustness and modularity against edge-removal mutations can be useful to unravel novel dynamical characteristics underlying signaling networks.
Robust solid polymer electrolyte for conducting IPN actuators
NASA Astrophysics Data System (ADS)
Festin, Nicolas; Maziz, Ali; Plesse, Cédric; Teyssié, Dominique; Chevrot, Claude; Vidal, Frédéric
2013-10-01
Interpenetrating polymer networks (IPNs) based on nitrile butadiene rubber (NBR) as first component and poly(ethylene oxide) (PEO) as second component were synthesized and used as a solid polymer electrolyte film in the design of a mechanically robust conducting IPN actuator. IPN mechanical properties and morphologies were mainly investigated by dynamic mechanical analysis and transmission electron microscopy. For 1-ethyl-3-methylimidazolium bis-(trifluoromethylsulfonyl)-imide (EMITFSI) swollen IPNs, conductivity values are close to 1 × 10^-3 S cm^-1 at 25 °C. Conducting IPN actuators have been synthesized by chemical polymerization of 3,4-ethylenedioxythiophene (EDOT) within the PEO/NBR IPN. A pseudo-trilayer configuration has been obtained with PEO/NBR IPN sandwiched between two interpenetrated PEDOT electrodes. The robust conducting IPN actuators showed a free strain of 2.4% and a blocking force of 30 mN for a low applied potential of ±2 V.
Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S
2014-06-01
Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun
2017-06-01
Considering the time dependence of emergency logistics networks and the complexity of the environment in which such networks operate, this paper combines time-dependent network optimization theory with robust discrete optimization theory to build a robust dynamic emergency logistics network optimization model that maximizes the timeliness of emergency logistics. On this basis, considering the complexity of the dynamic network and the time dependence of edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study is carried out to verify the validity of the robust optimization model and its algorithm, and the values of different regulation factors are analyzed, given the importance of the control factor in solving for the optimal path. The results show that the proposed model and algorithm have good timeliness and strong robustness.
Tremblay, Louis A; Clark, Dana; Sinner, Jim; Ellis, Joanne I
2017-09-20
The sustainable management of estuarine and coastal ecosystems requires robust frameworks due to the presence of multiple physical and chemical stressors. In this study, we assessed whether ecological health decline, based on community structure composition changes along a pollution gradient, occurred at levels below guideline threshold values for copper, zinc and lead. Canonical analysis of principal coordinates (CAP) was used to characterise benthic communities along a metal contamination gradient. The analysis revealed changes in benthic community distribution at levels below the individual guideline values for the three metals. These results suggest that field-based measures of ecological health analysed with multivariate tools can provide additional information to single metal guideline threshold values to monitor large systems exposed to multiple stressors.
NASA Technical Reports Server (NTRS)
Schierman, John D.; Lovell, T. A.; Schmidt, David K.
1993-01-01
Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest is interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is utilized in the demonstration of these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.
Chen, Zhaoxue; Yu, Haizhong; Chen, Hao
2013-12-01
To solve the problem that traditional K-means clustering selects initial cluster centers randomly, we propose a new K-means segmentation algorithm based on robustly selecting the 'peaks' corresponding to white matter, gray matter and cerebrospinal fluid in the multi-peak gray-level histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means cluster centers and segments the MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm overcomes many shortcomings of traditional K-means clustering, such as low efficiency, accuracy and robustness, and high time consumption. The histogram 'peak' selection idea of the proposed segmentation method is also more universally applicable.
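A sketch of the peak-based initialization (the smoothing and prominence settings below are illustrative assumptions; the paper's exact peak selection rule may differ):

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_peak_centers(img, n_tissues=3):
    """Initial K-means centers from the n_tissues most prominent peaks of
    the gray-level histogram (WM, GM, CSF for brain MRI)."""
    hist, edges = np.histogram(np.ravel(img), bins=256)
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # suppress spurious peaks
    peaks, props = find_peaks(smooth, prominence=1)
    top = np.sort(peaks[np.argsort(props["prominences"])[-n_tissues:]])
    return (edges[top] + edges[top + 1]) / 2.0  # bin centers as initial means

# These centers can seed e.g. sklearn.cluster.KMeans(n_clusters=3,
# init=centers.reshape(-1, 1), n_init=1) on the flattened image.
```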
Real-time driver fatigue detection based on face alignment
NASA Astrophysics Data System (ADS)
Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi
2017-07-01
The performance and robustness of fatigue detection decrease markedly if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the face alignment at 3000 fps algorithm. First, the driver's eye regions are localized by exploiting 6 landmarks surrounding each eye. Second, HOG features of the extracted eye regions are calculated and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy. An alarm is generated if the eyes remain closed for a specified period of time. The accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and achieves better accuracy for driver fatigue detection than some previous methods.
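The PERCLOS decision step is straightforward to sketch; the 60 s window and the drowsiness cutoff used here are assumptions, as the abstract only states that an alarm fires when the eyes stay closed too long:

```python
def perclos(eye_closed_flags, fps, window_s=60.0, threshold=0.25):
    """PERCLOS: fraction of frames in the trailing window with eyes closed.
    The 0.25 cutoff is an assumed convention, not stated in the paper."""
    n = int(window_s * fps)
    recent = eye_closed_flags[-n:]
    ratio = sum(recent) / max(len(recent), 1)
    return ratio, ratio >= threshold

# Example: 30 fps, flags produced per frame by the SVM eye-state classifier.
ratio, drowsy = perclos([0, 0, 1, 1, 1, 0] * 300, fps=30)
```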
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
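Two building blocks referenced above, sketched in Python: the truncated nuclear norm itself and the singular-value thresholding step used inside TNNR-style solvers (the weighted-residual variant adds row weights to the residual term, omitted in this sketch):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_r = sum of singular values beyond the r largest; minimizing it
    penalizes only the part of the spectrum above rank r."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

def svt_step(X, tau):
    """Singular-value soft-thresholding, the proximal operator of the
    nuclear norm, applied repeatedly inside TNNR-style iterations."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```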
Trabelsi, O; Villalobos, J L López; Ginel, A; Cortes, E Barrot; Doblaré, M
2014-05-01
Swallowing depends on physiological variables that have a decisive influence on swallowing capacity and on the tracheal stress distribution; prosthetic implantation modifies these values and the overall performance of the trachea. The objective of this work was to develop a decision support system, based on experimental, numerical and statistical approaches with clinical verification, to help the thoracic surgeon decide the position and appropriate dimensions of a Dumon prosthesis for a specific patient in an acceptable time and with sufficient robustness. A code for mesh adaptation to any tracheal geometry was implemented and used to develop a robust experimental design, based on Taguchi's method and the analysis of variance, which established the main factors influencing swallowing. Equations fitting the stress and vertical displacement distributions were obtained, and the fitted values were compared to those calculated directly by the finite element method (FEM). Finally, the statistical study was checked and clinically validated on two real patient cases. The vertical displacements and principal stress distribution obtained for the specific tracheal model agreed with those calculated by FE simulations, with maximum absolute errors of 1.2 mm and 0.17 MPa, respectively. The resulting decision support tool provides a fast, accurate and simple way for the thoracic surgeon to predict the stress state of the trachea and the reduction in the ability to swallow after implantation, and will thus support decision making during pre-operative planning of tracheal interventions.
Preprocessing of gene expression data by optimally robust estimators
2010-01-01
Background The preprocessing of gene expression data obtained from several platforms routinely includes the aggregation of multiple raw signal intensities to one expression value. Examples are the computation of a single expression measure based on the perfect match (PM) and mismatch (MM) probes for the Affymetrix technology, the summarization of bead level values to bead summary values for the Illumina technology or the aggregation of replicated measurements in the case of other technologies including real-time quantitative polymerase chain reaction (RT-qPCR) platforms. The summarization of technical replicates is also performed in other "-omics" disciplines like proteomics or metabolomics. Preprocessing methods like MAS 5.0, Illumina's default summarization method, RMA, or VSN show that the use of robust estimators is widely accepted in gene expression analysis. However, the selection of robust methods seems to be mainly driven by their high breakdown point and not by efficiency. Results We describe how optimally robust radius-minimax (rmx) estimators, i.e. estimators that minimize an asymptotic maximum risk on shrinking neighborhoods about an ideal model, can be used for the aggregation of multiple raw signal intensities to one expression value for Affymetrix and Illumina data. With regard to the Affymetrix data, we have implemented an algorithm which is a variant of MAS 5.0. Using datasets from the literature and Monte-Carlo simulations we provide some reasoning for assuming approximate log-normal distributions of the raw signal intensities by means of the Kolmogorov distance, at least for the discussed datasets, and compare the results of our preprocessing algorithms with the results of Affymetrix's MAS 5.0 and Illumina's default method. The numerical results indicate that when using rmx estimators an accuracy improvement of about 10-20% is obtained compared to Affymetrix's MAS 5.0 and about 1-5% compared to Illumina's default method. The improvement is also visible in the analysis of technical replicates where the reproducibility of the values (in terms of Pearson and Spearman correlation) is increased for all Affymetrix and almost all Illumina examples considered. Our algorithms are implemented in the R package named RobLoxBioC which is publicly available via CRAN, The Comprehensive R Archive Network (http://cran.r-project.org/web/packages/RobLoxBioC/). Conclusions Optimally robust rmx estimators have a high breakdown point and are computationally feasible. They can lead to a considerable gain in efficiency for well-established bioinformatics procedures and thus, can increase the reproducibility and power of subsequent statistical analysis. PMID:21118506
Missing value imputation: with application to handwriting data
NASA Astrophysics Data System (ADS)
Xu, Zhen; Srihari, Sargur N.
2015-01-01
Missing values make pattern analysis difficult, particularly with limited available data; in longitudinal research, missing values accumulate and aggravate the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In studying the development of individuality of handwriting, we encountered feature values that are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most likely independent value imputation, and three methods based on Bayesian networks (static Bayesian network, parameter EM, and structural EM), are compared on children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and draw useful conclusions. Specifically, the static Bayesian network is used for our data, which contain around 5% missing values, as it provides adequate accuracy at low computational cost.
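As an illustration of two of the simpler baselines compared above, a minimal sketch of mean imputation (column means) and random imputation (draws from observed values); the data are synthetic assumptions.

```python
# Mean vs. random imputation on a small matrix with ~20% missing entries.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
X[rng.random(X.shape) < 0.2] = np.nan

def mean_impute(X):
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def random_impute(X):
    X = X.copy()
    for j in range(X.shape[1]):
        observed = X[~np.isnan(X[:, j]), j]
        miss = np.isnan(X[:, j])
        X[miss, j] = rng.choice(observed, size=miss.sum())
    return X

print(mean_impute(X))
```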
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection is one of the most important problems in photogrammetry: it recovers the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the computation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the direct linear transform (DLT) model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
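The RANSAC loop itself is generic; the paper applies it to a DLT resection model with image/object point correspondences. A minimal sketch on a 2-D line fit shows how gross errors are excluded (thresholds and iteration count are illustrative assumptions).

```python
# Generic RANSAC: repeatedly fit a minimal 2-point line hypothesis,
# score it by its consensus set, then refit on the inliers only.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
y[::10] += rng.normal(0, 20, 10)            # inject gross errors

best_inliers, best_model = None, None
for _ in range(200):
    i, j = rng.choice(x.size, 2, replace=False)
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])       # minimal 2-point hypothesis
    b = y[i] - a * x[i]
    inliers = np.abs(y - (a * x + b)) < 0.5 # consensus set
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers, best_model = inliers, (a, b)

a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)  # refit on inliers
print(f"slope={a:.3f}, intercept={b:.3f}, inliers={best_inliers.sum()}")
```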
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite element program. The performance of the proposed approach is analyzed by means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains.
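The core trick is easy to demonstrate on a scalar function: since the imaginary part of f(x + ih) = f(x) + ih f'(x) + O(h^2) involves no subtraction, the step h can be taken near machine limits. A minimal sketch (the test function is an illustrative assumption):

```python
# Complex-step derivative approximation: f'(x) ~= Im(f(x + i*h)) / h.
# No subtractive cancellation, so h can be astronomically small.
import numpy as np

f  = lambda x: np.exp(x) * np.sin(x)
fp = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))  # exact derivative

x, h = 1.5, 1e-200
cs = np.imag(f(x + 1j * h)) / h           # complex step
fd = (f(x + 1e-8) - f(x)) / 1e-8          # forward difference
print("complex step error:", abs(cs - fp(x)))   # ~machine precision
print("forward diff error:", abs(fd - fp(x)))   # round-off limited
```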
Robustness of topological Hall effect of nontrivial spin textures
NASA Astrophysics Data System (ADS)
Jalil, Mansoor B. A.; Tan, Seng Ghee
2014-05-01
We analyze the topological Hall conductivity (THC) of topologically nontrivial spin textures like magnetic vortices and skyrmions and investigate its possible application in the readback for magnetic memory based on those spin textures. Under adiabatic conditions, such spin textures would theoretically yield quantized THC values, which are related to topological invariants such as the winding number and polarity, and as such are insensitive to fluctuations and smooth deformations. However, in a practical setting, the finite size of spin texture elements and the influence of edges may cause them to deviate from their ideal configurations. We calculate the degree of robustness of the THC output in practical magnetic memories in the presence of edge and finite size effects.
Engineering Robust Yeasts for Biorefinery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taek Soon; Niles, Brad; Chow, Ruthie
2016-06-22
Isoprene is a highly valued terpene-based chemical feedstock that can be derived either from petroleum or from fermentation of plant biomass. This project enabled more efficient isoprene fermentation using renewable resources, at yields that can compete economically with non-renewable sources. This Phase I project applied a novel synthetic biology approach, the Artificial Positive Feedback Loop (APFL) technology, to improve isoprene production yields.
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is shuffled and divided into blocks of the same size, and singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of digital images and is robust to several image processing attacks. Comparison with related visual cryptography-based algorithms reveals that the proposed method gives better performance, and it is especially resilient against rotation attacks.
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is a loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials; the concept is analogous to diversifying financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor: depending on the power of each individual test, the combination method can give more power than any single test or give power that is close to that of the most powerful test. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of the results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
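The abstract does not spell out the specific combination rule; as a classic example of the combining step only, Fisher's method via SciPy (the group-sequential alpha-spending machinery is not shown here):

```python
# Combining P values from several test statistics with Fisher's method.
from scipy import stats

# Hypothetical p-values from, e.g., log-rank, Wilcoxon and Cox-based tests.
p_values = [0.04, 0.20, 0.11]
stat, p_comb = stats.combine_pvalues(p_values, method="fisher")
print(f"combined statistic={stat:.3f}, combined p={p_comb:.4f}")
```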
Euro Banknote Recognition System for Blind People.
Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael
2017-01-20
This paper presents the development of a portable system that allows blind people to detect and recognize Euro banknotes. The device is based on a Raspberry Pi and a Raspberry Pi camera, the Pi NoIR (No Infrared filter), equipped with additional infrared light and embedded into a pair of sunglasses, permitting blind and visually impaired people to handle Euro banknotes independently, especially when receiving cash back when shopping. Banknote detection is based on modified Viola-Jones algorithms, while banknote value recognition relies on the Speeded Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume a distribution for the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To this end, a linear fractional transformation (LFT) model of the longitudinal dynamics of a transport aircraft is developed over the flight envelope using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model captures the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and the real parameter uncertainties (aerodynamic coefficient and moment-of-inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for the transport aircraft closed-loop system.
Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R
2014-12-01
Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
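A sketch of the simulation idea: model an expert's judgment as a probability mass function over discrete scores and compare score variability across behaviors. The pmfs below are illustrative stand-ins, not the paper's elicited distributions.

```python
# Monte Carlo over expert-behavior pmfs; the standard deviation of the
# simulated scores serves as the robustness indicator discussed above.
import numpy as np

rng = np.random.default_rng(4)
scores = np.array([1, 2, 3, 4, 5])
pmfs = {
    "confident":           [0.00, 0.05, 0.05, 0.10, 0.80],
    "reasonably doubting": [0.05, 0.10, 0.15, 0.40, 0.30],
    "irresolute":          [0.20, 0.20, 0.20, 0.20, 0.20],
}
for behavior, p in pmfs.items():
    draws = rng.choice(scores, size=10_000, p=p)
    print(f"{behavior:>20}: mean={draws.mean():.2f}, sd={draws.std():.2f}")
```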
Arbitrary-step randomly delayed robust filter with application to boost phase tracking
NASA Astrophysics Data System (ADS)
Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang
2018-04-01
Conventional filters such as the extended Kalman filter, the unscented Kalman filter and the cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe random delays. Then, the expressions for the predicted measurement and covariance are reformulated, removing the restriction that the maximum delay be one or two steps and the assumption that the Bernoulli variables take the value one with equal probability. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, this filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
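For reference, the Huber technique underlying the robust step downweights large residuals, which tames heavy-tailed (glint) noise; a minimal sketch of the weight function with the common delta = 1.345 default:

```python
# Huber M-estimator weights: residuals within delta keep full weight,
# larger ones are downweighted proportionally to delta/|r|.
import numpy as np

def huber_weight(residual, delta=1.345):
    r = np.abs(residual)
    return np.where(r <= delta, 1.0, delta / r)

residuals = np.array([0.2, -0.8, 1.0, 4.0, -10.0])
print(huber_weight(residuals))   # weights shrink as |r| grows past delta
```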
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Petrou, Panagiotis; Talias, Michael A
2014-01-01
The continuing increase of pharmaceutical expenditure calls for new approaches to the pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that can help health systems cope with rising pharmaceutical expenditure. The aims were to assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and to explore the integrative framework. A probabilistic Markov chain Monte Carlo model was created to simulate the progression of advanced renal cell cancer, comparing sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official price lists and clinical guidelines from the Cyprus Ministry of Health were used for cost calculations. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. The value-based price of sorafenib was found to be significantly lower than its current reference price. The feasibility of value-based pricing is documented, and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability into the price are its main advantages, which have to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience adversities in establishing and sustaining the essential structures for this scheme.
Robust distributed control of spacecraft formation flying with adaptive network topology
NASA Astrophysics Data System (ADS)
Shasti, Behrouz; Alasty, Aria; Assadian, Nima
2017-07-01
In this study, the distributed six degree-of-freedom (6-DOF) coordinated control of spacecraft formation flying in low earth orbit (LEO) has been investigated. For this purpose, an accurate coupled translational and attitude relative dynamics model of the spacecraft with respect to the reference orbit (virtual leader) is presented by considering the most effective perturbation acceleration forces on LEO satellites, i.e. the second zonal harmonic and the atmospheric drag. Subsequently, the 6-DOF coordinated control of spacecraft in formation is studied. During the mission, the spacecraft communicate with each other through a switching network topology in which the weights of its graph Laplacian matrix change adaptively based on a distance-based connectivity function between neighboring agents. Because some of the dynamical system parameters such as spacecraft masses and moments of inertia may vary with time, an adaptive law is developed to estimate the parameter values during the mission. Furthermore, for the case that there is no knowledge of the unknown and time-varying parameters of the system, a robust controller has been developed. It is proved that the stability of the closed-loop system coupled with adaptation in network topology structure and optimality and robustness in control is guaranteed by the robust contraction analysis as an incremental stability method for multiple synchronized systems. The simulation results show the effectiveness of each control method in the presence of uncertainties and parameter variations. The adaptive and robust controllers show their superiority in reducing the state error integral as well as decreasing the control effort and settling time.
Mutual information based feature selection for medical image retrieval
NASA Astrophysics Data System (ADS)
Zhi, Lijia; Zhang, Shaomin; Li, Yan
2018-04-01
In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and retrieval tasks. For practical applicability, it avoids using a large amount of training data; instead, with a well-designed training process and robust fundamental features and measurements, it achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for routine clinical application.
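As an illustration of mutual-information-based feature ranking (not the paper's retrieval pipeline or CT features), a minimal scikit-learn sketch on synthetic data:

```python
# Rank features by their estimated mutual information with the class label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=3, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]
print("features ranked by MI:", ranking)
```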
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. We adopt the concept of ground control points (GCPs) to make SGM more robust, modeling the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated with a Gaussian mixture model, which strengthens the relation between the GCPs and the pixels to be estimated and encodes a degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, for which we design a confidence-updating equation based on three rules. With this confidence-based term, the disparity assignment can be heuristically selected within the disparity search range during the iteration process. According to our experiments, several iterations are sufficient to produce satisfactory results. Experimental results validate that the proposed method outperforms surface reconstruction, a representative variant of SGM that performs very well on aerial images.
On Robust Methodologies for Managing Public Health Care Systems
Nimmagadda, Shastri L.; Dreher, Heinz V.
2014-01-01
The authors focus on ontology-based multidimensional data warehousing and mining methodologies, addressing various issues in organizing, reporting and documenting diabetic cases and their associated ailments, including causalities. Map and other diagnostic data views, depicting similarity and comparison of attributes extracted from the warehouses, are used to understand the ailments based on gender, age, geography, food habits and other hereditary event attributes. In addition to rigor in data mining and visualization, an added focus is on the value of interpreting data views from fully processed diagnoses and the subsequent prescription of appropriate medications. The proposed methodology is a robust back-end application for web-based patient-doctor consultations and e-Health care management systems, through which billions of dollars spent on medical services can be saved, in addition to improving the quality of life and the average life span of a person. Government health departments and agencies, private and government medical practitioners, and social welfare organizations are typical users of these systems. PMID:24445953
Automated combinatorial method for fast and robust prediction of lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
The lack of computationally inexpensive and accurate ab initio based methodologies to predict the lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab initio molecular dynamics is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to more expensive methodologies, but it depends strongly on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of the two variables produce the best predictions of κl. A set of 42 compounds was used to test the accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
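For reference, a commonly quoted (Morelli-Slack) form of the Slack equation is κl = A · M̄ · θa³ · δ / (γ² · T), with δ³ the volume per atom. The constant A ≈ 3.1×10⁻⁶ (M̄ in amu, δ in Å, κl in W m⁻¹ K⁻¹) and the silicon-like inputs below are illustrative assumptions; the sensitivity to the definitions of θa and γ is exactly what the paper investigates.

```python
# Hedged sketch of the Slack equation (Morelli-Slack form).
def slack_kappa(M_amu, theta_a_K, delta_angstrom, gamma, T=300.0, A=3.1e-6):
    """kappa_L in W/(m K) for M in amu, delta in Angstrom, T in K."""
    return A * M_amu * theta_a_K**3 * delta_angstrom / (gamma**2 * T)

# Rough silicon-like inputs: theta_a ~ theta_D / n**(1/3), with
# theta_D ~ 645 K and n = 2 atoms per primitive cell.
kappa = slack_kappa(M_amu=28.09, theta_a_K=512.0, delta_angstrom=2.7, gamma=1.06)
print(f"kappa_L ~ {kappa:.0f} W/(m K)")   # right order of magnitude for Si
```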
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply variable projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched the assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models, and the two showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched the LM results, with approximately 3x and 2x reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in all cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being reliably (100%) convergent and computationally about 3x (TM) and 2x (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, James R.; Moore, Susan G.; Myers, Stephen C.
The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as modified Bayesian kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flat-file format but perhaps in the future in a spatially indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking-triangle search for the containing triangle, and finally the NNI interpolation.
Ni, Xiao Yu; Drengstig, Tormod; Ruoff, Peter
2009-09-02
Organisms have the property to adapt to a changing environment and keep certain components within a cell regulated at the same level (homeostasis). "Perfect adaptation" describes an organism's response to an external stepwise perturbation by regulating some of its variables/components precisely to their original preperturbation values. Numerous examples of perfect adaptation/homeostasis have been found, as for example, in bacterial chemotaxis, photoreceptor responses, MAP kinase activities, or in metal-ion homeostasis. Two concepts have evolved to explain how perfect adaptation may be understood: In one approach (robust perfect adaptation), the adaptation is a network property, which is mostly, but not entirely, independent of rate constant values; in the other approach (nonrobust perfect adaptation), a fine-tuning of rate constant values is needed. Here we identify two classes of robust molecular homeostatic mechanisms, which compensate for environmental variations in a controlled variable's inflow or outflow fluxes, and allow for the presence of robust temperature compensation. These two classes of homeostatic mechanisms arise due to the fact that concentrations must have positive values. We show that the concept of integral control (or integral feedback), which leads to robust homeostasis, is associated with a control species that has to work under zero-order flux conditions and does not necessarily require the presence of a physico-chemical feedback structure. There are interesting links between the two identified classes of homeostatic mechanisms and molecular mechanisms found in mammalian iron and calcium homeostasis, indicating that homeostatic mechanisms may underlie similar molecular control structures.
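The integral-control idea can be made concrete with a minimal linear sketch (not the paper's specific molecular mechanisms; all rate constants are arbitrary assumptions): the controller E integrates the deviation of A from its set point and drives a removal flux that is zero-order in A, so A returns exactly to the set point after a step change in inflow.

```python
# Minimal integral-feedback sketch of robust perfect adaptation.
# A is the controlled variable; E integrates the error (A - A_set) and
# drives a removal flux that is zero-order in A.
A_set, ks, k_out = 1.0, 0.5, 1.0
A, E, dt = 1.0, 0.0, 0.01
for step in range(20_000):                  # simulate t in [0, 200)
    t = step * dt
    k_in = 1.0 if t < 100 else 2.0          # stepwise inflow perturbation
    dA = k_in - k_out * A - E               # inflow minus outflows
    dE = ks * (A - A_set)                   # integral action
    A, E = A + dA * dt, E + dE * dt
print(f"A after adaptation: {A:.4f} (set point {A_set})")   # -> ~1.0000
```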
Savalei, Victoria
2018-01-01
A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed, and follow-up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to the RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator and cannot easily be generalized to the most commonly used categorical data estimators.
Igne, Benoît; Drennen, James K; Anderson, Carl A
2014-01-01
Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference; in near-infrared spectroscopy, nonlinearity can arise from light path-length differences caused by differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models was evaluated in scenarios where the calibration set did not include all the variability present in the test set. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences, and the linearity of the SVM-predicted values was improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work remains to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
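A minimal sketch contrasting the two regression families on a deliberately nonlinear response (synthetic data, not the paper's NIR spectra; hyperparameters are illustrative assumptions):

```python
# SVM regression vs. PLS on a nonlinear synthetic response.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(200, 5))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(200)   # nonlinear in X[:, 0]

Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
pls = PLSRegression(n_components=3).fit(Xtr, ytr)
svr = SVR(kernel="rbf", C=10.0).fit(Xtr, ytr)
print("PLS MSE:", mean_squared_error(yte, pls.predict(Xte).ravel()))
print("SVR MSE:", mean_squared_error(yte, svr.predict(Xte)))
```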
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important, especially for tracking, because the content of videos varies widely. In contrast to still-image processing, video analytics must routinely cope with blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two modes of PBOD, deterministic and probabilistic, were tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest; the vectors are matched to build candidate patches based on their coordinates. In the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. In the probabilistic approach, patch matching is done by modelling the histograms of the patches with Poisson distributions for both the RGB and HSV colour models; maximum likelihood is then applied for position smoothing and a Bayesian approach for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Owing to its heavy processing requirements, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226
Jennings, Natasha; Lutze, Matthew; Clifford, Stuart; Maw, Michael
2017-03-01
The emergency nurse practitioner is now a well-established and respected member of the healthcare team. Evaluation of the role has focused on patient safety, effectiveness and quality-of-care outcomes. Comparisons of the role continue to focus on cost, with findings based on incomplete, and almost impossible to define, recognition of the contribution that parallel practitioners make to service delivery. Currently there is no clear definition of how nurse practitioners contribute to value in health service delivery. Robust and rigorous research needs to be commissioned that takes into consideration the unique hybrid nature of the emergency nurse practitioner role and focuses on the value they contribute to health care delivery.
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan
2018-02-01
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the complex refocused hologram is calculated and a sparsity metric is applied to it. Here, we compare two choices of sparsity metric used in SoG, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected versus sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG (TC-based SoG) and GoG (GI-based SoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
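For reference, common forms of the two sparsity metrics are the Gini index of Hurley and Rickard and the Tamura coefficient TC = sqrt(std/mean); the exact normalizations in the paper may differ slightly. A minimal sketch comparing them on sparse versus dense signals:

```python
# Gini index and Tamura coefficient as sparsity metrics.
import numpy as np

def gini_index(x):
    c = np.sort(np.abs(x).ravel())          # ascending magnitudes
    n = c.size
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

def tamura_coefficient(x):
    x = np.abs(x).ravel()
    return np.sqrt(x.std() / x.mean())

sparse = np.zeros(1000); sparse[:10] = 100.0   # few strong entries
dense = 1.0 + 0.01 * np.random.default_rng(6).random(1000)
for name, sig in [("sparse", sparse), ("dense", dense)]:
    print(f"{name}: GI={gini_index(sig):.3f}, TC={tamura_coefficient(sig):.3f}")
```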
A robust optimization model for distribution and evacuation in the disaster response phase
NASA Astrophysics Data System (ADS)
Fereiduni, Meysam; Shahanaghi, Kamran
2017-03-01
Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss; an efficient response immediately following a natural disaster is therefore vital to minimize these effects. This research paper presents a network design model for humanitarian logistics which assists in location and allocation decisions over multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management and helps decision makers to make optimal choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of scenarios. Second, because of probable disruption of the distribution infrastructure (such as bridges), Monte Carlo simulation is used to generate the related random numbers and scenarios, and the p-robust approach is utilized to formulate the new network; the p-robust approach can anticipate possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments explore the effects of the various problem parameters; these experiments give managerial insights and can guide decision makers under a variety of conditions. The performances of the robust optimization approach and the p-robust optimization approach are then evaluated, and intriguing results and practical insights are demonstrated by our analysis of this comparison.
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
NASA Astrophysics Data System (ADS)
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.
2003-01-01
A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
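The invariance the method relies on is easy to check numerically: rotating a set of unit vectors (i.e., changing the reference frame) leaves the singular values of the stacked vector matrix unchanged. A minimal sketch with randomly generated vectors:

```python
# Singular values are invariant under an orthonormal change of frame.
import numpy as np

rng = np.random.default_rng(7)
V = rng.standard_normal((3, 6))
V /= np.linalg.norm(V, axis=0)                      # measured unit column vectors
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # random orthonormal frame change

s_body = np.linalg.svd(V, compute_uv=False)
s_ref = np.linalg.svd(R @ V, compute_uv=False)
print(np.allclose(s_body, s_ref))                   # -> True
```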
Adaptive control of servo system based on LuGre model
NASA Astrophysics Data System (ADS)
Jin, Wang; Niancong, Liu; Jianlong, Chen; Weitao, Geng
2018-03-01
This paper establishes a mechanical model of a feed system based on the LuGre friction model. To counteract the influence of nonlinear factors on the running stability of the system, a nonlinear state observer is designed to estimate the internal state z of the LuGre model, and an adaptive friction compensation controller is designed. Simulink simulation results show that the control method can effectively suppress the adverse effects of friction and external disturbances. In the simulations, the adaptive parameter kz lies between 0.11 and 0.13 and the value of gamma1 between 1.9 and 2.1; the position tracking error reaches the 10^-3 level and stabilizes near zero within 0.3 s. The compensation method offers better tracking accuracy and robustness.
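For reference, the standard LuGre model evolves an internal bristle state z with sliding velocity v and combines bristle stiffness, micro-damping and viscous terms into the friction force. A minimal simulation sketch; all parameter values are illustrative assumptions, not the paper's identified ones.

```python
# Minimal LuGre friction simulation under an imposed sliding velocity.
import numpy as np

sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4    # bristle stiffness/damping, viscous
Fc, Fs, vs = 1.0, 1.5, 0.01                 # Coulomb level, static level, Stribeck vel.

def g(v):
    """Stribeck curve bounding the bristle deflection."""
    return Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)

dt, z = 1e-5, 0.0
for k in range(100_000):
    v = 0.02 * np.sin(2 * np.pi * k * dt)   # imposed sliding velocity (1 Hz)
    dz = v - sigma0 * abs(v) / g(v) * z     # bristle state dynamics
    z += dz * dt
    F = sigma0 * z + sigma1 * dz + sigma2 * v   # LuGre friction force
print(f"final bristle state z = {z:.2e}, friction F = {F:.3f}")
```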
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.
High-throughput countercurrent microextraction in passive mode.
Xie, Tingliang; Xu, Cong
2018-05-15
Although microextraction is much more efficient than conventional macroextraction, its practical application has been limited by low throughputs and difficulties in constructing robust countercurrent microextraction (CCME) systems. In this work, a robust CCME process was established based on a novel passive microextractor with four units without any moving parts. The passive microextractor has internal recirculation and can efficiently mix two immiscible liquids. The hydraulic characteristics as well as the extraction and back-extraction performance of the passive CCME were investigated experimentally. The recovery efficiencies of the passive CCME were 1.43-1.68 times larger than the best values achieved using cocurrent extraction. Furthermore, the total throughput of the passive CCME developed in this work was about one to three orders of magnitude higher than that of other passive CCME systems reported in the literature. Therefore, a robust CCME process with high throughputs has been successfully constructed, which may promote the application of passive CCME in a wide variety of fields.
The valuation of the EQ-5D in Portugal.
Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Oppe, Mark
2014-03-01
The EQ-5D is a preference-based measure widely used in cost-utility analysis (CUA). Several countries have conducted surveys to derive value sets, but this was not the case for Portugal. The purpose of this study was to estimate a value set for the EQ-5D for Portugal using the time trade-off (TTO). A representative sample of the Portuguese general population (n = 450) stratified by age and gender valued 24 health states. Face-to-face interviews were conducted by trained interviewers. Each respondent ranked and valued seven health states using the TTO. Several models were estimated at both the individual and aggregated levels to predict health state valuations. Alternative functional forms were considered to account for the skewed distribution of these valuations. The models were analyzed in terms of their coefficients, overall fit and the ability for predicting the TTO values. Random effects models were estimated using generalized least squares and were robust across model specification. The results are generally consistent with other value sets. This research provides the Portuguese EQ-5D value set based on the preferences of the Portuguese general population as measured by the TTO. This value set is recommended for use in CUA conducted in Portugal.
Robust Design of Biological Circuits: Evolutionary Systems Biology Approach
Chen, Bor-Sen; Hsu, Chih-Yuan; Liou, Jing-Jia
2011-01-01
Artificial gene circuits have been proposed to be embedded into microbial cells that function as switches, timers, oscillators, and the Boolean logic gates. Building more complex systems from these basic gene circuit components is one key advance for biologic circuit design and synthetic biology. However, the behavior of bioengineered gene circuits remains unstable and uncertain. In this study, a nonlinear stochastic system is proposed to model the biological systems with intrinsic parameter fluctuations and environmental molecular noise from the cellular context in the host cell. Based on evolutionary systems biology algorithm, the design parameters of target gene circuits can evolve to specific values in order to robustly track a desired biologic function in spite of intrinsic and environmental noise. The fitness function is selected to be inversely proportional to the tracking error so that the evolutionary biological circuit can achieve the optimal tracking mimicking the evolutionary process of a gene circuit. Finally, several design examples are given in silico with the Monte Carlo simulation to illustrate the design procedure and to confirm the robust performance of the proposed design method. The result shows that the designed gene circuits can robustly track desired behaviors with minimal errors even with nontrivial intrinsic and external noise. PMID:22187523
Fuzzy-information-based robustness of interconnected networks against attacks and failures
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai
2016-09-01
Cascading failure is fatal in applications, and its investigation became a focal topic in the field of complex networks in the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its use of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attack ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the network model under assortative and random coupling increases with the coupling probability, whereas for disassortative coupling there exists a critical phenomenon in the coupling probability. In addition, a critical value of the attack-information accuracy, beyond which it affects the network robustness, is observed. Finally, as a practical example, the interconnected AS-level Internet in South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides guidelines for preventing cascading failures in the architectural design and optimization of real-world interconnected networks.
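As a much-simplified illustration of the tolerance effect on a single network (a Motter-Lai style load-capacity cascade, not the paper's interconnected fuzzy-information model; the graph, load proxy and tolerance values are assumptions):

```python
# Load-capacity cascade: capacity = (1 + tolerance) * initial load;
# removing a hub redistributes load and may trigger a cascade.
import networkx as nx

def giant_fraction_after_attack(G, tolerance, seed_node):
    load = nx.betweenness_centrality(G)                 # initial loads
    capacity = {n: (1 + tolerance) * load[n] for n in G}
    H = G.copy()
    H.remove_node(seed_node)
    while True:
        load = nx.betweenness_centrality(H)
        failed = [n for n in H if load[n] > capacity[n]]
        if not failed:
            break
        H.remove_nodes_from(failed)
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

G = nx.barabasi_albert_graph(200, 2, seed=0)
hub = max(G.degree, key=lambda kv: kv[1])[0]            # attack the largest hub
for tol in (0.05, 0.2, 0.5):
    frac = giant_fraction_after_attack(G, tol, hub)
    print(f"tolerance={tol}: giant component fraction = {frac:.2f}")
```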
Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)
NASA Astrophysics Data System (ADS)
Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul
2018-05-01
The Geographically Weighted Regression (GWR) model has been widely applied in many practical fields for exploring spatial heterogeneity of a regression model. However, this method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to a distorted estimate of the underlying regression model. One solution for handling outliers in the regression model is to use a robust model, here called Robust Geographically Weighted Regression (RGWR). This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (Air Polluter Standard Index - APSI) based on the RGWR approach. We consider seven variables that are directly related to the air pollution level: traffic velocity, population density, the business center aspect, air humidity, wind velocity, air temperature, and the area of urban forest. The best model is determined by the smallest AIC value. There are significant differences between global regression and RGWR in this case, but basic GWR using the Gaussian kernel is the best model for the APSI because it has the smallest AIC.
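A minimal sketch of one local GWR fit with a robust bisquare reweighting pass, assuming a Gaussian spatial kernel with bandwidth h and synthetic stand-ins for the APSI data; the paper's exact RGWR estimator may differ.

```python
import numpy as np

def gwr_point(X, y, coords, target, h, robust_iters=2):
    """Locally weighted fit at `target`, downweighting large residuals."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)                   # geographic weights
    for _ in range(robust_iters + 1):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust residual scale
        u = np.clip(r / (4.685 * s), -1, 1)
        w = np.exp(-0.5 * (d / h) ** 2) * (1 - u**2) ** 2  # bisquare downweight
    return beta

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (100, 2))
X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=100)
y[::17] += 8.0                                        # inject outliers
print(gwr_point(X, y, coords, coords[0], h=3.0))
```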
Robustness of assembly supply chain networks by considering risk propagation and cascading failure
NASA Astrophysics Data System (ADS)
Tang, Liang; Jing, Ke; He, Jie; Stanley, H. Eugene
2016-10-01
An assembly supply chain network (ASCN) is composed of manufacturers located in different geographical regions. To analyze the robustness of an ASCN when it suffers catastrophic disruption events, we construct a cascading failure model of risk propagation. In our model, different disruption scenarios s are considered and the probability equation over all disruption scenarios is developed. Using production capability loss as the robustness index (RI) of an ASCN, we conduct a numerical simulation to assess its robustness. Through simulation, we compare the network robustness at different values of linking intensity and node threshold and find that weak linking intensity or a high node threshold increases the robustness of the ASCN. We also compare network robustness levels under different disruption scenarios.
Missing Value Monitoring Enhances the Robustness in Proteomics Quantitation.
Matafora, Vittoria; Corno, Andrea; Ciliberto, Andrea; Bachi, Angela
2017-04-07
In global proteomic analysis, it is estimated that protein abundance spans from millions to fewer than 100 copies per cell. The challenge of protein quantitation by classic shotgun proteomic techniques lies in the missing values for peptides belonging to low-abundance proteins, which lower intra-run reproducibility and compromise downstream statistical analysis. Here, we present a new analytical workflow, MvM (missing value monitoring), able to recover the quantitation of missing values generated by shotgun analysis. In particular, we used confident data-dependent acquisition (DDA) quantitation only for proteins measured in all the runs, while we filled the missing values with data-independent acquisition analysis using the library previously generated in DDA. We analyzed cell cycle regulated proteins, as they are low-abundance proteins with highly dynamic expression levels. Indeed, we found that cell cycle related proteins are the major components of the missing-value-rich proteome. Using the MvM workflow, we doubled the number of robustly quantified cell cycle related proteins, and we reduced the number of missing values, achieving robust quantitation for proteins over ∼50 molecules per cell. MvM allows lower quantification variance among replicates for low-abundance proteins with respect to DDA analysis, which demonstrates the potential of this novel workflow to measure low-abundance, dynamically regulated proteins.
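A minimal sketch of the fill-in idea, assuming toy protein-by-run intensity tables with NaN for missing values; the actual MvM workflow also builds the DIA library from the DDA runs and applies confidence filtering.

```python
import numpy as np
import pandas as pd

dda = pd.DataFrame({"run1": [1e6, np.nan, 2e3],
                    "run2": [1.1e6, 5e4, np.nan],
                    "run3": [0.9e6, 4.8e4, 2.2e3]},
                   index=["P1", "P2", "P3"])
dia = dda.copy()
dia.loc["P2", "run1"], dia.loc["P3", "run2"] = 5.2e4, 2.1e3  # DIA re-measurements

complete = dda.dropna()        # proteins measured in all runs keep DDA values
patched = dda.fillna(dia)      # remaining gaps are filled from DIA
print(patched, "\nfully-DDA proteins:", list(complete.index))
```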
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust, feature-extraction-based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while preserving watermark imperceptibility, compared to other existing methods.
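A minimal sketch of DC-DM for a single coefficient, following the classic Chen-Wornell distortion-compensated QIM construction (an assumption; the paper's embedding around DAISY feature points is not reproduced): quantize with a bit-dependent dither, then add back a fraction of the quantization error.

```python
import numpy as np

def dcdm_embed(x, bit, delta=8.0, alpha=0.8, dither=0.0):
    d = dither + bit * delta / 2.0                # per-bit dither shifts the lattice
    q = delta * np.round((x + d) / delta) - d     # dithered quantizer
    return x + alpha * (q - x)                    # keep a (1-alpha) compensation term

def dcdm_detect(y, delta=8.0, dither=0.0):
    errs = []
    for bit in (0, 1):
        d = dither + bit * delta / 2.0
        q = delta * np.round((y + d) / delta) - d
        errs.append(abs(y - q))
    return int(np.argmin(errs))                   # nearest lattice decides the bit

y = dcdm_embed(37.3, bit=1)
print(y, dcdm_detect(y + np.random.normal(scale=1.0)))  # survives mild noise
```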
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated the benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to design IMPT plans for five patients: two with prostate cancer, one with skull base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which tight dose limits were hard to meet under most uncertainty scenarios. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull base cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
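A minimal sketch of a minmax-style robust LP for spot weights, assuming random dose-influence matrices D[s] per uncertainty scenario and a pure deviation objective; clinical models add organ-at-risk and DVH-type constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_vox, n_spots = 40, 15
D = [rng.uniform(0, 1, (n_vox, n_spots)) for _ in range(3)]  # 3 scenarios
d_hat = np.full(n_vox, 60.0)                                 # prescription

# variables z = [w (spot weights), t (worst-case deviation)]; minimize t
c = np.zeros(n_spots + 1); c[-1] = 1.0
A_ub, b_ub = [], []
for Ds in D:                                  # enforce |Ds w - d_hat| <= t
    A_ub.append(np.hstack([Ds, -np.ones((n_vox, 1))]));  b_ub.append(d_hat)
    A_ub.append(np.hstack([-Ds, -np.ones((n_vox, 1))])); b_ub.append(-d_hat)
res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.hstack(b_ub),
              bounds=[(0, None)] * (n_spots + 1))
print(res.status, res.x[-1])                  # 0 = optimal; worst-case deviation
```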
O’Grady, Shannon P.; Valenzuela, Luciano O.; Remien, Christopher H.; Enright, Lindsey E.; Jorgensen, Matthew J.; Kaplan, Jay R.; Wagner, Janice D.; Cerling, Thure E.; Ehleringer, James R.
2012-01-01
The stable isotopic composition of drinking water, diet, and atmospheric oxygen influences the isotopic composition of body water (2H/1H, 18O/16O, expressed as δ2H and δ18O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ2H and δ18O values of body water and a second model to predict the δ2H and δ18O values of hair. In contrast to previous human-based studies, the use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ2H and δ18O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ2H and δ18O values of gut water and the 18O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (αow). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved by letting the isotopic composition of gut water approach that of body water. Additionally, the value of αow was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. PMID:22553163
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
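A minimal sketch of the voxelwise idea, with statsmodels' Tukey-biweight M-estimator standing in (an assumption) for the paper's robust-regression machinery: a few outlier subjects move the OLS slope but barely affect the robust one.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
structural = rng.normal(size=50)                  # e.g., voxel GM density
functional = 0.5 * structural + rng.normal(scale=0.3, size=50)
functional[:3] += 5.0                             # mis-registration outliers

X = sm.add_constant(structural)
ols = sm.OLS(functional, X).fit()
rlm = sm.RLM(functional, X, M=sm.robust.norms.TukeyBiweight()).fit()
print("OLS slope:", ols.params[1], " robust slope:", rlm.params[1])
```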
No longer simply a Practice-based Research Network (PBRN): health improvement networks.
Williams, Robert L; Rhyne, Robert L
2011-01-01
While primary care Practice-based Research Networks are best known for their original research purpose, evidence accumulating over the last several years demonstrates the broader value of these collaborations. Studies have demonstrated their role in quality improvement and practice change, in continuing professional education, in clinician retention in medically underserved areas, and in facilitating the transition of primary care organization. A role in informing and facilitating health policy development is also suggested. Taking into account this more robust potential, we propose a new title, the Health Improvement Network, and a new vision for Practice-based Research Networks.
Information Hiding: an Annotated Bibliography
1999-04-13
parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, 'Watermarking algorithm ...authors present a block-based watermarking algorithm for digital images. The DCT of the block is increased by a certain value. Quality control is ... includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use ...
Incomplete augmented Lagrangian preconditioner for steady incompressible Navier-Stokes equations.
Tan, Ning-Bo; Huang, Ting-Zhu; Hu, Ze-Jun
2013-01-01
An incomplete augmented Lagrangian preconditioner for the steady incompressible Navier-Stokes equations, discretized by stable finite elements, is proposed. The eigenvalues of the preconditioned matrix are analyzed. Numerical experiments show that the proposed incomplete augmented Lagrangian-based preconditioner is very robust and performs quite well with either the Picard or the Newton linearization over a wide range of viscosity values on both uniform and stretched grids.
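For orientation, a sketch of the augmented Lagrangian construction in its standard Benzi-Olshanskii form (an assumption; the paper's "incomplete" variant replaces the (1,1) solve with a cheaper approximation):

```latex
% Augmented Lagrangian (AL) preconditioning of the discrete saddle-point system.
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad
A_{\gamma} = A + \gamma B^{T} W^{-1} B,
\qquad
P_{\gamma} =
\begin{pmatrix} A_{\gamma} & B^{T} \\ 0 & \widehat{S} \end{pmatrix},
\quad
\widehat{S}^{-1} = -(\nu + \gamma)\, M_p^{-1}.
\]
% W is typically the pressure mass matrix M_p; since Bu = 0, the augmentation
% leaves the solution unchanged, and larger gamma improves the Schur-complement
% approximation at the cost of a harder (1,1)-block solve.
```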
2014-01-01
Background: The continuing increase of pharmaceutical expenditure calls for new approaches to the pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that may help health systems cope with rising pharmaceutical expenditure. Aim: To assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and explore the integrative framework. Methods: A probabilistic Markov chain Monte Carlo model was created to simulate the progression of advanced renal cell cancer, comparing sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official price lists and clinical guidelines from the Cyprus Ministry of Health were used for cost calculation. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. Results: The value-based price of sorafenib was found to be significantly lower than its current reference price. Conclusion: The feasibility of value-based pricing is documented, and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability in the price are its main advantages, which have to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience adversities in establishing and sustaining the essential structures for this scheme. PMID:24910539
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background: Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results: We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average false discovery rate less than 1%. Conclusion: Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
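A minimal sketch in the spirit of ROUT, assuming a Cauchy (Lorentzian) loss for the robust first fit and a Benjamini-Hochberg test on standardized residuals; the published method uses an adaptive robust fit and an RSDR-based outlier test.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = 3.0 * np.exp(-0.4 * x) + rng.normal(scale=0.1, size=x.size)
y[[5, 30]] += 2.0                                   # plant outliers

model = lambda p, x: p[0] * np.exp(-p[1] * x)
res = least_squares(lambda p: model(p, x) - y, x0=[1.0, 1.0], loss="cauchy")
r = model(res.x, x) - y
scale = 1.4826 * np.median(np.abs(r))               # robust residual scale
p = 2 * stats.norm.sf(np.abs(r) / scale)            # per-point p-values

# Benjamini-Hochberg at FDR q = 1%: flag the smallest p-values as outliers
order = np.argsort(p); q = 0.01
passed = np.nonzero(p[order] <= q * np.arange(1, p.size + 1) / p.size)[0]
keep = np.ones_like(y, bool)
if passed.size:
    keep[order[: passed[-1] + 1]] = False
final = least_squares(lambda p_: model(p_, x[keep]) - y[keep], x0=res.x)
print("outliers removed:", np.nonzero(~keep)[0], "fit:", final.x)
```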
Juang, Chia-Feng; Hsu, Chia-Hung
2009-12-01
This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q-values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
Pathological Bases for a Robust Application of Cancer Molecular Classification
Diaz-Cano, Salvador J.
2015-01-01
Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed between cells through mechanisms of intercellular transfer of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411
Simulated discharge trends indicate robustness of hydrological models in a changing climate
NASA Astrophysics Data System (ADS)
Addor, Nans; Nikolova, Silviya; Seibert, Jan
2016-04-01
Assessing the robustness of hydrological models under contrasting climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models have revealed that assuming transposability of parameter values under changing climatic conditions can lead to significant biases in discharge simulations. This raises the question of whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2 km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess possible trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty. We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported by others previously, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.
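A minimal sketch of the split-window experiment, with a toy one-parameter model and scipy's differential evolution standing in (both assumptions) for HBV and the genetic-algorithm calibration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import kendalltau

def model(params, forcing):
    k, = params
    return k * forcing                        # toy "hydrological model"

def calibrate(forcing, discharge):
    loss = lambda p: np.mean((model(p, forcing) - discharge) ** 2)
    return differential_evolution(loss, bounds=[(0.0, 5.0)], seed=0).x

rng = np.random.default_rng(0)
years = np.arange(1970, 2010)
forcing = rng.gamma(2.0, 1.0, size=(years.size, 365))
discharge = 0.6 * forcing + rng.normal(scale=0.05, size=forcing.shape)

# calibrate on successive 5-year windows, then test the parameter for a trend
params = [calibrate(forcing[i:i+5].ravel(), discharge[i:i+5].ravel())[0]
          for i in range(0, years.size, 5)]
tau, pval = kendalltau(range(len(params)), params)
print("window parameters:", np.round(params, 3), " trend p-value:", round(pval, 3))
```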
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is used as the cover medium, and the watermark image is converted into gray scale. Both are then transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in the LL2 sub-band. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian noise, Poisson noise, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784, and 0.889782, respectively. The watermark image can be detected and extracted.
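A minimal sketch of the LL2 embedding step, assuming additive modification of the singular values with strength alpha (the paper's exact embedding rule is not reproduced); requires numpy and PyWavelets.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, (256, 256))     # stand-in for one RGB channel
wm = rng.uniform(0, 255, (64, 64))          # grayscale watermark

# two-level DWT: coeffs[0] is the level-2 approximation (LL2)
coeffs = pywt.wavedec2(cover, "haar", level=2)
LL2 = coeffs[0]

U, S, Vt = np.linalg.svd(LL2, full_matrices=False)
Uw, Sw, Vwt = np.linalg.svd(wm, full_matrices=False)
alpha = 0.05
S_marked = S + alpha * Sw                   # embed in the singular values

coeffs[0] = U @ np.diag(S_marked) @ Vt
watermarked = pywt.waverec2(coeffs, "haar")
print("RMSE distortion:", np.sqrt(np.mean((watermarked - cover) ** 2)))
```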
Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath
2010-03-01
This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, and low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. To enable interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal, the third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms. To enable topological analysis and efficient validation, the final step estimates vessel centerlines using a ray casting and vote accumulation algorithm. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel/face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.
SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesteruk, M; Riesterer, O; Veit-Haibach, P
2016-06-15
Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) were used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for three perfusion maps: blood volume, blood flow and mean transit time. The variability of texture parameters was tested with respect to non-standardizable perfusion computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). The parameter with the highest ICC in each correlated group of parameters (inter-parameter Spearman correlations) was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer with regard to potentially non-standardizable factors after the correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p=0.03) with a concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p<0.1). The new model showed a higher concordance index than the tumor volume model (CI=0.68). Conclusion: The preselection of variables in the robustness study allowed building a predictive radiomics-based model of tumor control in HNC despite a small patient cohort. This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich and by a research grant from Merck (Schweiz) AG.
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been proposed for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust χ² statistic and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
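A minimal sketch of the maximum-entropy step for a single ordinal variable, assuming 5 categories scored 1..5 and skewness/kurtosis as equality constraints; the paper's procedures extend this to multivariate distributions via minimum cross-entropy.

```python
import numpy as np
from scipy.optimize import minimize

scores = np.arange(1, 6, dtype=float)
target_skew, target_kurt = 0.9, 3.0               # desired shape (illustrative)

def moments(p):
    mu = p @ scores
    sd = np.sqrt(p @ (scores - mu) ** 2)
    skew = p @ ((scores - mu) / sd) ** 3
    kurt = p @ ((scores - mu) / sd) ** 4
    return skew, kurt

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))          # minimize -H(p)

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: moments(p)[0] - target_skew},
        {"type": "eq", "fun": lambda p: moments(p)[1] - target_kurt}]
res = minimize(neg_entropy, x0=np.full(5, 0.2), constraints=cons,
               bounds=[(1e-9, 1.0)] * 5, method="SLSQP")
print("max-entropy pmf:", np.round(res.x, 4), " moments:", moments(res.x))
```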
Lee, Sook-Kyung; Cheng, Nancy; Hull-Ryde, Emily; Potempa, Marc; Schiffer, Celia A; Janzen, William; Swanstrom, Ronald
2013-07-23
The matrix/capsid (MA/CA) processing site in the HIV-1 Gag precursor is likely the most sensitive target to inhibit HIV-1 replication. We have previously shown that modest incomplete processing at the site leads to a complete loss of virion infectivity. In the study presented here, a sensitive assay based on fluorescence polarization that can monitor cleavage at the MA/CA site in the context of the folded protein substrate is described. The substrate, an MA/CA fusion protein, was labeled with the fluorescein-based FlAsH (fluorescein arsenical hairpin) reagent that binds to a tetracysteine motif (CCGPCC) that was introduced within the N-terminal domain of CA. By limiting the size of CA and increasing the size of MA (with an N-terminal GST fusion), we were able to measure significant differences in polarization values as a function of HIV-1 protease cleavage. The sensitivity of the assay was tested in the presence of increasing amounts of an HIV-1 protease inhibitor, which resulted in a gradual decrease in the fluorescence polarization values, demonstrating that the assay is sensitive in discerning changes in protease processing. The high-throughput screening assay validation in 384-well plates showed that the assay is reproducible and robust, with an average Z' value of 0.79 and average coefficient of variation values of <3%. The robustness and reproducibility of the assay were further validated using the LOPAC1280 compound library, demonstrating that the assay provides a sensitive high-throughput screening platform that can be used with large compound libraries for identifying novel maturation inhibitors targeting the MA/CA site of the HIV-1 Gag polyprotein.
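A minimal sketch of the Z'-factor computation used to validate such HTS assays (the Zhang et al. 1999 definition; the control values below are synthetic stand-ins for the polarization readouts).

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(300.0, 8.0, 32)    # mP values, uncleaved control wells
neg = rng.normal(120.0, 7.0, 32)    # mP values, fully cleaved control wells

# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
z_prime = 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
cv = 100.0 * pos.std(ddof=1) / pos.mean()
print(f"Z' = {z_prime:.2f}, CV = {cv:.1f}%")
```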
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two dominant gradient orientations, corresponding to the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is then applied to the image, keeping gradients whose orientations agree with the dominant orientation. The contours of the resulting binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared the dune detection algorithms with manually digitized dune crest lines, achieving true positive rates of 0.57-0.99 and false positive rates of 0.30-0.67, indicating that our approach is generally robust.
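A minimal sketch of the gradient-orientation thresholding step, assuming plain numpy gradients, a magnitude-weighted orientation histogram, and a synthetic striped image; crest-line contour extraction and morphometrics are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.sin(np.linspace(0, 12 * np.pi, 256))[None, :] * np.ones((256, 1))
img += 0.1 * rng.standard_normal((256, 256))      # synthetic "dune" image

gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # orientation in [0, 180)

hist, edges = np.histogram(ang, bins=36, weights=mag)
dominant = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

tol = 15.0                 # keep strong pixels near the dominant orientation
mask = (np.abs(ang - dominant) < tol) & (mag > np.percentile(mag, 75))
print("dominant orientation:", round(dominant, 1), "deg; crest pixels:", mask.sum())
```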
Analysis of gene network robustness based on saturated fixed point attractors
2014-01-01
The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
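A minimal sketch of the saturated fixed-point condition, assuming a steep-sigmoid model x' = tanh(beta*W x) - x, for which a state s in {-1,+1}^n is a saturated equilibrium in the steep limit iff sign(W s) = s; the example weights are arbitrary.

```python
import numpy as np
from itertools import product

W = np.array([[ 1.5,  1.2, -0.7],
              [-0.5,  1.5,  1.0],
              [ 0.9, -1.1,  1.5]])       # example regulatory weights

def is_saturated_equilibrium(W, s):
    # in the steep-sigmoid limit tanh(beta*W s) -> sign(W s)
    return np.all(np.sign(W @ s) == s)

attractors = [s for s in product((-1, 1), repeat=3)
              if is_saturated_equilibrium(W, np.array(s))]
print("saturated equilibria:", attractors)
```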
Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T
2015-01-01
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
Poder, Joel; Whitaker, May
2016-01-01
Purpose: Inverse planning simulated annealing (IPSA) optimized brachytherapy treatment plans are characterized by large isolated dwell times at the first or last dwell position of each catheter. The potential of catheter shifts relative to the target and organs at risk in these plans may lead to a more significant change in delivered dose to the volumes of interest relative to plans with more uniform dwell times. Material and methods: This study aims to determine if the Nucletron Oncentra dwell time deviation constraint (DTDC) parameter can be optimized to improve the robustness of high-dose-rate (HDR) prostate brachytherapy plans to catheter displacements. A set of 10 clinically acceptable prostate plans were re-optimized with a DTDC parameter of 0 and 0.4. For each plan, catheter displacements of 3, 7, and 14 mm were retrospectively applied and the changes in dose volume histogram (DVH) indices and conformity indices analyzed. Results: The robustness of clinically acceptable prostate plans to catheter displacements in the caudal direction was found to be dependent on the DTDC parameter. A DTDC value of 0 improves the robustness of planning target volume (PTV) coverage to catheter displacements, whereas a DTDC value of 0.4 improves the robustness of the plans to changes in hotspots. Conclusions: The results indicate that if used in conjunction with a pre-treatment catheter displacement correction protocol and a tolerance of 3 mm, a DTDC value of 0.4 may produce clinically superior plans. However, the effect of the DTDC parameter on plan robustness was not observed to be as strong as initially suspected. PMID:27504129
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
A Public Health Grid (PHGrid): Architecture and value proposition for 21st century public health.
Savel, T; Hall, K; Lee, B; McMullin, V; Miles, M; Stinn, J; White, P; Washington, D; Boyd, T; Lenert, L
2010-07-01
This manuscript describes the value of and a proposal for a high-level architectural framework for a Public Health Grid (PHGrid), which the authors feel can afford the public health community a robust technology infrastructure for secure and timely data, information, and knowledge exchange, not only within the public health domain, but between public health and the overall health care system. The CDC facilitated multiple Proof-of-Concept (PoC) projects, leveraging an open-source-based software development methodology, to test four hypotheses with regard to this high-level framework. The outcomes of the four PoCs, in combination with the Federal Enterprise Architecture Framework (FEAF) and the newly emerging Federal Segment Architecture Methodology (FSAM), were used to develop and refine a high-level architectural framework for a Public Health Grid infrastructure. The authors were successful in documenting a robust high-level architectural framework for a PHGrid. The documentation generated provided the level of granularity needed to validate the proposal, and included examples of both information standards and services to be implemented. Both the results of the PoCs and feedback from selected public health partners were used to develop the granular documentation. A robust, cohesive high-level architectural framework for a Public Health Grid (PHGrid) has been successfully articulated, with its feasibility demonstrated via multiple PoCs. In order to successfully implement this framework, the authors recommend moving forward with a three-pronged approach focusing on interoperability and standards, streamlining the PHGrid infrastructure, and developing robust, high-impact public health services. Published by Elsevier Ireland Ltd.
Nurhuda, M; Rouf, A
2017-09-01
The paper presents a method for the simultaneous computation of eigenfunctions and eigenvalues of the stationary Schrödinger equation on a grid, without imposing a boundary-value condition. The method is based on a filter operator, which selects the eigenfunction from a wave packet at a rate comparable to a δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained using other methods for various boundary-value conditions. The method is found to be robust, accurate, and reliable. The prospect of the filter method for simulation of the Schrödinger equation in higher-dimensional spaces is also highlighted.
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.
Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark
2008-04-01
To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
Belli, Maria Luisa; Mori, Martina; Broggi, Sara; Cattaneo, Giovanni Mauro; Bettinardi, Valentino; Dell'Oca, Italo; Fallanca, Federico; Passoni, Paolo; Vanoli, Emilia Giovanna; Calandrino, Riccardo; Di Muzio, Nadia; Picchio, Maria; Fiorino, Claudio
2018-05-01
To investigate the robustness of PET radiomic features (RF) against tumour delineation uncertainty in two clinically relevant situations. Twenty-five head-and-neck (HN) and 25 pancreatic cancer patients previously treated with 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT)-based planning optimization were considered. Seven FDG-based contours were delineated for tumour (T) and positive lymph nodes (N, for HN patients only) following manual (2 observers), semi-automatic (based on the SUV maximum gradient: PET_Edge) and automatic (40%, 50%, 60%, 70% SUV_max thresholds) methods. Seventy-three RF (14 first-order and 59 higher-order) were extracted using the CGITA software (v.1.4). The impact of delineation on volume agreement and RF was assessed by DICE and intra-class correlation coefficients (ICC). A large disagreement between the manual and SUV_max methods was found for thresholds ≥50%. Inter-observer variability showed median DICE values between 0.81 (HN-T) and 0.73 (pancreas). Volumes defined by PET_Edge were more consistent with the manual ones than those from SUV40%. Regarding RF, 19%/19%/47% of the features showed ICC < 0.80 between observers for HN-N/HN-T/pancreas, mostly in the voxel-alignment matrix and intensity-size zone matrix families. RF with ICC < 0.80 against manual delineation (taking the worst value) increased to 44%/36%/61% for PET_Edge and to 69%/53%/75% for SUV40%. About 80%/50% of the RF were consistent between observers for HN/pancreas patients. PET_Edge was sufficiently robust against manual delineation, while SUV40% showed a worse performance. This result suggests the possibility of replacing manual with semi-automatic delineation of HN and pancreas tumours in studies including PET radiomic analyses. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
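A minimal sketch of a two-way random, absolute-agreement ICC, i.e. ICC(2,1) in the Shrout-Fleiss notation (an assumption about the exact ICC form used), with rows as patients and columns as delineation methods.

```python
import numpy as np

def icc2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)      # rows (patients)
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)      # columns (methods)
    ms_e = (np.sum((Y - Y.mean(axis=1, keepdims=True)
                      - Y.mean(axis=0, keepdims=True) + grand) ** 2)
            / ((n - 1) * (k - 1)))                                  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(0)
true_vals = rng.normal(10, 2, size=(25, 1))
feature = true_vals + rng.normal(0, 0.4, size=(25, 3))   # 3 delineations
print("ICC(2,1) =", round(icc2_1(feature), 3))
```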
Reproduction of Meloidogyne chitwoodi on Popcorn Cultivars
Cardwell, D. M.; Ingham, R. E.
1997-01-01
Popcorn cultivars were evaluated in field and greenhouse tests for resistance to the Columbia root-knot nematode, Meloidogyne chitwoodi, as potential resistant crops in potato rotations. A nematode reproductive factor (Rf) was calculated for each cultivar. Reproductive factor values also were compared on a relative basis, as percentages of the Rf of a susceptible field corn standard, Pioneer 3578. Popcorn cultivars W206 and Robust 33-77 consistently supported low population densities of M. chitwoodi in repeated tests. However, WOC 9508 had the greatest resistance in any of the field tests, with an Rf value of 0.04. Cultivars with a mean field and greenhouse Rf value less than 50% of the value for Pioneer 3578 were WOC 9508 (8%), WOC 9554 (13%), W206 (15%), WOX 9512 (23%), Robust 33-77 (30%), Robust 20-70 (38%), WOC 9510 (41%), and WOC 9504 (42%). If these cultivars were used in rotation, M. chitwoodi population densities at the end of the popcorn season would be between 58% and 92% less than if Pioneer 3578 were grown. In greenhouse tests, WOX 9511, WOX 9528, WOC 9556, and WOX 9531 also had low Rf values (7-46% that of Pioneer 3578), but field testing of these cultivars is needed. PMID:19274265
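A minimal sketch of the reproductive-factor bookkeeping, assuming the standard nematology definition Rf = Pf/Pi (final over initial population density); the numbers below are illustrative, not the paper's data.

```python
pi = 1000.0                        # initial M. chitwoodi per unit of soil
pf = {"Pioneer 3578": 5200.0,      # hypothetical final densities
      "WOC 9508": 40.0,
      "W206": 780.0}

rf_std = pf["Pioneer 3578"] / pi   # susceptible standard's Rf
for cultivar, final in pf.items():
    rf = final / pi
    print(f"{cultivar:14s} Rf = {rf:5.2f}  ({100 * rf / rf_std:5.1f}% of standard)")
```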
NASA Astrophysics Data System (ADS)
Bai, Wen; Dai, Junwu; Zhou, Huimeng; Yang, Yongqiang; Ning, Xiaoqing
2017-10-01
Porcelain electrical equipment (PEE), such as current transformers, is critical to power supply systems, but its seismic performance during past earthquakes has not been satisfactory. This paper studies the seismic performance of two typical types of PEE and proposes a damping method for PEE based on multiple tuned mass dampers (MTMD). An MTMD damping device involving three mass units, named a triple tuned mass damper (TTMD), is designed and manufactured. Through shake table tests and finite element analysis, the dynamic characteristics of the PEE are studied and the effectiveness of the MTMD damping method is verified. The adverse influence of MTMD redundant mass on damping efficiency is studied and relevant equations are derived. MTMD robustness is verified through adjusting TTMD control frequencies. The damping effectiveness of TTMD when the peak ground acceleration (PGA) far exceeds the design value is studied. Both shake table tests and finite element analysis indicate that MTMD is effective and robust in attenuating PEE seismic responses. TTMD remains effective when the PGA far exceeds the design value and when control deviations are considered.
NASA Astrophysics Data System (ADS)
Perdana, T. A.; Suprijanto, J.; Pribadi, R.; Collet, C. R.; Bailly, D.
2018-03-01
Ecosystem resilience is the capacity of an ecosystem to tolerate disturbance without collapsing into a qualitatively different state controlled by a different set of processes; a robust ecosystem can withstand shocks and rebuild itself when necessary. This study aims to identify the use values and non-use values of the current economy, to calculate the total economic value of mangrove resources, and to provide suggestions and recommendations based on observations in Timbulsloko, Sayung, Demak. The method used is economic valuation with the total economic value technique; sampling was non-probability, purposive sampling. The results showed that the direct use value of mangroves was exploited by fishermen, fish pond farmers, branjang catchers, oyster catchers, trap makers, shop owners, grilled fish makers and shrimp chip makers. Indirect use value derived from the mangroves' function as breakwater, beach belt and hybrid engineering. Existence value was not less than 10% of the direct use value. The total economic value was Rp. 6,361,430,639/year, or about Rp. 202,335,580.1/ha/year. Community awareness of the mangrove ecosystem and of its breakwater role needs to be improved in order to reduce disaster risk and to develop ecotourism in the area.
NASA Astrophysics Data System (ADS)
Dong, Zhengcheng; Fang, Yanjun; Tian, Meng; Kong, Zhengmin
The hierarchical structure known as the k-core is common in complex networks: a real network has successive layers from the 1-core (the peripheral layer) to the km-core (the core layer). Nodes within the core layer have been shown to be the most influential spreaders, but little work exists on how the depth of the k-core hierarchy (the value of km) affects robustness against cascading failures, in particular for interdependent networks. First, following preferential attachment, a novel method is proposed to generate a scale-free network with successive k-core layers (the KCBA network), and the KCBA network is validated as more realistic than the traditional BA network. Then, with KCBA interdependent networks, the effect of the depth of the k-core hierarchy is investigated. Considering a load-based model, the loss of capacity on nodes, rather than the final number of functional nodes, is adopted to quantify robustness. We conduct two attacking strategies: RO-attack (randomly remove only one node) and RF-attack (randomly remove a fraction of nodes). Results show that the robustness of KCBA networks not only depends on the depth of the k-core hierarchy, but is also slightly influenced by the initial load. Under RO-attack, networks with fewer k-core layers are more robust when the initial load is small. Under RF-attack, robustness improves with small km, but the improvement weakens as the initial load increases. In short, the lower the depth, the more robust the network.
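As an illustration of the k-core machinery this abstract relies on, the following minimal Python sketch (using networkx; the network size is arbitrary) builds a plain Barabasi-Albert graph and measures the depth of its k-core hierarchy. It shows only the BA baseline the authors compare against; their KCBA generator is not reproduced here.

```python
# Minimal sketch: BA graph + k-core depth (the k_m of the abstract).
import networkx as nx

def kcore_depth(g: nx.Graph) -> int:
    """Depth of the k-core hierarchy: the largest k with a non-empty k-core."""
    core = nx.core_number(g)              # node -> core index
    return max(core.values())

g = nx.barabasi_albert_graph(n=1000, m=3, seed=0)
km = kcore_depth(g)
print(f"k_m (core depth) = {km}")
# Nodes in the innermost layer are the candidate influential spreaders.
inner = [v for v, k in nx.core_number(g).items() if k == km]
print(f"{len(inner)} nodes in the {km}-core layer")
```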
Nonlinear control of magnetic bearings
NASA Technical Reports Server (NTRS)
Pradeep, A. K.; Gurumoorthy, R.
1994-01-01
In this paper we present a variety of nonlinear controllers for the magnetic bearing that ensure both stability and robustness. We utilize techniques of discontinuous control to design novel control laws for the magnetic bearing. In particular, we present sliding mode controllers, time-optimal controllers, winding-algorithm-based controllers, nested switching controllers, fractional controllers, and synchronous switching controllers for the magnetic bearing. We show existence of solutions to systems governed by discontinuous control laws, and prove stability and robustness of the chosen control laws in a rigorous setting. We design sliding mode observers for the magnetic bearing and prove convergence of the state estimates to their true values. We present simulation results of the performance of the magnetic bearing subject to the aforementioned control laws, and conclude with comments on design.
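To make the sliding-mode idea concrete, here is a minimal simulation sketch for a linearized one-axis magnetic bearing; the plant constants, gains, and the simple Euler integration are illustrative assumptions, not the authors' designs.

```python
# Sliding-mode sketch for m*x'' = ks*x + ki*u (open-loop unstable negative
# stiffness). All parameter values are hypothetical.
import numpy as np

m, ks, ki = 1.0, 2.0e4, 100.0      # mass [kg], neg. stiffness [N/m], gain [N/A]
lam, K = 200.0, 50.0               # sliding-surface slope, switching gain [A]
dt, T = 1e-5, 0.02

x, v = 1e-3, 0.0                   # initial rotor offset of 1 mm
for _ in range(int(T / dt)):
    s = v + lam * x                # sliding surface s = xdot + lam*x
    u = -K * np.sign(s)            # discontinuous control current
    a = (ks * x + ki * u) / m
    v += a * dt
    x += v * dt
print(f"final |x| = {abs(x):.2e} m")   # should be driven near zero
```

Once the state reaches the surface s = 0, the offset decays at the rate lam regardless of the destabilizing stiffness, which is the robustness property the abstract refers to.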
Taguchi experimental design to determine the taste quality characteristic of candied carrot
NASA Astrophysics Data System (ADS)
Ekawati, Y.; Hapsari, A. A.
2018-03-01
Robust parameter design is used to design products that are robust to noise factors, so that product performance hits the target and delivers better quality. In the process of designing and developing an innovative candied carrot product, robust parameter design is carried out using the Taguchi method to determine an optimal quality design, based on the process and the composition of product ingredients that accord with consumer needs and requirements. According to the identification of consumer needs in previous research, the quality dimensions to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to taste. Organoleptic testing is used for the assessment, specifically hedonic testing, which rates the product according to consumer preferences. Data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process and composition of product ingredients. The optimal values are analyzed using confirmation experiments to verify that the proposed product matches consumer needs and requirements. The results of this research are the identification of the factors that affect product taste and the optimal quality of the product according to the Taguchi method.
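The signal-to-noise calculation at the heart of the Taguchi analysis is compact enough to show directly. The sketch below uses the larger-the-better form, which fits hedonic taste scores; the panel scores are made up for illustration.

```python
# Taguchi larger-the-better S/N ratio on hypothetical hedonic scores.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 * log10(mean(1 / y^2)); higher is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

trial_scores = {                       # made-up panel scores per Taguchi run
    "run_1": [6.2, 5.8, 6.5, 6.0],
    "run_2": [7.1, 6.9, 7.4, 7.0],
}
for run, scores in trial_scores.items():
    print(run, "mean =", np.mean(scores),
          "S/N =", round(sn_larger_is_better(scores), 2))
# The optimal level of each factor is the one maximizing the mean S/N ratio.
```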
Robust Damage-Mitigating Control of Aircraft for High Performance and Structural Durability
NASA Technical Reports Server (NTRS)
Caplin, Jeffrey; Ray, Asok; Joshi, Suresh M.
1999-01-01
This paper presents the concept and a design methodology for robust damage-mitigating control (DMC) of aircraft. The goal of DMC is to simultaneously achieve high performance and structural durability. The controller design procedure involves consideration of damage at critical points of the structure, as well as the performance requirements of the aircraft. An aeroelastic model of the wings has been formulated and is incorporated into a nonlinear rigid-body model of aircraft flight-dynamics. Robust damage-mitigating controllers are then designed using the H(infinity)-based structured singular value (mu) synthesis method based on a linearized model of the aircraft. In addition to penalizing the error between the ideal performance and the actual performance of the aircraft, frequency-dependent weights are placed on the strain amplitude at the root of each wing. Using each controller in turn, the control system is put through an identical sequence of maneuvers, and the resulting (varying amplitude cyclic) stress profiles are analyzed using a fatigue crack growth model that incorporates the effects of stress overload. Comparisons are made to determine the impact of different weights on the resulting fatigue crack damage in the wings. The results of simulation experiments show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.
Bakal, Tomas; Janata, Jiri; Sabova, Lenka; Grabic, Roman; Zlabek, Vladimir; Najmanova, Lucie
2018-06-16
A robust and widely applicable method for sampling aquatic microbial biofilm and for further sample processing is presented. The method is based on next-generation sequencing of the V4-V5 variable regions of the 16S rRNA gene and statistical analysis of the sequencing data, which can be used not only to investigate the taxonomic composition of biofilm bacterial consortia but also to assess aquatic ecosystem health. Five artificial materials commonly used for biofilm growth (glass, stainless steel, aluminum, polypropylene, polyethylene) were tested to determine the one giving the most robust and reproducible results. The effect of the sampler material on total microbial composition was not statistically significant; however, the non-plastic materials (glass, metal) gave more stable outputs without irregularities among sample parallels. The bias of the method is assessed with respect to the employment of a non-quantitative step (PCR amplification) to obtain quantitative results (relative abundance of identified taxa), an aspect often overlooked in ecological and medical studies. We document that sequencing a mixture of three merged primary PCR reactions for each sample, and evaluating median values from three technical replicates per sample, makes it possible to overcome this bias and gives robust, repeatable results that distinguish well among sampling localities and seasons.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; in particular, it outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
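For readers who want the flavor of such an online update, the sketch below implements one common incremental-gradient step on the Grassmannian (a GROUSE-style update with fully observed columns); the step size, ranks, and test data are illustrative assumptions, not the authors' implementation.

```python
# One GROUSE-style gradient step on the Grassmannian, tracking a rank-r
# subspace from streaming columns. Parameters are illustrative.
import numpy as np

def grouse_step(U, y, eta=0.1):
    """Update an orthonormal basis U (n x r) with one fully observed column y."""
    w = U.T @ y                        # coefficients of y in current subspace
    p = U @ w                          # projection onto the subspace
    r = y - p                          # residual (new information)
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                       # y already lies in the subspace
    sigma = rn * pn
    step = (np.cos(sigma * eta) - 1.0) * (p / pn) + np.sin(sigma * eta) * (r / rn)
    return U + np.outer(step, w / wn)  # a rotation: U stays orthonormal

rng = np.random.default_rng(0)
n, r = 64, 3
U = np.linalg.qr(rng.standard_normal((n, r)))[0]      # random initial basis
truth = np.linalg.qr(rng.standard_normal((n, r)))[0]  # unknown true subspace
for _ in range(2000):                  # stream noisy columns from `truth`
    y = truth @ rng.standard_normal(r) + 0.01 * rng.standard_normal(n)
    U = grouse_step(U, y)
err = np.linalg.norm(truth - U @ (U.T @ truth))       # residual subspace error
print(f"residual subspace error = {err:.3f}")
```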
Lee, Jae Won; Cho, Hye Jin; Chun, Jinsung; Kim, Kyeong Nam; Kim, Seongsu; Ahn, Chang Won; Kim, Ill Won; Kim, Ju-Young; Kim, Sang-Woo; Yang, Changduk; Baik, Jeong Min
2017-01-01
A robust nanogenerator based on poly(tert-butyl acrylate) (PtBA)-grafted polyvinylidene difluoride (PVDF) copolymers via dielectric constant control through an atom-transfer radical polymerization technique, which can markedly increase the output power, is demonstrated. The copolymer is mainly composed of α phases with enhanced dipole moments due to the π-bonding and polar characteristics of the ester functional groups in the PtBA, resulting in an approximately twofold increase in dielectric constant, supported by Kelvin probe force microscopy measurements. This increase in the dielectric constant significantly increases the density of the charges that can be accumulated on the copolymer during physical contact. The nanogenerator generates output signals of 105 V and 25 μA/cm2, a 20-fold enhancement in output power compared to a pristine PVDF-based nanogenerator, after tuning the surface potential using a poling method. The markedly enhanced output performance is stable and reliable in harsh mechanical environments owing to the high flexibility of the films. On the basis of these results, a much faster charging characteristic is also demonstrated. PMID:28560339
Incomplete Augmented Lagrangian Preconditioner for Steady Incompressible Navier-Stokes Equations
Tan, Ning-Bo; Huang, Ting-Zhu; Hu, Ze-Jun
2013-01-01
An incomplete augmented Lagrangian preconditioner for the steady incompressible Navier-Stokes equations discretized by stable finite elements is proposed, and the eigenvalues of the preconditioned matrix are analyzed. Numerical experiments show that the proposed preconditioner is very robust and performs quite well under both Picard and Newton linearization, over a wide range of viscosity values, on both uniform and stretched grids. PMID:24235888
A zwitterionic gel electrolyte for efficient solid-state supercapacitors
Peng, Xu; Liu, Huili; Yin, Qin; Wu, Junchi; Chen, Pengzuo; Zhang, Guangzhao; Liu, Guangming; Wu, Changzheng; Xie, Yi
2016-01-01
Gel electrolytes have attracted increasing attention for solid-state supercapacitors. An ideal gel electrolyte usually requires a combination of high ion migration rate, reasonable mechanical strength and robust water retention at the solid state to ensure excellent durability. Here we report a zwitterionic gel electrolyte that successfully combines the advantages of robust water retention and ion migration channels, manifesting in superior electrochemical performance. When applying the zwitterionic gel electrolyte, our graphene-based solid-state supercapacitor reaches a volume capacitance of 300.8 F cm−3 at 0.8 A cm−3, with only 14.9% capacitance loss as the current density increases from 0.8 to 20 A cm−3, representing, to the best of our knowledge, the best value among previously reported graphene-based solid-state supercapacitors. We anticipate that zwitterionic gels may be further developed as electrolytes for solid-state supercapacitors. PMID:27225484
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
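One common form of the seasonal Hodges-Lehmann step-trend estimator described above is the median of all between-period differences formed within each season and pooled across seasons. The sketch below assumes that form, on synthetic data; it is not the paper's Monte Carlo machinery.

```python
# Seasonal Hodges-Lehmann step-trend estimate: median of all pairwise
# (after - before) differences formed within each season, pooled.
import numpy as np

def seasonal_hodges_lehmann(before, after):
    """before/after: dict season -> array of observations in that period."""
    diffs = []
    for s in before:
        b, a = np.asarray(before[s]), np.asarray(after[s])
        diffs.extend((a[:, None] - b[None, :]).ravel())  # all pairs a_j - b_i
    return np.median(diffs)

rng = np.random.default_rng(1)
before = {m: 1.0 + 0.5 * rng.standard_normal(5) for m in range(12)}
after = {m: 1.3 + 0.5 * rng.standard_normal(5) for m in range(12)}  # +0.3 step
print(f"estimated step = {seasonal_hodges_lehmann(before, after):.3f}")
```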
Robust automatic measurement of 3D scanned models for the human body fat estimation.
Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo
2015-03-01
In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
Robust Control Design for Systems With Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, accurate identification of the needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm, compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was thus developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy, demonstrating much improved accuracy and robustness over the conventional method.
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients that takes account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to derive a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including the distance of confidence limits from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are grounded in knowledge of real and measurable biological responses in transitional and coastal waters.
2013-01-01
Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion in R. idaeus, which may help to identify deleterious alleles that are the basis of inbreeding depression in the species. PMID:23324311
Projection rule for complex-valued associative memory with large constant terms
NASA Astrophysics Data System (ADS)
Kitahara, Michimasa; Kobayashi, Masaki
Complex-valued associative memory (CAM) has an inherent rotation invariance. Rotation invariance produces many undesirable stable states and reduces the noise robustness of CAM. Constant terms can remove rotation invariance, but if the constant terms are too small, the invariance does not vanish. In this paper, we eliminate rotation invariance by introducing large constant terms to the complex-valued neurons; the constant terms must be made sufficiently large to improve noise robustness. We therefore introduce a parameter into the projection rule that controls the amplitudes of the constant terms. Our computer simulations show that large constant terms are indeed effective.
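A minimal sketch of a projection rule with an appended constant term follows; the K-state phasor neurons, the augmentation scheme, and the value of the amplitude parameter c are illustrative assumptions rather than the authors' exact formulation. Pinning the constant component at c each iteration is what breaks rotation invariance: a rotated pattern would carry a rotated constant term and is no longer stable.

```python
# Projection-rule complex-valued associative memory with a constant term.
import numpy as np

K, N, P, c = 8, 32, 4, 3.0            # phase states, neurons, patterns, amplitude
rng = np.random.default_rng(0)
phases = np.exp(2j * np.pi * rng.integers(0, K, size=(N, P)) / K)
X = np.vstack([phases, c * np.ones((1, P))])   # append constant component

W = X @ np.linalg.pinv(X)             # projection rule: W X = X exactly

def quantize(z):
    """Snap each complex value to the nearest of the K unit phasors."""
    k = np.round(np.angle(z) / (2 * np.pi / K)).astype(int) % K
    return np.exp(2j * np.pi * k / K)

x = X[:, 0].copy()
x[:5] = quantize(x[:5] * np.exp(1j))  # corrupt a few neurons by a phase rotation
for _ in range(20):                   # synchronous recall iterations
    x = np.concatenate([quantize((W @ x)[:N]), [c]])  # constant stays pinned at c
print("recalled pattern 0:", np.allclose(x[:N], X[:N, 0]))
```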
Improving particle beam acceleration in plasmas
NASA Astrophysics Data System (ADS)
C. de Sousa, M.; L. Caldas, I.
2018-04-01
The dynamics of wave-particle interactions in magnetized plasmas restricts the wave amplitude to moderate values for particle beam acceleration from rest energy. We analyze how a perturbing invariant robust barrier modifies the phase space of the system and enlarges the wave amplitude interval for particle acceleration. For low values of the wave amplitude, the acceleration becomes effective for particles with initial energy close to the rest energy. For higher values of the wave amplitude, the robust barrier controls chaos in the system and restores the acceleration process. We also determine the best position for the perturbing barrier in phase space in order to increase the final energy of the particles.
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations are the simple ones, whereas the multilayer perceptron neural network and the multiple imputation strategy of Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Detailed graphical and quantitative analysis indicates that although the computational methods, particularly EM-MCMC, are computationally expensive, they are preferable for imputing meteorological time series across the missingness periods considered, with respect to both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for performance evaluation of missing-data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
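Of the simple methods named above, the normal ratio (NR) estimate is easy to state: a missing value at the target station is the average of neighbor observations, each rescaled by the ratio of long-term station means. A small sketch with made-up numbers:

```python
# Normal-ratio imputation for a missing station value (synthetic example).
import numpy as np

def normal_ratio(target_mean, neighbor_means, neighbor_obs):
    """NR estimate: average of neighbor values rescaled to the target's mean."""
    ratios = target_mean / np.asarray(neighbor_means)
    return float(np.mean(ratios * np.asarray(neighbor_obs)))

# Hypothetical monthly precipitation (mm); means are long-term station means.
est = normal_ratio(target_mean=80.0,
                   neighbor_means=[60.0, 100.0, 90.0],
                   neighbor_obs=[45.0, 82.0, 70.0])
print(f"imputed value = {est:.1f} mm")
```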
Robust control of systems with real parameter uncertainty and unmodelled dynamics
NASA Technical Reports Server (NTRS)
Chang, Bor-Chin; Fischl, Robert
1991-01-01
During this research period we made significant progress in the four proposed areas: (1) design of robust controllers via H infinity optimization; (2) design of robust controllers via mixed H2/H infinity optimization; (3) M-delta structure and robust stability analysis for structured uncertainties; and (4) a study on controllability and observability of perturbed plants. It is now well known that the two-Riccati-equation solution to the H infinity control problem can be used to characterize all stabilizing optimal or suboptimal H infinity controllers if the optimal H infinity norm, or gamma (an upper bound on a suboptimal H infinity norm), is given. In this research, we discovered some useful properties of these H infinity Riccati solutions. The most prominent is that the spectral radius of the product of the two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms were developed to compute the optimal H infinity norm. We also set up a detailed procedure for applying H infinity theory to robust control system design. The desire to design controllers with H infinity robustness but H2 performance has recently led to the mixed H2/H infinity control problem formulation. The mixed H2/H infinity problem has drawn the attention of many investigators, but solutions are available only for special cases. We formulated a relatively realistic control problem, with an H2 performance index and an H infinity robustness constraint, as a more general mixed H2/H infinity problem. Although the optimal solution for this more general problem has not yet been found, we proposed a design approach in which the available design parameters can be chosen to influence both robustness and performance. For a large class of linear time-invariant systems with real parametric perturbations, the coefficient vector of the characteristic polynomial is a multilinear function of the real parameter vector. Based on this multilinear mapping, together with recent developments for polytopic polynomials and the parameter-domain partition technique, we proposed an iterative algorithm for computing the real structured singular value.
Reasoning with Vectors: A Continuous Model for Fast Robust Inference.
Widdows, Dominic; Cohen, Trevor
2015-10-01
This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of the model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Owing to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtaining the resonant frequencies in a measurement. The technique, based on the Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, and hence a level of confidence in the results. Of specific interest is the fact that the Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
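The core of the Matrix Pencil method is short enough to sketch: form a Hankel matrix from the samples, truncate its SVD at the model order, and read the poles off a shifted eigenproblem. The sketch below assumes a real-valued signal and synthetic parameters; it is a textbook formulation, not the paper's implementation.

```python
# Matrix Pencil estimation of damped resonant frequencies (poles).
import numpy as np

def matrix_pencil(y, M, dt):
    """Return complex poles s_k from samples y, model order M, sample step dt."""
    N = len(y)
    L = N // 3                                  # pencil parameter (typical N/3)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel matrix
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:M].conj().T                         # dominant M right singular vectors
    V0, V1 = V[:-1, :], V[1:, :]                # shift-invariance sub-blocks
    z = np.linalg.eigvals(np.linalg.pinv(V0) @ V1)
    return np.log(z) / dt                       # s_k = ln(z_k) / dt

dt = 1e-3
t = np.arange(300) * dt
y = (np.exp(-5 * t) * np.cos(2 * np.pi * 40 * t)
     + 0.5 * np.exp(-8 * t) * np.cos(2 * np.pi * 90 * t))
poles = matrix_pencil(y, M=4, dt=dt)
print(np.sort_complex(poles))   # expect -5 +/- j*2*pi*40 and -8 +/- j*2*pi*90
```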
Hasanvand, Hamed; Mozafari, Babak; Arvan, Mohammad R; Amraee, Turaj
2015-11-01
This paper addresses the application of a static VAR compensator (SVC) to improve the damping of interarea oscillations. The optimal location and size of the SVC are defined using bifurcation and modal analysis to satisfy its primary application, and the best input signal for the damping controller is selected using Hankel singular values and right-half-plane zeros. The proposed approach aims to design a robust PI controller based on interval plants and Kharitonov's theorem. The objective is to determine the stability region that attains robust stability, the desired phase margin, gain margin, and bandwidth; the intersection of the resulting stability regions yields the set of admissible (kp, ki) parameters. In addition, an optimal multiobjective design of the PI controller using a particle swarm optimization (PSO) algorithm is presented. The effectiveness of the suggested controllers in damping local and interarea oscillation modes of a multimachine power system, over a wide range of loading conditions and system configurations, is confirmed through eigenvalue analysis and nonlinear time domain simulation.
Resolving Recent Plant Radiations: Power and Robustness of Genotyping-by-Sequencing.
Fernández-Mazuecos, Mario; Mellers, Greg; Vigalondo, Beatriz; Sáez, Llorenç; Vargas, Pablo; Glover, Beverley J
2018-03-01
Disentangling species boundaries and phylogenetic relationships within recent evolutionary radiations is a challenge due to the poor morphological differentiation and low genetic divergence between species, frequently accompanied by phenotypic convergence, interspecific gene flow and incomplete lineage sorting. Here we employed a genotyping-by-sequencing (GBS) approach, in combination with morphometric analyses, to investigate a small western Mediterranean clade in the flowering plant genus Linaria that radiated in the Quaternary. After confirming the morphological and genetic distinctness of eight species, we evaluated the relative performances of concatenation and coalescent methods to resolve phylogenetic relationships. Specifically, we focused on assessing the robustness of both approaches to variations in the parameter used to estimate sequence homology (clustering threshold). Concatenation analyses suffered from strong systematic bias, as revealed by the high statistical support for multiple alternative topologies depending on clustering threshold values. By contrast, topologies produced by two coalescent-based methods (NJst, SVDquartets) were robust to variations in the clustering threshold. Reticulate evolution may partly explain incongruences between NJst, SVDquartets and concatenated trees. Integration of morphometric and coalescent-based phylogenetic results revealed (i) extensive morphological divergence associated with recent splits between geographically close or sympatric sister species and (ii) morphological convergence in geographically disjunct species. These patterns are particularly true for floral traits related to pollinator specialization, including nectar spur length, tube width and corolla color, suggesting pollinator-driven diversification. Given its relatively simple and inexpensive implementation, GBS is a promising technique for the phylogenetic and systematic study of recent radiations, but care must be taken to evaluate the robustness of results to variation of data assembly parameters.
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side-effects of the methods used for EEG phase calculation, especially during low analytic-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used to extract EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates, using rather simple phase-related features with standard K-nearest neighbors and random forest classifiers, over a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise); the significance of these improvements was confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
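A rough sketch of the ensemble idea follows: perturb the band edges (and hence the zero-pole loci) of a narrow-band filter, take the analytic-signal phase for each perturbed filter, and average the resulting unit phasors. The sampling rate, band, jitter size, and test signal are all illustrative assumptions.

```python
# Ensemble-averaged narrow-band phase via randomly perturbed band-pass filters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

def robust_phase(x, band=(8.0, 12.0), n_mc=50, jitter=0.2):
    """Circular mean of instantaneous phases across perturbed filters."""
    rng = np.random.default_rng(1)
    phasors = np.zeros(x.size, dtype=complex)
    for _ in range(n_mc):
        lo = band[0] + jitter * rng.standard_normal()   # perturbed band edges,
        hi = band[1] + jitter * rng.standard_normal()   # i.e. shifted zero-pole loci
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        phasors += np.exp(1j * np.angle(hilbert(filtfilt(b, a, x))))
    return np.angle(phasors)              # ensemble-averaged phase estimate

phase = robust_phase(eeg)
print(phase[:5])
```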
Robustness analysis of elastoplastic structure subjected to double impulse
NASA Astrophysics Data System (ADS)
Kanno, Yoshihiro; Takewaki, Izuru
2016-11-01
The double impulse has extensively been used to evaluate the critical response of an elastoplastic structure against a pulse-type input, including near-fault earthquake ground motions. In this paper, we propose a robustness assessment method for elastoplastic single-degree-of-freedom structures subjected to the double impulse input. Uncertainties in the initial velocity of the input, as well as the natural frequency and the strength of the structure, are considered. As fundamental properties of the structural robustness, we show monotonicity of the robustness measure with respect to the natural frequency. In contrast, we show that robustness is not necessarily improved even if the structural strength is increased. Moreover, the robustness preference between two structures with different values of structural strength can possibly reverse when the performance requirement is changed.
Robust Skull-Stripping Segmentation Based on Irrational Mask for Magnetic Resonance Brain Images.
Moldovanu, Simona; Moraru, Luminița; Biswas, Anjan
2015-12-01
This paper proposes a new method for simple, efficient, and robust removal of non-brain tissue in MR images, based on an irrational mask for filtering within a binary morphological operation framework. The proposed skull-stripping segmentation is based on two irrational 3 × 3 and 5 × 5 masks whose weights sum to the transcendental number π, as provided by the Gregory-Leibniz infinite series; this allows a lower rate of useful-pixel loss. The proposed method has been tested in two ways. First, it has been validated as a binary method by comparison with Otsu's, Sauvola's, Niblack's, and Bernsen's binary methods. Secondly, its accuracy has been verified against three state-of-the-art skull-stripping methods: the graph cuts method, the method based on the Chan-Vese active contour model, and simplex mesh and histogram analysis skull stripping. Performance has been assessed using Dice scores, overlap and extra fractions, and sensitivity and specificity, with the gold standard provided by two expert neurologists. The method has been tested and validated on 26 image series containing 216 images from two publicly available databases, the Whole Brain Atlas and the Internet Brain Segmentation Repository, which include a highly variable sample population (with respect to age, sex, and healthy/diseased status). The approach performs accurately on both standardized databases; its main advantages are robustness and speed.
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available.
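For orientation, the nonparametric calculation applied when n ≥ 40 amounts to taking the central 95% of the ranked values. The sketch below does this on simulated data, with a simple bootstrap for the 90% confidence intervals (a stand-in; the macro set's own CI method is not reproduced here).

```python
# Nonparametric reference interval (2.5th/97.5th percentiles) with bootstrap CIs.
import numpy as np

def reference_interval(x, lo=2.5, hi=97.5, n_boot=2000, seed=0):
    x = np.sort(np.asarray(x, dtype=float))
    limits = np.percentile(x, [lo, hi])
    rng = np.random.default_rng(seed)
    boots = np.array([np.percentile(rng.choice(x, size=x.size, replace=True),
                                    [lo, hi]) for _ in range(n_boot)])
    ci = np.percentile(boots, [5, 95], axis=0)      # 90% CI on each limit
    return limits, ci

x = np.random.default_rng(2).normal(50.0, 8.0, size=120)  # simulated analyte
lims, ci = reference_interval(x)
print("reference limits:", np.round(lims, 1))
print("90% CIs (lower, upper):", np.round(ci.T, 1))
```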
Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT), named total variation regularized discrete algebraic reconstruction technique (TVR-DART), with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing, steering the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART provides more accurate reconstructions than existing algorithms under noisy conditions, from a small number of projection images and/or a small angular range. Furthermore, the new algorithm requires less effort in parameter tuning than the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
Direction of Coupling from Phases of Interacting Oscillators: A Permutation Information Approach
NASA Astrophysics Data System (ADS)
Bahraminasab, A.; Ghasemi, F.; Stefanovska, A.; McClintock, P. V. E.; Kantz, H.
2008-02-01
We introduce a directionality index for a time series based on a comparison of neighboring values. It can distinguish unidirectional from bidirectional coupling, as well as reveal and quantify asymmetry in bidirectional coupling. It is tested on a numerical model of coupled van der Pol oscillators, and applied to cardiorespiratory data from healthy subjects. There is no need for preprocessing and fine-tuning the parameters, which makes the method very simple, computationally fast and robust.
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2011-01-01
Deformable image registration is a complex optimization algorithm whose goal is to model a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performance of four stopping criteria and six stopping-value computation strategies for a log-domain deformable registration. The stopping criteria tested are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the metric minimum over the preceding four to six iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
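The winning strategy translates directly into a reusable check. The function below is one reading of it (the metric is recorded once per iteration, lower is better); the parameter names are ours, not the authors'. A registration loop would append its metric each iteration and break once the function returns True.

```python
# One reading of the recommended stopping rule: compare the best metric over
# the last `recent` iterations against the best over the `near`-th to `far`-th
# previous iterations; stop when no improvement is seen.
def should_stop(history, recent=3, far=6, near=4):
    """history: per-iteration metric values (lower is better)."""
    if len(history) < far:
        return False                            # not enough iterations yet
    current = min(history[-recent:])            # best of the last 3 iterations
    reference = min(history[-far:-near + 1])    # best of iterations t-6 .. t-4
    return current >= reference

# Example: a metric that stalls after iteration 5 triggers the stop.
metrics = [10.0, 8.0, 6.5, 5.9, 5.5, 5.4, 5.41, 5.40, 5.42]
print(should_stop(metrics))   # True: last three do not beat iterations 4-6 back
```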
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear, and engine components may have a shorter life than expected for the same reasons. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of a component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design are included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial, so efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention; applications of that methodology to robust design are included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer, wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer; this problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and maximization of the trailing edge wedge angle, with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
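Since the lecture leans on differential evolution, a bare-bones DE/rand/1/bin sketch for a single objective is given below; the population size, gains, and test function are arbitrary choices, and the multi-objective and neural-network machinery of the companion notes is not reproduced.

```python
# Minimal DE/rand/1/bin optimizer (single objective, box constraints).
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, lo.size)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)       # DE/rand/1 mutation
            cross = rng.random(lo.size) < CR                # binomial crossover
            cross[rng.integers(lo.size)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            fc = f(trial)
            if fc <= cost[i]:                               # greedy selection
                pop[i], cost[i] = trial, fc
    return pop[np.argmin(cost)], cost.min()

best, val = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 4)
print(best, val)   # converges toward the origin for this sphere test function
```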
Statistics-based model for prediction of chemical biosynthesis yield from Saccharomyces cerevisiae
2011-01-01
Background The robustness of Saccharomyces cerevisiae in facilitating industrial-scale production of ethanol extends its utilization as a platform to synthesize other metabolites. Metabolic engineering strategies, typically via pathway overexpression and deletion, continue to play a key role in optimizing the conversion efficiency of substrates into the desired products. However, chemical production titer or yield remains difficult to predict based on reaction stoichiometry and mass balance. We sampled a large space of data on chemical production from S. cerevisiae and developed a statistics-based model to calculate production yield using input variables that represent the number of enzymatic steps in the key biosynthetic pathway of interest, metabolic modifications, cultivation modes, nutrition, and oxygen availability. Results Based on production data for about 40 chemicals produced from S. cerevisiae, and the metabolic engineering methods, nutrient supplementation, and fermentation conditions described therein, we generated mathematical models with numerical and categorical variables to predict production yield. Statistically, the models showed that: 1. Chemical production from central metabolic precursors decreased exponentially with increasing number of enzymatic steps for biosynthesis (>30% loss of yield per enzymatic step, P-value = 0); 2. Categorical variables of gene overexpression and knockout improved product yield two- to four-fold (P-value < 0.1); 3. Addition of notable amounts of intermediate precursors or nutrients improved product yield by over five-fold (P-value < 0.05); 4. Performing the cultivation in a well-controlled bioreactor enhanced product yield three-fold (P-value < 0.05); 5. The contribution of oxygen to product yield was not statistically significant. Yield calculations for various chemicals using the linear model were in fairly good agreement with experimental values. The model generally underestimated ethanol production compared to other chemicals, which supports the notion that the metabolism of S. cerevisiae has historically evolved for robust alcohol fermentation. Conclusions We generated simple mathematical models for first-order approximation of chemical production yield from S. cerevisiae. These linear models provide empirical insights into the effects of strain engineering and cultivation conditions on biosynthetic efficiency, and may not only provide guidelines for metabolic engineers to synthesize desired products but also be useful for comparing biosynthesis performance across research papers. PMID:21689458
Tolerancing aspheres based on manufacturing knowledge
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Kokot, S.; Fuchs, U.
2017-10-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these all rely on statistics, which usually means several hundred or several thousand systems for reliable results. Employing these methods for small batch sizes is therefore unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between given tolerance values and measured data sets, and the resulting probability distributions of the measured data were analyzed, aiming for a robust optical tolerancing process.
Tolerancing aspheres based on manufacturing statistics
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Möhl, A.; Fuchs, U.
2017-11-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these all rely on statistics, which usually means several hundred or several thousand systems for reliable results. Employing these methods for small batch sizes is therefore unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between given tolerance values and measured data sets, and the resulting probability distributions of the measured data were analyzed, aiming for a robust optical tolerancing process.
Castells, Xavier; Acebes, Juan José; Majós, Carles; Boluda, Susana; Julià-Sapé, Margarida; Candiota, Ana Paula; Ariño, Joaquín; Barceló, Anna; Arús, Carles
2015-01-01
Glioblastoma (Gb) is one of the most deadly tumors. Its molecular subtypes are yet to be fully characterized while the attendant efforts for personalized medicine need to be intensified in relation to glioblastoma diagnosis, treatment, and prognosis. Several molecular signatures based on gene expression microarrays were reported, but the use of microarrays for routine clinical practice is challenged by attendant economic costs. Several authors have proposed discriminant equations based on RT-PCR. Still, the discriminant threshold is often incompletely described, which makes proper validation difficult. In a previous work, we have reported two Gb subtypes based on the expression levels of four genes: CHI3L1, LDHA, LGALS1, and IGFBP3. One Gb subtype presented with low expression of the four genes mentioned, and of MGMT in a large portion of the patients (with anticipated high methylation of its promoter), and mutated IDH1. Here, we evaluate the robustness of the equations fitted with these genes using RT-PCR values in a set of 64 cases and importantly, define an unequivocal discriminant threshold with a view to prognostic implications. We developed two approaches to generate the discriminant equations: 1) using the expression level of the four genes mentioned above, and 2) using those genes displaying the highest correlation with survival among the aforementioned four ones, plus MGMT, as an attempt to further reduce the number of genes. The ease of equations' applicability, reduction in cost for raw data, and robustness in terms of resampling-based classification accuracy warrant further evaluation of these equations to discern Gb tumor biopsy heterogeneity at molecular level, diagnose potential malignancy, and prognosis of individual patients with glioblastomas.
Using instrumental (CIE and reflectance) measures to predict consumers' acceptance of beef colour.
Holman, Benjamin W B; van de Ven, Remy J; Mao, Yanwei; Coombs, Cassius E O; Hopkins, David L
2017-05-01
We aimed to establish colorimetric thresholds based upon the capacity of instrumental measures to predict consumer satisfaction with beef colour. A web-based survey was used to distribute standardised photographs of beef M. longissimus lumborum with known colorimetrics (L*, a*, b*, hue, chroma, ratio of reflectance at 630 nm and 580 nm, and estimated deoxymyoglobin, oxymyoglobin and metmyoglobin concentrations) for scrutiny. Consumer demographics and the perceived importance of colour to beef value were also evaluated. It was found that a* provided the simplest and most robust prediction of beef colour acceptability. Beef colour was considered acceptable (with 95% acceptance) when a* values were equal to or above 14.5. Demographic effects on this threshold were negligible, but consumer nationality and gender did contribute to variation in the relative importance of colour to beef value. These results provide future beef colour studies with context to interpret objective colour measures in terms of consumer acceptance and market appeal. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Omar, Saad; Omeragic, Dzevat
2018-04-01
The concept of apparent thicknesses is introduced for the inversion-based, multicasing evaluation interpretation workflow using multifrequency and multispacing electromagnetic measurements. A thickness value is assigned to each measurement, enabling the development of two new preprocessing algorithms to remove casing collar artifacts. First, long-spacing apparent thicknesses are used to remove, from the pipe sections, artifacts ("ghosts") caused by the transmitter crossing a casing collar or corrosion. Second, a collar identification, localization, and assignment algorithm is developed to enable robust inversion in collar sections. Last, casing eccentering can also be identified on the basis of opposite deviation of short-spacing phase and magnitude apparent thicknesses from the nominal value. The proposed workflow can handle an arbitrary number of nested casings and has been validated on synthetic and field data.
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated the (U-Th)/He systematics of moderately shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on a detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, a diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock estimates slightly higher than or consistent with those from Tpost-shock, with the discrepancy between the two methods increasing as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion (less than 5 mm) and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
New development of the image matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Feng, Zhao
2018-04-01
To study image matching algorithms, their four elements are described, i.e., similarity measurement, feature space, search space and search strategy. Four common indexes for evaluating image matching algorithms are also described, i.e., matching accuracy, matching efficiency, robustness and universality. This paper then describes the principles of image matching based on gray values, on features, on frequency domain analysis, on neural networks and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design and algorithm selection in practice.
DART: a practical reconstruction algorithm for discrete tomography.
Batenburg, Kees Joost; Sijbers, Jan
2011-09-01
In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
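To make the segmentation-and-update loop concrete, the following is a minimal DART-style sketch in Python, not the authors' implementation: art_update is a hypothetical placeholder for any algebraic (ART/SIRT) step restricted to the free pixels, and the neighbourhood size, smoothing, and iteration count are illustrative assumptions.

```python
# Simplified DART-style iteration (illustrative sketch, not the published code).
import numpy as np
from scipy import ndimage

def segment_to_gray_values(x, grays):
    """Map each pixel to the nearest known composition gray value."""
    grays = np.asarray(grays)
    idx = np.argmin(np.abs(x[..., None] - grays), axis=-1)
    return grays[idx]

def dart(x, grays, art_update, A, b, n_iters=10, sigma=0.8):
    for _ in range(n_iters):
        s = segment_to_gray_values(x, grays)
        # Boundary pixels: those whose 3x3 neighbourhood is not uniform.
        uniform = ndimage.maximum_filter(s, 3) == ndimage.minimum_filter(s, 3)
        free = ~uniform                  # only these are refined algebraically
        x = np.where(free, x, s)         # fix interior pixels to their gray values
        x = art_update(x, A, b, free)    # hypothetical ART/SIRT step on free pixels
        # Mild smoothing (DART smooths boundary pixels; applied globally here).
        x = ndimage.gaussian_filter(x, sigma)
    return segment_to_gray_values(x, grays)
```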
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each team performed and saved the registration jointly, and the two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one CT-MRI case and never in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm, for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
Assessing climate change-robustness of protected area management plans-The case of Germany.
Geyer, Juliane; Kreft, Stefan; Jeltsch, Florian; Ibisch, Pierre L
2017-01-01
Protected areas are arguably the most important instrument of biodiversity conservation. To keep them fit under climate change, their management needs to be adapted to address related direct and indirect changes. In our study we focus on the adaptation of conservation management planning, evaluating management plans of 60 protected areas throughout Germany with regard to their climate change-robustness. First, climate change-robust conservation management was defined using 11 principles and 44 criteria, following an approach similar to sustainability standards. We then evaluated the performance of individual management plans against the climate change-robustness framework. We found that the climate change-robustness of protected areas hardly exceeded 50 percent of the potential performance, with most plans ranking in the lower quarter. Most Natura 2000 protected areas, established under conservation legislation of the European Union, belong to the sites with especially poor performance, with lower values in smaller areas. In general, the plans showed very different rates of accordance with the individual principles, but similarly low intensity. Principles with generally higher performance values included holistic knowledge management, public accountability and acceptance, as well as systemic and strategic coherence. Deficiencies were connected to dealing with the future and uncertainty. Lastly, we recommend the presented principles and criteria as essential guideposts that can be used as a checklist for working towards more climate change-robust planning.
A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision
NASA Astrophysics Data System (ADS)
Tsai, Yuan-Yu
2016-03-01
Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
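A minimal sketch of the share-generation step as described, assuming each point has already been encoded into integers in [0, p - 1]: the encoded integers serve as the coefficients of the sharing polynomial, and participant j receives f(j) mod p. The space-subdivision encoding, cover-model construction, and embedding stages are omitted, and the modulus and participant count below are illustrative.

```python
# Share generation for one encoded point (illustrative sketch).
P = 8191  # an assumed prime modulus p

def make_shares(coeffs, n_participants, p=P):
    """coeffs: integers in [0, p-1] encoding one secret point; they become
    the polynomial coefficients. Any len(coeffs) shares determine the
    polynomial (e.g., by solving a Vandermonde system mod p)."""
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation modulo p
            acc = (acc * x + c) % p
        return acc
    return [(j, f(j)) for j in range(1, n_participants + 1)]

# Example: a point quantized to (1200, 345, 6789), shared among 5 participants.
shares = make_shares([1200, 345, 6789], n_participants=5)
```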
NASA Astrophysics Data System (ADS)
Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.
2018-02-01
Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
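The screening step described, comparing feature distributions across field strengths with a two-sample Kolmogorov-Smirnov test and a Holm-Bonferroni correction, can be sketched as follows; the feature dictionaries and significance level are assumed inputs, and this is not the authors' code.

```python
# Flag features whose 1.5 T and 3.0 T distributions show no detected shift.
import numpy as np
from scipy.stats import ks_2samp

def candidate_robust_features(values_15T, values_30T, alpha=0.05):
    """values_*: dict mapping feature name -> 1-D array of per-lesion values."""
    names = sorted(values_15T)
    pvals = np.array([ks_2samp(values_15T[n], values_30T[n]).pvalue
                      for n in names])
    order = np.argsort(pvals)              # Holm: test smallest p-value first
    m = len(pvals)
    keep_null = np.ones(m, dtype=bool)     # True -> no detected distribution shift
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank): # reject null: distributions differ
            keep_null[i] = False
        else:
            break                          # Holm stops at the first acceptance
    return [n for n, ok in zip(names, keep_null) if ok]
```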
A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control
NASA Astrophysics Data System (ADS)
Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu
This study proposes a robust cooperative control method that combines reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. On the other hand, a robust control system guarantees stability and robustness, but requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning that controls continuous-valued actions with a minimal amount of computation, and traditional robust control, namely H∞ control. The proposed method was compared with the conventional control method (the actor-critic method alone) through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. 
Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
NASA Astrophysics Data System (ADS)
Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan
2017-11-01
Despite being an integral part of risk-based quality improvement efforts, studies on improving the selection of corrective action priorities using the FMEA technique are still limited in the literature, and none considers the robustness and risk of competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions by considering their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.
2014-01-01
Background A number of microtubule disassembly blocking agents and inhibitors of tubulin polymerization have been of great interest in anti-cancer therapy, some of them even entering clinical trials. One such class of tubulin assembly inhibitors is the arylthioindole derivatives, which cause effective microtubule disorganization responsible for cell apoptosis by interacting with the colchicine binding site of the β-unit of tubulin close to the interface with the α-unit. We modelled the human tubulin β-unit (chain D) protein and performed docking studies to elucidate the detailed binding modes associated with their inhibition. The activity-enhancing structural aspects were evaluated using a fragment-based Group QSAR (G-QSAR) model, which was validated statistically to determine its robustness. A combinatorial library was generated keeping the arylthioindole moiety as the template, and the activities of its members were predicted. Results The G-QSAR model obtained was statistically significant, with an r2 value of 0.85, a cross-validated correlation coefficient q2 value of 0.71 and a pred_r2 (r2 value for the test set) value of 0.89. A high F-test value of 65.76 suggests robustness of the model. Screening of the combinatorial library on the basis of predicted activity values yielded two compounds, HPI (predicted pIC50 = 6.042) and MSI (predicted pIC50 = 6.001), whose interactions with the D chain of the modelled human tubulin protein were evaluated in detail. A toxicity evaluation showed MSI to be less toxic than HPI. Conclusions The study provides an insight into the crucial structural requirements and the necessary chemical substitutions required for the arylthioindole moiety to exhibit enhanced inhibitory activity against human tubulin. The two reported compounds HPI and MSI showed promising anticancer activities and thus can be considered potent leads against cancer. The toxicity evaluation of these compounds suggests that MSI is a promising therapeutic candidate. This study provides another stepping stone in the direction of evaluating tubulin inhibition and microtubule disassembly as viable targets for the development of novel therapeutics against cancer. PMID:25521775
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
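A sketch of the multilevel Otsu between-class-variance objective that such methods maximize, where hist is a normalized 256-bin grayscale histogram; the flower pollination search itself is replaced here by plain random sampling to keep the example short, so only the objective function reflects the paper.

```python
# Multilevel Otsu objective plus a naive random-search driver (illustrative).
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance for thresholds partitioning the range [0, 255]."""
    bounds = [0] + sorted(thresholds) + [256]
    levels = np.arange(256)
    mu_total = (hist * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = hist[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (hist[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def random_search(hist, k, n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    best, best_val = None, -1.0
    for _ in range(n_trials):
        t = sorted(rng.choice(np.arange(1, 256), size=k, replace=False))
        v = otsu_objective(hist, t)
        if v > best_val:
            best, best_val = t, v
    return best, best_val
```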
Robust dynamic inversion controller design and analysis (using the X-38 vehicle as a case study)
NASA Astrophysics Data System (ADS)
Ito, Daigoro
A new way to approach robust Dynamic Inversion controller synthesis is addressed in this paper. A Linear Quadratic Gaussian outer-loop controller improves the robustness of a Dynamic Inversion inner-loop controller in the presence of uncertainties. Desired dynamics are given by the dynamic compensator, which shapes the loop. The selected dynamics are based on both performance and stability robustness requirements. These requirements are straightforwardly formulated as frequency-dependent singular value bounds during synthesis of the controller. Performance and robustness of the designed controller is tested using a worst case time domain quadratic index, which is a simple but effective way to measure robustness due to parameter variation. Using this approach, a lateral-directional controller for the X-38 vehicle is designed and its robustness to parameter variations and disturbances is analyzed. It is found that if full state measurements are available, the performance of the designed lateral-directional control system, measured by the chosen cost function, improves by approximately a factor of four. Also, it is found that the designed system is stable up to a parametric variation of 1.65 standard deviation with the set of uncertainty considered. The system robustness is determined to be highly sensitive to the dihedral derivative and the roll damping coefficients. The controller analysis is extended to the nonlinear system where both control input displacements and rates are bounded. In this case, the considered nonlinear system is stable up to 48.1° in bank angle and 1.59° in sideslip angle variations, indicating it is more sensitive to variations in sideslip angle than in bank angle. This nonlinear approach is further extended for the actuator failure mode analysis. The results suggest that the designed system maintains a high level of stability in the event of aileron failure. However, only 35% or less of the original stability range is maintained for the rudder failure case. Overall, this combination of controller synthesis and robustness criteria compares well with the mu-synthesis technique. It also is readily accessible to the practicing engineer, in terms of understanding and use.
NASA Astrophysics Data System (ADS)
Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun
2017-03-01
To further enhance small targets and suppress heavy clutter simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which provides a more accurate background estimation and eliminates almost all the salient residuals in the decomposed target image. In addition, considering that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only further suppresses undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments demonstrate that the proposed model offers a significant improvement over nine competitive methods in terms of both clutter suppression performance and convergence rate.
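The core proximal step implied by partial sum minimization of singular values can be sketched as follows: the largest r singular values (the assumed background rank) pass through untouched and only the tail is soft-thresholded. The surrounding inexact augmented Lagrange multiplier solver and the patch-image construction are omitted.

```python
# Partial singular value thresholding (illustrative sketch of the key operator).
import numpy as np

def partial_svt(X, r, tau):
    """Keep the r largest singular values; soft-threshold the remainder by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = s.copy()
    s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail
    return (U * s_shrunk) @ Vt                     # rescale columns of U, recompose
```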
A H∞/μ solution for microvibration mitigation in satellites: A case study
NASA Astrophysics Data System (ADS)
Preda, Valentin; Cieslak, Jérôme; Henry, David; Bennani, Samir; Falcoz, Alexandre
2017-07-01
The research work presented in this paper focuses on the development of a mixed active-passive microvibration mitigation solution capable of attenuating the transmitted vibrations generated by reaction wheels to a satellite structure. A representative benchmark provided by the European Space Agency (ESA) and Airbus Defence and Space, serves as a support for testing the proposed solution. The paper also covers modeling and design issues as well as a deep analysis of the solution within the H∞ / μ setting. Especially, an uncertainty modeling strategy is proposed to extract a Linear Fractional Transformation (LFT) model. Insight is naturally provided into various dynamical interactions between the plant elements such as bearing and isolator flexibility, gyroscopic effects, actuator dynamics and feedback-loop delays. The design of the mitigation solution is formulated into the H∞ / μ framework leading to a robust H∞ control strategy capable of achieving exemplary active attenuation performance across a wide range of reaction wheel speeds. A systematic analysis procedure based on the structured singular value μ is used to assess and demonstrate the robust stability and robust performance of the microvibration mitigation strategy. The proposed analysis method is also shown to be a powerful and reliable solution to identify worst-case scenarios without relying on traditional Monte Carlo campaigns. Time domain simulations based on a nonlinear high-fidelity industrial simulator are included as a validation step.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and the SVD is applied to each sub-block; features are then extracted, reducing the dimension of each block by using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression, or their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
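A condensed sketch of the block-feature pipeline just described; the block size, sub-block size, and quantization step are illustrative assumptions, and the final shift-frequency matching stage is only indicated in a comment.

```python
# Overlapping-block DCT + SVD features, lexicographically sorted (sketch).
import numpy as np
from scipy.fft import dctn

def block_features(img, B=8, q=16):
    """img: 2-D grayscale array. Returns sorted (feature, position) pairs;
    duplicates end up adjacent and would then be matched by counting how
    often the same positional shift recurs (omitted here)."""
    H, W = img.shape
    feats = []
    for i in range(H - B + 1):
        for j in range(W - B + 1):
            block = img[i:i + B, j:j + B].astype(float)
            coeff = np.round(dctn(block, norm='ortho') / q)   # quantized 2D-DCT
            # Largest singular value of each non-overlapping 4x4 sub-block.
            f = tuple(np.linalg.svd(coeff[r:r + 4, c:c + 4],
                                    compute_uv=False)[0]
                      for r in (0, 4) for c in (0, 4))
            feats.append((f, (i, j)))
    feats.sort()                 # lexicographic sort brings duplicates together
    return feats
```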
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth-intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
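In the spirit of the method, a robust fit of an exponential depth decay to per-section mean intensities can be sketched with Huber-type iteratively reweighted least squares, which automatically down-weights sections that deviate from the decay model; the constants are conventional choices, not the authors' values.

```python
# Robust log-linear fit of I(z) ~ a * exp(-b z) to section means (sketch).
import numpy as np

def robust_decay_fit(section_means, n_iter=20, c=1.345):
    z = np.arange(len(section_means), dtype=float)
    y = np.log(np.asarray(section_means, dtype=float))   # log-linear decay model
    X = np.column_stack([np.ones_like(z), z])
    w = np.ones_like(y)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale via MAD
        u = np.abs(r) / (c * s)
        w = np.sqrt(np.where(u <= 1.0, 1.0, 1.0 / u))    # Huber weights (sqrt for WLS)
    a, b = np.exp(beta[0]), -beta[1]
    correction = np.exp(b * z)        # multiply section z by correction[z]
    return a, b, correction
```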
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
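A minimal sketch of the Huber-weighted variant, using statsmodels' RLM as a stand-in M-estimator; the article's two-level regression structure and the Student's-t maximum likelihood method are not reproduced here.

```python
# Huber M-estimation for a moderated regression y ~ x + m + x*m (sketch).
import numpy as np
import statsmodels.api as sm

def moderation_rlm(y, x, m):
    X = sm.add_constant(np.column_stack([x, m, x * m]))  # interaction term x*m
    fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
    # The last coefficient (on x*m) carries the moderation effect.
    return fit.params, fit.bse
```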
NASA Astrophysics Data System (ADS)
Xu, Liangfei; Hu, Junming; Cheng, Siliang; Fang, Chuan; Li, Jianqiu; Ouyang, Minggao; Lehnert, Werner
2017-07-01
A scheme for designing a second-order sliding-mode (SOSM) observer that estimates critical internal states on the cathode side of a polymer electrolyte membrane (PEM) fuel cell system is presented. A nonlinear, isothermal dynamic model for the cathode side and a membrane electrolyte assembly is first described. A nonlinear observer topology based on an SOSM algorithm is then introduced, and the equations for the SOSM observer are deduced. Online calculation of the inverse matrix produces numerical errors, so a modified matrix is introduced to eliminate their negative effects on the observer. The simulation results indicate that the SOSM observer performs well for the gas partial pressures and air stoichiometry. The estimation results follow the simulated values in the model with relative errors within ±2% at steady state; large errors occur during fast dynamic processes (<1 s). Moreover, the nonlinear observer shows good robustness against variations in the initial values of the internal states, but less robustness against variations in system parameters. The partial pressures are more sensitive than the air stoichiometry to system parameters. Finally, the order of the effects of parameter uncertainties on the estimation results is outlined and analyzed.
[Surface electromyography signal classification using gray system theory].
Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai
2004-12-01
A new method based on gray correlation was introduced to improve the identification rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computational cost and requires fewer training samples.
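The decision rule can be sketched as follows, assuming the wavelet/SVD front end has already produced feature vectors; the simplified gray relational coefficient below takes its extrema per comparison rather than over all sequences, which is a common shortcut.

```python
# Classification by maximum gray relational grade (illustrative sketch).
import numpy as np

def gray_relational_grade(ref, comp, rho=0.5):
    """Mean gray relational coefficient between a template and a test vector."""
    d = np.abs(np.asarray(ref, float) - np.asarray(comp, float))
    coeff = (d.min() + rho * d.max()) / (d + rho * d.max())
    return coeff.mean()

def classify(feature, templates):
    """templates: dict mapping class label -> template feature vector."""
    grades = {c: gray_relational_grade(t, feature) for c, t in templates.items()}
    return max(grades, key=grades.get)   # decide by maximum grade
```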
Zhang, Yi; Askenazi, Manor; Jiang, Jingrui; Luckey, C. John; Griffin, James D.; Marto, Jarrod A.
2010-01-01
The FLT3 receptor tyrosine kinase plays an important role in normal hematopoietic development and leukemogenesis. Point mutations within the activation loop and in-frame tandem duplications of the juxtamembrane domain represent the most frequent molecular abnormalities observed in acute myeloid leukemia. Interestingly these gain-of-function mutations correlate with different clinical outcomes, suggesting that signals from constitutive FLT3 mutants activate different downstream targets. In principle, mass spectrometry offers a powerful means to quantify protein phosphorylation and identify signaling events associated with constitutively active kinases or other oncogenic events. However, regulation of individual phosphorylation sites presents a challenging case for proteomics studies whereby quantification is based on individual peptides rather than an average across different peptides derived from the same protein. Here we describe a robust experimental framework and associated error model for iTRAQ-based quantification on an Orbitrap mass spectrometer that relates variance of peptide ratios to mass spectral peak height and provides for assignment of p value, q value, and confidence interval to every peptide identification, all based on routine measurements, obviating the need for detailed characterization of individual ion peaks. Moreover, we demonstrate that our model is stable over time and can be applied in a manner directly analogous to ubiquitously used external mass calibration routines. Application of our error model to quantitative proteomics data for FLT3 signaling provides evidence that phosphorylation of tyrosine phosphatase SHP1 abrogates the transformative potential, but not overall kinase activity, of FLT3-D835Y in acute myeloid leukemia. PMID:20019052
Multiple template-based image matching using alpha-rooted quaternion phase correlation
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2010-04-01
In computer vision applications, image matching performed on quality-degraded imagery is difficult due to image content distortion and noise effects. State-of-the-art keypoint-based matchers, such as SURF and SIFT, work very well on clean imagery. However, performance can degrade significantly in the presence of high noise and clutter levels. Noise and clutter cause the formation of false features which can degrade recognition performance. To address this problem, we previously developed an extension to the classical amplitude and phase correlation forms which provides improved robustness and tolerance to image geometric misalignments and noise. This extension, called Alpha-Rooted Phase Correlation (ARPC), combines Fourier-domain alpha-rooting enhancement with classical phase correlation. ARPC provides tunable parameters to control the alpha-rooting enhancement. These parameter values can be optimized to trade off between high narrow correlation peaks and more robust, wider, but smaller peaks. Previously, we applied ARPC in the radon transform domain for logo image recognition in the presence of rotational image misalignments. In this paper, we extend ARPC to incorporate quaternion Fourier transforms, thereby creating Alpha-Rooted Quaternion Phase Correlation (ARQPC). We apply ARQPC to the logo image recognition problem. We use ARQPC to perform multiple-reference logo template matching by representing multiple same-class reference templates as quaternion-valued images. We generate recognition performance results on publicly available logo imagery, and compare recognition results to those generated from standard approaches. We show that small deviations in reference templates of same-class logos can lead to improved recognition performance using the joint matching inherent in ARQPC.
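A sketch of scalar alpha-rooted phase correlation; ARQPC replaces the complex FFT with a quaternion Fourier transform over quaternion-valued template stacks but follows the same structure. The alpha value below is illustrative.

```python
# Scalar alpha-rooted phase correlation (sketch; the quaternion form is analogous).
import numpy as np

def arpc(f, g, alpha=0.85, eps=1e-12):
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    mag = np.abs(cross) + eps
    # Alpha-rooting attenuates magnitude while preserving phase; alpha -> 0
    # gives pure phase correlation, alpha = 1 plain cross-correlation.
    surface = np.fft.ifft2((mag ** alpha) * cross / mag).real
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return surface, peak   # peak location estimates the translation
```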
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.
2006-08-01
Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities and it can be a symptom of a coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The later is a well-known line and shape detection method very robust against incomplete data and noise. The rationale of using the HT in this context is because it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results with synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, where it is shown that the results obtained here are accurate and robust to noise degradations. Real cardiac magnetic resonance images have been tested and evaluated with the current method.
Rank-based pooling for deep convolutional neural networks.
Shi, Zenglin; Ye, Yangdong; Wu, Yunpeng
2016-11-01
Pooling is a key mechanism in deep convolutional neural networks (CNNs) which helps to achieve translation invariance. Numerous studies, both empirical and theoretical, show that pooling consistently boosts the performance of CNNs. The conventional pooling methods operate on activation values. In this work, we alternatively propose rank-based pooling. It is derived from the observation that the ranking list is invariant under changes of activation values in a pooling region, and thus rank-based pooling may achieve more robust performance. In addition, the reasonable usage of ranks can avoid the scale problems encountered by value-based methods. The novel pooling mechanism can be regarded as an instance of weighted pooling, where a weighted sum of activations is used to generate the pooling output. This pooling mechanism can be realized as rank-based average pooling (RAP), rank-based weighted pooling (RWP) and rank-based stochastic pooling (RSP), according to different weighting strategies. As another major contribution, we present a novel criterion to analyze the discriminant ability of various pooling methods, which is heavily under-researched in the machine learning and computer vision communities. Experimental results on several image benchmarks show that rank-based pooling outperforms the existing pooling methods in classification performance. We further demonstrate better performance on CIFAR datasets by integrating RSP into Network-in-Network. Copyright © 2016 Elsevier Ltd. All rights reserved.
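Rank-based average pooling (RAP), the simplest of the three variants, can be sketched as follows; the window size and the number of averaged top ranks are illustrative, and strided non-overlapping windows are assumed.

```python
# Rank-based average pooling over a 2-D activation map (illustrative sketch).
import numpy as np

def rank_based_average_pool(x, k=2, t=2):
    """x: (H, W) activation map; k: pooling window; t: top ranks to average."""
    H, W = x.shape
    out = np.empty((H // k, W // k))
    for i in range(0, H - k + 1, k):
        for j in range(0, W - k + 1, k):
            region = np.sort(x[i:i + k, j:j + k].ravel())[::-1]  # rank activations
            out[i // k, j // k] = region[:t].mean()              # average top-t
    return out
```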
Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David
2015-08-01
Defining laboratory biomarker reference values in a healthy population and understanding the fluctuations in biomarker concentrations throughout life and between sexes are critical to clinical interpretation of laboratory test results in different disease states. The Canadian Health Measures Survey (CHMS) has collected blood samples and health information from the Canadian household population. In collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), the data have been analyzed to determine reference value distributions and reference intervals for several endocrine and special chemistry biomarkers in pediatric, adult, and geriatric age groups. CHMS collected data and blood samples from thousands of community participants aged 3 to 79 years. We used serum samples to measure 13 immunoassay-based special chemistry and endocrine markers. We assessed reference value distributions and, after excluding outliers, calculated age- and sex-specific reference intervals, along with corresponding 90% CIs, according to CLSI C28-A3 guidelines. We observed fluctuations in biomarker reference values across the pediatric, adult, and geriatric age range, with stratification required on the basis of age for all analytes. Additional sex partitions were required for apolipoprotein AI, homocysteine, ferritin, and high sensitivity C-reactive protein. The unique collaboration between CALIPER and CHMS has enabled, for the first time, a detailed examination of the changes in various immunochemical markers that occur in healthy individuals of different ages. The robust age- and sex-specific reference intervals established in this study provide insight into the complex biological changes that take place throughout development and aging and will contribute to improved clinical test interpretation. © 2015 American Association for Clinical Chemistry.
1000 Norms Project: protocol of a cross-sectional study cataloging human variation.
McKay, Marnee J; Baldwin, Jennifer N; Ferreira, Paulo; Simic, Milena; Vanicek, Natalie; Hiller, Claire E; Nightingale, Elizabeth J; Moloney, Niamh A; Quinlan, Kate G; Pourkazemi, Fereshteh; Sman, Amy D; Nicholson, Leslie L; Mousavi, Seyed J; Rose, Kristy; Raymond, Jacqueline; Mackey, Martin G; Chard, Angus; Hübscher, Markus; Wegener, Caleb; Fong Yan, Alycia; Refshauge, Kathryn M; Burns, Joshua
2016-03-01
Clinical decision-making regarding diagnosis and management largely depends on comparison with healthy or 'normal' values. Physiotherapists and researchers therefore need access to robust patient-centred outcome measures and appropriate reference values. However there is a lack of high-quality reference data for many clinical measures. The aim of the 1000 Norms Project is to generate a freely accessible database of musculoskeletal and neurological reference values representative of the healthy population across the lifespan. In 2012 the 1000 Norms Project Consortium defined the concept of 'normal', established a sampling strategy and selected measures based on clinical significance, psychometric properties and the need for reference data. Musculoskeletal and neurological items tapping the constructs of dexterity, balance, ambulation, joint range of motion, strength and power, endurance and motor planning will be collected in this cross-sectional study. Standardised questionnaires will evaluate quality of life, physical activity, and musculoskeletal health. Saliva DNA will be analysed for the ACTN3 genotype ('gene for speed'). A volunteer cohort of 1000 participants aged 3 to 100 years will be recruited according to a set of self-reported health criteria. Descriptive statistics will be generated, creating tables of mean values and standard deviations stratified for age and gender. Quantile regression equations will be used to generate age charts and age-specific centile values. This project will be a powerful resource to assist physiotherapists and clinicians across all areas of healthcare to diagnose pathology, track disease progression and evaluate treatment response. This reference dataset will also contribute to the development of robust patient-centred clinical trial outcome measures. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the third (√b1) and fourth (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated, and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess the data distributions themselves.
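To make a few of the listed quantities concrete, here is a sketch of a trimmed mean, the median absolute deviation, and a biweight scale estimator in Python; ROBUST itself computes many more statistics, and this is not the original program.

```python
# A few robust location/scale estimators (illustrative sketch).
import numpy as np
from scipy.stats import trim_mean

def mad(x):
    """Median absolute deviation from the median."""
    x = np.asarray(x, float)
    return np.median(np.abs(x - np.median(x)))

def biweight_scale(x, c=9.0):
    """Tukey biweight scale estimate (square root of the biweight midvariance)."""
    x = np.asarray(x, float)
    m, s = np.median(x), mad(x)
    u = (x - m) / (c * s)
    keep = np.abs(u) < 1                       # points beyond c*MAD get zero weight
    num = np.sum(((x[keep] - m) ** 2) * (1 - u[keep] ** 2) ** 4)
    den = np.sum((1 - u[keep] ** 2) * (1 - 5 * u[keep] ** 2))
    return np.sqrt(len(x) * num) / np.abs(den)

x = np.random.default_rng(1).normal(size=200)
print(trim_mean(x, 0.1), mad(x), biweight_scale(x))
```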
Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.
Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G
2012-01-01
Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.
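A sketch of a single-stage nonparametric cluster bootstrap for the incremental net benefit; the article's two-stage bootstrap additionally resamples individuals within clusters and applies a shrinkage correction, both omitted here, and the 0/1 arm coding and willingness-to-pay value are assumptions.

```python
# Cluster-level bootstrap of the incremental net benefit (INB) (sketch).
import numpy as np

def cluster_bootstrap_inb(costs, effects, cluster, arm, lam=20000, B=2000, seed=0):
    """arm: 0 = control, 1 = treatment; lam: assumed willingness-to-pay."""
    rng = np.random.default_rng(seed)
    nb = lam * np.asarray(effects) - np.asarray(costs)   # net benefit per patient
    cluster, arm = np.asarray(cluster), np.asarray(arm)
    ids = {a: np.unique(cluster[arm == a]) for a in (0, 1)}
    stats = []
    for _ in range(B):
        means = []
        for a in (0, 1):
            draw = rng.choice(ids[a], size=len(ids[a]), replace=True)
            vals = np.concatenate([nb[cluster == c] for c in draw])
            means.append(vals.mean())
        stats.append(means[1] - means[0])                # bootstrap INB replicate
    return np.mean(stats), np.percentile(stats, [2.5, 97.5])
```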
A Secure Trust Establishment Scheme for Wireless Sensor Networks
Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob
2014-01-01
Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
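The abstract does not give the estimator's formulas; the following is a loose sketch, under assumed [0, 1] scales, of how an aggregated-misbehavior component can defeat an on-off attack (the modified one-step M-estimator for secure recommendation aggregation is not shown):

```python
def update_trust(prev_trust, misbehavior_now, agg_misbehavior,
                 w_past=0.5, decay=0.95):
    """Combine current status with remembered misbehavior (a sketch).

    agg_misbehavior decays only slowly, so a node cannot erase its
    history by alternating good and bad behavior (an on-off attack).
    All quantities are assumed to lie in [0, 1].
    """
    agg = min(1.0, decay * agg_misbehavior + misbehavior_now)
    current = 1.0 - misbehavior_now                  # current status score
    trust = (1.0 - w_past) * (0.5 * prev_trust + 0.5 * current) - w_past * agg
    return max(0.0, min(1.0, trust)), agg
```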
Finger vein recognition based on personalized weight maps.
Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu
2013-09-10
Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods have been studied thoroughly to cope with the difficulty of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to all of them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs), in which different bits receive different weight values according to their stability across a number of training samples from an individual. We first present the concept of the PWM, and then propose a finger vein recognition framework consisting mainly of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition.
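A minimal sketch of the weight-map idea, assuming binary feature codes stored as rows of a NumPy array; the bit-stability definition here (agreement with the majority bit) is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def personalized_weight_map(train_codes):
    """train_codes: (n_samples, n_bits) binary array from one individual.
    Weight each bit by its stability across the training samples."""
    p = train_codes.mean(axis=0)              # fraction of ones per bit
    stability = np.maximum(p, 1.0 - p)        # in [0.5, 1]
    return (stability - 0.5) / 0.5            # rescale to [0, 1]

def weighted_hamming(code_a, code_b, weights):
    """Matching score: weighted fraction of mismatching bits."""
    mismatch = code_a != code_b
    return (weights * mismatch).sum() / (weights.sum() + 1e-12)
```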
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique to micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now widely applied, especially in the automotive industry, taking advantage of integrating both the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the development of virtual prototyping tools that can be applied in design optimization while accounting for technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices based on the theory of reliability-based robust design optimization, which takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each candidate design configuration, the assessment of uncertainty propagation is performed with a meta-modeling technique. The procedure is illustrated with an example optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed several physical phenomena to be introduced in order to correctly model the electrostatic actuation and the squeezing effect present between the electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertainty domains. Genetic algorithms fulfilled the defined optimization task effectively: the best individuals discovered are characterized by a minimized value of the multi-criteria objective function while satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
Kappenman, Emily S; Keil, Andreas
2017-01-01
In recent years, the psychological and behavioral sciences have increased efforts to strengthen methodological practices and publication standards, with the ultimate goal of enhancing the value and reproducibility of published reports. These issues are especially important in the multidisciplinary field of psychophysiology, which yields rich and complex data sets with a large number of observations. In addition, the technological tools and analysis methods available in the field of psychophysiology are continually evolving, widening the array of techniques and approaches available to researchers. This special issue presents articles detailing rigorous and systematic evaluations of tasks, measures, materials, analysis approaches, and statistical practices in a variety of subdisciplines of psychophysiology. These articles highlight challenges in conducting and interpreting psychophysiological research and provide data-driven, evidence-based recommendations for overcoming those challenges to produce robust, reproducible results in the field of psychophysiology. © 2016 Society for Psychophysiological Research.
The Nitrogen Balancing Act: Tracking the Environmental Performance of Food Production
McLellan, Eileen L; Cassman, Kenneth G; Eagle, Alison J; Woodbury, Peter B; Sela, Shai; Tonitto, Christina; Marjerison, Rebecca D; van Es, Harold M
2018-01-01
Abstract Farmers, food supply-chain entities, and policymakers need a simple but robust indicator to demonstrate progress toward reducing nitrogen pollution associated with food production. We show that nitrogen balance—the difference between nitrogen inputs and nitrogen outputs in an agricultural production system—is a robust measure of nitrogen losses that is simple to calculate, easily understood, and based on readily available farm data. Nitrogen balance provides farmers with a means of demonstrating to an increasingly concerned public that they are succeeding in reducing nitrogen losses while also improving the overall sustainability of their farming operation. Likewise, supply-chain companies and policymakers can use nitrogen balance to track progress toward sustainability goals. We describe the value of nitrogen balance in translating environmental targets into actionable goals for farmers and illustrate the potential roles of science, policy, and agricultural support networks in helping farmers achieve them. PMID:29662247
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cools, S.; Vanroose, W.
2016-03-01
This paper improves the convergence and robustness of a multigrid-based solver for the cross sections of the driven Schrödinger equation. Adding a Coupled Channel Correction Step (CCCS) after each multigrid (MG) V-cycle efficiently removes the errors that remain after the V-cycle sweep. The combined iterative solution scheme (MG-CCCS) is shown to feature significantly improved convergence rates over the classical MG method at energies where bound states dominate the solution, resulting in a fast and scalable solution method for the complex-valued Schrödinger break-up problem for any energy regime. The proposed solver displays optimal scaling; a solution is found in a time that is linear in the number of unknowns. The method is validated on a 2D Temkin-Poet model problem, and convergence results both as a solver and preconditioner are provided to support the O (N) scalability of the method. This paper extends the applicability of the complex contour approach for far field map computation (Cools et al. (2014) [10]).
Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao
2017-01-01
The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is itself a rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. First, accurate correspondences are established by using the weighted rotation-invariant feature distance and position distance together. Second, the rigid transformation is solved by the singular value decomposition method. Third, the weight is adjusted to control the relative contribution of positions and features. The new algorithm thus accomplishes the registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm.
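A compact sketch of such an ICP variant, using the centroid as a stand-in for the global reference point and a simple decay of the feature weight (both are assumptions; the paper's weight-adjustment rule is not specified in the abstract):

```python
import numpy as np

def rigid_from_svd(P, Q):
    """Least-squares rotation/translation mapping P onto Q (Kabsch method)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def feature(P):
    # rotation-invariant feature: distance of each point to the centroid
    return np.linalg.norm(P - P.mean(0), axis=1)

def icp_with_invariant_feature(P, Q, iters=50, w=1.0):
    fP, fQ = feature(P), feature(Q)        # invariant under rigid motion
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Pt = P @ R.T + t
        # combined position + feature distance for correspondence
        d = np.linalg.norm(Pt[:, None] - Q[None], axis=2) \
            + w * np.abs(fP[:, None] - fQ[None])
        corr = d.argmin(axis=1)
        R, t = rigid_from_svd(P, Q[corr])
        w *= 0.9                           # coarse-to-fine: rely less on the feature
    return R, t
```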
Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar
2017-09-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, in which Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of local extrema, singularities, and initial value choice. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
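The abstract does not list the localization equations themselves; a standard Gauss-Newton refinement of a position from range measurements, the kind of iteration the Modified Newton's Method presumably builds on, can be sketched as:

```python
import numpy as np

def localize_newton(anchors, dists, x0, iters=20, tol=1e-9):
    """Gauss-Newton refinement of a 2-D position from range measurements.
    anchors: (m, 2) known positions; dists: (m,) measured ranges;
    x0: initial guess, e.g. from the geometric intersection step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors
        r = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
        res = r - dists                    # range residuals
        J = diff / r[:, None]              # Jacobian of |x - a_i|
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x
```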
Optimization of an electromagnetic linear actuator using a network and a finite element model
NASA Astrophysics Data System (ADS)
Neubert, Holger; Kamusella, Alfred; Lienig, Jens
2011-03-01
Model-based design optimization leads to robust solutions only if the statistical deviations of design, load, and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps, so that Monte-Carlo simulations can be applied. As a result, we found an optimum system design meeting our requirements with regard to function and reliability.
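A minimal sketch of the surrogate-based robustness step described above, with a hypothetical `surrogate` callable standing in for the fitted response surface that replaces the combined network/FE model:

```python
import numpy as np

def monte_carlo_on_surrogate(surrogate, means, stds, n=100_000, seed=0):
    """Robustness estimate via Monte-Carlo on a cheap response surface.

    surrogate: vectorized callable mapping an (n, d) array of parameter
    draws to n scalar responses (an assumed stand-in, not a real API).
    means, stds: density-function parameters of the design variables.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(means, stds, size=(n, len(means)))  # sample the densities
    y = surrogate(X)
    return y.mean(), y.std()                           # spread measures robustness
```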
Hierarchical design of an electro-hydraulic actuator based on robust LPV methods
NASA Astrophysics Data System (ADS)
Németh, Balázs; Varga, Balázs; Gáspár, Péter
2015-08-01
The paper proposes a hierarchical control design of an electro-hydraulic actuator, which is used to improve the roll stability of vehicles. The purpose of the control system is to generate a reference torque, which is required by the vehicle dynamic control. The control-oriented model of the actuator is formulated in two subsystems. The high-level hydromotor is described in a linear form, while the low-level spool valve is a polynomial system. These subsystems require different control strategies. At the high level, a linear parameter-varying control is used to guarantee performance specifications. At the low level, a control Lyapunov-function-based algorithm, which creates discrete control input values of the valve, is proposed. The interaction between the two subsystems is guaranteed by the spool displacement, which is control input at the high level and must be tracked at the low-level control. The spool displacement has physical constraints, which must also be incorporated into the control design. The robust design of the high-level control incorporates the imprecision of the low-level control as an uncertainty of the system.
Research on Robustness of Tree-based P2P Streaming
NASA Astrophysics Data System (ADS)
Chu, Chen; Yan, Jinyao; Ding, Kuangzheng; Wang, Xi
Research on P2P streaming media is a hot topic in the area of Internet technology, and it has emerged as a promising technique. This paradigm brings a number of unique advantages such as scalability, resilience, and effectiveness in coping with dynamics and heterogeneity. However, there are also many problems in P2P streaming media systems that use a traditional tree-based topology, such as the bandwidth limits between parent and child nodes, and the strong effect that nodes joining or leaving have on the robustness of the tree. This paper introduces a method of measuring the robustness of tree-based topologies: using network measurement, we observe and record the bandwidth between all the nodes, analyze the correlation between sibling flows, and measure the robustness of the tree-based topology. The results show that links with similar routing paths share bandwidth bottlenecks, which reduces the robustness of the tree-based topology.
Xu, Jinfeng; Yuan, Ao; Zheng, Gang
2012-01-01
In the analysis of case-control genetic association, the trend test and Pearson’s test are the two most commonly used tests. In genome-wide association studies (GWAS), the Bayes factor is a useful tool to support significant p-values, and a better measure than the p-value when results are compared across studies with different sample sizes. When reporting the p-value of the trend test, we propose a Bayes factor directly based on the trend test. To improve the power to detect association under recessive or dominant genetic models, we propose a Bayes factor based on the trend test and incorporating Hardy-Weinberg disequilibrium in cases. When the true model is unknown, or both the trend test and Pearson’s test or other robust tests are applied in genome-wide scans, we propose a joint Bayes factor, combining the previous two Bayes factors. All three Bayes factors studied in this paper have closed forms and are easy to compute without integrations, so they can be reported along with p-values, especially in GWAS. We discuss how to use each of them and how to specify priors. Simulation studies and applications to three GWAS are provided to illustrate their usefulness to detect non-additive gene susceptibility in practice. PMID:22607017
Kim, SungHwan; Lin, Chien-Wei; Tseng, George C
2016-07-01
Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of single expression profile, the performance usually greatly reduces in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The result showed superior performance of cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases robustness and accuracy of the classification model that will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online. (http://tsenglab.biostat.pitt.edu/software.htm). ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
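For reference, the basic top scoring pair score and a meta-analytic average across studies might be sketched as follows (illustrative only; the paper's MetaKTSP pair selection and its P-value-combination variant involve more machinery):

```python
import numpy as np

def tsp_score(X, y, i, j):
    """Top scoring pair score for genes i, j: difference between classes
    in the probability that gene i is expressed below gene j."""
    lt = X[:, i] < X[:, j]
    return abs(lt[y == 1].mean() - lt[y == 0].mean())

def meta_tsp_score(studies, i, j):
    """MetaKTSP-style aggregation (sketch): average the TSP score of a
    gene pair across multiple transcriptomic studies.
    studies: list of (X, y) pairs with a shared gene ordering."""
    return np.mean([tsp_score(X, y, i, j) for X, y in studies])
```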
Petroleum refinery operational planning using robust optimization
NASA Astrophysics Data System (ADS)
Leiras, A.; Hamacher, S.; Elkamel, A.
2010-12-01
In this article, the robust optimization methodology is applied to deal with uncertainties in the prices of saleable products, operating costs, product demand, and product yield in the context of refinery operational planning. A numerical study demonstrates the effectiveness of the proposed robust approach. The benefits of incorporating uncertainty in the different model parameters were evaluated in terms of the cost of ignoring uncertainty in the problem. The calculations suggest that this benefit is equivalent to 7.47% of the deterministic solution value, which indicates that the robust model may offer advantages to those involved with refinery operational planning. In addition, the probability bounds of constraint violation are calculated to help the decision-maker adopt a more appropriate parameter to control robustness and judge the tradeoff between conservatism and total profit.
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC are demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard
2016-04-01
Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method for extracting individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data comprised high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from streets. The robust segmentation method consists of the following steps. First, the ground points are determined. Second, cylinders are fitted in a vertical slice 1-1.5 m above ground, which determines the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residuals are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying the trees into single species. The MLS data used in this project were measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
NASA Astrophysics Data System (ADS)
Dong, Huanhuan; Liu, Jing; Liu, Xiaoru; Yu, Yanying; Cao, Shuwen
2018-01-01
A collection of thirty-six aromatic heterocycle thiosemicarbazone analogues presenting a broad span of anti-tyrosinase activities was designed and obtained. A robust and reliable two-dimensional quantitative structure-activity relationship (QSAR) model, as evidenced by the high q2 and r2 values (0.848 and 0.893, respectively), was built from the analogues to predict the quantitative chemical-biological relationship and suggest new directions for modification. The inhibitory activities of the compounds were found to depend greatly on molecular shape and orbital energy. Substituents that produced large ovality and high highest-occupied molecular orbital (HOMO) energy values helped to improve the activity of these analogues. The molecular docking results provided visual evidence for the QSAR analysis and the inhibition mechanism. Based on these findings, two novel tyrosinase inhibitors, O04 and O05, with predicted IC50 values of 0.5384 and 0.8752 nM, were designed and suggested for further research.
The motivating role of violence in video games.
Przybylski, Andrew K; Ryan, Richard M; Rigby, C Scott
2009-02-01
Six studies, two survey based and four experimental, explored the relations between violent content and people's motivation and enjoyment of video game play. Based on self-determination theory, the authors hypothesized that violence adds little to enjoyment or motivation for typical players once autonomy and competence need satisfactions are considered. As predicted, results from all studies showed that enjoyment, value, and desire for future play were robustly associated with the experience of autonomy and competence in gameplay. Violent content added little unique variance in accounting for these outcomes and was also largely unrelated to need satisfactions. The studies also showed that players high in trait aggression were more likely to prefer or value games with violent contents, even though violent contents did not reliably enhance their game enjoyment or immersion. Discussion focuses on the significance of the current findings for individuals and the understanding of motivation in virtual environments.
Food reward in the absence of taste receptor signaling.
de Araujo, Ivan E; Oliveira-Maia, Albino J; Sotnikova, Tatyana D; Gainetdinov, Raul R; Caron, Marc G; Nicolelis, Miguel A L; Simon, Sidney A
2008-03-27
Food palatability and hedonic value play central roles in nutrient intake. However, postingestive effects can influence food preferences independently of palatability, although the neurobiological bases of such mechanisms remain poorly understood. Of central interest is whether the same brain reward circuitry that is responsive to palatable rewards also encodes metabolic value independently of taste signaling. Here we show that trpm5-/- mice, which lack the cellular machinery required for sweet taste transduction, can develop a robust preference for sucrose solutions based solely on caloric content. Sucrose intake induced dopamine release in the ventral striatum of these sweet-blind mice, a pattern usually associated with receipt of palatable rewards. Furthermore, single neurons in this same ventral striatal region showed increased sensitivity to caloric intake even in the absence of gustatory inputs. Our findings suggest that calorie-rich nutrients can directly influence brain reward circuits that control food intake independently of palatability or functional taste transduction.
Lucena, Rafael; Cárdenas, Soledad; Gallego, Mercedes; Valcárcel, Miguel
2006-03-01
Monitoring the exhaustion of alkaline degreasing baths is one of the main aspects in metal mechanizing industrial process control. The global level of surfactant, and mainly grease, can be used as ageing indicators. In this paper, an attenuated total reflection-Fourier transform infrared (ATR-FTIR) membrane-based sensor is presented for the determination of these parameters. The system is based on a micro-liquid-liquid extraction of the analytes through a polymeric membrane from the aqueous to the organic solvent layer which is in close contact with the internal reflection element and continuously monitored. Samples are automatically processed using a simple, robust sequential injection analysis (SIA) configuration, on-line coupled to the instrument. The global signal obtained for both families of compounds are processed via a multivariate calibration technique (partial least squares, PLS). Excellent correlation was obtained for the values given by the proposed method compared to those of the gravimetric reference one with very low error values for both calibration and validation.
A kriging metamodel-assisted robust optimization method based on a reverse model
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a way that is easy for everyone to access. However, a problem appears when someone else claims the creation as their own property or modifies part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility when the watermark is inserted in a carrier image: the carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this setting, a trade-off arises between the invisibility and the robustness of image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency subbands is robust to Gaussian blur, rescaling, and JPEG compression attacks, while embedding in high-frequency subbands is robust to Gaussian noise.
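A minimal sketch of SVD-in-DWT-domain embedding, assuming the PyWavelets package and a Haar wavelet; the exact subband choice, wavelet, and extraction procedure used in the paper are not specified in the abstract:

```python
import numpy as np
import pywt  # PyWavelets

def embed_svd_dwt(cover, watermark, alpha=0.05):
    """Embed the watermark's singular values into the LL subband.
    alpha is the scaling factor that trades invisibility for robustness."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
    n = min(len(S), len(Sw))
    S2 = S.copy()
    S2[:n] = S[:n] + alpha * Sw[:n]        # perturb singular values only
    LL2 = U @ np.diag(S2) @ Vt
    return pywt.idwt2((LL2, (LH, HL, HH)), 'haar')
```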
Robust and transferable quantification of NMR spectral quality using IROC analysis
NASA Astrophysics Data System (ADS)
Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.
2017-12-01
Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.
Yang, Haimin; Pan, Zhisong; Tao, Qing
2017-01-01
Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, as real-world time series grow longer, the data often contain many outliers, which can mislead the learned model if treated as normal points during prediction. To address this issue, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. The method tunes the learning rate of the stochastic gradient algorithm adaptively during prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average, by modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large relative prediction error corresponds to a small learning rate, and vice versa. Experiments on both synthetic data and real time series show that our method achieves better performance than existing LSTM-based methods.
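A sketch of a RoAdam-style update under stated assumptions: the smoothing constant b3 and the clipping bounds are illustrative, not the paper's, and only the scalar-loss bookkeeping that distinguishes it from plain Adam is shown:

```python
import numpy as np

def roadam_step(param, grad, state, loss, prev_loss,
                lr=1e-3, b1=0.9, b2=0.999, b3=0.999, eps=1e-8,
                k_min=0.1, k_max=10.0):
    """One RoAdam-style update; state starts as (0.0, 0.0, 1.0, 0).
    The learning rate is divided by a running estimate of the relative
    prediction error, so outlier losses shrink the effective step size."""
    m, v, d, t = state
    t += 1
    r = abs(loss) / max(abs(prev_loss), eps)       # relative prediction error
    r = np.clip(r, k_min, k_max)                   # clip extreme ratios
    d = b3 * d + (1 - b3) * r                      # smooth it like a moment
    m = b1 * m + (1 - b1) * grad                   # standard Adam moments
    v = b2 * v + (1 - b2) * grad**2
    mh = m / (1 - b1**t)
    vh = v / (1 - b2**t)
    param = param - (lr / d) * mh / (np.sqrt(vh) + eps)
    return param, (m, v, d, t)
```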
Zhang, Bitao; Pi, YouGuo
2013-07-01
The traditional integer order proportional-integral-differential (IO-PID) controller is sensitive to parameter variation and/or external load disturbance of a permanent magnet synchronous motor (PMSM). A fractional order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness concerns only open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, can enhance the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification, and robustness against gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameter of the FOIF. Simulation and experimental results show that the proposed method not only achieves favorable tracking performance but is also robust with regard to external load disturbance and parameter variation. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Predicting the rate of change in timber value for forest stands infested with gypsy moth
David A. Gansner; Owen W. Herrick
1982-01-01
Presents a method for estimating the potential impact of gypsy moth attacks on forest-stand value. Robust regression analysis is used to develop an equation for predicting the rate of change in timber value from easy-to-measure key characteristics of stand condition.
Optimization-Based Robust Nonlinear Control
2006-08-01
ABSTRACT: New control algorithms were developed for robust stabilization of nonlinear dynamical systems. Novel, linear matrix inequality-based synthesis... was to further advance optimization-based robust nonlinear control design, for general nonlinear systems (especially in discrete time), for linear... Teel, IEEE Transactions on Control Systems Technology, vol. 14, no. 3, pp. 398-407, May 2006. 3. "A unified framework for input-to-state stability in
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.
In far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this paper is to study effects of these variables on three model waste glasses (SON68, ISG, AFCI). To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(RT) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. The results from these tests were then used to parameterize a kinetic rate model based on transition state theory. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Discrepancies in the absolute dissolution rates as compared to those obtained using other test methods are discussed. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies. The results were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), which is capable of providing a robust uncertainty analysis. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, the effect of glass composition on the rate parameter values could possibly be obtained. This would allow for the possibility of predicting the forward dissolution rate of glass based solely on composition.
NASA Astrophysics Data System (ADS)
Munhoven, G.
2013-08-01
The total alkalinity-pH equation, which relates total alkalinity and pH for a given set of total concentrations of the acid-base systems that contribute to total alkalinity in a given water sample, is reviewed and its mathematical properties established. We prove that the equation function is strictly monotone and always has exactly one positive root. Different commonly used approximations are discussed and compared. An original method to derive appropriate initial values for the iterative solution of the cubic polynomial equation based upon carbonate-borate-alkalinity is presented. We then review different methods that have been used to solve the total alkalinity-pH equation, with a main focus on biogeochemical models. The shortcomings and limitations of these methods are identified and discussed. We then present two variants of a new, robust and universally convergent algorithm to solve the total alkalinity-pH equation. This algorithm does not require any a priori knowledge of the solution. SolveSAPHE (Solver Suite for Alkalinity-PH Equations) provides reference implementations of several variants of the new algorithm in Fortran 90, together with new implementations of other, previously published solvers. The new iterative procedure is shown to converge from any starting value to the physical solution. The extra computational cost for the convergence security is only 10-15% compared to the fastest algorithm in our test series.
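As a worked illustration of why the monotonicity result guarantees a unique root, here is a bracketed bisection on [H+] for a carbonate-borate alkalinity function; the equilibrium constants are assumed illustrative values, and SolveSAPHE's actual algorithm is considerably more refined:

```python
import numpy as np

# Equilibrium constants (mol/kg, roughly surface-seawater scale; assumed values)
K1, K2 = 1.4e-6, 1.1e-9      # carbonic acid
KB = 2.5e-9                  # boric acid
KW = 6.0e-14                 # water

def alk(h, dic, bt):
    """Carbonate-borate alkalinity as a function of [H+]."""
    denom = h*h + K1*h + K1*K2
    hco3 = dic * K1*h / denom
    co3  = dic * K1*K2 / denom
    boh4 = bt * KB / (KB + h)
    return hco3 + 2*co3 + boh4 + KW/h - h

def solve_ph(ta, dic, bt, lo=1e-12, hi=1e-2):
    """Bisect on [H+]: alk(h) - ta is strictly decreasing in h, so the
    root is unique, mirroring the monotonicity result in the paper."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)           # geometric mean suits the log scale
        if alk(mid, dic, bt) - ta > 0:
            lo = mid                     # still above target: root lies higher
        else:
            hi = mid
    return -np.log10(np.sqrt(lo * hi))   # pH

# Example: TA = 2300 umol/kg, DIC = 2000 umol/kg, total borate = 415 umol/kg
print(solve_ph(2.3e-3, 2.0e-3, 4.15e-4))
```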
Lakdawalla, Darius N; Chou, Jacquelyn W; Linthicum, Mark T; MacEwan, Joanna P; Zhang, Jie; Goldman, Dana P
2015-05-01
Surrogate end points may be used as proxy for more robust clinical end points. One prominent example is the use of progression-free survival (PFS) as a surrogate for overall survival (OS) in trials for oncologic treatments. Decisions based on surrogate end points may expedite regulatory approval but may not accurately reflect drug efficacy. Payers and clinicians must balance the potential benefits of earlier treatment access based on surrogate end points against the risks of clinical uncertainty. To present a framework for evaluating the expected net benefit or cost of providing early access to new treatments on the basis of evidence of PFS benefits before OS results are available, using non-small-cell lung cancer (NSCLC) as an example. A probabilistic decision model was used to estimate expected incremental social value of the decision to grant access to a new treatment on the basis of PFS evidence. The model analyzed a hypothetical population of patients with NSCLC who could be treated during the period between PFS and OS evidence publication. Estimates for delay in publication of OS evidence following publication of PFS evidence, expected OS benefit given PFS benefit, incremental cost of new treatment, and other parameters were drawn from the literature on treatment of NSCLC. Incremental social value of early access for each additional patient per month (in 2014 US dollars). For "medium-value" model parameters, early reimbursement of drugs with any PFS benefit yields an incremental social cost of more than $170,000 per newly treated patient per month. In contrast, granting early access on the basis of PFS benefit between 1 and 3.5 months produces more than $73,000 in incremental social value. Across the full range of model parameter values, granting access for drugs with PFS benefit between 3 and 3.5 months is robustly beneficial, generating incremental social value ranging from $38,000 to more than $1 million per newly treated patient per month, whereas access for all drugs with any PFS benefit is usually not beneficial. The value of providing access to new treatments on the basis of surrogate end points, and PFS in particular, likely varies considerably. Payers and clinicians should carefully consider how to use PFS data in balancing potential benefits against costs in each particular disease.
Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels
NASA Astrophysics Data System (ADS)
Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.
2015-12-01
Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
Courtright, Katherine R; Weinberger, Steven E; Wagner, Jason
2015-04-01
Physician decision making is partially responsible for the roughly 30% of U.S. healthcare expenditures that are wasted annually on low-value care. In response to both the widespread public demand for higher-quality care and the cost crisis, payers are transitioning toward value-based payment models whereby physicians are rewarded for high-value, cost-conscious care. Furthermore, to target physicians in training to practice with cost awareness, the Accreditation Council for Graduate Medical Education has created both individual objective milestones and institutional requirements to incorporate quality improvement and cost awareness into fellowship training. Subsequently, some professional medical societies have initiated high-value care educational campaigns, but the overwhelming majority target either medical students or residents in training. Currently, there are few resources available to help guide subspecialty fellowship programs to successfully design durable high-value care curricula. The resource-intensive nature of pulmonary and critical care medicine offers unique opportunities for the specialty to lead in modeling and teaching high-value care. To ensure that fellows graduate with the capability to practice high-value care, we recommend that fellowship programs focus on four major educational domains. These include fostering a value-based culture, providing a robust didactic experience, engaging trainees in process improvement projects, and encouraging scholarship. In doing so, pulmonary and critical care educators can strive to train future physicians who are prepared to provide care that is both high quality and informed by cost awareness.
Adaptive Critic Nonlinear Robust Control: A Survey.
Wang, Ding; He, Haibo; Liu, Derong
2017-10-01
Adaptive dynamic programming (ADP) and reinforcement learning are closely related when performing intelligent optimization. Both are regarded as promising methods involving the key components of evaluation and improvement, against the background of information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design for continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is beneficial to promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.
Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking.
Hartgerink, Chris H J
2017-01-01
Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values at .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins, and that systematically more p-values are reported to two decimal places than to three, I did not exclude p = .045 and p = .05. I applied Fisher's method to .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range between .04 < p < .05 or when we inspect the second decimal. Taking reporting tendencies into account when selecting the bins to compare is especially important because this dataset does not allow for recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values as a form of p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al., 2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion of Head et al. (2015), indicating that the results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best.
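A sketch of the bin-comparison logic described above; this is not the author's code, the bin edges follow the abstract, and the equal-probability null hypothesis is an assumption of the sketch:

```python
from scipy.stats import binomtest

def bin_comparison(pvals, lo_bin=(0.03875, 0.04), hi_bin=(0.04875, 0.05)):
    """Compare counts of reported p-values in two equal-width bins just
    below .04 and .05; an excess in the higher bin would suggest
    left-skew p-hacking. Returns a BinomTestResult (see .pvalue)."""
    n_lo = sum(lo_bin[0] < p <= lo_bin[1] for p in pvals)
    n_hi = sum(hi_bin[0] < p <= hi_bin[1] for p in pvals)
    # under no p-hacking, a reported value falls in either bin equally often
    return binomtest(n_hi, n_lo + n_hi, 0.5, alternative='greater')
```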
Using Biweight M-Estimates in the Two-Sample Problem. 1. Symmetric Populations
1982-01-01
to a Student's t distribution, across a broad range of α-levels. To be conservative, we might wish to approximate "t" by a Student's t on nine-tenths...(n−10). While the robustness of classical procedures for extreme α-levels has not been investigated, a comparison with the values in Lee and...D'Agostino (1976) indicates that this procedure is highly robust of validity at α = .05; presumably this robustness extends to the extreme α-levels as well
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
Comparing biomarkers as principal surrogate endpoints.
Huang, Ying; Gilbert, Peter B
2011-12-01
Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare principal surrogate value of biomarkers or general risk models that consider multiple biomarkers remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods by incorporating continuous baseline covariates to predict the biomarker(s), and is more robust than existing parametric methods by leaving the error distribution of markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials. © 2011, The International Biometric Society.
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, inverse SVD is carried out on the blocks and the hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is extracted first; an averaging operation is then carried out on the extracted information to generate the final watermark information. Finally, the algorithm is simulated. Furthermore, to test the watermarked image's resistance to attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
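The embedding rule described above, adding the hologram's singular values to each block's singular values, can be sketched compactly. This is a hedged illustration rather than the paper's implementation: hologram generation is omitted, `alpha` is an assumed embedding strength, and `block`/`hologram` are assumed to be same-sized 2-D arrays.

```python
import numpy as np

def embed_block(block, hologram, alpha=0.05):
    Ub, sb, Vb = np.linalg.svd(block, full_matrices=False)
    _,  sw, _  = np.linalg.svd(hologram, full_matrices=False)
    s_marked = sb + alpha * sw          # additive rule on singular values
    return Ub @ np.diag(s_marked) @ Vb  # inverse SVD gives the marked block

def extract_block(marked_block, original_block, alpha=0.05):
    _, sm, _ = np.linalg.svd(marked_block, full_matrices=False)
    _, sb, _ = np.linalg.svd(original_block, full_matrices=False)
    return (sm - sb) / alpha            # recovered hologram singular values
```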
Wang, Huanqing; Chen, Bing; Liu, Xiaoping; Liu, Kefu; Lin, Chong
2013-12-01
This paper is concerned with the problem of adaptive fuzzy tracking control for a class of pure-feedback stochastic nonlinear systems with input saturation. To overcome the design difficulty from nondifferential saturation nonlinearity, a smooth nonlinear function of the control input signal is first introduced to approximate the saturation function; then, an adaptive fuzzy tracking controller based on the mean-value theorem is constructed by using backstepping technique. The proposed adaptive fuzzy controller guarantees that all signals in the closed-loop system are bounded in probability and the system output eventually converges to a small neighborhood of the desired reference signal in the sense of mean quartic value. Simulation results further illustrate the effectiveness of the proposed control scheme.
Estimating a WTP-based value of a QALY: the 'chained' approach.
Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam
2013-09-01
A major issue in health economic evaluation is the value to place on a quality-adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in developing sounder methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade-off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. Within EuroVaQ we developed an adaptation to the 'chained approach' that attempts to overcome previously documented problems (in particular the tendency to arrive at exceedingly high WTP-per-QALY values). The survey was administered via Internet panels in each participating country, and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side, with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
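The arithmetic of the chained approach can be illustrated with a toy calculation (all numbers invented): a utility u elicited for a health state implies that avoiding a duration t of that state is worth (1 − u) × t QALYs, and dividing the elicited WTP by that QALY loss gives the implied value of a QALY.

```python
# Toy illustration of the two-stage "chained" calculation; numbers invented.
def wtp_per_qaly(utility, duration_years, wtp):
    qaly_loss = (1.0 - utility) * duration_years   # stage 1: QALY loss avoided
    return wtp / qaly_loss                         # stage 2: implied value

# e.g., u = 0.8 elicited by TTO, a 1-year episode, WTP of 5,000:
print(wtp_per_qaly(0.8, 1.0, 5000.0))  # 25,000 per QALY
```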
PDE based scheme for multi-modal medical image watermarking.
Aherrahrou, N; Tairi, H
2015-11-25
This work deals with copyright protection of digital images, an issue that needs protection of intellectual property rights. It is an important issue, with a large number of medical images interchanged on the Internet every day. It is therefore a challenging task to ensure the integrity and authenticity of received images. Digital watermarking techniques have been proposed as a valid solution to this problem. It is worth mentioning that Region Of Interest (ROI)/Region Of Non Interest (RONI) selection is a significant limitation from which most ROI/RONI-based watermarking schemes suffer, which in turn limits their applicability in an effective way. Generally, the ROI/RONI is defined by a radiologist or a computer-aided selection tool, and thus this will not be efficient for an institute or health care system where one has to process a large number of images. Therefore, developing an automatic ROI/RONI selection is a challenging task. The major aim of this work is to develop an automatic selection algorithm for the embedding region based on the so-called Partial Differential Equation (PDE) method, thus avoiding ROI/RONI selection problems including: (1) computational overhead, (2) time consumption, and (3) modality-dependent selection. The algorithm is evaluated in terms of imperceptibility, robustness, tamper localization and recovery using MRI, Ultrasound, CT and X-ray grey-scale medical images. From experiments conducted on a database of 100 medical images of four modalities, it can be inferred that our method achieves high imperceptibility while showing good robustness against attacks. Furthermore, the experimental results confirm the effectiveness of the proposed algorithm in detecting and recovering from the various types of tampering. The highest PSNR value reached over the 100 images is 94.746 dB, while the lowest PSNR value is 60.1272 dB, which demonstrates the high imperceptibility of the proposed method. Moreover, the Normalized Correlation (NC) between the original watermark and the corresponding extracted watermark was computed for the 100 images. We obtain an NC value greater than or equal to 0.998, indicating that the extracted watermark is very similar to the original watermark for all modalities. The key features of our proposed method are to (1) increase the robustness of the watermark against attacks; (2) provide more transparency of the embedded watermark; (3) provide more authenticity and integrity protection of the content of medical images; and (4) provide minimum ROI/RONI selection complexity.
Cascading failures with local load redistribution in interdependent Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol
2016-05-01
Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks firstly decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases with the increment of the coupling strength until it reaches a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.
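A schematic simulation in the spirit of the model described above is sketched below. It is not the authors' code: the degree-based initial loads, one-to-one node coupling, tolerance-based capacities, and the way the coupling strength q splits shed load are all illustrative assumptions.

```python
import networkx as nx

def cascade(n=200, k=4, p_rw=0.1, tol=0.5, q=0.3):
    # Two Watts-Strogatz layers with identical size; nodes are (layer, index).
    G = {"A": nx.watts_strogatz_graph(n, k, p_rw, seed=1),
         "B": nx.watts_strogatz_graph(n, k, p_rw, seed=2)}
    load = {(l, v): float(G[l].degree(v)) for l in G for v in G[l]}
    cap = {x: (1.0 + tol) * load[x] for x in load}      # tolerance parameter
    failed, queue = set(), [("A", 0)]                   # attack one node in A
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        layer, v = node
        other = "B" if layer == "A" else "A"
        shed = load[node]
        targets = []
        partner = (other, v)                            # one-to-one coupling
        if partner not in failed:
            targets.append((partner, q * shed))         # inter-layer share
        nbrs = [(layer, u) for u in G[layer][v] if (layer, u) not in failed]
        for x in nbrs:
            targets.append((x, (1.0 - q) * shed / len(nbrs)))
        for x, extra in targets:
            load[x] += extra
            if load[x] > cap[x]:                        # overload -> fail
                queue.append(x)
    return len(failed)

print(cascade())
```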
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Uncertainty, robustness, and the value of information in managing a population of northern bobwhites
Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael
2014-01-01
The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about system dynamics and the effects of management.
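The expected value of perfect information criterion mentioned above has a compact numerical form: it is the difference between the expected outcome if the true hypothesis were known before acting and the expected outcome of the best action under current uncertainty. A minimal sketch with an invented utility matrix (not the bobwhite case-study values):

```python
import numpy as np

# U[a, h]: expected management outcome of action a under hypothesis h;
# w[h]: hypothesis weights (probabilities). Values are illustrative.
U = np.array([[4., 1., 3.],
              [2., 5., 2.],
              [3., 3., 3.]])
w = np.array([0.5, 0.3, 0.2])

best_under_uncertainty = (U @ w).max()        # act now, hypothesis unknown
best_with_perfect_info = U.max(axis=0) @ w    # resolve hypothesis, then act
evpi = best_with_perfect_info - best_under_uncertainty
print(evpi)
```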
Friggens, N C; Blanc, F; Berry, D P; Puillet, L
2017-12-01
As the environments in which livestock are reared become more variable, animal robustness becomes an increasingly valuable attribute. Consequently, there is increasing focus on managing and breeding for it. However, robustness is a difficult phenotype to properly characterise because it is a complex trait composed of multiple components, including dynamic elements such as the rates of response to, and recovery from, environmental perturbations. In this review, the following definition of robustness is used: the ability, in the face of environmental constraints, to carry on doing the various things that the animal needs to do to favour its future ability to reproduce. The different elements of this definition are discussed to provide a clearer understanding of the components of robustness. The implications for quantifying robustness are that there is no single measure of robustness but rather that it is the combination of multiple and interacting component mechanisms whose relative value is context dependent. This context encompasses both the prevailing environment and the prevailing selection pressure. One key issue for measuring robustness is to be clear on the use to which the robustness measurements will be put. If the purpose is to identify biomarkers that may be useful for molecular phenotyping or genotyping, the measurements should focus on the physiological mechanisms underlying robustness. However, if the purpose of measuring robustness is to quantify the extent to which animals can adapt to limiting conditions, then the measurements should focus on the life functions, the trade-offs between them and the animal's capacity to increase resource acquisition. The time-related aspect of robustness also has important implications. Single time-point measurements are of limited value because they do not permit measurement of responses to (and recovery from) environmental perturbations. The exception is single measurements of the accumulated consequence of a good (or bad) adaptive capacity, such as productive longevity and lifetime efficiency. In contrast, repeated measurements over time have a high potential for quantification of the animal's ability to cope with environmental challenges. Thus, we should be able to quantify differences in adaptive capacity from the data that are increasingly becoming available with the deployment of automated monitoring technology on farm. The challenge for future management and breeding will be how to combine various proxy measures to obtain reliable estimates of robustness components in large populations. A key aspect for achieving this is to define phenotypes from consideration of their biological properties and not just from available measures.
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
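The integration idea, feeding the process-based model's output to the network as an extra input, can be sketched as follows. This is a hedged illustration assuming PyTorch; the layer sizes, the concatenation-based integration, and all names are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class PBMGuidedLSTM(nn.Module):
    def __init__(self, n_forcing, hidden=64):
        super().__init__()
        # +1 input channel for the process-based model's simulated soil moisture
        self.lstm = nn.LSTM(n_forcing + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, forcing, pbm_sm):
        # forcing: (batch, time, n_forcing); pbm_sm: (batch, time, 1)
        x = torch.cat([forcing, pbm_sm], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)        # predicted SMAP-like soil moisture series

model = PBMGuidedLSTM(n_forcing=5)
y = model(torch.randn(8, 30, 5), torch.randn(8, 30, 1))   # -> (8, 30, 1)
```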
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voort, Sebastian van der; Water, Steven van de
Purpose: We aimed to derive a "robustness recipe" giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can be used in minimax robust optimization of IMPT treatment plans to ensure adequate CTV coverage for oropharyngeal cancer patients.
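The quoted recipes translate directly into code. The sketch below transcribes the fitted polynomials from the abstract (Σ and σ in mm, ρ in %, giving SR in mm and RR in %); the function name and the threshold on ρ for the bilateral RR are my reading of the text.

```python
# Direct transcription of the fitted robustness recipes quoted above.
def robustness_recipe(sigma_sys, sigma_rand, rho, bilateral):
    if bilateral:
        sr = (-0.07 * sigma_sys**2 + 0.19 * sigma_rand**2
              + 1.34 * sigma_sys - 0.07 * sigma_rand + 1.17)
        rr = 3.0 if rho <= 1.0 else 4.0   # RR = 3% (rho = 1%) or 4% (rho = 2%)
    else:
        sr = (-0.15 * sigma_sys**2 + 0.27 * sigma_rand**2
              + 1.85 * sigma_sys - 0.06 * sigma_rand + 1.22)
        rr = 3.0                          # RR = 3% for rho = 1% and 2%
    return sr, rr                         # setup (mm) and range (%) settings

print(robustness_recipe(1.5, 1.5, 2.0, bilateral=False))
```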
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A detection threshold is set by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effect of the channel and integration period on the TOA estimation is evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad
2014-07-01
Assessment of cardiac right-ventricle function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD. Fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricle functional parameters from cardiac MRI images involves segmentation of the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle, followed by calculation of end-diastolic and end-systolic volumes and further functional parameters. The main problem of this task is the segmentation part. We used a robust method based on a deformable model that uses shape information for segmentation of the right ventricle in short-axis MRI images. After segmentation of the right ventricle from base to apex in the end-diastole and end-systole phases of the cardiac cycle, the volume of the right ventricle in these phases was calculated and then the ejection fraction computed. We performed a quantitative evaluation of clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show low RMSE for right-ventricle ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). Evaluation of the segmentation results is also done by means of four statistical measures: sensitivity, specificity, similarity index and Jaccard index. The average value of the similarity index is 86.87%. The Jaccard index mean value is 83.85%, which shows good segmentation accuracy. The average sensitivity is 93.9% and the mean specificity is 89.45%. These results show the reliability of the proposed method in cases where manual segmentation is inapplicable. The huge shape variability of the right ventricle led us to use a shape-prior-based method; this work could be extended with four-dimensional processing for determining the first ventricular slices.
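The four evaluation measures listed above can be computed from binary masks as follows; "similarity index" is read here as the Dice coefficient, which is a common but assumed interpretation.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.sum(a & m)    # voxels labelled ventricle by both
    fp = np.sum(a & ~m)   # automatic only
    fn = np.sum(~a & m)   # manual only
    tn = np.sum(~a & ~m)  # background in both
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice":        2 * tp / (2 * tp + fp + fn),   # similarity index
        "jaccard":     tp / (tp + fp + fn),
    }
```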
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Improving numeracy through values affirmation enhances decision and STEM outcomes
Peters, Ellen; Tompkins, Mary Kate; Schley, Dan; Meilleur, Louise; Sinayev, Aleksander; Tusler, Martin; Wagner, Laura; Crocker, Jennifer
2017-01-01
Greater numeracy has been correlated with better health and financial outcomes in past studies, but causal effects in adults are unknown. In a 9-week longitudinal study, undergraduate students, all taking a psychology statistics course, were randomly assigned to a control condition or a values-affirmation manipulation intended to improve numeracy. By the final week in the course, the numeracy intervention (statistics-course enrollment combined with values affirmation) enhanced objective numeracy, subjective numeracy, and two decision-related outcomes (financial literacy and health-related behaviors). It also showed positive indirect-only effects on financial outcomes and a series of STEM-related outcomes (course grades, intentions to take more math-intensive courses, later math-intensive courses taken based on academic transcripts). All decision and STEM-related outcome effects were mediated by the changes in objective and/or subjective numeracy and demonstrated similar and robust enhancements. Improvements to abstract numeric reasoning can improve everyday outcomes. PMID:28704410
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
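The Distortion-Compensated Dither Modulation quantizer named above can be sketched for a single coefficient and bit. This is an illustrative implementation of the general DC-DM idea, not the paper's code; the step size `delta` and compensation factor `alpha` are assumed values.

```python
import numpy as np

def dcdm_embed(x, m, delta=8.0, alpha=0.7):
    # Quantize with a bit-dependent dither, then move only a fraction alpha
    # of the way to the quantized point (distortion compensation).
    dither = 0.0 if m == 0 else delta / 2.0
    q = delta * np.round((x - dither) / delta) + dither
    return x + alpha * (q - x)

def dm_detect(y, delta=8.0):
    # Decode by choosing the dither lattice closest to y.
    d0 = abs(y - delta * np.round(y / delta))
    d1 = abs(y - (delta * np.round((y - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1

y = dcdm_embed(13.2, m=1)
print(dm_detect(y))   # -> 1 for suitable delta/alpha
```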
Roberts, Norman B; Dutton, John; Higgins, Gerald; Allars, Lesley
2005-01-01
The problem in the measurement of cyclosporin (CyA) is that the widely used immuno-based assays suffer from interference by metabolites present in unpredictable excess. To resolve this, the consensus view has been to develop more specific and robust procedures for the measurement of CyA alone in order to give values similar to those obtained by HPLC. We developed an alternative strategy based on Abbott poly- and monoclonal assays to derive an adjusted monoclonal value as an equivalent measurement to HPLC. We have now evaluated a recently developed semi-automated HPLC procedure and used it to test the validity of the adjusted monoclonal value. The automated HPLC procedure with online clean-up was optimised for the separation of CyA and internal standard CyD. The assay was simple to use, precise and gave good recovery of cyclosporin from whole blood. Comparisons with the more specific immunoassays Abbott AxSym and EMIT showed close agreement, whereas Abbott monoclonal values indicated up to 20% positive bias. In contrast, the adjusted monoclonal values gave good agreement with HPLC. Data obtained from HPLC linked to tandem mass spectrometry (MS) indicated closer agreement with Abbott monoclonal values than expected, suggesting some positive bias with MS. The benefit of using an adjusted monoclonal value is that a result equivalent to HPLC is obtained, as well as an indication of the concentration of metabolites from the Abbott polyclonal measurement.
Current-based detection of nonlocal spin transport in graphene for spin-based logic applications
NASA Astrophysics Data System (ADS)
Wen, Hua; Zhu, Tiancong; Luo, Yunqiu Kelly; Amamou, Walid; Kawakami, Roland K.
2014-05-01
Graphene has been proposed for novel spintronic devices due to its robust and efficient spin transport properties at room temperature. Some of the most promising proposals require current-based readout for integration purposes, but the current-based detection of spin accumulation has not yet been developed. In this work, we demonstrate current-based detection of spin transport in graphene using a modified nonlocal geometry. By adding a variable shunt resistor in parallel to the nonlocal voltmeter, we are able to systematically cross over from the conventional voltage-based detection to current-based detection. As the shunt resistor is reduced, the output current from the spin accumulation increases as the shunt resistance drops below a characteristic value R*. We analyze this behavior using a one-dimensional drift-diffusion model, which accounts well for the observed behavior. These results provide the experimental and theoretical foundation for current-based detection of nonlocal spin transport.
Order-restricted inference for means with missing values.
Wang, Heng; Zhong, Ping-Shou
2017-09-01
Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. Simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's disease neuroimaging initiative data set for finding a biomarker for the diagnosis of the Alzheimer's disease. © 2017, The International Biometric Society.
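The nonparametric imputation step can be sketched with a Nadaraya-Watson kernel-regression estimate, as below. The Gaussian kernel, bandwidth, and one-dimensional covariate are illustrative assumptions; the jackknife empirical likelihood test construction itself is omitted.

```python
import numpy as np

def kernel_impute(x, y, h=0.5):
    # Replace missing responses (NaN) with a kernel-regression estimate
    # built from the observed (x, y) pairs, under the MAR assumption.
    y = y.astype(float).copy()
    obs = ~np.isnan(y)
    for i in np.where(~obs)[0]:
        w = np.exp(-0.5 * ((x[obs] - x[i]) / h) ** 2)   # Gaussian weights
        y[i] = np.sum(w * y[obs]) / np.sum(w)           # NW estimate
    return y

x = np.array([0.1, 0.4, 0.5, 0.9, 1.2])
y = np.array([1.0, np.nan, 1.4, 2.1, np.nan])
print(kernel_impute(x, y))
```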
Identification of significant features by the Global Mean Rank test.
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2014-01-01
With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
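The core statistic, mean ranks across replicates with missing values ignored rather than imputed, can be sketched as follows; the false-discovery-rate estimation and null-model machinery of the full method are omitted, and names are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def mean_ranks(data):
    # data: (n_features, n_replicates); NaN marks missing values.
    ranks = np.full(data.shape, np.nan)
    for j in range(data.shape[1]):
        obs = ~np.isnan(data[:, j])
        ranks[obs, j] = rankdata(data[obs, j])   # rank within each replicate
    return np.nanmean(ranks, axis=1)             # mean rank per feature

data = np.array([[2.1, 1.8, np.nan],
                 [0.2, 0.1, 0.3],
                 [-1.5, -1.2, -1.4]])
print(mean_ranks(data))
```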
Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R
2012-10-21
In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust in predicting the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable for DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors. PMID:23010690
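The classification pipeline can be sketched as below: project each digitized pulse onto a Laguerre basis to obtain a short coefficient signature, then classify depth with LDA. The basis time-scaling, the number of coefficients, and the use of scikit-learn are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.special import eval_laguerre
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def laguerre_features(pulses, n_coeffs=6, t_scale=50.0):
    # pulses: (n_events, n_samples) digitized detector pulses.
    t = np.arange(pulses.shape[1]) / t_scale
    basis = np.array([np.exp(-t / 2) * eval_laguerre(n, t)
                      for n in range(n_coeffs)])   # Laguerre functions
    return pulses @ basis.T                        # one signature per pulse

def train_doi_classifier(pulses, depths):
    # depths: known irradiation-depth label for each calibration event.
    lda = LinearDiscriminantAnalysis()
    lda.fit(laguerre_features(pulses), depths)
    return lda
```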
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
ERIC Educational Resources Information Center
Smrtnik Vitulic, Helena; Zupancic, Maja
2013-01-01
The study investigated the predictive value of robust and specific personality traits in adolescents (M[subscript age] = 14.7 years) in explaining their academic achievement at the end of basic compulsory schooling. Personality data were obtained through self, maternal, and peer reports using the Inventory of Child/Adolescent Individual…
VCD Robustness of the Amide-I and Amide-II Vibrational Modes of Small Peptide Models.
Góbi, Sándor; Magyarfalvi, Gábor; Tarczay, György
2015-09-01
The rotational strengths and the robustness values of amide-I and amide-II vibrational modes of For(AA)n NHMe (where AA is Val, Asn, Asp, or Cys, n = 1-5 for Val and Asn; n = 1 for Asp and Cys) model peptides with α-helix and β-sheet backbone conformations were computed by density functional methods. The robustness results verify empirical rules drawn from experiments and from computed rotational strengths linking amide-I and amide-II patterns in the vibrational circular dichroism (VCD) spectra of peptides with their backbone structures. For peptides with at least three residues (n ≥ 3) these characteristic patterns from coupled amide vibrational modes have robust signatures. For shorter peptide models many vibrational modes are nonrobust, and the robust modes can be dependent on the residues or on their side chain conformations in addition to backbone conformations. These robust VCD bands, however, provide information for the detailed structural analysis of these smaller systems. © 2015 Wiley Periodicals, Inc.
Smith predictor with sliding mode control for processes with large dead times
NASA Astrophysics Data System (ADS)
Mehta, Utkal; Kaya, İbrahim
2017-11-01
The paper discusses the Smith Predictor scheme with Sliding Mode Controller (SP-SMC) for processes with large dead times. This technique gives improved load-disturbance rejection with optimal input control signal variations. A power rate reaching law is incorporated in the discontinuous part of the sliding mode control so that the overall performance improves meaningfully. The proposed scheme obtains parameter values by satisfying a new performance index based on a bi-objective constraint. In a simulation study, the efficiency of the method is evaluated against reported techniques for robustness and transient performance.
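The power rate reaching law has the standard form s_dot = -k * |s|^a * sgn(s) with 0 < a < 1, which drives the sliding variable to the surface quickly when far from it and softens the action near it. A discrete-time sketch with illustrative gains (not the paper's tuning):

```python
import numpy as np

def reaching_step(s, k=2.0, a=0.5, dt=0.01):
    # One Euler step of the power rate reaching law.
    return s - dt * k * (abs(s) ** a) * np.sign(s)

s = 1.0
for _ in range(200):
    s = reaching_step(s)
print(abs(s) < 1e-2)   # s has been driven close to the sliding surface
```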
Shi, Lei; Tuzer, T Umut; Fenollosa, Roberto; Meseguer, Francisco
2012-11-20
A new dielectric metamaterial building block based on high refractive index silicon spherical nanocavities with Mie resonances appearing in the near infrared optical region is prepared and characterized. It is demonstrated both experimentally and theoretically that a single silicon nanocavity supports well-defined and robust magnetic resonances, even in a liquid medium environment, at wavelength values up to six times larger than the cavity radius. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Puente, Gabriela F; Bonetto, Fabián J
2005-05-01
We used the temporal evolution of the bubble radius in single-bubble sonoluminescence to estimate the water liquid-vapor accommodation coefficient. The rapid changes in the bubble radius that occur during the bubble collapse and rebounds are a function of the actual value of the accommodation coefficient. We selected bubble radius measurements obtained from two different experimental techniques in conjunction with a robust parameter estimation strategy and we obtained that for water at room temperature the mass accommodation coefficient is in the confidence interval [0.217,0.329].
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact through replacement of multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Risk, Robustness and Water Resources Planning Under Uncertainty
NASA Astrophysics Data System (ADS)
Borgomeo, Edoardo; Mortazavi-Naeini, Mohammad; Hall, Jim W.; Guillod, Benoit P.
2018-03-01
Risk-based water resources planning is based on the premise that water managers should invest up to the point where the marginal benefit of risk reduction equals the marginal cost of achieving that benefit. However, this cost-benefit approach may not guarantee robustness under uncertain future conditions, for instance under climatic changes. In this paper, we expand risk-based decision analysis to explore possible ways of enhancing robustness in engineered water resources systems under different risk attitudes. Risk is measured as the expected annual cost of water use restrictions, while robustness is interpreted in the decision-theoretic sense as the ability of a water resource system to maintain performance—expressed as a tolerable risk of water use restrictions—under a wide range of possible future conditions. Linking risk attitudes with robustness allows stakeholders to explicitly trade off incremental increases in robustness with investment costs for a given level of risk. We illustrate the framework through a case study of London's water supply system using state-of-the-art regional climate simulations to inform the estimation of risk and robustness.
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
Conflicts in Coalitions: A Stability Analysis of Robust Multi-City Regional Water Supply Portfolios
NASA Astrophysics Data System (ADS)
Gold, D.; Trindade, B. C.; Reed, P. M.; Characklis, G. W.
2017-12-01
Regional cooperation among water utilities can improve the robustness of urban water supply portfolios to deeply uncertain future conditions such as those caused by climate change or population growth. Coordination mechanisms such as water transfers, coordinated demand management, and shared infrastructure, can improve the efficiency of resource allocation and delay the need for new infrastructure investments. Regionalization does however come at a cost. Regionally coordinated water supply plans may be vulnerable to any emerging instabilities in the regional coalition. If one or more regional actors does not cooperate or follow the required regional actions in a time of crisis, the overall system performance may degrade. Furthermore, when crafting regional water supply portfolios, decision makers must choose a framework for measuring the performance of regional policies based on the evaluation of the objective values for each individual actor. Regional evaluations may inherently favor one actor's interests over those of another. This work focuses on four interconnected water utilities in the Research Triangle region of North Carolina for which robust regional water supply portfolios have previously been designed using multi-objective optimization to maximize the robustness of the worst performing utility across several objectives. This study 1) examines the sensitivity of portfolio performance to deviations from prescribed actions by individual utilities, 2) quantifies the implications of the regional formulation used to evaluate robustness for the portfolio performance of each individual utility and 3) elucidates the inherent regional tensions and conflicts that exist between utilities under this regionalization scheme through visual diagnostics of the system under simulated drought scenarios. Results of this analysis will help inform the creation of future regional water supply portfolios and provide insight into the nature of multi-actor water supply systems.
Cascading failures in interdependent systems under a flow redistribution model
NASA Astrophysics Data System (ADS)
Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman
2018-02-01
Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i}, C_{A,i}}_{i=1}^{n} and {L_{B,i}, C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1−a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p_1 of lines in A and a fraction p_2 in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
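Since the abstract fully specifies the redistribution rule, one cascade step can be sketched directly. The equal-split reading of "redistributed to alive lines in B" and all load/capacity values below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def cascade(LA, CA, LB, CB, a, b, attack_A):
    # LA/LB: initial line loads; CA/CB: capacities; attack_A: failed A-lines.
    L = {"A": LA.astype(float).copy(), "B": LB.astype(float).copy()}
    C = {"A": CA, "B": CB}
    alive = {"A": np.ones(len(LA), bool), "B": np.ones(len(LB), bool)}
    alive["A"][attack_A] = False
    queue = [("A", i) for i in attack_A]
    frac = {"A": a, "B": b}            # cross-system redistribution fractions
    while queue:
        sys, i = queue.pop()
        other = "B" if sys == "A" else "A"
        shed, L[sys][i] = L[sys][i], 0.0
        for tgt, amount in ((other, frac[sys] * shed),
                            (sys, (1 - frac[sys]) * shed)):
            idx = np.where(alive[tgt])[0]
            if len(idx) == 0:
                continue
            L[tgt][idx] += amount / len(idx)          # equal redistribution
            over = idx[L[tgt][idx] > C[tgt][idx]]     # newly overloaded lines
            alive[tgt][over] = False
            queue += [(tgt, j) for j in over]
    return alive

rng = np.random.default_rng(0)
LA = rng.uniform(1, 2, 50); LB = rng.uniform(1, 2, 50)
alive = cascade(LA, 1.5 * LA, LB, 1.5 * LB, a=0.4, b=0.4, attack_A=[0, 1])
print(alive["A"].sum(), alive["B"].sum())   # surviving lines per system
```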
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated over a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
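The SHIP model's equations are not given in the abstract, so as a hedged illustration the sketch below implements the classical two-term Philip model used as a baseline in the comparison, together with the percent-bias and Nash-Sutcliffe metrics quoted above (the sorptivity S and steady-term K values are invented).

```python
import numpy as np

def philip_rate(t, S, K):
    """Philip two-term infiltration rate: f(t) = 0.5*S*t**-0.5 + K."""
    return 0.5 * S / np.sqrt(t) + K

def pbias(obs, sim):
    """Percent bias between observed and simulated series."""
    return 100.0 * (sim - obs).sum() / obs.sum()

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 means a perfect fit)."""
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

rng = np.random.default_rng(1)
t = np.linspace(0.01, 1.0, 60)                      # hours, short event
obs = philip_rate(t, S=2.0, K=0.5) * (1 + 0.05 * rng.standard_normal(t.size))
sim = philip_rate(t, S=2.0, K=0.5)
print(f"PBIAS = {pbias(obs, sim):.1f}%, NSE = {nse(obs, sim):.2f}")
```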
Direct and Absolute Quantification of over 1800 Yeast Proteins via Selected Reaction Monitoring*
Lawless, Craig; Holman, Stephen W.; Brownridge, Philip; Lanthaler, Karin; Harman, Victoria M.; Watkins, Rachel; Hammond, Dean E.; Miller, Rebecca L.; Sims, Paul F. G.; Grant, Christopher M.; Eyers, Claire E.; Beynon, Robert J.
2016-01-01
Defining intracellular protein concentration is critical in molecular systems biology. Although strategies for determining relative protein changes are available, defining robust absolute values in copies per cell has proven significantly more challenging. Here we present a reference data set quantifying over 1800 Saccharomyces cerevisiae proteins by direct means using protein-specific stable-isotope labeled internal standards and selected reaction monitoring (SRM) mass spectrometry, far exceeding any previous study. This was achieved by careful design of over 100 QconCAT recombinant proteins as standards, defining 1167 proteins in terms of copies per cell and upper limits on a further 668, with robust CVs routinely less than 20%. The selected reaction monitoring-derived proteome is compared with existing quantitative data sets, highlighting the disparities between methodologies. Coupled with a quantification of the transcriptome by RNA-seq taken from the same cells, these data support revised estimates of several fundamental molecular parameters: a total protein count of ∼100 million molecules-per-cell, a median of ∼1000 proteins-per-transcript, and a linear model of protein translation explaining 70% of the variance in translation rate. This work contributes a “gold-standard” reference yeast proteome (including 532 values based on high quality, dual peptide quantification) that can be widely used in systems models and for other comparative studies. PMID:26750110
An H-infinity approach to optimal control of oxygen and carbon dioxide contents in blood
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Selisteanu, Dan; Precup, Radu
2016-12-01
Nonlinear H-infinity control is proposed for the regulation of the levels of oxygen and carbon dioxide in the blood of patients undergoing heart surgery and extracorporeal blood circulation. The levels of blood gases are administered through a membrane oxygenator and the control inputs are the externally supplied oxygen, the aggregate gas supply (oxygen plus nitrogen), and the blood flow which is regulated by a blood pump. The proposed control method is based on linearization of the oxygenator's dynamical model through Taylor series expansion and the computation of Jacobian matrices. The local linearization points are defined by the present value of the oxygenator's state vector and the last value of the control input that was exerted on this system. The modelling errors due to linearization are considered as disturbances which are compensated by the robustness of the control loop. Next, for the linearized model of the oxygenator an H-infinity control input is computed at each iteration of the control algorithm through the solution of an algebraic Riccati equation. With the use of Lyapunov stability analysis it is demonstrated that the control scheme satisfies the H-infinity tracking performance criterion, which signifies improved robustness against modelling uncertainty and external disturbances. Moreover, under moderate conditions the asymptotic stability of the control loop is also proven.
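The per-iteration Riccati step can be sketched with SciPy's continuous-time ARE solver; the matrices below are placeholders rather than the oxygenator's actual linearization, and the gain is shown in its LQR form (the H-infinity version augments the equation with a disturbance-attenuation term).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized dynamics x' = A x + B u at the current operating
# point (the real A, B would come from the oxygenator's Jacobians);
# Q and R are tuning weights.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # state-feedback gain, u = -K x
print(K)
```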
Stabilization of business cycles of finance agents using nonlinear optimal control
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point, which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is also proven.
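Both this and the preceding abstract linearize around the current state and the last control input via Jacobians. A minimal finite-difference sketch, with a stand-in damped pendulum-like oscillator as the dynamics:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f at x."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        d = np.zeros(x.size)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

# Stand-in nonlinear dynamics x' = f(x, u), packed as z = [x1, x2, u].
def f(z):
    x1, x2, u = z
    return np.array([x2, -np.sin(x1) - 0.2 * x2 + u])

z0 = np.array([0.3, 0.0, 0.1])   # current state and last control input
AB = jacobian(f, z0)             # [A | B] of the local linear model
A, B = AB[:, :2], AB[:, 2:]
print(A, B, sep="\n")
```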
Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M
2016-07-01
We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI.
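The exact factorization setup is not specified in the abstract; a minimal sketch of the histogram-based idea is to stack local patch histograms into a matrix and factorize it, so that each NMF component acts as a candidate region distribution (synthetic two-region data below).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 5, 2000),    # synthetic 2-region image
                      rng.normal(160, 8, 2000)]).reshape(40, 100)

# Column-stack per-patch gray-level histograms (10x10 patches).
bins = np.linspace(0, 255, 33)
patches = [img[r:r + 10, c:c + 10]
           for r in range(0, 40, 10) for c in range(0, 100, 10)]
H = np.stack([np.histogram(p, bins=bins)[0] for p in patches],
             axis=1).astype(float)

# Factorize H ~ W @ A: columns of W are candidate region histograms,
# A gives each patch's mixing weights over the regions.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(H)
A = model.components_
print(W.shape, A.shape, f"reconstruction error = {model.reconstruction_err_:.2f}")
```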
GenInfoGuard--a robust and distortion-free watermarking technique for genetic data.
Iftikhar, Saman; Khan, Sharifullah; Anwar, Zahid; Kamran, Muhammad
2015-01-01
Genetic data, in digital format, is used in different biological phenomena such as DNA translation, mRNA transcription and protein synthesis. The accuracy of these biological phenomena depends on the genetic codes and all subsequent processes. To computerize the biological procedures, different domain experts are provided with authorized access to the genetic codes; as a consequence, the ownership protection of such data is indispensable. For this purpose, watermarks serve as proof of ownership of data. While protecting data, embedded hidden messages (watermarks) influence the genetic data; therefore, the accurate execution of the relevant processes and the overall result become questionable. Most DNA-based watermarking techniques modify the genetic data and are therefore vulnerable to information loss. Distortion-free techniques ensure that no modifications occur during watermarking; however, they are fragile to malicious attacks and therefore cannot be used for ownership protection (particularly in the presence of a threat model). Therefore, there is a need for a technique that is robust and also prevents unwanted modifications. In this spirit, a watermarking technique with the aforementioned characteristics is proposed in this paper. The proposed technique ensures that: (i) the ownership rights are protected by means of a robust watermark; and (ii) the integrity of the genetic data is preserved. The proposed technique-GenInfoGuard-ensures its robustness through "watermark encoding" in permuted values, and exhibits high decoding accuracy against various malicious attacks.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
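A hedged sketch of the correction strategies named above, using statsmodels with synthetic stand-in data: a Poisson GLM, the quasi-likelihood correction via an estimated Pearson scale, robust (sandwich) standard errors, and a negative binomial fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs = 500
X = sm.add_constant(rng.normal(size=(n_obs, 1)))
mu = np.exp(0.5 + 0.8 * X[:, 1])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")       # quasi-likelihood
robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")  # sandwich SEs
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

# Crude overdispersion check: Pearson chi-square / residual df >> 1.
print(poisson.pearson_chi2 / poisson.df_resid)
print(poisson.bse[1], quasi.bse[1], robust.bse[1], negbin.bse[1])
```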
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFT's), Linear Matrix Inequalities (LMI's), and the structured singular value μ. Most of these advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and methods for systems with constraints and saturations.
Ground System Architectures Workshop GMSEC SERVICES SUITE (GSS): an Agile Development Story
NASA Technical Reports Server (NTRS)
Ly, Vuong
2017-01-01
The GMSEC (Goddard Mission Services Evolution Center) Services Suite (GSS) is a collection of tools and software services along with a robust customizable web-based portal that enables the user to capture, monitor, report, and analyze system-wide GMSEC data. Given our plug-and-play architecture and the need for rapid system development, we opted to follow the Scrum Agile Methodology for software development. As one of the first few projects to implement the Agile methodology at NASA GSFC, we will present in this talk our approaches, tools, successes, and challenges in implementing this methodology. The GMSEC architecture provides a scalable, extensible ground and flight system for existing and future missions. GMSEC comes with a robust Application Programming Interface (GMSEC API) and a core set of Java-based GMSEC components that facilitate the development of a GMSEC-based ground system. Over the past few years, we have seen an uptick in the number of customers who are moving from a native desktop application environment to a web-based environment, particularly for data monitoring and analysis. We also see a need to provide separation of the business logic from the GUI display for our Java-based components, and to consolidate all the GUI displays into one interface. This combination of separation and consolidation brings immediate value to a GMSEC-based ground system through increased ease of data access via a uniform interface, built-in security measures, centralized configuration management, and ease of feature extensibility.
Valuing Reductions in Fatal Illness Risks: Implications of Recent Research.
Robinson, Lisa A; Hammitt, James K
2016-08-01
The value of mortality risk reductions, conventionally expressed as the value per statistical life, is an important determinant of the net benefits of many government policies. US regulators currently rely primarily on studies of fatal injuries, raising questions about whether different values might be appropriate for risks associated with fatal illnesses. Our review suggests that, despite the substantial expansion of the research base in recent years, few US studies of illness-related risks meet criteria for quality, and those that do yield similar values to studies of injury-related risks. Given this result, combining the findings of these few studies with the findings of the more robust literature on injury-related risks appears to provide a reasonable range of estimates for application in regulatory analysis. Our review yields estimates ranging from about $4.2 million to $13.7 million with a mid-point of $9.0 million (2013 dollars). Although the studies we identify differ from those that underlie the values currently used by Federal agencies, the resulting estimates are remarkably similar, suggesting that there is substantial consensus emerging on the values applicable to the general US population. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.
Pham, Le Thanh Mai; Kim, Su Jin; Kim, Yong Hwan
2016-01-01
Although lignin peroxidase is claimed to be a key enzyme in enzyme-catalyzed lignin degradation, in vitro enzymatic degradation of lignin has not been easily observed in lab-scale experiments. This implies that other factors may hinder the enzymatic degradation of lignin. An irreversible interaction between phenolic compounds and lignin peroxidase was hypothesized when active enzyme could not be recovered after the reaction with the degradation product (guaiacol) of a lignin phenolic dimer. In a study of lignin peroxidase isozyme H8 from the white-rot fungus Phanerochaete chrysosporium (LiPH8), the W251 site was shown by LC-MS/MS analysis to form a covalent coupling with one moiety of a monolignolic radical (guaiacol radical). A hypothetical electron relay containing the W251 residue was newly suggested based on the observation of repressed radical coupling and a remarkably lower electron transfer rate for the W251A mutant. Furthermore, retardation of the suicidal radical coupling between the W251 residue and the monolignolic radical was attempted by supplementing the acidic microenvironment around the W251 residue, in order to engineer a radical-robust LiPH8. Among many mutants, mutant A242D showed exceptional catalytic performance, yielding 21.1- and 4.9-fold increases in k_cat and k_cat/K_M values, respectively, in the oxidation of a non-phenolic model lignin dimer. A mechanism-based suicide inhibition of LiPH8 by phenolic compounds was first revealed and investigated in this work. A radical-robust LiPH8 was also successfully engineered by manipulating the transient radical state of the radical-susceptible electron relay. Radical-robust LiPH8 will play an essential role in the degradation of lignin, which will consequently be linked with improved production of sugars from lignocellulosic biomass.
Robust Inference of Risks of Large Portfolios
Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron
2016-01-01
We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
Moment-preserving thresholding (MPT) has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values that MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
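A sketch of two-level moment-preserving thresholding in the style of Tsai's closed form (a reconstruction, not the paper's code): solve the three preservation equations for the two representative values and the fraction assigned to the lower one, then form the RSS used for embedding.

```python
import numpy as np

def mpt_two_level(x):
    """Two-level moment-preserving thresholding of a signal block.

    Returns representative values (z0, z1) and the fraction p0 of
    samples assigned to z0, preserving the first three moments.
    """
    m1, m2, m3 = (np.mean(x ** k) for k in (1, 2, 3))
    det = m2 - m1 ** 2
    c0 = (m1 * m3 - m2 ** 2) / det
    c1 = (m1 * m2 - m3) / det
    disc = np.sqrt(c1 ** 2 - 4 * c0)
    z0, z1 = (-c1 - disc) / 2, (-c1 + disc) / 2   # roots of z^2 + c1 z + c0
    p0 = (z1 - m1) / (z1 - z0)
    return z0, z1, p0

rng = np.random.default_rng(0)
block = rng.normal(0.0, 1.0, 1024)        # stand-in audio block
z0, z1, p0 = mpt_two_level(block)
rss = np.hypot(z0, z1)                    # root-sum-square of the pair
print(z0, z1, p0, rss)
```

The watermark bit would then be embedded by quantizing this RSS value and rescaling the block accordingly.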
Park, Eunjung; Gintant, Gary A; Bi, Daoqin; Kozeli, Devi; Pettit, Syril D; Skinner, Matthew; Willard, James; Wisialowski, Todd; Koerner, John; Valentin, Jean‐Pierre
2018-01-01
Background and Purpose Translation of non‐clinical markers of delayed ventricular repolarization to clinical prolongation of the QT interval corrected for heart rate (QTc) (a biomarker for torsades de pointes proarrhythmia) remains an issue in drug discovery and regulatory evaluations. We retrospectively analysed 150 drug applications in a US Food and Drug Administration database to determine the utility of established non‐clinical in vitro IKr current human ether‐à‐go‐go‐related gene (hERG), action potential duration (APD) and in vivo (QTc) repolarization assays to detect and predict clinical QTc prolongation. Experimental Approach The predictive performance of three non‐clinical assays was compared with clinical thorough QT study outcomes based on free clinical plasma drug concentrations using sensitivity and specificity, receiver operating characteristic (ROC) curves, positive (PPVs) and negative predictive values (NPVs) and likelihood ratios (LRs). Key Results Non‐clinical assays demonstrated robust specificity (high true negative rate) but poor sensitivity (low true positive rate) for clinical QTc prolongation at low‐intermediate (1×–30×) clinical exposure multiples. The QTc assay provided the most robust PPVs and NPVs (ability to predict clinical QTc prolongation). ROC curves (overall test accuracy) and LRs (ability to influence post‐test probabilities) demonstrated overall marginal performance for hERG and QTc assays (best at 30× exposures), while the APD assay demonstrated minimal value. Conclusions and Implications The predictive value of hERG, APD and QTc assays varies, with drug concentrations strongly affecting translational performance. While useful in guiding preclinical candidates without clinical QT prolongation, hERG and QTc repolarization assays provide greater value compared with the APD assay. PMID:29181850
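The predictive-performance measures used in the analysis can all be computed from a 2x2 confusion table; the counts below are invented for illustration.

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and likelihood ratios."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Illustrative counts: assay-positive vs. clinically QTc-positive drugs.
print(diagnostics(tp=20, fp=5, fn=15, tn=110))
```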
Robust optimization in lung treatment plans accounting for geometric uncertainty.
Zhang, Xin; Rong, Yi; Morrill, Steven; Fang, Jian; Narayanasamy, Ganesh; Galhardo, Edvaldo; Maraboyina, Sanjay; Croft, Christopher; Xia, Fen; Penagaricano, Jose
2018-05-01
Robust optimization generates scenario-based plans by a minimax optimization method to find the optimal scenario for the trade-off between target coverage robustness and organ-at-risk (OAR) sparing. In this study, 20 lung cancer patients with tumors located at various anatomical regions within the lungs were selected, and robust optimization photon treatment plans, including intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) plans, were generated. The plan robustness was analyzed using perturbed doses with a setup error boundary of ±3 mm in anterior/posterior (AP), ±3 mm in left/right (LR), and ±5 mm in inferior/superior (IS) directions from isocenter. Perturbed doses for D99, D98, and D95 were computed from six shifted-isocenter plans to evaluate plan robustness. A dosimetric study was performed to compare the internal target volume-based robust optimization plans (ITV-IMRT and ITV-VMAT) and conventional PTV margin-based plans (PTV-IMRT and PTV-VMAT). The dosimetric comparison parameters were: ITV target mean dose (Dmean), R95 (D95/Dprescription), Paddick's conformity index (CI), homogeneity index (HI), monitor units (MU), and OAR doses including lung (Dmean, V20Gy and V15Gy), chest wall, heart, esophagus, and maximum cord doses. A comparison of optimization results showed the robust optimization plans had better ITV dose coverage, better CI, worse HI, and lower OAR doses than conventional PTV margin-based plans. Plan robustness evaluation showed that the perturbed doses of D99, D98, and D95 were all satisfactory, with at least 99% of the ITV receiving 95% of the prescription dose. It was also observed that PTV margin-based plans had higher MU than robust optimization plans. The results also showed robust optimization can generate plans that offer increased OAR sparing, especially for normal lungs and OARs near or abutting the target. A weak correlation was found between normal lung dose and target size, and no other correlation was observed in this study. © 2018 University of Arkansas for Medical Sciences. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Metcalfe, Paul J.; Baker, William; Andrews, Kevin; Atkinson, Giles; Bateman, Ian J.; Butler, Sarah; Carson, Richard T.; East, Jo; Gueron, Yves; Sheldon, Rob; Train, Kenneth
2012-03-01
Results are presented from a large-scale stated preference study designed to estimate the nonmarket benefits for households in England and Wales arising from the European Union Water Framework Directive (WFD). Multiple elicitation methods (a discrete choice experiment and two forms of contingent valuation) are employed, with the order in which they are asked randomly varied across respondents, to obtain a robust model for valuing specified WFD implementation programs applied to all of the lakes, reservoirs, rivers, canals, transitional, and coastal waters of England and Wales. The potential for subsequent policy incorporation and value transfer was enhanced by generating area-based values. These were found to vary from £2,263 to £39,168 per km2 depending on the population density around the location of the improvement, the ecological scope of that improvement, and the value elicitation method employed. While the former factors are consistent with expectations, the latter suggests that decision makers need to be aware of such methodological effects when employing derived values.
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth (RAW) is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed over the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty i) induces notable variability in the point estimate ranking and ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
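A hedged Monte Carlo sketch contrasting the point-estimate RAW with an expectation-based variant in the spirit of ERAW (the paper's exact definition may differ); the two-component risk model and the correlated epistemic distributions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(p1, p2):
    """Placeholder risk metric: both components of a parallel pair fail."""
    return p1 * p2

# Correlated epistemic distributions (no state-of-knowledge independence),
# so the expected risk metric differs from its nominal (point) value.
common = rng.beta(2, 20, 100_000)                       # shared uncertainty
p1 = np.clip(common * rng.uniform(0.5, 1.5, common.size), 0.0, 1.0)
p2 = np.clip(common * rng.uniform(0.5, 1.5, common.size), 0.0, 1.0)

nominal = risk(p1.mean(), p2.mean())
raw_1 = risk(1.0, p2.mean()) / nominal        # classical point-estimate RAW

expected = risk(p1, p2).mean()                # E[R] != R(E[p]) here
eraw_1 = risk(1.0, p2).mean() / expected      # expectation-based variant
print(f"RAW = {raw_1:.2f}, expectation-based = {eraw_1:.2f}")
```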
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang
2016-05-01
In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which suffers from an empirically predetermined, fixed maximum window size and the outlier sensitivity of the least-squares (LS) linear regression method, the BSW based singularity mapping approach can automatically determine the optimal size of the largest window for each estimated position, and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The utilization of the RLR method in the BSW approach smooths the distribution of singularity index values, leaving few of the noise-like, highly fluctuating values that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomaly and known tin polymetallic deposits. The target areas within high tin geochemical anomaly probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
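A hedged sketch of the window-based singularity estimate with robust regression: in 2-D, the mean concentration in a window of size w scales roughly as w^(alpha - 2), so alpha comes from the slope of a robust log-log fit. HuberRegressor stands in for the paper's RLR, and the BSW selection of the optimal largest window is omitted.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
grid = rng.lognormal(0.0, 1.0, (129, 129))   # stand-in geochemical grid

def singularity_index(grid, r, c, half_sizes=(1, 2, 4, 8, 16)):
    """Estimate alpha at cell (r, c): slope of log C(w) vs log w, plus 2."""
    logw, logc = [], []
    for h in half_sizes:
        win = grid[r - h:r + h + 1, c - h:c + h + 1]
        logw.append(np.log(2 * h + 1))
        logc.append(np.log(win.mean()))
    reg = HuberRegressor().fit(np.array(logw)[:, None], np.array(logc))
    return reg.coef_[0] + 2.0                # alpha - 2 is the slope

print(singularity_index(grid, 64, 64))      # ~2 means no local singularity
```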
Research of MPPT for photovoltaic generation based on two-dimensional cloud model
NASA Astrophysics Data System (ADS)
Liu, Shuping; Fan, Wei
2013-03-01
The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with expected value Ex, entropy En and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. This model is a new method for transforming between qualitative and quantitative knowledge. This paper introduces a maximum power point tracking (MPPT) controller based on a two-dimensional cloud model, developed through analysis of auto-optimizing MPPT control of a photovoltaic power system combined with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and offers better control performance.
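The (Ex, En, He) triple drives the forward normal cloud generator as commonly defined in the cloud model literature; the MPPT rule base itself is not specified in the abstract, so only the generator is sketched.

```python
import numpy as np

def normal_cloud(Ex, En, He, n, rng):
    """Forward normal cloud generator: n drops (x, certainty degree)."""
    En_prime = rng.normal(En, He, n)                   # second-order randomness
    x = rng.normal(Ex, np.abs(En_prime))               # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # certainty degree
    return x, mu

rng = np.random.default_rng(0)
drops, certainty = normal_cloud(Ex=0.0, En=1.0, He=0.1, n=5, rng=rng)
print(drops, certainty)
```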
Comparisons of Robustness and Sensitivity between Cancer and Normal Cells by Microarray Data
Chu, Liang-Hui; Chen, Bor-Sen
2008-01-01
Robustness is defined as the ability to uphold performance in the face of perturbations and uncertainties, and sensitivity is a measure of the system deviations generated by perturbations to the system. While cancer appears to be a robust but fragile system, little computational and quantitative evidence demonstrates robustness tradeoffs in cancer. Microarrays have been widely applied to decipher gene expression signatures in human cancer research, and quantification of global gene expression profiles facilitates precise prediction and modeling of cancer in systems biology. We provide several efficient computational methods based on system and control theory to compare robustness and sensitivity between cancer and normal cells using microarray data. Measurement of robustness and sensitivity by a linear stochastic model is introduced in this study, which shows oscillations in the feedback loops of p53 and demonstrates robustness tradeoffs, namely that cancer is a robust system with some extreme fragilities. In addition, we measure the sensitivity of gene expression to perturbations in other gene expression levels and kinetic parameters, discuss nonlinear effects in the feedback loops of p53 and extend our method to robustness-based cancer drug design. PMID:19259409
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
A scoring mechanism for the rank aggregation of network robustness
NASA Astrophysics Data System (ADS)
Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin
2013-10-01
To date, a number of metrics have been proposed to quantify the inherent robustness of network topology against failures. However, each single metric usually offers only a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in different orders, which is rather inconsistent, and no single metric fully characterizes network robustness against different modes of failure. To overcome such inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. This is based on the simultaneous utilization of a minimal set of distinct robustness metrics that are standardized so as to allow a direct comparison of vulnerability across networks with different sizes and configurations, hence leading to an initial scoring of inherent topology robustness. Subsequently, based on the inputs of the initial scoring, a rank aggregation method is employed to allocate an overall ranking of robustness to each network topology. A discussion is presented in support of the presented multi-metric approach and its applications to more realistically assess and rank network topology robustness.
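A minimal sketch of the two stages described: standardize several robustness metrics, then aggregate per-metric ranks. A Borda-style rule is used here; the paper's particular scoring and aggregation rules may differ.

```python
import numpy as np

# Rows: networks, columns: robustness metrics (higher = more robust).
metrics = np.array([[0.30, 12.0, 0.80],
                    [0.25, 15.0, 0.60],
                    [0.40,  9.0, 0.75],
                    [0.35, 14.0, 0.70]])

z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)   # standardize

# Borda-style aggregation: rank per metric, then sum ranks across metrics.
ranks = z.argsort(axis=0).argsort(axis=0)    # 0 = worst within each metric
borda = ranks.sum(axis=1)
order = np.argsort(-borda)                   # best network first
print(order, borda)
```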
Cirujeda, Pol; Muller, Henning; Rubin, Daniel; Aguilera, Todd A; Loo, Billy W; Diehn, Maximilian; Binefa, Xavier; Depeursinge, Adrien
2015-01-01
In this paper we present a novel technique for characterizing and classifying 3D textured volumes belonging to different lung tissue types in 3D CT images. We build a volume-based 3D descriptor, robust to changes of size, rigid spatial transformations and texture variability, thanks to the integration of Riesz-wavelet features within a Covariance-based descriptor formulation. 3D Riesz features characterize the morphology of tissue density due to their response to changes in intensity in CT images. These features are encoded in a Covariance-based descriptor formulation: this provides a compact and flexible representation thanks to the use of feature variations rather than dense features themselves, and adds robustness to spatial changes. Furthermore, the particular symmetric positive definite matrix form of these descriptors means that they lie on a Riemannian manifold. Thus, descriptors can be compared with analytical measures, and accurate techniques from machine learning and clustering can be adapted to their spatial domain. Additionally, we present a classification model following a "Bag of Covariance Descriptors" paradigm in order to distinguish three different nodule tissue types in CT: solid, ground-glass opacity, and healthy lung. The method is evaluated on an acquired dataset of 95 patients with ground truth manually delineated in 3D by radiation oncology specialists, and quantitative sensitivity and specificity values are presented.
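A sketch of the covariance-descriptor idea with a log-Euclidean manifold distance; random vectors stand in for the Riesz-wavelet features, which would need a dedicated implementation.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features):
    """SPD covariance of per-voxel feature vectors (rows = voxels)."""
    c = np.cov(features, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])   # regularize to stay SPD

def log_euclidean_distance(c1, c2):
    """Distance between SPD descriptors on the Riemannian manifold."""
    return np.linalg.norm(logm(c1) - logm(c2), ord="fro")

rng = np.random.default_rng(0)
volume_a = rng.normal(size=(500, 6))       # stand-in 6-D texture features
volume_b = rng.normal(size=(500, 6)) * 1.5
d = log_euclidean_distance(covariance_descriptor(volume_a),
                           covariance_descriptor(volume_b))
print(d)
```

Because the descriptors live on a manifold rather than in a vector space, this matrix-log distance (or an affine-invariant alternative) replaces the plain Euclidean distance inside clustering and bag-of-descriptors pipelines.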
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
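The excess-green step is standard (ExG = 2g - r - b on chromaticity-normalized channels); the statistical threshold computation used by the authors is not detailed in this truncated abstract, so an Otsu-style threshold stands in.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized channels."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-9
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    return 2 * g - r - b

def otsu_threshold(x, nbins=256):
    """Threshold maximizing between-class variance over a 1-D array."""
    hist, edges = np.histogram(x, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    m0 = np.cumsum(p * centers)
    mt = m0[-1]
    between = (mt * w0 - m0) ** 2 / (w0 * (1 - w0) + 1e-12)
    return centers[np.argmax(between)]

rgb = np.random.default_rng(0).random((120, 160, 3))   # stand-in image
exg = excess_green(rgb)
mask = exg > otsu_threshold(exg.ravel())               # candidate vegetation
print(mask.mean())
```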
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
Assessing value-based health care delivery for haemodialysis.
Parra, Eduardo; Arenas, María Dolores; Alonso, Manuel; Martínez, María Fernanda; Gamen, Ángel; Aguarón, Juan; Escobar, María Teresa; Moreno-Jiménez, José María; Alvarez-Ude, Fernando
2017-06-01
Disparities in haemodialysis outcomes among centres have been well documented. Moreover, attempts to assess haemodialysis results have been based on non-comprehensive methodologies. This study aimed to develop a comprehensive methodology for assessing haemodialysis centres, based on the value of health care. The value of health care is defined as the patient benefit from a specific medical intervention per monetary unit invested (Value = Patient Benefit/Cost). This study assessed the value of health care and ranked different haemodialysis centres. A nephrology quality management group identified the criteria for the assessment. An expert group composed of stakeholders (patients, clinicians and managers) agreed on the weighting of each variable, considering values and preferences. Multi-criteria methodology was used to analyse the data. Four criteria and their weights were identified: evidence-based clinical performance measures = 43 points; yearly mortality = 27 points; patient satisfaction = 13 points; and health-related quality of life = 17 points (100-point scale). Evidence-based clinical performance measures included five sub-criteria, with respective weights: dialysis adequacy; haemoglobin concentration; mineral and bone disorders; type of vascular access; and hospitalization rate. The patient benefit was determined from co-morbidity-adjusted results and corresponding weights. The cost of each centre was calculated as the average amount expended per patient per year. The study was conducted in five centres (1-5). After adjusting for co-morbidity, the value of health care was calculated, and the centres were ranked. A multi-way sensitivity analysis that considered different weights (10-60% changes) and costs (changes of 10% in direct and 30% in allocated costs) showed that the methodology was robust. The rankings 4-5-3-2-1 and 4-3-5-2-1 were observed in 62.21% and 21.55% of simulations, respectively, when weights were varied by 60%. Value assessments may integrate divergent stakeholder perceptions, create a context for improvement and aid in policy-making decisions. © 2015 John Wiley & Sons, Ltd.
Simonsohn, Uri; Simmons, Joseph P; Nelson, Leif D
2015-12-01
When studies examine true effects, they generate right-skewed p-curves: distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness. (c) 2015 APA, all rights reserved.
Teaching Robust Methods for Exploratory Data Analysis.
1980-10-01
of adding a new point x to a sample x_1, ..., x_n. The influence function of the estimate θ at the value x is defined to be the suitably scaled change in θ caused by that addition. For example, if θ is the mean (Σ x_i)/n, we can calculate I(x) = x − x̄. Plotting this, we see that the mean has an unbounded influence function, and is therefore not robust.
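The bounded-versus-unbounded contrast can be checked numerically with an empirical sensitivity curve: add one point x to a fixed sample and track the scaled shift in the estimate; the mean drifts without bound while the median saturates.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(0, 1, 99)

def sensitivity(est, x):
    """Empirical influence of adding point x: (n + 1) times the shift."""
    n = sample.size
    return (n + 1) * (est(np.append(sample, x)) - est(sample))

for x in (0.0, 5.0, 50.0, 500.0):
    print(x, sensitivity(np.mean, x), sensitivity(np.median, x))
```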
The Analysis of Design of Robust Nonlinear Estimators and Robust Signal Coding Schemes.
1982-09-16
(b − a)²/12 ... between uniform and nonuniform quantizers. For the nonuniform quantizer we can expect the mean-square error to ... We define f^(n)(s) as the n-times filtered signal. Proof: if ... then ... in the window greater than or equal to the value at p + 1; consequently, point p + 1 is the median and ...
WTA estimates using the method of paired comparison: tests of robustness
Patricia A. Champ; John B. Loomis
1998-01-01
The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...
NASA Astrophysics Data System (ADS)
Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong
2014-09-01
In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are simultaneously restrained. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.
Chen, Bor-Sen; Hsu, Chih-Yuan
2012-10-26
Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order for a biological phenomenon to emerge. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic in nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on the Hamilton-Jacobi inequality (HJI). We found that if the synchronization robustness can confer enough intrinsic robustness to tolerate intrinsic parameter fluctuation and extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells cannot be maintained, then robust synchronization can be enhanced by an external control input through quorum sensing molecules. In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, the fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system, so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which can easily be solved with the help of the LMI toolbox in MATLAB. If the synchronization robustness criterion, i.e. synchronization robustness ≥ intrinsic robustness + extrinsic robustness, is satisfied, then the stochastic coupled synthetic oscillators can be robustly synchronized in spite of intrinsic parameter fluctuation and extrinsic noise. If the synchronization robustness criterion is violated, an external control scheme that adds an inducer can be designed to improve the synchronization robustness of the coupled synthetic genetic oscillators. The investigated robust synchronization criteria and proposed external control method are useful for a population of coupled synthetic networks with emergent synchronization behavior, especially for multi-cellular, engineered networks.
The Economic and Social Value of an Image Exchange Network: A Case for the Cloud.
Mayo, Ray Cody; Pearson, Kathryn L; Avrin, David E; Leung, Jessica W T
2017-01-01
As the health care environment continually changes, radiologists look to the ACR's Imaging 3.0 ® initiative to guide the search for value. By leveraging new technology, a cloud-based image exchange network could provide secure universal access to prior images, which were previously siloed, to facilitate accurate interpretation, improved outcomes, and reduced costs. The breast imaging department represents a viable starting point given the robust data supporting the benefit of access to prior imaging studies, existing infrastructure for image sharing, and the current workflow reliance on prior images. This concept is scalable not only to the remainder of the radiology department but also to the broader medical record. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Analytical design of modified Smith predictor for unstable second-order processes with time delay
NASA Astrophysics Data System (ADS)
Ajmeri, Moina; Ali, Ahmad
2017-06-01
In this paper, a modified Smith predictor using three controllers, namely stabilising (Gc), set-point tracking (Gc1), and load disturbance rejection (Gc2) controllers, is proposed for second-order unstable processes with time delay. Controllers of the proposed structure are tuned using a direct synthesis approach, as this method enables the user to achieve a trade-off between performance and robustness by adjusting a single design parameter. Furthermore, suitable values of the tuning parameters are recommended after studying their effect on the closed-loop performance and robustness. This is the main advantage of the proposed work over other recently published manuscripts, whose authors provide only suitable ranges for the tuning parameters instead of specific recommended values. Simulation studies show that the proposed method results in satisfactory performance and improved robustness as compared to recently reported control schemes. It is observed that the proposed scheme is also able to work in a noisy environment.
Khan, Arshad; Sarkar, Dhiman
2008-04-01
This study aimed at developing a whole-cell based high throughput screening protocol to identify inhibitors against both active and dormant tubercle bacilli. A respiratory-type nitrate reductase (NarGHJI), which is induced during dormancy, could reflect the viability of dormant bacilli of Mycobacterium bovis BCG in a microplate-adapted model of in vitro dormancy. The correlation between reduction in viability and nitrate reductase activity was seen clearly when the dormant-stage inhibitors metronidazole and itaconic anhydride were applied in this in vitro microplate model. The active replicating stage could also be monitored in the same assay by measuring the A(620) of the culture. MIC values of 0.08, 0.075, 0.3 and 3.0 microg/ml, determined by monitoring A(620) in this assay for rifampin, isoniazid, streptomycin and ethambutol respectively, were in good agreement with values previously reported by BACTEC and Bio-Siv assays. The S/N ratio and Z' factor for the assay were 8.5 and 0.81 respectively, which indicated the robustness of the protocol. Altogether the assay provides an easy, inexpensive, rapid, robust and high-content screening tool for the discovery of novel antitubercular molecules against both active and dormant bacilli.
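The screening-robustness statistics quoted are the signal-to-noise ratio and the Z' factor of Zhang et al. (1999); a sketch with synthetic control wells follows (one common S/N definition is used; conventions vary).

```python
import numpy as np

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(0)
pos = rng.normal(1.00, 0.03, 32)   # stand-in positive-control absorbances
neg = rng.normal(0.10, 0.03, 32)   # stand-in negative-control absorbances
snr = (pos.mean() - neg.mean()) / neg.std(ddof=1)
print(f"S/N = {snr:.1f}, Z' = {z_prime(pos, neg):.2f}")
```

A Z' above roughly 0.5 is conventionally read as an excellent assay window, consistent with the 0.81 reported above.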
Motion estimation of magnetic resonance cardiac images using the Wigner-Ville and hough transforms
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Bayerl, P.; Neumann, H.
2007-12-01
Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation of the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach. More specifically, it relies on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results in the case of synthetic sequences are compared with an implementation of the variational technique for local and global motion estimation, where it is shown that the results are accurate and robust to noise degradations. Results obtained with real cardiac magnetic resonance images are also presented.
NASA Astrophysics Data System (ADS)
Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming
2015-11-01
Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals, the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay-tap samples. However, it is found that when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced, because AAM has a significant impact on the ACF2 periodicity, making the symbol period harder, or even impossible, to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used in place of ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed to accommodate both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay-tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to desired values.
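A hedged sketch of the sample-autocorrelation idea: for a return-to-zero OOK power waveform the sample ACF is periodic at the symbol period, so its first strong peak recovers the symbol rate. The rates, pulse shape and the absence of noise and dispersion are all simplifying assumptions; NRZ signals lack this first-order periodicity, which is where the higher-order statistics discussed above come in.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 80e9                          # sampling rate (assumed)
rb = 10e9                          # true symbol rate, to be recovered
sps = int(fs / rb)                 # samples per symbol (8 here)
bits = rng.integers(0, 2, 1024)

# RZ-OOK power waveform: each '1' occupies half a symbol slot.
pulse = np.r_[np.ones(sps // 2), np.zeros(sps - sps // 2)]
power = np.kron(bits.astype(float), pulse)

x = power - power.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided ACF
acf /= acf[0]

# First local maximum above a threshold marks the symbol period.
lag = next(k for k in range(2, 200)
           if acf[k] > 0.2 and acf[k] >= acf[k - 1] and acf[k] >= acf[k + 1])
print(f"estimated symbol rate: {fs / lag / 1e9:.1f} GBd")   # ~10 GBd
```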
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse representation-based visual trackers mostly suffer from being time-consuming and lacking robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to yield more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. The matrix-form solution also allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
Tracking boundary movement and exterior shape modelling in lung EIT imaging.
Biguri, A; Grychtol, B; Adler, A; Soleimani, M
2015-06-01
Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine whether a patient-specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement, and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that dynamic boundary tracking is the most robust method against any movement, but it is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared with conductivity-only reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.
Automatic segmentation of the left ventricle cavity and myocardium in MRI data.
Lynch, M; Ghita, O; Whelan, P F
2006-04-01
A novel approach for automatic segmentation has been developed to extract the epicardium and endocardium boundaries of the left ventricle (LV) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instant in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the LV cavity. From cardiac anatomy, the closest pool of blood to the LV cavity is the right ventricle cavity. The wall between these two blood pools (the interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriately robust segments of the epicardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-set method.
2012-09-01
Robust global image registration based on a hybrid algorithm combining Fourier and spatial domain techniques
Crabtree, Peter N.; Seanor, Collin
Results demonstrate performance of a hybrid algorithm; these results are from analysis of a set of images of an ISO 12233 [12] resolution chart captured in the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy and a fixed clonogen density of 10⁷ cm⁻³. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
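To make the workflow concrete, here is a minimal sketch in the same PyMC3 framework the abstract names, using find_MAP initialization and the Slice sampler on a deliberately simplified Poisson TCP model (no proliferation term, binomial likelihood); the dose grid, cohort counts, and priors are invented placeholders, not the study's actual model or data:

```python
import numpy as np
import pymc3 as pm

# Hypothetical dose-response data: EQD2 bins (Gy), patients, local controls
doses  = np.array([45.0, 55.0, 65.0, 75.0, 85.0])
n_pats = np.array([120, 150, 160, 110, 80])
n_ctrl = np.array([54, 90, 121, 95, 74])

with pm.Model() as tcp_model:
    # Illustrative priors -- not those used in the study
    alpha  = pm.Lognormal("alpha", mu=np.log(0.3), sigma=0.5)  # radiosensitivity (1/Gy)
    log_n0 = pm.Normal("log_n0", mu=np.log(1e7), sigma=2.0)    # log clonogen number

    # Simplified Poisson TCP: exp(-N0 * exp(-alpha * D)); proliferation omitted
    tcp = pm.math.exp(-pm.math.exp(log_n0 - alpha * doses))
    pm.Binomial("obs", n=n_pats, p=tcp, observed=n_ctrl)

    # Initialize at the posterior mode, then draw with the Slice sampler
    start = pm.find_MAP()
    trace = pm.sample(2000, step=pm.Slice(), start=start, cores=1)

print(pm.summary(trace))
```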
Role of the anesthesiologist in the wider governance of healthcare and health economics.
Martin, Janet; Cheng, Davy
2013-09-01
Healthcare resources will always be limited, and as a result, difficult decisions must be made about how to allocate limited resources across unlimited demands in order to maximize health gains per resource expended. Governments and hospitals now in severe financial deficit recognize that reengagement of physicians is central to their ability to contain runaway healthcare costs. Health economic analysis provides tools and techniques to assess which investments in healthcare provide good value for money vs which options should be forgone. Robust decision-making in healthcare requires objective consideration of evidence in order to balance clinical and economic benefits vs risks. Surveys of the literature reveal very few economic analyses related to anesthesia and perioperative medicine despite increasing recognition of the need. Now is an opportune time for anesthesiologists to become familiar with the tools and methodologies of health economics in order to facilitate and lead robust decision-making in quality-based procedures. For most technologies used in anesthesia and perioperative medicine, the responsibility to determine cost-effectiveness falls to those tasked with the governance and stewardship of limited resources for unlimited demands using best evidence plus economics at the local, regional, and national levels. Applicable cost-effectiveness, cost-utility, and cost-benefit analyses in health economics are reviewed in this article with clinical examples in anesthesia. Anesthesiologists can make a difference in the wider governance of healthcare and health economics if we advance our knowledge and skills beyond the technical to address the "other" dimensions of decision-making--most notably, the economic aspects in a value-based healthcare system.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.
Qi, Jun; Liu, Guo-Ping
2017-11-06
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, combined with an envelope detection filter, estimates the envelope value from the sampled values on both sides using the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot when the UIPS works on the line-of-sight (LOS) signal.
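To illustrate the position computation step (beacon coordinates plus TOF-derived distances), here is a minimal linearized least-squares trilateration sketch; the beacon layout and the noise-free ranges are made-up values, and the system's actual estimator may differ:

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares position from beacon coordinates and measured distances.

    Subtracting the first sphere equation from the others linearizes the
    problem into A x = b, solved with numpy's least-squares routine.
    """
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical 3-D beacon positions (m), deliberately non-coplanar
beacons = np.array([[0.0, 0.0, 2.5], [4.0, 0.0, 2.7],
                    [4.0, 3.0, 2.4], [0.0, 3.0, 3.0]])
target  = np.array([1.2, 1.8, 0.3])
ranges  = np.linalg.norm(beacons - target, axis=1)  # noise-free for the demo
print(trilaterate(beacons, ranges))                 # ~ [1.2, 1.8, 0.3]
```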
Brown, Gary C; Brown, Melissa M; Campanella, Joseph; Beauchamp, George R
2005-10-01
To assess the value conferred by photodynamic therapy (PDT) and the cost-utility of PDT for the treatment of classic, subfoveal choroidal neovascularization associated with age-related macular degeneration (ARMD). Average cost-utility analysis utilizing clinical trial data, patient-based time tradeoff utility preferences, and a third-party insurer cost perspective. Five-year visual acuity data from the TAP (Treatment of Age-related Macular Degeneration With Photodynamic Therapy) Investigation were modeled into a 12-year, value-based, reference case, cost-utility model utilizing year 2004 Medicare costs and an outcome of $/QALY (dollars per quality-adjusted life-year). Discounting of outcomes and costs using net present value analysis with a 3% annual rate was performed as recommended by the Panel for Cost-Effectiveness in Health and Medicine. PDT with verteporfin (Visudyne) dye for classic subfoveal choroidal neovascularization confers an 8.1% quality of life (value) improvement over the 12-year life expectancy of the reference case, while during the last 8 years the value improvement is 9.5%. The average cost-utility of the intervention is $31,103/QALY. Extensive one-way sensitivity analysis values range from $20,736/QALY if treatment efficacy is increased by 50% to $62,207/QALY if treatment efficacy is decreased by 50%, indicating robustness of the model. PDT using verteporfin dye to treat classic subfoveal choroidal neovascularization is a very cost-effective treatment by conventional standards. The marked improvement in cost-effectiveness compared with a previous report results from the facts that the treatment benefit increasingly accrues during 5 years of follow-up while the number of yearly treatments diminishes markedly during that time.
Taimouri, Vahid; Afacan, Onur; Perez-Rossello, Jeannette M.; Callahan, Michael J.; Mulkern, Robert V.; Warfield, Simon K.; Freiman, Moti
2015-01-01
Purpose: To evaluate the effect of the spatially constrained incoherent motion (SCIM) method on improving the precision and robustness of fast and slow diffusion parameter estimates from diffusion-weighted MRI in the liver and spleen, in comparison to the independent voxel-wise intravoxel incoherent motion (IVIM) model. Methods: We collected diffusion-weighted MRI (DW-MRI) data from 29 subjects (5 healthy subjects and 24 patients with Crohn's disease in the ileum). We evaluated the robustness of parameter estimates against different combinations of b-values (i.e., 4 b-values and 7 b-values) by comparing the variance of the estimates obtained with the SCIM and the independent voxel-wise IVIM model. We also evaluated the improvement in the precision of parameter estimates by comparing the coefficient of variation (CV) of the SCIM parameter estimates to that of the IVIM. Results: The SCIM method was more robust compared to IVIM (up to 70% in liver and spleen) for different combinations of b-values. Also, the CV values of the parameter estimates using the SCIM method were significantly lower compared to repeated acquisition and signal averaging estimated using IVIM, especially for the fast diffusion parameter in the liver (CV_IVIM = 46.61 ± 11.22, CV_SCIM = 16.85 ± 2.160, p < 0.001) and spleen (CV_IVIM = 95.15 ± 19.82, CV_SCIM = 52.55 ± 1.91, p < 0.001). Conclusions: The SCIM method characterizes fast and slow diffusion more precisely than independent voxel-wise IVIM model fitting in the liver and spleen. PMID:25832079
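For reference, the voxel-wise model that SCIM regularizes is the standard bi-exponential IVIM signal equation (standard in the IVIM literature; the abstract itself does not spell it out):

```latex
% standard IVIM bi-exponential decay (not quoted from the paper)
S(b) = S_0 \left[ f\, e^{-b D^{*}} + (1 - f)\, e^{-b D} \right]
```

where f is the fast (perfusion) fraction, D* the fast pseudo-diffusion coefficient, D the slow diffusion coefficient, and b the diffusion weighting.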
NASA Astrophysics Data System (ADS)
Chakraborty, Amitav; Roy, Sumit; Banerjee, Rahul
2018-03-01
This experimental work highlights the inherent capability of an adaptive neuro-fuzzy inference system (ANFIS) based model to act as a robust system identification tool (SIT) for predicting the performance and emission parameters of an existing diesel engine running in diesel-LPG dual-fuel mode. The developed model proved its adeptness by successfully capturing the effects of the input parameters of load, injection duration and LPG energy share on the output parameters BSFCEQ, BTE, NOX, SOOT, CO and HC. Successive evaluation of the ANFIS model revealed high levels of agreement with the previously forecasted ANN results for the same input parameters, and it was evident that, similar to ANN, ANFIS also has the innate ability to act as a robust SIT. The ANFIS-predicted data matched the experimental data with high overall accuracy. The correlation coefficient (R) values ranged from 0.99207 to 0.999988. The mean absolute percentage error (MAPE) values were in the range of 0.02-0.173%, with root mean square errors (RMSE) within acceptable margins. Hence the developed model is capable of emulating the actual engine parameters with commendable accuracy, which in turn would make it a robust prediction platform in future optimization work.
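The reported goodness-of-fit measures are standard; a small sketch of how R, MAPE, and RMSE would typically be computed for predicted-versus-measured engine outputs (the arrays below are placeholders, not the paper's data):

```python
import numpy as np

def fit_metrics(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    r = np.corrcoef(measured, predicted)[0, 1]                         # correlation coefficient
    mape = 100.0 * np.mean(np.abs((measured - predicted) / measured))  # percent error
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))               # root mean square error
    return r, mape, rmse

# Placeholder brake thermal efficiency values (%) for illustration only
measured  = [28.1, 29.4, 30.2, 31.0, 31.5]
predicted = [28.0, 29.5, 30.1, 31.1, 31.4]
print(fit_metrics(measured, predicted))
```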
Automated detection of microaneurysms using robust blob descriptors
NASA Astrophysics Data System (ADS)
Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.
2013-03-01
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and appear as round, dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using the Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions that are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods, as well as the promise of the proposed descriptors for the localization of MAs in fundus images.
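As a rough stand-in for the Hessian-based candidate selection stage, scikit-image's determinant-of-Hessian blob detector produces comparable small-blob candidates; the file name, channel choice, sigma range, and threshold below are assumptions, not the authors' settings:

```python
import numpy as np
from skimage import io
from skimage.feature import blob_doh

# Load a fundus photograph; MAs appear as small dark round blobs, so detect
# on the inverted green channel (a common convention, assumed here).
rgb = io.imread("fundus.png")
green = rgb[..., 1].astype(float) / 255.0
inverted = 1.0 - green

# Determinant-of-Hessian blob detection: one (row, col, sigma) per candidate
candidates = blob_doh(inverted, min_sigma=1, max_sigma=6, threshold=0.002)
print(f"{len(candidates)} candidate MA regions")
for r, c, sigma in candidates[:5]:
    print(f"candidate at ({r:.0f}, {c:.0f}), scale {sigma:.1f}")
```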
Jeuken, Judith; Sijben, Angelique; Alenda, Cristina; Rijntjes, Jos; Dekkers, Marieke; Boots-Sprenger, Sandra; McLendon, Roger; Wesseling, Pieter
2009-10-01
Epidermal growth factor receptor (EGFR) is commonly affected in cancer, generally in the form of an increase in DNA copy number and/or as mutation variants [e.g., EGFR variant III (EGFRvIII), an in-frame deletion of exons 2-7]. While detection of EGFR aberrations can be expected to be relevant for glioma patients, such analysis has not yet been implemented in a routine setting, in part because feasible and robust assays were lacking. We evaluated multiplex ligation-dependent probe amplification (MLPA) for detection of EGFR amplification and EGFRvIII in DNA of a spectrum of 216 diffuse gliomas. EGFRvIII detection was verified at the protein level by immunohistochemistry and at the RNA level using the conventionally used endpoint RT-PCR as well as a newly developed quantitative RT-PCR. Compared to these techniques, the DNA-based MLPA assay tested for EGFR/EGFRvIII analysis showed 100% sensitivity and specificity. We conclude that MLPA is a robust assay for the detection of EGFR/EGFRvIII aberrations. While the exact diagnostic, prognostic and predictive value of such EGFR testing remains to be seen, MLPA has great potential as it can reliably and relatively easily be performed on routinely processed (formalin-fixed, paraffin-embedded) tumor tissue in combination with testing for other relevant glioma markers.
Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; Sekine, Masashi; González, José; Gu, Dongyun; Chen, Weidong; Yu, Wenwei
2014-01-01
Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need. The main task of the robot is to detect and track a designated subject while recognizing his/her activity for analysis, and to provide a warning in an emergency. In order to push the system towards real application, in this study we tested the robustness of the robot system under several major environment changes, control parameter changes, and subject variation. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, the main objective of the system is to detect walking activities of different patterns for further analysis. Therefore, we propose a fast, simple, and person-specific new activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy. PMID:25587560
Robust reconstruction of B1(+) maps by projection into a spherical functions space.
Sbrizzi, Alessandro; Hoogduin, Hans; Lagendijk, Jan J; Luijten, Peter; van den Berg, Cornelis A T
2014-01-01
Several parallel transmit MRI techniques require knowledge of the transmit radiofrequency field profiles (B1(+)). During the past years, various methods have been developed to acquire this information. Often, these methods suffer from long measurement times and produce maps exhibiting regions with poor signal-to-noise ratio and artifacts. In this article, a model-based reconstruction procedure is introduced that improves the robustness of B1(+) mapping. The missing information from undersampled B1(+) maps and the regions of poor signal-to-noise ratio are reconstructed through projection into the space of spherical functions that arise naturally from the solution of the Helmholtz equations in the spherical coordinate system. As a result, B1(+) data over a limited range of the field of view/volume is sufficient to reconstruct the B1(+) over the full spatial domain in a fast and robust way. The same model is exploited to filter the noise of the measured maps. Results from simulations and in vivo measurements confirm the validity of the proposed method. A spherical functions model can well approximate the magnetic fields inside the body with few basis terms. Exploiting this compression capability, B1(+) maps are reconstructed in regions of unknown or corrupted values. Copyright © 2013 Wiley Periodicals, Inc.
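A condensed sketch of the projection idea: build a basis of Helmholtz solutions j_n(kr)Y_n^m at the measured voxel positions, fit the reliable B1(+) samples by least squares, and evaluate the expansion wherever data are missing or corrupted. The wavenumber, truncation order, and synthetic data below are placeholder assumptions, not the paper's configuration:

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn

def helmholtz_basis(r, az, pol, k, n_max):
    """Columns j_n(k r) * Y_n^m(az, pol) for n <= n_max, |m| <= n."""
    cols = [spherical_jn(n, k * r) * sph_harm(m, n, az, pol)
            for n in range(n_max + 1) for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

# Placeholder geometry: voxel positions in spherical coordinates
rng = np.random.default_rng(0)
N = 500
r   = rng.uniform(0.0, 0.12, N)        # radius (m)
az  = rng.uniform(0.0, 2 * np.pi, N)   # azimuth
pol = rng.uniform(0.0, np.pi, N)       # polar angle
k = 2 * np.pi * 128e6 / 3e8 * 9        # rough in-tissue wavenumber at 3 T (assumed)

A = helmholtz_basis(r, az, pol, k, n_max=4)
b1_measured = A @ rng.standard_normal(A.shape[1])   # synthetic "measured" map

good = rng.random(N) > 0.4             # pretend 40% of voxels are unreliable
coef, *_ = np.linalg.lstsq(A[good], b1_measured[good], rcond=None)
b1_full = A @ coef                     # reconstruction over the full domain
print(np.max(np.abs(b1_full - b1_measured)))   # ~0 on this synthetic example
```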
Robust doubly charged nodal lines and nodal surfaces in centrosymmetric systems
NASA Astrophysics Data System (ADS)
Bzdušek, Tomáš; Sigrist, Manfred
2017-10-01
Weyl points in three spatial dimensions are characterized by a Z-valued charge—the Chern number—which makes them stable against a wide range of perturbations. A set of Weyl points can mutually annihilate only if their net charge vanishes, a property we refer to as robustness. While nodal loops are usually not robust in this sense, it has recently been shown using homotopy arguments that in the centrosymmetric extension of the AI symmetry class they nevertheless develop a Z2 charge analogous to the Chern number. Nodal loops carrying a nontrivial value of this Z2 charge are robust, i.e., they can be gapped out only by a pairwise annihilation and not on their own. As this is an additional charge independent of the Berry π-phase flowing along the band degeneracy, such nodal loops are, in fact, doubly charged. In this manuscript, we generalize the homotopy discussion to the centrosymmetric extensions of all Altland-Zirnbauer classes. We develop a tailored mathematical framework dubbed the AZ+I classification and show that in three spatial dimensions such robust and multiply charged nodes appear in four such centrosymmetric extensions; namely, AZ+I classes CI and AI lead to doubly charged nodal lines, while D and BDI support doubly charged nodal surfaces. We remark that no further crystalline symmetries apart from the spatial inversion are necessary for their stability. We provide a description of the corresponding topological charges, and develop simple tight-binding models of various semimetallic and superconducting phases that exhibit these nodes. We also indicate how the concept of robust and multiply charged nodes generalizes to other spatial dimensions.
Robust modular product family design
NASA Astrophysics Data System (ADS)
Jiang, Lan; Allada, Venkat
2001-10-01
This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How can a product family (PF) be designed effectively so that it is robust enough to accommodate future customer requirements? (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called the quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break the tunable time period into suitable time periods that designers should consider while designing product families.
Jopp, Eilin; Scheffler, Christiane; Hermanussen, Michael
2014-01-01
Screening is an important issue in medicine and is used to identify unrecognised diseases early in persons who are apparently in good health. Screening strongly relies on the concept of "normal values". Normal values are defined as values that are frequently observed in a population and usually range within certain statistical limits. Screening for obesity should start early, as the prevalence of obesity consolidates already at early school age. Though widely practiced, measuring BMI is not the ultimate solution for detecting obesity. Children with high BMI may simply be "robust" in skeletal dimensions. Assessing skeletal robustness and, in particular, developmental tempo in adolescents are also important issues in health screening. Yet, in spite of the necessity of screening investigations, appropriate reference values are often missing. Meanwhile, new concepts of growth diagrams have been developed. Stage line diagrams are useful for tracking developmental processes over time. Functional data analyses have been used efficiently for analysing longitudinal growth in height and assessing the tempo of maturation. Convenient low-cost statistics have also been developed for generating synthetic national references.
Homeostatic enhancement of sensory transduction
Milewski, Andrew R.; Ó Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.
2017-01-01
Our sense of hearing boasts exquisite sensitivity, precise frequency discrimination, and a broad dynamic range. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. Small changes in these values could compromise hair cells’ ability to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system uses a homeostatic mechanism that increases the robustness of its operation to variation in parameter values. To slowly adjust the response to sinusoidal stimulation, the homeostatic mechanism feeds back a rectified version of the hair bundle’s displacement to its adaptation process. When homeostasis is enforced, the range of parameter values for which the sensitivity, tuning sharpness, and dynamic range exceed specified thresholds can increase by more than an order of magnitude. Signatures in the hair cell’s behavior provide a means to determine through experiment whether such a mechanism operates in the auditory system. Robustness of function through homeostasis may be ensured in any system through mechanisms similar to those that we describe here. PMID:28760949
Mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogels
Worsley, Marcus A; Baumann, Theodore F; Satcher, Jr., Joe H
2014-04-01
A method of making a mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogel, including the steps of dispersing nanotubes in an aqueous media or other media to form a suspension, adding reactants and catalyst to the suspension to create a reaction mixture, curing the reaction mixture to form a wet gel, drying the wet gel to produce a dry gel, and pyrolyzing the dry gel to produce the mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogel. The aerogel is mechanically robust, electrically conductive, and ultralow-density, and is made of a porous carbon material having 5 to 95% by weight carbon nanotubes and 5 to 95% carbon binder.
Mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogels
Worsley, Marcus A.; Baumann, Theodore F.; Satcher, Jr, Joe H.
2016-07-05
A method of making a mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogel, including the steps of dispersing nanotubes in an aqueous media or other media to form a suspension, adding reactants and catalyst to the suspension to create a reaction mixture, curing the reaction mixture to form a wet gel, drying the wet gel to produce a dry gel, and pyrolyzing the dry gel to produce the mechanically robust, electrically conductive ultralow-density carbon nanotube-based aerogel. The aerogel is mechanically robust, electrically conductive, and ultralow-density, and is made of a porous carbon material having 5 to 95% by weight carbon nanotubes and 5 to 95% carbon binder.
Linear quadratic servo control of a reusable rocket engine
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.
1991-01-01
A design method for a servo compensator is developed in the frequency domain using singular values. The method is applied to a reusable rocket engine. An intelligent control system for reusable rocket engines was proposed which includes a diagnostic system, a control system, and an intelligent coordinator that determines engine control strategies based on the identified failure modes. The method provides a means of generating various linear multivariable controllers capable of meeting performance and robustness specifications and of accommodating failure modes identified by the diagnostic system. Command following with set-point control is necessary for engine operation. A Kalman filter reconstructs the state, while loop transfer recovery restores the required degree of robustness and maintains satisfactory rejection of sensor noise from the command error. The approach is applied to the design of a controller for a rocket engine satisfying performance constraints in the frequency domain. Simulation results demonstrate the performance of the linear design on a nonlinear engine model over all power levels during mainstage operation.
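A toy illustration of the two ingredients mentioned (state-feedback design plus a Kalman filter for state reconstruction), using SciPy's continuous-time algebraic Riccati solver; the second-order plant and the weighting matrices are stand-ins, not the rocket engine model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Stand-in 2-state plant: x' = Ax + Bu, y = Cx
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# LQR state feedback: minimize the integral of x'Qx + u'Ru
Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # u = -K x

# Kalman filter gain from the dual Riccati equation
W, V = np.eye(2), np.array([[0.1]])      # process / measurement noise covariances
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)           # observer gain

print("K =", K, "\nL =", L.ravel())
```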
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to accurately account for errors in the computed model and for the observed range of dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that can be accepted with more confidence than one obtained with traditional methods, which do not update the analysis with flight data. By introducing mu as a flutter margin parameter, the method offers several advantages over tracking damping trends as a measure of the tendency toward instability from available flight data.
A system for real-time measurement of the brachial artery diameter in B-mode ultrasound images.
Gemignani, Vincenzo; Faita, Francesco; Ghiadoni, Lorenzo; Poggianti, Elisa; Demi, Marcello
2007-03-01
The measurement of the brachial artery diameter is frequently used in clinical studies for evaluating the flow-mediated dilation and, in conjunction with the blood pressure value, for assessing arterial stiffness. This paper presents a system for computing the brachial artery diameter in real-time by analyzing B-mode ultrasound images. The method is based on a robust edge detection algorithm which is used to automatically locate the two walls of the vessel. The measure of the diameter is obtained with subpixel precision and with a temporal resolution of 25 samples/s, so that the small dilations induced by the cardiac cycle can also be retrieved. The algorithm is implemented on a standalone video processing board which acquires the analog video signal from the ultrasound equipment. Results are shown in real-time on a graphical user interface. The system was tested both on synthetic ultrasound images and in clinical studies of flow-mediated dilation. Accuracy, robustness, and intra/inter observer variability of the method were evaluated.
Kernelized correlation tracking with long-term motion cues
NASA Astrophysics Data System (ADS)
Lv, Yunqiu; Liu, Kai; Cheng, Fei
2018-04-01
Robust object tracking is a challenging task in computer vision due to disturbances such as deformation, fast motion and, especially, occlusion of the tracked object. When occlusions occur, image data become unreliable and are insufficient for the tracker to depict the object of interest. Therefore, most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, its speed and direction must differ from those of the objects occluding it; hence, motion features are emphasized. Considering the efficiency and robustness of Kernelized Correlation Filter tracking (KCF), it is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of objects around this position, the tracked object is labelled, so occlusion can be detected easily. Experimental results suggest that our tracker achieves favorable performance and effectively handles occlusion and drifting problems.
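For context, the KCF pre-tracker adopted above is available off the shelf; a minimal usage sketch with OpenCV (requires an opencv-contrib build; the video path and initial bounding box are placeholders):

```python
import cv2

cap = cv2.VideoCapture("sequence.mp4")       # placeholder video
ok, frame = cap.read()

tracker = cv2.TrackerKCF_create()            # kernelized correlation filter
tracker.init(frame, (150, 80, 60, 90))       # (x, y, w, h) initial box, assumed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)       # predicted position per frame
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # An occlusion-handling layer, as in the paper, would inspect motion
    # cues around `box` here before trusting the prediction.
    cv2.imshow("KCF", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
```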
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with a weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, so the rank function can be approximated more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
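Written out, a formulation consistent with the abstract's description (weights w_i on the singular values σ_i(Z), an entrywise Lq penalty on the error term, and the usual self-expression constraint) would be the following; this is reconstructed from the text, not copied from the paper:

```latex
% plausible WSPQ objective reconstructed from the abstract
\min_{Z,\,E} \; \sum_i w_i\, \sigma_i(Z)^{p} \;+\; \lambda \lVert E \rVert_{q}^{q}
\quad \text{s.t.} \quad X = XZ + E
```

with 0 < p, q ≤ 1 typically chosen so that both terms are tighter surrogates than the nuclear and L21 norms of standard LRR; the inexact augmented Lagrange multiplier method then alternates updates of Z and E.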
NASA Astrophysics Data System (ADS)
Córdoba, Rosa; Lorenzoni, Matteo; Pablo-Navarro, Javier; Magén, César; Pérez-Murano, Francesc; María De Teresa, José
2017-11-01
The implementation of three-dimensional (3D) nano-objects as building blocks for the next generation of electro-mechanical, memory and sensing nano-devices is at the forefront of technology. The direct writing of functional 3D nanostructures is made feasible by focused ion beam induced deposition (FIBID). We use this technique to grow horizontally suspended tungsten nanowires and then study their nano-mechanical properties by the three-point bending method with atomic force microscopy. These measurements reveal that the nanowires exhibit a yield strength up to 12 times higher than that of bulk tungsten, near the theoretical value of 0.1 times the Young's modulus (E). We find a size dependence of E that is adequately described by a core-shell model, which has been confirmed by transmission electron microscopy and compositional analysis at the nanoscale. Additionally, we show that the experimental resonance frequencies of suspended nanowires (in the MHz range) are in good agreement with theoretical values. These extraordinary mechanical properties are key to designing electro-mechanically robust nanodevices based on FIBID tungsten nanowires.
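For orientation, the relation typically used to extract E from an AFM three-point bending test on a doubly clamped cylindrical nanowire of length L and diameter d, with F the midpoint load and δ the measured deflection, is the standard beam-theory result (not quoted from the paper):

```latex
% clamped-clamped beam with midpoint load; solid circular cross-section
E = \frac{F L^{3}}{192\,\delta\, I}, \qquad I = \frac{\pi d^{4}}{64}
```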
Biometric identification based on novel frequency domain facial asymmetry measures
NASA Astrophysics Data System (ADS)
Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-03-01
In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in the presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach to biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
Structural robustness with suboptimal responses for linear state space model
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lim, Kyong B.; Juang, Jer-Nan
1989-01-01
A relationship between the closed-loop eigenvalues and the amount of perturbation in the open-loop matrix is addressed in the context of performance robustness. Given the allowable perturbation ranges of the elements of the open-loop matrix A and a desired closed-loop eigenvalue tolerance requiring max_j |Δλ_j(A+BF)| to be less than some prescribed value, what state feedback controller F satisfies the closed-loop eigenvalue perturbation-tolerance requirement for a given class of perturbations in A? The paper gives an algorithm to design such a controller. Numerical examples are included for illustration.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we found that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
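A small sketch of fitting a saccharification time course to the Weibull form described above; the glucose-yield numbers are invented, and the exact parameterization via λ and n is inferred from the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, y_max, lam, n):
    """Cumulative Weibull: y_max * (1 - exp(-(t/lam)^n))."""
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Invented hydrolysis time course: hours vs. glucose yield (%)
t = np.array([2, 6, 12, 24, 48, 72, 96], dtype=float)
y = np.array([8.0, 21.0, 35.0, 52.0, 68.0, 74.0, 77.0])

(y_max, lam, n), _ = curve_fit(weibull_yield, t, y, p0=[80.0, 24.0, 1.0])
print(f"y_max={y_max:.1f}%  lambda={lam:.1f} h  n={n:.2f}")
# lambda, the characteristic time, summarizes overall saccharification speed
```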
2017-03-01
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan (Georgia Institute of Technology, Atlanta, GA)
This paper presents a low-power wireless image sensor node with a noise-robust moving object detection and region-of-interest based rate controller [Fig. 1].
Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes
2017-01-01
Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision to drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme, which was based on factor loadings obtained in the analyses, was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Zhao, Jian; Yang, Ping; Zhao, Yue
2017-06-01
The dependence of digital image correlation (DIC) on speckle patterns restricts its application in engineering fields and nonlaboratory environments, since a serious decorrelation effect occurs under localized sudden illumination variation. The simple and efficient speckle pattern adjusting and optimizing approach presented in this paper is aimed at providing a novel speckle pattern robust enough to resist local illumination variation. The new speckle pattern, called the neighborhood binary speckle pattern, is derived from the original speckle pattern by thresholding the pixels of a neighborhood at its central pixel value and treating the result as a binary number. The efficiency of the proposed speckle pattern is evaluated in six experimental scenarios. Experimental results indicate that DIC measurements based on the neighborhood binary speckle pattern provide reliable and accurate results even when the local brightness and contrast of the deformed images have been seriously changed. It is expected that the new speckle pattern will have further potential value in engineering applications.
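The encoding described is essentially a local-binary-pattern-style transform; a compact sketch for a 3x3 neighborhood (the neighborhood size and bit ordering are assumptions, since the abstract does not fix them):

```python
import numpy as np

def neighborhood_binary(img):
    """Encode each pixel by thresholding its 8 neighbors at the center value."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbors, clockwise from top-left; each sets one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        out |= ((neigh >= center).astype(np.uint8) << bit)
    return out

speckle = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(neighborhood_binary(speckle).shape)   # (62, 62)
```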
Sanni, Steinar; Björkblom, Carina; Jonsson, Henrik; Godal, Brit F; Liewenborg, Birgitta; Lyng, Emily; Pampanin, Daniela M
2017-04-01
The aim of this study was to determine a suitable set of biomarker based methods for environmental monitoring in sub-arctic and temperate offshore areas using scientific knowledge on the sensitivity of fish species to dispersed crude oil. Threshold values for environmental monitoring and risk assessment were obtained based on a quantitative comparison of biomarker responses. Turbot, halibut, salmon and sprat were exposed for up to 8 weeks to five different sub-lethal concentrations of dispersed crude oil. Biomarkers assessing PAH metabolites, oxidative stress, detoxification system I activity, genotoxicity, immunotoxicity, endocrine disruption, general cellular stress and histological changes were measured. Results showed that PAH metabolites, CYP1A/EROD, DNA adducts and histopathology rendered the most robust results across the different fish species, both in terms of sensitivity and dose-responsiveness. The reported results contributed to forming links between biomonitoring and risk assessment procedures by using biomarker species sensitivity distributions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lack of large-angle TT correlations persists in WMAP and Planck
NASA Astrophysics Data System (ADS)
Copi, Craig J.; Huterer, Dragan; Schwarz, Dominik J.; Starkman, Glenn D.
2015-08-01
The lack of large-angle correlations in the observed microwave background temperature fluctuations persists in the final-year maps from Wilkinson Microwave Anisotropy Probe (WMAP) and the first cosmological data release from Planck. We find a statistically robust and significant result: p-values for the missing correlations lying below 0.24 per cent (i.e. evidence at more than 3σ) for foreground cleaned maps, in complete agreement with previous analyses based upon earlier WMAP data. A cut-sky analysis of the Planck HFI 100 GHz frequency band, the `cleanest CMB channel' of this instrument, returns a p-value as small as 0.03 per cent, based on the conservative mask defined by WMAP. These findings are in stark contrast to expectations from the inflationary Lambda cold dark matter model and still lack a convincing explanation. If this lack of large-angle correlations is a true feature of our Universe, and not just a statistical fluke, then the cosmological dipole must be considerably smaller than that predicted in the best-fitting model.
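For readers outside the field: the statistic behind these p-values, used in this group's earlier analyses, integrates the squared two-point angular correlation function C(θ) over large angles (θ > 60°), where the observed correlations nearly vanish:

```latex
% S_{1/2} statistic used to quantify the missing large-angle correlations
S_{1/2} = \int_{-1}^{1/2} \left[ C(\theta) \right]^{2} \, d(\cos\theta)
```

Small values of S_{1/2} in the data, relative to ΛCDM simulations, yield the quoted p-values.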
Arrhenius, Åsa; Backhaus, Thomas; Hilvarsson, Annelie; Wendt, Ida; Zgrundo, Aleksandra; Blanck, Hans
2014-10-15
This paper presents a novel assay that allows a quick and robust assessment of the effects of biocides on the initial settling and establishment of marine photoautotrophic biofilms, including the multitude of indigenous fouling organisms. Briefly, biofilms are established in the field, sampled, comminuted and re-settled on clean surfaces; after 72 h, chlorophyll a is measured as an integrating endpoint reflecting both settling and growth. Eight antifoulants were used to evaluate the assay. The efficacy ranking, based on EC98 values from the most to the least efficacious compound, is: copper pyrithione > TPBP > DCOIT > tolylfluanid > zinc pyrithione > medetomidine > copper (Cu(2+)), while the ecotoxicological ranking (based on EC10 values) is: irgarol > copper pyrithione > zinc pyrithione > TPBP > tolylfluanid > DCOIT > copper (Cu(2+)) > medetomidine. The algaecide irgarol did not cause full inhibition; instead, the inhibition leveled out at a 95% effect at 30 nmol l⁻¹, a concentration clearly lower than for any other of the tested biocides. Copyright © 2014. Published by Elsevier Ltd.
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
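For reference, the band-pass filtering at the heart of the method is easy to sketch: blur at two scales and subtract, which removes slowly varying illumination. The sigma pair below is an arbitrary choice, not the optimized values the authors determine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussian(img, sigma_fine=1.0, sigma_coarse=2.0):
    """Band-pass response: blur at two scales and subtract.

    Subtracting the coarser blur removes slowly varying illumination,
    which is why the DOG is robust to local lighting changes.
    """
    img = img.astype(float)
    return gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)

iris_strip = np.random.rand(64, 512)             # stand-in normalized iris image
codes = difference_of_gaussian(iris_strip) > 0   # crude binary iris code
print(codes.shape, codes.dtype)
```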
Unequal error control scheme for dimmable visible light communication systems
NASA Astrophysics Data System (ADS)
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery than the equal error control (EEC) scheme for different dimming target values, and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
Species Identification and Design Value Estimation of Wooden Members in Covered Bridges
Alex C. Wiedenhoeft; David E. Kretschmann
2014-01-01
Covered timber bridges are historic structures with unique aesthetic value. To preserve this value and maintain bridges in service, robust evaluation of their performance and safety is necessary. The strength of the timber found in covered bridges can vary considerably, not only because of age and condition, but also because of species and grade. For the practicing...
An Assessment of Normalized Difference Skin Index Robustness in Aquatic Environments
2014-03-27
Chan, Alice W., First Lieutenant, USAF (AFIT-ENG-14-M-17)
Vegetation and water-bearing objects with high scatter tend to have NDSI values similar to human skin, potentially causing false positives in certain...
Extensions of output variance constrained controllers to hard constraints
NASA Technical Reports Server (NTRS)
Skelton, R.; Zhu, G.
1989-01-01
Covariance controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H-infinity, L-infinity, and L2 norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for both continuous- and discrete-time systems.
Comparison of molecular breeding values based on within- and across-breed training in beef cattle.
Kachman, Stephen D; Spangler, Matthew L; Bennett, Gary L; Hanford, Kathryn J; Kuehn, Larry A; Snelling, Warren M; Thallman, R Mark; Saatchi, Mahdi; Garrick, Dorian J; Schnabel, Robert D; Taylor, Jeremy F; Pollak, E John
2013-08-16
Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set.
Ni, Jingchao; Koyuturk, Mehmet; Tong, Hanghang; Haines, Jonathan; Xu, Rong; Zhang, Xiang
2016-11-10
Accurately prioritizing candidate disease genes is an important and challenging problem. Various network-based methods have been developed to predict potential disease genes by utilizing the disease similarity network and molecular networks such as protein interaction or gene co-expression networks. Although successful, a common limitation of the existing methods is that they assume all diseases share the same molecular network and a single generic molecular network is used to predict candidate genes for all diseases. However, different diseases tend to manifest in different tissues, and the molecular networks in different tissues are usually different. An ideal method should be able to incorporate tissue-specific molecular networks for different diseases. In this paper, we develop a robust and flexible method to integrate tissue-specific molecular networks for disease gene prioritization. Our method allows each disease to have its own tissue-specific network(s). We formulate the problem of candidate gene prioritization as an optimization problem based on network propagation. When there are multiple tissue-specific networks available for a disease, our method can automatically infer the relative importance of each tissue-specific network. Thus it is robust to the noisy and incomplete network data. To solve the optimization problem, we develop fast algorithms which have linear time complexities in the number of nodes in the molecular networks. We also provide rigorous theoretical foundations for our algorithms in terms of their optimality and convergence properties. Extensive experimental results show that our method can significantly improve the accuracy of candidate gene prioritization compared with the state-of-the-art methods. In our experiments, we compare our methods with 7 popular network-based disease gene prioritization algorithms on diseases from Online Mendelian Inheritance in Man (OMIM) database. The experimental results demonstrate that our methods recover true associations more accurately than other methods in terms of AUC values, and the performance differences are significant (with paired t-test p-values less than 0.05). This validates the importance to integrate tissue-specific molecular networks for studying disease gene prioritization and show the superiority of our network models and ranking algorithms toward this purpose. The source code and datasets are available at http://nijingchao.github.io/CRstar/ .
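The propagation core of such methods is compact; a sketch of random-walk-with-restart scoring on a toy gene network (the paper's full model additionally learns per-network weights across tissue-specific networks, which is omitted here):

```python
import numpy as np

def propagate(adj, seeds, alpha=0.5, tol=1e-8, max_iter=1000):
    """Random walk with restart: p = alpha * W p + (1 - alpha) * e."""
    deg = adj.sum(axis=0)
    W = adj / np.where(deg > 0, deg, 1.0)   # column-normalized transition matrix
    e = seeds / seeds.sum()                 # restart distribution on known genes
    p = e.copy()
    for _ in range(max_iter):
        p_next = alpha * (W @ p) + (1 - alpha) * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p                                # stationary scores rank candidates

# Toy 5-gene network; gene 0 is a known disease gene
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
seeds = np.array([1.0, 0, 0, 0, 0])
print(propagate(adj, seeds).round(3))
```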
Linley, Warren G; Hughes, Dyfrig A
2013-08-01
The criteria used by the National Institute for Health and Clinical Excellence (NICE) for accepting higher incremental cost-effectiveness ratios for some medicines over others, and the recent introduction of the Cancer Drugs Fund (CDF) in England, are assumed to reflect societal preferences for National Health Service resource allocation. Robust empirical evidence to this effect is lacking. To explore societal preferences for these and other criteria, including those proposed for rewarding new medicines under the future value-based pricing (VBP) system, we conducted a choice-based experiment in 4118 UK adults via web-based surveys. Preferences were determined by asking respondents to allocate fixed funds between different patient and disease types reflecting nine specific prioritisation criteria. Respondents supported the criteria proposed under the VBP system (treatments for severe diseases, treatments addressing unmet needs, innovative treatments provided they offer substantial health benefits, and treatments with wider societal benefits) but did not support the end-of-life premium or the prioritisation of children or disadvantaged populations as specified by NICE, nor the special funding status for treatments of rare diseases, nor the CDF. Policies introduced on the basis of perceived, rather than actual, societal values may lead to inappropriate resource allocation decisions with the potential for significant population health and economic consequences. Copyright © 2012 John Wiley & Sons, Ltd.
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned basis for the same RBF space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
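The ill-conditioning of RBF-Direct that motivates this work is easy to reproduce. The sketch below (an illustration of the problem only, not of the RBF-RA algorithm) shows the condition number of a Gaussian RBF interpolation matrix exploding as the shape parameter eps shrinks toward the flat limit.

```python
# Condition number of the RBF-Direct interpolation matrix as the Gaussian
# kernels flatten (eps -> 0); illustrative 1-D node set.
import numpy as np

nodes = np.linspace(0, 1, 15)                     # 1-D interpolation nodes
r = np.abs(nodes[:, None] - nodes[None, :])       # pairwise distances

for eps in [2.0, 1.0, 0.5, 0.1]:
    A = np.exp(-(eps * r) ** 2)                   # Gaussian RBF matrix
    print(f"eps = {eps:4.1f}  cond(A) = {np.linalg.cond(A):.2e}")
```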
Denkinger, Michael D; Lukas, Albert; Nikolaus, Thorsten; Hauer, Klaus
2015-01-01
Fear of falling (FOF) is an important threat to autonomy. Current interventions to reduce FOF have yielded conflicting results. A possible reason for this discrepancy could be its multicausality. Some risk factors may not have been identified and addressed in recent studies. The last systematic review included studies until 2006. To identify additional risk factors for FOF and to test those mentioned previously, we conducted a systematic literature review. Studies examining FOF in community-dwelling older adults between 2006 and October 2013 were screened. Outcomes are summarized with respect to different constructs such as FOF, fall-related self-efficacy/balance confidence, and FOF-related activity restriction. Odds ratios and p values are reported. There is no clear pattern with regard to the different FOF-related constructs studied. The only parameters robustly associated across all constructs were female gender, performance-based and questionnaire-based physical function, the use of a walking aid, and, less robustly, a history of falls and poor self-rated health. Conflicting results were identified for depression and anxiety, multiple drugs, and psychotropic drugs. Other potentially modifiable risk factors were only mentioned in one or two studies and warrant further investigation. Parameters with mainly negative results are also presented. Only a few of the risk factors identified were robustly associated across all FOF-related constructs and should be included in future studies on FOF. Some newer factors have to be tested again in different cohorts. The comprehensive overview might assist in the conceptualization of future studies. Copyright © 2015 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Klett, Hagen; Fuellgraf, Hannah; Levit-Zerdoun, Ella; Hussung, Saskia; Kowar, Silke; Küsters, Simon; Bronsert, Peter; Werner, Martin; Wittel, Uwe; Fritsch, Ralph; Busch, Hauke; Boerries, Melanie
2018-01-01
Late diagnosis and systemic dissemination essentially contribute to the invariably poor prognosis of pancreatic ductal adenocarcinoma (PDAC). Therefore, the development of diagnostic biomarkers for PDAC is urgently needed to improve patient stratification and outcome in the clinic. By studying the transcriptomes of independent PDAC patient cohorts of tumor and non-tumor tissues, we identified 81 robustly regulated genes through a novel, generally applicable meta-analysis. Consensus clustering on co-expression values revealed four distinct clusters with genes originating from exocrine/endocrine pancreas, stromal and tumor cells. Three clusters were strongly associated with survival of PDAC patients based on the TCGA database, underlining the prognostic potential of the identified genes. Combining the survival information with the robustness within the meta-analysis, we extracted a 17-gene subset for further validation. We show that it not only discriminated PDAC from non-tumor tissue and stroma in fresh-frozen as well as formalin-fixed paraffin-embedded samples, but also detected pancreatic precursor lesions and singled out pancreatitis samples. Moreover, the classifier discriminated PDAC from other cancers in the TCGA database. In addition, we experimentally validated the classifier in PDAC patients at the transcript level using qPCR and exemplify its usage at the protein level for three proteins (AHNAK2, LAMC2, TFF1) using immunohistochemistry and for two secreted proteins (TFF1, SERPINB5) using ELISA-based protein detection in blood plasma. In conclusion, we present a novel robust diagnostic and prognostic gene signature for PDAC with future potential applicability in the clinic. PMID:29675033
Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance
Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio
2010-01-01
Background Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory. Since variables in these patterns have been associated with reward circuitry structure and function, they may provide a method for quantitative phenotyping of normative and pathological function (e.g., psychiatric illness). PMID:20532247
A quantitative description for efficient financial markets
NASA Astrophysics Data System (ADS)
Immonen, Eero
2015-09-01
In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F
2016-08-01
To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as a part of their clinical practice, where low dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at different conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from other conditions were compared to this reference condition. A measure, which the authors refer to as Q, was introduced to assess the stability of features across different conditions; it is defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of histogram mean. Features calculated from lung nodules demonstrated similar results, with histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation Q of 0.37 and 0.22, respectively. Surprisingly, histogram standard deviation and variance features were also quite robust. Some GLCM features were also quite robust across conditions, namely, diff. variance, sum variance, sum average, variance, and mean. Except for histogram mean, all features have a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean was the most robust feature in this study. The effects of acquisition and reconstruction conditions on GLCM features vary widely, though features involving summation of products of intensities and probabilities trend toward being more robust, barring a few exceptions. Overall, care should be taken to account for variation in density and texture features if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may be more reflective of changes due to acquisition and reconstruction conditions than of changes in the nodule itself.
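The stability measure Q described above is just a ratio of two spreads, so it is simple to compute once feature values are in hand. The sketch below assumes the standard deviation as the spread statistic, which may differ from the paper's exact definition, and uses made-up feature values.

```python
# Stability ratio Q = reproducibility (across conditions) / repeatability
# (across repeat contours); spread statistic and numbers are illustrative.
import numpy as np

def q_ratio(values_across_conditions, values_across_repeats):
    reproducibility = np.std(values_across_conditions, ddof=1)
    repeatability = np.std(values_across_repeats, ddof=1)
    return reproducibility / repeatability

# Hypothetical histogram-mean values of one nodule:
conds = np.array([42.0, 43.5, 41.8, 44.1, 42.7])   # 5 dose/reconstruction conditions
reps = np.array([42.0, 41.2, 43.1])                # 3 repeat contours
print(f"Q = {q_ratio(conds, reps):.2f}  (Q <= 1 suggests a robust feature)")
```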
Escher, Beate I; Neale, Peta A; Leusch, Frederic D L
2015-09-15
Cell-based bioassays are becoming increasingly popular in water quality assessment. The new generations of reporter-gene assays are very sensitive, and effects are often detected in very clean water types such as drinking water and recycled water. For monitoring applications it is therefore imperative to derive trigger values that differentiate between acceptable and unacceptable effect levels. In this proof-of-concept paper, we propose a statistical method to read directly across from chemical guideline values to trigger values without the need to perform in vitro to in vivo extrapolations. The derivation is based on matching effect concentrations with existing chemical guideline values and filtering out appropriate chemicals that are responsive in the given bioassays at concentrations in the range of the guideline values. To account for the mixture effects of many chemicals acting together in a complex water sample, we propose bioanalytical equivalents that integrate the effects of groups of chemicals with the same mode of action that act in a concentration-additive manner. Statistical distribution methods are proposed to derive a specific effect-based trigger bioanalytical equivalent concentration (EBT-BEQ) for each bioassay of environmental interest that targets receptor-mediated toxicity. Even bioassays that are indicative of the same mode of action have slightly different numeric trigger values due to differences in their inherent sensitivity. The algorithm was applied to 18 cell-based bioassays, and 11 provisional effect-based trigger bioanalytical equivalents were derived as an illustrative example using the 349 chemical guideline values protective of human health of the Australian Guidelines for Water Recycling. We illustrate the applicability using a diverse set of water samples including recycled water. Most recycled water samples were compliant with the proposed triggers, while wastewater effluent would have failed a few of them. The approach is readily adaptable to any water type and guideline or regulatory framework and can be expanded from the protection goal of human health to environmental protection targets. While this work constitutes a proof of principle, the applicability remains limited at present due to insufficient experimental bioassay data on individual regulated chemicals, and the derived effect-based trigger values are of course only provisional. Once the experimental database is expanded and made more robust, the proposed effect-based trigger values may provide guidance in a regulatory context. Copyright © 2015 Elsevier Ltd. All rights reserved.
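A small sketch of the bioanalytical equivalent concentration (BEQ) idea underlying these trigger values: chemicals sharing a mode of action are summed as concentration times relative effect potency, consistent with concentration addition. The chemicals and potency values below are invented for illustration.

```python
# BEQ under concentration addition; potencies are hypothetical, not the
# values used to derive the paper's EBT-BEQ triggers.
def beq(concentrations_ng_l, relative_potencies):
    """BEQ (ng/L of reference compound) under concentration addition."""
    return sum(c * p for c, p in zip(concentrations_ng_l, relative_potencies))

# Hypothetical estrogenic mixture referenced to estradiol (REP = 1.0):
conc = [5.0, 20.0, 100.0]      # ng/L of three co-occurring chemicals
rep = [1.0, 0.25, 0.0001]      # relative effect potencies vs the reference
print(f"BEQ = {beq(conc, rep):.2f} ng/L")
```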
Improving power and robustness for detecting genetic association with extreme-value sampling design.
Chen, Hua Yun; Li, Mingyao
2011-12-01
Extreme-value sampling design, which samples subjects with extremely large or small quantitative trait values, is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C), which included study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative trait and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
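The bias that motivates the proposed method can be demonstrated in a few lines. The toy simulation below (not the authors' corrected estimator) samples trait extremes and shows the naive ordinary-least-squares slope inflating relative to the full cohort.

```python
# Naive linear regression under extreme-value sampling; synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
g = rng.binomial(2, 0.3, size=n)        # SNP genotype (0/1/2)
y = 0.2 * g + rng.normal(size=n)        # quantitative trait, true beta = 0.2

# Extreme-value design: keep only the top and bottom 10% of trait values.
lo, hi = np.quantile(y, [0.1, 0.9])
keep = (y < lo) | (y > hi)

def ols_slope(x, y):
    x = x - x.mean()
    return (x * (y - y.mean())).sum() / (x * x).sum()

print("full-cohort beta-hat   :", round(ols_slope(g, y), 3))
print("extreme-sample beta-hat:", round(ols_slope(g[keep], y[keep]), 3))
```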
Synthesis Methods for Robust Passification and Control
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
The research effort under this cooperative agreement has been essentially a continuation of the work from previous grants. The ongoing work has primarily focused on developing passivity-based control techniques for Linear Time-Invariant (LTI) systems. During this period, significant progress has been made in the area of passivity-based control of LTI systems, and some preliminary results have also been obtained for nonlinear systems as well. The prior work addressed optimal control design for inherently passive as well as non-passive linear systems. For exploiting the robustness characteristics of passivity-based controllers, a passification methodology was developed for LTI systems that are not inherently passive. Various methods of passification were first proposed and then further developed. The robustness of passification was addressed for multi-input multi-output (MIMO) systems for certain classes of uncertainties using frequency-domain methods. For MIMO systems, a state-space approach using a Linear Matrix Inequality (LMI)-based formulation was presented for passification of non-passive LTI systems. An LMI-based robust passification technique was presented for systems with redundant actuators and sensors. The redundancy in actuators and sensors was used effectively for robust passification using the LMI formulation. The passification was designed to be robust to interval-type uncertainties in system parameters. The passification techniques were used to design a robust controller for the Benchmark Active Control Technology wing under parametric uncertainties. The results on passive nonlinear systems, however, are very limited to date. Our recent work in this area was presented, wherein some stability results were obtained for passive nonlinear systems that are affine in control.
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to the entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
Li, Qiao; Mark, Roger G; Clifford, Gari D
2009-01-01
Background Within the intensive care unit (ICU), arterial blood pressure (ABP) is typically recorded at different (and sometimes uneven) sampling frequencies, and from different sensors, and is often corrupted by different artifacts and noise which are often non-Gaussian, nonlinear and nonstationary. Extracting robust parameters from such signals, and providing confidences in the estimates is therefore difficult and requires an adaptive filtering approach which accounts for artifact types. Methods Using a large ICU database, and over 6000 hours of simultaneously acquired electrocardiogram (ECG) and ABP waveforms sampled at 125 Hz from a 437 patient subset, we documented six general types of ABP artifact. We describe a new ABP signal quality index (SQI), based upon the combination of two previously reported signal quality measures weighted together. One index measures morphological normality, and the other degradation due to noise. After extracting a 6084-hour subset of clean data using our SQI, we evaluated a new robust tracking algorithm for estimating blood pressure and heart rate (HR) based upon a Kalman Filter (KF) with an update sequence modified by the KF innovation sequence and the value of the SQI. In order to do this, we have created six novel models of different categories of artifacts that we have identified in our ABP waveform data. These artifact models were then injected into clean ABP waveforms in a controlled manner. Clinical blood pressure (systolic, mean and diastolic) estimates were then made from the ABP waveforms for both clean and corrupted data. The mean absolute error for systolic, mean and diastolic blood pressure was then calculated for different levels of artifact pollution to provide estimates of expected errors given a single value of the SQI. Results Our artifact models demonstrate that artifact types have differing effects on systolic, diastolic and mean ABP estimates. We show that, for most artifact types, diastolic ABP estimates are less noise-sensitive than mean ABP estimates, which in turn are more robust than systolic ABP estimates. We also show that our SQI can provide error bounds for both HR and ABP estimates. Conclusion The KF/SQI-fusion method described in this article was shown to provide an accurate estimate of blood pressure and HR derived from the ABP waveform even in the presence of high levels of persistent noise and artifact, and during extreme bradycardia and tachycardia. Differences in error between artifact types, measurement sensors and the quality of the source signal can be factored into physiological estimation using an unbiased adaptive filter, signal innovation and signal quality measures. PMID:19586547
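As a generic illustration of the fusion idea (not the authors' exact KF/SQI update rule), the scalar Kalman-filter sketch below inflates the measurement-noise variance when the SQI is low, so artifact-laden beats barely move the pressure estimate.

```python
# SQI-weighted scalar Kalman filter; a sketch of the general technique,
# with synthetic data and an assumed weighting rule.
import numpy as np

def sqi_weighted_kf(z, sqi, q=0.1, r_base=1.0):
    """Track a slowly varying pressure from noisy beat-by-beat values z,
    trusting each measurement in proportion to its quality sqi in [0, 1]."""
    x, p = z[0], 1.0                    # initial state estimate and variance
    out = []
    for zk, sk in zip(z, sqi):
        p = p + q                       # predict (random-walk state model)
        r = r_base / max(sk, 1e-3)      # low SQI -> large effective noise
        k = p / (p + r)                 # Kalman gain
        x = x + k * (zk - x)            # innovation-weighted update
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_bp = 120 + np.cumsum(rng.normal(0, 0.2, 200))   # slowly drifting SBP
z = true_bp + rng.normal(0, 2, 200)
z[80:100] += 40                                      # artifact burst
sqi = np.ones(200); sqi[80:100] = 0.05               # SQI flags the artifact
est = sqi_weighted_kf(z, sqi)
print("RMS error:", np.sqrt(np.mean((est - true_bp) ** 2)).round(2))
```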
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
Bridging Social Capital and Individual Earnings: Evidence for an Inverted U.
Growiec, Katarzyna; Growiec, Jakub
Based on data on a cross-section of individuals surveyed in the 1999-2002 wave of the World and European Values Surveys, we investigate the multilateral associations between bridging social capital, individuals' earnings, as well as social trust and employment status. Our analysis provides robust evidence that the relationship between bridging social capital and earnings is inverted-U shaped. We carry out a range of tests in order to ascertain that this result is not driven by regressor endogeneity or omitted variables bias. We also identify significant interaction effects between bridging social capital, social trust, and employment status.
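The standard test for such an inverted U is to regress earnings on social capital and its square and check the sign of the quadratic term. A sketch on synthetic data (the survey data themselves are not reproduced here):

```python
# Quadratic OLS test for an inverted-U relationship; synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
capital = rng.uniform(0, 10, 500)                     # bridging social capital
earnings = 2 + 1.5 * capital - 0.12 * capital**2 + rng.normal(0, 1, 500)

b2, b1, b0 = np.polyfit(capital, earnings, deg=2)     # quadratic fit
print(f"quadratic term: {b2:.3f} (negative => inverted-U)")
print(f"turning point : {-b1 / (2 * b2):.2f}")
```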
New digital capacitive measurement system for blade clearances
NASA Astrophysics Data System (ADS)
Moenich, Marcel; Bailleul, Gilles
This paper presents a totally new concept for blade tip clearance evaluation in turbine engines. The system is able to detect exact 'measurands' even under high temperature and severe conditions such as ionization. The system is based on a heavy duty probe head, a miniaturized thick-film hybrid electronic circuit and a signal processing unit for real time computing. The high frequency individual measurement values are digitally filtered and linearized in real time. The electronics are built in hybrid technology and can therefore be kept extremely small and robust, so that the system can be used in actual flight.
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli
2015-12-01
A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with stationary white noise distribution in the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three obtained functions are then taken as the red, green and blue channels of the color ciphertext image. The ciphertext image is thus a real-valued function and is more convenient to store and transmit. In the encryption and decryption processes, the chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key, making key management convenient. Meanwhile, the security of the cryptosystem is greatly enhanced by the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.
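A minimal sketch of the chaotic-scrambling ingredient, assuming a plain one-dimensional logistic map in place of the paper's 2-D Sine logistic modulation map and omitting the gyrator-domain phase retrieval entirely: ranking a key-seeded chaotic sequence yields the pixel permutation, and decryption requires the same initial value.

```python
# Key-seeded logistic-map permutation; illustrative stand-in for the
# scrambling step, not the paper's full encryption scheme.
import numpy as np

def logistic_permutation(n, x0, mu=3.99, burn_in=100):
    """Return a permutation of range(n) driven by the logistic map."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)              # ranking the chaos gives a permutation

pixels = np.arange(16)
perm = logistic_permutation(16, x0=0.3141592)
scrambled = pixels[perm]
# Decryption inverts the permutation -- only possible with the same key x0.
unscrambled = np.empty_like(scrambled)
unscrambled[perm] = scrambled
print(scrambled, unscrambled, sep="\n")
```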
OrthoANI: An improved algorithm and software for calculating average nucleotide identity.
Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik
2016-02-01
Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ when reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values are significantly high, exceeding 1% in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former showed approximately 0.1% higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
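A toy numeric sketch of the OrthoANI idea: both genomes are fragmented, and only reciprocal best-matching (putatively orthologous) fragment pairs contribute to the average identity, which makes the value symmetric by construction. Real OrthoANI uses BLASTn; the Hamming-style similarity and tiny sequences here are stand-ins for illustration only.

```python
# Symmetric, orthology-restricted average identity; toy stand-in for the
# BLASTn-based OrthoANI computation.
import numpy as np

def fragment(seq, size):
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def ortho_ani(g1, g2, size=8):
    f1, f2 = fragment(g1, size), fragment(g2, size)
    sim = np.array([[identity(a, b) for b in f2] for a in f1])
    pairs = [(i, sim[i].argmax()) for i in range(len(f1))]
    # keep only reciprocal best hits (orthologous fragment pairs)
    recip = [(i, j) for i, j in pairs if sim[:, j].argmax() == i]
    return np.mean([sim[i, j] for i, j in recip])

g1 = "ACGTACGTGGCCTTAAACGTACGTGGCCTTAA"
g2 = "ACGTACCTGGCCTTAAACGAACGTGGCCTTAA"   # two mismatches vs g1
print(f"OrthoANI-like identity: {ortho_ani(g1, g2):.3f}")
```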
Li, Wei; Herrman, Timothy J; Dai, Susie Y
2010-01-01
A simple, fast, and robust method was developed for the determination of fumonisin B1 (FB1), fumonisin B2 (FB2), and fumonisin B3 (FB3) in corn-based human food and animal feed (cornmeal). The method involves a single extraction step followed by centrifugation and filtration before analysis by ultra-performance liquid chromatographylelectrospray ionization (UPLC/ESI)-MS/MS. The LC/MS/MS method developed here represents the fastest and simplest procedure (<30 min) among both conventional HPLC methods and other LC/MS methods using SPE cleanup. The potential for high throughput analysis makes the method particularly beneficial for regulatory agencies and analytical laboratories with a high sample volume. A single-laboratory validation was conducted by testing three different spiking levels (200, 500, and 1000 ng/g for FB1 and FB2; 100, 250, and 500 ng/g for FB3) for accuracy and precision. Recoveries of FB1 ranged from 93 to 98% with RSD values of 3-8%. Recoveries of FB2 ranged from 104 to 108%, with RSD values of 2-6%. Recoveries of FB3 ranged from 94 to 108%, with RSD values of 2-5%.
Chang, Yeong-Chan
2005-12-01
This paper addresses the problem of designing adaptive fuzzy-based (or neural network-based) robust controls for a large class of uncertain nonlinear time-varying systems. This class of systems can be perturbed by plant uncertainties, unmodeled perturbations, and external disturbances. Nonlinear H(infinity) control technique incorporated with adaptive control technique and VSC technique is employed to construct the intelligent robust stabilization controller such that an H(infinity) control is achieved. The problem of the robust tracking control design for uncertain robotic systems is employed to demonstrate the effectiveness of the developed robust stabilization control scheme. Therefore, an intelligent robust tracking controller for uncertain robotic systems in the presence of high-degree uncertainties can easily be implemented. Its solution requires only to solve a linear algebraic matrix inequality and a satisfactorily transient and asymptotical tracking performance is guaranteed. A simulation example is made to confirm the performance of the developed control algorithms.
A study of the temporal robustness of the growing global container-shipping network
Wang, Nuo; Wu, Nuan; Dong, Ling-ling; Yan, Hua-kun; Wu, Di
2016-01-01
Whether they thrive as they grow must be determined for all constantly expanding networks. However, few studies have focused on this important network feature or the development of quantitative analytical methods. Given the formation and growth of the global container-shipping network, we proposed the concept of network temporal robustness and quantitative method. As an example, we collected container liner companies’ data at two time points (2004 and 2014) and built a shipping network with ports as nodes and routes as links. We thus obtained a quantitative value of the temporal robustness. The temporal robustness is a significant network property because, for the first time, we can clearly recognize that the shipping network has become more vulnerable to damage over the last decade: When the node failure scale reached 50% of the entire network, the temporal robustness was approximately −0.51% for random errors and −12.63% for intentional attacks. The proposed concept and analytical method described in this paper are significant for other network studies. PMID:27713549
Design principles for robust oscillatory behavior.
Castillo-Hair, Sebastian M; Villota, Elizabeth R; Coronado, Alberto M
2015-09-01
Oscillatory responses are ubiquitous in regulatory networks of living organisms, a fact that has led to extensive efforts to study and replicate the circuits involved. However, to date, design principles that underlie the robustness of natural oscillators are not completely known. Here we study a three-component enzymatic network model in order to determine the topological requirements for robust oscillation. First, by simulating every possible topological arrangement and varying their parameter values, we demonstrate that robust oscillators can be obtained by augmenting the number of both negative feedback loops and positive autoregulations while maintaining an appropriate balance of positive and negative interactions. We then identify network motifs, whose presence in more complex topologies is a necessary condition for obtaining oscillatory responses. Finally, we pinpoint a series of simple architectural patterns that progressively render more robust oscillators. Together, these findings can help in the design of more reliable synthetic biomolecular networks and may also have implications in the understanding of other oscillatory systems.
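For experimentation with such topologies, the sketch below simulates a minimal three-component negative-feedback ring in which each species represses the next. The parameters are illustrative choices that happen to oscillate, not values from the paper, and the positive autoregulations the study also examines are omitted.

```python
# Three-component negative-feedback ring (repressilator-style); parameters
# chosen for illustration so the fixed point is unstable and the loop cycles.
import numpy as np

def ring_oscillator(t_end=200.0, dt=0.01, beta=10.0, n=3, gamma=1.0):
    x = np.array([1.0, 1.5, 2.0])            # initial concentrations
    traj = []
    for _ in range(int(t_end / dt)):
        # each component is repressed (Hill function) by its predecessor
        prod = beta / (1.0 + np.roll(x, 1) ** n)
        x = x + dt * (prod - gamma * x)      # forward-Euler step
        traj.append(x.copy())
    return np.array(traj)

traj = ring_oscillator()
tail = traj[-5000:, 0]                       # late-time behaviour of species 0
print("oscillating?", tail.max() - tail.min() > 0.5)
```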
Ji, Xiaoting; Niu, Yifeng; Shen, Lincheng
2016-01-01
This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by the info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem, of which the objective is to maximize the robustness while satisfying some desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system and specify the complex mission requirements with the Linear Temporal Logic (LTL). A robust satisficing policy is obtained to maximize the robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy which consists of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, while an associated robustness evaluation algorithm is presented to evaluate the robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on an UAV search mission under severe uncertainty so that the resulting policy can maximize the robustness while reaching the desired performance level. Furthermore, by comparing the proposed method with other robust decision-making methods, it can be concluded that our policy can tolerate higher uncertainty so that the desired performance level can be guaranteed, which indicates that the proposed method is much more effective in real applications. PMID:27835670
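The robust dynamic programming step at the core of such methods can be sketched on an MDP with interval-bounded transition probabilities: an inner adversary picks, within the intervals, the distribution least favorable to the agent. This generic robust value iteration omits the authors' info-gap construction, the LTL product automaton, and the satisficing layer; all model numbers are toy values.

```python
# Robust value iteration with interval transition uncertainty; a generic
# sketch of robust dynamic programming, not the paper's full algorithm.
import numpy as np

def worst_dist(lo, hi, values):
    """Feasible distribution in [lo, hi] putting most mass on low values."""
    p = lo.copy()
    slack = 1.0 - p.sum()
    for s in np.argsort(values):          # fill worst successors first
        add = min(hi[s] - lo[s], slack)
        p[s] += add
        slack -= add
    return p

# 3 states, 2 actions; P_lo/P_hi[s][a] bound the distribution over successors
P_lo = np.array([[[0.6, 0.1, 0.0], [0.1, 0.1, 0.5]],
                 [[0.3, 0.3, 0.1], [0.0, 0.5, 0.2]],
                 [[0.1, 0.1, 0.6], [0.2, 0.2, 0.3]]])
P_hi = P_lo + 0.3
R = np.array([[0.0, 1.0], [0.5, 0.2], [1.0, 0.0]])    # reward per state-action
gamma, V = 0.9, np.zeros(3)

for _ in range(200):                                   # robust Bellman iteration
    Q = np.array([[R[s, a] + gamma * worst_dist(P_lo[s, a], P_hi[s, a], V) @ V
                   for a in range(2)] for s in range(3)])
    V = Q.max(axis=1)
print("robust values:", np.round(V, 3), "policy:", Q.argmax(axis=1))
```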
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, three-level discrete wavelet transformation is applied to the luminance component Y, generating four different frequency sub-bands. After that, singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
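A minimal numpy sketch of the SVD-embedding step at the heart of such schemes: the watermark's singular values, scaled by a factor alpha, are added to the host sub-band's singular values. The three-level DWT in YIQ space and the differential-evolution search for alpha are omitted; the fixed alpha below stands in for the optimized scaling factors.

```python
# SVD watermark embedding/extraction on a single sub-band; illustrative
# stand-in for one step of DWT-SVD watermarking schemes.
import numpy as np

def embed_svd(host_band, watermark, alpha=0.05):
    U, S, Vt = np.linalg.svd(host_band, full_matrices=False)
    Uw, Sw, Vtw = np.linalg.svd(watermark, full_matrices=False)
    S_marked = S + alpha * Sw            # alpha trades invisibility vs robustness
    return U @ np.diag(S_marked) @ Vt, (U, S, Vt, Uw, Vtw)

def extract_svd(marked_band, keys, alpha=0.05):
    U, S, Vt, Uw, Vtw = keys
    _, S_marked, _ = np.linalg.svd(marked_band, full_matrices=False)
    Sw_rec = (S_marked - S) / alpha      # recover watermark singular values
    return Uw @ np.diag(Sw_rec) @ Vtw

rng = np.random.default_rng(0)
band = rng.uniform(0, 255, (8, 8))       # stand-in for a DWT sub-band of Y
wm = rng.integers(0, 2, (8, 8)).astype(float)
marked, keys = embed_svd(band, wm)
print("max recovery error:", np.abs(extract_svd(marked, keys) - wm).max())
```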
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
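The risk measure on which the optimization is posed is easy to state concretely: the conditional value-at-risk at level alpha is the mean of the worst (1 − alpha) tail of the uncertain objective. A sketch on synthetic samples (the acoustic model itself is not reproduced):

```python
# Sample-based conditional value-at-risk; synthetic losses stand in for the
# uncertain acoustic objective.
import numpy as np

def cvar(losses, alpha=0.9):
    var = np.quantile(losses, alpha)           # value-at-risk at level alpha
    return losses[losses >= var].mean()        # mean loss beyond the VaR

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=0.0, sigma=0.4, size=10000)   # uncertain outputs
print(f"mean = {losses.mean():.3f}, CVaR_0.9 = {cvar(losses):.3f}")
```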
NASA Astrophysics Data System (ADS)
Wu, Yun-jie; Li, Guo-fei
2018-01-01
Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is designed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance compensation FCS-OC possesses better dynamic response behavior in the presence of disturbances.
Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl
2007-02-01
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
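The filter-factor view can be made concrete with Tikhonov (ridge) regularization, where each singular direction is damped by f_i = s_i^2/(s_i^2 + λ^2). The sketch below applies this to a random stand-in for a samples-by-genes matrix; it illustrates the representation, not the authors' specific diagnostic tools.

```python
# SVD filter factors for a Tikhonov-regularized linear discriminant;
# random data stand in for a gene expression matrix.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 200))            # 20 samples, 200 genes (ill-posed)
y = rng.choice([-1.0, 1.0], size=20)      # class labels

U, s, Vt = np.linalg.svd(X, full_matrices=False)
lam = 1.0
f = s**2 / (s**2 + lam**2)                # filter factors in [0, 1)
w = Vt.T @ (f / s * (U.T @ y))            # regularized hyperplane normal
print("filter factors:", np.round(f[:5], 3))
```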
Pan, Qun-Xiong; Su, Zi-Jian; Zhang, Jian-Hua; Wang, Chong-Ren; Ke, Shao-Ying
2015-01-01
Background People’s Republic of China is one of the countries with the highest incidence of gastric cancer, accounting for 45% of all new gastric cancer cases in the world. Therefore, strong prognostic markers are critical for the diagnosis and survival of Chinese patients suffering from gastric cancer. Recent studies have begun to unravel the mechanisms linking the host inflammatory response to tumor growth, invasion and metastasis in gastric cancers. Based on this relationship between inflammation and cancer progression, several inflammation-based scores have been demonstrated to have prognostic value in many types of malignant solid tumors. Objective To compare the prognostic value of inflammation-based prognostic scores and tumor node metastasis (TNM) stage in patients undergoing gastric cancer resection. Methods The inflammation-based prognostic scores were calculated for 207 patients with gastric cancer who underwent surgery. Glasgow prognostic score (GPS), neutrophil lymphocyte ratio (NLR), platelet lymphocyte ratio (PLR), prognostic nutritional index (PNI), and prognostic index (PI) were analyzed. Linear trend chi-square test, likelihood ratio chi-square test, and receiver operating characteristic were performed to compare the prognostic value of the selected scores and TNM stage. Results In univariate analysis, preoperative serum C-reactive protein (P<0.001), serum albumin (P<0.001), GPS (P<0.001), PLR (P=0.002), NLR (P<0.001), PI (P<0.001), PNI (P<0.001), and TNM stage (P<0.001) were significantly associated with both overall survival and disease-free survival of patients with gastric cancer. In multivariate analysis, GPS (P=0.024), NLR (P=0.012), PI (P=0.001), TNM stage (P<0.001), and degree of differentiation (P=0.002) were independent predictors of gastric cancer survival. GPS and TNM stage had a comparable prognostic value and higher linear trend chi-square value, likelihood ratio chi-square value, and larger area under the receiver operating characteristic curve as compared to other inflammation-based prognostic scores. Conclusion The present study indicates that preoperative GPS and TNM stage are robust predictors of gastric cancer survival as compared to NLR, PLR, PI, and PNI in patients undergoing tumor resection. PMID:26124667
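For reference, the Glasgow prognostic score used above is commonly computed from two preoperative blood values, with the conventionally reported cut-offs of CRP > 10 mg/L and albumin < 35 g/L; the exact cut-offs should be confirmed against the paper before reuse.

```python
# Glasgow prognostic score from preoperative bloods; thresholds as commonly
# reported in the literature (an assumption, confirm against the paper).
def glasgow_prognostic_score(crp_mg_l: float, albumin_g_l: float) -> int:
    score = 0
    if crp_mg_l > 10:      # systemic inflammation
        score += 1
    if albumin_g_l < 35:   # hypoalbuminemia
        score += 1
    return score           # 0, 1 or 2; higher = worse prognosis

print(glasgow_prognostic_score(25.0, 32.0))  # -> 2
```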
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, there is in various policy domains an increasing call for robust plans or policies; that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
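The two metric families compared here are easy to state concretely for a minimization objective evaluated over an ensemble of futures; the sketch below implements one signal-to-noise-style satisficing metric and maximum regret. The exact formulations used in the study may differ, and the performance ensemble is synthetic.

```python
# Satisficing (signal-to-noise flavour) vs regret-based robustness metrics
# over a synthetic ensemble of plausible futures.
import numpy as np

rng = np.random.default_rng(7)
# rows = 4 candidate plans, columns = 1000 plausible futures; entries are
# losses (e.g. expected damages), so lower is better
scale = np.array([[1.0], [1.5], [2.0], [1.2]])
perf = rng.gamma(2.0, 1.0, size=(4, 1000)) * scale

# Satisficing, signal-to-noise flavour: penalize mean and spread together.
sn_score = perf.mean(axis=1) * (1 + perf.std(axis=1))

# Regret: shortfall of each plan relative to the best plan in each future.
regret = perf - perf.min(axis=0, keepdims=True)
max_regret = regret.max(axis=1)

print("signal-to-noise score:", np.round(sn_score, 2))
print("maximum regret       :", np.round(max_regret, 2))
```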
Webster, A. Francina; Chepelev, Nikolai; Gagné, Rémi; Kuo, Byron; Recio, Leslie; Williams, Andrew; Yauk, Carole L.
2015-01-01
Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modeling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses. PMID:26313361
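A minimal sketch of the benchmark-dose idea applied to one gene: interpolate the dose-response and report the dose at which expression crosses a benchmark response level (here, illustratively, one control standard deviation above the control mean). Real BMD software fits several parametric models and reports confidence limits, none of which is shown; the expression values are hypothetical.

```python
# Benchmark dose by interpolation of a monotone dose-response; hypothetical
# fold-change data, not values from the furan study.
import numpy as np

doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])           # mkd, as in the study design
expr = np.array([1.00, 1.02, 1.10, 1.35, 1.80])       # hypothetical fold-changes
control_sd = 0.05

bmr = expr[0] + control_sd                             # benchmark response level
bmd = np.interp(bmr, expr, doses)                      # first dose reaching BMR
print(f"BMD ~= {bmd:.2f} mkd")
```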
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bambi, Cosimo, E-mail: bambi@fudan.edu.cn
2014-03-01
In extensions of general relativity and in theories aiming at unifying gravity with the forces of the Standard Model, the value of the "fundamental constants" is often determined by the vacuum expectation value of new fields, which may thus change in different backgrounds. Variations of fundamental constants with respect to the values measured today in laboratories on Earth are expected to be more evident on cosmological timescales and/or in strong gravitational fields. In this paper, I show that the analysis of the Kα iron line observed in the X-ray spectrum of black holes can potentially be used to probe the fine structure constant α in gravitational potentials relative to Earth of Δφ ≈ 0.1. At present, systematic effects that are not fully under control prevent obtaining robust and stringent bounds on possible variations of the value of α with this technique, but the fact that current data can be fitted with models based on standard physics already rules out variations of the fine structure constant larger than a few percent.
Network-based de-noising improves prediction from microarray data.
Kato, Tsuyoshi; Murata, Yukio; Miura, Koh; Asai, Kiyoshi; Horton, Paul B; Tsuda, Koji; Fujibuchi, Wataru
2006-03-20
Prediction of human cell response to anti-cancer drugs (compounds) from microarray data is a challenging problem, due to the noise properties of microarrays as well as the high variance of living cell responses to drugs. Hence there is a strong need for more practical and robust methods than standard methods for real-value prediction. We devised an extended version of the off-subspace noise-reduction (de-noising) method to incorporate heterogeneous network data, such as sequence similarity or protein-protein interactions, into a single framework. Using that method, we first de-noise the gene expression data for the training and test data, and also the drug-response data for the training data. Then we predict the unknown responses of each drug from the de-noised input data. To ascertain whether de-noising improves prediction, we carry out 12-fold cross-validation and use Pearson's correlation coefficient between the true and predicted response values as the measure of prediction performance. De-noising improves the prediction performance for 65% of drugs. Furthermore, we found that this noise-reduction method is robust and effective even when a large amount of artificial noise is added to the input data. We conclude that our extended off-subspace noise-reduction method, combining heterogeneous biological data, is successful and quite useful for improving the prediction of human cancer cell drug responses from microarray data.
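A minimal sketch of the evaluation protocol described above: 12-fold cross-validation scored by Pearson correlation between true and predicted responses. A generic ridge regressor and toy data stand in for the actual de-noising-based predictor, which is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))     # toy expression profiles (cell lines x genes)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=60)  # toy drug response

# Out-of-fold predictions from 12-fold cross-validation.
preds = np.empty_like(y)
for train, test in KFold(n_splits=12, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=10.0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])

r, _ = pearsonr(y, preds)
print(f"12-fold CV Pearson r = {r:.3f}")
```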
Hu, Meng-Han; Dong, Qing-Li; Liu, Bao-Lin
2016-08-01
Hyperspectral reflectance and transmittance sensing, as well as near-infrared (NIR) spectroscopy, were investigated as non-destructive tools for estimating blueberry firmness, elastic modulus and soluble solid content (SSC). Least squares-support vector machine models were established from these three types of spectra, based on samples from three cultivars (Bluecrop, Duke and M2) and two harvest years (2014 and 2015), for predicting blueberry postharvest quality. One-cultivar reflectance models (models established using one cultivar) gave better results than the corresponding transmittance and NIR models for predicting blueberry firmness, with few cultivar effects. Two-cultivar NIR models (models established using two cultivars) proved to be suitable for estimating blueberry SSC, with correlations over 0.83. Rp (RMSEp) values of the three-cultivar reflectance models (models established using 75% of the samples from three cultivars) were 0.73 (0.094) and 0.73 (0.186) for predicting blueberry firmness and elastic modulus, respectively. For SSC prediction, the three-cultivar NIR model achieved an Rp (RMSEp) value of 0.85 (0.090). Adding Bluecrop samples harvested in 2014 could enhance the robustness of the three-cultivar models for firmness and elastic modulus. The above results indicate the potential for using spatial and spectral techniques to develop robust models for predicting blueberry postharvest quality in the presence of biological variability. © 2015 Society of Chemical Industry.
Developing image processing meta-algorithms with data mining of multiple metrics.
Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
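A minimal sketch of a meta-algorithm in this spirit: score each candidate registration result with a battery of metrics, rank the candidates per metric, and select the result with the best consensus rank. The two metrics and the rank-sum aggregation are illustrative choices, not the authors' exact battery.

```python
import numpy as np

def mse(a, b):
    """Mean squared error (lower is better)."""
    return float(np.mean((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation (higher is better)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
reference = rng.random((64, 64))
# Toy stand-ins for the outputs of several registration runs.
candidates = [reference + rng.normal(scale=s, size=reference.shape)
              for s in (0.05, 0.2, 0.5)]

# Rank candidates per metric (rank 0 = best), then aggregate the ranks.
mse_scores = [mse(c, reference) for c in candidates]
ncc_scores = [-ncc(c, reference) for c in candidates]   # negate: lower = better
ranks = (np.argsort(np.argsort(mse_scores))
         + np.argsort(np.argsort(ncc_scores)))
best = int(np.argmin(ranks))
print(f"selected candidate {best} (consensus rank sum {ranks[best]})")
```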
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, that is, the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
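A minimal sketch of the delete-one jackknife used for such sampling variances. Here the estimator is simply a sample variance over toy data; in the paper it would be a MINQUE-estimated variance component.

```python
import numpy as np

def jackknife(data, estimator):
    """Delete-one jackknife: bias-corrected estimate and its sampling variance."""
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    theta_jack = n * theta_full - (n - 1) * loo.mean()    # bias correction
    var_jack = (n - 1) / n * ((loo - loo.mean()) ** 2).sum()
    return theta_jack, var_jack

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=2.0, size=50)
est, var = jackknife(data, lambda d: d.var(ddof=1))
print(f"jackknife estimate {est:.3f}, sampling variance {var:.3f}")
```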
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
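A minimal sketch of the Dice overlap used for the quantitative validation; the binary masks are toy data.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((100, 100), dtype=bool)
manual[20:60, 20:60] = True            # toy manual segmentation
auto = np.zeros_like(manual)
auto[25:65, 22:62] = True              # toy semiautomatic segmentation
print(f"Dice = {dice(manual, auto):.3f}")
```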
Projection-slice theorem based 2D-3D registration
NASA Astrophysics Data System (ADS)
van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.
2007-03-01
In X-ray guided procedures, the surgeon or interventionalist depends on his or her knowledge of the patient's specific anatomy and on the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections give no information on the patient's anatomy in the dimension along the projection axis. It would therefore be very valuable to provide the surgeon or interventionalist with 3D insight into the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation-invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem based method was shown to be very effective and robust, providing capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
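The Projection-Slice Theorem itself is easy to verify numerically: the 2D Fourier transform of a parallel projection of a volume equals the central slice of the volume's 3D Fourier transform. Comparing Fourier magnitudes then gives a translation-invariant similarity in the spirit of the method. The toy volume, parallel-beam projection, and cyclic shifts below are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
volume = rng.random((32, 32, 32))

# Toy parallel-beam projection along axis 0.
projection = volume.sum(axis=0)

# Projection-slice theorem: the 2D FFT of this projection equals the
# central slice (k0 = 0 plane) of the volume's 3D FFT.
slice_3d = np.fft.fftn(volume)[0, :, :]
fft_proj = np.fft.fft2(projection)
print(np.allclose(fft_proj, slice_3d))   # True

# Translation-invariant similarity: Fourier magnitudes are unchanged by
# cyclic spatial shifts (Fourier shift theorem moves only the phase).
shifted = np.roll(projection, (3, -5), axis=(0, 1))
m1 = np.abs(np.fft.fft2(shifted)).ravel()
m2 = np.abs(slice_3d).ravel()
print(f"magnitude correlation despite shift: {np.corrcoef(m1, m2)[0, 1]:.4f}")
```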
Exploration of robust operating conditions in inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Tromp, John W.; Pomares, Mario; Alvarez-Prieto, Manuel; Cole, Amanda; Ying, Hai; Salin, Eric D.
2003-11-01
'Robust' conditions, as defined by Mermet and co-workers for inductively coupled plasma (ICP)-atomic emission spectrometry, minimize matrix effects on analyte signals, and are obtained by increasing power and reducing nebulizer gas flow. In ICP-mass spectrometry (MS), it is known that reduced nebulizer gas flow usually leads to more robust conditions such that matrix effects are reduced. In this work, robust conditions for ICP-MS have been determined by optimizing for accuracy in the determination of analytes in a multi-element solution with various interferents (Al, Ba, Cs, K, Na), by varying power, nebulizer gas flow, sample introduction rate and ion lens voltage. The goal of the work was to determine which operating parameters were the most important in reducing matrix effects, and whether different interferents yielded the same robust conditions. Reduction in nebulizer gas flow and in sample input rate led to significantly decreased interference, while an increase in power seemed to have a lesser effect. Once the other parameters had been adjusted to their robust values, no additional improvement in accuracy was attainable by adjusting the ion lens voltage. The robust conditions were universal, since, for all the interferents and analytes studied, the optimum was found at the same operating conditions. One drawback to the use of robust conditions was the slightly reduced sensitivity; however, in the context of 'intelligent' instruments, the concept of 'robust conditions' is useful in many cases.
Rank-preserving regression: a more robust rank regression model against outliers.
Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M
2016-08-30
Mean-based semi-parametric regression models, such as the popular generalized estimating equations, are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides estimates that are more robust against outliers than generalized estimating equations. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.
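A minimal sketch of Wilcoxon-score rank regression via Jaeckel's dispersion function, the standard formulation underlying RR; the paper's functional-response-model extension is not reproduced here, and the data are toy values.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def wilcoxon_dispersion(slope, x, y):
    """Jaeckel's dispersion with Wilcoxon scores; it is invariant to the
    intercept, so only the slope is estimated by minimizing it."""
    resid = y - slope * x
    n = len(y)
    scores = np.sqrt(12.0) * (rankdata(resid) / (n + 1.0) - 0.5)
    return np.sum(scores * resid)

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)   # heavy-tailed noise
y[:5] += 50.0                                      # gross outliers

ols_slope = np.polyfit(x, y, 1)[0]
fit = minimize_scalar(lambda b: wilcoxon_dispersion(b, x, y),
                      bounds=(-10.0, 10.0), method="bounded")
slope = fit.x
intercept = np.median(y - slope * x)               # location from median residual
print(f"OLS slope {ols_slope:.2f} vs rank-regression slope {slope:.2f} (true 2.0)")
```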
A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.
Yu, Fei; Lv, Chongyang; Dong, Qianhui
2016-03-18
Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, because complementary navigation information is obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. The uncertainty caused by such lost measurements reduces system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. Taking the measurement uncertainty into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experimental results show that the attitude errors can be reduced effectively by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF), the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.
GO-based functional dissimilarity of gene sets.
Díaz-Díaz, Norberto; Aguilar-Ruiz, Jesús S
2011-09-01
The Gene Ontology (GO) provides a controlled vocabulary for describing the functions of genes and can be used to evaluate the functional coherence of gene sets. Many functional coherence measures consider each pair of gene functions in a set and produce an output based on all pairwise distances. A single gene can encode multiple proteins that may differ in function. For each functionality, other proteins that exhibit the same activity may also participate. Therefore, identifying the most common function for all of the genes involved in a biological process is important in evaluating the functional similarity of groups of genes, and a quantification of functional coherence can help clarify the role of a group of genes working together. To implement this approach to functional assessment, we present GFD (GO-based Functional Dissimilarity), a novel dissimilarity measure for evaluating groups of genes based on the most relevant functions of the whole set. The measure assigns a numerical value to the gene set for each of the three GO sub-ontologies. Results show that GFD performs robustly when applied to gene sets of known functionality (extracted from KEGG). It performs particularly well on randomly generated gene sets. An ROC analysis reveals that the performance of GFD in evaluating the functional dissimilarity of gene sets is very satisfactory. A comparative analysis against other functional measures, such as GS2 and those presented by Resnik and Wang, also demonstrates the robustness of GFD.
Nonlinear Dynamics in Gene Regulation Promote Robustness and Evolvability of Gene Expression Levels.
Steinacher, Arno; Bates, Declan G; Akman, Ozgur E; Soyer, Orkun S
2016-01-01
Cellular phenotypes underpinned by regulatory networks need to respond to evolutionary pressures to allow adaptation, but at the same time be robust to perturbations. This creates a conflict in which mutations affecting regulatory networks must both generate variance but also be tolerated at the phenotype level. Here, we perform mathematical analyses and simulations of regulatory networks to better understand the potential trade-off between robustness and evolvability. Examining the phenotypic effects of mutations, we find an inverse correlation between robustness and evolvability that breaks only with nonlinearity in the network dynamics, through the creation of regions presenting sudden changes in phenotype with small changes in genotype. For genotypes embedding low levels of nonlinearity, robustness and evolvability correlate negatively and almost perfectly. By contrast, genotypes embedding nonlinear dynamics allow expression levels to be robust to small perturbations, while generating high diversity (evolvability) under larger perturbations. Thus, nonlinearity breaks the robustness-evolvability trade-off in gene expression levels by allowing disparate responses to different mutations. Using analytical derivations of robustness and system sensitivity, we show that these findings extend to a large class of gene regulatory network architectures and also hold for experimentally observed parameter regimes. Further, the effect of nonlinearity on the robustness-evolvability trade-off is ensured as long as key parameters of the system display specific relations irrespective of their absolute values. We find that within this parameter regime genotypes display low and noisy expression levels. Our results provide a possible solution to the robustness-evolvability trade-off, suggest an explanation for the ubiquity of nonlinear dynamics in gene expression networks, and generate useful guidelines for the design of synthetic gene circuits.
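A toy illustration of the core idea, using a single Hill-type regulatory input operating on the plateau of its response. The parameter choices and the log-normal "mutation" scheme are illustrative assumptions, not the paper's model.

```python
import numpy as np

def expression(k, n, s=1.0):
    """Steady-state expression under Hill-type activation with threshold k."""
    return s**n / (k**n + s**n)

rng = np.random.default_rng(6)
k0 = 0.4   # operating on the plateau of the response (strong activation)

for n in (1.0, 8.0):   # near-linear vs strongly nonlinear regulation
    base = expression(k0, n)
    # Small vs large multiplicative "mutations" of the threshold k.
    small = np.abs(expression(k0 * np.exp(rng.normal(0.0, 0.05, 5000)), n) - base)
    large = np.abs(expression(k0 * np.exp(rng.normal(0.0, 0.8, 5000)), n) - base)
    print(f"n={n}: mean effect of small mutations {small.mean():.4f} (robustness), "
          f"phenotype diversity under large mutations {large.std():.4f} (evolvability)")
```

With strong nonlinearity (n = 8), small mutations barely move the expression level while large mutations generate far greater phenotypic diversity, mirroring the break in the robustness-evolvability trade-off described above.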
MACRA: A New Age for Physician Payments.
Huston, Kent Kwasind
2017-04-01
The Medicare Access and CHIP Reauthorization Act (MACRA) of 2015 introduced a new system of physician payments in the United States. This legislation, and the complex rules written to enact the law, intend to force a shift away from volume-based payments toward so-called value-based payments. Physicians and other clinicians will be graded via quality and cost metrics, and payments will be adjusted based on performance. Robust use of certified electronic health records is required under MACRA. Physicians will follow one of two payment reform tracks, known as the Merit-Based Incentive Payment System (MIPS) and the Alternative Payment Model (APM) pathways. Although there are rheumatology and other specialty-specific quality measures in the MIPS program, there are no rheumatology-specific APMs to date. A thorough understanding of MACRA is required for medical practices to survive the new era of payment reform.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
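A simplified sketch of the list-based scheme on a toy Euclidean TSP instance: the maximum temperature on the list drives the Metropolis criterion, and an accepted uphill move replaces that maximum with the temperature the move implies. The published LBSA adapts the list using averages over each outer loop, which is omitted here.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, rnd):
    """Random 2-opt style move: reverse a random segment of the tour."""
    i, j = sorted(rnd.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def lbsa_tsp(dist, list_len=100, iters=20000, seed=0):
    rnd = random.Random(seed)
    tour = list(range(len(dist)))
    rnd.shuffle(tour)
    cost = tour_length(tour, dist)
    # Seed the temperature list from the cost changes of random moves.
    temps = sorted(abs(tour_length(two_opt(tour, rnd), dist) - cost) + 1e-9
                   for _ in range(list_len))
    for _ in range(iters):
        cand = two_opt(tour, rnd)
        delta = tour_length(cand, dist) - cost
        if delta < 0:
            tour, cost = cand, cost + delta
        else:
            r = max(rnd.random(), 1e-12)
            if r < math.exp(-delta / temps[-1]):          # max list temperature
                tour, cost = cand, cost + delta
                temps[-1] = max(-delta / math.log(r), 1e-9)  # implied temperature
                temps.sort()
    return tour, cost

rnd = random.Random(1)
pts = [(rnd.random(), rnd.random()) for _ in range(30)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, cost = lbsa_tsp(dist)
print(f"tour length: {cost:.3f}")
```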
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
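A minimal sketch of the information-gap robustness logic behind that preference (toy numbers; real applications embed this inside reserve selection algorithms). For a performance model that degrades linearly with the uncertainty horizon, the robustness of an option is the largest horizon at which its worst-case value still meets the requirement.

```python
def robustness(nominal_value, sensitivity, requirement):
    """Info-gap robustness: largest uncertainty horizon alpha such that the
    worst case, nominal - sensitivity * alpha, still meets the requirement."""
    if nominal_value < requirement:
        return 0.0
    return (nominal_value - requirement) / sensitivity

# Two toy reserve designs with equal nominal biological value:
# A is highly sensitive to data errors, B much less so.
requirement = 80.0
a = robustness(nominal_value=100.0, sensitivity=40.0, requirement=requirement)
b = robustness(nominal_value=100.0, sensitivity=10.0, requirement=requirement)
print(f"robustness horizons: A={a:.2f}, B={b:.2f} -> prefer B")
```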
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another, through linking parameters and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
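A minimal sketch of the dual response surface idea: fit separate surfaces for the mean and the standard deviation of a response over a design variable, then minimize the dispersion surface subject to a constraint on the mean, matching the definition of robust design above. The quadratic surfaces, toy replicated experiment, and target value are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
# Toy designed experiment: one design variable x with replicated runs,
# where both the mean and the noise level depend on x.
x = np.repeat(np.linspace(-1, 1, 9), 5)
y = 10 + 3 * x + 2 * x**2 + rng.normal(scale=0.5 + 0.8 * np.abs(x), size=x.size)

# Per-setting sample mean and std give the two response surfaces.
xs = np.unique(x)
means = np.array([y[x == v].mean() for v in xs])
stds = np.array([y[x == v].std(ddof=1) for v in xs])
mean_coefs = np.polyfit(xs, means, 2)   # quadratic mean surface
std_coefs = np.polyfit(xs, stds, 2)     # quadratic dispersion surface

# Robust design: minimize predicted dispersion s.t. mean <= target.
target = 11.0
res = minimize(lambda v: np.polyval(std_coefs, v[0]), x0=[0.5],
               constraints={"type": "ineq",
                            "fun": lambda v: target - np.polyval(mean_coefs, v[0])},
               bounds=[(-1, 1)])
print(f"robust setting x*={res.x[0]:.3f}, "
      f"mean={np.polyval(mean_coefs, res.x[0]):.2f}, "
      f"std={np.polyval(std_coefs, res.x[0]):.2f}")
```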
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the "Database as Service" model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued Order-Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to multiple different values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be applied directly on encrypted data. Using calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to the database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attack and against estimation of the true values. Experiments show that MV-OPES achieves security for sensitive data with reasonable overhead, establishing the practicability of the scheme.
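A toy illustration of the one-to-many order-preserving idea: each plaintext integer owns a secret, disjoint ciphertext interval, and encryption picks a random point inside it, so repeated plaintexts produce different ciphertexts while ciphertext order matches plaintext order. This is a sketch of the general technique only, not the published MV-OPES construction, and it is not secure.

```python
import bisect
import random

class ToyOneToManyOPE:
    """One-to-many order-preserving encryption sketch (illustrative, insecure)."""

    def __init__(self, max_plain, max_width=1000, seed=42):
        key_rnd = random.Random(seed)      # boundaries act as the secret key
        self.bounds = [0]
        for _ in range(max_plain + 1):
            self.bounds.append(self.bounds[-1] + key_rnd.randint(2, max_width))
        self.rnd = random.Random()

    def encrypt(self, v):
        # Random point inside plaintext v's private interval.
        return self.rnd.randrange(self.bounds[v], self.bounds[v + 1])

    def decrypt(self, c):
        # Find which interval the ciphertext falls into.
        return bisect.bisect_right(self.bounds, c) - 1

ope = ToyOneToManyOPE(max_plain=100)
print(ope.encrypt(5), ope.encrypt(5))   # same plaintext, different ciphertexts
print(ope.encrypt(3) < ope.encrypt(7))  # comparisons work on ciphertexts: True
print(ope.decrypt(ope.encrypt(42)))     # 42
```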
Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul
2010-03-01
The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (the representation of a colour image) into quaternion singular vector and singular value component matrices, exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test sets was 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.
siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens.
Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A; Kim, Hyun Seok
2013-03-01
Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute.
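A minimal sketch of the two scores such a tool reports, using the standard definitions; plate-effect normalization and outlier censoring are omitted, and the screen readout is toy data.

```python
import numpy as np

def z_scores(x):
    """Classical Z-score: deviation from the mean in standard deviations."""
    return (x - x.mean()) / x.std(ddof=1)

def robust_z_scores(x):
    """Robust Z-score: deviation from the median in scaled-MAD units; far
    less influenced by hit wells and outliers than the classical score."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # consistent with SD under normality
    return (x - med) / mad

rng = np.random.default_rng(8)
readout = np.concatenate([rng.normal(100.0, 10.0, 998),
                          [400.0, 420.0]])       # two strong "hits"
print(f"classical Z of a hit: {z_scores(readout)[-1]:.1f}")
print(f"robust Z of a hit:    {robust_z_scores(readout)[-1]:.1f}")
```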
Robust peptidoglycan growth by dynamic and variable multi-protein complexes.
Pazos, Manuel; Peters, Katharina; Vollmer, Waldemar
2017-04-01
In Gram-negative bacteria such as Escherichia coli the peptidoglycan sacculus resides in the periplasm, a compartment that experiences changes in pH value, osmolality, ion strength and other parameters depending on the cell's environment. Hence, the cell needs robust peptidoglycan growth mechanisms to grow and divide under different conditions. Here we propose a model according to which the cell achieves robust peptidoglycan growth by employing dynamic multi-protein complexes, which assemble with variable composition from freely diffusing sets of peptidoglycan synthases, hydrolases and their regulators, whereby the composition of the active complexes depends on the cell cycle state - cell elongation or division - and the periplasmic growth conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.