NASA Astrophysics Data System (ADS)
Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong
2014-09-01
In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of the proposed rational, individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example demonstrates that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.
Distributed robust finite-time nonlinear consensus protocols for multi-agent systems
NASA Astrophysics Data System (ADS)
Zuo, Zongyu; Tie, Lin
2016-04-01
This paper investigates the robust finite-time consensus problem of multi-agent systems in networks with undirected topology. Global nonlinear consensus protocols augmented with a variable structure are constructed with the aid of Lyapunov functions for each single-integrator agent dynamics in the presence of external disturbances. In particular, it is shown that the finite settling time of the proposed general framework for robust consensus design is upper bounded for any initial condition. This makes it possible for network consensus problems to design and estimate the convergence time offline for a multi-agent team with a given undirected information flow. Finally, simulation results are presented to demonstrate the performance and effectiveness of our finite-time protocols.
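As a rough illustration of the kind of variable-structure, finite-time protocol the abstract describes (not the authors' exact design), the sketch below simulates single-integrator agents on an undirected path graph under the sign-power update u_i = Σ_j sign(x_j − x_i)|x_j − x_i|^α with α ∈ (0, 1); the graph, exponent, and step size are illustrative assumptions.

```python
import math

def finite_time_consensus(x0, edges, alpha=0.5, dt=0.01, steps=2000):
    """Euler simulation of a sign-power finite-time consensus protocol
    u_i = sum_j sign(x_j - x_i) * |x_j - x_i|**alpha
    over an undirected graph given as an edge list."""
    x = list(x0)
    n = len(x)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(steps):
        # copysign(|d|**alpha, d) is the odd function sign(d)*|d|**alpha
        u = [sum(math.copysign(abs(x[j] - x[i]) ** alpha, x[j] - x[i])
                 for j in nbrs[i]) for i in range(n)]
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x

# four agents on a path graph; the protocol is antisymmetric, so the
# average state is preserved while the disagreement collapses quickly
x = finite_time_consensus([0.0, 1.0, 4.0, 9.0], [(0, 1), (1, 2), (2, 3)])
spread = max(x) - min(x)
```

The fractional exponent makes the attraction stronger near consensus than a linear protocol, which is what yields the bounded settling time claimed in the paper.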
Decentralized Planning for Autonomous Agents Cooperating in Complex Missions
2010-09-01
"Consensus-based decentralized auctions for robust task allocation," IEEE Transactions on Robotics, vol. 24, pp. 209-222, 2006. [44] H.-L. Choi, L. Brunet, and J. P. How, "Consensus-based decentralized auctions for robust task allocation," ... 2003. [31] L. Brunet, "Consensus-Based Auctions for Decentralized Task Assignment," Master's thesis, Dept.
Consensus on consensus: a synthesis of consensus estimates on human-caused global warming
NASA Astrophysics Data System (ADS)
Cook, John; Oreskes, Naomi; Doran, Peter T.; Anderegg, William R. L.; Verheggen, Bart; Maibach, Ed W.; Carlton, J. Stuart; Lewandowsky, Stephan; Skuce, Andrew G.; Green, Sarah A.; Nuccitelli, Dana; Jacobs, Peter; Richardson, Mark; Winkler, Bärbel; Painting, Rob; Rice, Ken
2016-04-01
The consensus that humans are causing recent global warming is shared by 90%-100% of publishing climate scientists according to six independent studies by co-authors of this paper. Those results are consistent with the 97% consensus reported by Cook et al (Environ. Res. Lett. 8 024024) based on 11 944 abstracts of research papers, of which 4014 took a position on the cause of recent global warming. A survey of authors of those papers (N = 2412 papers) also supported a 97% consensus. Tol (2016 Environ. Res. Lett. 11 048001) comes to a different conclusion using results from surveys of non-experts such as economic geologists and a self-selected group of those who reject the consensus. We demonstrate that this outcome is not unexpected because the level of consensus correlates with expertise in climate science. At one point, Tol also reduces the apparent consensus by assuming that abstracts that do not explicitly state the cause of global warming (‘no position’) represent non-endorsement, an approach that if applied elsewhere would reject consensus on well-established theories such as plate tectonics. We examine the available studies and conclude that the finding of 97% consensus in published climate research is robust and consistent with other surveys of climate scientists and peer-reviewed studies.
Consensus positive position feedback control for vibration attenuation of smart structures
NASA Astrophysics Data System (ADS)
Omidi, Ehsan; Nima Mahmoodi, S.
2015-04-01
This paper presents a new network-based approach for active vibration control in smart structures. In this approach, a network with known topology connects the collocated actuator/sensor elements of the smart structure to one another. Each of these actuators/sensors, i.e., each agent or node, is enhanced by a separate multi-mode positive position feedback (PPF) controller. The decentralized PPF-controlled agents collaborate with each other in the designed network under a certain consensus dynamics. The consensus constraint forces neighboring agents to cooperate with each other such that the disagreement between the time-domain actuation of the agents is driven to zero. The controller output of each agent is calculated using state-space variables; hence, optimal state estimators are designed first for the proposed observer-based consensus PPF control. The consensus controller is numerically investigated for a flexible smart structure, i.e., a thin aluminum beam that is clamped at both ends. Results demonstrate that the consensus law successfully imposes synchronization between the independently controlled agents, as the disagreements between the decentralized PPF controller variables converge to zero in a short time. The new consensus PPF controller brings extra robustness to vibration suppression in smart structures, where malfunctions of an agent can be compensated for by referencing the neighboring agents' performance. This is demonstrated in the results by comparing the new controller with the former centralized PPF approach.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion frame by frame using stereo images. Feature point extraction and matching is one of the key steps in robotic motion estimation, and it largely influences precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the feature points of the next left image are matched with those of the current left image, the EDC and RANSAC are performed iteratively. After these steps, a few mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
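A minimal sketch of the general RANSAC idea used here for mismatch elimination, with a pure 2-D translation standing in for the actual motion model; the model, threshold, and data are illustrative assumptions, not the paper's pipeline.

```python
import random

def ransac_translation(matches, thresh=1.0, iters=200, seed=0):
    """Toy RANSAC over putative matches ((x1, y1), (x2, y2)).
    The hypothesised model is a 2-D translation (a deliberate
    simplification); a match is kept as an inlier when its residual
    distance to the hypothesis is below `thresh` (the Euclidean
    Distance Constraint idea)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        p1, p2 = rng.choice(matches)               # minimal sample: one pair
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]      # hypothesised translation
        inliers = [m for m in matches
                   if (m[1][0] - m[0][0] - dx) ** 2
                    + (m[1][1] - m[0][1] - dy) ** 2 <= thresh ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

good = [((i, i), (i + 5.0, i + 2.0)) for i in range(10)]        # shift (5, 2)
bad = [((0.0, 0.0), (30.0, -7.0)), ((1.0, 2.0), (-9.0, 40.0))]  # mismatches
kept = ransac_translation(good + bad)
```

Whenever a correct pair is drawn, the hypothesis explains all consistent matches, so the inconsistent ones fall outside the distance threshold and are discarded.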
Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization
Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin
2017-01-01
In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, hypotheses are preferentially generated by sampling the input feature points in order of the ages and similarities of the features; second, hypotheses are evaluated with the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses very quickly without verifying all the data points; third, we aggregate the three best hypotheses to obtain the final estimate instead of selecting only the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses early, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and New Tsukuba datasets. Experimental results show that the proposed method achieves better speed and accuracy than RANSAC. PMID:29027935
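The early rejection of bad hypotheses can be caricatured as follows; this is a simplified bailout rule on the running inlier ratio, in the spirit of SPRT-style preemptive verification, not the paper's actual sequential test.

```python
def score_with_bailout(residuals, thresh, min_ratio=0.5, check_every=10):
    """Evaluate a hypothesis against all residuals, but abandon it early:
    after every `check_every` points, stop if the running inlier ratio is
    hopelessly below `min_ratio`. Returns (inlier_count, points_examined)."""
    inliers = 0
    for k, r in enumerate(residuals, start=1):
        if r <= thresh:
            inliers += 1
        if k % check_every == 0 and inliers / k < min_ratio / 2:
            return inliers, k          # bad hypothesis rejected early
    return inliers, len(residuals)

bad_residuals = [10.0] * 100           # all far above the threshold
good_residuals = [0.1] * 100
_, seen_bad = score_with_bailout(bad_residuals, thresh=1.0)
n_good, seen_good = score_with_bailout(good_residuals, thresh=1.0)
```

A hopeless hypothesis is rejected after examining only a fraction of the data, which is where the speed gain over exhaustive verification comes from.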
Robust point matching via vector field consensus.
Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu
2014-04-01
In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
SC3 - consensus clustering of single-cell RNA-Seq data
Kiselev, Vladimir Yu.; Kirschner, Kristina; Schaub, Michael T.; Andrews, Tallulah; Yiu, Andrew; Chandra, Tamir; Natarajan, Kedar N; Reik, Wolf; Barahona, Mauricio; Green, Anthony R; Hemberg, Martin
2017-01-01
Single-cell RNA-seq (scRNA-seq) enables a quantitative cell-type characterisation based on global transcriptome profiles. We present Single-Cell Consensus Clustering (SC3), a user-friendly tool for unsupervised clustering which achieves high accuracy and robustness by combining multiple clustering solutions through a consensus approach. We demonstrate that SC3 is capable of identifying subclones based on the transcriptomes from neoplastic cells collected from patients. PMID:28346451
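SC3's key step is combining multiple clustering solutions through a consensus. A generic co-association sketch of that idea (not SC3's actual pipeline, which operates on scRNA-seq distance matrices and spectral transformations) looks like this:

```python
def coassociation(labelings):
    """Co-association matrix: entry (i, j) is the fraction of base
    clusterings that put points i and j in the same cluster."""
    n = len(labelings[0])
    m = len(labelings)
    return [[sum(lab[i] == lab[j] for lab in labelings) / m
             for j in range(n)] for i in range(n)]

def consensus_clusters(labelings, tau=0.5):
    """Link points whose co-association exceeds tau and return the
    connected components as the consensus partition."""
    C = coassociation(labelings)
    n = len(C)
    comp = [-1] * n
    k = 0
    for s in range(n):
        if comp[s] != -1:
            continue
        stack = [s]
        comp[s] = k
        while stack:                       # depth-first component search
            i = stack.pop()
            for j in range(n):
                if comp[j] == -1 and C[i][j] > tau:
                    comp[j] = k
                    stack.append(j)
        k += 1
    return comp

# three noisy base clusterings of six points (two true groups of three)
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 0]]
labels = consensus_clusters(base)
```

Disagreements in a single base clustering are voted down by the rest of the ensemble, which is the source of the robustness claimed for consensus approaches.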
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
Image registration based on subpixel localization and Cauchy-Schwarz divergence
NASA Astrophysics Data System (ADS)
Ge, Yongxin; Yang, Dan; Zhang, Xiaohong; Lu, Jiwen
2010-07-01
We define a new matching metric-corner Cauchy-Schwarz divergence (CCSD) and present a new approach based on the proposed CCSD and subpixel localization for image registration. First, we detect the corners in an image by a multiscale Harris operator and take them as initial interest points. And then, a subpixel localization technique is applied to determine the locations of the corners and eliminate the false and unstable corners. After that, CCSD is defined to obtain the initial matching corners. Finally, we use random sample consensus to robustly estimate the parameters based on the initial matching. The experimental results demonstrate that the proposed algorithm has a good performance in terms of both accuracy and efficiency.
Consensus-Based Cooperative Spectrum Sensing with Improved Robustness Against SSDF Attacks
NASA Astrophysics Data System (ADS)
Liu, Quan; Gao, Jun; Guo, Yunwei; Liu, Siyang
2011-05-01
Based on the consensus algorithm, an attack-proof cooperative spectrum sensing (CSS) scheme is presented for decentralized cognitive radio networks (CRNs), where a common fusion center is not available and some malicious users may launch attacks with spectrum sensing data falsification (SSDF). Local energy detection is firstly performed by each secondary user (SU), and then, utilizing the consensus notions, each SU can make its own decision individually only by local information exchange with its neighbors rather than any centralized fusion used in most existing schemes. With the help of some anti-attack tricks, each authentic SU can generally identify and exclude those malicious reports during the interactions within the neighborhood. Compared with the existing solutions, the proposed scheme is proved to have much better robustness against three categories of SSDF attack, without requiring any a priori knowledge of the whole network.
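The local-exchange update with attack exclusion can be sketched as follows; the clipping rule, topology, and attacker model are illustrative stand-ins for the paper's anti-attack mechanisms, not its actual scheme.

```python
def robust_consensus_step(x, nbrs, eps=0.2, clip=5.0):
    """One consensus iteration in which each node ignores neighbour
    reports deviating from its own value by more than `clip` (a simple
    stand-in for identifying and excluding malicious SSDF reports)."""
    new = []
    for i, xi in enumerate(x):
        usable = [x[j] for j in nbrs[i] if abs(x[j] - xi) <= clip]
        new.append(xi + eps * sum(v - xi for v in usable))
    return new

# four honest sensors plus one SSDF attacker (node 4) reporting a huge energy
nbrs = [[1, 2], [0, 3], [0, 3, 4], [1, 2, 4], [2, 3]]
x = [1.0, 2.0, 3.0, 4.0, 100.0]
for _ in range(100):
    x = robust_consensus_step(x, nbrs)
    x[4] = 100.0                  # attacker keeps falsifying its report
honest = x[:4]
```

The honest nodes converge to the average of their own initial measurements (2.5 here) because the attacker's report is excluded at every step, so it never biases the consensus.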
Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials
NASA Astrophysics Data System (ADS)
Fruhnert, Michael; Corless, Martin
2017-10-01
This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
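The Hurwitz property of a second-order polynomial with complex coefficients can be checked numerically by solving the quadratic directly; this brute-force root check is a stand-in for the paper's algebraic conditions, and the coefficients below are arbitrary examples.

```python
import cmath

def is_hurwitz_quadratic(b, c):
    """Check whether p(s) = s**2 + b*s + c (with b, c possibly complex)
    is Hurwitz, i.e. both roots lie strictly in the open left half-plane,
    by applying the quadratic formula with complex arithmetic."""
    disc = cmath.sqrt(b * b - 4 * c)
    roots = ((-b + disc) / 2, (-b - disc) / 2)
    return all(r.real < 0 for r in roots)

# p(s) = s^2 + (2+i)s + (1+i) has roots -1 and -1-i: Hurwitz
stable = is_hurwitz_quadratic(2 + 1j, 1 + 1j)
# p(s) = s^2 - s + 1 has roots with real part +1/2: not Hurwitz
unstable = is_hurwitz_quadratic(-1 + 0j, 1 + 0j)
```

Complex coefficients arise exactly as the abstract notes: the graph Laplacian of a directed network may have complex eigenvalues, and each eigenvalue contributes one such quadratic to the consensus stability analysis.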
Operator Objective Function Guidance for a Real-Time Unmanned Vehicle Scheduling Algorithm
2012-12-01
"Consensus-Based Decentralized Auctions for Robust Task Allocation," IEEE Transactions on Robotics and Automation, Vol. 25, No. 4, 2009, pp. 912... planning for the fleet. The decentralized task planner used in OPS-USERS is the consensus-based bundle algorithm (CBBA), a decentralized, polynomial... and surveillance (OPS-USERS), which leverages decentralized algorithms for vehicle routing and task allocation. This
Locally Weighted Ensemble Clustering.
Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang
2018-05-01
Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite this significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially when there is no access to data features or specific assumptions on the data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
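The effect of acting on several predicted steps per round can be illustrated with a routine row-stochastic consensus iteration; here q = 2 is mimicked simply by applying the weight matrix q + 1 times per round, which is only a caricature of the proposed predictive output mechanism (the weight matrix below is an arbitrary example).

```python
def consensus_round(x, W):
    """One routine consensus update x <- W x for a row-stochastic W."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

def spread(x):
    """Maximum disagreement across the network."""
    return max(x) - min(x)

# row-stochastic weights on a 3-node graph containing a spanning tree
W = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
x0 = [0.0, 3.0, 9.0]

routine = consensus_round(x0, W)       # one routine step
predictive = x0
for _ in range(3):                     # q = 2: use q + 1 predicted steps
    predictive = consensus_round(predictive, W)
```

Each extra predicted step multiplies the contraction factor again, which is the sense in which the abstract's convergence factor improves by a power of q + 1 over the routine protocol.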
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
Delay-dependent coupling for a multi-agent LTI consensus system with inter-agent delays
NASA Astrophysics Data System (ADS)
Qiao, Wei; Sipahi, Rifat
2014-01-01
Delay-dependent coupling (DDC) is considered in this paper in a broadly studied linear time-invariant multi-agent consensus system in which agents communicate with each other under homogeneous delays, while attempting to reach consensus. The coupling among the agents is designed here as an explicit parameter of this delay, allowing couplings to autonomously adapt based on the delay value, and in order to guarantee stability and a certain degree of robustness in the network despite the destabilizing effect of delay. Design procedures, analysis of convergence speed of consensus, comprehensive numerical studies for the case of time-varying delay, and limitations are presented.
NASA Astrophysics Data System (ADS)
Li, Yongfu; Li, Kezhi; Zheng, Taixiong; Hu, Xiangdong; Feng, Huizong; Li, Yinguo
2016-05-01
This study proposes a feedback-based platoon control protocol for connected autonomous vehicles (CAVs) under different network topologies of initial states. In particular, algebraic graph theory is used to describe the network topology. Then, the leader-follower approach is used to model the interactions between CAVs. In addition, a feedback-based protocol is designed to control the platoon, considering the longitudinal and lateral gaps simultaneously as well as different network topologies. The stability and consensus of the vehicular platoon are analyzed using the Lyapunov technique. The effects of different network topologies of initial states on the convergence time and robustness of platoon control are investigated. Results from numerical experiments demonstrate the effectiveness of the proposed protocol with respect to position and velocity consensus in terms of convergence time and robustness. The findings also illustrate that the convergence time of the control protocol is associated with the initial states, while the robustness is not significantly affected by them.
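A minimal leader-follower platoon simulation in the spirit of the abstract, restricted to longitudinal gaps with a predecessor-following topology and hypothetical PD gains; this is not the proposed protocol, only a sketch of the control goal.

```python
def simulate_platoon(p0, v_leader=10.0, gap=8.0, kp=1.0, kd=2.0,
                     dt=0.01, steps=5000):
    """Each follower applies PD feedback on its spacing error to the
    vehicle ahead; the leader cruises at constant speed. Returns final
    positions and velocities after `steps` Euler steps."""
    p = list(p0)                        # positions, p[0] is the leader
    v = [v_leader] + [0.0] * (len(p0) - 1)
    for _ in range(steps):
        a = [0.0]                       # leader: zero acceleration
        for i in range(1, len(p)):
            e = (p[i - 1] - p[i]) - gap            # spacing error
            a.append(kp * e + kd * (v[i - 1] - v[i]))
        v = [vi + dt * ai for vi, ai in zip(v, a)]
        p = [pi + dt * vi for pi, vi in zip(p, v)]
    return p, v

p, v = simulate_platoon([0.0, -20.0, -45.0])
gaps = [p[0] - p[1], p[1] - p[2]]
```

With these gains each spacing error obeys a critically damped second-order dynamic, so the gaps settle to the desired value and all velocities converge to the leader's speed.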
Liu, Xiaoyang; Ho, Daniel W C; Cao, Jinde; Xu, Wenying
This brief investigates the problem of finite-time robust consensus (FTRC) for second-order nonlinear multiagent systems with external disturbances. Based on the global finite-time stability theory of discontinuous homogeneous systems, a novel finite-time convergent discontinuous disturbed observer (DDO) is proposed for the leader-following multiagent systems. The states of the designed DDO are then used to design the control inputs to achieve the FTRC of nonlinear multiagent systems in the presence of bounded disturbances. The simulation results are provided to validate the effectiveness of these theoretical results.
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used to build the calibration model. With an optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
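The consensus-modeling principle — train several base models on different data subsets and fuse their predictions — can be sketched with ordinary linear fits standing in for the LS-SVR base learners; all names and data below are illustrative.

```python
import random

def fit_line(pts):
    """Least-squares fit y = a*x + b to a list of (x, y) points."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def consensus_predict(data, x, n_models=25, frac=0.6, seed=1):
    """Consensus modelling in miniature: train many base models on random
    subsets of the calibration data and average their predictions."""
    rng = random.Random(seed)
    k = max(2, int(frac * len(data)))
    preds = []
    for _ in range(n_models):
        sub = rng.sample(data, k)       # random calibration subset
        a, b = fit_line(sub)
        preds.append(a * x + b)
    return sum(preds) / n_models

data = [(t, 2.0 * t + 1.0) for t in range(10)]   # noise-free line y = 2x + 1
yhat = consensus_predict(data, 4.0)
```

Averaging over subset-trained models damps the variance any single model would inherit from its particular training subset, which is the stability argument the abstract makes.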
Robust image matching via ORB feature and VFC for mismatch removal
NASA Astrophysics Data System (ADS)
Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie
2018-03-01
Image matching is at the base of many image processing and computer vision problems, such as object recognition and structure from motion. Current methods rely on good feature descriptors and mismatch removal strategies for detection and matching. In this paper, we propose a robust image matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an outstanding feature; it offers performance comparable to SIFT at lower computational cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch removal method. The experimental results demonstrate that our method is efficient and robust.
NASA Astrophysics Data System (ADS)
Yang, Hong-Yong; Zhang, Shun; Zong, Guang-Deng
2011-01-01
In this paper, the trajectory control of multi-agent dynamical systems with exogenous disturbances is studied. Assuming the agents form a scale-free network topology, the disturbance-rejection performance of low-degree and high-degree nodes is analyzed. First, the consensus of multi-agent systems without disturbances is studied by designing a pinning control strategy on a subset of the agents, where this pinning control brings the agents' states to an expected consensus track. Then, the influence of the disturbances is considered by developing disturbance observers, and disturbance-observer-based control (DOBC) is developed to estimate disturbances generated by an exogenous system. Asymptotic consensus of the multi-agent systems with disturbances under the composite controller can be achieved for scale-free network topologies. Finally, examples of multi-agent systems with scale-free network topology and exogenous disturbances verify the results. Under the DOBC with the designed parameters, the trajectory convergence of multi-agent systems is studied by pinning two classes of nodes. We find that pinning a high-degree node yields stronger robustness to exogenous disturbances than pinning a low-degree node.
Hisano, Mizue; Connolly, Sean R.; Robbins, William D.
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. 
They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing. PMID:21966402
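The bootstrap procedure the authors use to quantify uncertainty in demographic rates can be sketched generically as a percentile bootstrap; the survival values below are illustrative stand-ins, not data from the study:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical annual survival estimates for a shark population
survival = [0.85, 0.88, 0.82, 0.90, 0.86, 0.84, 0.87, 0.89]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(survival, mean)
```

Comparing how much two such intervals overlap is one simple way to judge agreement between estimates from different methods.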
MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.
2000-01-01
Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. 
Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
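As a toy illustration of how a consensus value can estimate the central tendency of several published guidelines, one common choice is the geometric mean; the guideline values below are hypothetical, not the SECs derived in the study:

```python
import math

def consensus_sec(guideline_values):
    """Geometric mean as one estimate of the central tendency
    of several published sediment-quality guideline values."""
    logs = [math.log(v) for v in guideline_values]
    return math.exp(sum(logs) / len(logs))

# Hypothetical threshold-effect guideline values for total PCBs (ug/kg)
tec_values = [30.0, 40.0, 35.0, 50.0, 32.0]
tec = consensus_sec(tec_values)
```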
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Zhu, Fanglai
2018-03-01
In this paper, the output consensus of a class of linear heterogeneous multi-agent systems with unmatched disturbances is considered. Firstly, based on the relative output information among neighboring agents, we propose an asymptotic sliding-mode-based consensus control scheme under which the output consensus error converges to zero by removing the disturbances from the output channels. Secondly, in order to reach the consensus goal, we design a novel high-order unknown input observer for each agent. It estimates not only each agent's states and disturbances, but also the disturbances' high-order derivatives, which are required by the aforementioned control scheme. The observer-based consensus control laws and the convergence analysis of the consensus error dynamics are given. Finally, a simulation example is provided to verify the validity of our methods.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
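A minimal sketch of M-estimation with Huber-type weights for a location parameter, using iteratively reweighted averaging with MAD scaling; this is a generic illustration, not the article's two-level regression estimator:

```python
def huber_weight(r, c=1.345):
    """Huber-type weight: 1 inside the cutoff, downweighted outside."""
    a = abs(r)
    return 1.0 if a <= c else c / a

def robust_mean(xs, c=1.345, iters=50):
    """M-estimate of location via iteratively reweighted averaging,
    scaling residuals by the median absolute deviation (MAD)."""
    mu = sorted(xs)[len(xs) // 2]  # start from the median
    for _ in range(iters):
        med = sorted(xs)[len(xs) // 2]
        mad = sorted(abs(x - med) for x in xs)[len(xs) // 2] or 1.0
        w = [huber_weight((x - mu) / (1.4826 * mad), c) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [2.1, 1.9, 2.0, 2.2, 1.8, 9.0]  # one gross outlier
```

The outlier receives a small weight, so the estimate stays near 2 while the ordinary mean is pulled above 3.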
NASA Astrophysics Data System (ADS)
Ahmed, Mousumi
Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to nonlinear controller design are studied, and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore controllers for cooperative and coordinated unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment is designed for a single UAV. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller computes commands for three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, to track the target with the unmanned vehicle. Numerical implementation is performed in MATLAB, assuming perfect and full state information of the target, to investigate the accuracy of the proposed controller. This controller design is then fixed and carried over to the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. Consensus-based cooperative control is studied; such consensus problems can be viewed through concepts from algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are the nodes and communication links are the edges. The previously designed controller is augmented so that the group reaches consensus based on its communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and simulation results for different communication topologies are shown.
This research also investigates cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator for the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties in the measurement and dynamics models are treated as zero-mean Gaussian noises with known covariances. Measurements of the full state of the target are not available; only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, where the state measurement is available through onboard sensors. The controller computes the three control commands based on the estimated states of the target and of the UAV itself. Estimation-based control laws are also implemented for colored-noise measurement uncertainties, and the controller performance is shown with simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network, and a separate estimator is used to estimate the target states. All of the UAVs in the network apply the same control law; the only difference is that each UAV updates its commands according to its connections. Simulations are performed for both fixed and time-varying communication topologies. Monte Carlo simulation with different noise samples is also performed to investigate the performance of the estimator. The proposed technique is shown to be simple and robust in noisy environments.
Study of consensus-based time synchronization in wireless sensor networks.
He, Jianping; Li, Hao; Chen, Jiming; Cheng, Peng
2014-03-01
Recently, various consensus-based protocols have been developed for time synchronization in wireless sensor networks. However, due to the uncertainties lying in both the hardware fabrication and network communication processes, it is not clear how most of the protocols will perform in real implementations. In order to narrow this gap, this paper investigates whether and how typical consensus-based time synchronization protocols can tolerate the uncertainties in practical sensor networks through extensive testbed experiments. For two typical protocols, i.e., Average Time Synchronization (ATS) and Maximum Time Synchronization (MTS), we first analyze how the time synchronization accuracy will be affected by various uncertainties in the system. Then, we implement both protocols on our sensor network testbed consisting of MicaZ nodes, and investigate the time synchronization performance and robustness under various network settings. Noticing that the synchronized clocks under MTS may be slightly faster than the desirable clock, by adopting both maximum consensus and minimum consensus, we propose a modified protocol, MMTS, which is able to drive the synchronized clocks closer to the desirable clock while maintaining the convergence rate and synchronization accuracy of MTS. © 2013 ISA. Published by ISA. All rights reserved.
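The maximum-consensus idea behind MTS can be sketched for an offset-only clock model (clock skew, communication delays, and MMTS's minimum-consensus correction are omitted); the topology and offsets below are illustrative:

```python
def max_consensus(offsets, edges, rounds=10):
    """Maximum consensus: each node repeatedly adopts the largest
    clock value among itself and its neighbours (offset-only model)."""
    x = list(offsets)
    for _ in range(rounds):
        nxt = list(x)
        for i, j in edges:
            m = max(x[i], x[j])
            nxt[i] = max(nxt[i], m)
            nxt[j] = max(nxt[j], m)
        x = nxt
    return x

offsets = [0.0, 3.5, 1.2, 2.8]      # illustrative clock offsets (ms)
edges = [(0, 1), (1, 2), (2, 3)]    # a line topology
synced = max_consensus(offsets, edges)
```

A rough analogue of MMTS would additionally run a minimum-consensus pass and steer the synchronized clocks toward the midpoint of the two results.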
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Alatise, Mary B; Hancke, Gerhard P
2017-09-21
Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with monocular vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, the speeded-up robust features (SURF) object detection algorithm and the random sample consensus (RANSAC) algorithm were integrated and used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, accuracy improves because of their ability to find interest points (features) under different viewing conditions using the Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified by ground truth data and root mean square errors (RMSEs).
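The RANSAC step can be illustrated with a line-fitting toy example: sample a minimal set, fit a model, count inliers, and keep the best hypothesis. This is a generic sketch, not the paper's pose-estimation pipeline:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b by random sample consensus: repeatedly fit a
    line to two random points and keep the one with most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Ten exact inliers on y = 2x + 1 plus two gross outliers
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -25.0)]
(a, b), n_in = ransac_line(pts)
```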
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient of predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
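The two-stage idea (pre-filter with a single-model score, then score every model against the structural consensus of the retained subset) can be caricatured with one-dimensional stand-ins for models; the similarity function here is a placeholder for a GDT_TS-like structural similarity, not QMEAN itself:

```python
def consensus_scores(models, prefilter, top_frac=0.5):
    """Sketch of two-stage consensus scoring: rank models with a
    single-model score, keep the top fraction as a reference set,
    then score every model by its mean similarity to that set."""
    ranked = sorted(models, key=prefilter, reverse=True)
    ref = ranked[: max(1, int(len(ranked) * top_frac))]

    def similarity(a, b):  # placeholder for a GDT_TS-like measure
        return 1.0 / (1.0 + abs(a - b))

    return {m: sum(similarity(m, r) for r in ref) / len(ref)
            for m in models}

# Toy 1-D "models": three near-native structures and one outlier
models = [0.9, 1.0, 1.1, 5.0]
scores = consensus_scores(models, prefilter=lambda m: -abs(m - 1.0))
```

Pre-filtering keeps the outlier out of the reference set, so it cannot drag the consensus toward itself.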
ERIC Educational Resources Information Center
Lee, I-Ching; Pratto, Felicia; Johnson, Blair T.
2011-01-01
A meta-analysis examined the extent to which socio-structural and psycho-cultural characteristics of societies correspond with how much gender and ethnic/racial groups differ on their support of group-based hierarchy. Robustly, women opposed group-based hierarchy more than men did, and members of lower power ethnic/racial groups opposed…
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks.
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-13
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, while adapting to uncertain bounded communication delays.
Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D
2010-03-01
Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard proteins data sets (ISBv1, sPRG2006) and their published analysis, demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
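The minimum-consensus idea (accept a peptide-spectrum assignment only when enough engines agree) can be sketched as a vote count; the engine outputs below are hypothetical, and the error-rate normalization step is omitted:

```python
def consensus_peptides(engine_hits, min_engines=2):
    """Minimum-consensus model: accept a peptide-spectrum assignment
    only if at least `min_engines` search engines agree on it."""
    votes = {}
    for hits in engine_hits:            # one dict per engine
        for spectrum, peptide in hits.items():
            votes.setdefault((spectrum, peptide), 0)
            votes[(spectrum, peptide)] += 1
    return {s: p for (s, p), n in votes.items() if n >= min_engines}

# Hypothetical spectrum -> peptide assignments from three engines
e1 = {"s1": "PEPTIDE", "s2": "SEQ"}
e2 = {"s1": "PEPTIDE", "s2": "OTHER"}
e3 = {"s1": "PEPTIDE", "s3": "XYZ"}
accepted = consensus_peptides([e1, e2, e3])
```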
Consensus-based distributed estimation in multi-agent systems with time delay
NASA Astrophysics Data System (ADS)
Abdelmawgoud, Ahmed
In recent years, research on cooperative control of robot swarms, especially unmanned aerial vehicles (UAVs), has advanced with the growth of UAV applications. The ability to track targets using UAVs has a wide range of applications, both civilian and military. For civilian applications, UAVs can perform tasks including, but not limited to: mapping an unknown area, weather forecasting, land surveys, and search and rescue missions. On the other hand, for military personnel, UAVs can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications, including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent involved in the network forms a weighted average of its own estimated value of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm that reaches consensus under time-delay constraints. The proposed algorithm's performance was observed in a scenario where a swarm of UAVs measures the location of a maneuvering ground target. We assume that each UAV computes its state prediction and shares it with its neighbors only. However, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios were also simulated to examine effectiveness and performance in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach consensus, for each parameter contributing to the proposed algorithm.
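A discrete-time sketch of weighted-average consensus in which each agent mixes in delayed neighbour states; the topology, delays, and gain below are illustrative, not the dissertation's algorithm:

```python
def delayed_consensus(x0, neighbours, delays, eps=0.2, steps=60):
    """Discrete-time average consensus where agent i uses the
    d-step-old state of neighbour j: x_i += eps * (x_j[t-d] - x_i[t])."""
    hist = [list(x0)]
    for _ in range(steps):
        cur = hist[-1]
        nxt = list(cur)
        for i, nbrs in enumerate(neighbours):
            for j in nbrs:
                d = delays.get((i, j), 0)
                past = hist[max(0, len(hist) - 1 - d)][j]
                nxt[i] += eps * (past - cur[i])
        hist.append(nxt)
    return hist[-1]

x0 = [0.0, 4.0, 8.0]            # initial state estimates
neighbours = [[1], [0, 2], [1]] # a line graph
delays = {(0, 1): 2, (2, 1): 1} # illustrative link delays (steps)
final = delayed_consensus(x0, neighbours, delays)
```

With a small enough gain the agents still agree despite the stale information, though delays can shift the agreed value away from the exact average.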
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibrating amplitudes of the video became increasingly large.
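A scalar Kalman filter over a constant-state model illustrates the smoothing role the filter plays on estimated motion parameters; the noise settings and the jitter sequence are illustrative, not the paper's full model:

```python
def kalman_smooth(measurements, q=1e-3, r=0.5):
    """Scalar constant-state Kalman filter: smooth a noisy motion
    parameter (e.g. inter-frame translation) for stabilization."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p += q               # predict (state assumed constant)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

# Hypothetical noisy inter-frame translation estimates (pixels)
jitter = [1.0, 1.4, 0.7, 1.2, 0.9, 1.1, 1.3, 0.8]
smooth = kalman_smooth(jitter)
```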
Toward automated assessment of health Web page quality using the DISCERN instrument.
Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael
2017-05-01
As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. 
Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Enhanced echolocation via robust statistics and super-resolution of sonar images
NASA Astrophysics Data System (ADS)
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. Particularly, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other important features that are known to be crucial for dolphin echolocation. A varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem to two dimensions. Thanks to the advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed application of robust statistics to the acoustic images produced by forward looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and the performance is studied.
The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman filter-like manner, and also that the resulting expression is identical to a W-estimator with a specific weight function.
NASA Astrophysics Data System (ADS)
Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.
2018-07-01
Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.
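The spirit of combining two photo-z probability distributions into a consensus estimate can be shown with a floored product rule over a shared redshift grid; this is a naive stand-in, not the paper's hierarchical Bayesian combination:

```python
def combine_pdfs(p1, p2, floor=1e-3):
    """Naive consensus of two discretized photo-z PDFs: a floored
    product rule (the floor keeps one method's zero from vetoing
    redshifts the other method supports), renormalized to sum to 1."""
    prod = [(a + floor) * (b + floor) for a, b in zip(p1, p2)]
    s = sum(prod)
    return [v / s for v in prod]

# Two hypothetical PDFs on the same coarse redshift grid
p1 = [0.1, 0.6, 0.2, 0.1]   # e.g. template-based estimate
p2 = [0.2, 0.5, 0.2, 0.1]   # e.g. Gaussian-process estimate
combined = combine_pdfs(p1, p2)
```

Where the two methods agree, the combined PDF becomes more sharply peaked than either input.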
Automating the expert consensus paradigm for robust lung tissue classification
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.
2012-03-01
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.
Distributed attitude synchronization of formation flying via consensus-based virtual structure
NASA Astrophysics Data System (ADS)
Cong, Bing-Long; Liu, Xiang-Dong; Chen, Zhen
2011-06-01
This paper presents a general framework for synchronized multiple-spacecraft rotations via a consensus-based virtual structure. In this framework, attitude control systems for the formation spacecraft and the virtual structure are designed separately. Both parametric uncertainty and external disturbance are taken into account. A time-varying sliding mode control (TVSMC) algorithm is designed to improve the robustness of the actual attitude control system. As for the virtual attitude control system, a behavioral consensus algorithm is presented to accomplish the attitude maneuver of the entire formation and guarantee a consistent attitude among the local virtual structure counterparts during the maneuver. A multiple virtual sub-structures (MVSSs) system is introduced to enhance the current virtual structure scheme when large numbers of spacecraft are involved in the formation. The spacecraft attitude is represented by modified Rodrigues parameters (MRPs) for their non-redundancy. Finally, a numerical simulation with three synchronization situations is employed to illustrate the effectiveness of the proposed strategy.
Video-based measurements for wireless capsule endoscope tracking
NASA Astrophysics Data System (ADS)
Spyrou, Evaggelos; Iakovidis, Dimitris K.
2014-01-01
The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded-up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by applying this method to wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for cost-effective localization and travel-distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.
Davis, Jennifer C; Verhagen, Evert; Bryan, Stirling; Liu-Ambrose, Teresa; Borland, Jeff; Buchner, David; Hendriks, Marike R C; Weiler, Richard; Morrow, James R; van Mechelen, Willem; Blair, Steven N; Pratt, Mike; Windt, Johann; al-Tunaiji, Hashel; Macri, Erin; Khan, Karim M
2014-06-01
This article describes major topics discussed at the 'Economics of Physical Inactivity Consensus Workshop' (EPIC), held in Vancouver, Canada, in April 2011. Specifically, we (1) detail existing evidence on effective physical inactivity prevention strategies; (2) introduce economic evaluation and its role in health policy decisions; (3) discuss key challenges in establishing and building health economic evaluation evidence (including accurate and reliable costs and clinical outcome measurement) and (4) provide insight into interpretation of economic evaluations in this critically important field. We found that most methodological challenges are related to (1) accurately and objectively valuing outcomes; (2) determining meaningful clinically important differences in objective measures of physical inactivity; (3) estimating investment and disinvestment costs and (4) addressing barriers to implementation. We propose that guidelines specific for economic evaluations of physical inactivity intervention studies are developed to ensure that related costs and effects are robustly, consistently and accurately measured. This will also facilitate comparisons among future economic evidence. Published by the BMJ Publishing Group Limited.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
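The RANSAC refinement step described above can be sketched in a few lines. The following is a minimal illustration for a translation-only model between matched feature points; the point counts, threshold, and synthetic data are our assumptions, not the authors' 6-DoF implementation:

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=1.0, seed=None):
    """Estimate a 2-D translation between matched point sets with RANSAC,
    returning the estimate from the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))                # minimal sample: one match
        t = dst[i] - src[i]                       # candidate translation
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full inlier set for the final estimate
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# synthetic matches: 40 inliers under a known shift plus 10 gross outliers
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, size=(50, 2))
dst = src + np.array([3.0, -2.0])
dst[:10] += rng.uniform(5.0, 30.0, size=(10, 2))  # mismatched features
t_hat, inliers = ransac_translation(src, dst, seed=1)
```

With outliers rejected, the remaining inlier displacements give the motion estimate; the paper's actual pipeline replaces this toy model with the full six-parameter objective minimized by Levenberg-Marquardt.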
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMUs) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Valuing Reductions in Fatal Illness Risks: Implications of Recent Research.
Robinson, Lisa A; Hammitt, James K
2016-08-01
The value of mortality risk reductions, conventionally expressed as the value per statistical life, is an important determinant of the net benefits of many government policies. US regulators currently rely primarily on studies of fatal injuries, raising questions about whether different values might be appropriate for risks associated with fatal illnesses. Our review suggests that, despite the substantial expansion of the research base in recent years, few US studies of illness-related risks meet criteria for quality, and those that do yield similar values to studies of injury-related risks. Given this result, combining the findings of these few studies with the findings of the more robust literature on injury-related risks appears to provide a reasonable range of estimates for application in regulatory analysis. Our review yields estimates ranging from about $4.2 million to $13.7 million with a mid-point of $9.0 million (2013 dollars). Although the studies we identify differ from those that underlie the values currently used by Federal agencies, the resulting estimates are remarkably similar, suggesting that there is substantial consensus emerging on the values applicable to the general US population. Copyright © 2015 John Wiley & Sons, Ltd.
Robust Magnetotelluric Impedance Estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2010-12-01
Robust magnetotelluric (MT) response function estimators are now in standard use by the induction community. Properly devised and applied, these have the ability to reduce the influence of unusual data (outliers). The estimators always yield impedance estimates which are better than conventional least squares (LS) estimation because 'real' MT data almost never satisfy the statistical assumptions of Gaussianity and stationarity upon which normal spectral analysis is based. This paper discusses the development and application of robust estimation procedures, which can be classified as M-estimators, to MT data. Starting with a description of the estimators, special attention is addressed to the recent development of bounded-influence robust estimation, including utilization of the Hilbert transform (HT) operation on causal MT impedance functions. The resulting robust performance is illustrated using synthetic as well as real MT data.
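The M-estimation idea discussed here can be illustrated on an ordinary linear regression. This is a generic sketch with the standard Huber weight function fitted by iteratively reweighted least squares on synthetic data; it is not the paper's bounded-influence MT estimator:

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """Robust linear regression: Huber M-estimator fitted by
    iteratively reweighted least squares (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        mad = np.median(np.abs(r - np.median(r)))
        s = mad / 0.6745 if mad > 0 else 1.0          # robust residual scale
        u = np.abs(r) / s
        w = np.ones_like(u)
        w[u > k] = k / u[u > k]                       # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# synthetic line y = 2 + 0.5 x with two gross outliers
x = np.arange(20.0)
y = 2.0 + 0.5 * x
y[3] += 50.0
y[12] += 50.0
X = np.column_stack([np.ones_like(x), x])
beta = huber_irls(X, y)
```

The Huber weights leave well-behaved residuals untouched and downweight large ones, so the two contaminated points barely influence the fit.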
Freezing period strongly impacts the emergence of a global consensus in the voter model
Wang, Zhen; Liu, Yi; Wang, Lin; Zhang, Yan; Wang, Zhen
2014-01-01
It is well known that human beings do not always change opinions or attitudes, since the duration of interaction with others usually has a significant impact on one's decision-making. Based on this observation, we introduce a freezing period into the voter model, in which frozen individuals have a weakened opinion-switching ability. We reveal the presence of an optimal freezing period, which leads to the fastest consensus, using computer simulations as well as theoretical analysis. We demonstrate that the essence of the accelerated consensus is the biased random walk of the interface between adjacent opinion clusters. The emergence of an optimal freezing period is robust against the size of the system and the number of distinct opinions. This study is instructive for understanding human collective behavior in other relevant fields. PMID:24398458
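A toy simulation in the spirit of this model can be written in a few lines; the ring topology, binary opinions, and freezing rule below are our illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def voter_with_freezing(n=20, tau=5, max_steps=200_000, seed=0):
    """Voter model on a ring: an agent that just switched opinion is
    frozen (cannot switch again) for `tau` of its own update attempts."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=n)
    frozen = np.zeros(n, dtype=int)            # remaining freeze per agent
    for step in range(max_steps):
        i = rng.integers(n)
        if frozen[i] > 0:
            frozen[i] -= 1                     # frozen agents sit out
            continue
        j = (i + rng.choice([-1, 1])) % n      # a random ring neighbour
        if state[j] != state[i]:
            state[i] = state[j]
            frozen[i] = tau
        if np.unique(state).size == 1:         # global consensus reached
            return state, step
    return state, max_steps

final_state, steps = voter_with_freezing()
```

Sweeping `tau` and averaging consensus times over many seeds would reproduce the kind of optimal-freezing-period curve the authors report.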
Rank-preserving regression: a more robust rank regression model against outliers.
Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M
2016-08-30
Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides more robust estimates over generalized estimating equations against outliers. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.
Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang
2017-11-01
Modern civil aircraft use air-ventilated pressurized cabins with limited space. In order to monitor multiple contaminants and overcome the hypersensitivity of a single sensor, this paper constructs an output-correction integrated sensor configuration using sensors with different measurement principles, after comparing it with two other configurations. The proposed configuration works as a node in a distributed wireless sensor network for contaminant monitoring. Corresponding measurement error models of the integrated sensors are also proposed, using the Kalman consensus filter to estimate states and perform data fusion in order to regulate the single-sensor measurement results. The paper develops a sufficient proof of Kalman consensus filter stability in the presence of system and observation noises, and compares the mean estimation and mean consensus errors of the Kalman consensus filter and the local Kalman filter. A numerical example shows the effectiveness of the algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
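The Kalman consensus filter named here combines a local Kalman update with a consensus term on neighbours' estimates. The following scalar sketch on a four-node ring is a generic illustration with assumed noise levels and consensus gain, not the paper's integrated-sensor error model:

```python
import numpy as np

def kalman_consensus(zs, adj, q=1e-4, r=1.0, eps=0.1):
    """Scalar Kalman consensus filter for a constant state.

    zs  : (T, N) measurements, one column per sensor node
    adj : (N, N) 0/1 adjacency matrix of the sensor network
    """
    T, N = zs.shape
    x = np.zeros(N)                        # per-node state estimates
    P = np.ones(N)                         # per-node error variances
    for t in range(T):
        x_pred, P_pred = x, P + q          # constant-state prediction
        K = P_pred / (P_pred + r)          # local Kalman gain
        cons = adj @ x_pred - adj.sum(axis=1) * x_pred   # sum_j (x_j - x_i)
        x = x_pred + K * (zs[t] - x_pred) + eps * cons
        P = (1.0 - K) * P_pred
    return x

# four sensors on a ring, all observing the same contaminant level
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
zs = 5.0 + rng.normal(0.0, 1.0, size=(200, 4))   # true level = 5.0
est = kalman_consensus(zs, adj)
```

The consensus term keeps the node estimates close to one another while each local filter suppresses its own measurement noise.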
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
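The abstract is truncated, but the Theil-Sen estimator is one standard robust slope estimator of the kind it alludes to: the slope is the median of all pairwise slopes, so a single extreme score has little influence. The weekly CBM-style scores below are synthetic, and the choice of estimator is our assumption:

```python
import numpy as np
from itertools import combinations

def theil_sen(t, y):
    """Theil-Sen estimator: the slope is the median of all pairwise slopes."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)]
    slope = float(np.median(slopes))
    intercept = float(np.median(y - slope * t))
    return slope, intercept

# ten weekly CBM scores growing 1.5 words/week, with one extreme administration
t = np.arange(10.0)
y = 20.0 + 1.5 * t
y[4] = 5.0                        # an atypically low score in week 4
ts_slope, _ = theil_sen(t, y)
ols_slope = np.polyfit(t, y, 1)[0]
```

Here OLS is visibly pulled by the single outlying administration, while the Theil-Sen slope recovers the true growth rate exactly.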
Learning consensus in adversarial environments
NASA Astrophysics Data System (ADS)
Vamvoudakis, Kyriakos G.; García Carrillo, Luis R.; Hespanha, João. P.
2013-05-01
This work presents a game-theory-based consensus problem for leaderless multi-agent systems in the presence of adversarial inputs that introduce disturbances into the dynamics. Given the presence of enemy components and the possibility of malicious cyber attacks compromising the security of networked teams, a position agreement must be reached by the networked mobile team based on environmental changes. The problem is addressed under a distributed decision-making framework that is robust to possible cyber attacks, which has an advantage over centralized decision making in the sense that a decision maker is not required to access information from all the other decision makers. The proposed framework derives three tuning laws for every agent: one associated with the cost, one associated with the controller, and one with the adversarial input.
Web-based dynamic Delphi: a new survey instrument
NASA Astrophysics Data System (ADS)
Yao, JingTao; Liu, Wei-Ning
2006-04-01
We present a mathematical model for a dynamic Delphi survey method which takes advantage of Web technology. A comparative study of the performance of the conventional Delphi method and the dynamic Delphi instrument is conducted. It is suggested that a dynamic Delphi survey may form a consensus quickly. However, the result may not be robust due to judgement leakage issues.
Beckmann, Kerri R; Lynch, John W; Hiller, Janet E; Farshid, Gelareh; Houssami, Nehmat; Duffy, Stephen W; Roder, David M
2015-03-15
Debate about the extent of breast cancer over-diagnosis due to mammography screening has continued for over a decade, without consensus. Estimates range from 0 to 54%, but many studies have been criticized for having flawed methodology. In this study we used a novel study design to estimate over-diagnosis due to organised mammography screening in South Australia (SA). To estimate breast cancer incidence at and following screening we used a population-based, age-matched case-control design involving 4,931 breast cancer cases and 22,914 controls to obtain ORs for yearly time intervals since women's last screening mammogram. The level of over-diagnosis was estimated by comparing the cumulative breast cancer incidence with and without screening. The former was derived by applying ORs for each time window to incidence rates in the absence of screening, and the latter, by projecting pre-screening incidence rates. Sensitivity analyses were undertaken to assess potential biases. Over-diagnosis was estimated to be 8% (95%CI 2-14%) and 14% (95%CI 8-19%) among SA women aged 45 to 85 years from 2006-2010, for invasive breast cancer and all breast cancer respectively. These estimates were robust under various sensitivity analyses, except for adjustment for potential confounding assuming higher risk among screened than non-screened women, which reduced levels of over-diagnosis to 1% (95%CI -5-7%) and 8% (95%CI 2-14%) respectively when incidence rates for screening participants were adjusted by 10%. Our results indicate that the level of over-diagnosis due to mammography screening is modest and considerably lower than many previous estimates, including others for Australia. © 2014 UICC.
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to poor estimation of the PSF of the lens that took the LR image. For precise PSF estimation, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
Robust estimation approach for blind denoising.
Rabie, Tamer
2005-11-01
This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
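A minimal version of the modified (robust) Poisson approach for a common binary outcome can be sketched with a Newton fit and a sandwich variance estimate; the data-generating numbers below are illustrative and do not reproduce the study's simulation design:

```python
import numpy as np

def robust_poisson_rr(X, y, n_iter=25):
    """Modified (robust) Poisson regression for relative risks.

    Fits a log-link Poisson model by Newton's method, then returns the
    coefficients with sandwich (robust) standard errors, which keep the
    inference valid when the outcome is binary rather than Poisson.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    mu = np.exp(X @ beta)
    B = X.T @ (mu[:, None] * X)                      # model-based information
    M = X.T @ (((y - mu) ** 2)[:, None] * X)         # empirical "meat"
    cov = np.linalg.solve(B, M) @ np.linalg.inv(B)   # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

# illustrative data: binary exposure doubling a 10% baseline risk (true RR = 2)
rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 2, size=n)
y = rng.binomial(1, 0.1 * 2.0 ** x)
X = np.column_stack([np.ones(n), x])
beta, se = robust_poisson_rr(X, y)
rr = np.exp(beta[1])
```

Here `exp(beta[1])` estimates the relative risk directly (true value 2), and the sandwich standard error corrects for the fact that a binary outcome is not truly Poisson.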
2013-01-01
Rapid innovations in cardiovascular magnetic resonance (CMR) now permit the routine acquisition of quantitative measures of myocardial and blood T1, which are key tissue characteristics. These capabilities introduce a new frontier in cardiology, enabling the practitioner/investigator to quantify biologically important myocardial properties that otherwise can be difficult to ascertain clinically. CMR may be able to track biologically important changes in the myocardium by: a) native T1, which reflects myocardial disease involving the myocyte and interstitium without use of gadolinium-based contrast agents (GBCA), or b) the extracellular volume fraction (ECV), a direct GBCA-based measurement of the size of the extracellular space, reflecting interstitial disease. The latter technique attempts to dichotomize the myocardium into its cellular and interstitial components, with estimates expressed as volume fractions. This document provides recommendations for clinical and research T1 and ECV measurement, based on published evidence when available and expert consensus when not. We address site preparation, scan type, scan planning and acquisition, quality control, visualisation and analysis, and technical development. We also address controversies in the field. While ECV and native T1 mapping appear destined to affect clinical decision making, they lack multi-centre application and face significant challenges, which demand a community-wide approach among stakeholders. At present, ECV and native T1 mapping appear sufficiently robust for many diseases; yet more research is required before a large-scale application for clinical decision-making can be recommended. PMID:24124732
Global finite-time attitude consensus tracking control for a group of rigid spacecraft
NASA Astrophysics Data System (ADS)
Li, Penghua
2017-10-01
The problem of finite-time attitude consensus for multiple rigid spacecraft with a leader-follower architecture is investigated in this paper. To achieve the finite-time attitude consensus, at the first step, a distributed finite-time convergent observer is proposed for each follower to estimate the leader's attitude in a finite time. Then based on the terminal sliding mode control method, a new finite-time attitude tracking controller is designed such that the leader's attitude can be tracked in a finite time. Finally, a finite-time observer-based distributed control strategy is proposed. It is shown that the attitude consensus can be achieved in a finite time under the proposed controller. Simulation results are given to show the effectiveness of the proposed method.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation relative to other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Kong, Mei-Fung; Chan, Serena; Wong, Yiu-Chung
2008-01-01
A proficiency testing (PT) program in which 97 laboratories worldwide determined total arsenic, cadmium, and lead in seawater shrimp under the auspices of the Asia-Pacific Laboratory Accreditation Cooperation (APLAC) is discussed. The program is one of the APLAC PT series, whose primary purposes are to establish mutual agreement on the equivalence of the operation of APLAC member laboratories and to take corrective actions if testing deficiencies are identified. Pooled data for Cd and Pb were normally distributed, with interlaboratory variations of 21.9 and 34.8%, respectively. The corresponding consensus mean values estimated by robust statistics were in good agreement with those obtained in the homogeneity tests. However, a bimodal distribution was observed for the determination of total As, in which 14 out of 74 participants reported much smaller values (0.482-6.4 mg/kg) compared with the mean value of 60.9 mg/kg in the homogeneity test. The consensus mean is known to deviate significantly from the true value in bi- or multimodal distributions. Therefore, the mode, a better estimate of central tendency in this case, was chosen to assess participants' performance for total As. Estimates of the overall uncertainty from participants varied in this program, and some were recommended to gain a more comprehensive understanding of the relevant criteria stipulated in ISO/IEC 17025.
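Robust consensus statistics of the kind used for Cd and Pb are commonly computed with an iterative winsorised mean in the spirit of ISO 13528 Algorithm A. The sketch and synthetic bimodal arsenic-like results below are illustrative, not the program's actual data:

```python
import numpy as np

def robust_mean_algA(x, n_iter=50):
    """Iterative winsorised robust mean/std (ISO 13528 Algorithm A style)."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = 1.483 * np.median(np.abs(x - mu))          # MAD-based start
    for _ in range(n_iter):
        xw = np.clip(x, mu - 1.5 * s, mu + 1.5 * s)  # winsorise extremes
        mu = xw.mean()
        s = 1.134 * xw.std(ddof=1)
    return mu, s

# bimodal results: a main mode near 60.9 mg/kg and a low-reporting group
rng = np.random.default_rng(0)
results = np.concatenate([rng.normal(60.9, 4.0, 60), rng.normal(3.0, 2.0, 14)])
mu_robust, _ = robust_mean_algA(results)
mu_plain = results.mean()
```

The robust mean stays near the main mode while the arithmetic mean is dragged toward the low-reporting group; under strong bimodality even a robust mean drifts somewhat, which is why the mode was preferred for total As.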
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
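One standard doubly robust point estimator is the augmented inverse-probability-weighted (AIPW) estimator; the sketch below shows its estimating equation with toy data and pre-computed nuisance estimates (the paper's contribution concerns valid inference, e.g. via targeted minimum loss-based estimation, which this sketch does not cover):

```python
def aipw_ate(y, a, e, m1, m0):
    """Augmented IPW estimate of the average treatment effect.
    y: outcomes; a: treatment indicators (0/1);
    e: estimated propensity scores P(A=1|X);
    m1, m0: estimated outcome regressions E[Y|A=1,X], E[Y|A=0,X].
    The estimate is consistent if either e or (m1, m0) is correct."""
    n = len(y)
    total = 0.0
    for yi, ai, ei, m1i, m0i in zip(y, a, e, m1, m0):
        total += (m1i - m0i
                  + ai * (yi - m1i) / ei
                  - (1 - ai) * (yi - m0i) / (1 - ei))
    return total / n

# Toy data in which treatment adds exactly +1 and both models are exact
y  = [2.0, 1.0, 3.0, 2.0]
a  = [1, 0, 1, 0]
e  = [0.5, 0.5, 0.5, 0.5]
m1 = [2.0, 2.0, 3.0, 3.0]
m0 = [1.0, 1.0, 2.0, 2.0]
print(aipw_ate(y, a, e, m1, m0))  # 1.0 when both nuisance models are exact
```

The double robustness shows up in the correction terms: if the outcome regressions are exact they vanish, and if the propensity scores are exact they average out the regression bias.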
Fourier-Domain Shift Matching: A Robust Time-of-Flight Approach for Shear Wave Speed Estimation.
Rosen, David; Jiang, Jingfeng
2018-05-01
The primary objective of this work was to design and test a new time-of-flight (TOF) method for measuring shear wave speed (SWS) following impulsive excitation in soft tissues. Under a local plane shear wave assumption, the proposed method, named Fourier-domain shift matching (FDSM), estimates SWS by aligning a series of shear waveforms either temporally or spatially, using a solution space deduced from the characteristic curves of the well-known 1-D wave equation. The proposed SWS estimation method was tested using computer-simulated data, tissue-mimicking phantom experiments, and ex vivo tissue experiments. Its performance was then compared with three other known TOF methods: the lateral time-to-peak (TTP) method with robust random sample consensus (RANSAC) fitting, the Radon sum transformation method, and a modified cross-correlation method. Hereafter, these three TOF methods are referred to as the TTP-RANSAC, Radon sum, and X-corr methods, respectively. In addition, an adapted form of the 2-D Fourier transform (2-D FT) based method, in which the (group) SWS was approximated by averaging phase SWS values, was considered for comparison. Based on the data evaluated, we found that the overall performance of the temporal implementation of the proposed FDSM method was most similar to the established Radon sum method (correlation = 0.99, scale factor = 1.03, and mean difference = 0.07 m/s) and the 2-D FT method (correlation = 0.98, scale factor = 1.00, and mean difference = 0.10 m/s) at high signal quality. However, results obtained from the 2-D FT method diverged (correlation = 0.201) from those of the proposed temporal implementation in the presence of diminished signal quality, whereas the agreement between the Radon sum approach and the proposed temporal implementation largely remained the same (correlation = 0.98).
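All TOF methods exploit the fact that, for a plane wave, arrival time grows linearly with lateral position, so SWS is the slope of position over time; a least-squares sketch of that shared core idea (not the Fourier-domain FDSM algorithm itself):

```python
def sws_from_tof(positions_mm, arrival_times_ms):
    """Fit lateral position vs. arrival time by least squares and
    return the shear wave speed as the slope (mm/ms equals m/s)."""
    n = len(positions_mm)
    mt = sum(arrival_times_ms) / n
    mx = sum(positions_mm) / n
    stt = sum((t - mt) ** 2 for t in arrival_times_ms)
    stx = sum((t - mt) * (x - mx)
              for t, x in zip(arrival_times_ms, positions_mm))
    return stx / stt

# A 2 m/s plane wave: 2 mm of lateral travel per millisecond
pos = [1.0, 2.0, 3.0, 4.0, 5.0]
t   = [0.5, 1.0, 1.5, 2.0, 2.5]
print(sws_from_tof(pos, t))  # 2.0
```

The TTP-RANSAC variant replaces this plain fit with a RANSAC fit so that noisy arrival-time estimates do not corrupt the slope.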
Ren, Hongwei; Deng, Feiqi
2017-11-01
This paper investigates the mean square consensus problem of dynamical networks of leader-following multi-agent systems with measurement noises and time-varying delays. We consider fixed undirected communication topologies that are connected. A neighbor-based tracking algorithm together with distributed estimators is presented. Using tools from algebraic graph theory and the Gronwall-Bellman-Halanay type inequality, we establish sufficient conditions to reach consensus in the mean square sense via the proposed consensus protocols. Finally, a numerical simulation is provided to demonstrate the effectiveness of the obtained theoretical result.
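The flavor of a leader-following consensus protocol can be sketched with a noise-free, delay-free discrete-time simulation (toy graph and gains of my choosing; the paper's protocol additionally handles measurement noises and time-varying delays):

```python
def leader_following_step(x, leader, adjacency, pinned, eps=0.2):
    """One discrete-time update: each follower moves toward its
    neighbours' states and, if pinned, also toward the static leader."""
    nxt = []
    for i, xi in enumerate(x):
        u = sum(adjacency[i][j] * (xj - xi) for j, xj in enumerate(x))
        if pinned[i]:
            u += leader - xi
        nxt.append(xi + eps * u)
    return nxt

# Three followers on a path graph; only agent 0 observes the leader
adjacency = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
pinned = [True, False, False]
x = [0.0, 5.0, -3.0]
leader = 1.0
for _ in range(200):
    x = leader_following_step(x, leader, adjacency, pinned)
print(x)  # all follower states approach the leader value 1.0
```

Connectivity of the undirected graph plus at least one pinned agent is what makes the tracking error contract, mirroring the role of the connectedness assumption in the abstract.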
NASA Astrophysics Data System (ADS)
Yang, Hong-Yong; Lu, Lan; Cao, Ke-Cai; Zhang, Si-Ying
2010-04-01
In this paper, the relations between network topology and the moving consensus of multi-agent systems are studied. A consensus-prestissimo scale-free network model with static preferential-consensus attachment is presented on the rewired links of the regular network. The effects of the static preferential-consensus BA network on the algebraic connectivity of the topology graph are compared with those of the regular network. The robustness gain to delay is analyzed for variable network topologies of the same scale. The time to reach consensus is studied for the dynamic network with and without communication delays. Computer simulations validate that the convergence speed of multi-agent systems can be greatly improved in the preferential-consensus BA network model with different configurations.
Grandchamp, Romain; Delorme, Arnaud
2011-01-01
In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing, such as event-related spectral perturbation (ERSP) – and its variants, event-related synchronization and event-related desynchronization – have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is, however, no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We eventually present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time–frequency responses and behavior compared to classical ERSP methods. PMID:21994498
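The idea of trial normalization prior to classical baseline correction can be sketched as follows (toy single-frequency power time courses; the full-epoch normalization and divisive baseline used here are illustrative choices, not the paper's exact definitions):

```python
def ersp_trial_norm(trials, n_baseline):
    """Sketch of single-trial normalization followed by classical
    baseline correction: each trial is first divided by its own
    full-epoch mean power (so artifactual high-amplitude trials cannot
    dominate), trials are averaged, and the average is then divided by
    its pre-stimulus baseline."""
    normed = []
    for trial in trials:
        m = sum(trial) / len(trial)
        normed.append([p / m for p in trial])
    n = len(normed)
    avg = [sum(t[i] for t in normed) / n for i in range(len(trials[0]))]
    base = sum(avg[:n_baseline]) / n_baseline
    return [p / base for p in avg]

# Two clean trials and one artifactual trial with 100x the amplitude
trials = [
    [1.0, 1.0, 2.0, 2.0],
    [1.0, 1.0, 2.0, 2.0],
    [100.0, 100.0, 200.0, 200.0],
]
ersp = ersp_trial_norm(trials, n_baseline=2)
print(ersp)  # the artifact trial no longer dominates the average
```

Without the per-trial normalization step, the third trial would swamp the average and distort the post-stimulus estimate.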
Mahajan, Prashant; Batra, Prerna; Thakur, Neha; Patel, Reena; Rai, Narendra; Trivedi, Nitin; Fassl, Bernhard; Shah, Binita; Lozon, Marie; Oteng, Rockerfeller A; Saha, Abhijeet; Shah, Dheeraj; Galwankar, Sagar
2017-08-15
India, home to almost 1.5 billion people, is in need of a country-specific, evidence-based, consensus approach for the emergency department (ED) evaluation and management of the febrile child. We held two consensus meetings, performed an exhaustive literature review, and held ongoing web-based discussions to arrive at a formal consensus on the proposed evaluation and management algorithm. The first meeting was held in Delhi in October 2015, under the auspices of the Pediatric Emergency Medicine (PEM) Section of the Academic College of Emergency Experts in India (ACEE-INDIA); the second meeting was conducted at Pune during Emergency Medical Pediatrics and Recent Trends (EMPART 2016) in March 2016 and was followed by further e-mail-based discussions to arrive at a formal consensus on the proposed algorithm. Our objective was to develop an algorithmic approach for the evaluation and management of the febrile child that can be easily applied in the context of emergency care and modified based on local epidemiology and practice standards. We created an algorithm that can assist the clinician in the evaluation and management of the febrile child presenting to the ED, contextualized to health care in India. This guideline includes the following key components: triage and timely assessment; evaluation; and patient disposition from the ED. We urge the creation of a robust repository of minimal standard data elements. This would provide a systematic measurement of the care processes and patient outcomes, and a better understanding of the various etiologies of febrile illnesses in India, both of which can be used to further modify the proposed approach and algorithm.
Mi, Shichao; Han, Hui; Chen, Cailian; Yan, Jian; Guan, Xinping
2016-02-19
Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or from malicious nodes. This paper is concerned with the design of a secure consensus scheme in HWSNs consisting of two types of nodes: sensor nodes (SNs), which have more computation power, and relay nodes (RNs), which have low power and can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit the heterogeneity of responsibilities between the two types of nodes and propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of malicious nodes. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved.
Gold standards and expert panels: a pulmonary nodule case study with challenges and solutions
NASA Astrophysics Data System (ADS)
Miller, Dave P.; O'Shaughnessy, Kathryn F.; Wood, Susan A.; Castellino, Ronald A.
2004-05-01
Comparative evaluations of reader performance using different modalities, e.g. CT with computer-aided detection (CAD) vs. CT without CAD, generally require a "truth" definition based on a gold standard. There are many situations in which a true invariant gold standard is impractical or impossible to obtain. For instance, small pulmonary nodules are generally not assessed by biopsy or resection. In such cases, it is common to use a unanimous consensus or majority agreement from an expert panel as a reference standard for actionability in lieu of the unknown gold standard for disease. Nonetheless, there are three major concerns about expert panel reference standards: (1) actionability is not synonymous with disease; (2) it may be possible to obtain different conclusions about which modality is better using different rules (e.g. majority vs. unanimous consensus); and (3) the variability associated with the panelists is not formally captured in the p-values or confidence intervals that are generally produced for estimating the extent to which one modality is superior to the other. A multi-reader-multi-case (MRMC) receiver operating characteristic (ROC) study was performed using 90 cases, 15 readers, and a reference truth based on 3 experienced panelists. The primary analyses were conducted using a reference truth of unanimous consensus regarding actionability (3 out of 3 panelists). To assess the three concerns noted above: (1) additional data from the original radiology reports were compared to the panel; (2) the complete analysis was repeated using different definitions of truth; and (3) bootstrap analyses were conducted in which new truth panels were constructed by picking 1, 2, or 3 panelists at random. The definition of the reference truth affected the results for each modality (CT with CAD and CT without CAD) considered by itself, but the effects were similar, so the primary analysis comparing the modalities was robust to the choice of the reference truth.
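Concern (3) can be addressed as described, by bootstrapping the truth panel itself; a minimal sketch (toy votes and a simple agreement metric of my choosing, not the MRMC ROC analysis of the study):

```python
import random

def bootstrap_truth_agreement(panel_votes, reader_calls, n_boot=2000, seed=7):
    """Resample truth panels (panelists drawn with replacement) and
    measure how often a reader's calls agree with the resulting
    unanimous-consensus truth, capturing panelist variability in the
    truth definition itself."""
    rng = random.Random(seed)
    n_panelists = len(panel_votes[0])
    rates = []
    for _ in range(n_boot):
        picks = [rng.randrange(n_panelists) for _ in range(n_panelists)]
        agree = 0
        for votes, call in zip(panel_votes, reader_calls):
            truth = all(votes[p] for p in picks)  # unanimous consensus
            agree += (call == truth)
        rates.append(agree / len(panel_votes))
    return sum(rates) / n_boot

# 4 cases x 3 panelists (1 = actionable), plus one reader's calls
panel_votes = [(1, 1, 1), (1, 1, 0), (0, 0, 0), (1, 0, 1)]
reader_calls = [True, False, False, True]
rate = bootstrap_truth_agreement(panel_votes, reader_calls)
print(rate)  # agreement averaged over resampled truth panels
```

The spread of the per-resample rates (not shown) is exactly the panelist-induced variability the abstract says is missing from conventional confidence intervals.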
Robust range estimation with a monocular camera for vision-based forward collision warning system.
Park, Ki-Yeong; Hwang, Sun-Young
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented and the proposed method evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments.
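Such systems typically build on flat-road pinhole geometry, where range is inversely proportional to the image-row offset between the vehicle's road contact point and the horizon; a sketch with hypothetical camera parameters (the paper's exact model may differ):

```python
def range_from_horizon(f_px, cam_height_m, y_bottom_px, y_horizon_px):
    """Flat-road pinhole model: distance to a vehicle equals focal
    length times camera height divided by the image-row offset between
    the vehicle's contact point and the (virtual) horizon. An accurate
    run-time horizon estimate is what makes the range robust to pitch
    changes and road inclination."""
    return f_px * cam_height_m / (y_bottom_px - y_horizon_px)

# Hypothetical numbers: 800 px focal length, camera 1.2 m above the road
print(range_from_horizon(800.0, 1.2, 440.0, 400.0))  # 24.0 m
# If pitch shifts the true horizon up by 10 px but the estimate is not
# updated, the computed range changes noticeably:
print(range_from_horizon(800.0, 1.2, 440.0, 390.0))
```

This sensitivity to the horizon row is why the method's run-time virtual-horizon estimation matters.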
NASA Astrophysics Data System (ADS)
Bai, Jing; Wen, Guoguang; Rahmani, Ahmed
2018-04-01
Leaderless consensus for fractional-order nonlinear multi-agent systems is investigated in this paper. In the first part, a control protocol is proposed to achieve leaderless consensus for nonlinear single-integrator multi-agent systems. In the second part, based on a sliding-mode estimator, a control protocol is given to solve leaderless consensus for the nonlinear single-integrator multi-agent systems; it is shown that this control protocol can improve the systems' convergence speed. In the third part, a control protocol is designed to accomplish leaderless consensus for nonlinear double-integrator multi-agent systems. To judge the systems' stability, two classic continuous Lyapunov candidate functions are chosen. Finally, several worked-out examples under directed interaction topology are given to verify the above results.
A formal concept analysis approach to consensus clustering of multi-experiment expression data
2014-01-01
Background Presently, with the increasing number and complexity of available gene expression datasets, the combination of data from multiple microarray studies addressing a similar biological question is gaining importance. The analysis and integration of multiple datasets are expected to yield more reliable and robust results, since they are based on a larger number of samples and the effects of the individual study-specific biases are diminished. This is supported by recent studies suggesting that important biological signals are often preserved or enhanced by multiple experiments. An approach to combining data from different experiments is the aggregation of their clusterings into a consensus or representative clustering solution, which increases the confidence in the common features of all the datasets and reveals the important differences among them. Results We propose a novel generic consensus clustering technique that applies a Formal Concept Analysis (FCA) approach for the consolidation and analysis of clustering solutions derived from several microarray datasets. These datasets are initially divided into groups of related experiments with respect to a predefined criterion. Subsequently, a consensus clustering algorithm is applied to each group, resulting in a clustering solution per group. These solutions are pooled together and further analysed by employing FCA, which allows extracting valuable insights from the data and generating a gene partition over all the experiments. In order to validate the FCA-enhanced approach, two consensus clustering algorithms are adapted to incorporate the FCA analysis. Their performance is evaluated on gene expression data from a multi-experiment study examining the global cell-cycle control of fission yeast.
The FCA results derived from both methods demonstrate that, although both algorithms optimize different clustering characteristics, FCA is able to overcome and diminish these differences and preserve some relevant biological signals. Conclusions The proposed FCA-enhanced consensus clustering technique is a general approach to the combination of clustering algorithms with FCA for deriving clustering solutions from multiple gene expression matrices. The experimental results presented herein demonstrate that it is a robust data integration technique able to produce a good-quality clustering solution that is representative of the whole set of expression matrices. PMID:24885407
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach allows building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms.
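The weighted-centroid family of methods can be sketched in a few lines; the median-based trimming below is one simple way to robustify it (toy access-point data; the paper's four robustified algorithms are more elaborate than this):

```python
import statistics

def robust_weighted_centroid_z(ap_heights, rss_dbm, trim_db=20.0):
    """Estimate the vertical position of a mobile as an RSS-weighted
    centroid of the heard access-point heights. Robustification here is
    a simple median-based trim: APs much weaker than the median RSS
    (likely blocked or outlying measurements) are discarded before
    weighting in the linear-power domain."""
    med = statistics.median(rss_dbm)
    kept = [(h, r) for h, r in zip(ap_heights, rss_dbm)
            if r >= med - trim_db]
    weights = [10 ** (r / 10.0) for _, r in kept]  # dBm -> linear power
    total = sum(weights)
    return sum(h * w for (h, _), w in zip(kept, weights)) / total

# Hypothetical APs at 3, 6 and 9 m with one very weak outlier reading
heights = [3.0, 3.0, 6.0, 6.0, 9.0]
rss = [-40.0, -45.0, -42.0, -44.0, -90.0]
z = robust_weighted_centroid_z(heights, rss)
print(z)  # vertical estimate between the strongly heard AP heights
```

The vertical estimate would then be mapped to the nearest floor height to produce the floor decision.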
Consensus in controversy: The modified Delphi method applied to Gynecologic Oncology practice.
Cohn, David E; Havrilesky, Laura J; Osann, Kathryn; Lipscomb, Joseph; Hsieh, Susie; Walker, Joan L; Wright, Alexi A; Alvarez, Ronald D; Karlan, Beth Y; Bristow, Robert E; DiSilvestro, Paul A; Wakabayashi, Mark T; Morgan, Robert; Mukamel, Dana B; Wenzel, Lari
2015-09-01
To determine the degree of consensus regarding the probabilities of outcomes associated with IP/IV and IV chemotherapy. A survey was administered to an expert panel using the Delphi method. Ten ovarian cancer experts were asked to estimate outcomes for patients receiving IP/IV or IV chemotherapy. The clinical estimates were: 1) probability of completing six cycles of chemotherapy, 2) probability of surviving five years, 3) median survival, and 4) probability of ER/hospital visits during treatment. Estimates for two patients, one with a low comorbidity index (patient 1) and the other with a moderate index (patient 2), were included. The survey was administered in three rounds, and panelists could revise their subsequent responses based on review of the anonymous opinions of their peers. The ranges were smaller for IV compared with IP/IV therapy. Ranges decreased with each round. Consensus converged around outcomes related to IP/IV chemotherapy for: 1) completion of 6 cycles of therapy (type 1 patient, 62%, type 2 patient, 43%); 2) percentage of patients surviving 5 years (type 1 patient, 66%, type 2 patient, 47%); and 3) median survival (type 1 patient, 83 months, type 2 patient, 58 months). The group required three rounds to achieve consensus on the probabilities of ER/hospital visits (type 1 patient, 24%, type 2 patient, 35%). Initial estimates of survival and adverse events associated with IP/IV chemotherapy differ among experts. The Delphi process works to build consensus and may be a pragmatic tool to inform patients of their expected outcomes.
A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.
Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C
2018-05-03
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation.
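A classical (non-robust) HCp computation fits a log-normal SSD and inverts it at the p-th quantile; the sketch below shows this baseline, which the robust estimators discussed above would modify by replacing the mean and standard deviation with outlier-resistant counterparts (hypothetical EC50 values):

```python
import math
import statistics

def hc_p(toxicity_values, p=0.05):
    """Classical log-normal SSD: fit mean and standard deviation on the
    log scale and invert the fitted distribution at the p-th quantile.
    A robust variant would substitute, e.g., median and scaled MAD."""
    logs = [math.log(x) for x in toxicity_values]
    mu = statistics.fmean(logs)
    sd = statistics.stdev(logs)
    z = statistics.NormalDist().inv_cdf(p)  # about -1.6449 for p = 0.05
    return math.exp(mu + z * sd)

# Hypothetical species EC50 values in mg/L
ec50s = [1.2, 3.4, 5.6, 8.9, 12.0, 20.0, 45.0]
hc5 = hc_p(ec50s)
print(hc5)  # concentration estimated to be hazardous to 5% of species
```

Because mean and standard deviation on the log scale are both sensitive to a single aberrant species value, a robust fit can shift the HCp substantially, which is the paper's motivation.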
The Consensus Molecular Subtypes of Colorectal Cancer
Guinney, Justin; Dienstmann, Rodrigo; Wang, Xin; de Reyniès, Aurélien; Schlicker, Andreas; Soneson, Charlotte; Marisa, Laetitia; Roepman, Paul; Nyamundanda, Gift; Angelino, Paolo; Bot, Brian M.; Morris, Jeffrey S.; Simon, Iris M.; Gerster, Sarah; Fessler, Evelyn; de Sousa e Melo, Felipe; Missiaglia, Edoardo; Ramay, Hena; Barras, David; Homicsko, Krisztian; Maru, Dipen; Manyam, Ganiraju C.; Broom, Bradley; Boige, Valerie; Perez-Villamil, Beatriz; Laderas, Ted; Salazar, Ramon; Gray, Joe W.; Hanahan, Douglas; Tabernero, Josep; Bernards, Rene; Friend, Stephen H.; Laurent-Puig, Pierre; Medema, Jan Paul; Sadanandam, Anguraj; Wessels, Lodewyk; Delorenzi, Mauro; Kopetz, Scott; Vermeulen, Louis; Tejpar, Sabine
2015-01-01
Colorectal cancer (CRC) is a frequently lethal disease with heterogeneous outcomes and drug responses. To resolve inconsistencies among the reported gene expression–based CRC classifications and facilitate clinical translation, we formed an international consortium dedicated to large-scale data sharing and analytics across expert groups. We show marked interconnectivity between six independent classification systems coalescing into four consensus molecular subtypes (CMS) with distinguishing features: CMS1 (MSI Immune, 14%), hypermutated, microsatellite unstable, strong immune activation; CMS2 (Canonical, 37%), epithelial, chromosomally unstable, marked WNT and MYC signaling activation; CMS3 (Metabolic, 13%), epithelial, evident metabolic dysregulation; and CMS4 (Mesenchymal, 23%), prominent transforming growth factor β activation, stromal invasion, and angiogenesis. Samples with mixed features (13%) possibly represent a transition phenotype or intra-tumoral heterogeneity. We consider the CMS groups the most robust classification system currently available for CRC – with clear biological interpretability – and the basis for future clinical stratification and subtype–based targeted interventions. PMID:26457759
Consensus statement: Virus taxonomy in the age of metagenomics.
Simmonds, Peter; Adams, Mike J; Benkő, Mária; Breitbart, Mya; Brister, J Rodney; Carstens, Eric B; Davison, Andrew J; Delwart, Eric; Gorbalenya, Alexander E; Harrach, Balázs; Hull, Roger; King, Andrew M Q; Koonin, Eugene V; Krupovic, Mart; Kuhn, Jens H; Lefkowitz, Elliot J; Nibert, Max L; Orton, Richard; Roossinck, Marilyn J; Sabanadzovic, Sead; Sullivan, Matthew B; Suttle, Curtis A; Tesh, Robert B; van der Vlugt, René A; Varsani, Arvind; Zerbini, F Murilo
2017-03-01
The number and diversity of viral sequences that are identified in metagenomic data far exceeds that of experimentally characterized virus isolates. In a recent workshop, a panel of experts discussed the proposal that, with appropriate quality control, viruses that are known only from metagenomic data can, and should be, incorporated into the official classification scheme of the International Committee on Taxonomy of Viruses (ICTV). Although a taxonomy that is based on metagenomic sequence data alone represents a substantial departure from the traditional reliance on phenotypic properties, the development of a robust framework for sequence-based virus taxonomy is indispensable for the comprehensive characterization of the global virome. In this Consensus Statement article, we consider the rationale for why metagenomic sequence data should, and how it can, be incorporated into the ICTV taxonomy, and present proposals that have been endorsed by the Executive Committee of the ICTV.
Real-time UAV trajectory generation using feature points matching between video image sequences
NASA Astrophysics Data System (ADS)
Byun, Younggi; Song, Jeongheon; Han, Dongyeob
2017-09-01
Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
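The outlier-rejection step can be sketched with a plain RANSAC loop for the simplest motion model, a pure 2-D translation (Preemptive RANSAC and full epipolar-geometry estimation are more involved; the matched points below are synthetic):

```python
import random

def ransac_translation(matches, n_iter=500, tol=2.0, seed=3):
    """Estimate a 2-D translation between matched feature points with
    RANSAC: repeatedly hypothesize the shift from one random match,
    count inliers within tol pixels, and refit on the best inlier set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    dx = sum(q[0] - p[0] for p, q in best_inliers) / n
    dy = sum(q[1] - p[1] for p, q in best_inliers) / n
    return (dx, dy), best_inliers

# True shift (10, -4) plus two gross mismatches (outliers)
matches = [((0, 0), (10, -4)), ((5, 5), (15, 1)), ((2, 8), (12, 4)),
           ((7, 1), (17, -3)), ((3, 3), (80, 40)), ((9, 9), (-5, 2))]
(dx, dy), inliers = ransac_translation(matches)
print(dx, dy)        # recovers (10.0, -4.0)
print(len(inliers))  # 4 inliers; the 2 mismatches are rejected
```

In the actual pipeline the hypothesized model is the epipolar geometry rather than a translation, but the hypothesize-score-refit structure is the same.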
Pizzo, Francesca; Bartolomei, Fabrice; Wendling, Fabrice; Bénar, Christian-George
2017-01-01
High-frequency oscillations (HFO) have been suggested as biomarkers of epileptic tissues. While visual marking of these short and small oscillations is tedious and time-consuming, automatic HFO detectors have not yet met a large consensus. Even though detectors have been shown to perform well when validated against visual marking, the large number of false detections due to their lack of robustness hinders their clinical application. In this study, we developed a validation framework based on realistic and controlled simulations to quantify precisely the assets and weaknesses of current detectors. We constructed a dictionary of synthesized elements—HFOs and epileptic spikes—from different patients and brain areas by extracting these elements from the original data using discrete wavelet transform coefficients. These elements were then added to their corresponding simulated background activity (preserving patient- and region-specific spectra). We tested five existing detectors against this benchmark. Unlike other studies comparing detectors, we not only ranked them according to their performance but also investigated the reasons leading to these results. Our simulations, thanks to their realism and their variability, enabled us to highlight unreported issues of current detectors: (1) the lack of robust estimation of the background activity, (2) the underestimated impact of the 1/f spectrum, and (3) the inadequate criteria defining an HFO. We believe that our benchmark framework could be a valuable tool to translate HFOs into a clinical environment. PMID:28406919
Retinal slit lamp video mosaicking.
De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael
2016-06-01
To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. For ophthalmologists, the use of slit lamp images for documentation and analysis purposes remains extremely challenging due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results and state-of-the-art methods were compared and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2014-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
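The redescending M-estimator with Tukey bisquare weights mentioned above can be sketched in one dimension; the flight code operates on multivariate fiber-optic sensor vectors within concentration steps, so the scalar IRLS loop below is only an illustration of the weighting scheme:

```python
import numpy as np

def tukey_location(x, c=4.685, n_iter=50):
    """Redescending M-estimate of location with Tukey bisquare weights.
    Iteratively reweighted mean; scale is re-estimated each pass via the
    median absolute deviation (MAD)."""
    mu = np.median(x)
    for _ in range(n_iter):
        s = 1.4826 * np.median(np.abs(x - mu)) + 1e-12   # robust scale (MAD)
        u = (x - mu) / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # bisquare: hard zero
        mu = np.sum(w * x) / np.sum(w)                     # weight for far outliers
    return mu
```

Because the bisquare weight redescends to exactly zero, gross outliers (such as failed sensors) contribute nothing to the location estimate, unlike a Huber-type estimator that merely bounds their influence.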
Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments
Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of biasing parameter, we use fraction of design space plots for evaluating the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on Hald cement data set. PMID:25202738
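One family of ridge-type robust estimators combines an M-type reweighting with an L2 (ridge) penalty. The sketch below, a Huber-weighted IRLS with a ridge term, is illustrative of the class; the paper compares several such estimators and chooses the biasing parameter `lam` graphically, which is not reproduced here:

```python
import numpy as np

def ridge_huber(X, y, k=1.345, lam=1.0, n_iter=30):
    """Ridge-type robust regression sketch: Huber IRLS with an L2 penalty.
    lam is the ridge biasing parameter; k the Huber tuning constant."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # ridge start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r)) + 1e-12               # robust scale
        w = np.minimum(1.0, k * s / (np.abs(r) + 1e-12))        # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
    return beta
```

The ridge term counters multicollinearity while the Huber weights bound the influence of outlying responses, addressing the combined problem the abstract describes.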
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important utility in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2–4.2% depending on the type of wrist activity.
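The adaptive-threshold peak detection idea can be sketched as follows. This is a loose illustration only: the threshold here tracks the mean and spread of a moving window so it adapts to walking intensity, while the paper's detector additionally uses peak/valley correction and activity-specific tuning. All parameter values are illustrative:

```python
import numpy as np

def detect_steps(acc_mag, fs=50, win=2.0, min_gap=0.3):
    """Step (peak) detection with an adaptive threshold.
    acc_mag: acceleration magnitude signal sampled at fs Hz; win is the
    adaptive window length in seconds; min_gap enforces a refractory
    period between detected steps."""
    half = int(win * fs / 2)
    gap = int(min_gap * fs)
    steps, last = [], -gap
    for i in range(1, len(acc_mag) - 1):
        lo, hi = max(0, i - half), min(len(acc_mag), i + half)
        thresh = acc_mag[lo:hi].mean() + 0.5 * acc_mag[lo:hi].std()
        is_peak = acc_mag[i] > acc_mag[i - 1] and acc_mag[i] >= acc_mag[i + 1]
        if is_peak and acc_mag[i] > thresh and i - last >= gap:
            steps.append(i)
            last = i
    return steps
```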
Fisher, Jacob C.
2017-01-01
Virtually all social diffusion work relies on a common formal basis, which predicts that consensus will develop among a connected population as the result of diffusion. In spite of the popularity of social diffusion models that predict consensus, few empirical studies examine consensus, or a clustering of attitudes, directly. Those that do either focus on the coordinating role of strict hierarchies, or on the results of online experiments, and do not consider how consensus occurs among groups in situ. This study uses longitudinal data on adolescent social networks to show how meso-level social structures, such as informal peer groups, moderate the process of consensus formation. Using a novel method for controlling for selection into a group, I find that centralized peer groups, meaning groups with clear leaders, have very low levels of consensus, while cohesive peer groups, meaning groups where more ties hold the members of the group together, have very high levels of consensus. This finding is robust to two different measures of cohesion and consensus. This suggests that consensus occurs either through central leaders’ enforcement or through diffusion of attitudes, but that central leaders have limited ability to enforce when people can leave the group easily. PMID:29335675
Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator
NASA Astrophysics Data System (ADS)
Oucheriah, Said
2017-11-01
In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is successfully developed and implemented for the output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped as a signal that is accurately estimated using a low-pass filter and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet certain specifications. Using simulation, the effectiveness of the proposed controller is compared against the sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties, large input and load perturbations.
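The core UDE mechanism, low-pass filtering the lumped uncertainty and cancelling it in the control law, can be demonstrated on a generic first-order plant. The plant, gains and disturbance below are illustrative stand-ins, not the boost-converter model from the paper:

```python
import numpy as np

def simulate_ude(a=-2.0, b=1.0, T=0.001, steps=8000, tau=0.02, k=5.0, x_ref=1.0):
    """UDE-based regulation sketch for a first-order plant x' = a*x + b*u + d.
    The lumped disturbance d is recovered by low-pass filtering
    x' - a*x - b*u (filter time constant tau) and cancelled in u."""
    x, d_hat = 0.0, 0.0
    hist = []
    for i in range(steps):
        d = 0.5 if i * T > 2.0 else 0.0            # step disturbance at t = 2 s
        e = x - x_ref
        u = (-a * x - k * e - d_hat) / b           # cancel dynamics + estimate
        xdot = a * x + b * u + d
        # UDE: first-order low-pass of the lumped term x' - a*x - b*u
        d_hat += (T / tau) * ((xdot - a * x - b * u) - d_hat)
        x += T * xdot                              # Euler integration
        hist.append((x, d_hat))
    return np.array(hist)
```

After the disturbance step, the estimate d_hat converges to the true disturbance with time constant tau, and the regulated state returns to the reference, which is the behaviour the abstract claims for input and load perturbations.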
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines an optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can eliminate the false matches of face recognition effectively. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
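The mutual-matching stage can be sketched directly: keep a correspondence only when each descriptor is the other's nearest neighbour. The toy descriptors below are plain vectors, not real SIFT output, and the paper further prunes the surviving matches with PROSAC, which is not shown:

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Keep only mutual nearest-neighbour matches between descriptor sets.
    desc_a: (Na, d), desc_b: (Nb, d). Returns index pairs (i, j)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)            # best match in B for each A descriptor
    ba = d.argmin(axis=0)            # best match in A for each B descriptor
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```

One-sided matches (an A descriptor whose best B match prefers some other A descriptor) are discarded, which is exactly how cross-checking removes a large share of false matches before the consensus stage.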
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter, and again uses the genetic algorithm to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
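The GA/robust-filter alternation is too involved for a short sketch, but the linear mixing model both stages solve per pixel is simple: each pixel spectrum is modelled as a weighted sum of endmember signatures, and the weights (abundances) are recovered by inversion. The least-squares solver and crude sum-to-one normalisation below are stand-ins for the hybrid solver:

```python
import numpy as np

def unmix_pixel(E, pixel):
    """Least-squares abundance estimate under the linear mixing model.
    E: (bands, endmembers) signature matrix; pixel: (bands,) spectrum.
    Returns abundances normalised to sum to one (a crude constraint)."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return a / a.sum()
```

In the patented scheme, the previous pixel's abundance vector would warm-start this inversion (as the filter state), and the genetic algorithm would then refine it.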
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Robust geostatistical analysis of spatial data
NASA Astrophysics Data System (ADS)
Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.
2013-04-01
Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter.
Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72.
Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439.
Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.
Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong
2013-09-01
This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical model based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic compensation type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaption algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technology is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.
The effectiveness of robust RMCD control chart as outliers’ detector
NASA Astrophysics Data System (ADS)
Darmanto; Astutik, Suci
2017-12-01
A well-known control chart for monitoring a multivariate process is Hotelling's T2, whose parameters are classically estimated; it is very sensitive to outliers and marred by the masking and swamping effects of outlying data. To overcome this situation, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which has the same robust characteristics as the MCD. In this paper, effectiveness means the accuracy of the RMCD control chart in detecting outliers as real outliers; in other words, how effectively the control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart by simulation, considering different scenarios: sample size n, proportion of outliers, and number p of quality characteristics. We found that in some scenarios this RMCD robust control chart works effectively.
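The reweighting idea behind such charts can be sketched in a simplified form: estimate location and scatter, drop points whose Mahalanobis distance exceeds a chi-square-type cutoff, re-estimate, and compute T2 against the cleaned estimates. This is only a one-pass illustration; a real RMCD chart starts from the raw MCD fit rather than the classical one, and the hardcoded cutoff below corresponds to p = 2 quality characteristics at roughly the 0.99 level:

```python
import numpy as np

def reweighted_t2(X, cutoff=9.21):
    """Hotelling T^2 from reweighted location/scatter (simplified sketch).
    X: (n, p) observations. Returns the T^2 statistics and the mask of
    points kept for re-estimation."""
    mu, S = X.mean(0), np.cov(X.T)
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
    keep = d2 <= cutoff                          # reweighting step
    mu_r, S_r = X[keep].mean(0), np.cov(X[keep].T)
    t2 = np.einsum('ij,jk,ik->i', X - mu_r, np.linalg.inv(S_r), X - mu_r)
    return t2, keep
```

Against the cleaned estimates, masked outliers reappear with very large T2 values, which is the masking-removal effect the abstract evaluates.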
A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.
Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing
2018-03-07
The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by the user and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current process and measurement noise covariances, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with the estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
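The residual-based adaptation with a weighting factor can be illustrated on a scalar random-walk filter. For a scalar state, the residual-based measurement-noise estimate is the squared post-fit residual plus the posterior variance, and it is blended with the previous value by a factor alpha. This is only an analogue of the mechanism; the paper applies it inside an unscented filter:

```python
import numpy as np

def adaptive_kf(zs, q=1e-4, r0=1.0, alpha=0.95):
    """Scalar KF with residual-based adaptive measurement noise.
    zs: measurement sequence; q: process noise; r0: initial guess for the
    measurement variance R; alpha: covariance blending (weighting) factor."""
    x, p, r = zs[0], 1.0, r0
    xs = []
    for z in zs[1:]:
        p += q                          # predict (random-walk state model)
        k = p / (p + r)                 # Kalman gain
        x += k * (z - x)                # update
        p *= (1 - k)
        eps = z - x                     # post-fit residual
        r = alpha * r + (1 - alpha) * (eps * eps + p)   # adapt R
    xs.append(x) if False else xs.append(x)
    return np.array([x]), r
```

Here E[eps^2 + p] equals the true R at the fixed point, so a badly initialised r drifts toward the actual measurement variance over time.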
Stability switches of arbitrary high-order consensus in multiagent networks with time delays.
Yang, Bo
2013-01-01
High-order consensus seeking, in which individual high-order dynamic agents share a consistent view of the objectives and the world in a distributed manner, finds its potential broad applications in the field of cooperative control. This paper presents stability switches analysis of arbitrary high-order consensus in multiagent networks with time delays. By employing a frequency domain method, we explicitly derive analytical equations that clarify a rigorous connection between the stability of general high-order consensus and the system parameters such as the network topology, communication time-delays, and feedback gains. Particularly, our results provide a general and a fairly precise notion of how increasing communication time-delay causes the stability switches of consensus. Furthermore, under communication constraints, the stability and robustness problems of consensus algorithms up to third order are discussed in details to illustrate our central results. Numerical examples and simulation results for fourth-order consensus are provided to demonstrate the effectiveness of our theoretical results.
NASA Astrophysics Data System (ADS)
Miola, Apollonia; Ciuffo, Biagio
2011-04-01
Maritime transport plays a central role in the transport sector's sustainability debate. Its contribution to air pollution and greenhouse gases is significant. An effective policy strategy to regulate air emissions requires their robust estimation in terms of quantification and location. This paper provides a critical analysis of the ship emission modelling approaches and data sources available, identifying their limits and constraints. It classifies the main methodologies on the basis of the approach followed (bottom-up or top-down) for the evaluation and geographic characterisation of emissions. The analysis highlights the uncertainty of results from the different methods. This is mainly due to the level of uncertainty connected with the sources of information that are used as inputs to the different studies. This paper describes the sources of the information required for these analyses, paying particular attention to AIS data and to the possible problems associated with their use. One way of reducing the overall uncertainty in the results could be the simultaneous use of different sources of information. This paper presents an alternative methodology based on this approach. As a final remark, it can be expected that new approaches to the problem together with more reliable data sources over the coming years could give more impetus to the debate on the global impact of maritime traffic on the environment that, currently, has only reached agreement via the "consensus" estimates provided by IMO (2009).
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents it from long-term independent running. Heading estimation error is one of the main sources of location error, and therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused in a Kalman filtering (KF) scheme in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF.
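The fusion structure, gyro-driven prediction corrected by magnetometer headings with an M-estimator-style gate on the innovation, can be sketched with a one-state filter. The Huber-type weighting, gate constant and noise values below are illustrative, not the paper's RAKF tuning:

```python
import numpy as np

def fuse_heading(gyro_rates, mag_headings, dt=0.02, q=0.01, r=4.0, c=3.0):
    """Gyro/magnetometer heading fusion with an M-estimator-style gate.
    Headings in degrees; gyro_rates in deg/s. Innovations beyond roughly
    c standard deviations are down-weighted (Huber-type), which inflates
    the effective measurement noise for outliers."""
    x, p = mag_headings[0], r
    out = []
    for w, zm in zip(gyro_rates, mag_headings):
        x, p = x + w * dt, p + q * dt               # predict from gyro rate
        nu = (zm - x + 180.0) % 360.0 - 180.0       # angle-wrapped innovation
        s = np.sqrt(p + r)
        weight = min(1.0, c * s / (abs(nu) + 1e-12))  # Huber-style weight
        k = p / (p + r / max(weight, 1e-6))           # gain with inflated R
        x = (x + k * nu) % 360.0
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```

A magnetic disturbance produces a large innovation, so its weight collapses and the update barely moves the state, while ordinary measurements pass through with weight one.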
Toward Robust Estimation of the Components of Forest Population Change
Francis A. Roesch
2014-01-01
Multiple levels of simulation are used to test the robustness of estimators of the components of change. I first created a variety of spatial-temporal populations based on, but more variable than, an actual forest monitoring data set and then sampled those populations under a variety of sampling error structures. The performance of each of four estimation approaches is...
Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)
2001-01-01
This research considers the application of robust Z(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z(sub 1) estimators based on multiplier theory and then develops a fixed-threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, the aim of a robust FD system is to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust Z(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply-weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS. PMID:27420066
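The block-wise robust estimation of the phase-elevation ratio can be sketched with a per-block iteratively reweighted least squares fit. A plain Huber loss stands in for the paper's multi-weighted scheme, and the band filtering is omitted; the slope of each block's fit is the local phase-elevation ratio:

```python
import numpy as np

def blockwise_phase_elevation(phase, height, blocks, k=1.345, n_iter=20):
    """Per-block robust fit of the phase-elevation ratio (IRLS sketch).
    phase, height: 1-D arrays over PS pixels; blocks: list of index arrays
    splitting the scene so the ratio can vary spatially."""
    ratios = []
    for idx in blocks:
        A = np.column_stack([height[idx], np.ones(len(idx))])  # slope + offset
        z = phase[idx]
        beta = np.linalg.lstsq(A, z, rcond=None)[0]
        for _ in range(n_iter):
            r = z - A @ beta
            s = 1.4826 * np.median(np.abs(r)) + 1e-12          # robust scale
            sw = np.sqrt(np.minimum(1.0, k * s / (np.abs(r) + 1e-12)))
            beta = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)[0]
        ratios.append(beta[0])                                  # slope = ratio
    return np.array(ratios)
```

Localized deformation signals act like outliers to the linear phase-topography model and get down-weighted, so the estimated ratio reflects the tropospheric component rather than the deformation.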
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
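The sampling-based estimation of an AUC ratio from sparse paired data can be sketched with a bootstrap: resample the (plasma, tissue) pairs with replacement, recompute both trapezoidal AUCs, and summarise the distribution of ratios. This is a plain one-phase bootstrap; the paper's proposal adds a second phase to the sampling algorithm for extra robustness, which is not reproduced here:

```python
import numpy as np

def auc_ratio_bootstrap(t, plasma, tissue, n_boot=500, seed=0):
    """Bootstrap a tissue-to-plasma AUC ratio from sparse paired samples.
    Each subject contributes one (plasma, tissue) pair at time t[i]."""
    rng = np.random.default_rng(seed)
    t, plasma, tissue = map(np.asarray, (t, plasma, tissue))

    def auc(y, x):                    # trapezoidal rule
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    ratios = []
    for _ in range(n_boot):
        idx = np.sort(rng.choice(len(t), size=len(t), replace=True))
        tt = t[idx]
        if tt[0] == tt[-1]:           # degenerate resample: zero-width AUC
            continue
        ratios.append(auc(tissue[idx], tt) / auc(plasma[idx], tt))
    ratios = np.asarray(ratios)
    return ratios.mean(), ratios.std()
```

Unlike the naïve data-averaging approach criticised in the abstract, the bootstrap attaches a measure of uncertainty (the standard deviation of the resampled ratios) to the estimate.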
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
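The fusion step described above, combining warped atlas labelmaps according to per-voxel confidences, can be sketched as confidence-weighted voting. This is a minimal illustration of the fusion rule only; the supervised confidence estimation itself is not shown, and the function names are assumptions.

```python
import numpy as np

def confidence_weighted_fusion(warped_labels, confidences):
    """Fuse multiple warped atlas labelmaps by confidence-weighted voting.

    warped_labels: (n_atlases, n_voxels) integer label arrays
    confidences:   (n_atlases, n_voxels) per-voxel atlas confidences
    Returns, at each voxel, the label with the highest total confidence.
    """
    labels = np.unique(warped_labels)
    n_vox = warped_labels.shape[1]
    scores = np.zeros((len(labels), n_vox))
    for i, lab in enumerate(labels):
        # sum the confidences of all atlases that voted for this label
        scores[i] = np.sum(confidences * (warped_labels == lab), axis=0)
    return labels[np.argmax(scores, axis=0)]
```

With uniform confidences this reduces to majority voting; the paper's contribution is precisely in making the confidences informative.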
The influence of ignoring secondary structure on divergence time estimates from ribosomal RNA genes.
Dohrmann, Martin
2014-02-01
Genes coding for ribosomal RNA molecules (rDNA) are among the most popular markers in molecular phylogenetics and evolution. However, coevolution of sites that code for pairing regions (stems) in the RNA secondary structure can make it challenging to obtain accurate results from such loci. While the influence of ignoring secondary structure on multiple sequence alignment and tree topology has been investigated in numerous studies, its effect on molecular divergence time estimates is still poorly known. Here, I investigate this issue in Bayesian Markov Chain Monte Carlo (BMCMC) and penalized likelihood (PL) frameworks, using empirical datasets from dragonflies (Odonata: Anisoptera) and glass sponges (Porifera: Hexactinellida). My results indicate that highly biased inferences under substitution models that ignore secondary structure only occur if maximum-likelihood estimates of branch lengths are used as input to PL dating, whereas in a BMCMC framework and in PL dating based on Bayesian consensus branch lengths, the effect is far less severe. I conclude that accounting for coevolution of paired sites in molecular dating studies is not as important as previously suggested, as long as the estimates are based on Bayesian consensus branch lengths instead of ML point estimates. This finding is especially relevant for studies where computational limitations do not allow the use of secondary-structure specific substitution models, or where accurate consensus structures cannot be predicted. I also found that the magnitude and direction (over- vs. underestimating node ages) of bias in age estimates when secondary structure is ignored was not distributed randomly across the nodes of the phylogenies, a phenomenon that requires further investigation. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah
2017-08-01
Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. This situation is more severe in the presence of outliers, which cause fatter tails in the error distribution than under the normal distribution. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is expected to be affected by the presence of outliers due to some assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method, with some modification in the inverse matrix and the estimated response value, is introduced. The performance of the proposed method is discussed and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression and robust ridge regression based on GM-estimates. The proposed method is found to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
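The general idea of combining ridge shrinkage (against multicollinearity) with M-estimation weights (against outliers) can be sketched as follows. This is a generic Huber-weighted ridge sketch, not the authors' specific modification of the inverse matrix.

```python
import numpy as np

def robust_ridge(X, y, lam=1.0, n_iter=25, k=1.345):
    """Ridge regression with Huber IRLS weights: the ridge penalty handles
    collinear columns, while the weights downweight outlying residuals."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # robust residual scale (MAD), guarded against zero
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)          # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
    return beta
```

OLS on the same contaminated, collinear data would be destabilized by both problems at once; here each is addressed by one ingredient.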
Management of Malignant Pleural Effusion: A Cost-Utility Analysis.
Shafiq, Majid; Frick, Kevin D; Lee, Hans; Yarmus, Lonny; Feller-Kopman, David J
2015-07-01
Malignant pleural effusion (MPE) is associated with a significant impact on health-related quality of life. Palliative interventions abound, with varying costs and degrees of invasiveness. We examined the relative cost-utility of 5 therapeutic alternatives for MPE among adults. Original studies investigating the management of MPE were extensively researched, and the most robust and current data, particularly those from the TIME2 trial, were chosen to estimate event probabilities. Medicare data were used for cost estimation. Utility estimates were adapted from 2 original studies and kept consistent with prior estimations. The decision tree model was based on clinical guidelines and authors' consensus opinion. The primary outcome of interest was the incremental cost-effectiveness ratio for each intervention over a less effective alternative over an analytical horizon of 6 months. Given the paucity of data on the rapid pleurodesis protocol, a sensitivity analysis was conducted to address the uncertainty surrounding its efficacy in terms of achieving long-term pleurodesis. Except for repeated thoracentesis (RT; least effective), all interventions had similar effectiveness. Tunneled pleural catheter was the most cost-effective option, with an incremental cost-effectiveness ratio of $45,747 per QALY gained over RT, assuming a willingness-to-pay threshold of $100,000/QALY. Multivariate sensitivity analysis showed that the rapid pleurodesis protocol remained cost-ineffective even with an estimated probability of lasting pleurodesis of up to 85%. Tunneled pleural catheter is the most cost-effective therapeutic alternative to RT. This, together with its relative convenience (requiring neither hospitalization nor thoracoscopic procedural skills), makes it an intervention of choice for MPE.
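The incremental cost-effectiveness ratio (ICER) arithmetic underlying this comparison is simple to state directly. The numbers in the example below are made-up illustrations, not the study's estimates.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new intervention over the reference alternative."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

def cost_effective(icer_value, wtp=100_000):
    """Compare an ICER against a willingness-to-pay threshold ($/QALY)."""
    return icer_value <= wtp
```

An intervention that costs $4,500 more and yields 0.10 more QALYs than the reference has an ICER of $45,000/QALY, below the $100,000/QALY threshold the study assumes.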
Grant, Frederick D; Gelfand, Michael J; Drubach, Laura A; Treves, S Ted; Fahey, Frederic H
2015-04-01
Estimated radiation dose is important for assessing and communicating the risks and benefits of pediatric nuclear medicine studies. Radiation dose depends on the radiopharmaceutical, the administered activity, and patient factors such as age and size. Most radiation dose estimates for pediatric nuclear medicine have not been based on administered activities of radiopharmaceuticals recommended by established practice guidelines. The dosage card of the European Association of Nuclear Medicine (EANM) and the North American consensus guidelines each provide recommendations of administered activities of radiopharmaceuticals in children, but there are substantial differences between these two guidelines. For 12 commonly performed pediatric nuclear medicine studies, two established pediatric radiopharmaceutical administration guidelines were used to calculate updated radiation dose estimates and to compare the radiation exposure resulting from the recommendations of each of the guidelines. Estimated radiation doses were calculated for 12 common procedures in pediatric nuclear medicine using administered activities recommended by the dosage card of the EANM (version 1.5.2008) and the 2010 North American consensus guidelines for radiopharmaceutical administered activities in pediatrics. Based on standard models and nominal age-based weights, radiation dose was estimated for typical patients at ages 1, 5, 10 and 15 years and adult. The resulting effective doses were compared, with differences greater than 20% considered significant. Following either the EANM dosage card or the 2010 North American guidelines, the highest effective doses occur with radiopharmaceuticals labeled with fluorine-18 and iodine-123. In 24% of cases, following the North American consensus guidelines would result in a substantially higher radiation dose. 
The guidelines of the EANM dosage card would lead to a substantially higher radiation dose in 39% of all cases, and in 62% of cases in which patients were age 5 years or younger. For 12 commonly performed pediatric nuclear medicine studies, updated radiation dose estimates can guide efforts to reduce radiation exposure and provide current information for discussing radiation exposure and risk with referring physicians, patients and families. There can be substantial differences in radiation exposure for the same procedure, depending upon which of these two guidelines is followed. This discordance identifies opportunities for harmonization of the guidelines, which may lead to further reduction in nuclear medicine radiation doses in children.
Shahin, Arwa; Smulders, Marinus J. M.; van Tuyl, Jaap M.; Arens, Paul; Bakker, Freek T.
2014-01-01
Next Generation Sequencing (NGS) may enable estimating relationships among genotypes using allelic variation of multiple nuclear genes simultaneously. We explored the potential and caveats of this strategy in four genetically distant Lilium cultivars to estimate their genetic divergence from transcriptome sequences using three approaches: POFAD (Phylogeny of Organisms from Allelic Data, uses allelic information of sequence data), RAxML (Randomized Accelerated Maximum Likelihood, tree building based on concatenated consensus sequences) and Consensus Network (constructing a network summarizing among gene tree conflicts). Twenty six gene contigs were chosen based on the presence of orthologous sequences in all cultivars, seven of which also had an orthologous sequence in Tulipa, used as out-group. The three approaches generated the same topology. Although the resolution offered by these approaches is high, in this case there was no extra benefit in using allelic information. We conclude that these 26 genes can be widely applied to construct a species tree for the genus Lilium. PMID:25368628
Case, Brett A; Hackel, Benjamin J
2016-08-01
Protein ligand charge can impact physiological delivery, with charge reduction often benefiting performance. Yet neutralizing mutations can be detrimental to protein function. Herein, three approaches are evaluated to introduce charged-to-neutral mutations of three cations and three anions within an affibody engineered to bind epidermal growth factor receptor. These approaches (combinatorial library sorting, or consensus design based on natural homologs or library-sorted mutants) are used to identify mutations with favorable affinity, stability, and recombinant yield. Consensus design, based on 942 affibody homologs, yielded a mutant of modest function (Kd = 11 ± 4 nM, Tm = 62°C, and yield = 4.0 ± 0.8 mg/L, as compared to 5.3 ± 1.7 nM, 71°C, and 3.5 ± 0.3 mg/L for the parental affibody). Extension of consensus design to 10 additional mutants exhibited varied performance, including a substantially improved mutant (Kd = 6.9 ± 1.4 nM, Tm = 71°C, and 12.7 ± 0.9 mg/L yield). Sorting a homolog-based combinatorial library of 7 × 10^5 mutants generated a distribution of mutants with lower stability and yield, but did identify one strongly binding variant (Kd = 1.2 ± 0.3 nM, Tm = 69°C, and 6.0 ± 0.4 mg/L yield). Synthetic consensus design, based on the amino acid distribution in functional library mutants, yielded higher affinities (P = 0.05) with comparable stabilities and yields. The best of four analyzed clones had Kd = 1.7 ± 0.5 nM, Tm = 68°C, and 7.0 ± 0.5 mg/L yield. While all three approaches were effective in creating targeted affibodies with six charged-to-neutral mutations, synthetic consensus design proved to be the most robust. Synthetic consensus design provides a valuable tool for ligand engineering, particularly in the context of charge manipulation. Biotechnol. Bioeng. 2016;113: 1628-1638. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Cook, J.
2016-12-01
A number of studies have sought to quantify the level of agreement among climate scientists on human-caused global warming. This has included surveys of the scientific community, analyses of public declarations about climate change and analyses of peer-reviewed climate papers. This body of research has found that the level of consensus increases with expertise in climate science, culminating in 97% agreement among publishing climate scientists. Despite this robust finding, there is a significant gap between public perception of scientific consensus and the overwhelming agreement among climate scientists. This "consensus gap" is due in large part to a persistent, focused campaign to manufacture doubt about the scientific consensus by opponents of climate action. This campaign has employed non-expert spokespeople, magnified the small minority of dissenting scientists and exploited the journalistic norm of balance to generate the impression of an equal debate among scientists. Given the importance of perceived consensus as a "gateway belief" influencing a number of climate beliefs and attitudes, it is imperative that climate communicators close the consensus gap. This can be achieved by communicating the 97% consensus and explaining the techniques used to cast doubt on the consensus.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
New Phosphospecific Antibody Reveals Isoform-Specific Phosphorylation of CPEB3 Protein
Sehgal, Kapil; Sylvester, Marc; Skubal, Magdalena; Josten, Michele; Steinhäuser, Christian; De Koninck, Paul; Theis, Martin
2016-01-01
Cytoplasmic Polyadenylation Element Binding proteins (CPEBs) are a family of polyadenylation factors interacting with 3’UTRs of mRNA and thereby regulating gene expression. Various functions of CPEBs in development, synaptic plasticity, and cellular senescence have been reported. Four CPEB family members of partially overlapping functions have been described to date, each containing a distinct alternatively spliced region. This region is highly conserved between CPEBs-2-4 and contains a putative phosphorylation consensus, overlapping with the exon seven of CPEB3. We previously found CPEBs-2-4 splice isoforms containing exon seven to be predominantly present in neurons, and the isoform expression pattern to be cell type-specific. Here, focusing on the alternatively spliced region of CPEB3, we determined that putative neuronal isoforms of CPEB3 are phosphorylated. Using a new phosphospecific antibody directed to the phosphorylation consensus, we found Protein Kinase A and Calcium/Calmodulin-dependent Protein Kinase II to robustly phosphorylate CPEB3 in vitro and in primary hippocampal neurons. Interestingly, status epilepticus induced by systemic kainate injection in mice led to specific upregulation of the CPEB3 isoforms containing exon seven. Extensive analysis of CPEB3 phosphorylation in vitro revealed two other phosphorylation sites. In addition, we found a plethora of potential kinases that might be targeting the alternatively spliced kinase consensus site of CPEB3. As this site is highly conserved between the CPEB family members, we suggest the existence of a splicing-based regulatory mechanism of CPEB function, and describe a robust phosphospecific antibody to study it in the future. PMID:26915047
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probabilities of each data point using a simplified Bayes' rule, improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
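The BaySAC idea, deterministically testing the highest-probability hypothesis set and updating inlier probabilities with Bayes' rule, can be sketched on a toy 2D line-fitting problem. This is a heavily simplified illustration (uniform prior, inlier-ratio likelihood), not the paper's registration pipeline.

```python
import numpy as np

def baysac_line(points, n_iter=50, tol=0.1, seed=0):
    """BaySAC-style line fitting (simplified): the hypothesis set is the two
    points with the highest current inlier probabilities, and those
    probabilities are updated with Bayes' rule after each hypothesis test."""
    rng = np.random.default_rng(seed)
    n = len(points)
    prob = 0.5 + 1e-6 * rng.random(n)        # near-uniform prior, ties broken
    best_model, best_count = None, -1
    for _ in range(n_iter):
        i, j = np.argsort(prob)[-2:]         # highest-probability pair
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            prob[[i, j]] *= 0.5
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        count = int(np.sum(resid < tol))
        if count > best_count:
            best_model, best_count = (a, b), count
        # Bayes update: likelihood of the test outcome is the inlier ratio,
        # so points in failed hypotheses are demoted and not retested
        lik = count / n
        p = prob[[i, j]]
        prob[[i, j]] = p * lik / (p * lik + (1 - p) * (1 - lik))
    return best_model, best_count
```

Because failed hypothesis points are demoted rather than resampled blindly, fewer iterations are wasted than in plain RANSAC when outlier contamination is high.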
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g. when multiple experts label an image and their interpretations are different, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretations, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
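The three consensus techniques compared here, mean, median and mode over the radiologists' ordinal ratings, can be stated in a few lines. A minimal sketch; the rounding convention for non-integer means is an assumption.

```python
import statistics

def consensus_label(ratings, method="mean"):
    """Collapse multiple experts' ordinal ratings (e.g. 1-5 malignancy
    scores) into a single deterministic label."""
    if method == "mean":
        return round(statistics.mean(ratings))   # rounded to the scale
    if method == "median":
        return round(statistics.median(ratings))
    if method == "mode":
        return statistics.mode(ratings)          # most frequent rating
    raise ValueError(f"unknown method: {method}")
```

Note how disagreeing raters can produce different consensus labels under each rule, which is exactly the effect the paper measures downstream in the classifiers.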
Zhang, Shu-Xin; Chai, Xin-Sheng; Huang, Bo-Xi; Mai, Xiao-Xia
2015-08-07
Alkylphenol polyethoxylates (APEO), surfactants used in the production of textiles, have the potential to move from the fabric to the skin of the person wearing the clothes, posing an inherent risk of adverse health consequences. Therefore, the textile industry needs a fast, robust method for determining aqueous-extractable APEO in fabrics. The currently favored HPLC methods are limited by the presence of a mixture of analytes (due to the molecular weight distribution) and a lack of analytical standards for quantifying results. As a result, it has not been possible to reach consensus on a standard method for the determination of APEO in textiles. This paper addresses these limitations through the use of reaction-based headspace gas chromatography (HS-GC). Specifically, water is used to simulate body sweat and extract APEO. HI is then used to depolymerize the ethoxylate chains into iodoethane, which is quantified through HS-GC, providing an estimate of the average amount of APEO in the clothing. Data are presented to justify the optimal operating conditions, i.e., water extraction at 60°C for 1 h and reaction with a specified amount of HI in the headspace vial at 135°C for 4 h. The results show that the HS-GC method has good precision (RSD < 10%) and good accuracy (recoveries from 95 to 106%) for the quantification of APEO content in textile and related materials. As such, the method should be a strong candidate to become a standard method for such determinations. Copyright © 2015 Elsevier B.V. All rights reserved.
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.
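The agreement-type filtering step, local estimators repeatedly averaging with their neighbors until they agree, can be sketched with a standard discrete-time Laplacian consensus iteration. This illustrates only the consensus ingredient, not the full communicating-Kalman-filter architecture.

```python
import numpy as np

def consensus_estimates(local_est, adjacency, n_steps=300, eps=0.2):
    """Drive scalar local estimates toward agreement by repeated averaging
    with neighbors: x_i <- x_i + eps * sum_j a_ij (x_j - x_i).
    Requires eps * max_degree < 1 for stability."""
    x = np.array(local_est, dtype=float)
    A = np.array(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    for _ in range(n_steps):
        x = x - eps * (L @ x)
    return x
```

On a connected undirected graph, this converges to the average of the initial estimates, so each node ends up with the network-wide estimate using only local exchanges.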
Fuzzy Behavior-Based Navigation for Planetary
NASA Technical Reports Server (NTRS)
Tunstel, Edward; Danny, Harrison; Lippincott, Tanya; Jamshidi, Mo
1997-01-01
Adaptive behavioral capabilities are necessary for robust rover navigation in unstructured and partially-mapped environments. A control approach is described which exploits the approximate reasoning capability of fuzzy logic to produce adaptive motion behavior. In particular, a behavior-based architecture for hierarchical fuzzy control of microrovers is presented. Its structure is described, as well as mechanisms of control decision-making which give rise to adaptive behavior. Control decisions for local navigation result from a consensus of recommendations offered only by behaviors that are applicable to current situations. Simulation predicts the navigation performance on a microrover in simplified Mars-analog terrain.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
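The innovations-based detection logic, flagging a failure when the innovation exceeds a threshold built from modeling-error bounds and noise properties, can be sketched as follows. A minimal scalar illustration; the actual threshold selector is far more sophisticated.

```python
import numpy as np

def detect_sensor_fault(innovations, model_error_bound, noise_sigma, n_sigma=3.0):
    """Flag a fault when an innovation exceeds a threshold that accounts for
    both the deterministic modeling-error bound and stochastic noise.

    A fault-free innovation stays within model_error_bound plus a few
    standard deviations of measurement noise; anything larger is flagged."""
    threshold = model_error_bound + n_sigma * noise_sigma
    return np.abs(np.asarray(innovations)) > threshold
```

Setting the threshold from an explicit modeling-error bound (rather than empirically) is what lets the scheme trade off false alarms against missed detections in a principled way.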
Rabe, Eberhard; Partsch, Hugo; Hafner, Juerg; Lattimer, Christopher; Mosti, Giovanni; Neumann, Martino; Urbanek, Tomasz; Huebner, Monika; Gaillard, Sylvain; Carpentier, Patrick
2017-01-01
Objective Medical compression stockings are a standard, non-invasive treatment option for all venous and lymphatic diseases. The aim of this consensus document is to provide up-to-date recommendations and evidence grading on the indications for treatment, based on evidence accumulated during the past decade, under the auspices of the International Compression Club. Methods A systematic literature review was conducted and, using PRISMA guidelines, 51 relevant publications were selected for an evidence-based analysis of an initial 2407 unrefined results. Key search terms included: ‘acute', ‘CEAP', ‘chronic', ‘compression stockings', ‘compression therapy', ‘lymph', ‘lymphatic disease', ‘vein' and ‘venous disease'. Evidence extracted from the publications was graded initially by the panel members individually and then refined at the consensus meeting. Results Based on the current evidence, 25 recommendations for chronic and acute venous disorders were made. Of these, 24 recommendations were graded as: Grade 1A (n = 4), 1B (n = 13), 1C (n = 2), 2B (n = 4) and 2C (n = 1). The panel members found moderately robust evidence for medical compression stockings in patients with venous symptoms and prevention and treatment of venous oedema. Robust evidence was found for prevention and treatment of venous leg ulcers. Recommendations for stocking-use after great saphenous vein interventions were limited to the first post-interventional week. No randomised clinical trials are available that document a prophylactic effect of medical compression stockings on the progression of chronic venous disease (CVD). In acute deep vein thrombosis, immediate compression is recommended to reduce pain and swelling. Despite conflicting results from a recent study to prevent post-thrombotic syndrome, medical compression stockings are still recommended. In thromboprophylaxis, the role of stockings in addition to anticoagulation is limited. 
For the maintenance phase of lymphoedema management, compression stockings are the most important intervention. Conclusion The beneficial value of applying compression stockings in the treatment of venous and lymphatic disease is supported by this document, with 19/25 recommendations rated as Grade 1 evidence. For recommendations rated with Grade 2 level of evidence, further studies are needed. PMID:28549402
Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.
Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun
2016-05-09
The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers' misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors.
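The equivalent-weight idea at the heart of the FMR-CKF, downweighting an innovation according to its Mahalanobis distance, can be sketched with a Huber-style weight function. A minimal illustration of the weighting rule only; the CKF machinery, splitting/merging feedback, and the paper's exact weight function are not shown.

```python
import numpy as np

def equivalent_weight(innov, S, k=1.345):
    """Huber-style equivalent weight from the Mahalanobis distance of the
    innovation vector innov with innovation covariance S: weight 1 for
    small distances, k/d for outlying measurements."""
    d = float(np.sqrt(innov @ np.linalg.solve(S, innov)))
    return 1.0 if d <= k else k / d
```

In a robust filter this weight would scale the measurement update, so a single outlying angle measurement cannot drag the state estimate far off.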
Comparison of Global and Mode of Action-Based Models for Aquatic Toxicity
The ability to estimate aquatic toxicity for a wide variety of chemicals is a critical need for ecological risk assessment and chemical regulation. The consensus in the literature is that mode of action (MOA) based QSAR (Quantitative Structure Activity Relationship) models yield ...
Speeding up the Consensus Clustering methodology for microarray data analysis
2011-01-01
Background The inference of the number of clusters in a dataset, a fundamental problem in Statistics, Data Analysis and Classification, is usually addressed via internal validation measures. The stated problem is quite difficult, in particular for microarrays, since the inferred prediction must be sensible enough to capture the inherent biological structure in a dataset, e.g., functionally related genes. Despite the rich literature present in that area, the identification of an internal validation measure that is both fast and precise has proved to be elusive. In order to partially fill this gap, we propose a speed-up of Consensus (Consensus Clustering), a methodology whose purpose is the provision of a prediction of the number of clusters in a dataset, together with a dissimilarity matrix (the consensus matrix) that can be used by clustering algorithms. As detailed in the remainder of the paper, Consensus is a natural candidate for a speed-up. Results Since the time-precision performance of Consensus depends on two parameters, our first task is to show that a simple adjustment of the parameters is not enough to obtain a good precision-time trade-off. Our second task is to provide a fast approximation algorithm for Consensus: that is, the closely related algorithm FC (Fast Consensus), which has the same precision as Consensus with a substantially better time performance. The performance of FC has been assessed via extensive experiments on twelve benchmark datasets that summarize key features of microarray applications, such as cancer studies, gene expression with up and down patterns, and a full spectrum of dimensionality up to over a thousand. Based on their outcome, compared with previous benchmarking results available in the literature, FC turns out to be among the fastest internal validation methods, while retaining the same outstanding precision of Consensus. 
Moreover, it provides a consensus matrix that can be used as a dissimilarity matrix, guaranteeing the same performance as the corresponding matrix produced by Consensus. We have also experimented with the use of Consensus and FC in conjunction with NMF (Nonnegative Matrix Factorization), in order to identify the correct number of clusters in a dataset. Although NMF is an increasingly popular technique for biological data mining, our results are somewhat disappointing and complement quite well the state of the art about NMF, shedding further light on its merits and limitations. Conclusions In summary, FC with a parameter setting that makes it robust with respect to small and medium-sized datasets, i.e., number of items to cluster in the hundreds and number of conditions up to a thousand, seems to be the internal validation measure of choice. Moreover, the technique we have developed here can be used in other contexts, in particular for the speed-up of stability-based validation measures. PMID:21235792
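The consensus (co-association) matrix at the heart of this methodology is easy to sketch: over several clustering runs, record how often each pair of items lands in the same cluster, then average. A minimal illustration with hypothetical toy labelings (this is the core idea only, not the paper's FC speed-up):

```python
import numpy as np

def consensus_matrix(labelings):
    """Co-association (consensus) matrix: entry (i, j) is the fraction of
    clustering runs in which items i and j fall in the same cluster.
    1 - M is a dissimilarity matrix usable by any clustering algorithm."""
    L = np.asarray(labelings)              # shape: (n_runs, n_items)
    n_runs = L.shape[0]
    M = sum((run[:, None] == run[None, :]).astype(float) for run in L)
    return M / n_runs

# Three toy labelings of 4 items; items 0 and 1 co-cluster in every run.
runs = [[0, 0, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
M = consensus_matrix(runs)
print(M[0, 1])  # 1.0
```

Items that co-cluster under repeated subsampling get entries near 1; unstable pairs drift toward 0, which is what makes the matrix useful for inferring the number of clusters.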
Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C
2015-03-01
A recent article (Zhang et al., 2012, Biometrics 68, 1010-1018) compares regression based and inverse probability based methods of estimating an optimal treatment regime and shows for a small number of covariates that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 68, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression and inverse probability weighted based methods improves their properties. © 2014, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Saptarshi
Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. 
We demonstrate the effectiveness of these two proposed swarm guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time-scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme. Thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations. In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder.
If the captured object is larger or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque. The proposed nonlinear tracking control law guarantees global exponential convergence of tracking errors with finite-gain Lp stability in the presence of modeling uncertainties and disturbances, and reduces the resultant disturbance torque. Further, this control law permits the use of any attitude representation and its integral control formulation eliminates any constant disturbance. Under small uncertainties, the best strategy for stabilizing the combined system is to track a fuel-optimal reference trajectory using this nonlinear control law, because it consumes the least amount of fuel. In the presence of large uncertainties, the most effective strategy is to track the derivative plus proportional-derivative based reference trajectory, because it reduces the resultant disturbance torque. The effectiveness of the proposed attitude control law is demonstrated by using results of numerical simulation based on an Asteroid Redirect Mission concept. The new algorithms proposed in this dissertation will facilitate the development of versatile autonomous multi-agent systems that are capable of performing a variety of complex tasks in a robust and scalable manner.
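The Eulerian, Markov-chain idea behind probabilistic swarm guidance can be sketched in a few lines: choose bin-to-bin transition probabilities whose stationary distribution equals the desired formation density, then let the swarm's density evolve under that chain. The sketch below uses a simple time-homogeneous Metropolis-Hastings construction on a hypothetical 4-bin ring, not the feedback-driven inhomogeneous chains of PSG-IMC:

```python
import numpy as np

def metropolis_chain(target, adjacency):
    """Row-stochastic Markov matrix over bins whose stationary distribution
    equals `target`: Metropolis-Hastings on the bin adjacency graph."""
    n = len(target)
    deg = adjacency.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                accept = min(1.0, target[j] * deg[i] / (target[i] * deg[j]))
                P[i, j] = accept / deg[i]      # propose uniformly, then accept
        P[i, i] = 1.0 - P[i].sum()             # rejected moves stay in the bin
    return P

# Four bins on a ring; desired swarm density over the bins:
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pi = np.array([0.4, 0.3, 0.2, 0.1])
P = metropolis_chain(pi, A)
dist = np.full(4, 0.25)                        # swarm starts uniform
for _ in range(200):
    dist = dist @ P                            # density converges to the shape
print(np.round(dist, 3))  # -> [0.4 0.3 0.2 0.1]
```

Because each agent only needs its own bin's transition row and local adjacency, this kind of rule is distributed and scalable, which is the property the dissertation exploits.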
Using the Entrustable Professional Activities Framework in the Assessment of Procedural Skills.
Pugh, Debra; Cavalcanti, Rodrigo B; Halman, Samantha; Ma, Irene W Y; Mylopoulos, Maria; Shanks, David; Stroud, Lynfa
2017-04-01
The entrustable professional activity (EPA) framework has been identified as a useful approach to assessment in competency-based education. To apply an EPA framework for assessment, essential skills necessary for entrustment to occur must first be identified. Using an EPA framework, our study sought to (1) define the essential skills required for entrustment for 7 bedside procedures expected of graduates of Canadian internal medicine (IM) residency programs, and (2) develop rubrics for the assessment of these procedural skills. An initial list of essential skills was defined for each procedural EPA by focus groups of experts at 4 academic centers using the nominal group technique. These lists were subsequently vetted by representatives from all Canadian IM training programs through a web-based survey. Consensus (more than 80% agreement) about inclusion of each item was sought using a modified Delphi exercise. Qualitative survey data were analyzed using a framework approach to inform final assessment rubrics for each procedure. Initial lists of essential skills for procedural EPAs ranged from 10 to 24 items. A total of 111 experts completed the national survey. After 2 iterations, consensus was reached on all items. Following qualitative analysis, final rubrics were created, which included 6 to 10 items per procedure. These EPA-based assessment rubrics represent a national consensus by Canadian IM clinician educators. They provide a practical guide for the assessment of procedural skills in a competency-based education model, and a robust foundation for future research on their implementation and evaluation.
Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity
Lee, Hui Sun; Im, Wonpil
2013-01-01
Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA) because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement on prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
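The iteratively reweighted least squares idea can be sketched directly: down-weight each case by its Mahalanobis distance from the current center, then recompute the mean and covariance with those weights, and repeat. A minimal Huber-type sketch (the weighting details are simplified relative to the article's schemes):

```python
import numpy as np

def irls_mean_cov(X, c=2.5, iters=50):
    """Robust mean/covariance by iteratively reweighted least squares:
    cases far from the current center (large Mahalanobis distance) get
    Huber-type down-weights. A simplified sketch of the general scheme."""
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(iters):
        Xc = X - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
        w = np.where(d <= c, 1.0, c / d)       # Huber weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        S = (w[:, None] * Xc).T @ Xc / w.sum()
    return mu, S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:5] += 50.0                                  # five gross outliers
mu_robust, _ = irls_mean_cov(X)
print(np.round(mu_robust, 2))                  # near [0, 0] despite the outliers
```

The down-weighting keeps a handful of extreme cases from dominating the first and second moments, which is what makes the resulting structural-equation estimates robust.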
Prioritizing Scientific Initiatives.
ERIC Educational Resources Information Center
Bahcall, John N.
1991-01-01
Discussed is the way in which a limited number of astronomy research initiatives were chosen and prioritized based on a consensus of members from the Astronomy and Astrophysics Survey Committee. A list of recommended equipment initiatives and estimated costs is provided. (KR)
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM), which is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
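The split into local and global computation tasks can be sketched as follows: each node multiplies its local covariance by the current vector (local task), while the aggregation and normalization are carried out by average consensus over the network (global task). A hypothetical 4-node ring with doubly stochastic weights, not the CoMAC-accelerated variant:

```python
import numpy as np

def average_consensus(values, W, steps=100):
    """Global computation task: rows (one per node) converge to the
    network-wide average under a doubly stochastic weight matrix W."""
    for _ in range(steps):
        values = W @ values
    return values

rng = np.random.default_rng(1)
N, d = 4, 3
# Each node holds a local sample covariance; the network-wide covariance
# is the average of the local ones.
local_cov = [np.cov(rng.normal(size=(50, d)), rowvar=False) for _ in range(N)]
C = sum(local_cov) / N                     # centralized reference, for checking

# Doubly stochastic mixing weights for a 4-node ring.
W = np.array([[.5, .25, 0, .25],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [.25, 0, .25, .5]])

x = np.tile(rng.normal(size=d), (N, 1))    # every node starts from same vector
for _ in range(100):
    y = np.stack([Ci @ xi for Ci, xi in zip(local_cov, x)])  # local task
    y = average_consensus(y, W)                              # global task: y ~ C x
    x = y / np.linalg.norm(y, axis=1, keepdims=True)         # idealized normalization
lam = float(x[0] @ C @ x[0])               # Rayleigh quotient at node 0
print(round(lam, 4))
```

Every node ends up with the same dominant eigenvector estimate, so each can locally compute the top eigenvalue; deflation would extend the sketch to further eigenvalues.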
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model M_h (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the M_h estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used M_h estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
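For readers unfamiliar with capture-recapture estimation, the simplest closed-population estimator (far simpler than the heterogeneity model selected in the study, which allows capture probability to vary among animals) illustrates the underlying idea; the counts below are hypothetical:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of the size of a
    closed population: n1 animals marked on occasion 1, n2 caught on
    occasion 2, of which m2 were already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# 60 voles marked, 50 caught on the next occasion, 20 of them marked:
print(round(chapman_estimate(60, 50, 20), 1))  # 147.1
```

Heterogeneous capture probabilities bias this kind of estimator downward (trap-happy animals are recaptured too often), which is exactly why the study preferred the heterogeneity model over Jolly-Seber population sizes.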
Kiryu, Hisanori; Kin, Taishin; Asai, Kiyoshi
2007-02-15
Recent transcriptomic studies have revealed the existence of a considerable number of non-protein-coding RNA transcripts in higher eukaryotic cells. To investigate the functional roles of these transcripts, it is of great interest to find conserved secondary structures from multiple alignments on a genomic scale. Since multiple alignments are often created using alignment programs that neglect the special conservation patterns of RNA secondary structures for computational efficiency, alignment failures risk overlooking conserved stem structures. We investigated the dependence of the accuracy of secondary structure prediction on the quality of alignments. We compared three algorithms that maximize the expected accuracy of secondary structures as well as other frequently used algorithms. We found that one of our algorithms, called McCaskill-MEA, was more robust against alignment failures than others. The McCaskill-MEA method first computes the base pairing probability matrices for all the sequences in the alignment and then obtains the base pairing probability matrix of the alignment by averaging over these matrices. The consensus secondary structure is predicted from this matrix such that the expected accuracy of the prediction is maximized. We show that the McCaskill-MEA method performs better than other methods, particularly when the alignment quality is low and when the alignment consists of many sequences. Our model has a parameter that controls the sensitivity and specificity of predictions. We discuss the uses of this parameter for multi-step screening procedures to search for conserved secondary structures and for assigning confidence values to the predicted base pairs. The C++ source code that implements the McCaskill-MEA algorithm and the test dataset used in this paper are available at http://www.ncrna.org/papers/McCaskillMEA/. Supplementary data are available at Bioinformatics online.
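The averaging step, followed by a maximum-expected-accuracy selection of nested base pairs, can be sketched with a Nussinov-style dynamic program. This is a simplified sketch (toy probability matrices, score only, no traceback), with gamma playing the role of the sensitivity/specificity parameter mentioned above:

```python
import numpy as np

def mea_structure(P, gamma=0.5):
    """Nussinov-style DP choosing a nested set of base pairs that maximizes
    the sum of (P[i,j] - gamma), where P is an averaged base-pairing
    probability matrix. Returns the expected-accuracy score."""
    n = len(P)
    M = np.zeros((n, n))
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = M[i, j - 1]                         # j left unpaired
            for k in range(i, j):                      # pair (k, j)
                gain = P[k, j] - gamma
                if gain > 0:
                    left = M[i, k - 1] if k > i else 0.0
                    inner = M[k + 1, j - 1] if k + 1 <= j - 1 else 0.0
                    best = max(best, left + inner + gain)
            M[i, j] = best
    return M[0, n - 1]

# Average two toy per-sequence base-pairing probability matrices, then score.
P1 = np.zeros((4, 4)); P1[0, 3] = 0.9; P1[1, 2] = 0.8
P2 = np.zeros((4, 4)); P2[0, 3] = 0.7; P2[1, 2] = 0.2
P = (P1 + P2) / 2
print(round(mea_structure(P), 2))  # 0.3: only pair (0,3) clears gamma
```

Raising gamma demands stronger average support before a pair is predicted (higher specificity); lowering it admits weaker pairs (higher sensitivity), mirroring the trade-off parameter described in the abstract.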
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
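A univariate sketch of the two building blocks (the Hodges-Lehmann location estimator and a permutation p-value) conveys the idea; the article's tests are multivariate and include scaled versions, which this sketch omits:

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of all pairwise averages."""
    x = np.asarray(x, float)
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2)

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sample location test: the statistic is the difference of
    HL estimates; its null distribution comes from relabeling the pool."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    obs = abs(hodges_lehmann(x) - hodges_lehmann(y))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(hodges_lehmann(perm[:len(x)]) - hodges_lehmann(perm[len(x):]))
        count += stat >= obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1, 30)
y = rng.normal(1.5, 1, 30)
print(permutation_pvalue(x, y, n_perm=500))  # small p: clear location shift
```

Because the permutation distribution is built from the data themselves, the p-value needs no normality assumption, which is the robustness property the abstract emphasizes.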
Moore, Caroline M; Giganti, Francesco; Albertsen, Peter; Allen, Clare; Bangma, Chris; Briganti, Alberto; Carroll, Peter; Haider, Masoom; Kasivisvanathan, Veeru; Kirkham, Alex; Klotz, Laurence; Ouzzane, Adil; Padhani, Anwar R; Panebianco, Valeria; Pinto, Peter; Puech, Philippe; Rannikko, Antti; Renard-Penna, Raphaele; Touijer, Karim; Turkbey, Baris; Van Poppel, Hendrik; Valdagni, Riccardo; Walz, Jochen; Schoots, Ivo
2017-04-01
Published data on prostate magnetic resonance imaging (MRI) during follow-up of men on active surveillance are lacking. Current guidelines for prostate MRI reporting concentrate on prostate cancer (PCa) detection and staging. A standardised approach to prostate MRI reporting for active surveillance will facilitate the robust collection of evidence in this newly developing area. To develop preliminary recommendations for reporting of individual MRI studies in men on active surveillance and for researchers reporting the outcomes of cohorts of men having MRI on active surveillance. The RAND/UCLA Appropriateness Method was used. Experts in urology, radiology, and radiation oncology developed a set of 394 statements relevant to prostate MRI reporting in men on active surveillance for PCa. Each statement was scored for agreement on a 9-point scale by each panellist prior to a panel meeting. Each statement was discussed and rescored at the meeting. Measures of agreement and consensus were calculated for each statement. The most important statements, derived from both group discussion and scores of agreement and consensus, were used to create the Prostate Cancer Radiological Estimation of Change in Sequential Evaluation (PRECISE) checklist and case report form. Key recommendations include reporting the index lesion size using absolute values at baseline and at each subsequent MRI. Radiologists should assess the likelihood of true change over time (ie, change in size or change in lesion characteristics on one or more sequences) on a 1-5 scale. A checklist of items for reporting a cohort of men on active surveillance was developed. These items were developed based on expert consensus in many areas in which data are lacking, and they are expected to develop and change as evidence is accrued. 
The PRECISE recommendations are designed to facilitate the development of a robust evidence database for documenting changes in prostate MRI findings over time of men on active surveillance. If used, they will facilitate data collection to distinguish measurement error and natural variability in MRI appearances from true radiologic progression. Few published reports are available on how to use and interpret magnetic resonance imaging for men on active surveillance for prostate cancer. The PRECISE panel recommends that data should be collected in a standardised manner so that natural variation in the appearance and measurement of cancer over time can be distinguished from changes indicating significant tumour progression. Copyright © 2016 European Association of Urology. All rights reserved.
NASA Astrophysics Data System (ADS)
Girinoto; Sadik, Kusman; Indahwati
2017-03-01
The National Socio-Economic Survey samples are designed to produce estimates of parameters of planned domains (provinces and districts). Direct estimation for unplanned domains (sub-districts and villages) is limited in its ability to produce reliable estimates. One possible solution to this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative is the M-quantile regression approach to small area estimation, based on modeling area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It obtains outlier-robust estimation through an influence function of M-estimator type and does not require strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results suggest the framework may be preferable when the direct estimate shows no incidence of poverty at all in a small area.
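The M-quantile fit itself is an iteratively reweighted least squares with an asymmetrically weighted Huber influence function: the quantile weight q tilts the Huber weights so the fitted line tracks a chosen part of the conditional distribution. A minimal sketch on simulated data (tuning constants are the usual conventions, not values from the paper):

```python
import numpy as np

def mquantile_fit(X, y, q=0.5, c=1.345, iters=100):
    """M-quantile regression via IRLS: Huber weights made asymmetric by the
    quantile weight q. A minimal sketch of the estimator behind M-quantile
    small area models."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X1 @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)                  # Huber down-weighting
        w *= 2 * np.where(r > 0, q, 1 - q)                # quantile asymmetry
        WX = X1 * w[:, None]
        beta = np.linalg.solve(WX.T @ X1, WX.T @ y)
    return beta

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 300)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 300)
b50 = mquantile_fit(x, y, q=0.5)   # median-type fit, ~ [2.0, 0.5]
b90 = mquantile_fit(x, y, q=0.9)   # upper M-quantile: higher intercept
print(np.round(b50, 2))
```

In the small area setting, each area's estimated M-quantile coefficient is then plugged back in to predict the study variable for non-sampled units in that area.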
Report on the Consensus Workshop on Formaldehyde.
1984-01-01
The Consensus Workshop on Formaldehyde brought together scientists from academia, government, industry and public interest groups to address important toxicological questions concerning the health effects of formaldehyde. The participants in the workshop, the Executive Panel which coordinated the meeting, and the questions posed all were chosen through a broadly based nomination process in order to achieve as comprehensive a consensus as possible. The subcommittees considered the toxicological problems associated with formaldehyde in the areas of exposure, epidemiology, carcinogenicity/histology/genotoxicity, immunology/sensitization/irritation, structure activity/biochemistry/metabolism, reproduction/teratology, behavior/neurotoxicity/psychology and risk estimation. Some questions considered included the possible human carcinogenicity of formaldehyde, as well as other human health effects, and the interpretation of pathology induced by formaldehyde. These reports, plus introductory material on the procedures used in setting up the Consensus Workshop, are presented here. Additionally, there is a listing of the data base that was made available to the panel chairmen prior to the meeting and was readily accessible to the participants during their deliberations. Since the data base was computerized, it could also be searched for important terms. These materials were supplemented by information brought by the panelists. The workshop has defined the consensus concerning a number of major points in formaldehyde toxicology and has identified a number of major deficits in understanding which are important guides to future research. PMID:6525992
Rice, Andrew L; Butenhoff, Christopher L; Teama, Doaa G; Röger, Florian H; Khalil, M Aslam K; Rasmussen, Reinhold A
2016-09-27
Observations of atmospheric methane (CH4) since the late 1970s and measurements of CH4 trapped in ice and snow reveal a meteoric rise in concentration during much of the twentieth century. Since 1750, levels of atmospheric CH4 have more than doubled to current globally averaged concentration near 1,800 ppb. During the late 1980s and 1990s, the CH4 growth rate slowed substantially and was near or at zero between 1999 and 2006. There is no scientific consensus on the drivers of this slowdown. Here, we report measurements of the stable isotopic composition of atmospheric CH4 ((13)C/(12)C and D/H) from a rare air archive dating from 1977 to 1998. Together with more modern records of isotopic atmospheric CH4, we performed a time-dependent retrieval of methane fluxes spanning 25 y (1984-2009) using a 3D chemical transport model. This inversion results in a 24 [18, 27] Tg y(-1) CH4 increase in fugitive fossil fuel emissions since 1984 with most of this growth occurring after year 2000. This result is consistent with some bottom-up emissions inventories but not with recent estimates based on atmospheric ethane. In fact, when forced with decreasing emissions from fossil fuel sources our inversion estimates unreasonably high emissions in other sources. Further, the inversion estimates a decrease in biomass-burning emissions that could explain falling ethane abundance. A range of sensitivity tests suggests that these results are robust.
Consensus model for identification of novel PI3K inhibitors in large chemical library.
Liew, Chin Yee; Ma, Xiao Hua; Yap, Chun Wei
2010-02-01
Phosphoinositide 3-kinase (PI3K) inhibitors have treatment potential for cancer, diabetes, cardiovascular disease, chronic inflammation and asthma. A consensus model consisting of three base classifiers (AODE, kNN, and SVM) trained with 1,283 positive compounds (PI3K inhibitors), 16 negative compounds (PI3K non-inhibitors) and 64,078 generated putative negatives was developed for predicting compounds with PI3K inhibitory activity of IC50 ≤ 10 μM. The consensus model has an estimated false positive rate of 0.75%. Nine novel potential inhibitors were identified using the consensus model and several of these contain structural features that are consistent with those found to be important for PI3K inhibitory activities. An advantage of the current model is that it does not require knowledge of 3D structural information of the various PI3K isoforms, which is not readily available for all isoforms.
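The consensus idea itself is straightforward to sketch: combine the votes of several base classifiers and flag a compound only when they agree, trading recall for a lower false-positive rate. The stand-in decision rules below are hypothetical, not the trained AODE/kNN/SVM models of the paper:

```python
import numpy as np

def consensus_predict(classifiers, X, rule="unanimous"):
    """Combine base classifier votes; under the 'unanimous' rule a sample
    is flagged positive only when every base model agrees."""
    votes = np.stack([clf(X) for clf in classifiers])   # (n_clf, n_samples)
    if rule == "unanimous":
        return votes.all(axis=0)
    return votes.sum(axis=0) > len(classifiers) / 2     # simple majority

# Three hypothetical stand-ins for AODE / kNN / SVM decision functions:
clf_a = lambda X: X[:, 0] > 0.4
clf_b = lambda X: X[:, 1] > 0.5
clf_c = lambda X: X.sum(axis=1) > 1.0
X = np.array([[0.9, 0.8], [0.9, 0.1], [0.1, 0.2]])
print(consensus_predict([clf_a, clf_b, clf_c], X))  # [ True False False]
```

Requiring agreement suppresses the idiosyncratic errors of any single base model, which is how a consensus screen of a large chemical library keeps its false-positive rate low.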
Renewable Energy Power Generation Estimation Using Consensus Algorithm
NASA Astrophysics Data System (ADS)
Ahmad, Jehanzeb; Najm-ul-Islam, M.; Ahmed, Salman
2017-08-01
At the small consumer level, photovoltaic (PV) panel based grid-tied systems are the most common form of Distributed Energy Resources (DER). Unlike wind, which is suitable for only selected locations, PV panels can generate electricity almost anywhere. Pakistan is currently one of the most energy-deficient countries in the world. In order to mitigate this shortage, the Government has recently announced a policy of net-metering for residential consumers. After widespread adoption of DERs, one of the issues faced by load management centers will be obtaining an accurate estimate of the amount of electricity being injected into the grid at any given time through these DERs. This becomes a critical issue once the penetration of DERs increases beyond a certain limit. Grid stability and management of harmonics become important considerations where electricity is injected at the distribution level through solid-state controllers instead of rotating machinery. This paper presents a solution using graph-theoretic methods for the estimation of the total electricity being injected into the grid over a widespread geographical area. An agent-based consensus approach for distributed computation is used to provide an estimate under varying generation conditions.
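The graph-theoretic estimation step can be sketched with a standard average consensus iteration: each DER node repeatedly averages its value with its neighbours using Metropolis weights, and the agreed average times the (known) number of nodes gives every node the grid-wide injection total. A hypothetical 4-node feeder:

```python
import numpy as np

def estimate_total_generation(local_kw, adjacency, steps=200):
    """Agent-based estimate of the total power injected into the grid:
    average consensus with Metropolis weights, then scale by node count."""
    n = len(local_kw)
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))  # Metropolis weight
        W[i, i] = 1.0 - W[i].sum()          # W is symmetric, doubly stochastic
    x = np.asarray(local_kw, float)
    for _ in range(steps):
        x = W @ x                           # only neighbour-to-neighbour exchange
    return x * n                            # every node now holds ~ the total kW

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4 DER nodes on a line feeder
total = estimate_total_generation([3.0, 1.5, 2.5, 5.0], A)
print(np.round(total, 3))                   # all nodes agree on 12.0
```

No node ever needs the full network state or a central aggregator; each exchange is local, which is what makes the approach attractive for a geographically dispersed DER fleet.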
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernels. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for alternately estimating the partial map and recovering the latent sharp image. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
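The paper's partial deconvolution model is not reproduced here, but its core idea, restricting Wiener-style deconvolution to the Fourier entries where the estimated kernel is trusted, can be sketched as follows. The `mask` argument stands in for the paper's partial map; its construction from the kernel estimate, and the choice of passing the blurred observation through at untrusted bins, are assumptions of this sketch.

```python
import numpy as np

def partial_wiener_deconv(blurred, kernel, mask, nsr=1e-2):
    """Wiener deconvolution restricted to Fourier entries marked reliable.

    `mask` is a hypothetical 0/1 array over frequencies: 1 where the
    estimated kernel's Fourier entry is trusted, 0 elsewhere (a stand-in
    for the paper's 'partial map').
    """
    H = np.fft.fft2(kernel, s=blurred.shape)        # kernel spectrum, zero-padded
    G = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)    # regularized inverse filter
    # Untrusted bins pass the observation through unchanged, so kernel
    # errors at those frequencies cannot inject ringing.
    F = np.where(mask.astype(bool), wiener * G, G)
    return np.real(np.fft.ifft2(F))
```

With a fully trusted mask and a near-zero noise-to-signal ratio this reduces to ordinary inverse filtering.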
Toward robust estimation of the components of forest population change: simulation results
Francis A. Roesch
2014-01-01
This report presents the full simulation results of the work described in Roesch (2014), in which multiple levels of simulation were used to test the robustness of estimators for the components of forest change. In that study, a variety of spatial-temporal populations were created based on, but more variable than, an actual forest monitoring dataset, and then those...
Ward, Michael J.; Chang, Anna Marie; Pines, Jesse M.; Jouriles, Nick; Yealy, Donald M.
2016-01-01
The Consensus Conference on “Advancing Research in Emergency Department (ED) Operations and Its Impact on Patient Care,” hosted by The ED Operations Study Group (EDOSG), convened to craft a framework for future investigations in this important but underserved area. The EDOSG is a research consortium dedicated to promoting evidence based clinical practice in Emergency Medicine. The consensus process format was a modified version of the NIH Model for Consensus Conference Development. Recommendations provide an action plan for how to improve ED operations study design, create a facilitating research environment, identify data measures of value for process and outcomes research, and disseminate new knowledge in this area. Specifically, we called for eight key initiatives: 1) the development of universal measures for ED patient care processes; 2) attention to patient outcomes, in addition to process efficiency and best practice compliance; 3) the promotion of multi-site clinical operations studies to create more generalizable knowledge; 4) encouraging the use of mixed methods to understand the social community and human behavior factors that influence ED operations; 5) the creation of robust ED operations research registries to drive stronger evidence based research, 6) prioritizing key clinical questions with the input of patients, clinicians, medical leadership, emergency medicine organizations, payers, and other government stakeholders; 7) more consistently defining the functional components of the ED care system including observation units, fast tracks, waiting rooms, laboratories and radiology sub-units; and 8) maximizing multidisciplinary knowledge dissemination via emergency medicine, public health, general medicine, operations research and nontraditional publications. PMID:26014365
Yiadom, Maame Yaa A B; Ward, Michael J; Chang, Anna Marie; Pines, Jesse M; Jouriles, Nick; Yealy, Donald M
2015-06-01
The consensus conference on "Advancing Research in Emergency Department (ED) Operations and Its Impact on Patient Care," hosted by The ED Operations Study Group (EDOSG), convened to craft a framework for future investigations in this important but understudied area. The EDOSG is a research consortium dedicated to promoting evidence-based clinical practice in emergency medicine. The consensus process format was a modified version of the NIH Model for Consensus Conference Development. Recommendations provide an action plan for how to improve ED operations study design, create a facilitating research environment, identify data measures of value for process and outcomes research, and disseminate new knowledge in this area. Specifically, we call for eight key initiatives: 1) the development of universal measures for ED patient care processes; 2) attention to patient outcomes, in addition to process efficiency and best practice compliance; 3) the promotion of multisite clinical operations studies to create more generalizable knowledge; 4) encouraging the use of mixed methods to understand the social community and human behavior factors that influence ED operations; 5) the creation of robust ED operations research registries to drive stronger evidence-based research; 6) prioritizing key clinical questions with the input of patients, clinicians, medical leadership, emergency medicine organizations, payers, and other government stakeholders; 7) more consistently defining the functional components of the ED care system, including observation units, fast tracks, waiting rooms, laboratories, and radiology subunits; and 8) maximizing multidisciplinary knowledge dissemination via emergency medicine, public health, general medicine, operations research, and nontraditional publications. © 2015 by the Society for Academic Emergency Medicine.
Long-term association of economic inequality and mortality in adult Costa Ricans.
Modrek, Sepideh; Dow, William H; Rosero-Bixby, Luis
2012-01-01
Despite the large number of studies, mostly in developed economies, there is limited consensus on the health effects of inequality. Recently a related literature has examined the relationship between relative deprivation and health as a mechanism to explain the economic inequality and health relationship. This study evaluates the relationship between mortality and economic inequality, as measured by area-level Gini coefficients, as well as the relationship between mortality and relative deprivation, in the context of a middle-income country, Costa Rica. We followed a nationally representative prospective cohort of approximately 16,000 individuals aged 30 and over who were randomly selected from the 1984 census. These individuals were then linked to the Costa Rican National Death Registry until Dec. 31, 2007. Hazard models were used to estimate the relative risk of mortality for all-cause and cardiovascular disease mortality for two indicators: canton-level income inequality and relative deprivation based on asset ownership. Results indicate that there was an unexpectedly negative association between canton income inequality and mortality, but the relationship is not robust to the inclusion of canton fixed-effects. In contrast, we find a positive association between relative deprivation and mortality, which is robust to the inclusion of canton fixed-effects. Taken together, these results suggest that deprivation relative to those higher in a hierarchy is more detrimental to health than the overall dispersion of the hierarchy itself, within the Costa Rican context. Copyright © 2011 Elsevier Ltd. All rights reserved.
An emperor penguin population estimate: the first global, synoptic survey of a species from space.
Fretwell, Peter T; Larue, Michelle A; Morin, Paul; Kooyman, Gerald L; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J; Fleming, Andrew H; Porter, Claire; Trathan, Phil N
2012-01-01
Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites, giving a total of 46 emperor penguin breeding colonies. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ~238,000 breeding pairs (compared with the last previously published count of 135,000-175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ~595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research.
An Emperor Penguin Population Estimate: The First Global, Synoptic Survey of a Species from Space
Fretwell, Peter T.; LaRue, Michelle A.; Morin, Paul; Kooyman, Gerald L.; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J.; Fleming, Andrew H.; Porter, Claire; Trathan, Phil N.
2012-01-01
Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites giving a total number of emperor penguin breeding colonies of 46. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ∼238,000 breeding pairs (compared with the last previously published count of 135,000–175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ∼595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research. PMID:22514609
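The abstract does not specify which robust regression algorithm converted classified pixel areas into penguin counts at the eleven ground-truth sites; as an illustrative stand-in with similar outlier resistance, here is a Theil–Sen fit (median of pairwise slopes), which a single miscounted site cannot drag off the trend.

```python
import numpy as np

def theil_sen(x, y):
    """Median-of-pairwise-slopes line fit; resistant to outlying points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    m = np.median(slopes)          # robust slope
    b = np.median(y - m * x)       # robust intercept
    return m, b
```

In the survey's setting, `x` would be the classified penguin-pixel area per colony and `y` the ground-truth count.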
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper provides models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
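The DESA identification step and the full SARIMA machinery are beyond a short sketch, but the winsorize-then-fit idea used against additive outliers can be illustrated on a plain AR(1) model. Both the AR(1) stand-in and the 5% quantile cut-offs are assumptions of this sketch, not the paper's settings.

```python
import numpy as np

def winsorize(x, p=0.05):
    """Clip values beyond the p and 1-p empirical quantiles."""
    lo, hi = np.quantile(x, [p, 1 - p])
    return np.clip(x, lo, hi)

def fit_ar1(x):
    """Least-squares AR(1) coefficient for a (winsorized) series."""
    x = np.asarray(x, float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
```

Winsorizing first bounds the leverage any additive outlier (a single anomalous monthly case count, say) can exert on the least-squares estimate.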
Re-examination of the relationship between marine virus and microbial cell abundances.
Wigington, Charles H; Sonderegger, Derek; Brussaard, Corina P D; Buchan, Alison; Finke, Jan F; Fuhrman, Jed A; Lennon, Jay T; Middelboe, Mathias; Suttle, Curtis A; Stock, Charles; Wilson, William H; Wommack, K Eric; Wilhelm, Steven W; Weitz, Joshua S
2016-01-25
Marine viruses are critical drivers of ocean biogeochemistry, and their abundances vary spatiotemporally in the global oceans, with upper estimates exceeding 10^8 per ml. Over many years, a consensus has emerged that virus abundances are typically tenfold higher than microbial cell abundances. However, the true explanatory power of a linear relationship and its robustness across diverse ocean environments is unclear. Here, we compile 5,671 microbial cell and virus abundance estimates from 25 distinct marine surveys and find substantial variation in the virus-to-microbial cell ratio, in which a 10:1 model has either limited or no explanatory power. Instead, virus abundances are better described as nonlinear, power-law functions of microbial cell abundances. The fitted scaling exponents are typically less than 1, implying that the virus-to-microbial cell ratio decreases with microbial cell density, rather than remaining fixed. The observed scaling also implies that viral effect sizes derived from 'representative' abundances require substantial refinement to be extrapolated to regional or global scales.
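A power law V ≈ c·N^α of the kind the authors report is conventionally fit as a straight line in log–log space; a minimal version (the exact fitting procedure in the paper may differ):

```python
import numpy as np

def fit_power_law(microbes, viruses):
    """Fit log10 V = alpha * log10 N + log10 c by least squares.

    An exponent alpha < 1 means the virus-to-microbe ratio V/N falls
    as microbial density rises, rather than staying fixed at 10:1.
    """
    alpha, logc = np.polyfit(np.log10(microbes), np.log10(viruses), 1)
    return alpha, 10 ** logc
```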
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-03-09
This paper deals with joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated for capturing the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed for eliminating the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-01-01
This paper deals with joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated for capturing the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed for eliminating the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method. PMID:29522499
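The covariance-tensor construction is specific to the paper, but the HOSVD step it relies on is generic: unfold the tensor along each mode, take the left singular vectors, and form the core tensor. A minimal numpy sketch for a dense third-order tensor (not the paper's fourth-order covariance tensor):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`."""
    rest = [T.shape[i] for i in range(T.ndim) if i != mode]
    out = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(out, 0, mode)

def hosvd(T):
    """Higher-order SVD: one factor matrix per mode, plus the core."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
          for n in range(T.ndim)]
    core = T
    for n, U in enumerate(Us):
        core = mode_dot(core, U.T, n)   # project onto each mode's basis
    return core, Us
```

Multiplying the core back by each factor matrix reconstructs the tensor; subspace methods instead keep only the leading singular vectors per mode.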
Ebrahimkhani, Sadegh
2016-07-01
Wind power plants have nonlinear dynamics and contain many uncertainties such as unknown nonlinear disturbances and parameter uncertainties. Thus, it is a difficult task to design a robust reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of doubly fed induction generator (DFIG)-based wind energy conversion system. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional order uncertainty estimator. In the proposed method a continuous control strategy is developed to achieve the chattering free fractional order sliding-mode control, and also no knowledge of the uncertainties and disturbances or their bound is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov׳s stability theory. Simulation results in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection is one of the most important topics in photogrammetry: it recovers the position and attitude of the camera at the shooting point. However, in some cases the observed values used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equation. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
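The DLT resection model itself is not reproduced here, but the RANSAC loop that makes it robust is generic: fit a minimal random sample, count inliers, keep the model with the largest consensus set, refit on that set. A line-fitting sketch of that loop (the resection case swaps the two-point line fit for a minimal DLT solve):

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.1, seed=0):
    """RANSAC fit of y = m*x + b over an (n, 2) point array."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                       # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = np.abs(pts[:, 1] - (m * pts[:, 0] + b)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers         # largest consensus set so far
    x, y = pts[best_inliers, 0], pts[best_inliers, 1]
    return np.polyfit(x, y, 1)             # least-squares refit on inliers
```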
An open-source computational and data resource to analyze digital maps of immunopeptidomes
Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...
2015-07-08
We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Srivastava, Rishi; Singh, Mohar; Bajaj, Deepak; Parida, Swarup K.
2016-01-01
Development and large-scale genotyping of user-friendly informative genome/gene-derived InDel markers in natural and mapping populations is vital for accelerating genomics-assisted breeding applications of chickpea with minimal resource expenses. The present investigation employed a high-throughput whole genome next-generation resequencing strategy in low and high pod number parental accessions and homozygous individuals constituting the bulks from each of two inter-specific mapping populations [(Pusa 1103 × ILWC 46) and (Pusa 256 × ILWC 46)] to develop non-erroneous InDel markers at a genome-wide scale. Comparing these high-quality genomic sequences, 82,360 InDel markers with reference to kabuli genome and 13,891 InDel markers exhibiting differentiation between low and high pod number parental accessions and bulks of aforementioned mapping populations were developed. These informative markers were structurally and functionally annotated in diverse coding and non-coding sequence components of genome/genes of kabuli chickpea. The functional significance of regulatory and coding (frameshift and large-effect mutations) InDel markers for establishing marker-trait linkages through association/genetic mapping was apparent. The markers detected a greater amplification (97%) and intra-specific polymorphic potential (58–87%) among a diverse panel of cultivated desi, kabuli, and wild accessions even by using a simpler cost-efficient agarose gel-based assay implicating their utility in large-scale genetic analysis especially in domesticated chickpea with narrow genetic base. Two high-density inter-specific genetic linkage maps generated using aforesaid mapping populations were integrated to construct a consensus 1479 InDel markers-anchored high-resolution (inter-marker distance: 0.66 cM) genetic map for efficient molecular mapping of major QTLs governing pod number and seed yield per plant in chickpea. 
Utilizing these high-density genetic maps as anchors, three major genomic regions harboring each of pod number and seed yield robust QTLs (15–28% phenotypic variation explained) were identified on chromosomes 2, 4, and 6. The integration of genetic and physical maps at these QTLs mapped on chromosomes scaled-down the long major QTL intervals into high-resolution short pod number and seed yield robust QTL physical intervals (0.89–2.94 Mb) which were essentially got validated in multiple genetic backgrounds of two chickpea mapping populations. The genome-wide InDel markers including natural allelic variants and genomic loci/genes delineated at major six especially in one colocalized novel congruent robust pod number and seed yield robust QTLs mapped on a high-density consensus genetic map were found most promising in chickpea. These functionally relevant molecular tags can drive marker-assisted genetic enhancement to develop high-yielding cultivars with increased seed/pod number and yield in chickpea. PMID:27695461
Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics
NASA Astrophysics Data System (ADS)
Fruhnert, Michael
This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilizable. We show that, in continuous-time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus-achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if the systems are quadratically stabilizable. In this case, we provide a simple condition to obtain a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies.
First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
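As a minimal illustration of the linear consensus feedback such results build on (not the thesis's gain-design machinery), continuous-time single-integrator consensus xdot = -Lx can be simulated with Euler steps; for an undirected connected graph the states converge to the average of the initial conditions.

```python
import numpy as np

def simulate_consensus(L, x0, dt=0.01, steps=2000):
    """Euler-integrate xdot = -L x for graph Laplacian L.

    For an undirected connected graph, all states converge to the
    average of x0; the slowest nonzero Laplacian eigenvalue sets
    the convergence rate.
    """
    x = np.array(x0, float)
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x
```

For a directed graph with a spanning tree, agreement is still reached, but the consensus value is a weighted combination of the initial states rather than their plain average.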
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
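The paper's PD scheme lives on the unit sphere with LMI-designed gains; as a far simpler illustration of the PD idea alone, here is a planar loop driving a gaze angle toward a target bearing. The gains kp and kd are arbitrary stable choices for this sketch, not the paper's LMI-derived values.

```python
import numpy as np

def pd_gaze_step(theta, omega, target, kp=4.0, kd=3.0, dt=0.01):
    """One Euler step of a PD law steering gaze angle `theta` to `target`.

    The proportional term pulls toward the target bearing; the
    derivative term damps the angular velocity `omega`.
    """
    alpha = -kp * (theta - target) - kd * omega   # commanded angular accel.
    return theta + dt * omega, omega + dt * alpha
```

Iterating this step settles the gaze on the target without overshoot for sufficiently damped gain choices.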
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields an optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators are used to substitute for the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and real data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with existing robust LDRs.
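The MVV estimators themselves are not reproduced here, but the substitution pattern the paper describes, building Fisher's linear rule from whatever location and scatter estimators are plugged in, can be sketched. The defaults below are the classical (non-robust) pair; a robust pair such as MVV would be passed in their place.

```python
import numpy as np

def linear_rule(X1, X2,
                loc=lambda X: X.mean(axis=0),
                scatter=lambda X: np.cov(X.T)):
    """Fisher's linear discriminant rule with pluggable estimators.

    Swap `loc`/`scatter` for robust versions (e.g. the paper's MVV)
    to obtain a robust linear discriminant rule.
    """
    m1, m2 = loc(X1), loc(X2)
    n1, n2 = len(X1), len(X2)
    S = ((n1 - 1) * scatter(X1) + (n2 - 1) * scatter(X2)) / (n1 + n2 - 2)
    w = np.linalg.solve(S, m1 - m2)     # discriminant direction
    c = w @ (m1 + m2) / 2               # midpoint threshold
    return lambda x: 1 if x @ w > c else 2
```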
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
Robust geostatistical analysis of spatial data
NASA Astrophysics Data System (ADS)
Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.
2012-04-01
Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outlying observations may result from errors (e.g. in data transcription) or from local perturbations in the processes that are responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by emissions of local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation, the so-called external drift or trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation.
Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and unsampled locations and kriging variances. The method has been implemented in an R package. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of the Tarrawarra soil moisture data set [3].
NASA Astrophysics Data System (ADS)
Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe
2016-11-01
For monitoring of glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield, located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approaches for glacier monitoring are validated by both simulated SAR data and real SAR data from two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach compared to standard correlation-like methods. By the use of the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm achieves an accuracy better than 0.25 pixels when conducting matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis of large-scale glacier surface dynamics.
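The core of phase correlation is short. The following is a minimal numpy sketch that recovers an integer translation between two images from the peak of the normalized cross-power spectrum; the SVD-based robustification and RANSAC filtering described above are omitted:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of image b relative
    to image a from the peak of the normalized cross-power spectrum."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                      # keep only the phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
img = rng.random((64, 64))
moved = np.roll(img, (5, -3), axis=(0, 1))      # known shift (5, -3)
print(phase_correlation_shift(img, moved))      # recovers (5, -3)
```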
Robust Inference of Risks of Large Portfolios
Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron
2016-01-01
We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569
McCann, Liza J; Pilkington, Clarissa A; Huber, Adam M; Ravelli, Angelo; Appelbe, Duncan; Kirkham, Jamie J; Williamson, Paula R; Aggarwal, Amita; Christopher-Stine, Lisa; Constantin, Tamas; Feldman, Brian M; Lundberg, Ingrid; Maillard, Sue; Mathiesen, Pernille; Murphy, Ruth; Pachman, Lauren M; Reed, Ann M; Rider, Lisa G; van Royen-Kerkof, Annet; Russo, Ricardo; Spinty, Stefan; Wedderburn, Lucy R
2018-01-01
Objectives This study aimed to develop consensus on an internationally agreed dataset for juvenile dermatomyositis (JDM), designed for clinical use, to enhance collaborative research and allow integration of data between centres. Methods A prototype dataset was developed through a formal process that included analysing items within existing databases of patients with idiopathic inflammatory myopathies. This template was used to aid a structured multistage consensus process. Exploiting Delphi methodology, two web-based questionnaires were distributed to healthcare professionals caring for patients with JDM identified through email distribution lists of international paediatric rheumatology and myositis research groups. A separate questionnaire was sent to parents of children with JDM and patients with JDM, identified through established research networks and patient support groups. The results of these parallel processes informed a face-to-face nominal group consensus meeting of international myositis experts, tasked with defining the content of the dataset. This developed dataset was tested in routine clinical practice before review and finalisation. Results A dataset containing 123 items was formulated with an accompanying glossary. Demographic and diagnostic data are contained within form A collected at baseline visit only, disease activity measures are included within form B collected at every visit and disease damage items within form C collected at baseline and annual visits thereafter. Conclusions Through a robust international process, a consensus dataset for JDM has been formulated that can capture disease activity and damage over time. This dataset can be incorporated into national and international collaborative efforts, including existing clinical research databases. PMID:29084729
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
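As a simplified stand-in for the regional variance estimation applied to high-frequency components, the sketch below fuses two images block by block, keeping whichever source has the higher local variance; the lifting wavelet decomposition and robust PCA steps of the actual method are omitted:

```python
import numpy as np

def fuse_by_block_variance(a, b, bs=8):
    """Per-block fusion rule: keep the block with higher local variance,
    a crude proxy for local sharpness/information content."""
    out = a.copy()
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa = a[i:i + bs, j:j + bs]
            pb = b[i:i + bs, j:j + bs]
            if pb.var() > pa.var():
                out[i:i + bs, j:j + bs] = pb
    return out

rng = np.random.default_rng(2)
A = np.zeros((32, 32)); A[:, :16] = rng.random((32, 16))   # detailed left half
B = np.zeros((32, 32)); B[:, 16:] = rng.random((32, 16))   # detailed right half
F = fuse_by_block_variance(A, B)   # fused image keeps both detailed halves
```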
Kesselmeier, Miriam; Lorenzo Bermejo, Justo
2017-11-01
Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
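The down-weighting at the heart of Huber-based estimation is easy to state. A minimal sketch of the Huber weight function as used inside iteratively reweighted fitting (the study itself relies on the R package 'robustbase'; this is only the weight rule):

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber weights: observations with |standardized residual| <= k keep
    full weight 1; beyond k the weight decays as k/|r|, bounding each
    observation's influence on the fit."""
    r = np.abs(np.asarray(r, dtype=float))
    return k / np.maximum(r, k)

print(huber_weight([0.5, 2.0, 10.0]))   # large residuals are down-weighted
```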
Use of cultural consensus analysis to evaluate expert feedback of median safety.
Kim, Tae-Gyu; Donnell, Eric T; Lee, Dongmin
2008-07-01
Cultural consensus analysis is a statistical method that can be used to assess participant responses to survey questions. The technique concurrently estimates the knowledge of each survey participant and estimates the culturally correct answer to each question asked, based on the existence of consensus among survey participants. The main objectives of this paper are to present the cultural consensus methodology and apply it to a set of median design and safety survey data that were collected using the Delphi method. A total of 21 Delphi survey participants were asked to answer research questions related to cross-median crashes. It was found that the Delphi panel held concordant opinions with respect to the association of average daily traffic (ADT) and heavy vehicle percentage combination with the risk of cross-median crashes; the relative importance of additional factors, other than ADT, median width, and crash history, that may contribute to cross-median crashes; and the relative importance of geometric factors that may be associated with the likelihood of cross-median crashes. Therefore, the findings from the cultural consensus analysis indicate that the expert panel selected to participate in the Delphi survey shared a common knowledge pool relative to the association between median design and safety. There were, however, diverse opinions regarding median barrier type and its preferred placement location. The panel showed a higher level of knowledge regarding the association of geometric factors with the likelihood of cross-median crashes than on the other issues considered. The results of the cultural consensus analysis of the present median design and safety survey data could be used to design a focused field study of median safety.
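For binary questions, the informal cultural consensus model admits a compact sketch: after correcting the inter-respondent agreement matrix for guessing, it is approximately rank one in the respondent competences, and a competence-weighted vote estimates the culturally correct answer key. The synthetic example below (the competence level, panel size, and simple linear weighting are illustrative assumptions, not the paper's data) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(3)
n_resp, n_q, d_true = 8, 40, 0.8          # hypothetical competence 0.8
key = rng.integers(0, 2, n_q)             # true "culturally correct" answers
# Each respondent answers correctly with probability (1 + d)/2.
correct = rng.random((n_resp, n_q)) < (1 + d_true) / 2
answers = np.where(correct, key, 1 - key)

# Guessing-corrected match matrix: E[2*M - 1] = d_i * d_j for i != j.
M = (answers[:, None, :] == answers[None, :, :]).mean(-1)
D = 2 * M - 1
np.fill_diagonal(D, 0)

# Competences from the leading eigenvector of the near-rank-one matrix D.
vals, vecs = np.linalg.eigh(D)
v = np.abs(vecs[:, np.argmax(vals)])
d_hat = v * np.sqrt(np.max(vals))         # rescale so d_hat d_hat^T ~ D

# Competence-weighted majority vote recovers the answer key.
score = d_hat @ answers
key_hat = (score > d_hat.sum() / 2).astype(int)
```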
Distributed estimation for adaptive sensor selection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Hassan Hamid, Matasm M.
2014-05-01
Wireless sensor networks (WSNs) are usually deployed for monitoring, with detection and estimation tasks distributed across the sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
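The consensus-based estimation idea can be illustrated with the simplest linear averaging protocol: each node repeatedly replaces its value with a weighted average of its own and its neighbours' values, and all nodes converge to a common estimate. A minimal sketch on a hypothetical six-node ring (not the paper's extended-information-filter formulation):

```python
import numpy as np

# Each node starts from its own noisy local estimate.
x = np.array([3.0, 7.0, 1.0, 5.0, 9.0, 2.0])
n = x.size

# Doubly stochastic averaging matrix for a ring: equal weight 1/3 on
# self and the two neighbours, so the iteration preserves the mean.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

for _ in range(200):
    x = W @ x

print(x.round(3))   # every node ends near the network-wide mean 4.5
```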
On Consistency Test Method of Expert Opinion in Ecological Security Assessment
Gong, Zaiwu; Wang, Lihong
2017-01-01
Ecological security assessment is of great value for proactive design and early warning in human security management. In the comprehensive evaluation of regional ecological security with the participation of experts, the expert’s individual judgment level and ability and the consistency of the experts’ overall opinion have a very important influence on the evaluation result. This paper studies consistency and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. Finally, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments. PMID:28869570
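The additive-consistency construction behind such measures is short to sketch. Assuming the standard additive-transitivity property p_ij = p_ik + p_kj - 0.5, an optimal consistent approximation of an FPR and a distance-based consistency index can be written as:

```python
import numpy as np

def additive_consistent(P):
    """Additively consistent approximation of an FPR:
    p*_ij = (1/n) * sum_k (p_ik + p_kj) - 0.5."""
    row = P.mean(axis=1, keepdims=True)   # (1/n) sum_k p_ik
    col = P.mean(axis=0, keepdims=True)   # (1/n) sum_k p_kj
    return row + col - 0.5

def consistency_distance(P):
    """Mean absolute distance between an FPR and its consistent estimate;
    the consensus measure is the analogous distance for the group FPR."""
    return np.abs(P - additive_consistent(P)).mean()

# An FPR built from priority values u is perfectly additively consistent:
u = np.array([0.2, 0.1, -0.1, -0.2])
P = 0.5 + u[:, None] - u[None, :]
print(round(consistency_distance(P), 6))   # ~0 for a consistent matrix
```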
Devillers, J; Pandard, P; Richard, B
2013-01-01
Biodegradation is an important mechanism for eliminating xenobiotics by biotransforming them into simple organic and inorganic products. Faced with the ever growing number of chemicals available on the market, structure-biodegradation relationship (SBR) and quantitative structure-biodegradation relationship (QSBR) models are increasingly used as surrogates of the biodegradation tests. Such models have great potential for a quick and cheap estimation of the biodegradation potential of chemicals. The Estimation Programs Interface (EPI) Suite™ includes different models for predicting the potential aerobic biodegradability of organic substances. They are based on different endpoints, methodologies and/or statistical approaches. Among them, Biowin 5 and 6 appeared the most robust, being derived from the largest biodegradation database, with results obtained only from the Ministry of International Trade and Industry (MITI) test. The aim of this study was to assess the predictive performance of these two models on a set of 356 chemicals extracted from notification dossiers including compatible biodegradation data. Another set of molecules with no more than four carbon atoms, substituted by various heteroatoms and/or functional groups, was also included in the validation exercise. Comparisons were made with the predictions obtained with START (Structural Alerts for Reactivity in Toxtree). Biowin 5 and Biowin 6 gave satisfactory prediction results except for the prediction of readily degradable chemicals. A consensus model built with Biowin 1 mitigated this tendency.
Real-time stereo vision-based lane detection system
NASA Astrophysics Data System (ADS)
Fan, Rui; Dahnoun, Naim
2018-07-01
The detection of multiple curved lane markings on a non-flat road surface is still a challenging task for vehicular systems. Depth information can be used to enhance the robustness of lane detection systems. In this paper, a lane detection system is developed from our previous work, in which the estimation of the dense vanishing point is further improved using the disparity information. However, outliers in the least squares fitting severely affect the accuracy when estimating the vanishing point. Therefore, in this paper we use random sample consensus to update the parameters of the road model iteratively until the percentage of inliers exceeds our pre-set threshold. This significantly helps the system to cope with sudden changes in conditions. Furthermore, we propose a novel lane position validation approach which computes the energy of each possible solution and selects all satisfying lane positions for visualisation. The proposed system is implemented on a heterogeneous system which consists of an Intel Core i7-4720HQ CPU and an NVIDIA GTX 970M GPU. A processing speed of 143 fps has been achieved, which is over 38 times faster than our previous work. Moreover, in order to evaluate the detection precision, we tested 2495 frames including 5361 lanes. It is shown that the overall successful detection rate is increased from 98.7% to 99.5%.
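The RANSAC update described above follows the classic hypothesize-and-verify loop. A minimal sketch for robustly fitting a straight line in the presence of gross outliers (the paper fits a more elaborate road model; the iteration count, inlier tolerance, and data here are illustrative):

```python
import numpy as np

def ransac_line(x, y, iters=200, tol=0.1, rng=np.random.default_rng(4)):
    """Fit y = a*x + b by RANSAC: repeatedly fit a line to a random pair
    of points, keep the hypothesis with the largest inlier set, then
    refit by least squares on those inliers."""
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(x[best_inliers], y[best_inliers], 1)

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.03, 100)
y[::5] += rng.uniform(5, 10, 20)        # 20% gross outliers
a, b = ransac_line(x, y)                # slope/intercept near 2 and 1
```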
Visualizing value for money in public health interventions.
Leigh-Hunt, Nicholas; Cooper, Duncan; Furber, Andrew; Bevan, Gwyn; Gray, Muir
2018-01-23
The Socio-Technical Allocation of Resources (STAR) has been developed for value for money analysis of health services through stakeholder workshops. This article reports on its application to the prioritization of interventions within public health programmes. The STAR tool was used by identifying costs and service activity for interventions within commissioned public health programmes, with benefits estimated from the literature on economic evaluations in terms of costs per Quality-Adjusted Life Year (QALY); consensus on how these QALY values applied to local services was obtained with local commissioners. Local cost-effectiveness estimates could be made for some interventions. Methodological issues arose from gaps in the evidence base for other interventions, the inability to closely match some performance monitoring data with interventions, and the disparate time horizons of published QALY data. Practical adjustments for these issues included using population prevalences and utility states where intervention-specific evidence was lacking, and subdivision of large contracts into specific intervention costs using staffing ratios. The STAR approach proved useful in informing commissioning decisions and understanding the relative value of local public health interventions. Further work is needed to improve the robustness of the process and develop a visualization tool for use by public health departments. © The Author(s) 2018. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Peptide Array X-Linking (PAX): A New Peptide-Protein Identification Approach
Okada, Hirokazu; Uezu, Akiyoshi; Soderblom, Erik J.; Moseley, M. Arthur; Gertler, Frank B.; Soderling, Scott H.
2012-01-01
Many protein interaction domains bind short peptides based on canonical sequence consensus motifs. Here we report the development of a peptide array-based proteomics tool to identify proteins directly interacting with ligand peptides from cell lysates. Array-formatted bait peptides containing an amino acid-derived cross-linker are photo-induced to crosslink with interacting proteins from lysates of interest. Indirect associations are removed by high stringency washes under denaturing conditions. Covalently trapped proteins are subsequently identified by LC-MS/MS and screened by cluster analysis and domain scanning. We apply this methodology to peptides with different proline-containing consensus sequences and show successful identifications from brain lysates of known and novel proteins containing polyproline motif-binding domains such as EH, EVH1, SH3, WW domains. These results suggest the capacity of arrayed peptide ligands to capture and subsequently identify proteins by mass spectrometry is relatively broad and robust. Additionally, the approach is rapid and applicable to cell or tissue fractions from any source, making the approach a flexible tool for initial protein-protein interaction discovery. PMID:22606326
Dereymaeker, Anneleen; Ansari, Amir H; Jansen, Katrien; Cherian, Perumpillichira J; Vervisch, Jan; Govaert, Paul; De Wispelaere, Leen; Dielman, Charlotte; Matic, Vladimir; Dorado, Alexander Caicedo; De Vos, Maarten; Van Huffel, Sabine; Naulaers, Gunnar
2017-09-01
To assess interrater agreement based on majority voting in visual scoring of neonatal seizures. An online platform was designed based on a multicentre seizure EEG-database. Consensus decision based on 'majority voting' and interrater agreement was estimated using Fleiss' Kappa. The influences of different factors on agreement were determined. A total of 1919 events extracted from 280 h of EEG of 71 neonates were reviewed by 4 raters. Majority voting was applied to assign a seizure/non-seizure classification. 44% of events were classified with high, 36% with moderate, and 20% with poor agreement, resulting in a Kappa value of 0.39. 68% of events were labelled as seizures, and in 46%, all raters were convinced about electrographic seizures. The most common seizure duration was <30 s. Raters agreed best for seizures lasting 60-120 s. There was a significant difference in electrographic characteristics of seizures versus dubious events, with seizures having longer duration, higher power and amplitude. There is a wide variability in identifying rhythmic ictal and non-ictal EEG events, and only the most robust ictal patterns are consistently agreed upon. Database composition and electrographic characteristics are important factors that influence interrater agreement. The use of well-described databases and input of different experts will improve neonatal EEG interpretation and help to develop uniform seizure definitions, useful for evidence-based studies of seizure recognition and management. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
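Fleiss' kappa, the agreement statistic used here, is straightforward to compute from an items-by-categories table of rating counts. A minimal sketch with a small made-up seizure/non-seizure table (4 raters, 5 events; the numbers are illustrative, not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) table of rating counts,
    assuming the same number of raters per item."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per item
    p_j = counts.sum(axis=0) / counts.sum()      # overall category shares
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, Pe_bar = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - Pe_bar) / (1 - Pe_bar)

# Counts of (seizure, non-seizure) labels per event from 4 raters:
table = [[4, 0], [0, 4], [3, 1], [2, 2], [4, 0]]
print(round(fleiss_kappa(table), 3))   # moderate agreement
```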
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activities, which are invisible to human eyes but can be captured by digital cameras. Several classes of approaches have been proposed, based on signal processing and machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
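The final HR-computation stage of such pipelines often reduces to locating the dominant spectral peak of the extracted BVP signal within a plausible heart-rate band. A minimal sketch on a synthetic BVP trace (the sampling rate, noise level, and band limits are illustrative assumptions):

```python
import numpy as np

fs, dur, hr_true = 30.0, 20.0, 72.0      # 30 fps camera, 20 s window
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(6)
# Synthetic BVP trace: cardiac sinusoid at 1.2 Hz buried in noise.
bvp = np.sin(2 * np.pi * (hr_true / 60) * t) + 0.5 * rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(bvp - bvp.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 4.0)     # plausible HR range 42-240 bpm
hr_est = 60 * freqs[band][np.argmax(spec[band])]
print(hr_est)                            # close to the true 72 bpm
```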
Systematic review and consensus guidelines for environmental sampling of Burkholderia pseudomallei.
Limmathurotsakul, Direk; Dance, David A B; Wuthiekanun, Vanaporn; Kaestli, Mirjam; Mayo, Mark; Warner, Jeffrey; Wagner, David M; Tuanyok, Apichai; Wertheim, Heiman; Yoke Cheng, Tan; Mukhopadhyay, Chiranjay; Puthucheary, Savithiri; Day, Nicholas P J; Steinmetz, Ivo; Currie, Bart J; Peacock, Sharon J
2013-01-01
Burkholderia pseudomallei, a Tier 1 Select Agent and the cause of melioidosis, is a Gram-negative bacillus present in the environment in many tropical countries. Defining the global pattern of B. pseudomallei distribution underpins efforts to prevent infection, and is dependent upon robust environmental sampling methodology. Our objective was to review the literature on the detection of environmental B. pseudomallei, update the risk map for melioidosis, and propose international consensus guidelines for soil sampling. An international working party (Detection of Environmental Burkholderia pseudomallei Working Party (DEBWorP)) was formed during the VIth World Melioidosis Congress in 2010. PubMed (January 1912 to December 2011) was searched using the following MeSH terms: pseudomallei or melioidosis. Bibliographies were hand-searched for secondary references. The reported geographical distribution of B. pseudomallei in the environment was mapped and categorized as definite, probable, or possible. The methodology used for detecting environmental B. pseudomallei was extracted and collated. We found that global coverage was patchy, with a lack of studies in many areas where melioidosis is suspected to occur. The sampling strategies and bacterial identification methods used were highly variable, and not all were robust. We developed consensus guidelines with the goals of reducing the probability of false-negative results, and the provision of affordable and 'low-tech' methodology that is applicable in both developed and developing countries. The proposed consensus guidelines provide the basis for the development of an accurate and comprehensive global map of environmental B. pseudomallei.
PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations
Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri
2014-01-01
Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicities, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier PredictSNP, resulting in significantly improved prediction performance while returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
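The consensus idea itself is simple to sketch: combine the binary outputs of several tools by (optionally weighted) majority vote. The example below is a generic illustration, not PredictSNP's actual weighting scheme:

```python
import numpy as np

def consensus_predict(predictions, weights=None):
    """Weighted majority vote over binary tool outputs
    (+1 deleterious / -1 neutral); weights could reflect each
    tool's observed accuracy on a benchmark."""
    P = np.asarray(predictions, dtype=float)           # (tools x variants)
    w = np.ones(P.shape[0]) if weights is None else np.asarray(weights)
    return np.sign(w @ P)

votes = np.array([[+1, -1, +1],     # tool 1
                  [+1, +1, -1],     # tool 2
                  [+1, -1, -1]])    # tool 3
print(consensus_predict(votes))     # majority call per variant
```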
Effects of an Environmental Education Course on Consensus Estimates for Proenvironmental Intentions
ERIC Educational Resources Information Center
Hovardas, Tasos; Korfiatis, Konstantinos
2012-01-01
An environmental education intervention in a university conservation-related course was designed to decrease students' errors in consensus estimates for proenvironmental intentions, that is, their errors in guessing their classmates' proenvironmental intentions. Before and after the course, the authors measured two intentions regarding willingness…
Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy.
Yin, Xiuxing; Pan, Li
2018-01-01
A direct adaptive robust tracking control is proposed for trajectory tracking of a 6 DOF industrial robot in the presence of parametric uncertainties, external disturbances and uncertain nonlinearities. The controller is designed based on the dynamic characteristics in the working space of the end-effector of the 6 DOF robot. The controller includes a robust control term and a model compensation term that is developed directly based on the input reference or desired motion trajectory. A projection-type parametric adaptation law is also designed to compensate for parametric estimation errors in the adaptive robust control. The feasibility and effectiveness of the proposed direct adaptive robust control law and the associated projection-type parametric adaptation law have been comparatively evaluated on two 6 DOF industrial robots. The test results demonstrate that the proposed control maintains the desired trajectory tracking even in the presence of large parametric uncertainties and external disturbances, as compared with a PD controller and a nonlinear controller. The parametric estimates also eventually converge to the real values along with the convergence of tracking errors, which further validates the effectiveness of the proposed parametric adaptation law. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data
Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Azevedo Coste, Christine
2015-01-01
This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject’s foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15% under the various walking conditions. PMID:26703622
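The double-integration-with-drift-cancellation idea can be sketched compactly: integrate acceleration to velocity, remove the linear drift by forcing zero velocity at both stance phases (a ZUPT-style correction), then integrate again to obtain clearance. The synthetic one-step example below (sinusoidal motion plus a constant accelerometer bias) is an illustration of the principle, not the paper's full calibration pipeline:

```python
import numpy as np

def cumtrapz0(y, dt):
    """Cumulative trapezoidal integral starting at zero."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2)))

T = 1.0                                   # duration of one swing phase (s)
t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]
A = 0.5
# True vertical velocity A*sin(2*pi*t/T) is zero at both stance phases;
# the measured acceleration carries a constant 0.2 m/s^2 sensor bias.
acc = A * (2 * np.pi / T) * np.cos(2 * np.pi * t / T) + 0.2

v = cumtrapz0(acc, dt)
v -= t / T * v[-1]            # drift cancellation: enforce v=0 at both ends
height = cumtrapz0(v, dt)
print(round(height.max(), 3)) # analytic peak clearance is A*T/pi ~ 0.159 m
```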
A robust Bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
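For reference, the classical (non-robust, non-Bayesian) sample version of Lin's CCC that the proposed method robustifies can be sketched in a few lines:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (sample version):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(x, x))            # perfect agreement gives 1.0
print(ccc(x, x + 1.0))      # a constant offset lowers concordance
```

Unlike plain correlation, the location term in the denominator penalizes systematic bias between the two measurements, which is the property that makes CCC an agreement index.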
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon-transform-based profile variance peak analysis approach. The same algorithm was applied to both high-resolution satellite imagery and wide-area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based visual servoing approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution ambiguity of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
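The incremental move "subject to the joint speed limits" can be illustrated with a single saturated update step; this is a generic sketch of the strategy the abstract describes, with all names and values illustrative.

```python
import numpy as np

def incremental_step(q_current, q_desired, qdot_max, dt):
    """Move joints toward the inverse-kinematics-desired configuration,
    saturating each joint's increment at its speed limit over one
    control period dt (illustrative, not the paper's controller)."""
    dq = np.clip(q_desired - q_current, -qdot_max * dt, qdot_max * dt)
    return q_current + dq
```

Repeating this step each control cycle, with `q_desired` refreshed from the latest pose estimate, yields the incremental tracking behavior without ever requesting a joint motion faster than the limits allow.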
Exploring International Views on Key Concepts for Mass-gathering Health through a Delphi Process.
Steenkamp, Malinda; Hutton, Alison E; Ranse, Jamie C; Lund, Adam; Turris, Sheila A; Bowles, Ron; Arbuthnott, Katherine; Arbon, Paul A
2016-08-01
Introduction The science underpinning mass-gathering health (MGH) is developing rapidly. However, MGH terminology and concepts are not yet well defined or used consistently. These variations can complicate comparisons across settings. There is, therefore, a need to develop consensus and standardize concepts and data points to support the development of a robust MGH evidence-base for governments, event planners, responders, and researchers. This project explored the views and sought consensus of international MGH experts on previously published concepts around MGH to inform the development of a transnational minimum data set (MDS) with an accompanying data dictionary (DD). Report A two-round Delphi process was undertaken involving volunteers from the World Health Organization (WHO) Virtual Interdisciplinary Advisory Group (VIAG) on Mass Gatherings (MGs) and the MG section of the World Association for Disaster and Emergency Medicine (WADEM). The first online survey tested agreement on six key concepts: (1) using the term "MG HEALTH;" (2) purposes of the proposed MDS and DD; (3) event phases; (4) two MG population models; (5) a MGH conceptual diagram; and (6) a data matrix for organizing MGH data elements. Consensus was defined as ≥80% agreement. Round 2 presented five refined MGH principles based on Round 1 input that was analyzed using descriptive statistics and content analysis. Thirty-eight participants started Round 1 with 36 completing the survey and 24 (65% of 36) completing Round 2. Agreement was reached on: the term "MGH" (n=35/38; 92%); the stated purposes for the MDS (n=38/38; 100%); the two MG population models (n=31/36; 86% and n=30/36; 83%, respectively); and the event phases (n=34/36; 94%). Consensus was not achieved on the overall conceptual MGH diagram (n=25/37; 67%) and the proposed matrix to organize data elements (n=28/37; 77%). In Round 2, agreement was reached on all the proposed principles and revisions, except on the MGH diagram (n=18/24; 75%). 
Discussion/Conclusions Event health stakeholders require sound data upon which to build a robust MGH evidence-base. The move towards standardization of data points and/or reporting items of interest will strengthen the development of such an evidence-base from which governments, researchers, clinicians, and event planners could benefit. There is substantial agreement on some broad concepts underlying MGH amongst an international group of MG experts. Refinement is needed regarding an overall conceptual diagram and proposed matrix for organizing data elements. Steenkamp M , Hutton AE , Ranse JC , Lund A , Turris SA , Bowles R , Arbuthnott K , Arbon PA . Exploring international views on key concepts for mass-gathering health through a Delphi process. Prehosp Disaster Med. 2016;31(4):443-453.
A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.
Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping
2017-01-30
Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State-of-the-art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when samples are drawn from large and heterogeneous populations. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach which we call unambiguous components. We use this to estimate the image component with a constrained variability, which is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as the sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
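The selection behavior of LASSO versus elastic net that the abstract compares against can be illustrated in the textbook orthonormal-design case, where both have closed forms; this is a standard simplification, not the paper's unambiguous-components framework.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_orthonormal(beta_ols, lam):
    """For an orthonormal design, the LASSO solution is the
    soft-thresholded OLS estimate (small coefficients zeroed out)."""
    return soft_threshold(beta_ols, lam)

def enet_orthonormal(beta_ols, lam1, lam2):
    """Elastic net adds ridge shrinkage on top of soft-thresholding,
    so the selected support can coincide while magnitudes differ."""
    return soft_threshold(beta_ols, lam1) / (1.0 + lam2)
```

Both estimators zero out the same weak coefficients here, which is the closed-form intuition behind the partial overlap in selected brain regions reported above.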
Livezey, B. C.
1998-01-01
The order Gruiformes, for which even familial composition remains controversial, is perhaps the least well understood avian order from a phylogenetic perspective. The history of the systematics of the order is presented, and the ecological and biogeographic characteristics of its members are summarized. Using cladistic techniques, phylogenetic relationships among fossil and modern genera of the Gruiformes were estimated based on 381 primarily osteological characters; relationships among modern species of Grues (Psophiidae, Aramidae, Gruidae, Heliornithidae and Rallidae) were assessed based on these characters augmented by 189 characters of the definitive integument. A strict consensus tree for 20,000 shortest trees compiled for the matrix of gruiform genera (length = 967, CI = 0.517) revealed a number of nodes common to the solution set, many of which were robust to bootstrapping and had substantial support (Bremer) indices. Robust nodes included those supporting: a sister relationship between the Pedionomidae and Turnicidae; monophyly of the Gruiformes exclusive of the Pedionomidae and Turnicidae; a sister relationship between the Cariamidae and Phorusrhacoidea; a sister relationship between a clade comprising Eurypyga and Messelornis and one comprising Rhynochetos and Aptornis; monophyly of the Grues (Psophiidae, Aramidae, Gruidae, Heliornithidae and Rallidae); monophyly of a clade (Gruoidea) comprising (in order of increasingly close relationship) Psophia, Aramus, Balearica and other Gruidae, with monophyly of each member in this series confirmed; a sister relationship between the Heliornithidae and Rallidae; and monophyly of the Rallidae exclusive of Himantornis. Autapomorphic divergence was comparatively high for Pedionomus, Eurypyga, Psophia, Himantornis and Fulica; extreme autapomorphy, much of which is unique for the order, characterized the extinct, flightless Aptornis. 
In the species-level analysis of modern Grues, special efforts were made to limit the analytical impacts of homoplasy related to flightlessness in a number of rallid lineages. A strict consensus tree of 20,000 shortest trees compiled (length = 1232, CI = 0.463) confirmed the interfamilial relationships resolved in the ordinal analysis and established a number of other, variably supported groups within the Rallidae. Groupings within the Rallidae included: monophyly of Rallidae exclusive of Himantornis and a clade comprising Porphyrio (including Notornis) and Porphyrula; a poorly resolved, basal group of genera including Gymnocrex, Habroptila, Eulabeornis, Aramides, Canirallus and Mentocrex; an intermediate grade comprising Anurolimnas, Amaurolimnas, and Rougetius; monophyly of two major subdivisions of remaining rallids, one comprising Rallina (paraphyletic), Rallicula, and Sarothrura, and the other comprising the apparently paraphyletic 'long-billed' rails (e.g. Pardirallus, Cyanolimnas, Rallus, Gallirallus and Cabalus) and a variably resolved clade comprising 'crakes' (e.g. Atlantisia, Laterallus and Porzana), waterhens (Amaurornis), moorhens (Gallinula and allied genera) and coots (Fulica). Relationships among 'crakes' remain poorly resolved; Laterallus may be paraphyletic, and Porzana is evidently polyphyletic and poses substantial challenges for reconciliation with current taxonomy. Relationships among the species of waterhens, moorhens and coots, however, were comparatively well resolved, and exhaustive, fine-scale analyses of several genera (Grus, Porphyrio, Aramides, Rallus, Laterallus and Fulica) and species complexes (Porphyrio porphyrio-group, Gallirallus philippensis-group and Fulica americana-group) revealed additional topological likelihoods. Many nodes shared by a majority of the shortest trees under equal weighting were common to all shortest trees found following one or two iterations of successive weighting of characters. 
Provisional placements of selected subfossil rallids (e.g. Diaphorapteryx, Aphanapteryx and Capellirallus ) were based on separate heuristic searches using the strict consensus tree for modern rallids as a backbone constraint. These analyses were considered with respect to assessments of robustness, homoplasy related to flightlessness, challenges and importance of fossils in cladistic analysis, previously published studies and biogeography, and an annotated phylogenetic classification of the Gruiformes is proposed.
Optimal designs based on the maximum quasi-likelihood estimator
Shen, Gang; Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359
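The 4-parameter logistic (4PL) model used as the illustrative application above can be written down directly; the parameter naming below follows one common convention (upper asymptote a, slope b, inflection point c, lower asymptote d) and is ours, not necessarily the paper's.

```python
def four_pl(x, a, b, c, d):
    """4-parameter logistic (4PL) response curve:
    f(x) = d + (a - d) / (1 + (x / c)^b),
    with lower asymptote d, upper asymptote a, inflection at x = c,
    and slope parameter b. Assumes x > 0 and c > 0."""
    return d + (a - d) / (1.0 + (x / c) ** b)
```

At the inflection point x = c the response is exactly halfway between the two asymptotes, which is the property locally optimal designs for the 4PL typically exploit when placing design points.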
Tree allometry and improved estimation of carbon stocks and balance in tropical forests.
Chave, J; Andalo, C; Brown, S; Cairns, M A; Chambers, J Q; Eamus, D; Fölster, H; Fromard, F; Higuchi, N; Kira, T; Lescure, J-P; Nelson, B W; Ogawa, H; Puig, H; Riéra, B; Yamakura, T
2005-08-01
Tropical forests hold large stores of carbon, yet uncertainty remains regarding their quantitative contribution to the global carbon cycle. One approach to quantifying carbon biomass stores consists in inferring changes from long-term forest inventory plots. Regression models are used to convert inventory data into an estimate of aboveground biomass (AGB). We provide a critical reassessment of the quality and the robustness of these models across tropical forest types, using a large dataset of 2,410 trees ≥5 cm in diameter, directly harvested in 27 study sites across the tropics. Proportional relationships between aboveground biomass and the product of wood density, trunk cross-sectional area, and total height are constructed. We also develop a regression model involving wood density and stem diameter only. Our models were tested for secondary and old-growth forests, for dry, moist and wet forests, for lowland and montane forests, and for mangrove forests. The most important predictors of AGB of a tree were, in decreasing order of importance, its trunk diameter, wood specific gravity, total height, and forest type (dry, moist, or wet). Overestimates prevailed, giving a bias of 0.5-6.5% when errors were averaged across all stands. Our regression models can be used reliably to predict aboveground tree biomass across a broad range of tropical forests. Because they are based on an unprecedented dataset, these models should improve the quality of tropical biomass estimates, and bring consensus about the contribution of the tropical forest biome and tropical deforestation to the global carbon cycle.
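The proportional form AGB ∝ ρD²H described above can be sketched numerically. The 0.0509 coefficient below is the widely cited moist-forest fit from this family of models; treat it here as illustrative rather than as the definitive published equation for any particular forest type.

```python
def agb_moist(rho, dbh_cm, height_m):
    """Aboveground biomass (kg) of a tree from wood density rho (g/cm^3),
    trunk diameter at breast height (cm) and total height (m), using the
    proportional model AGB = k * rho * D^2 * H. The coefficient
    k = 0.0509 is shown for illustration (moist-forest fit)."""
    return 0.0509 * rho * dbh_cm ** 2 * height_m
```

For example, a tree with wood density 0.6 g/cm³, 20 cm diameter and 15 m height comes out at roughly 183 kg of aboveground biomass under this coefficient.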
Stoner, Charlotte R; Stansfeld, Jacki; Orrell, Martin; Spector, Aimee
2017-01-01
Positive psychology is gaining credence within dementia research but currently there is a lack of outcome measures within this area developed specifically for people with dementia. Authors have begun adopting positive psychology measures developed with other populations but there is no consensus around which are more appropriate or psychometrically robust. A systematic search identified measures used between 1998 and 2017 and an appraisal of the development procedure was undertaken using standardised criteria enabling the awarding of scores based on reporting of psychometric information. Twelve measures within the constructs of identity, hope, religiosity/spirituality, life valuation, self-efficacy, community and wellbeing were identified as being used within 17 dementia studies. Development procedures were variable and scores on development criterion reflected this variability. Of the measures included, the Herth Hope Index, Systems of Belief Inventory and Psychological Wellbeing Scale appeared to be the most robustly developed and appropriate for people with dementia.
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and a time-to-event data. Ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on ordinal partial mixed model and an accelerated failure time model is used, to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. Monte Carlo expectation maximization method-based algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, a data on muscular dystrophy among children is used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Collective Behaviors of Mobile Robots Beyond the Nearest Neighbor Rules With Switching Topology.
Ning, Boda; Han, Qing-Long; Zuo, Zongyu; Jin, Jiong; Zheng, Jinchuan
2018-05-01
This paper is concerned with the collective behaviors of robots beyond the nearest neighbor rules, i.e., dispersion and flocking, when robots interact with others by applying an acute angle test (AAT)-based interaction rule. Different from a conventional nearest neighbor rule or its variations, the AAT-based interaction rule allows interactions with some far-neighbors and excludes unnecessary nearest neighbors. The resulting dispersion and flocking hold the advantages of scalability, connectivity, robustness, and effective area coverage. For the dispersion, a spring-like controller is proposed to achieve collision-free coordination. With switching topology, a new fixed-time consensus-based energy function is developed to guarantee the system stability. An upper bound of settling time for energy consensus is obtained, and a uniform time interval is accordingly set so that energy distribution is conducted in a fair manner. For the flocking, based on a class of generalized potential functions taking nonsmooth switching into account, a new controller is proposed to ensure that the same velocity for all robots is eventually reached. A co-optimizing problem is further investigated to accomplish additional tasks, such as enhancing communication performance, while maintaining the collective behaviors of mobile robots. Simulation results are presented to show the effectiveness of the theoretical results.
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
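The two-sample Hodges-Lehmann estimator at the core of the step-trend method can be sketched as below; this is the plain, non-seasonal version, whereas the paper's seasonal variant applies the same idea within seasons and combines the results.

```python
import numpy as np

def hodges_lehmann_step(before, after):
    """Two-sample Hodges-Lehmann estimate of a step change: the median
    of all pairwise differences (after_j - before_i). A robust
    alternative to the difference of sample means for water-quality
    records containing outliers."""
    diffs = [a - b for b in before for a in after]
    return float(np.median(diffs))
```

Because the estimate is a median over all cross-period differences, a single wild value in either period barely moves it, which is the robustness property the paper documents against moment-based estimators.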
Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations
NASA Astrophysics Data System (ADS)
Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.
2015-08-01
This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (2013, Bulletin of the Seismological Society of America, 103), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for the average shear-wave velocity in the uppermost 30 m (Vs30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. 
Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
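Inverting a GMPE for magnitude, as described above, can be illustrated with a toy prediction equation; the functional form and the coefficients c0, c1, c2 below are placeholders for illustration, not the GMPEs fitted in the study.

```python
import numpy as np

def magnitude_from_pgv(log10_pgv_obs, dist_km, c0=-4.0, c1=1.0, c2=1.5):
    """Invert a toy GMPE of the form
        log10(PGV) = c0 + c1*M - c2*log10(R)
    for magnitude M at each station, then average the per-station
    estimates (coefficients are illustrative placeholders)."""
    log10_pgv_obs = np.asarray(log10_pgv_obs, float)
    m_each = (log10_pgv_obs + c2 * np.log10(np.asarray(dist_km, float)) - c0) / c1
    return float(np.mean(m_each))
```

Because magnitude enters the GMPE linearly, the per-station inversion never saturates: larger observed amplitudes at a given distance always map to larger magnitude estimates, consistent with the non-saturating behavior reported above.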
Asea, Godfrey; Vivek, Bindiganavile S; Bigirwa, George; Lipps, Patrick E; Pratt, Richard C
2009-05-01
Maize production in sub-Saharan Africa incurs serious losses to epiphytotics of foliar diseases. Quantitative trait loci conditioning partial resistance (rQTL) to infection by causal agents of gray leaf spot (GLS), northern corn leaf blight (NCLB), and maize streak have been reported. Our objectives were to identify simple-sequence repeat (SSR) molecular markers linked to consensus rQTL and one recently identified rQTL associated with GLS, and to determine their suitability as tools for selection of improved host resistance. We conducted evaluations of disease severity phenotypes in separate field nurseries, each containing 410 F2:3 families derived from a cross between maize inbred CML202 (NCLB and maize streak resistant) and VP31 (a GLS-resistant breeding line) that possess complementary rQTL. F2:3 families were selected for resistance based on genotypic (SSR marker), phenotypic, or combined data and the selected F3:4 families were reevaluated. Phenotypic values associated with SSR markers for consensus rQTL in bins 4.08 for GLS, 5.04 for NCLB, and 1.04 for maize streak significantly reduced disease severity in both generations based on single-factor analysis of variance and marker-interval analysis. These results were consistent with the presence of homozygous resistant parent alleles, except in bin 8.06, where markers were contributed by the NCLB-susceptible parent. Only one marker associated with resistance could be confirmed in bins 2.09 (GLS) and 3.06 (NCLB), illustrating the need for more robust rQTL discovery, fine-mapping, and validation prior to undertaking marker-based selection.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data-transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. 
The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data-distributions themselves.
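Two of the simpler robust estimators in ROBUST's repertoire, the symmetrically trimmed mean and the median absolute deviation, can be sketched as follows; these are generic textbook implementations, not the program's own code.

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Symmetrically trimmed mean: drop a proportion `prop` of the data
    in each tail before averaging (a robust L-estimate of location)."""
    x = np.sort(np.asarray(x, float))
    k = int(prop * x.size)
    return float(x[k:x.size - k].mean())

def mad(x):
    """Median absolute deviation from the median, a robust scale
    estimator unaffected by a small fraction of wild values."""
    x = np.asarray(x, float)
    return float(np.median(np.abs(x - np.median(x))))
```

A single gross outlier shifts the ordinary mean arbitrarily far, while both estimators above stay put, which is exactly the "mutual consistency or inconsistency" signal ROBUST uses to flag suspect data.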
McCann, Liza J; Pilkington, Clarissa A; Huber, Adam M; Ravelli, Angelo; Appelbe, Duncan; Kirkham, Jamie J; Williamson, Paula R; Aggarwal, Amita; Christopher-Stine, Lisa; Constantin, Tamas; Feldman, Brian M; Lundberg, Ingrid; Maillard, Sue; Mathiesen, Pernille; Murphy, Ruth; Pachman, Lauren M; Reed, Ann M; Rider, Lisa G; van Royen-Kerkof, Annet; Russo, Ricardo; Spinty, Stefan; Wedderburn, Lucy R; Beresford, Michael W
2018-02-01
This study aimed to develop consensus on an internationally agreed dataset for juvenile dermatomyositis (JDM), designed for clinical use, to enhance collaborative research and allow integration of data between centres. A prototype dataset was developed through a formal process that included analysing items within existing databases of patients with idiopathic inflammatory myopathies. This template was used to aid a structured multistage consensus process. Exploiting Delphi methodology, two web-based questionnaires were distributed to healthcare professionals caring for patients with JDM identified through email distribution lists of international paediatric rheumatology and myositis research groups. A separate questionnaire was sent to parents of children with JDM and patients with JDM, identified through established research networks and patient support groups. The results of these parallel processes informed a face-to-face nominal group consensus meeting of international myositis experts, tasked with defining the content of the dataset. This developed dataset was tested in routine clinical practice before review and finalisation. A dataset containing 123 items was formulated with an accompanying glossary. Demographic and diagnostic data are contained within form A collected at baseline visit only, disease activity measures are included within form B collected at every visit and disease damage items within form C collected at baseline and annual visits thereafter. Through a robust international process, a consensus dataset for JDM has been formulated that can capture disease activity and damage over time. This dataset can be incorporated into national and international collaborative efforts, including existing clinical research databases. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Jang, Jae Young; Kim, Moon Young; Jeong, Soung Won; Kim, Tae Yeob; Kim, Seung Up; Lee, Sae Hwan; Suk, Ki Tae; Park, Soo Young; Woo, Hyun Young; Kim, Sang Gyune; Heo, Jeong; Baik, Soon Koo; Kim, Hong Soo
2013-01-01
The application of ultrasound contrast agents (UCAs) is considered essential when evaluating focal liver lesions (FLLs) using ultrasonography (US). Microbubble UCAs are easy to use and robust; their use poses no risk of nephrotoxicity and requires no ionizing radiation. The unique features of contrast enhanced US (CEUS) are not only noninvasiveness but also real-time assessment of liver perfusion throughout the vascular phases. The latter feature has led to dramatic improvement in the diagnostic accuracy of US for detection and characterization of FLLs as well as guidance of therapeutic procedures and evaluation of response to treatment. This article describes the current consensus and guidelines for the use of UCAs for the FLLs that are commonly encountered in US. After a brief description of the bases of different CEUS techniques, contrast-enhancement patterns of different types of benign and malignant FLLs and other clinical applications are described and discussed on the basis of our experience and the literature data. PMID:23593604
Not-from-concentrate pilot plant 'Wonderful' cultivar pomegranate juice changes: Volatiles.
Beaulieu, John C; Obando-Ulloa, Javier M
2017-08-15
Pilot plant ultrafiltration was used to mimic the dominant U.S. commercial pomegranate juice extraction method (hydraulic pressing whole fruit), to deliver a not-from-concentrate (NFC) juice that was high-temperature short-time pasteurized and stored at 4 and 25°C. Recovered were 46 compounds, of which 38 were routinely isolated and subjected to analysis of variance to assess these NFC juices. Herein, 18 of the 21 consensus pomegranate compounds were recovered. Ultrafiltration resulted in significant decreases for many compounds. Conversely, pasteurization resulted in compound increases. Highly significant decreases in 12 consensus compounds were observed during storage. Principal component analysis demonstrated clearly which compounds were tightly associated, and how storage samples behaved very similarly, independent of temperature. Based on these data and previous work we reported, this solid-phase microextraction (SPME) method delivered a robust 'Wonderful' volatile profile in NFC juices that is likely superior qualitatively and perhaps quantitatively to typical commercial offerings. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Friedel, M. J.; Daughney, C.
2016-12-01
The development of a successful surface-groundwater management strategy depends on the quality of data provided for analysis. This study evaluates the statistical robustness when using a modified self-organizing map (MSOM) technique to estimate missing values for three hypersurface models: synoptic groundwater-surface water hydrochemistry, time-series of groundwater-surface water hydrochemistry, and mixed-survey (combination of groundwater-surface water hydrochemistry and lithologies) hydrostratigraphic unit data. These models of increasing complexity are developed and validated based on observations from the Southland region of New Zealand. In each case, the estimation method is sufficiently robust to cope with groundwater-surface water hydrochemistry vagaries due to sample size and extreme data insufficiency, even when >80% of the data are missing. The estimation of surface water hydrochemistry time series values enabled the evaluation of seasonal variation, and the imputation of lithologies facilitated the evaluation of hydrostratigraphic controls on groundwater-surface water interaction. The robust statistical results for groundwater-surface water models of increasing data complexity provide justification to apply the MSOM technique in other regions of New Zealand and abroad.
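The imputation idea behind the self-organizing-map approach can be sketched in its simplest form: find the map prototype closest to a sample in its observed dimensions and copy the prototype's values into the missing ones. This is a minimal illustration of best-matching-unit imputation, not the authors' modified SOM (MSOM); the codebook here is assumed to be already trained.

```python
import numpy as np

def som_impute(codebook, sample):
    """Impute missing entries (NaN) in `sample` from a trained SOM
    codebook: the best-matching unit is chosen by distance over the
    observed dimensions only, and its values fill the gaps."""
    sample = np.asarray(sample, float)
    obs = ~np.isnan(sample)
    d = np.sum((codebook[:, obs] - sample[obs]) ** 2, axis=1)
    bmu = codebook[np.argmin(d)]
    out = sample.copy()
    out[~obs] = bmu[~obs]
    return out
```

Because the distance uses only observed dimensions, the scheme keeps working even when most of a sample's entries are missing, which is the regime (>80% missing) the study probes.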
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
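The weighting scheme described in the abstract can be sketched concretely. In a minimal form of the γ-divergence idea, the ordinary logistic score equation is reweighted by P(y_i | x_i; β)^γ, so observations whose labels disagree with the fitted model (suspected mislabels) are automatically down-weighted without any model for the mislabel probabilities. The synthetic data, the value of γ, and the plain gradient-ascent solver below are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(2, 1, n // 2)])
y = (x > 0).astype(float)
flip = rng.choice(n, size=40, replace=False)   # 10% mislabeled responses
y[flip] = 1.0 - y[flip]
X = np.column_stack([np.ones(n), x])

def fit_gamma_logistic(X, y, gamma=0.5, lr=0.05, steps=2000):
    """Weighted logistic score: each term carries weight P(y_i|x_i)^gamma."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        lik = np.where(y == 1, p, 1 - p)       # P(y_i | x_i; beta)
        w = lik ** gamma                        # automatic down-weighting
        beta += lr * X.T @ (w * (y - p)) / len(y)
    return beta, w

beta, w = fit_gamma_logistic(X, y)
```

After fitting, the flipped observations receive systematically smaller weights than the clean ones, which is the intuition behind the method's robustness.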
Robust Speech Enhancement Using Two-Stage Filtered Minima Controlled Recursive Averaging
NASA Astrophysics Data System (ADS)
Ghourchian, Negar; Selouani, Sid-Ahmed; O'Shaughnessy, Douglas
In this paper we propose an algorithm for estimating noise in highly non-stationary noisy environments, which is a challenging problem in speech enhancement. The method is based on minima-controlled recursive averaging (MCRA), which provides accurate, robust and efficient noise power spectrum estimation. We propose a two-stage technique to prevent the appearance of musical noise after enhancement. In the first stage, the algorithm filters the noisy speech to achieve a robust signal with minimum distortion. Subsequently, it estimates the residual noise using MCRA and removes it with spectral subtraction. The performance of the proposed Filtered MCRA (FMCRA) is evaluated using objective tests on the Aurora database under various noisy environments. These measures indicate higher output SNR with lower residual noise and distortion.
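The spectral-subtraction step mentioned above can be illustrated in miniature. This is not the FMCRA pipeline: it estimates the noise magnitude spectrum from assumed noise-only leading frames (rather than by minima-controlled recursive averaging), subtracts it per frame, and floors the result to limit musical noise. The frame length, flooring factor, and synthetic tone-plus-noise signal are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft = 256
lead_frames, tone_frames = 5, 40
n_lead = lead_frames * n_fft
fs = 8000
t = np.arange(tone_frames * n_fft) / fs
clean = np.sin(2 * np.pi * 440 * t)               # 440 Hz tone
noise = 0.3 * rng.standard_normal(n_lead + len(clean))
noisy = noise.copy()
noisy[n_lead:] += clean                           # leading segment is noise-only

frames = noisy.reshape(-1, n_fft)
spec = np.fft.rfft(frames, axis=1)
mag, phase = np.abs(spec), np.angle(spec)
noise_mag = mag[:lead_frames].mean(axis=0)        # noise spectrum estimate
enh_mag = np.maximum(mag - noise_mag, 0.05 * mag) # subtract, then floor
enhanced = np.fft.irfft(enh_mag * np.exp(1j * phase), n=n_fft, axis=1).ravel()
```

The floor (here 5% of the noisy magnitude) is what suppresses the isolated spectral peaks that would otherwise be heard as musical noise.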
Robust efficient estimation of heart rate pulse from video.
Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde
2014-04-01
We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse rate. Experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed under normal illumination show the algorithm is comparable with pulse oximeter devices in both accuracy and sensitivity.
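The "pixel quotient in log space" idea can be sketched on synthetic data: the frame-to-frame ratio of mean skin intensity isolates the multiplicative pulsatile component, and the dominant FFT peak of its logarithm gives the pulse rate. The 30 fps frame rate, the 72 bpm synthetic pulse, and the heart-rate search band are assumptions for illustration:

```python
import numpy as np

fs = 30.0                                     # frames per second
t = np.arange(0, 10, 1 / fs)                  # 10 s of video
baseline = 120.0                              # mean skin-pixel intensity
pulse = 0.03 * np.sin(2 * np.pi * 1.2 * t)    # 1.2 Hz = 72 bpm modulation
intensity = baseline * (1.0 + pulse)          # per-frame mean skin intensity

q = np.log(intensity[1:] / intensity[:-1])    # pixel quotient in log space
spec = np.abs(np.fft.rfft(q - q.mean()))
freqs = np.fft.rfftfreq(len(q), d=1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)          # plausible heart-rate band
bpm = 60.0 * freqs[band][np.argmax(spec[band])]
```

Taking the quotient of consecutive frames cancels the (slowly varying) baseline illumination, which is why the method tolerates different lighting conditions.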
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
Pant, Jeevan K; Krishnan, Sridhar
2018-03-15
To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms on the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized on the first-order difference of the signal. Multiple block-divisions of the signals are carried out with various block lengths, and multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratio but also to various artefacts present in ECG signals acquired by using on-body wireless sensors. HRV parameters computed by using the QRS locations estimated from the signals reconstructed with a compression ratio as high as 90% are comparable with those computed using QRS locations estimated by the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems by using CS-based low-power wireless on-body biosensors.
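The abstract's key observation, that transient QRS segments are strongly emphasized in the first-order difference of the signal, is easy to demonstrate. This toy sketch (not the paper's CS reconstruction) builds a crude ECG-like waveform and detects the beats by thresholding the absolute first difference; the waveform shape, threshold rule, and grouping window are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 4, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)          # slow baseline wander
beats = np.arange(100, len(t), 250)               # one beat per second
for b in beats:
    ecg[b:b + 3] += [0.5, 1.0, 0.5]               # crude transient QRS spike
ecg += 0.02 * rng.standard_normal(len(t))

d = np.abs(np.diff(ecg))                          # first-order difference
thresh = 8 * np.median(d)                         # spikes dwarf the median
candidates = np.flatnonzero(d > thresh)
# collapse runs of adjacent super-threshold samples into one detection per beat
detected = candidates[np.insert(np.diff(candidates) > 50, 0, True)]
```

In the difference domain the baseline wander nearly vanishes while each QRS spike contributes a short run of large samples, which is the structure the paper's block-sparse reconstruction exploits.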
40 CFR 98.464 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-day anaerobic biodegradation test as specified in paragraph (b)(4)(i) of this section or by estimating...) of this section. (i) Perform an anaerobic biodegradation test and determine the DOC value of a waste... minimum of a 60-day anaerobic biodegradation test. Consensus-based standards organizations include, but...
Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure
NASA Astrophysics Data System (ADS)
Liu, Chun; Li, Zhengning; Zhou, Yuan
2016-06-01
We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important for safety, yet it is nontrivial since most underground infrastructures have poor lighting conditions and featureless structures. To overcome these difficulties, we adopt a parallel system, which is more efficient than the EKF-based SLAM approach since it divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is a significant issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm that performs well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage, on which the parallel system was evaluated. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm; off-line processing reduced the position error to 2 cm. This evaluation on actual data shows that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.
Integrated consensus genetic and physical maps of flax (Linum usitatissimum L.).
Cloutier, Sylvie; Ragupathy, Raja; Miranda, Evelyn; Radovanovic, Natasa; Reimer, Elsa; Walichnowski, Andrzej; Ward, Kerry; Rowland, Gordon; Duguid, Scott; Banik, Mitali
2012-12-01
Three linkage maps of flax (Linum usitatissimum L.) were constructed from populations CDC Bethune/Macbeth, E1747/Viking and SP2047/UGG5-5 containing between 385 and 469 mapped markers each. The first consensus map of flax was constructed incorporating 770 markers based on 371 shared markers including 114 that were shared by all three populations and 257 shared between any two populations. The 15 linkage group map corresponds to the haploid number of chromosomes of this species. The marker order of the consensus map was largely collinear in all three individual maps but a few local inversions and marker rearrangements spanning short intervals were observed. Segregation distortion was present in all linkage groups which contained 1-52 markers displaying non-Mendelian segregation. The total length of the consensus genetic map is 1,551 cM with a mean marker density of 2.0 cM. A total of 670 markers were anchored to 204 of the 416 fingerprinted contigs of the physical map corresponding to ~274 Mb or 74 % of the estimated flax genome size of 370 Mb. This high resolution consensus map will be a resource for comparative genomics, genome organization, evolution studies and anchoring of the whole genome shotgun sequence.
Systematic Review and Consensus Guidelines for Environmental Sampling of Burkholderia pseudomallei
Limmathurotsakul, Direk; Dance, David A. B.; Wuthiekanun, Vanaporn; Kaestli, Mirjam; Mayo, Mark; Warner, Jeffrey; Wagner, David M.; Tuanyok, Apichai; Wertheim, Heiman; Yoke Cheng, Tan; Mukhopadhyay, Chiranjay; Puthucheary, Savithiri; Day, Nicholas P. J.; Steinmetz, Ivo; Currie, Bart J.; Peacock, Sharon J.
2013-01-01
Background Burkholderia pseudomallei, a Tier 1 Select Agent and the cause of melioidosis, is a Gram-negative bacillus present in the environment in many tropical countries. Defining the global pattern of B. pseudomallei distribution underpins efforts to prevent infection, and is dependent upon robust environmental sampling methodology. Our objective was to review the literature on the detection of environmental B. pseudomallei, update the risk map for melioidosis, and propose international consensus guidelines for soil sampling. Methods/Principal Findings An international working party (Detection of Environmental Burkholderia pseudomallei Working Party (DEBWorP)) was formed during the VIth World Melioidosis Congress in 2010. PubMed (January 1912 to December 2011) was searched using the following MeSH terms: pseudomallei or melioidosis. Bibliographies were hand-searched for secondary references. The reported geographical distribution of B. pseudomallei in the environment was mapped and categorized as definite, probable, or possible. The methodology used for detecting environmental B. pseudomallei was extracted and collated. We found that global coverage was patchy, with a lack of studies in many areas where melioidosis is suspected to occur. The sampling strategies and bacterial identification methods used were highly variable, and not all were robust. We developed consensus guidelines with the goals of reducing the probability of false-negative results, and the provision of affordable and ‘low-tech’ methodology that is applicable in both developed and developing countries. Conclusions/Significance The proposed consensus guidelines provide the basis for the development of an accurate and comprehensive global map of environmental B. pseudomallei. PMID:23556010
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
Kantor, Daniel; Johnson, Kristen; Vieira, Maria Cecilia; Signorovitch, James; Li, Nanxin; Gao, Wei; Koo, Valerie; Duchesneau, Emilie; Herrera, Vivian
2018-05-15
To systematically review reports of fingolimod persistence in the treatment of relapsing-remitting multiple sclerosis (RRMS) across data sources and practice settings, and to develop a consensus estimate of the 1-year real-world persistence rate. A systematic literature review was conducted (MEDLINE, EMBASE, and abstracts from selected conferences [2013-2015]) to identify observational studies reporting 1-year fingolimod persistence among adult patients with RRMS (sample size ≥50). A random-effects meta-analysis was performed to estimate a synthesized 1-year persistence rate and to assess heterogeneity across studies. Of 527 publications identified, 25 real-world studies reporting 1-year fingolimod persistence rates were included. The studies included patients from different data sources (e.g., administrative claims, electronic medical records, or registries), used different definitions of persistence (e.g., based on prescriptions refills, patient report, or prescription orders), and spanned multiple geographic regions. Reported 1-year persistence rates ranged from 72%-100%, and exhibited statistical evidence of heterogeneity (I 2 = 93% of the variability due to heterogeneity across studies). The consensus estimate of the 1-year persistence rate was 82% (95% confidence interval: 79%-85%). Across heterogeneous study designs and patient populations found in real-world studies, the consensus 1-year fingolimod persistence rate exceeded 80%, consistent with persistence rates identified in the recently-completed trial, PREFERMS. Copyright © 2018. Published by Elsevier B.V.
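The random-effects meta-analysis behind the consensus estimate can be sketched with the standard DerSimonian-Laird procedure: a between-study variance τ² is estimated from the heterogeneity statistic Q, and studies are pooled with weights 1/(vᵢ + τ²). The study rates and sample sizes below are made up for illustration; they are not the 25 studies from the review:

```python
import numpy as np

# Hypothetical 1-year persistence rates and sample sizes for six studies
p = np.array([0.72, 0.78, 0.81, 0.85, 0.90, 0.95])
n = np.array([120, 300, 80, 450, 200, 150])

v = p * (1 - p) / n                     # within-study variances
w = 1 / v                               # fixed-effect weights
fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - fixed) ** 2)        # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(p) - 1)) / C) # DerSimonian-Laird between-study variance

w_star = 1 / (v + tau2)                 # random-effects weights
pooled = np.sum(w_star * p) / np.sum(w_star)
se = 1 / np.sqrt(np.sum(w_star))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

The I² statistic reported in the abstract is derived from the same Q, as I² = max(0, (Q − df)/Q).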
Motivation and Prospects for Spatio-spectral Interferometry in the Far-infrared
NASA Technical Reports Server (NTRS)
Leisawitz, David
2013-01-01
Consensus developed through a series of workshops starting in 1998: a compelling science case for high angular resolution imaging and spectroscopy in the far-infrared, together with mission concepts. The plan is robust; it has evolved over the years but has consistently called for high resolution.
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data and apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search to improve computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence robustness, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal to noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
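The Variable Projection idea generalizes beyond the Tofts models: whenever a model is linear in some parameters given the nonlinear ones, the linear coefficients can be eliminated by a least-squares solve inside the search, reducing the fit to a line search over the nonlinear parameter alone. The toy exponential model y = c₁·exp(−θt) + c₂ below stands in for the Tofts kinetics, and the grid-plus-refinement search is an illustrative assumption:

```python
import numpy as np

t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-1.5 * t) + 0.5              # noiseless synthetic data

def projected_residual(theta):
    """Solve for the linear coefficients c given theta, return (RSS, c)."""
    A = np.column_stack([np.exp(-theta * t), np.ones_like(t)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((A @ c - y) ** 2), c

# one-dimensional search over the single nonlinear parameter
thetas = np.linspace(0.1, 5.0, 491)           # coarse grid, step 0.01
best = min(thetas, key=lambda th: projected_residual(th)[0])
fine = np.linspace(best - 0.01, best + 0.01, 201)
theta_hat = min(fine, key=lambda th: projected_residual(th)[0])
_, c_hat = projected_residual(theta_hat)
```

Because the linear solve is exact at every trial θ, the search space shrinks from three dimensions to one, which is the source of the speed and convergence gains the abstract reports.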
Kendall, W.L.; Nichols, J.D.; Hines, J.E.
1997-01-01
Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
Data-Adaptive Bias-Reduced Doubly Robust Estimation.
Vermeulen, Karel; Vansteelandt, Stijn
2016-05-01
Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
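The double-robustness property described above can be sketched with the basic augmented inverse-probability-weighted (AIPW) estimator of a mean under data missing at random: it remains consistent here because the propensity model is correct, even though the outcome working model (a constant) is deliberately misspecified. The simulation settings are illustrative and do not reproduce the paper's bias-reduced construction:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(0, 1, n)
y = 2 * x + rng.normal(0, 0.1, n)          # true E[Y] = 1
pi = 1 / (1 + np.exp(1 - 3 * x))           # true missingness propensity
r = rng.uniform(size=n) < pi               # r = True: Y observed

m = y[r].mean()                            # misspecified outcome model: a constant
# AIPW: outcome-model prediction plus propensity-weighted residual correction
aipw = np.mean(m + r * (y - m) / pi)
```

The complete-case mean `m` is biased upward (observation is more likely for large x), while the correction term restores consistency; swapping which nuisance model is correct would give the same protection the other way around.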
Resilient Distributed Estimation Through Adversary Detection
NASA Astrophysics Data System (ADS)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2018-05-01
This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator (FRDE) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The FRDE algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under FRDE, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of FRDE.
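The consensus+innovations structure can be sketched without the adversary-detection logic of FRDE: each agent mixes neighbors' estimates (consensus) with its own noisy observation (innovations) under a decaying innovation gain. The complete five-agent graph, the gains, and the scalar parameter are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = 2.0
n = 5
y = theta + 0.1 * rng.standard_normal(n)   # one noisy observation per agent
x = np.zeros(n)                            # initial estimates

a = 0.1                                    # constant consensus gain
for k in range(5000):
    b = 1.0 / (k + 2)                      # decaying innovations gain
    consensus = x.sum() - n * x            # complete graph: sum_j (x_j - x_i)
    x = x + a * consensus + b * (y - x)    # consensus + innovations update
```

Because the consensus term sums to zero across agents, the network average is driven toward the average observation, while the consensus term shrinks disagreement, so all agents converge to a common estimate near θ.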
Elmore, Joann G.; Nelson, Heidi D.; Pepe, Margaret S.; Longton, Gary M.; Tosteson, Anna N.A.; Geller, Berta; Onega, Tracy; Carney, Patricia A.; Jackson, Sara L.; Allison, Kimberly H.; Weaver, Donald L.
2016-01-01
Background The effect of physician diagnostic variability on accuracy at a population level depends on the prevalence of diagnoses. Objective To estimate how diagnostic variability affects accuracy from the perspective of a U.S. woman aged 50 to 59 years having a breast biopsy. Design Applied probability using Bayes theorem. Setting B-Path (Breast Pathology) Study comparing pathologists’ interpretations of a single biopsy slide versus a reference consensus interpretation from 3 experts. Participants 115 practicing pathologists (6900 total interpretations from 240 distinct cases). Measurements A single representative slide from each of the 240 cases was used to estimate the proportion of biopsies with a diagnosis that would be verified if the same slide were interpreted by a reference group of 3 expert pathologists. Probabilities of confirmation (predictive values) were estimated using B-Path Study results and prevalence of biopsy diagnoses for women aged 50 to 59 years in the Breast Cancer Surveillance Consortium. Results Overall, if 1 representative slide were used per case, 92.3% (95% CI, 91.4% to 93.1%) of breast biopsy diagnoses would be verified by reference consensus diagnoses, with 4.6% (CI, 3.9% to 5.3%) overinterpreted and 3.2% (CI, 2.7% to 3.6%) underinterpreted. Verification of invasive breast cancer and benign without atypia diagnoses is highly probable; estimated predictive values were 97.7% (CI, 96.5% to 98.7%) and 97.1% (CI, 96.7% to 97.4%), respectively. Verification is less probable for atypia (53.6% overinterpreted and 8.6% underinterpreted) and ductal carcinoma in situ (DCIS) (18.5% overinterpreted and 11.8% underinterpreted). Limitations Estimates are based on a testing situation with 1 slide used per case and without access to second opinions. Population-adjusted estimates may differ for women from other age groups, unscreened women, or women in different practice settings. 
Conclusion This analysis, based on interpretation of a single breast biopsy slide per case, predicts a low likelihood that a diagnosis of atypia or DCIS would be verified by a reference consensus diagnosis. This diagnostic gray zone should be considered in clinical management decisions in patients with these diagnoses. Primary Funding Source National Cancer Institute. PMID:26999810
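The study's applied-probability logic is a direct use of Bayes' theorem: the probability that a given diagnosis would be verified by the reference consensus depends jointly on per-category interpretation rates and on the population prevalence of each true diagnosis. The numbers below are made up to show the arithmetic; they are not the B-Path estimates:

```python
# Hypothetical prevalences of the reference (true) diagnostic categories:
prev = {"benign": 0.70, "atypia": 0.10, "dcis": 0.05, "invasive": 0.15}
# Hypothetical P(pathologist calls "atypia" | true category):
p_call_atypia = {"benign": 0.05, "atypia": 0.50, "dcis": 0.15, "invasive": 0.01}

# Total probability that a random biopsy is called "atypia":
p_atypia = sum(prev[c] * p_call_atypia[c] for c in prev)
# Bayes: P(truth is atypia | called atypia), i.e. the predictive value:
ppv_atypia = prev["atypia"] * p_call_atypia["atypia"] / p_atypia
```

With these illustrative rates the predictive value comes out near 0.53: even moderate over-calling from the much more prevalent benign category dilutes the verification probability of an atypia diagnosis, which mirrors the diagnostic gray zone the authors describe.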
Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo
2015-11-20
This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase, the values of G and Lq. For this reason, two open-loop observers and an optimization method based on a particle-swarm are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq besides exhibiting robustness against parameter uncertainties.
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
Robust Variable Selection with Exponential Squared Loss.
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-04-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
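Regression with the exponential squared loss L(r) = 1 − exp(−r²/γ) can be sketched by iteratively reweighted least squares, since the stationarity condition is a weighted normal equation with weights exp(−r²/γ) that vanish for gross outliers. The fixed γ, the synthetic data, and the absence of the paper's penalty term and data-adaptive tuning of γ are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 100)
y = 1.0 + 3.0 * x + 0.05 * rng.standard_normal(100)
y[::20] += 10.0                               # five gross outliers
X = np.column_stack([np.ones_like(x), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from ordinary least squares
ols = beta.copy()
gamma = 5.0
for _ in range(50):
    r = y - X @ beta
    w = np.exp(-r ** 2 / gamma)               # outliers get near-zero weight
    W = X * w[:, None]
    beta = np.linalg.solve(X.T @ W, W.T @ y)  # weighted normal equations
```

Because the weight decays exponentially in the squared residual, a point with residual ~10 is effectively removed, which is the mechanism behind the bounded influence function the abstract describes.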
Shoemark, Helen; Rimmer, Jo; Bower, Janeen; Tucquet, Belinda; Miller, Lauren; Fisher, Michelle; Ogburn, Nicholas; Dun, Beth
2018-03-09
This article reports on a project at the Royal Children's Hospital Melbourne in which the music therapy team synthesized their practice and related theories to propose a new conceptual framework for music therapy in their acute pediatric setting. The impetus for the project was the realization that in the process of producing key statements about the non-musical benefits of music therapy, the cost was often the suppression of information about the patient's unique musical potential as the major (mediating) pathway from referral reason, to music therapy, and to effective outcomes. The purpose of the project was to articulate how this team of clinicians conceive of the patient's musical self as the major theoretical pathway for music therapy in an evidence-based acute medical setting. The clinicians' shared reflexive process across six months involved robust directed discussion, annotation of shared reading, and documentation of all engagement in words and diagrams. The outcome was a consensus framework including three constructs: the place of music in the life of the infant, child, and young person; Culture and Context; and Musical Manifestations. The constructs were tested in a clinical audit, and found to be robustly inclusive. In addition to the conceptual framework, this project serves to demonstrate a process by which clinical teams may reflect on their individual practice and theory together to create a consensus stance for the overall service they provide in the one setting.
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking the randomly-piled objects from a bin for industrial production purposes, and robotic bin picking is always used in automated assembly lines. In order to achieve a higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly-piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor Curve Set Feature (CSF) is proposed to describe a point by the surface fluctuation around this point and is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSF efficiently. The matching process combines the 2D-space matching idea of the original Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and proved to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proven to be robust to noise, able to detect metal parts, and both more accurate and more than 10 times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
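The voxel-based verification idea, scoring a candidate pose by how many transformed model points land in occupied scene voxels, can be sketched with a hash set of integer voxel coordinates. The names, voxel size, and toy rigid transform below are illustrative; the paper's CSF/RMF matching stages are not reproduced here.

```python
import numpy as np

def voxel_set(points, size):
    """Hash the occupied voxels of a point cloud as integer grid coordinates."""
    return {tuple(v) for v in np.floor(points / size).astype(int)}

def pose_score(model, scene_voxels, R, t, size):
    """Fraction of transformed model points that fall in occupied scene voxels.

    Each lookup is an O(1) set membership test, which is why this style of
    verification can beat a kd-tree nearest-neighbour check.
    """
    pts = model @ R.T + t
    hits = sum(tuple(k) in scene_voxels for k in np.floor(pts / size).astype(int))
    return hits / len(model)

# a scene that is an exact rigidly-moved copy of the model
rng = np.random.default_rng(3)
model = rng.random((200, 3))
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([5.0, 0.0, 0.0])
scene_voxels = voxel_set(model @ R.T + t, 0.1)
good = pose_score(model, scene_voxels, R, t, 0.1)          # correct pose
bad = pose_score(model, scene_voxels, np.eye(3), np.zeros(3), 0.1)  # wrong pose
```

The correct pose scores 1.0 here because every transformed point lands in an occupied voxel, while the wrong pose scores 0.0.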
Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian
2014-07-01
This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized to improve performance in distributed parameter systems. By the Lyapunov stability theorem, each moving actuator/sensor agent's velocity is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms which penalize the disagreement of the estimates are considered. Both filters result in well-posedness of the collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
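The effect of a consensus term that penalizes disagreement between estimators can be illustrated with a scalar toy plant observed by four agents. The gains, noise levels, and all-to-all coupling below are assumptions for illustration, not the paper's distributed-parameter filter.

```python
import numpy as np

def run_filters(gamma, steps=400, n_agents=4, k=0.1, seed=7):
    """Each agent tracks a constant plant state (5.0) from its own noisy
    measurement; gamma couples the estimates through a consensus term."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_agents)                            # local state estimates
    spread = []
    for _ in range(steps):
        y = 5.0 + rng.normal(0.0, 0.5, n_agents)      # noisy local measurements
        consensus = gamma * (x.sum() - n_agents * x)  # sum_j (x_j - x_i) per agent
        x = x + k * (y - x) + consensus
        spread.append(x.max() - x.min())
    # average disagreement over the last 100 steps
    return x, float(np.mean(spread[-100:]))

x_plain, spread_plain = run_filters(gamma=0.0)   # no consensus coupling
x_cons, spread_cons = run_filters(gamma=0.1)     # with consensus term
```

Both variants track the plant state, but the consensus-coupled filters maintain a markedly smaller steady-state disagreement, mirroring the benefit reported in the abstract.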
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Diagnostic index: an open-source tool to classify TMJ OA condyles
NASA Astrophysics Data System (ADS)
Paniagua, Beatriz; Pascal, Laura; Prieto, Juan; Vimort, Jean Baptiste; Gomes, Liliane; Yatabe, Marilia; Ruellas, Antonio Carlos; Budin, Francois; Pieper, Steve; Styner, Martin; Benavides, Erika; Cevidanes, Lucia
2017-03-01
Osteoarthritis (OA) of temporomandibular joints (TMJ) occurs in about 40% of the patients who present TMJ disorders. Despite its prevalence, OA diagnosis and treatment remain controversial since there are no clear symptoms of the disease, especially in early stages. Quantitative tools based on 3D imaging of the TMJ condyle have the potential to help characterize TMJ OA changes. The goals of the tools proposed in this study are to ultimately develop robust imaging markers for diagnosis and assessment of treatment efficacy. This work proposes to identify differences among asymptomatic controls and different clinical phenotypes of TMJ OA by means of Statistical Shape Modeling (SSM), obtained via clinical expert consensus. From three different grouping schemes (with 3, 5 and 7 groups), our best results reveal that the majority (74.5%) of the classifications occur in agreement with the groups assigned by consensus between our clinical experts. Our findings suggest the existence of different disease-based phenotypic morphologies in TMJ OA. Our preliminary findings with statistical shape modeling based biomarkers may provide a quantitative staging of the disease. The methodology used in this study is included in an open source image analysis toolbox, to ensure reproducibility and appropriate distribution and dissemination of the solution proposed.
Biomolecular Self-Defense and Futility of High-Specificity Therapeutic Targeting
Rosenfeld, Simon
2011-01-01
Robustness has been long recognized to be a distinctive property of living entities. While a reasonably wide consensus has been achieved regarding the conceptual meaning of robustness, the biomolecular mechanisms underlying this systemic property are still open to many unresolved questions. The goal of this paper is to provide an overview of existing approaches to characterization of robustness in mathematically sound terms. The concept of robustness is discussed in various contexts including network vulnerability, nonlinear dynamic stability, and self-organization. The second goal is to discuss the implications of biological robustness for individual-target therapeutics and possible strategies for outsmarting drug resistance arising from it. Special attention is paid to the concept of swarm intelligence, a well studied mechanism of self-organization in natural, societal and artificial systems. It is hypothesized that swarm intelligence is the key to understanding the emergent property of chemoresistance. PMID:22272063
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and contaminated measurements with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as the validity of the algorithm is demonstrated through experiment results.
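Huber's M-estimation bounds the influence of outlying residuals by downweighting rather than discarding them. A minimal sketch of the reweighting rule, applied to a simple location estimate rather than the HIDDF measurement update; the threshold c = 1.345 and MAD-based scale are the usual defaults, and the contaminated data are illustrative.

```python
import numpy as np

def huber_mean(x, c=1.345, iters=20):
    """Robust location estimate via Huber iteratively reweighted least squares.

    Residuals within c*scale get weight 1; larger residuals get weight
    c*scale/|r|, which bounds the influence of each outlier.
    """
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))    # MAD-based robust scale
    for _ in range(iters):
        r = x - mu
        w = np.minimum(1.0, c * scale / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

# 50 clean samples centred on 10, plus 5 gross outliers at 100
data = np.concatenate([np.linspace(9.5, 10.5, 50), np.full(5, 100.0)])
robust = huber_mean(data)
```

The robust estimate stays near 10 while the plain sample mean is dragged above 18 by the outliers, which is the behaviour the filter inherits for contaminated range measurements.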
NASA Astrophysics Data System (ADS)
He, A.; Quan, C.
2018-04-01
The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and the algorithm for conversion of orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast and optimized orientation-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for the refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated by both simulated and experimental fringe patterns.
Composition of a dewarped and enhanced document image from two view images.
Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik
2009-07-01
In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike the conventional works that require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with the cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaic is also performed for further improving the visual quality. By finding better parts of images (with less out-of-focus blur and/or without specular reflections) from either view, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book or document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaic is an indispensable step in surface measurement and digital terrain map generation. To address the mosaic problem of local unorganized point clouds with coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of this method proceeds sequentially through random sampling with an additional shape constraint, data normalization of the point clouds, absolute orientation, data denormalization of the point clouds, inlier counting, etc. After N random sample trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points which form a triangle. The shape of the triangle is considered in random sample selection in order to make the selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid singularity. The whole rotation transformation between the two coordinate systems can be solved by two successive rotations expressed by Euler angle vectors, each rotation having an explicit physical meaning. Both simulation and real data are used to prove the correctness and validity of this mosaic method. The method has better noise immunity due to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. This method is applicable to high-precision measurement of three-dimensional surfaces and also to 3-D terrain mosaicking.
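A minimal version of the RANSAC loop described above: sample three non-collinear correspondences, solve the absolute orientation, count inliers, and re-estimate on the largest consensus set. The orientation is solved here with the SVD-based Kabsch method rather than the paper's Euler-angle construction, and all names, thresholds, and the toy data are illustrative assumptions.

```python
import numpy as np

def absolute_orientation(A, B):
    """Rigid transform (R, t) with B ≈ A @ R.T + t, via SVD (Kabsch)."""
    Ac, Bc = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - Ac).T @ (B - Bc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, Bc - R @ Ac

def ransac_mosaic(A, B, trials=200, tol=0.05, min_area=0.05, seed=1):
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(trials):
        idx = rng.choice(len(A), 3, replace=False)
        a, b = A[idx], B[idx]
        # shape constraint: skip nearly collinear sample triangles
        if 0.5 * np.linalg.norm(np.cross(a[1] - a[0], a[2] - a[0])) < min_area:
            continue
        R, t = absolute_orientation(a, b)
        inliers = np.linalg.norm(A @ R.T + t - B, axis=1) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate the model on the largest consensus set
    R, t = absolute_orientation(A[best_inliers], B[best_inliers])
    return R, t, best_inliers

# toy clouds: 70 true correspondences plus 30 mismatched points
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.5])
B = A @ R_true.T + t_true + 0.005 * rng.normal(size=(100, 3))
B[:30] += rng.normal(0.0, 1.0, (30, 3))      # mismatched correspondences
R_est, t_est, inliers = ransac_mosaic(A, B)
```

Despite 30% gross mismatches, the consensus set isolates the true correspondences and the re-estimated transform matches the ground truth closely.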
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
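At its core, identifying Pareto-optimal subensembles across several skill measures reduces to filtering out dominated candidates. A small sketch follows; the evolutionary search and the actual CMIP5 skill metrics are beyond this example, the toy score tuples are invented, and lower scores are taken to be better.

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly better in one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))

def pareto_front(candidates):
    """Keep the candidates whose score tuples are not dominated by any other."""
    return [p for p in candidates if not any(dominates(q, p) for q in candidates)]

# toy scores per subensemble: (SST error, zonal-wind error, precipitation error)
scores = [(0.9, 1.8, 1.1), (1.4, 0.7, 1.3), (1.5, 1.9, 1.2), (2.0, 2.0, 2.0)]
front = pareto_front(scores)
```

Only the first two subensembles survive: each of the other two is beaten on every measure by some competitor. An evolutionary algorithm, as used in the study, is a way of searching the huge space of subensembles without scoring them all.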
The use of resighting data to estimate the rate of population growth of the snail kite in Florida
Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.
2002-01-01
The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997-2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378
Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin
2017-02-16
The estimation of heart rate (HR) based on wearable devices is of interest in fitness. Photoplethysmography (PPG) is a promising approach to estimate HR due to low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust approach based on random forest is proposed for accurately estimating HR from the photoplethysmography signal contaminated by intense motion artifacts, consisting of two stages. Stage 1 proposes a hybrid method to effectively remove MA with a low computation complexity, where two MA removal algorithms are combined by an accurate binary decision algorithm whose aim is to decide whether or not to adopt the second MA removal algorithm. Stage 2 proposes a random forest-based spectral peak-tracking algorithm, whose aim is to locate the spectral peak corresponding to HR, formulating the problem of spectral peak tracking into a pattern classification problem. Experiments on the PPG datasets including 22 subjects used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved the average absolute error of 1.65 beats per minute (BPM) on the 22 PPG datasets. Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
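The spectral peak-tracking stage can be illustrated without the random-forest classifier: take the strongest in-band FFT peak that lies within a plausible jump of the previous estimate. The window length, band limits, jump limit, and synthetic PPG below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def track_hr(windows, fs, hr_prev, band=(42.0, 210.0), max_jump=12.0):
    """Track heart rate (BPM) across PPG windows via spectral peaks.

    Within each window, prefer the strongest FFT peak within max_jump BPM
    of the previous estimate; fall back to the band-wide maximum.
    """
    estimates = []
    for w in windows:
        spec = np.abs(np.fft.rfft(w * np.hanning(len(w)), n=8192))  # zero-padded
        bpm = np.fft.rfftfreq(8192, 1.0 / fs) * 60.0
        in_band = (bpm >= band[0]) & (bpm <= band[1])
        near = in_band & (np.abs(bpm - hr_prev) <= max_jump)
        sel = near if near.any() else in_band
        hr_prev = bpm[np.where(sel)[0][np.argmax(spec[sel])]]
        estimates.append(hr_prev)
    return estimates

# synthetic 8-second PPG windows with heart rate drifting 70 -> 78 BPM
fs = 50.0
t = np.arange(int(fs * 8.0)) / fs
rng = np.random.default_rng(4)
true_hr = [70.0, 72.0, 74.0, 76.0, 78.0]
windows = [np.sin(2 * np.pi * hr / 60.0 * t)
           + 0.5 * np.sin(2 * np.pi * 2 * hr / 60.0 * t)   # first harmonic
           + 0.1 * rng.normal(size=t.size)
           for hr in true_hr]
est = track_hr(windows, fs, hr_prev=70.0)
```

The jump constraint keeps the tracker on the fundamental even though the harmonic peak is also in the heart-rate band; the random forest in the paper replaces this hand-written selection rule with a learned classifier.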
Unsupervised, Robust Estimation-based Clustering for Multispectral Images
NASA Technical Reports Server (NTRS)
Netanyahu, Nathan S.
1997-01-01
To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, and extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.
A novel methodology for building robust design rules by using design based metrology (DBM)
NASA Astrophysics Data System (ADS)
Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan
2013-03-01
This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules uses a simulation tool and a simple pattern spider mask. At the early stage of a device, the accuracy of the simulation tool is poor, and the evaluation of the simple pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a huge number of pattern situations including various 1D and 2D design structures. In order to overcome the difficulties of inspecting many types of patterns, we introduced Design Based Metrology (DBM) of Nano Geometry Research, Inc., and these mass patterns could be inspected at a fast speed with DBM. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy for building design rules. All of the test patterns were inspected within a few hours. Mass silicon data were handled by statistical processing rather than personal judgment. From the results, robust design rules are successfully verified and extracted. Finally we found that our methodology is appropriate for building robust design rules.
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled in large numbers into packs and are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of the vehicle control, preventing the battery from over-charging and over-discharging and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model combined with an electrochemical model is employed to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2018-05-01
The pinning/leader control problem concerns the design of a leader or pinning controller that guides a complex network to a desired trajectory or target (synchronisation or consensus). For a time-invariant complex network, it includes the design of the leader or pinning controller gain and the choice of the number of nodes to pin. Usually, the lower the number of pinned nodes, the larger the pinning gain required to achieve network synchronisation. On the other hand, realistic application scenarios of complex networks are characterised by switching topologies and time-varying node coupling strengths and link weights, which make the pinning/leader control problem hard to solve. Additionally, the system dynamics at the nodes can be heterogeneous. In this paper, we derive robust stabilisation conditions for time-varying heterogeneous complex networks with jointly connected topologies when the coupling strength and link weight interactions are affected by time-varying uncertainties. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, we formulate computationally undemanding stabilisability conditions to design a pinning/leader control gain for robust network synchronisation. The effectiveness of the proposed approach is shown by several design examples applied to a paradigmatic, well-known complex network composed of heterogeneous Chua's circuits.
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are, however, biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may actually be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus unlikely to be really robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations.
For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely.
To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1 Hingray, B., Blanchet, J. (revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
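The bias described above, that the empirical variance of chain means inflates model uncertainty by the internal-variability variance divided by the member count, is easy to reproduce with synthetic ensembles. All numbers below are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_chains, n_members = 2000, 3
sig_model, sig_int = 1.0, 3.0      # model spread and internal variability (sd)

# each modeling chain: a true climate response plus noisy members
mu = rng.normal(0.0, sig_model, n_chains)
members = mu[:, None] + rng.normal(0.0, sig_int, (n_chains, n_members))

chain_means = members.mean(axis=1)
naive = chain_means.var(ddof=1)               # E[naive] = sig_model^2 + sig_int^2 / m
within = members.var(axis=1, ddof=1).mean()   # estimates sig_int^2
unbiased = naive - within / n_members         # bias-corrected model uncertainty
```

With these settings the naive estimator reports about 4 instead of the true model-uncertainty variance of 1, a roughly 300% overestimate, while subtracting the within-chain variance over the member count recovers the truth on average.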
An open-source computational and data resource to analyze digital maps of immunopeptidomes
Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J; Schuster, Heiko; Ternette, Nicola; Alpízar, Adán; Schittenhelm, Ralf B; Ramarathinam, Sri H; Lindestam Arlehamn, Cecilia S; Chiek Koh, Ching; Gillet, Ludovic C; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David S; Deutsch, Eric W; Moritz, Robert L; Purcell, Anthony W; Rammensee, Hans-Georg; Stevanovic, Stefan; Aebersold, Ruedi
2015-01-01
We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies. DOI: http://dx.doi.org/10.7554/eLife.07661.001 PMID:26154972
O'Reilly, Joseph E; Donoghue, Philip C J
2018-03-01
Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees include a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
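The majority-rule consensus idea is simple to sketch in pure Python: represent each sampled tree by its set of clades (taxon sets) and keep the clades appearing in more than half of the sample. The encoding and taxon names below are illustrative, not from the paper.

```python
from collections import Counter

def majority_rule_consensus(trees, threshold=0.5):
    """Return clades present in more than `threshold` of the sampled trees.

    Each tree is a set of clades; each clade is a frozenset of taxon labels.
    Clades passing a >50% threshold are mutually compatible, so together
    they define the majority-rule consensus tree.
    """
    counts = Counter(clade for tree in trees for clade in tree)
    n = len(trees)
    return {clade: counts[clade] / n
            for clade in counts if counts[clade] / n > threshold}

# Three sampled topologies on taxa A-D, encoded by their non-trivial clades.
sample = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AB"), frozenset("ABC")},
]
consensus = majority_rule_consensus(sample)
# frozenset("AB") appears in all trees; frozenset("ABD") in only one, so it drops out
```

The returned frequencies double as clade support values, which is the sense in which MRC trees avoid reporting poorly supported clades.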
O’Reilly, Joseph E; Donoghue, Philip C J
2018-01-01
Abstract Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees include a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error-prone and not robust because eyelids, eyelashes, corneal reflections, or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. The pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined, and a threshold was found which, together with heuristics, discriminated normal from abnormal curvature. The remaining boundary points were fit with an ellipse using a least-squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates the pupil center with less than 40% of the pupil boundary points visible.
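The final step, fitting an ellipse to the surviving boundary points, can be sketched with a generic least-squares conic fit. This assumes the curvature-based filtering has already removed occluded points; it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def fit_ellipse_center(x, y):
    """Least-squares conic fit a*x^2 + b*xy + c*y^2 + d*x + e*y = 1,
    returning the ellipse center. Points flagged as occluded should be
    removed before calling."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(D, np.ones_like(x), rcond=None)[0]
    # Center where the conic's gradient vanishes: 2a*xc + b*yc = -d, etc.
    xc, yc = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return xc, yc

# Noisy circle (a special ellipse) centered at (3, -2), with ~30% of the
# boundary removed to mimic an occluded arc.
rng = np.random.default_rng(1)
t = np.linspace(0.2 * np.pi, 1.6 * np.pi, 200)
x = 3 + 5 * np.cos(t) + 0.01 * rng.normal(size=t.size)
y = -2 + 5 * np.sin(t) + 0.01 * rng.normal(size=t.size)
xc, yc = fit_ellipse_center(x, y)
# (xc, yc) recovers the true center despite the missing arc
```

Because the fit uses only valid boundary points, the center estimate degrades gracefully as more of the boundary is occluded, consistent with the 40%-visibility claim above.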
Robust power spectral estimation for EEG data
Melman, Tamar; Victor, Jonathan D.
2016-01-01
Background: Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method: Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results: Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison with existing method: The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion: In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041
Robust power spectral estimation for EEG data.
Melman, Tamar; Victor, Jonathan D
2016-08-01
Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation.
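The quantile-based idea can be sketched simply: for Gaussian data, periodogram ordinates are exponentially distributed, so the median across repeated estimates divided by ln(2) is a consistent estimate of the mean spectrum that is insensitive to a minority of artifact-laden epochs. The sketch below uses ordinary windowed periodograms standing in for per-taper multitaper spectra, and Python rather than the paper's MATLAB/Chronux implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
fs, n_epochs, n_samp = 250, 100, 512
x = rng.normal(0, 1, (n_epochs, n_samp))                     # white "EEG" epochs
x[::10] += 20 * rng.normal(0, 1, (n_epochs // 10, n_samp))   # intermittent artifact

# Per-epoch periodograms (stand-ins for per-taper multitaper spectra).
window = np.hanning(n_samp)
X = np.fft.rfft(x * window, axis=1)
pxx = (np.abs(X) ** 2) / (fs * np.sum(window ** 2))

# Standard estimator: mean across epochs, inflated by the artifact epochs.
mean_spec = pxx.mean(axis=0)

# Quantile-based estimator: exponential ordinates have median = mean * ln(2),
# so median / ln(2) estimates the same spectrum, robustly.
robust_spec = np.median(pxx, axis=0) / np.log(2)
```

With 10% of epochs contaminated, the mean estimator is inflated many-fold while the median-based estimator is essentially unchanged, which is the effect the abstract describes.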
QSAR Modeling of Rat Acute Toxicity by Oral Exposure
Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander
2009-01-01
Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT’s training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371
Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.
Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander
2009-12-01
Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD(50)) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R(2) of linear regression between actual and predicted LD(50) values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R(2) ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD(50) for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with the higher coverage as compared to individual constituent models. The validated consensus LD(50) models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
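The consensus step, averaging per-model predictions over the models whose applicability domain covers each compound, can be sketched as follows. The five "models" and domain flags here are hypothetical numbers for illustration only.

```python
import numpy as np

def consensus_prediction(preds, in_domain):
    """Average per-model predictions, using only models whose applicability
    domain covers the compound. preds, in_domain: (n_models, n_compounds).
    Returns consensus values (NaN when no model covers a compound) and the
    fraction of compounds covered by at least one model."""
    preds = np.where(in_domain, preds, np.nan)
    consensus = np.nanmean(preds, axis=0)
    coverage = np.mean(in_domain.any(axis=0))
    return consensus, coverage

# Five hypothetical models predicting log(LD50) for four compounds.
preds = np.array([[2.1, 3.0, 1.2, 4.0],
                  [2.3, 2.8, 1.0, 3.8],
                  [1.9, 3.2, 1.1, 4.2],
                  [2.2, 2.9, 1.3, 3.9],
                  [2.0, 3.1, 0.9, 4.1]])
in_domain = np.array([[1, 1, 0, 1],
                      [1, 1, 1, 0],
                      [1, 0, 1, 1],
                      [0, 1, 1, 1],
                      [1, 1, 1, 1]], dtype=bool)
consensus, coverage = consensus_prediction(preds, in_domain)
```

This captures why the consensus model can have both higher accuracy (errors of individual models average out) and higher coverage (a compound outside one model's domain may still be inside another's) than any constituent model.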
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process.
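One member of this class of weighted estimating equations can be sketched for the Rasch model: solve sum_i w(r_i) * (x_i - P_i(theta)) = 0, where r_i is a standardized residual and w is a bisquare weight that zeroes out grossly aberrant responses such as lucky guesses. The weight and residual choices below are illustrative instances of the general class, not the paper's specific recommendation.

```python
import numpy as np

def robust_ability(responses, difficulties, tuning=4.0, n_iter=50):
    """Robust Rasch ability estimate via a bisquare-weighted score equation,
    solved by IRLS-style damped Newton steps. tuning=np.inf recovers
    ordinary (unweighted) maximum likelihood."""
    x = np.asarray(responses, float)
    b = np.asarray(difficulties, float)
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        r = (x - p) / np.sqrt(p * (1 - p))        # standardized residuals
        w = np.where(np.abs(r) < tuning, (1 - (r / tuning) ** 2) ** 2, 0.0)
        score = np.sum(w * (x - p))
        info = np.sum(w * p * (1 - p))            # weighted information
        theta += score / max(info, 1e-8)
    return theta

# 20 items answered by a theta = 1 examinee, with lucky guesses forced on
# the four hardest items (a random-guessing disturbance).
rng = np.random.default_rng(3)
b = np.linspace(-2, 5, 20)
p_true = 1.0 / (1.0 + np.exp(-(1.0 - b)))
x = (rng.uniform(size=b.size) < p_true).astype(float)
x[-4:] = 1.0
theta_robust = robust_ability(x, b)
theta_ml = robust_ability(x, b, tuning=np.inf)   # unweighted ML for comparison
```

The guessed hard items produce large standardized residuals, receive zero weight, and so inflate the ML estimate but not the robust one, mirroring the simulation findings above.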
Neuroanatomical profiles of personality change in frontotemporal lobar degeneration.
Mahoney, Colin J; Rohrer, Jonathan D; Omar, Rohani; Rossor, Martin N; Warren, Jason D
2011-05-01
The neurobiological basis of personality is poorly understood. Frontotemporal lobar degeneration (FTLD) frequently presents with complex behavioural changes, and therefore potentially provides a disease model in which to investigate brain substrates of personality. Our aim was to assess neuroanatomical correlates of personality change in a cohort of individuals with FTLD using voxel-based morphometry (VBM). Thirty consecutive individuals fulfilling consensus criteria for FTLD were assessed. Each participant's carer completed a Big Five Inventory (BFI) questionnaire on five key personality traits; for each trait, a change score was derived based on current compared with estimated premorbid characteristics. All participants underwent volumetric brain magnetic resonance imaging. A VBM analysis was implemented regressing the change score for each trait against regional grey matter volume across the FTLD group. The FTLD group showed a significant decline in extraversion, agreeableness, conscientiousness and openness and an increase in neuroticism. Change in particular personality traits was associated with overlapping profiles of grey matter loss in more anterior cortical areas and relative preservation of grey matter in more posterior areas; the most robust neuroanatomical correlate was identified for reduced conscientiousness in the region of the posterior superior temporal gyrus. Quantitative measures of personality change in FTLD can be correlated with changes in regional grey matter. The neuroanatomical profiles for particular personality traits overlap brain circuits previously implicated in aspects of social cognition and suggest that dysfunction at the level of distributed cortical networks underpins personality change in FTLD.
Risk, Robustness and Water Resources Planning Under Uncertainty
NASA Astrophysics Data System (ADS)
Borgomeo, Edoardo; Mortazavi-Naeini, Mohammad; Hall, Jim W.; Guillod, Benoit P.
2018-03-01
Risk-based water resources planning is based on the premise that water managers should invest up to the point where the marginal benefit of risk reduction equals the marginal cost of achieving that benefit. However, this cost-benefit approach may not guarantee robustness under uncertain future conditions, for instance under climatic changes. In this paper, we expand risk-based decision analysis to explore possible ways of enhancing robustness in engineered water resources systems under different risk attitudes. Risk is measured as the expected annual cost of water use restrictions, while robustness is interpreted in the decision-theoretic sense as the ability of a water resource system to maintain performance—expressed as a tolerable risk of water use restrictions—under a wide range of possible future conditions. Linking risk attitudes with robustness allows stakeholders to explicitly trade off incremental increases in robustness with investment costs for a given level of risk. We illustrate the framework through a case study of London's water supply system using state-of-the-art regional climate simulations to inform the estimation of risk and robustness.
Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases the solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors in the case of complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, as blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blur key-frame selection algorithm to make the VO robust to blurred images. We also carried out varied comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods under blurred-image conditions while not adding significant computational cost to the original VO algorithms. PMID:27399704
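The blur-evaluation idea can be sketched with a crude stand-in for SIGD: blur concentrates the image's gradient magnitudes near zero, so the fraction of small gradients rises. The threshold, synthetic images, and box blur below are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np

def small_gradient_ratio(img, threshold=2.0):
    """Fraction of pixels with small intensity gradients. Blurred images
    concentrate gradient mass near zero, so a higher ratio suggests blur.
    (Illustrative stand-in for the paper's SIGD measure.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.mean(mag < threshold)

rng = np.random.default_rng(7)
sharp = rng.integers(0, 256, (64, 64)).astype(float)   # high-frequency texture

# Crude 7x7 box blur by averaging shifted copies (stands in for motion blur).
blurred = sum(np.roll(np.roll(sharp, dy, axis=0), dx, axis=1)
              for dy in range(-3, 4) for dx in range(-3, 4)) / 49.0

ratio_sharp = small_gradient_ratio(sharp)
ratio_blurred = small_gradient_ratio(blurred)
```

A key-frame selector in the spirit of the paper would then skip frames whose ratio exceeds an adaptively learned cutoff.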
NASA Technical Reports Server (NTRS)
Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John
2013-01-01
This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied on an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.
Serra-Majem, Lluis; Raposo, António; Aranceta-Bartrina, Javier; Varela-Moreiras, Gregorio; Logue, Caomhan; Laviada, Hugo; Socolovsky, Susana; Pérez-Rodrigo, Carmen; Aldrete-Velasco, Jorge Antonio; Meneses Sierra, Eduardo; López-García, Rebeca; Ortiz-Andrellucchi, Adriana; Gómez-Candela, Carmen; Abreu, Rodrigo; Alexanderson, Erick; Álvarez-Álvarez, Rolando Joel; Álvarez Falcón, Ana Luisa; Anadón, Arturo; Bellisle, France; Beristain-Navarrete, Ina Alejandra; Blasco Redondo, Raquel; Bochicchio, Tommaso; Camolas, José; Cardini, Fernando G; Carocho, Márcio; Costa, Maria do Céu; Drewnowski, Adam; Durán, Samuel; Faundes, Víctor; Fernández-Condori, Roxana; García-Luna, Pedro P; Garnica, Juan Carlos; González-Gross, Marcela; La Vecchia, Carlo; Leis, Rosaura; López-Sobaler, Ana María; Madero, Miguel Agustín; Marcos, Ascensión; Mariscal Ramírez, Luis Alfonso; Martyn, Danika M; Mistura, Lorenza; Moreno Rojas, Rafael; Moreno Villares, José Manuel; Niño-Cruz, José Antonio; Oliveira, María Beatriz P P; Palacios Gil-Antuñano, Nieves; Pérez-Castells, Lucía; Ribas-Barba, Lourdes; Rincón Pedrero, Rodolfo; Riobó, Pilar; Rivera Medina, Juan; Tinoco de Faria, Catarina; Valdés-Ramos, Roxana; Vasco, Elsa; Wac, Sandra N; Wakida, Guillermo; Wanden-Berghe, Carmina; Xóchihua Díaz, Luis; Zúñiga-Guajardo, Sergio; Pyrogianni, Vasiliki; Cunha Velho de Sousa, Sérgio
2018-06-25
International scientific experts in food, nutrition, dietetics, endocrinology, physical activity, paediatrics, nursing, toxicology and public health met in Lisbon on 2-4 July 2017 to develop a Consensus on the use of low- and no-calorie sweeteners (LNCS) as substitutes for sugars and other caloric sweeteners. LNCS are food additives that are broadly used as sugar substitutes to sweeten foods and beverages with the addition of fewer or no calories. They are also used in medicines, health-care products, such as toothpaste, and food supplements. The goal of this Consensus was to provide a useful, evidence-based, point of reference to assist in efforts to reduce free sugars consumption in line with current international public health recommendations. Participating experts in the Lisbon Consensus analysed and evaluated the evidence in relation to the role of LNCS in food safety, their regulation and the nutritional and dietary aspects of their use in foods and beverages. The conclusions of this Consensus were: (1) LNCS are some of the most extensively evaluated dietary constituents, and their safety has been reviewed and confirmed by regulatory bodies globally including the World Health Organisation, the US Food and Drug Administration and the European Food Safety Authority; (2) Consumer education, which is based on the most robust scientific evidence and regulatory processes, on the use of products containing LNCS should be strengthened in a comprehensive and objective way; (3) The use of LNCS in weight reduction programmes that involve replacing caloric sweeteners with LNCS in the context of structured diet plans may favour sustainable weight reduction. Furthermore, their use in diabetes management programmes may contribute to a better glycaemic control in patients, albeit with modest results.
LNCS also provide dental health benefits when used in place of free sugars; (4) It is proposed that foods and beverages with LNCS could be included in dietary guidelines as alternative options to products sweetened with free sugars; (5) Continued education of health professionals is required, since they are a key source of information on issues related to food and health for both the general population and patients. With this in mind, the publication of position statements and consensus documents in the academic literature is extremely desirable.
NASA Astrophysics Data System (ADS)
Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.
2009-11-01
Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationship (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have historically been the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however, they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.
Standard and Robust Methods in Regression Imputation
ERIC Educational Resources Information Center
Moraveji, Behjat; Jafarian, Koorosh
2014-01-01
The aim of this paper is to introduce new imputation algorithms for estimating missing values in larger official-statistics data sets during data pre-processing, or in the presence of outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input.
The R software is available at https://github.com/angy89/RobustSparseCorrelation. Contact: aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online.
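The combination of a robust correlation estimate with adaptive thresholding can be sketched as follows, using Spearman correlation and a permutation-based null quantile as illustrative stand-ins for the paper's estimator and tuning procedure (the paper's actual method differs).

```python
import numpy as np

def thresholded_spearman(X, n_perm=200, quantile=0.99, seed=0):
    """Spearman correlation (robust to outliers) with an adaptive threshold:
    entries smaller in magnitude than a permutation-based null quantile are
    set to zero, filtering spurious correlations. X: (n_samples, n_genes).
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    R = np.corrcoef(ranks, rowvar=False)
    # Null distribution: correlations after independently permuting columns.
    n, p = X.shape
    null = []
    for _ in range(n_perm):
        i, j = rng.integers(0, p, 2)
        null.append(abs(np.corrcoef(rng.permutation(ranks[:, i]),
                                    ranks[:, j])[0, 1]))
    thr = np.quantile(null, quantile)
    R_sparse = np.where(np.abs(R) >= thr, R, 0.0)
    np.fill_diagonal(R_sparse, 1.0)
    return R_sparse, thr

# Two truly co-expressed genes among noise, plus a gross outlier elsewhere.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=50)
X[0, 2] = 50.0
R_sparse, thr = thresholded_spearman(X)
# R_sparse[0, 1] survives the threshold; most spurious entries are zeroed
```

A sparse, outlier-resistant correlation matrix like this is exactly the kind of input that improves downstream clustering or network-reconstruction algorithms such as ARACNE.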
A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.
Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J
2014-01-01
Orientation of human body segments is an important quantity in many biomechanical analyses. To get robust and drift-free 3-D orientation, raw data from miniature body-worn MEMS-based inertial measurement units (IMUs) should be blended in a Kalman filter. Aiming at lower computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against those from a highly accurate tactical-grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
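The cascade can be sketched with a scalar Kalman filter per step: gyro rates drive the prediction, and an absolute angle measurement (accelerometer-derived tilt in step one, magnetometer-derived yaw in step two) drives the correction. This is a simplified stand-in for the paper's filter; the noise parameters and motion profile are illustrative assumptions.

```python
import numpy as np

def kalman_1d(angle_meas, gyro_rate, dt, q=0.01, r=0.1):
    """Scalar Kalman filter fusing an integrated gyro rate (prediction)
    with a noisy absolute angle measurement (correction)."""
    theta, p_var = angle_meas[0], 1.0
    out = []
    for z, w in zip(angle_meas, gyro_rate):
        theta += w * dt              # predict by integrating the gyro
        p_var += q
        k = p_var / (p_var + r)      # Kalman gain
        theta += k * (z - theta)     # correct with the absolute measurement
        p_var *= (1 - k)
        out.append(theta)
    return np.array(out)

# Step 1 of the cascade: estimate pitch from accelerometer-derived tilt plus
# a slightly biased gyro, during a constant-rate rotation.
dt, n = 0.01, 1000
t = np.arange(n) * dt
true_pitch = 0.5 * t
rng = np.random.default_rng(5)
accel_pitch = true_pitch + rng.normal(0, 0.05, n)  # noisy accel-derived tilt
gyro = 0.52 + rng.normal(0, 0.01, n)               # true rate 0.5 plus bias
pitch_hat = kalman_1d(accel_pitch, gyro, dt)
# Step 2 would run the same filter on magnetometer yaw, using the estimated
# tilt to tilt-compensate the magnetometer reading first.
```

Splitting tilt and yaw into two small filters avoids one larger coupled state vector, which is the computational saving the cascade is after.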
2012-02-29
[Fragmentary record] ...couples the estimation scheme with the computational scheme, using one to enhance the other. Numerically, this switching changes several of the matrices... Cited works include: M.A. Demetriou, "Enforcing and enhancing consensus of spatially distributed filters utilizing mobile sensor networks," Proceedings of the 49th...; J. H. Seinfeld and S. N. Pandis, Atmospheric Chemistry and Physics: From Air Pollution to Climate Change, New York.
Efficient and robust computation of PDF features from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2009-10-01
We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are designed from the displacement probability density function (PDF). The estimation is based on two steps: first, the approximation of the signal by a series expansion made of Gaussian-Laguerre and spherical harmonics functions; followed by a projection on a finite-dimensional space. In addition, we propose to tackle the problem of robustness to the Rician noise that corrupts in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a variational framework which is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of various features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time-frame.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Research@ARL: Network Sciences
2013-03-01
Contents include "…and Power Allocation for Minimum Energy Consumption in Consensus Networks" (p. 21) by Stefania Sardellitti, Sergio Barbarossa, and Ananthram…. The collection's stated aims include battlefield effectiveness, ensuring that Soldier performance requirements are adequately considered in technology development and system design, and Network-Centric Warfare/Operations (NCW/NCO), which seeks to dramatically increase mission effectiveness via robust networking for information sharing leading to shared…
Muchero, Wellington; Diop, Ndeye N; Bhat, Prasanna R; Fenton, Raymond D; Wanamaker, Steve; Pottorff, Marti; Hearne, Sarah; Cisse, Ndiaga; Fatokun, Christian; Ehlers, Jeffrey D; Roberts, Philip A; Close, Timothy J
2009-10-27
Consensus genetic linkage maps provide a genomic framework for quantitative trait loci identification, map-based cloning, assessment of genetic diversity, association mapping, and applied breeding in marker-assisted selection schemes. Among "orphan crops" with limited genomic resources such as cowpea [Vigna unguiculata (L.) Walp.] (2n = 2x = 22), the use of transcript-derived SNPs in genetic maps provides opportunities for automated genotyping and estimation of genome structure based on synteny analysis. Here, we report the development and validation of a high-throughput EST-derived SNP assay for cowpea, its application in consensus map building, and determination of synteny to reference genomes. SNP mining from 183,118 ESTs sequenced from 17 cDNA libraries yielded approximately 10,000 high-confidence SNPs from which an Illumina 1,536-SNP GoldenGate genotyping array was developed and applied to 741 recombinant inbred lines from six mapping populations. Approximately 90% of the SNPs were technically successful, providing 1,375 dependable markers. Of these, 928 were incorporated into a consensus genetic map spanning 680 cM with 11 linkage groups and an average marker distance of 0.73 cM. Comparison of this cowpea genetic map to reference legumes, soybean (Glycine max) and Medicago truncatula, revealed extensive macrosynteny encompassing 85 and 82%, respectively, of the cowpea map. Regions of soybean genome duplication were evident relative to the simpler diploid cowpea. Comparison with Arabidopsis revealed extensive genomic rearrangement with some conserved microsynteny. These results support evolutionary closeness between cowpea and soybean and identify regions for synteny-based functional genomics studies in legumes.
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
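The input class described above can be sketched in a few lines. This is a minimal illustration of a Schroeder-phased multisine, not the paper's estimator; the harmonic count and amplitude are arbitrary choices. Over one full period the signal's energy sits only on the excited DFT bins, which is the property the plant set estimation exploits.

```python
import numpy as np

def schroeder_multisine(n_harm, n_samples, amplitude=1.0):
    """Flat-amplitude multisine with Schroeder phases (low crest factor);
    one full period puts energy only at DFT bins 1..n_harm."""
    k = np.arange(1, n_harm + 1)
    phases = -np.pi * k * (k - 1) / n_harm     # Schroeder's phase rule
    t = np.arange(n_samples)
    return sum(amplitude * np.cos(2 * np.pi * ki * t / n_samples + ph)
               for ki, ph in zip(k, phases))

u = schroeder_multisine(16, 1024)
```

Checking the spectrum of `u` confirms that all energy is confined to bins 1 through 16, so a frequency-domain estimator only needs to evaluate those discrete points.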
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold, addressing nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. The unscented transformation has also been utilized to develop a three-dimensional, geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant-velocity filter by utilizing Covariance Intersection. The combined prediction can then be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant, a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems.
The first concerns the design of prefilters for linear and nonlinear spring-mass-dashpot systems, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
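The sigma-point propagation at the core of these statistical tools can be sketched with the basic Julier-Uhlmann unscented transform (the higher-order extensions described above are beyond this sketch; `kappa` is a tuning assumption). For a linear map the transform recovers the mean and covariance exactly, which makes it easy to verify.

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate mean/cov through a nonlinear map f via 2n+1 sigma points
    (basic Julier-Uhlmann weights; a sketch, not the dissertation's
    higher-order extension)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])
    ym = w @ ys                                 # transformed mean
    yc = sum(wi * np.outer(yi - ym, yi - ym) for wi, yi in zip(w, ys))
    return ym, yc
```

The sigma points match the input mean and covariance by construction, so any linear function is propagated without error; the payoff comes when `f` is nonlinear.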
A robust approach for ECG-based analysis of cardiopulmonary coupling.
Zheng, Jiewen; Wang, Weidong; Zhang, Zhengbo; Wu, Dalei; Wu, Hao; Peng, Chung-Kang
2016-07-01
Deriving a respiratory signal from a surface electrocardiogram (ECG) measurement has the advantage of simultaneously monitoring cardiac and respiratory activities. ECG-based cardiopulmonary coupling (CPC) analysis, estimated from heart period variability and ECG-derived respiration (EDR), shows promising applications in the medical field. The aim of this paper is to provide a quantitative analysis of ECG-based CPC and to further improve its performance. Two conventional strategies were tested to obtain the EDR signal: R-S wave amplitude and area of the QRS complex. An adaptive filter was utilized to extract the common component of the inter-beat interval (RRI) and EDR, generating enhanced versions of the EDR signal. CPC is assessed by probing the nonlinear phase interactions between the RRI series and the respiratory signal. Respiratory oscillations present in both the RRI series and respiratory signals were extracted by ensemble empirical mode decomposition for coupling analysis via the phase synchronization index. The results demonstrated that CPC estimated from conventional EDR series exhibits constant and proportional biases, while that estimated from enhanced EDR series is more reliable. Adaptive filtering can significantly improve the accuracy of ECG-based CPC estimation and achieve robust CPC analysis. The improved ECG-based CPC estimation may provide additional prognostic information for both sleep medicine and autonomic function analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
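The phase synchronization index used in the coupling analysis above can be sketched with an FFT-based analytic signal. This is a minimal stand-in for the paper's EEMD-based pipeline, using synthetic sinusoids instead of RRI and respiratory series.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x - np.mean(x))
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_sync_index(a, b):
    """|<exp(j(phi_a - phi_b))>|: 1 means phase-locked, ~0 unrelated."""
    dphi = np.angle(analytic_signal(a)) - np.angle(analytic_signal(b))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Two oscillations at the same frequency score near 1 regardless of their fixed phase offset, while unrelated frequencies score near 0.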
Robust inference under the beta regression model with application to health care studies.
Ghosh, Abhik
2017-01-01
Data on rates, percentages, or proportions arise frequently in many applied disciplines, such as medical biology, health care, and psychology. In this paper, we develop a robust inference procedure for the beta regression model, which is used to describe such response variables taking values in (0, 1) through related explanatory variables. For the beta regression model, the issue of robustness has been largely ignored in the literature so far. The existing maximum likelihood-based inference suffers from a serious lack of robustness against outliers and generates drastically different (erroneous) inferences in the presence of data contamination. Here, we develop the robust minimum density power divergence estimator and a class of robust Wald-type tests for the beta regression model, along with several applications. We derive their asymptotic properties and describe their robustness theoretically through influence function analyses. Finite-sample performances of the proposed estimators and tests are examined through suitable simulation studies and real data applications in the context of health care and psychology. Although we primarily focus on beta regression models with a fixed dispersion parameter, some indications are also provided for extension to variable dispersion beta regression models, with an application.
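The minimum density power divergence idea can be shown in its simplest setting, far from the paper's beta-regression version: estimating a normal mean with sigma fixed at 1. This toy sketch is our own illustration of why the estimator resists contamination; the grid search and tuning constant are illustrative choices.

```python
import numpy as np

def mdpde_normal_mean(x, alpha=0.5):
    """Minimum density power divergence estimate of a normal mean with
    sigma fixed at 1 (a toy sketch of the estimator class, not the
    paper's beta-regression estimator); alpha -> 0 recovers the MLE.
    For N(mu, 1) the integral term of the DPD objective is constant in
    mu, so minimising the divergence maximises the alpha-weighted sum
    of downweighted likelihood contributions below."""
    grid = np.linspace(np.median(x) - 3, np.median(x) + 3, 6001)
    scores = [np.sum(np.exp(-alpha * (x - mu) ** 2 / 2)) for mu in grid]
    return grid[int(np.argmax(scores))]
```

Points far from the bulk contribute almost nothing to the score, so a cluster of gross outliers barely moves the estimate, unlike the sample mean.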
Rosas Hernández, Ana María; Alejandre Carmona, Sergio; Rodríguez Sánchez, Javier Enrique; Castell Alcalá, Maria Victoria; Otero Puime, Ángel
2018-03-16
Identify the population over 70 years old treated in primary care who should participate in a physical exercise program to prevent frailty, and analyze the concordance between two criteria used to select the beneficiary population of the program. Population-based cross-sectional study in primary care. Elderly people over 70 years old living in the Peñagrande neighborhood (Fuencarral district of Madrid), from the Peñagrande cohort, who agreed to participate in 2015 (n = 332). The main variable of the study is the need for exercise prescription in people over 70 years old in the primary care setting. It was identified through two different definitions: prefrail individuals (1-2 of 5 Fried criteria) and independent individuals with limited physical performance as defined by the consensus on frailty and falls prevention among the elderly (independent, with a total SPPB score < 10). Of the participants, 63.8% (n = 196) needed exercise prescription based on the criteria defined by Fried and/or the consensus for prevention of frailty and falls in the elderly. In 82 cases both criteria were met; 80 participants were prefrail with normal physical performance, and 34 were robust with limited physical performance. The concordance between the two criteria is weak (kappa index 0.27). Almost two-thirds of the elderly have some kind of functional limitation. The criteria of the consensus document to prevent frailty detect half of the prefrail individuals in the community. Copyright © 2018 The Authors. Published by Elsevier España, S.L.U. All rights reserved.
Aging effects on DNA methylation modules in human brain and blood tissue
2012-01-01
Background Several recent studies reported aging effects on DNA methylation levels of individual CpG dinucleotides. But it is not yet known whether aging-related consensus modules, in the form of clusters of correlated CpG markers, can be found that are present in multiple human tissues. Such a module could facilitate the understanding of aging effects on multiple tissues. Results We therefore employed weighted correlation network analysis of 2,442 Illumina DNA methylation arrays from brain and blood tissues, which enabled the identification of an age-related co-methylation module. Module preservation analysis confirmed that this module can also be found in diverse independent data sets. Biological evaluation showed that module membership is associated with Polycomb group target occupancy counts, CpG island status and autosomal chromosome location. Functional enrichment analysis revealed that the aging-related consensus module comprises genes that are involved in nervous system development, neuron differentiation and neurogenesis, and that it contains promoter CpGs of genes known to be down-regulated in early Alzheimer's disease. A comparison with a standard, non-module based meta-analysis revealed that selecting CpGs based on module membership leads to significantly increased gene ontology enrichment, thus demonstrating that studying aging effects via consensus network analysis enhances the biological insights gained. Conclusions Overall, our analysis revealed a robustly defined age-related co-methylation module that is present in multiple human tissues, including blood and brain. We conclude that blood is a promising surrogate for brain tissue when studying the effects of age on DNA methylation profiles. PMID:23034122
Generalizations and Extensions of the Probability of Superiority Effect Size Estimator
ERIC Educational Resources Information Center
Ruscio, John; Gera, Benjamin Lee
2013-01-01
Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
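The A statistic discussed above has a direct empirical form over all cross-group pairs; the function name and implementation below are ours, not the article's.

```python
import numpy as np

def prob_superiority(x, y):
    """A = Pr(X > Y) + 0.5 * Pr(X = Y) over all cross-group pairs:
    the chance a random member of group x outscores one of group y."""
    d = np.asarray(x, float)[:, None] - np.asarray(y, float)[None, :]
    return (np.count_nonzero(d > 0) + 0.5 * np.count_nonzero(d == 0)) / d.size
```

Because it depends only on pairwise orderings, the statistic is unchanged by any monotone transformation of the scores, which is the source of its robustness to outliers and parametric violations.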
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
NASA Astrophysics Data System (ADS)
Tan, Zhen-ya; He, Feng
2018-04-01
With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to acquire the multichannel SAR processing filter. The performance of processing under a nonuniform scattering coefficient is improved by this novel method, and it is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
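The reconstruction step can be sketched for a generic uniform linear array. This is a schematic of covariance-matrix reconstruction from the Capon spectrum, not the authors' SAR-specific implementation; the array size, angle grid, and sector boundaries are illustrative assumptions.

```python
import numpy as np

def capon_spectrum(R, a):
    """Capon power estimate in the direction with steering vector a."""
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))

def reconstruct_inc(R, steering, grid, signal_sector):
    """Integrate the Capon spectrum over every direction OUTSIDE the
    assumed signal sector to rebuild the interference-plus-noise
    covariance matrix from its definition."""
    lo, hi = signal_sector
    Rin = np.zeros_like(R)
    for th in grid:
        if lo <= th <= hi:
            continue
        a = steering(th)
        Rin += capon_spectrum(R, a) * np.outer(a, a.conj())
    return Rin

# Toy scene: 8-element half-wavelength ULA, look direction 0 rad,
# one strong interferer near 23 degrees (0.4 rad).
M = 8
steer = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))
ai = steer(0.4)
R = np.eye(M) + 100.0 * np.outer(ai, ai.conj())
grid = np.linspace(-np.pi / 2, np.pi / 2, 181)
Rin = reconstruct_inc(R, steer, grid, (-0.1, 0.1))
w = np.linalg.solve(Rin, steer(0.0))   # adaptive weight (unnormalised)
```

Because the signal sector is excluded from the integration, the filter built from the reconstructed matrix suppresses the interferer without self-nulling the look direction.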
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.
Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent
2017-01-01
In this paper, a new hybrid robust fault-tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
Vector autoregressive models: A Gini approach
NASA Astrophysics Data System (ADS)
Mussard, Stéphane; Ndiaye, Oumar Hamady
2018-02-01
In this paper, it is proven that the usual VAR models may be performed in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers. As a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is conducted with the ℓ1 norm. Impulse response functions and Gini decompositions for forecast errors are also introduced. Finally, Granger causality tests are properly derived based on U-statistics.
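The Gini-sense regression underlying these VAR models can be sketched in the simple bivariate case. This is our own minimal illustration, not the paper's multivariate estimator: the slope replaces the usual variance with the covariance between each variable and the rank of the regressor, which damps the influence of extreme regressor values.

```python
import numpy as np

def gini_slope(x, y):
    """Semi-parametric Gini regression slope:
    cov(y, rank(x)) / cov(x, rank(x)); no ties assumed."""
    r = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    return np.cov(y, r, bias=True)[0, 1] / np.cov(x, r, bias=True)[0, 1]
```

On exact linear data the Gini slope equals the true slope; under a contaminated leverage point it deviates less than ordinary least squares.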
Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval
NASA Astrophysics Data System (ADS)
Batchelder, William H.; Strashny, Alex; Romney, A. Kimball
Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on the semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method respectively. Experiments show that our method can provide superior performance on the above comparison. PMID:29614028
Combined radar-radiometer surface soil moisture and roughness estimation
USDA-ARS?s Scientific Manuscript database
A robust physics-based combined radar-radiometer, or Active-Passive, surface soil moisture and roughness estimation methodology is presented. Soil moisture and roughness retrieval is performed via optimization, i.e., minimization, of a joint objective function which constrains similar resolution rad...
Fast and robust estimation of ophthalmic wavefront aberrations
NASA Astrophysics Data System (ADS)
Dillon, Keith
2016-12-01
Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach utilizes neither phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant of image artifacts, which need not be detected and masked or interpolated as in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.
Trust-based Task Assignment in Military Tactical Networks
2012-06-01
Choi et al. [9] and Choudhury et al. [10] investigated market-based task allocation algorithms for multi-robot systems. Choi et al. [9] proposed a consensus-based bundle algorithm for rapid conflict-free matching between tasks and robots ("Decentralized Auctions for Robust Task Allocation," IEEE Trans. on Robotics, vol. 25, no. 4, pp. 912-926, Aug. 2009). Choudhury et al. [10] conducted an empirical study on a…
NASA Astrophysics Data System (ADS)
Davidzon, I.; Ilbert, O.; Laigle, C.; Coupon, J.; McCracken, H. J.; Delvecchio, I.; Masters, D.; Capak, P.; Hsieh, B. C.; Le Fèvre, O.; Tresse, L.; Bethermin, M.; Chang, Y.-Y.; Faisst, A. L.; Le Floc'h, E.; Steinhardt, C.; Toft, S.; Aussel, H.; Dubois, C.; Hasinger, G.; Salvato, M.; Sanders, D. B.; Scoville, N.; Silverman, J. D.
2017-09-01
We measure the stellar mass function (SMF) and stellar mass density of galaxies in the COSMOS field up to z ∼ 6. We select them in the near-IR bands of the COSMOS2015 catalogue, which includes ultra-deep photometry from UltraVISTA-DR2, SPLASH, and Subaru/Hyper Suprime-Cam. At z > 2.5 we use new precise photometric redshifts with error σz = 0.03(1 + z) and an outlier fraction of 12%, estimated by means of the unique spectroscopic sample of COSMOS (∼100,000 spectroscopic measurements in total, more than one thousand having robust zspec > 2.5). The increased exposure time in the DR2, along with our panchromatic detection strategy, allows us to improve the completeness at high z with respect to previous UltraVISTA catalogues (e.g. our sample is >75% complete at 10^10 ℳ⊙ and z = 5). We also identify passive galaxies through a robust colour-colour selection, extending their SMF estimate up to z = 4. Our work provides a comprehensive view of galaxy stellar-mass assembly between z = 0.1 and 6, for the first time using consistent estimates across the entire redshift range. We fit these measurements with a Schechter function, correcting for Eddington bias. We compare the SMF fit with the halo mass function predicted from ΛCDM simulations, finding that at z > 3 both functions decline with a similar slope in the high-mass end. This feature could be explained by assuming that mechanisms quenching star formation in massive haloes become less effective at high redshifts; however, further work needs to be done to confirm this scenario. Concerning the SMF low-mass end, it shows a progressive steepening towards higher redshifts, with α decreasing from -1.47 (+0.02/-0.02) at z ≃ 0.1 to -2.11 (+0.30/-0.13) at z ≃ 5. This slope depends on the characterisation of the observational uncertainties, which is crucial to properly remove the Eddington bias.
We show that there is currently no consensus on the method to quantify such errors: different error models result in different best-fit Schechter parameters. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium (http://ultravista.org/). Based on data produced by the SPLASH team from observations made with the Spitzer Space Telescope (http://splash.caltech.edu).
Avital, Itzhak; Langan, Russell C.; Summers, Thomas A.; Steele, Scott R.; Waldman, Scott A.; Backman, Vadim; Yee, Judy; Nissan, Aviram; Young, Patrick; Womeldorph, Craig; Mancusco, Paul; Mueller, Renee; Noto, Khristian; Grundfest, Warren; Bilchik, Anton J.; Protic, Mladjan; Daumer, Martin; Eberhardt, John; Man, Yan Gao; Brücher, Björn LDM; Stojadinovic, Alexander
2013-01-01
Colorectal cancer (CRC) is the third most common cause of cancer-related death in the United States (U.S.), with estimates of 143,460 new cases and 51,690 deaths for the year 2012. Numerous organizations have published guidelines for CRC screening; however, these numerical estimates of incidence and disease-specific mortality have remained stable from years prior. Technological, genetic profiling, molecular and surgical advances in our modern era should allow us to improve risk stratification of patients with CRC and identify those who may benefit from preventive measures, early aggressive treatment, alternative treatment strategies, and/or frequent surveillance for the early detection of disease recurrence. To better negotiate future economic constraints and enhance patient outcomes, we ultimately propose to apply the principles of personalized and precise cancer care to risk-stratify patients for CRC screening (Precision Risk Stratification-Based Screening, PRSBS). We believe that genetic, molecular, ethnic and socioeconomic disparities impact oncological outcomes in general, and those related to CRC in particular. This document highlights evidence-based screening recommendations and risk stratification methods in response to our CRC working group private-public consensus meeting held in March 2012. Our aim was to address how we could improve CRC risk stratification-based screening, and to provide a vision for the future to achieving superior survival rates for patients diagnosed with CRC. PMID:23459409
Alsop, David C.; Detre, John A.; Golay, Xavier; Günther, Matthias; Hendrikse, Jeroen; Hernandez-Garcia, Luis; Lu, Hanzhang; MacIntosh, Bradley J.; Parkes, Laura M.; Smits, Marion; van Osch, Matthias J. P.; Wang, Danny JJ; Wong, Eric C.; Zaharchuk, Greg
2014-01-01
This article provides a summary statement of recommended implementations of arterial spin labeling (ASL) for clinical applications. It is a consensus of the ISMRM Perfusion Study Group and the European ‘ASL in Dementia’ consortium, both of whom met to reach this consensus in October 2012 in Amsterdam. Although ASL continues to undergo rapid technical development, we believe that current ASL methods are robust and ready to provide useful clinical information, and that a consensus statement on recommended implementations will help the clinical community to adopt a standardized approach. In this article we describe the major considerations and tradeoffs in implementing an ASL protocol, and provide specific recommendations for a standard approach. Our conclusions are that, as an optimal default implementation we recommend: pseudo-continuous labeling, background suppression, a segmented 3D readout without vascular crushing gradients, and calculation and presentation of both label/control difference images and cerebral blood flow in absolute units using a simplified model. PMID:24715426
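The "simplified model" mentioned above is a single-compartment quantification formula. Below is a sketch of it in code, using the default 3T pCASL parameter values (labeling efficiency, blood T1, and partition coefficient) as we recall them from the consensus statement; verify the constants against the article before any real use.

```python
import math

def asl_cbf(dM, M0, pld_s, tau_s, t1b_s=1.65, alpha=0.85, lam=0.9):
    """Single-compartment pCASL quantification, as in the simplified
    model recommended by the ASL consensus statement (constants here
    are assumed defaults for pCASL at 3T, not guaranteed exact).
    dM:   label/control difference signal
    M0:   proton-density signal
    pld_s, tau_s: post-labeling delay and label duration in seconds
    Returns CBF in ml/100 g/min."""
    num = 6000.0 * lam * dM * math.exp(pld_s / t1b_s)
    den = 2.0 * alpha * t1b_s * M0 * (1.0 - math.exp(-tau_s / t1b_s))
    return num / den
```

With a typical perfusion-weighted signal of about 1% of M0, a 1.8 s label and 1.8 s delay, the formula returns a physiologically plausible gray-matter-range value, and it scales linearly with the difference signal.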
Decentralized Observer with a Consensus Filter for Distributed Discrete-Time Linear Systems
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Mandic, Milan
2011-01-01
This paper presents a decentralized observer with a consensus filter for the state observation of discrete-time linear distributed systems. In this setup, each agent in the distributed system has an observer with a model of the plant that utilizes the set of locally available measurements, which may not make the full plant state detectable. This lack of detectability is overcome by utilizing a consensus filter that blends the state estimate of each agent with its neighbors' estimates. We assume that both the communication graph and the sensing graph are connected at all times. It is proven that the state estimates of the proposed observer asymptotically converge to the actual plant states under arbitrarily changing, but connected, communication and sensing topologies. As a byproduct of this research, we also obtained a result on the location of the eigenvalues (the spectrum) of the Laplacian for a family of graphs with self-loops.
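The blend of a local observer update with neighbor averaging can be seen on a toy two-agent example in which each agent measures only one state, so neither is locally detectable, yet the consensus step makes both estimates converge. The gains and consensus weights below are illustrative choices, not values from the paper:

```python
import numpy as np

# Plant: two states; agent i measures only state i, so the plant is
# not detectable from either measurement alone.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
L = [np.array([[0.6], [0.0]]), np.array([[0.0], [0.6]])]   # local gains
W = np.array([[0.5, 0.5], [0.5, 0.5]])   # consensus weights (connected graph)

x = np.array([1.0, -1.0])                # true plant state
xh = [np.zeros(2), np.zeros(2)]          # each agent's estimate

for _ in range(200):
    y = [Ci @ x for Ci in C]             # locally available measurements
    # local Luenberger prediction using only the agent's own measurement
    pred = [A @ xh[i] + (L[i] @ (y[i] - C[i] @ xh[i])).ravel()
            for i in range(2)]
    # consensus filter: blend own prediction with neighbors' predictions
    xh = [sum(W[i, j] * pred[j] for j in range(2)) for i in range(2)]
    x = A @ x                            # plant advances one step
```

With these gains the averaged error map has spectral radius 0.7, so both agents' estimation errors decay geometrically even though neither agent could observe the full state alone.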
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
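The adaptive fading idea rests on testing the filter innovation with a Mahalanobis distance and inflating the predicted covariance when the innovation is statistically too large. The exact fading law of the paper may differ; this is a sketch of the common chi-square-test form, shown on a plain 1-D Kalman filter rather than a UKF:

```python
import numpy as np

def fading_factor(innovation, S, chi2_threshold):
    """Mahalanobis test on the innovation; returns a factor >= 1 used
    to inflate the predicted covariance when the squared distance
    exceeds the chi-square threshold (assumed mechanism)."""
    d2 = (innovation.T @ np.linalg.inv(S) @ innovation).item()
    return max(1.0, d2 / chi2_threshold)

# 1-D constant-state Kalman filter with a fading step
x, P = 0.0, 1.0
Q, R = 0.01, 0.25
chi2_95_df1 = 3.841            # 95% chi-square quantile, 1 dof
lams = []
for z in [0.1, -0.2, 0.05, 5.0, 0.1]:   # the 5.0 mimics a modeling error
    P_pred = P + Q
    v = np.array([[z - x]])              # innovation
    S = np.array([[P_pred + R]])         # innovation covariance
    lam = fading_factor(v, S, chi2_95_df1)
    lams.append(lam)
    P_pred *= lam                        # fade: discount stale information
    K = P_pred / (P_pred + R)
    x = x + K * (z - x)
    P = (1 - K) * P_pred
```

On the nominal measurements the factor stays at 1 (standard Kalman filter); the outlying measurement triggers a factor well above 1, which boosts the gain so the filter re-tracks quickly.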
NASA Astrophysics Data System (ADS)
Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho
2018-01-01
The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, when using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Even small changes such as tire pressure levels, passenger weight, or road conditions can affect a car's equilibrium. Therefore, to compensate for this misalignment, an additional technique is necessary, specifically an on-line calibration method. On-line calibration can recalculate the homographies, which can correct any degree of misalignment, using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus and parameter estimation. Finally, the misaligned epipolar geometries are compensated via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
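The homography-from-correspondences step at the heart of such a pipeline is the direct linear transform (DLT): stack two linear equations per point pair and take the null-space vector of the resulting system. This is a generic sketch of that model-fitting step (the one a RANSAC loop would fit to sampled feature pairs and re-score against the rest), not the paper's specific implementation:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src in
    homogeneous coordinates, from >= 4 correspondences (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null-space vector = right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Sanity check: recover a known homography from exact correspondences
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25.0]])
p = np.c_[src, np.ones(5)] @ H_true.T    # map through H_true
dst = p[:, :2] / p[:, 2:]                # back to inhomogeneous coordinates
H = dlt_homography(src, dst)
```

With exact correspondences the estimated H matches the ground truth up to floating-point error; in practice the DLT is run on RANSAC inlier sets to reject mismatched features.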
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper, we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target point sets, respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation, in the form of a thin-plate spline, of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
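The rigid-transform estimate that a RANSAC scheme minimizes over matched pairs has a closed-form least-squares solution, the SVD-based Kabsch method. Below is a generic sketch of that inner fitting step (not the CHARM implementation itself); a RANSAC loop would call it on sampled correspondences and keep the model with the largest consensus set:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= R @ P + t,
    via the SVD-based Kabsch method; P, Q are 3xN matched points."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Sanity check: recover a known rotation and translation exactly
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 20))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R, t = rigid_fit(P, R_true @ P + t_true)
```

On noise-free correspondences the recovery is exact; with outliers present, running this fit inside RANSAC (fit on a minimal sample, count inliers, refit on the best inlier set) gives the robust estimate the abstract describes.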
NASA Astrophysics Data System (ADS)
Safdernejad, Morteza S.; Karpenko, Oleksii; Ye, Chaofeng; Udpa, Lalita; Udpa, Satish
2016-02-01
The advent of Giant Magneto-Resistive (GMR) technology permits development of novel highly sensitive array probes for Eddy Current (EC) inspection of multi-layer riveted structures. Multi-frequency GMR measurements with different EC penetration depths show promise for detection of bottom layer notches at fastener sites. However, the distortion of the induced magnetic field due to flaws is dominated by the strong fastener signal, which makes defect detection and classification a challenging problem. This issue is more pronounced for ferromagnetic fasteners, which concentrate most of the magnetic flux. In the present work, a novel multi-frequency mixing algorithm is proposed to suppress the rivet signal response and enhance the defect detection capability of the GMR array probe. The algorithm is baseline-free and does not require any assumptions about the sample geometry being inspected. Fastener signal suppression is based upon the random sample consensus (RANSAC) method, which iteratively estimates parameters of a mathematical model from a set of observed data with outliers. Bottom layer defects at fastener sites are simulated as EDM notches of different lengths. Performance of the proposed multi-frequency mixing approach is evaluated on finite element data and experimental GMR measurements obtained with unidirectional planar current excitation. Initial results are promising, demonstrating the feasibility of the approach.
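The RANSAC principle invoked here, repeatedly fit a model to a minimal random sample and keep the model supported by the largest consensus set, is easiest to see on a toy problem. The sketch below fits a line to data with gross outliers; in the mixing application the "inlier" model would play the role of the dominant fastener response, with the outliers carrying the defect signal (this is our illustrative stand-in, not the paper's model):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, rng=None):
    """Minimal RANSAC: fit y = a*x + b to Nx2 data with outliers,
    keeping the model with the largest consensus (inlier) set."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                      # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0
y[::10] += 3.0                           # a few gross outliers
model, inliers = ransac_line(np.c_[x, y])
```

Because any minimal sample of two clean points reproduces the true line exactly, the winning model recovers slope 2 and intercept 1 while flagging the five corrupted points as outliers.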
M-estimation for robust sparse unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Toomik, Maria; Lu, Shijian; Nelson, James D. B.
2016-10-01
Hyperspectral unmixing methods often use a conventional least squares based lasso, which assumes that the data follow a Gaussian distribution. The normality assumption is an approximation that is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several appropriately chosen penalties. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem, though non-convex, can nevertheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries, we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting, we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. The ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
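The iteratively reweighted least squares (IRLS) route to the non-convex lp penalty replaces |x_i|^p with a quadratic surrogate whose weights are recomputed from the current iterate. This is one standard reweighting scheme for that penalty, sketched on a synthetic sparse-recovery problem; the paper's algorithm and its robust ρ(e) data term may differ in details:

```python
import numpy as np

def irls_lp(A, y, lam=0.01, p=0.5, n_iter=50, eps=1e-8):
    """IRLS sketch for the non-convex problem
    min ||A x - y||^2 + lam * sum_i |x_i|^p, with 0 < p < 1.
    Each pass solves a weighted ridge system with weights
    w_i = (|x_i| + eps)^(p-2), which majorises the lp penalty."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = (np.abs(x) + eps) ** (p - 2.0)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# Sparse recovery demo: 3 active "endmembers" out of a 30-column library
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 30))
x_true = np.zeros(30)
x_true[[2, 11, 25]] = [1.0, 0.5, 0.8]
y = A @ x_true
x_hat = irls_lp(A, y)
```

The huge weights on near-zero coefficients drive them to (numerical) zero while the active coefficients are barely penalized, which is exactly the stronger-than-l1 sparsity the 0 < p < 1 penalty is chosen for.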
A Robust Nonlinear Observer for Real-Time Attitude Estimation Using Low-Cost MEMS Inertial Sensors
Guerrero-Castellanos, José Fermi; Madrigal-Sastre, Heberto; Durand, Sylvain; Torres, Lizeth; Muñoz-Hernández, German Ardul
2013-01-01
This paper deals with the attitude estimation of a rigid body equipped with angular velocity sensors and reference vector sensors. A quaternion-based nonlinear observer is proposed in order to fuse all information sources and to obtain an accurate estimation of the attitude. It is shown that the observer error dynamics can be separated into two passive subsystems connected in “feedback”. Then, this property is used to show that the error dynamics is input-to-state stable when the measurement disturbance is seen as an input and the error as the state. These results allow one to affirm that the observer is “robustly stable”. The proposed observer is evaluated in real-time with the design and implementation of an Attitude and Heading Reference System (AHRS) based on low-cost MEMS (Micro-Electro-Mechanical Systems) Inertial Measure Unit (IMU) and magnetic sensors and a 16-bit microcontroller. The resulting estimates are compared with a high precision motion system to demonstrate its performance. PMID:24201316
Non-invasive pressure difference estimation from PC-MRI using the work-energy equation
Donati, Fabrizio; Figueroa, C. Alberto; Smith, Nicolas P.; Lamata, Pablo; Nordsletten, David A.
2015-01-01
Pressure difference is an accepted clinical biomarker for cardiovascular disease conditions such as aortic coarctation. Currently, measurements of pressure differences in the clinic rely on invasive techniques (catheterization), prompting development of non-invasive estimates based on blood flow. In this work, we propose a non-invasive estimation procedure deriving pressure difference from the work-energy equation for a Newtonian fluid. Spatial and temporal convergence is demonstrated on in silico Phase Contrast Magnetic Resonance Image (PC-MRI) phantoms with steady and transient flow fields. The method is also tested on an image dataset generated in silico from a 3D patient-specific Computational Fluid Dynamics (CFD) simulation and finally evaluated on a cohort of 9 subjects. The performance is compared to existing approaches based on steady and unsteady Bernoulli formulations as well as the pressure Poisson equation. The new technique shows good accuracy, robustness to noise, and robustness to the image segmentation process, illustrating the potential of this approach for non-invasive pressure difference estimation. PMID:26409245
Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22
ERIC Educational Resources Information Center
Braun, Henry; Qu, Yanxuan
2008-01-01
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
ERIC Educational Resources Information Center
Edwards, Oliver W.; Taub, Gordon E.
2016-01-01
Research indicates the primary difference between strong and weak readers is their phonemic awareness skills. However, there is no consensus regarding which specific components of phonemic awareness contribute most robustly to reading comprehension. In this study, the relationship among sound blending, sound segmentation, and reading comprehension…
Development on electromagnetic impedance function modeling and its estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2015-09-01
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances has been developed based on the edge element method. In the CSAMT case, the efforts were focused on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research has focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, is used to deal with outliers (abnormal data) that are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models.
Meanwhile, the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including the near-, transition- and far-field zones, so that the plane-wave correction is no longer needed for the impedances. In the resulting robust impedance estimates, outlier contamination is removed and the self-consistency between the real and imaginary parts of the impedance estimates is guaranteed. Using synthetic and real MT data, it is shown that the proposed robust estimation methods always yield impedance estimates that are better than conventional least squares (LS) estimation, even under severe noise contamination. A recent development on constrained robust CSAMT impedance estimation is also discussed. Using synthetic CSAMT data, it is demonstrated that the proposed methods can produce usable CSAMT transfer functions for all measurement zones.
Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.
Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok
2018-05-08
This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce the performance degradation caused by insufficient measurements. Second, the target location is estimated by fusing multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by a stack-based data association method over multiple-frame measurements in order to estimate the target trajectory more accurately. The effectiveness of the proposed method is verified through simulations.
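The underlying geometry of TDOA localization is worth making concrete: each TDOA measurement (scaled by the propagation speed) constrains the difference of ranges to a sensor pair, and the source position is recovered by nonlinear least squares over those constraints. The sketch below uses plain Gauss-Newton on a noise-free 2-D example; it illustrates the measurement model only, not the paper's GMM fusion or track management:

```python
import numpy as np

def locate_tdoa(sensors, rdiff, x0, n_iter=100):
    """Gauss-Newton on TDOA residuals. sensors is Kx2; rdiff holds the
    measured range differences ||x - s_i|| - ||x - s_0|| (c * TDOA)
    for i = 1..K-1, relative to the first sensor."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)
        r = (d[1:] - d[0]) - rdiff            # residuals
        g = (x - sensors) / d[:, None]        # unit vectors sensor -> x
        J = g[1:] - g[0]                      # Jacobian of the residuals
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Four sensors at the corners of a 100 m square, target inside
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([30.0, 60.0])
d = np.linalg.norm(sensors - target, axis=1)
rdiff = d[1:] - d[0]                          # noiseless range differences
est = locate_tdoa(sensors, rdiff, x0=[50.0, 50.0])
```

With three independent range differences and two unknowns the noiseless problem is overdetermined and Gauss-Newton converges to the true position in a handful of iterations; with noisy or missing TDOAs the same residual model is what a more robust fusion scheme must contend with.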
Tanner-Smith, Emily E; Tipton, Elizabeth
2014-03-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
Liu, Fang; Zhang, Wei-Guo
2014-08-01
Due to the vagueness of real-world environments and the subjective nature of human judgments, it is natural for experts to express their judgments by using incomplete interval fuzzy preference relations. In this paper, based on the technique for order preference by similarity to ideal solution (TOPSIS) method, we present a consensus model for group decision-making (GDM) with incomplete interval fuzzy preference relations. To do this, we first define a new consistency measure for incomplete interval fuzzy preference relations. Second, a goal programming model is proposed to estimate the missing interval preference values, guided by the consistency property. Third, an ideal interval fuzzy preference relation is constructed by using the induced ordered weighted averaging operator, where the associated weights characterizing the operator are based on the defined consistency measure. Fourth, a similarity degree between complete interval fuzzy preference relations and the ideal one is defined. The similarity degree is related to the associated weights, and is used to aggregate the experts' preference relations in such a way that more importance is given to those with a higher similarity degree. Finally, a new algorithm is given to solve the GDM problem with incomplete interval fuzzy preference relations, which is further applied to partnership selection in the formation of virtual enterprises.
Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera
Sim, Sungdae; Sock, Juil; Kwak, Kiho
2016-01-01
LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes two ways to improve the calibration accuracy. First, we use different weights to distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
Savalei, Victoria
2018-01-01
A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow-up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator and cannot easily be generalized to the most commonly used categorical data estimators.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
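For readers unfamiliar with the MUSIC family, the core idea is that steering vectors of true sources are orthogonal to the noise subspace of the array covariance matrix, so the reciprocal of that projection peaks at the source directions. Below is the classical far-field narrowband special case for a uniform linear array; the W2D-MUSIC of this paper extends the same idea to 2-D near-field steering vectors with calibrated perturbations:

```python
import numpy as np

def music_spectrum(R, n_src, angles_deg, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array
    with element spacing d in wavelengths."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)            # eigenvalues ascending
    En = vecs[:, : m - n_src]              # noise-subspace eigenvectors
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(th))
        proj = np.real(a.conj() @ En @ En.conj().T @ a)
        spec.append(1.0 / (proj + 1e-12))  # peaks where a ⟂ noise subspace
    return np.array(spec)

# One source at 20 degrees, 8-element half-wavelength ULA, modest noise
m, theta = 8, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(theta))
R = np.outer(a, a.conj()) + 0.01 * np.eye(m)
angles = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(R, n_src=1, angles_deg=angles)
```

The pseudo-spectrum peaks sharply at the true bearing; model errors (gain, phase, element position) distort the steering vector a and flatten this peak, which is precisely why the paper estimates those perturbations first.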
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate measured wavelengths belonging to the continuous feature for model adaptation, overcoming the restriction of having to measure the target baseline for training. The main contributions of this paper are: to analyze a flame spectra database by computing Jolliffe statistics from Principal Component Analysis, detecting wavelengths not correlated with most of the measured data, which correspond to the baseline; to systematically determine the optimal number of neurons in hidden layers based on Akaike's Final Prediction Error; to estimate the baseline over the full wavelength range by sampling measured spectra; and to train a neural network that generalizes the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing diagnosis of the combustion process state for optimization in early stages.
Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.
Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia
2018-06-01
This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that the local consensus errors of the two systems and the weight estimation errors of the critic-action NNs are uniformly ultimately bounded, while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
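The value-iteration idea behind such adaptive dynamic programming schemes is easiest to see in the linear-quadratic special case, where the Bellman (discrete-time HJB) backup has a closed form: iterating the Riccati recursion from zero converges to the optimal value function V(x) = x'Px. This single-agent sketch illustrates that backup only; the paper's critic-action NNs approximate the same fixed point online for the multiagent problem:

```python
import numpy as np

# Discrete-time double integrator (dt = 0.1) with quadratic costs
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

P = np.zeros((2, 2))                     # value-function weight, start at 0
for _ in range(2000):
    # Greedy (policy-improvement) gain for the current value function
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Bellman backup: one step of the Riccati value iteration
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ K
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # converged optimal gain
```

At convergence P solves the discrete algebraic Riccati equation and the closed loop A - BK is stable, the standard check that value iteration has reached the optimal policy.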
Kebir, Sied; Khurshid, Zain; Gaertner, Florian C; Essler, Markus; Hattingen, Elke; Fimmers, Rolf; Scheffler, Björn; Herrlinger, Ulrich; Bundschuh, Ralph A; Glas, Martin
2017-01-31
Timely detection of pseudoprogression (PSP) is crucial for the management of patients with high-grade glioma (HGG) but remains difficult. Textural features of O-(2-[18F]fluoroethyl)-L-tyrosine positron emission tomography (FET-PET) mirror tumor uptake heterogeneity; some of them may be associated with tumor progression. Fourteen patients with HGG and suspected of PSP underwent FET-PET imaging. A set of 19 conventional and textural FET-PET features were evaluated and subjected to unsupervised consensus clustering. The final diagnosis of true progression vs. PSP was based on follow-up MRI using RANO criteria. Three robust clusters have been identified based on 10 predominantly textural FET-PET features. None of the patients with PSP fell into cluster 2, which was associated with high values for textural FET-PET markers of uptake heterogeneity. Three out of 4 patients with PSP were assigned to cluster 3 that was largely associated with low values of textural FET-PET features. By comparison, tumor-to-normal brain ratio (TNRmax) at the optimal cutoff 2.1 was less predictive of PSP (negative predictive value 57% for detecting true progression, p=0.07 vs. 75% with cluster 3, p=0.04). Clustering based on textural O-(2-[18F]fluoroethyl)-L-tyrosine PET features may provide valuable information in assessing the elusive phenomenon of pseudoprogression.
Tracking Public Beliefs About Anthropogenic Climate Change.
Hamilton, Lawrence C; Hartter, Joel; Lemcke-Stampone, Mary; Moore, David W; Safford, Thomas G
2015-01-01
A simple question about climate change, with one choice designed to match consensus statements by scientists, was asked on 35 US nationwide, single-state or regional surveys from 2010 to 2015. Analysis of these data (over 28,000 interviews) yields robust and exceptionally well replicated findings on public beliefs about anthropogenic climate change, including regional variations, change over time, demographic bases, and the interacting effects of respondent education and political views. We find that more than half of the US public accepts the scientific consensus that climate change is happening now, caused mainly by human activities. A sizable, politically opposite minority (about 30 to 40%) concede the fact of climate change, but believe it has mainly natural causes. Few (about 10 to 15%) say they believe climate is not changing, or express no opinion. The overall proportions appear relatively stable nationwide, but exhibit place-to-place variations. Detailed analysis of 21 consecutive surveys within one fairly representative state (New Hampshire) finds a mild but statistically significant rise in agreement with the scientific consensus over 2010-2015. Effects from daily temperature are detectable but minor. Hurricane Sandy, which brushed New Hampshire but caused no disaster there, shows no lasting impact on that state's time series-suggesting that non-immediate weather disasters have limited effects. In all datasets political orientation dominates among individual-level predictors of climate beliefs, moderating the otherwise positive effects from education. Acceptance of anthropogenic climate change rises with education among Democrats and Independents, but not so among Republicans. The continuing series of surveys provides a baseline for tracking how future scientific, political, socioeconomic or climate developments impact public acceptance of the scientific consensus.
Randomized Subspace Learning for Proline Cis-Trans Isomerization Prediction.
Al-Jarrah, Omar Y; Yoo, Paul D; Taha, Kamal; Muhaidat, Sami; Shami, Abdallah; Zaki, Nazar
2015-01-01
Proline residues are a common source of kinetic complications during folding. The X-Pro peptide bond is the only peptide bond for which the stability of the cis and trans conformations is comparable. The cis-trans isomerization (CTI) of X-Pro peptide bonds is a widely recognized rate-limiting factor, which not only induces additional slow phases in protein folding but also modifies the millisecond and sub-millisecond dynamics of the protein. An accurate computational prediction of proline CTI is of great importance for the understanding of protein folding, splicing, cell signaling, and transmembrane active transport in both humans and animals. In our earlier work, we successfully developed a biophysically motivated proline CTI predictor utilizing a novel tree-based consensus model with a powerful metalearning technique, achieving 86.58 percent Q2 accuracy and an Mcc of 0.74, better than the results (70-73 percent Q2 accuracies) reported in the literature on the well-referenced benchmark dataset. In this paper, we describe experiments with novel randomized subspace learning and bootstrap seeding techniques as an extension to our earlier work, the consensus models as well as entropy-based learning methods, to obtain better accuracy through a precise and robust learning scheme for proline CTI prediction.
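The random-subspace consensus idea can be sketched with decision stumps and majority voting. This is a toy stand-in, not the authors' metalearning pipeline: the data, base learner, and function names are all illustrative.

```python
import random

def train_stump(X, y, feats):
    """Among the given feature subset, pick the (feature, mean-threshold,
    sign) split that best matches the binary labels."""
    best = None
    for f in feats:
        t = sum(x[f] for x in X) / len(X)
        for sign in (1, -1):
            acc = sum(1 for x, lab in zip(X, y)
                      if (1 if sign * (x[f] - t) > 0 else 0) == lab)
            if best is None or acc > best[0]:
                best = (acc, f, t, sign)
    return best[1:]

def subspace_ensemble(X, y, n_models=25, k=2, seed=0):
    """Random subspace learning: each stump sees only k randomly drawn features."""
    rng = random.Random(seed)
    d = len(X[0])
    return [train_stump(X, y, rng.sample(range(d), k)) for _ in range(n_models)]

def predict(models, x):
    """Consensus by majority vote over the ensemble."""
    votes = sum(1 if s * (x[f] - t) > 0 else 0 for f, t, s in models)
    return 1 if votes * 2 >= len(models) else 0
```

Even though roughly half the stumps see only uninformative features, the majority vote recovers the signal carried by the informative subspaces.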
A new class of finite-time nonlinear consensus protocols for multi-agent systems
NASA Astrophysics Data System (ADS)
Zuo, Zongyu; Tie, Lin
2014-02-01
This paper is devoted to investigating the finite-time consensus problem for a multi-agent system in networks with undirected topology. A new class of global continuous time-invariant consensus protocols is constructed for each single-integrator agent dynamics with the aid of Lyapunov functions. In particular, it is shown that the settling time of the proposed new class of finite-time consensus protocols is upper bounded for arbitrary initial conditions. This makes it possible, in network consensus problems, to design and estimate the convergence time offline for a given undirected information flow and number of agents. Finally, a numerical simulation example is presented as a proof of concept.
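A standard finite-time protocol for single integrators uses the signed fractional power of relative states. The sketch below is a generic simulation of that idea, not the paper's specific protocol class; the graph, exponent, and step size are illustrative.

```python
import math

def sig(z, alpha):
    """Signed power sig(z)^alpha = sign(z) * |z|**alpha, alpha in (0, 1)."""
    return math.copysign(abs(z) ** alpha, z)

def finite_time_consensus(x0, adj, alpha=0.5, dt=0.01, steps=2000):
    """Euler simulation of x_i' = -sum_j a_ij * sig(x_i - x_j)^alpha over an
    undirected graph with adjacency matrix adj; the fractional power makes
    the disagreement vanish in finite time (up to numerical chattering)."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        u = [-sum(adj[i][j] * sig(x[i] - x[j], alpha) for j in range(n))
             for i in range(n)]
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x
```

Because the coupling is antisymmetric and the graph undirected, the average of the states is invariant, so the agents agree on the initial mean.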
ERIC Educational Resources Information Center
Rose, V.; Trembath, D.; Keen, D.; Paynter, J.
2016-01-01
Background: Estimates of the proportion of children with autism spectrum disorder (ASD) who are minimally verbal vary from 25% to 35%. However, there is a lack of consensus in defining minimally verbal and few detailed reports of communication outcomes for these children following intervention. The aim of this study was to explore how minimally…
Linden, Ariel
2017-08-01
When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than existing treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effect estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score, and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model only one of the two components (eg, propensity score or outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs equally as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
Carelli, Valerio; Carbonelli, Michele; de Coo, Irenaeus F; Kawasaki, Aki; Klopstock, Thomas; Lagrèze, Wolf A; La Morgia, Chiara; Newman, Nancy J; Orssaud, Christophe; Pott, Jan Willem R; Sadun, Alfredo A; van Everdingen, Judith; Vignal-Clermont, Catherine; Votruba, Marcela; Yu-Wai-Man, Patrick; Barboni, Piero
2017-12-01
Leber hereditary optic neuropathy (LHON) is currently estimated to be the most frequent mitochondrial disease (1 in 27,000-45,000). Its molecular pathogenesis and natural history are now fairly well understood. LHON also is the first mitochondrial disease for which a treatment has been approved (idebenone-Raxone, Santhera Pharmaceuticals) by the European Medicine Agency, under exceptional circumstances because of the rarity and severity of the disease. However, what remains unclear includes the optimal target population, timing, dose, and frequency of administration of idebenone in LHON, due to a lack of accepted definitions, criteria, and general guidelines for the clinical management of LHON. To address these issues, a consensus conference with a panel of experts from Europe and North America was held in Milan, Italy, in 2016. The intent was to provide expert consensus statements for the clinical and therapeutic management of LHON based on the currently available evidence. We report the conclusions of this conference, providing the guidelines for clinical and therapeutic management of LHON.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in both the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
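The idea of gradient descent in an RKHS with a windowed robust loss can be sketched with the Welsch loss, whose derivative r·exp(-r²/σ²) caps the pull of gross outliers. This is an illustrative sketch, not the paper's estimator or analysis; the kernel bandwidth, step size, and iteration count are arbitrary choices.

```python
import math

def gauss_kernel(a, b, gamma=50.0):
    return math.exp(-gamma * (a - b) ** 2)

def robust_kernel_gd(xs, ys, sigma=1.0, eta=1.0, steps=300, gamma=50.0):
    """Gradient descent in the RKHS span f(x) = sum_i alpha_i K(x_i, x),
    minimizing the Welsch loss (sigma**2/2)*(1 - exp(-r**2/sigma**2));
    its derivative r*exp(-r**2/sigma**2) vanishes for huge residuals,
    so outliers are effectively ignored."""
    n = len(xs)
    K = [[gauss_kernel(xi, xj, gamma) for xj in xs] for xi in xs]
    alpha = [0.0] * n
    for _ in range(steps):
        f = [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]
        g = [(f[i] - ys[i]) * math.exp(-((f[i] - ys[i]) / sigma) ** 2)
             for i in range(n)]
        alpha = [alpha[j] - eta / n * sum(K[i][j] * g[i] for i in range(n))
                 for j in range(n)]
    return lambda x: sum(a * gauss_kernel(xi, x, gamma) for a, xi in zip(alpha, xs))
```

Fitting a sine curve with one grossly corrupted label, the fit tracks the clean points and refuses to chase the outlier, which is the qualitative behavior the scale parameter σ controls.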
Neural network uncertainty assessment using Bayesian statistics: a remote sensing application
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.
2004-01-01
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
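The Monte Carlo propagation of weight uncertainty into output error bars can be sketched on a toy linear model, where the analytic answer var(y) = Σᵢ covᵢᵢ·xᵢ² is available for comparison. This is an illustrative sketch, not the article's Bayesian NN machinery; all numbers and names are illustrative.

```python
import math
import random

def mc_output_uncertainty(weights, cov_diag, x, n_samples=5000, seed=0):
    """Monte Carlo propagation: sample weights from a diagonal Gaussian
    around the trained values and collect the induced spread of the linear
    output y = w . x. For a linear map this matches the analytic formula
    var(y) = sum_i cov_ii * x_i**2."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        w = [wi + math.sqrt(ci) * rng.gauss(0, 1)
             for wi, ci in zip(weights, cov_diag)]
        outs.append(sum(wi * xi for wi, xi in zip(w, x)))
    mean = sum(outs) / n_samples
    var = sum((o - mean) ** 2 for o in outs) / n_samples
    return mean, var
```

For a nonlinear network the same sampling additionally yields the spread of the Jacobians, which is the robustness diagnostic the article emphasizes.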
Constrained dynamics approach for motion synchronization and consensus
NASA Astrophysics Data System (ADS)
Bhatia, Divya
In this research we propose to develop constrained dynamical systems based stable attitude synchronization, consensus and tracking (SCT) control laws for formations of rigid bodies. The generalized constrained dynamics Equations of Motion (EOM) are developed utilizing constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to develop the non-linear constrained dynamics of multiple vehicle systems. The constraint potential energy is synthesized based on a graph theoretic formulation of the vehicle-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained dynamics based formations is evaluated for bounded control authority. The above method has been applied to various cases, and MATLAB simulation results show stability, synchronization, consensus and tracking of formations. The first case corresponds to an N-pendulum formation without external disturbances, in which the springs and the dampers connected between the pendulums act as the communication constraints. The damper helps in stabilizing the system by damping the motion, whereas the spring acts as a communication link relaying relative position information between two connected pendulums. A Lyapunov (energy based) stabilization technique is employed to establish attitude stabilization and boundedness. Various scenarios involving different values of springs and dampers are simulated and studied. Motivated by the first case study, we study the formation of N 2-link robotic manipulators. The governing EOM for this system is derived using Euler-Lagrange equations. A generalized set of communication constraints is developed for this system using graph theory. The constraints are stabilized using Baumgarte's technique. The attitude SCT is established for this system and the results are shown for the special case of three 2-link robotic manipulators. These methods are then applied to the formation of N spacecraft. Modified Rodrigues Parameters (MRP) are used for attitude representation of the spacecraft because of their advantage of being a minimum parameter representation. Constrained non-linear equations of motion for this system are developed and stabilized using a Proportional-Derivative (PD) controller derived based on Baumgarte's method. A system of 3 spacecraft is simulated and the results for SCT are shown and analyzed. Another problem studied in this research is that of maintaining SCT under unknown external disturbances. We use an adaptive control algorithm to derive control laws for the actuator torques and develop an estimation law for the unknown disturbance parameters to achieve SCT. The estimate of the disturbance is added as a feed-forward term in the actual control law to obtain stabilization of a 3-spacecraft formation. The disturbance estimates are generated via a Lyapunov analysis of the closed-loop system. In summary, the constrained dynamics method shows considerable potential for formation control, achieving stabilization, synchronization, consensus and tracking of a set of dynamical systems.
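Baumgarte's constraint stabilization, used throughout this work, can be illustrated on the simplest constrained system: a planar pendulum written as an index-1 DAE. This is a generic textbook sketch, not the dissertation's formation model; the gains and step size are illustrative.

```python
def simulate_pendulum(alpha=5.0, beta=5.0, dt=0.001, steps=5000):
    """Unit-mass particle on the unit circle under gravity, solved as an
    index-1 DAE with Baumgarte stabilization: the multiplier lam is chosen
    so that phidd + 2*alpha*phid + beta**2 * phi = 0 holds for the
    constraint phi = (x**2 + y**2 - 1)/2, damping numerical drift."""
    x, y = 1.0, 0.0          # start at the side, at rest
    vx, vy = 0.0, 0.0
    g = 9.81
    for _ in range(steps):
        phi = 0.5 * (x * x + y * y - 1.0)
        phid = x * vx + y * vy
        fx, fy = 0.0, -g
        lam = (-2 * alpha * phid - beta ** 2 * phi
               - (vx * vx + vy * vy) - (x * fx + y * fy)) / (x * x + y * y)
        ax, ay = fx + lam * x, fy + lam * y   # applied force + constraint force
        vx += dt * ax; vy += dt * ay          # semi-implicit Euler step
        x += dt * vx; y += dt * vy
    return x, y
```

Without the alpha/beta feedback terms the constraint residual drifts under numerical integration; with them, the particle stays on the circle to within integration error.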
The wisdom of the commons: ensemble tree classifiers for prostate cancer prognosis.
Koziol, James A; Feng, Anne C; Jia, Zhenyu; Wang, Yipeng; Goodison, Steven; McClelland, Michael; Mercola, Dan
2009-01-01
Classification and regression trees have long been used for cancer diagnosis and prognosis. Nevertheless, instability and variable selection bias, as well as overfitting, are well-known problems of tree-based methods. In this article, we investigate whether ensemble tree classifiers can ameliorate these difficulties, using data from two recent studies of radical prostatectomy in prostate cancer. Using time to progression following prostatectomy as the relevant clinical endpoint, we found that ensemble tree classifiers robustly and reproducibly identified three subgroups of patients in the two clinical datasets: non-progressors, early progressors and late progressors. Moreover, the consensus classifications were independent predictors of time to progression compared to known clinical prognostic factors.
Dalla Vestra, Michele; Grolla, Elisabetta; Bonanni, Luca; Pesavento, Raffaele
2018-03-01
The use of inferior vena cava (IVC) filters to prevent pulmonary embolism is increasing, largely on the basis of indications that appear unclearly codified and recommended. The evidence supporting this approach is often heterogeneous, based mainly on observational studies and consensus opinions, while the insertion of an IVC filter exposes patients to the risk of complications and increases health care costs. Thus, several proposed indications for IVC filter placement remain controversial. We review the evidence on the efficacy and safety of IVC filters in several "special" clinical settings, and assess the robustness of the available evidence for any specific indication to place an IVC filter.
NASA Astrophysics Data System (ADS)
Djomo, S. Njakou; Knudsen, M. T.; Andersen, M. S.; Hermansen, J. E.
2017-11-01
There is an ongoing debate regarding the influence of the source location of pollution on the fate of pollutants and their subsequent impacts. Several methods have been developed to derive site-dependent characterization factors (CFs) for use in life-cycle assessment (LCA). Consistent, precise, and accurate estimates of CFs are crucial for establishing long-term, sustainable air pollution abatement policies. We reviewed currently available studies on the regionalization of non-toxic air pollutants in LCA. We also extracted and converted data into indices for analysis. We showed that CFs can distinguish between emissions occurring in different locations, and that the different methods used to derive CFs map locations consistently from very sensitive to less sensitive. Seasonal variations are less important for the computation of CFs for acidification and eutrophication, but they are relevant for the calculation of CFs for tropospheric ozone formation. Large intra-country differences in estimated CFs suggest that an abatement policy relying on quantitative estimates based upon a single method may have undesirable outcomes. Within-country differences in estimates of CFs for acidification and eutrophication are the results of the models used, category definitions, soil sensitivity factors, background emission concentration, critical loads database, and input data. Striking features in these studies were the lack of CFs for countries outside Europe, the USA, Japan, and Canada, and the lack of quantification of uncertainties. Parameter and input data uncertainties are well quantified, but the uncertainty associated with the choice of category indicator is rarely quantified and can be significant. Although CFs are scientifically robust, further refinements are needed before they can be integrated in LCA. Future research should include uncertainty analyses, and should develop a consensus model for CFs. CFs for countries outside Europe, Japan, Canada and the USA are urgently needed.
Mittal, Manish; Harrison, Donald L; Thompson, David M; Miller, Michael J; Farmer, Kevin C; Ng, Yu-Tze
2016-01-01
While the choice of analytical approach affects study results and their interpretation, there is no consensus to guide the choice of statistical approaches to evaluate public health policy change. This study compared and contrasted three statistical estimation procedures in the assessment of a U.S. Food and Drug Administration (FDA) suicidality warning, communicated in January 2008 and implemented in May 2009, on antiepileptic drug (AED) prescription claims. Longitudinal designs were utilized to evaluate Oklahoma (U.S. State) Medicaid claim data from January 2006 through December 2009. The study included 9289 continuously eligible individuals with prevalent diagnoses of epilepsy and/or psychiatric disorder. Segmented regression models using three estimation procedures [i.e., generalized linear models (GLM), generalized estimation equations (GEE), and generalized linear mixed models (GLMM)] were used to estimate trends of AED prescription claims across three time periods: before (January 2006-January 2008); during (February 2008-May 2009); and after (June 2009-December 2009) the FDA warning. All three statistical procedures estimated an increasing trend (P < 0.0001) in AED prescription claims before the FDA warning period. None of the procedures detected a significant change in trend during (GLM: -30.0%, 99% CI: -60.0% to 10.0%; GEE: -20.0%, 99% CI: -70.0% to 30.0%; GLMM: -23.5%, 99% CI: -58.8% to 1.2%) and after (GLM: 50.0%, 99% CI: -70.0% to 160.0%; GEE: 80.0%, 99% CI: -20.0% to 200.0%; GLMM: 47.1%, 99% CI: -41.2% to 135.3%) the FDA warning when compared to the pre-warning period. Although the three procedures provided consistent inferences, the GEE and GLMM approaches accounted appropriately for correlation. Further, marginal models estimated using GEE produced more robust and valid population-level estimates. Copyright © 2016 Elsevier Inc. All rights reserved.
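The segmented (interrupted time-series) design behind all three procedures can be sketched in its simplest GLM flavor: an intercept, a baseline trend, a level-change indicator, and a slope-change term, fit by least squares. This is an illustrative sketch of the design matrix only, not the study's GEE/GLMM machinery; names and numbers are illustrative.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination and partial pivoting."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]; b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return beta

def segmented_design(n, t0):
    """Interrupted time-series design: [intercept, time, level change
    after t0, slope change after t0] for months t = 0..n-1."""
    return [[1.0, float(t), float(t >= t0), float(max(0, t - t0))]
            for t in range(n)]
```

With noiseless data generated from known coefficients, the fit recovers the pre-period trend, the level shift, and the post-period trend change exactly.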
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus, estimating the unknown frequencies and rejecting the bounded disturbances in the semi-global sense. Based on convex optimization analysis and an adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
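The gradient-plus-consensus idea can be sketched with quadratic private costs and a fixed high coupling gain, standing in for the paper's adaptive internal-model design (and omitting disturbances entirely). The costs, gains, and function name are illustrative.

```python
def distributed_opt(c, adj, eta=0.002, beta=100.0, steps=20000):
    """Distributed gradient-based consensus optimization: agent i holds a
    private cost f_i(x) = (x - c[i])**2 and updates
        x_i <- x_i - eta * (2*(x_i - c[i]) + beta * sum_j adj[i][j]*(x_i - x_j)),
    i.e. local gradient descent plus a high-gain consensus coupling; the
    agents approach the minimizer of sum_i f_i, which is the mean of c."""
    n = len(c)
    x = list(c)  # each agent starts at its private minimizer
    for _ in range(steps):
        x = [xi - eta * (2 * (xi - ci)
                         + beta * sum(adj[i][j] * (xi - x[j]) for j in range(n)))
             for i, (xi, ci) in enumerate(zip(x, c))]
    return x
```

With a fixed gain the residual disagreement is only O(1/beta); the paper's adaptive design achieves exact optimal consensus despite disturbances.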
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Consensus Prediction of Charged Single Alpha-Helices with CSAHserver.
Dudola, Dániel; Tóth, Gábor; Nyitray, László; Gáspári, Zoltán
2017-01-01
Charged single alpha-helices (CSAHs) constitute a rare structural motif. A CSAH is characterized by a high density of regularly alternating residues with positively and negatively charged side chains. Such segments exhibit unique structural properties; however, there are only a handful of proteins in which their existence has been experimentally verified. Therefore, establishing a pipeline that is capable of predicting the presence of CSAH segments with a low false positive rate is of considerable importance. Here we describe a consensus-based approach that relies on two conceptually different CSAH detection methods and a final filter based on the estimated helix-forming capabilities of the segments. This pipeline was shown to be capable of identifying previously uncharacterized CSAH segments that could be verified experimentally. The method is available as a web server at http://csahserver.itk.ppke.hu and also as a downloadable standalone program suitable for scanning larger sequence collections.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator, while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data.
NASA Astrophysics Data System (ADS)
Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said
2014-09-01
In the presence of multicollinearity and multiple outliers, statistical inference in a linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which are reported to be less sensitive to the presence of outliers, while the ridge regression technique is employed to tackle the multicollinearity problem. To mitigate both problems, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach is examined when multicollinearity and multiple outliers occur simultaneously in multiple linear regression. This study looks at the performance of several well-known robust estimators, M and MM, together with RIDGE and the robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM), and Ridge MM (RMM), in such a situation. Results of the study show that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), the RMM and RIDGE estimators are similarly superior to the other estimators, regardless of the number of observations, level of collinearity, and percentage of outliers used. However, when outliers occur in only a single direction (the y-direction), the WRMM estimator is the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
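The flavor of a robust ridge estimator can be sketched in one dimension with Huber weights and iteratively reweighted least squares (IRLS). This is a generic sketch, not any of the study's specific WRM/WRMM/RMM estimators; the penalty, cutoff, and data are illustrative.

```python
def robust_ridge(x, y, lam=0.1, delta=1.0, iters=20):
    """Huber-weighted ridge for y ~ b0 + b1*x via IRLS: each pass downweights
    large residuals (w = delta/|r| beyond the Huber cutoff delta) and solves
    the 2x2 penalized weighted normal equations in closed form; only the
    slope is penalized, the intercept is left free."""
    b0 = b1 = 0.0
    for _ in range(iters):
        w = []
        for xi, yi in zip(x, y):
            r = yi - b0 - b1 * xi
            w.append(1.0 if abs(r) <= delta else delta / abs(r))
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x)) + lam
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        b0 = (sxx * sy - sx * sxy) / det
        b1 = (sw * sxy - sx * sy) / det
    return b0, b1
```

With a couple of gross outliers injected into a clean line, the Huber weights cap their influence and the fitted coefficients stay close to the truth, where plain OLS would be pulled away.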
A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.
Yang, Wenlun; Fu, Minyue
2017-11-01
Clock synchronization is an issue of vital importance in applications of wireless sensor networks (WSNs). This paper proposes a proportional-integral estimator-based protocol (EBP) to achieve clock synchronization for WSNs. As each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a fully synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
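The proportional-integral idea can be sketched for a single node tracking a reference clock: the proportional term cancels the current offset while the integral term accumulates a skew-compensation rate. This is a one-node illustration, not the paper's networked consensus protocol; the gains are illustrative.

```python
def pi_sync(skew, offset, kp=0.6, ki=0.2, dt=1.0, steps=60):
    """A node corrects its logical clock toward reference time t with a PI
    loop: err is the current offset, corr_rate integrates err (estimating
    the skew-induced drift rate), and corr applies both corrections."""
    corr_rate, corr = 0.0, 0.0
    errs = []
    for k in range(1, steps + 1):
        t = k * dt
        local = (1 + skew) * t + offset   # drifting hardware clock
        logical = local + corr            # software-corrected clock
        err = t - logical                 # offset vs. reference time
        corr_rate += ki * err             # integral: learns the drift rate
        corr += kp * err + corr_rate * dt # proportional + accumulated rate
        errs.append(abs(err))
    return errs
```

Because the integral term converges to the negative of the drift rate, the offset error vanishes despite a persistent skew, which is why skew (not just offset) compensation reduces re-synchronization frequency.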
Improving consensus contact prediction via server correlation reduction.
Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming
2009-05-06
Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by maximum likelihood estimation and extracts independent latent servers from them by principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server so as to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ and SAM-T06; these methods demonstrate average accuracies of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers thus show a significant improvement over traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.
A frequency-domain estimator for use in adaptive control systems
NASA Technical Reports Server (NTRS)
Lamaire, Richard O.; Valavani, Lena; Athans, Michael; Stein, Gunter
1991-01-01
This paper presents a frequency-domain estimator that can identify both a parametrized nominal model of a plant as well as a frequency-domain bounding function on the modeling error associated with this nominal model. This estimator, which we call a robust estimator, can be used in conjunction with a robust control-law redesign algorithm to form a robust adaptive controller.
How Valid are Estimates of Occupational Illness?
ERIC Educational Resources Information Center
Hilaski, Harvey J.; Wang, Chao Ling
1982-01-01
Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone
2017-06-01
The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy on electroencephalographic recordings, the trimmed averages yield significant improvement for all subjects.
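The trimmed-average idea described here, namely discard the sample covariance matrices farthest from the average of all of them, then re-average the remainder, can be sketched as follows. This minimal sketch uses only the Euclidean (Frobenius) metric; the letter also considers Riemannian metrics and geometric medians, which are not reproduced here:

```python
import numpy as np

def trimmed_mean_cov(covs, trim=0.2):
    """Trimmed average of sample covariance matrices: drop the fraction
    `trim` of matrices with the largest Frobenius distance from the
    grand mean, then re-average the rest."""
    covs = np.asarray(covs, dtype=float)
    grand = covs.mean(axis=0)
    # Frobenius distance of each SCM from the grand mean
    d = np.linalg.norm(covs - grand, axis=(1, 2))
    n_keep = max(1, int(round(len(covs) * (1 - trim))))
    keep = np.argsort(d)[:n_keep]
    return covs[keep].mean(axis=0)
```

A contaminated batch shows the effect directly: a few grossly scaled SCMs pull the plain mean away from the bulk, while the trimmed average ignores them.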
Robust Gaussian Graphical Modeling via l1 Penalization
Sun, Hokeun; Li, Hongzhe
2012-01-01
Gaussian graphical models have been widely used as an effective method for studying the conditional independence structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution, and such outliers can lead to wrong inference on the dependency structure among the genes. We propose an l1 penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how far each observation deviates, where the deviation of an observation is measured by its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where the nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical Lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical Lasso. PMID:23020775
The liberal illusion of uniqueness.
Stern, Chadly; West, Tessa V; Schmitt, Peter G
2014-01-01
In two studies, we demonstrated that liberals underestimate their similarity to other liberals (i.e., display truly false uniqueness), whereas moderates and conservatives overestimate their similarity to other moderates and conservatives (i.e., display truly false consensus; Studies 1 and 2). We further demonstrated that a fundamental difference between liberals and conservatives in the motivation to feel unique explains this ideological distinction in the accuracy of estimating similarity (Study 2). Implications of the accuracy of consensus estimates for mobilizing liberal and conservative political movements are discussed.
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-01-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577
Robust small area prediction for counts.
Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray
2015-06-01
A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Edward T.
Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control compared with the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed. First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.
Gar, Oron; Sargent, Daniel J.; Tsai, Ching-Jung; Pleban, Tzili; Shalev, Gil; Byrne, David H.; Zamir, Dani
2011-01-01
Polyploidy is a pivotal process in plant evolution, as it increases gene redundancy and morphological intricacy, but due to the complexity of polysomic inheritance we have only a few genetic maps of autopolyploid organisms. A robust mapping framework is particularly important in polyploid crop species, rose included (2n = 4x = 28), where the objective is to study multiallelic interactions that control traits of value for plant breeding. From a cross between the garden, peach-red and fragrant cultivar Fragrant Cloud (FC) and a cut-rose yellow cultivar Golden Gate (GG), we generated an autotetraploid GGFC mapping population consisting of 132 individuals. For the map we used 128 sequence-based markers, 141 AFLP, 86 SSR and three morphological markers. Seven linkage groups were resolved for FC (total 632 cM) and GG (616 cM), which were validated by markers that segregated in both parents as well as by the diploid integrated consensus map. The release of the Fragaria vesca genome, which also belongs to the Rosoideae, allowed us to place 70 rose sequenced markers on the seven strawberry pseudo-chromosomes. Synteny between Rosa and Fragaria was high, with an estimated four major translocations and six inversions required to place the 17 non-collinear markers in the same order. Based on a verified linear order of the rose markers, we could further partition each of the parents into its four homologous groups, thus providing an essential framework to aid the sequencing of an autotetraploid genome. PMID:21647382
Zhou, Ping; Guo, Dongwei; Wang, Hong; Chai, Tianyou
2017-09-29
Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on a reliable measurement of molten iron quality (MIQ) indices, which are not feasible using the conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, a multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions on modeling can properly be distinguished, thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index on the modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient on trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can eliminate the adverse effect caused by the fluctuation of data in BF process efficiently. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
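The least trimmed squares criterion used by these filters can be illustrated in one dimension: within each window, estimate the location from the contiguous h-subset of sorted samples whose sum of squared deviations from its own mean is smallest, so that gross impulses simply fall outside the chosen subset. This is a 1-D sketch with an illustrative window width; the paper's filters operate on 2-D image windows with explicit signal-structure models:

```python
import numpy as np

def lts_location(window, h=None):
    """Least trimmed squares location estimate: the mean of the contiguous
    h-subset of sorted samples with the smallest sum of squared deviations."""
    x = np.sort(np.asarray(window, dtype=float))
    n = len(x)
    h = h or n // 2 + 1
    best, best_mu = np.inf, x[0]
    for i in range(n - h + 1):
        sub = x[i:i + h]
        mu = sub.mean()
        ss = ((sub - mu) ** 2).sum()
        if ss < best:
            best, best_mu = ss, mu
    return best_mu

def lts_filter(signal, width=5):
    """Sliding-window LTS filter: attenuates impulsive noise while
    tracking the underlying signal level."""
    half = width // 2
    pad = np.pad(np.asarray(signal, dtype=float), half, mode='edge')
    return np.array([lts_location(pad[i:i + width]) for i in range(len(signal))])
```

Unlike a median filter, no scale parameter or threshold is required: the trimming is implicit in the choice of the best-fitting subset, which is the design simplicity the abstract points to.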
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust estimation methods in the Bayesian framework for non-linear signal processing, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator efficiently and accurately obtains the phase gradient information from interferometric fringes that the iterated unscented Kalman filtering phase unwrapping model requires. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along a path from high-quality to low-quality areas of the wrapped phase image, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions, with acceptable time consumption, than some of the most widely used algorithms.
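The heap-based quality-guided strategy can be sketched on its own: pixels enter a max-heap keyed by quality and are unwrapped, highest quality first, each relative to an already-unwrapped neighbour. This sketch covers only the guiding strategy; the iterated unscented Kalman filtering and the matrix-pencil gradient estimator of the paper are not reproduced, and the quality map is assumed to be given:

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Unwrap a 2-D wrapped phase image by heap-based quality guidance:
    start at the highest-quality pixel and repeatedly unwrap the
    highest-quality pending neighbour relative to a done neighbour."""
    h, w = wrapped.shape
    unwrapped = np.array(wrapped, dtype=float)
    done = np.zeros((h, w), dtype=bool)
    start = np.unravel_index(np.argmax(quality), (h, w))
    done[start] = True
    heap = []

    def push_neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < h and 0 <= b < w and not done[a, b]:
                # Negative quality makes heapq behave as a max-heap
                heapq.heappush(heap, (-quality[a, b], a, b, i, j))

    push_neighbors(*start)
    while heap:
        _, a, b, i, j = heapq.heappop(heap)
        if done[a, b]:
            continue
        # Add the multiple of 2*pi that brings (a, b) within pi of its
        # already-unwrapped neighbour (i, j)
        diff = wrapped[a, b] - unwrapped[i, j]
        unwrapped[a, b] = wrapped[a, b] - 2 * np.pi * np.round(diff / (2 * np.pi))
        done[a, b] = True
        push_neighbors(a, b)
    return unwrapped
```

On a smooth ramp whose per-pixel phase step stays below pi, this recovers the true phase up to a global 2-pi multiple.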
BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition
NASA Astrophysics Data System (ADS)
Makkeh, Abdullah; Theis, Dirk; Vicente, Raul
2018-04-01
Makkeh, Theis, and Vicente found in [8] that a cone programming model is the most robust way to compute the Bertschinger et al. partial information decomposition (BROJA PID) measure [1]. We have developed production-quality, robust software that computes the BROJA PID measure based on the cone programming model. In this paper, we prove the important property of strong duality for the cone program and prove an equivalence between the cone program and the original convex problem. We then describe our software in detail and explain how to use it.
A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.
Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye
2017-11-01
Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was somewhat lower one day after model construction but remained relatively stable from one day to six months afterwards. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
A Computerized Demonstration of the False Consensus Effect.
ERIC Educational Resources Information Center
Clement, Russell W.; And Others
1997-01-01
Replicates a classic psychology laboratory experiment where students either endorsed or refuted personal statements and estimated how other people would respond. Students always overestimated an affirmative response on the statements they endorsed, thus illustrating the false consensus effect. Includes a list of the statements and statistical…
Validation of biomarkers of food intake-critical assessment of candidate biomarkers.
Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G
2018-01-01
Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance to dietary guidelines or to a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed including eight criteria, plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widely used in applications such as UAV navigation. In harsh environments, where GNSS can be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed, and the effect of the proposed robust method is validated.
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
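The maximum-entropy principle behind these procedures can be illustrated for a single marginal: among all pmfs on the ordinal levels matching a given moment, the entropy maximiser has an exponential-family form whose parameter can be found by bisection. This sketch constrains only the mean; the paper's procedures additionally constrain skewness and kurtosis and handle multivariate distributions:

```python
import numpy as np

def max_entropy_ordinal(levels, target_mean, lo=-20.0, hi=20.0, tol=1e-10):
    """Maximum-entropy pmf on ordinal levels subject to a mean constraint.

    The maximiser has the form p_i proportional to exp(theta * k_i); since
    the tilted mean is monotone in theta, bisection recovers it."""
    k = np.asarray(levels, dtype=float)

    def mean_for(theta):
        w = np.exp(theta * (k - k.mean()))   # centred for numerical stability
        p = w / w.sum()
        return p @ k, p

    a, b = lo, hi
    for _ in range(200):
        m, p = mean_for((a + b) / 2)
        if abs(m - target_mean) < tol:
            break
        if m < target_mean:
            a = (a + b) / 2
        else:
            b = (a + b) / 2
    return p
```

When the target mean sits at the centre of symmetric levels, the constraint is inactive and the maximiser reduces to the uniform distribution, which is a quick sanity check on the construction.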
Fenner, Jack N
2005-10-01
The length of the human generation interval is a key parameter when using genetics to date population divergence events. However, no consensus exists regarding the generation interval length, and a wide variety of interval lengths have been used in recent studies. This makes comparison between studies difficult, and questions the accuracy of divergence date estimations. Recent genealogy-based research suggests that the male generation interval is substantially longer than the female interval, and that both are greater than the values commonly used in genetics studies. This study evaluates each of these hypotheses in a broader cross-cultural context, using data from both nation states and recent hunter-gatherer societies. Both hypotheses are supported by this study; therefore, revised estimates of male, female, and overall human generation interval lengths are proposed. The nearly universal, cross-cultural nature of the evidence justifies using these proposed estimates in Y-chromosomal, mitochondrial, and autosomal DNA-based population divergence studies.
ERIC Educational Resources Information Center
Zeoli, April M.; Norris, Alexis; Brenner, Hannah
2011-01-01
Warrantless arrest laws for domestic violence (DV) are generally classified as discretionary, preferred, or mandatory, based on the level of power accorded to police in deciding whether to arrest. However, there is a lack of consensus in the literature regarding how each state's law should be categorized. Using three classification schemes, this…
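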
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
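The reliability notion has a simple operational reading: where the surrogate value plus or minus its error bound stays on one side of the event threshold, the surrogate's event indicator is guaranteed to match the high-fidelity one, so the cheap model suffices. The sketch below uses made-up one-dimensional model functions to illustrate that guarantee; it is not the paper's adjoint-based machinery:

```python
def estimate_event_prob(samples, high_fidelity, surrogate, err_bound, threshold):
    """P[f(x) > threshold], using the surrogate only where it is 'reliable'
    (its error bound cannot flip the event indicator), and the high-fidelity
    model elsewhere."""
    hits = calls = 0
    for x in samples:
        sv, ev = surrogate(x), err_bound(x)
        if sv - ev > threshold:        # reliably above the threshold
            hits += 1
        elif sv + ev <= threshold:     # reliably below: no hit
            pass
        else:                          # unreliable: fall back to high fidelity
            calls += 1
            hits += high_fidelity(x) > threshold
    return hits / len(samples), calls

f = lambda x: x * x                    # "expensive" high-fidelity model
s = lambda x: x * x + 0.05 * x         # cheap surrogate
e = lambda x: 0.05 * abs(x) + 1e-12    # guaranteed bound on |f - s|
xs = [i / 100 for i in range(-100, 101)]
p, calls = estimate_event_prob(xs, f, s, e, threshold=0.25)
exact = sum(f(x) > 0.25 for x in xs) / len(xs)
print(p == exact, calls < len(xs))     # True True
```

The estimate matches the all-high-fidelity answer exactly while calling the expensive model only near the limit state.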
DOE Office of Scientific and Technical Information (OSTI.GOV)
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
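The least power estimation (LPE) named above generalizes least squares: the location estimate minimizes Σ|xᵢ − μ|ᵖ, which is convex for p ≥ 1, so a simple ternary search finds it. The sketch below is a one-dimensional illustration (the paper pairs LPE with geostatistical and inverse-distance weighting, which is not reproduced):

```python
def lpe_location(samples, p, iters=200):
    """Least power estimate: argmin_mu sum |x - mu|**p, convex for p >= 1,
    found by ternary search over the data range (illustrative sketch)."""
    cost = lambda mu: sum(abs(x - mu) ** p for x in samples)
    lo, hi = min(samples), max(samples)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) <= cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

data = [1.0, 2.0, 3.0, 4.0, 10.0]
print(round(lpe_location(data, 2.0), 3))  # 4.0 (the sample mean)
print(round(lpe_location(data, 1.1), 3))  # near the median 3: less outlier pull
```

Choosing p between 1 and 2 interpolates between the median-like robustness of p = 1 and the efficiency of least squares at p = 2.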
Evolutionary divergence in the catalytic activity of the CAM-1, ROR1 and ROR2 kinase domains.
Bainbridge, Travis W; DeAlmeida, Venita I; Izrael-Tomasevic, Anita; Chalouni, Cécile; Pan, Borlan; Goldsmith, Joshua; Schoen, Alia P; Quiñones, Gabriel A; Kelly, Ryan; Lill, Jennie R; Sandoval, Wendy; Costa, Mike; Polakis, Paul; Arnott, David; Rubinfeld, Bonnee; Ernst, James A
2014-01-01
Receptor tyrosine kinase-like orphan receptors (ROR) 1 and 2 are atypical members of the receptor tyrosine kinase (RTK) family and have been associated with several human diseases. The vertebrate RORs contain an ATP binding domain that deviates from the consensus amino acid sequence, although the impact of this deviation on catalytic activity is not known and the kinase function of these receptors remains controversial. Recently, ROR2 was shown to signal through a Wnt responsive, β-catenin independent pathway and suppress a canonical Wnt/β-catenin signal. In this work we demonstrate that both ROR1 and ROR2 kinase domains are catalytically deficient while CAM-1, the C. elegans homolog of ROR, has an active tyrosine kinase domain, suggesting a divergence in the signaling processes of the ROR family during evolution. In addition, we show that substitution of the non-consensus residues from ROR1 or ROR2 into CAM-1 and MuSK markedly reduce kinase activity, while restoration of the consensus residues in ROR does not restore robust kinase function. We further demonstrate that the membrane-bound extracellular domain alone of either ROR1 or ROR2 is sufficient for suppression of canonical Wnt3a signaling, and that this domain can also enhance Wnt5a suppression of Wnt3a signaling. Based on these data, we conclude that human ROR1 and ROR2 are RTK-like pseudokinases.
Shear wave speed estimation by adaptive random sample consensus method.
Lin, Haoming; Wang, Tianfu; Chen, Siping
2014-01-01
This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used here is finding a certain percentage of inliers according to a closest-distance criterion. To evaluate the method, simulation and phantom experiment results were compared using linear regression with all points (LRWAP) and the radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimate are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, in simulation, and 23.53%, 4.08% and 1.08% in the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
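The inlier-percentage idea can be illustrated with a generic RANSAC-style line fit that scores each candidate model by the residuals of its closest fixed fraction of points, rather than counting inliers under a preset distance threshold. This is a sketch of the general idea with made-up data, not the published ARANDSAC implementation:

```python
import random

def percentile_ransac(points, inlier_frac=0.7, trials=500, seed=0):
    """Fit y = a*x + b by sampling point pairs and scoring each candidate
    by the residuals of its closest `inlier_frac` fraction of points
    (a threshold-free RANSAC variant; illustrative sketch only)."""
    rng = random.Random(seed)
    n_in = max(2, int(inlier_frac * len(points)))
    best = None
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = sorted(abs(y - (a * x + b)) for x, y in points)
        score = sum(resid[:n_in])      # sum of the n_in smallest residuals
        if best is None or score < best[0]:
            best = (score, a, b)
    return best[1], best[2]

# Line y = 2x + 1 with small deterministic noise, plus three gross outliers.
pts = [(x, 2 * x + 1 + 0.01 * ((x * 7) % 3 - 1)) for x in range(20)]
pts += [(3, 40.0), (8, -25.0), (15, 90.0)]
a, b = percentile_ransac(pts)
print(round(a, 1), round(b, 1))  # ≈ 2.0 1.0
```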
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely used technique in pattern classification via an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA highly depends on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
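Why robust estimators matter for a discriminant rule can be seen in a deliberately simplified one-variable caricature: place the classification cutoff halfway between the two group centres, computed either from means (classical) or medians (robust). The data are invented, and this is not the MCD/MVE machinery of the paper:

```python
import statistics

def midpoint_cutoff(group_a, group_b, robust=False):
    """Cutoff halfway between two group centres: mean-based (classical)
    or median-based (robust). A one-variable caricature of LDA."""
    centre = statistics.median if robust else statistics.fmean
    return (centre(group_a) + centre(group_b)) / 2

distress     = [0.9, 1.0, 1.1, 1.2, 0.8]
non_distress = [2.9, 3.0, 3.1, 3.2, 15.0]   # one recording blunder

classical = midpoint_cutoff(distress, non_distress)
robust    = midpoint_cutoff(distress, non_distress, robust=True)
print(round(classical, 2), round(robust, 2))  # 3.22 2.05
```

The single blunder drags the classical cutoff to 3.22, so the clean non-distress values 2.9–3.1 would be misclassified; the median-based cutoff 2.05 separates the groups correctly.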
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters
Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun
2017-01-01
Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475
van der Linden, Sander L; Clarke, Chris E; Maibach, Edward W
2015-12-03
A substantial minority of American adults continue to hold influential misperceptions about childhood vaccine safety. Growing public concern and refusal to vaccinate pose a serious public health risk. Evaluations of recent pro-vaccine health communication interventions have revealed mixed results (at best). This study investigated whether highlighting consensus among medical scientists about childhood vaccine safety can lower public concern, reduce key misperceptions about the discredited autism-vaccine link and promote overall support for vaccines. American adults (N = 206) were invited to participate in an online survey experiment. Participants were randomly assigned to either a control group or to one of three treatment interventions. The treatment messages were based on expert-consensus estimates and either normatively described or prescribed the extant medical consensus: "90 % of medical scientists agree that vaccines are safe and that all parents should be required to vaccinate their children". Compared to the control group, the consensus messages significantly reduced vaccine concern (M = 3.51 vs. M = 2.93, p < 0.01) and belief in the vaccine-autism link (M = 3.07 vs. M = 2.15, p < 0.01) while increasing perceived consensus about vaccine safety (M = 83.93 vs. M = 89.80, p < 0.01) and public support for vaccines (M = 5.66 vs. M = 6.22, p < 0.01). Mediation analysis further revealed that the public's understanding of the level of scientific agreement acts as an important "gateway" belief by promoting public attitudes and policy support for vaccines directly as well as indirectly by reducing endorsement of the discredited autism-vaccine link. These findings suggest that emphasizing the medical consensus about (childhood) vaccine safety is likely to be an effective pro-vaccine message that could help prevent current immunization rates from declining.
We recommend that clinicians and public health officials highlight and communicate the high degree of medical consensus on (childhood) vaccine safety when possible.
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems have limitations arising from various factors, such as camera motion, optical blurring, facial expressions, gender, etc. Motion blurring can be present in face images owing to the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial features in captured images can be transformed according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282
Validity, reliability, and generalizability in qualitative research
Leung, Lawrence
2015-01-01
In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient-care, health services provision, policy setting, and health administrations. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, by the lack of consensus for assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services to provincial psychiatric care pathways model and finally, national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed with an update on the current views and controversies. PMID:26288766
NASA Astrophysics Data System (ADS)
Jacobs, P.; Cook, J.; Nuccitelli, D.
2014-12-01
An overwhelming scientific consensus exists on the issue of anthropogenic climate change. Unfortunately, public perception of expert agreement remains low: only around 1 in 10 Americans correctly estimates the actual level of consensus on the topic. Moreover, several recent studies have demonstrated the pivotal role that perceived consensus plays in the public's acceptance of key scientific facts about environmental problems, as well as their willingness to support policy to address them. This "consensus gap", between the high level of scientific agreement and the public's perception of it, has led to calls for increased consensus messaging. However, this call has been challenged by a number of different groups: climate "skeptics" in denial about the existence and validity of the consensus; some social science researchers and journalists who believe that such messages will be ineffective or counterproductive; and even some scientists and science advocates who downplay the value of consensus in science generally. All of these concerns can be addressed by effectively communicating the role of consensus within science to the public, as well as the conditions under which consensus is likely to be correct. Here, we demonstrate that the scientific consensus on anthropogenic climate change satisfies these conditions, and discuss past examples of purported consensus that failed or succeeded to satisfy them as well. We conclude by discussing the way in which scientific consensus is interpreted by the public, and how consensus messaging can improve climate literacy.
Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-01-01
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
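The data-association step between consecutive scans can be sketched with a greedy nearest-neighbour matcher over grid centroids. The gating distance and coordinates below are made up, and the paper's fast association algorithm, polar-grid projection and Kalman filtering are not reproduced:

```python
def associate(prev, curr, gate=1.0):
    """Greedy nearest-neighbour association between grid centroids of two
    consecutive scans: each previous grid is matched to the closest unused
    current grid within the gating distance (illustrative sketch)."""
    pairs, used = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, gate
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

prev = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
curr = [(5.2, 5.1), (0.1, 0.2), (20.0, 20.0)]
print(associate(prev, curr))  # [(0, 1), (1, 0)] — the third grid has no match
```

Matched pairs then feed per-grid state updates; unmatched grids are treated as newly appeared or vanished.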
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini difference have asymptotically normal distributions and bounded influence functions and are B-robust estimators; hence, unlike the standard deviation, they are protected against the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini difference is considered.
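The two robust scale estimators named above are easy to state concretely. Below, the MAD is scaled by 1.4826 to be consistent with the standard deviation under a normal model, and the Gini mean difference is shown in its plain (unmodified) form; the paper's modified, robustified variant is not reproduced:

```python
import statistics

def mad_scale(xs):
    """Median absolute deviation, normal-consistent scaling."""
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

def gini_mean_difference(xs):
    """Average absolute difference over all pairs (plain Gini mean
    difference; the paper studies a modified variant)."""
    n = len(xs)
    total = sum(abs(xs[i] - xs[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

clean = [9.8, 9.9, 10.0, 10.1, 10.2]
dirty = clean + [100.0]                     # one outlier
print(round(statistics.stdev(dirty), 1))    # ≈ 36.7 — blown up by the outlier
print(round(mad_scale(dirty), 2))           # 0.22 — essentially unaffected
print(round(gini_mean_difference(clean), 2))  # 0.2
```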
Griffiths, Mark D.; van Rooij, Antonius J.; Kardefelt-Winther, Daniel; Starcevic, Vladan; Király, Orsolya; Pallesen, Ståle; Müller, Kai; Dreier, Michael; Carras, Michelle; Prause, Nicole; King, Daniel L.; Aboujaoude, Ellias; Kuss, Daria J.; Pontes, Halley M.; Fernandez, Olatz Lopez; Nagygyorgy, Katalin; Achab, Sophia; Billieux, Joël; Quandt, Thorsten; Carbonell, Xavier; Ferguson, Christopher J.; Hoff, Rani A.; Derevensky, Jeffrey; Haagsma, Maria C.; Delfabbro, Paul; Coulson, Mark; Hussain, Zaheer; Demetrovics, Zsolt
2017-01-01
This commentary paper critically discusses the recent debate paper by Petry et al. (2014) that argued there was now an international consensus for assessing Internet Gaming Disorder (IGD). Our collective opinions vary considerably regarding many different aspects of online gaming. However, we contend that the paper by Petry and colleagues does not reflect a true and representative consensus of the international community of researchers in this area. This paper critically discusses and provides commentary on (i) the representativeness of the international group that wrote the ‘consensus’ paper, and (ii) each of the IGD criteria. The paper also includes a brief discussion on initiatives that could be taken to move the field towards consensus. It is hoped that this paper will foster debate in the IGD field and lead to improved theory, better methodologically designed studies, and more robust empirical evidence as regards problematic gaming and its psychosocial consequences and impact. PMID:26669530
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
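The L1-median (geometric median) used as the robust location estimator above can be computed with Weiszfeld's classical fixed-point iteration; this standard sketch is not the paper's full PDF estimator:

```python
def l1_median(points, iters=100):
    """Geometric (L1) median in 2D via Weiszfeld's algorithm: iteratively
    reweighted average with weights 1/distance (standard sketch)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for x, y in points:
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d < 1e-12:          # estimate sits on a data point: skip it
                continue
            w = 1.0 / d
            wsum += w
            wx += w * x
            wy += w * y
        if wsum == 0.0:
            break
        cx, cy = wx / wsum, wy / wsum
    return cx, cy

# Four points on a unit square plus one far outlier.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (100.0, 100.0)]
cx, cy = l1_median(pts)
print(round(cx, 2), round(cy, 2))  # stays near the unit square, not (100, 100)
```

Substituting this estimate for the sample mean is precisely the kind of swap that makes a density estimator resistant to outliers.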
Registration using natural features for augmented reality systems.
Yuan, M L; Ong, S K; Nee, A Y C
2006-01-01
Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the natural features that have been tracked are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration, for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method to estimate the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process. 
Some indoor and outdoor experiments have been conducted to validate the performance of this proposed method.
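The translate-and-scale normalization mentioned above is consistent with the standard Hartley-style preconditioning used before estimating projective quantities: shift points to their centroid and scale so the mean distance from the origin is √2. This generic sketch is not the paper's exact routine:

```python
def normalize_points(pts):
    """Translate 2D points to their centroid and scale so the mean distance
    from the origin is sqrt(2) (Hartley-style normalization sketch)."""
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mean_d = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in pts) / n
    s = (2 ** 0.5) / mean_d
    return [((x - cx) * s, (y - cy) * s) for x, y in pts]

pts = [(100.0, 200.0), (300.0, 250.0), (120.0, 400.0), (280.0, 380.0)]
norm = normalize_points(pts)
mean_d = sum((x * x + y * y) ** 0.5 for x, y in norm) / len(norm)
print(round(mean_d, 6))  # 1.414214
```

Conditioning the coordinates this way keeps the subsequent linear estimate of the projective matrix numerically stable before the Levenberg-Marquardt refinement.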
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
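The contrast between a mean-level and an order-statistic threshold can be sketched directly: derive the noise-level estimate from a chosen order statistic (here the median) so that strong narrowband spikes cannot inflate it. The rank, scale factor and data below are illustrative, not the paper's detector parameters:

```python
def order_statistic_threshold(xs, rank_frac=0.5, scale=3.0):
    """Detection threshold set from an order statistic (here the median)
    instead of the mean, so large outliers do not inflate it (sketch)."""
    xs_sorted = sorted(xs)
    noise_level = xs_sorted[int(rank_frac * (len(xs) - 1))]
    return scale * noise_level

# Broadband noise floor near 1.0 with two narrowband spikes at bins 6 and 9.
spectrum = [1.0, 0.9, 1.1, 1.0, 0.95, 1.05, 20.0, 1.0, 0.9, 60.0]
thr = order_statistic_threshold(spectrum)
mean_thr = 3.0 * sum(spectrum) / len(spectrum)
detections = [i for i, v in enumerate(spectrum) if v > thr]
print(detections, round(thr, 2), round(mean_thr, 1))  # [6, 9] 3.0 26.4
```

The order-statistic threshold (3.0) catches both spikes; the mean-based threshold (26.4), inflated by the 60.0 outlier, misses the weaker spike at bin 6.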
Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea
2015-12-21
PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. 
In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements, make this algorithm a useful tool for 4D-PET based volume definition for radiotherapy planning of lung cancer and may help to improve the reproducibility in PET quantification for therapy response assessment and prognosis.
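The overlap metrics quoted above (DSC, PPV, sensitivity) reduce to simple set arithmetic over voxel sets, as this illustrative sketch with toy cubic volumes shows:

```python
def dice(a, b):
    """Dice Similarity Coefficient between two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def ppv(seg, truth):
    """Positive Predictive Value: fraction of segmented voxels that are true."""
    return len(seg & truth) / len(seg)

def sensitivity(seg, truth):
    """Sensitivity: fraction of true voxels that were segmented."""
    return len(seg & truth) / len(truth)

# 4x4x4 ground-truth cube vs. a segmentation shifted by one voxel in x.
truth = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
seg = {(x, y, z) for x in range(1, 5) for y in range(4) for z in range(4)}
print(dice(seg, truth), ppv(seg, truth), sensitivity(seg, truth))  # 0.75 0.75 0.75
```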
NASA Astrophysics Data System (ADS)
Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea
2015-12-01
PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. 
In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements, make this algorithm a useful tool for 4D-PET based volume definition for radiotherapy planning of lung cancer and may help to improve the reproducibility in PET quantification for therapy response assessment and prognosis.
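The overlap scores quoted above (DSC, PPV, Sensitivity) are standard voxel-overlap measures between a segmented mask and a reference mask. A minimal sketch of how they are computed from two binary volumes follows; the toy 2-D masks are illustrative, whereas the paper's masks are 3-D/4-D:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice Similarity Coefficient, PPV and Sensitivity for binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()       # true-positive voxels
    dsc = 2 * tp / (seg.sum() + ref.sum())    # overlap of the two masks
    ppv = tp / seg.sum()                      # segmented voxels that are true
    sen = tp / ref.sum()                      # reference voxels recovered
    return dsc, ppv, sen

seg = np.zeros((4, 4), int); seg[1:3, 1:3] = 1   # 4-voxel segmentation
ref = np.zeros((4, 4), int); ref[1:3, 1:4] = 1   # 6-voxel reference, 4 shared
dsc, ppv, sen = overlap_metrics(seg, ref)        # 0.8, 1.0, 0.667
```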
Regional-scale analysis of extreme precipitation from short and fragmented records
NASA Astrophysics Data System (ADS)
Libertino, Andrea; Allamano, Paola; Laio, Francesco; Claps, Pierluigi
2018-02-01
Rain gauges are the oldest and most accurate instruments for rainfall measurement, able to provide long series of reliable data. However, rain gauge records are often plagued by gaps, spatio-temporal discontinuities and inhomogeneities that can affect their suitability for a statistical assessment of the characteristics of extreme rainfall. Furthermore, the need to discard the shorter series to obtain robust estimates leads to ignoring a significant amount of information that can be essential, especially when large return-period estimates are sought. This work describes a robust statistical framework for dealing with uneven and fragmented rainfall records on a regional spatial domain. The proposed technique, named "patched kriging", allows one to exploit all the information available from the recorded series, independently of their length, to provide extreme rainfall estimates in ungauged areas. The methodology involves the sequential application of the ordinary kriging equations, producing a homogeneous dataset of synthetic series with uniform lengths. In this way, the errors inherent to any regional statistical estimation can be easily represented in the spatial domain and, possibly, corrected. Furthermore, the homogeneity of the obtained series provides robustness toward local artefacts during the parameter-estimation phase. The application to a case study in north-western Italy demonstrates the potential of the methodology and provides a significant basis for discussing its advantages over previous techniques.
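The building block of such an approach is a single ordinary-kriging estimate at an ungauged point from the gauges that report in a given year. A minimal sketch is below; the spherical variogram and its sill/range parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, vrange=50.0):
    """Ordinary-kriging estimate at x0 from gauges at xy with values z,
    using an assumed spherical variogram (sill/range are not fitted here)."""
    def gamma(h):
        h = np.minimum(h / vrange, 1.0)
        return sill * (1.5 * h - 0.5 * h ** 3)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with the
    A[:n, :n] = gamma(d)                 # unbiasedness (Lagrange) row/column
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]        # weights sum to one
    return float(w @ z)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # gauge coordinates (km)
z = np.array([12.0, 8.0, 10.0])                         # annual maxima (mm/h)
est = ordinary_kriging(xy, z, np.array([3.0, 3.0]))     # synthetic value
```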
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications for the development of clinically viable neuroprosthetics. PMID:27035820
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used for their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. This method shows great potential for real-world application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
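The pipeline the abstract describes (smooth the IC curve with a Gaussian filter, locate a feature of interest, regress capacity on its position) can be sketched on synthetic data as follows; the peak shape, drift rate and noise level are invented for illustration and are not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

v = np.linspace(3.3, 4.1, 400)                 # charging-voltage grid (V)
rng = np.random.default_rng(0)

def ic_curve(peak_v):
    """Synthetic dQ/dV curve: one ageing-sensitive peak plus sensor noise."""
    return np.exp(-((v - peak_v) / 0.05) ** 2) + 0.05 * rng.standard_normal(v.size)

capacities = np.array([1.00, 0.95, 0.90, 0.85, 0.80])   # normalised capacity
peak_positions = []
for c in capacities:
    raw = ic_curve(3.6 + 0.5 * (1.0 - c))      # peak drifts as the cell fades
    smooth = gaussian_filter1d(raw, sigma=8)   # Gaussian filter removes noise
    peak_positions.append(v[np.argmax(smooth)])  # feature of interest (FOI)

# linear regression: capacity as a function of the FOI position
slope, intercept = np.polyfit(peak_positions, capacities, 1)
soh_est = slope * peak_positions[2] + intercept   # predict the third cell's SoH
```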
Data Driven Model Development for the Supersonic Semispan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
We investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational fluid dynamics based aerodynamic models, and a data-driven system identification procedure. It is shown, via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data with a system identification approach, that it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
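A common data-driven system identification procedure of the kind referenced here is least-squares estimation of an ARX model from input-output records. A minimal sketch follows; the model orders and the toy second-order system are assumptions, not the S4T model:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
       y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    k0 = max(na, nb)
    rows = []
    for k in range(k0, len(y)):
        rows.append(np.r_[[y[k - i] for i in range(1, na + 1)],
                          [u[k - i] for i in range(1, nb + 1)]])
    phi = np.array(rows)                           # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[k0:], rcond=None)
    return theta

rng = np.random.default_rng(1)
u = rng.standard_normal(500)                       # excitation input
y = np.zeros(500)
for k in range(2, 500):                            # true system parameters
    y[k] = 0.5 * y[k-1] - 0.2 * y[k-2] + 1.0 * u[k-1] + 0.3 * u[k-2]
theta = fit_arx(u, y)                              # recovers [0.5, -0.2, 1.0, 0.3]
```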
Li, Qiao; Mark, Roger G; Clifford, Gari D
2009-01-01
Background Within the intensive care unit (ICU), arterial blood pressure (ABP) is typically recorded at different (and sometimes uneven) sampling frequencies, and from different sensors, and is often corrupted by different artifacts and noise which are often non-Gaussian, nonlinear and nonstationary. Extracting robust parameters from such signals, and providing confidences in the estimates is therefore difficult and requires an adaptive filtering approach which accounts for artifact types. Methods Using a large ICU database, and over 6000 hours of simultaneously acquired electrocardiogram (ECG) and ABP waveforms sampled at 125 Hz from a 437 patient subset, we documented six general types of ABP artifact. We describe a new ABP signal quality index (SQI), based upon the combination of two previously reported signal quality measures weighted together. One index measures morphological normality, and the other degradation due to noise. After extracting a 6084-hour subset of clean data using our SQI, we evaluated a new robust tracking algorithm for estimating blood pressure and heart rate (HR) based upon a Kalman Filter (KF) with an update sequence modified by the KF innovation sequence and the value of the SQI. In order to do this, we have created six novel models of different categories of artifacts that we have identified in our ABP waveform data. These artifact models were then injected into clean ABP waveforms in a controlled manner. Clinical blood pressure (systolic, mean and diastolic) estimates were then made from the ABP waveforms for both clean and corrupted data. The mean absolute error for systolic, mean and diastolic blood pressure was then calculated for different levels of artifact pollution to provide estimates of expected errors given a single value of the SQI. Results Our artifact models demonstrate that artifact types have differing effects on systolic, diastolic and mean ABP estimates. 
We show that, for most artifact types, diastolic ABP estimates are less noise-sensitive than mean ABP estimates, which in turn are more robust than systolic ABP estimates. We also show that our SQI can provide error bounds for both HR and ABP estimates. Conclusion The KF/SQI-fusion method described in this article was shown to provide an accurate estimate of blood pressure and HR derived from the ABP waveform even in the presence of high levels of persistent noise and artifact, and during extreme bradycardia and tachycardia. Differences in error between artifact types, measurement sensors and the quality of the source signal can be factored into physiological estimation using an unbiased adaptive filter, signal innovation and signal quality measures. PMID:19586547
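The idea of modulating a Kalman filter update by a signal quality index can be sketched with a scalar random-walk tracker in which a low SQI inflates the effective measurement noise, so corrupted beats barely move the estimate. This is an illustrative stand-in, not the authors' exact innovation-based update rule:

```python
import numpy as np

def sqi_kalman(measurements, sqi, q=0.1, r=1.0):
    """Scalar Kalman tracker whose update is down-weighted when the
    signal quality index (SQI, in [0, 1]) is low."""
    x, p = measurements[0], 1.0
    out = []
    for z, s in zip(measurements, sqi):
        p = p + q                          # predict (random-walk state model)
        r_eff = r / max(s, 1e-3)           # poor quality -> huge meas. noise
        k = p / (p + r_eff)                # Kalman gain
        x = x + k * (z - x)                # update with the innovation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

z = np.full(50, 120.0)                     # beat-by-beat systolic ABP (mmHg)
z[20:25] = 40.0                            # an artifact corrupts five beats
sqi = np.ones(50)
sqi[20:25] = 0.01                          # SQI flags the corrupted beats
est = sqi_kalman(z, sqi)                   # estimate barely dips, then recovers
```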
2013-01-29
of modern portfolio and control theory. The reformulation allows for possible changes in estimated quantities (e.g., due to market shifts in... Portfolio Theory (MPT). Final Report: NPS award N00244-11-1-0003. Extending CEM and Markov: Agent-Based Modeling Approach. Research conducted in the... integration and acquisition from a robust portfolio theory standpoint. Robust portfolio management methodologies have been widely used by financial
Babor, Thomas F; Xuan, Ziming; Damon, Donna
2013-10-01
This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations in the U.S. Beer Institute Code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N = 286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age <21); experts represented various health-related professions, including public health, human development, alcohol research, and mental health. Alcohol ads were rated on 2 occasions separated by 1 month. After completing Time 1 ratings, participants were randomized to receive feedback from 1 group or the other. Findings indicate that (i) ratings at Time 2 had generally reduced variance, suggesting greater consensus after feedback, (ii) feedback from the expert group was more influential than that of the community group in developing group consensus, (iii) the expert group found significantly fewer violations than the community group, (iv) experts representing different professional backgrounds did not differ among themselves in the number of violations identified, and (v) a rating panel composed of at least 15 raters is sufficient to obtain reliable estimates of code violations. The Delphi technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. Copyright © 2013 by the Research Society on Alcoholism.
Babor, Thomas F.; Xuan, Ziming; Damon, Donna
2013-01-01
Background This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations in the US Beer Institute code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Methods Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N=286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age < 21); experts represented various health-related professions, including public health, human development, alcohol research and mental health. Alcohol ads were rated on two occasions separated by one month. After completing Time 1 ratings, participants were randomized to receive feedback from one group or the other. Results Findings indicate that (1) ratings at Time 2 had generally reduced variance, suggesting greater consensus after feedback, (2) feedback from the expert group was more influential than that of the community group in developing group consensus, (3) the expert group found significantly fewer violations than the community group, (4) experts representing different professional backgrounds did not differ among themselves in the number of violations identified; (5) a rating panel composed of at least 15 raters is sufficient to obtain reliable estimates of code violations. Conclusions The Delphi Technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. PMID:23682927
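The panel-size question (how many raters suffice for reliable code-violation estimates) is classically approached with the Spearman-Brown prophecy formula, which gives the reliability of the mean of k raters from the single-rater reliability. The sketch below uses a hypothetical single-rater reliability of 0.38, not a figure from the study; it is only meant to show how a panel of about 15 raters can reach a 0.90 reliability target:

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k raters given single-rater reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

def raters_needed(r1, target=0.90):
    """Smallest panel size whose pooled rating reaches the target reliability."""
    k = 1
    while spearman_brown(r1, k) < target:
        k += 1
    return k

# with a hypothetical single-rater reliability of 0.38, a panel of 15
# raters is the smallest that reaches a 0.90 pooled reliability
k15 = raters_needed(0.38, 0.90)
```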
Myers, Teresa A.; Maibach, Edward; Peters, Ellen; Leiserowitz, Anthony
2015-01-01
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of the scientific agreement. The second experiment tested the effect of eliciting respondents’ estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an “estimation and reveal” technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested, and demonstrated that messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed. PMID:25812121
Myers, Teresa A; Maibach, Edward; Peters, Ellen; Leiserowitz, Anthony
2015-01-01
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of the scientific agreement. The second experiment tested the effect of eliciting respondents' estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an "estimation and reveal" technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested, and demonstrated that messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed.
Devereaux, Asha V; Tosh, Pritish K; Hick, John L; Hanfling, Dan; Geiling, James; Reed, Mary Jane; Uyeki, Timothy M; Shah, Umair A; Fagbuyi, Daniel B; Skippen, Peter; Dichter, Jeffrey R; Kissoon, Niranjan; Christian, Michael D; Upperman, Jeffrey S
2014-10-01
Engagement and education of ICU clinicians in disaster preparedness are fragmented by time constraints and institutional barriers and frequently occur during a disaster. We reviewed the existing literature from 2007 to April 2013 and expert opinions about clinician engagement and education for critical care during a pandemic or disaster and offer suggestions for integrating ICU clinicians into planning and response. The suggestions in this article are important for all of those involved in a pandemic or large-scale disaster with multiple critically ill or injured patients, including front-line clinicians, hospital administrators, and public health or government officials. A systematic literature review was performed and suggestions formulated according to the American College of Chest Physicians (CHEST) Consensus Statement development methodology. We assessed articles, documents, reports, and gray literature reported since 2007. Following expert-informed sorting and review of the literature, key priority areas and questions were developed. No studies of sufficient quality were identified upon which to make evidence-based recommendations. Therefore, the panel developed expert opinion-based suggestions using a modified Delphi process. Twenty-three suggestions were formulated based on literature-informed consensus opinion. These suggestions are grouped according to the following thematic elements: (1) situational awareness, (2) clinician roles and responsibilities, (3) education, and (4) community engagement. Together, these four elements are considered to form the basis for effective ICU clinician engagement for mass critical care. The optimal engagement of the ICU clinical team in caring for large numbers of critically ill patients due to a pandemic or disaster will require a departure from the routine independent systems operating in hospitals. 
An effective response will require robust information systems; coordination among clinicians, hospitals, and governmental organizations; pre-event engagement of relevant stakeholders; and standardized core competencies for the education and training of critical care clinicians.
Macroeconomic effects on mortality revealed by panel analysis with nonlinear trends.
Ionides, Edward L; Wang, Zhen; Tapia Granados, José A
2013-10-03
Many investigations have used panel methods to study the relationships between fluctuations in economic activity and mortality. A broad consensus has emerged on the overall procyclical nature of mortality: perhaps counter-intuitively, mortality typically rises above its trend during expansions. This consensus has been tarnished by inconsistent reports on the specific age groups and mortality causes involved. We show that these inconsistencies result, in part, from the trend specifications used in previous panel models. Standard econometric panel analysis involves fitting regression models using ordinary least squares, employing standard errors which are robust to temporal autocorrelation. The model specifications include a fixed effect, and possibly a linear trend, for each time series in the panel. We propose alternative methodology based on nonlinear detrending. Applying our methodology on data for the 50 US states from 1980 to 2006, we obtain more precise and consistent results than previous studies. We find procyclical mortality in all age groups. We find clear procyclical mortality due to respiratory disease and traffic injuries. Predominantly procyclical cardiovascular disease mortality and countercyclical suicide are subject to substantial state-to-state variation. Neither cancer nor homicide has a significant macroeconomic association.
Macroeconomic effects on mortality revealed by panel analysis with nonlinear trends
Ionides, Edward L.; Wang, Zhen; Tapia Granados, José A.
2013-01-01
Many investigations have used panel methods to study the relationships between fluctuations in economic activity and mortality. A broad consensus has emerged on the overall procyclical nature of mortality: perhaps counter-intuitively, mortality typically rises above its trend during expansions. This consensus has been tarnished by inconsistent reports on the specific age groups and mortality causes involved. We show that these inconsistencies result, in part, from the trend specifications used in previous panel models. Standard econometric panel analysis involves fitting regression models using ordinary least squares, employing standard errors which are robust to temporal autocorrelation. The model specifications include a fixed effect, and possibly a linear trend, for each time series in the panel. We propose alternative methodology based on nonlinear detrending. Applying our methodology on data for the 50 US states from 1980 to 2006, we obtain more precise and consistent results than previous studies. We find procyclical mortality in all age groups. We find clear procyclical mortality due to respiratory disease and traffic injuries. Predominantly procyclical cardiovascular disease mortality and countercyclical suicide are subject to substantial state-to-state variation. Neither cancer nor homicide has a significant macroeconomic association. PMID:24587843
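The methodological point (fit a nonlinear trend per series, then study deviations from it) can be illustrated with a cubic-polynomial detrend on synthetic state-level data; the simulated procyclical relationship, trend shape and noise levels are invented for illustration:

```python
import numpy as np

def detrended(series, degree=3):
    """Deviation from a nonlinear (cubic-polynomial) trend -- a simple
    stand-in for the flexible detrending the authors advocate."""
    t = np.arange(len(series), dtype=float)
    trend = np.polyval(np.polyfit(t, series, degree), t)
    return series - trend

rng = np.random.default_rng(2)
t = np.arange(27, dtype=float)                    # e.g. annual data, 1980-2006
cycle = np.sin(2 * np.pi * t / 8)                 # shared business cycle
unemployment = 6.0 + cycle + 0.1 * rng.standard_normal(27)
# procyclical mortality: it falls when unemployment rises, on a curved trend
mortality = 900 - 3 * t + 0.05 * t ** 2 - 5 * cycle + rng.standard_normal(27)

du, dm = detrended(unemployment), detrended(mortality)
beta = float(du @ dm / (du @ du))   # slope of mortality deviations on
                                    # unemployment deviations (negative here)
```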
Adaptive correlation filter-based video stabilization without accumulative global motion estimation
NASA Astrophysics Data System (ADS)
Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil
2014-12-01
We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information about past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel schemes, we readily implemented it on a commercial field programmable gate array and on a graphics processing unit board with the compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.
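A correlation-based interframe motion estimate of the kind such stabilizers build on can be sketched with FFT phase correlation, which recovers the translation between consecutive frames without accumulating global motion; this is a generic sketch, not the paper's adaptive correlation filter:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer translation taking ref into cur from the peak
    of the normalized cross-power spectrum (phase correlation)."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:          # unwrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(4)
frame = rng.random((64, 64))
jittered = np.roll(frame, (3, -5), axis=(0, 1))    # simulated camera jitter
dy, dx = phase_correlation_shift(frame, jittered)  # recovers (3, -5)
```

A stabilizer would then warp the frame by the negated shift to cancel the jitter.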
Instrumental variables estimates of peer effects in social networks.
An, Weihua
2015-03-01
Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, among other problems. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violations of the exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimated and imprecise estimates of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.
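The IV strategy can be sketched as two-stage least squares: regress the endogenous regressor on the instrument, then regress the outcome on the fitted values. The toy data below build in an unobserved confounder so the naive OLS slope is biased upward while the IV estimate recovers the true effect; all coefficients are invented and are not the paper's estimates:

```python
import numpy as np

def two_stage_ls(y, x, z):
    """Two-stage least squares for a single endogenous regressor x,
    one instrument z, and an intercept; returns the slope on x."""
    Z = np.column_stack([np.ones_like(z), z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # first stage
    X = np.column_stack([np.ones_like(xhat), xhat])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])  # second stage

rng = np.random.default_rng(3)
n = 20000
z = rng.standard_normal(n)              # instrument, exogenous by construction
conf = rng.standard_normal(n)           # unobserved confounder
x = 0.8 * z + conf + rng.standard_normal(n)     # endogenous exposure
y = 0.4 * x + conf + rng.standard_normal(n)     # true effect of x on y is 0.4

naive = float(np.cov(x, y)[0, 1] / np.var(x))   # OLS slope, biased upward
iv = two_stage_ls(y, x, z)                      # close to the true 0.4
```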
Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang
2018-03-27
Estimating the parameters of sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
Antigenic Distance Measurements for Seasonal Influenza Vaccine Selection
Cai, Zhipeng; Zhang, Tong; Wan, Xiu-Feng
2011-01-01
Influenza vaccination is one of the major options to counteract the effects of influenza disease. Selection of an effective vaccine strain is the key to the success of an effective vaccination program since vaccine protection can only be achieved when the selected influenza vaccine strain matches the antigenic variants causing future outbreaks. Identification of an antigenic variant is the first step to determine whether the vaccine strain needs to be updated. Antigenic distance derived from immunological assays, such as hemagglutination inhibition, is commonly used to measure the antigenic closeness between circulating strains and the current influenza vaccine strain. Thus, consensus on an explicit and robust antigenic distance measurement is critical in influenza surveillance. Based on the current seasonal influenza surveillance procedure, we propose and compare three antigenic distance measurements, including Average antigenic distance (A-distance), Mutual antigenic distance (M-distance), and Largest antigenic distance (L-distance). With the assistance of influenza antigenic cartography, our simulation results demonstrated that M-distance is a robust influenza antigenic distance measurement. Experimental results on both simulation and seasonal influenza surveillance data demonstrate that M-distance can be effectively utilized in influenza vaccine strain selection. PMID:22063385
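Candidate distance measurements between strain groups can be sketched on a toy 2-D antigenic map. The abstract does not give the exact definitions of the A-, M-, and L-distances, so the sketch below assumes average pairwise, centroid-to-centroid, and largest pairwise distance, respectively; the centroid interpretation of the "mutual" (M) distance in particular is an assumption, not the paper's definition:

```python
import numpy as np

def antigenic_distances(a, b):
    """Three candidate distances between two groups of strains placed on a
    2-D antigenic map: average pairwise (assumed A-distance),
    centroid-to-centroid (assumed M-distance), largest pairwise (L-distance)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise table
    return float(d.mean()), float(np.linalg.norm(a.mean(0) - b.mean(0))), float(d.max())

vaccine = np.array([[0.0, 0.0], [1.0, 0.0]])        # vaccine-strain cluster
circulating = np.array([[4.0, 0.0], [6.0, 0.0]])    # circulating variants
a_dist, m_dist, l_dist = antigenic_distances(vaccine, circulating)
```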
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
Fee, David; Izbekov, Pavel; Kim, Keehoon; ...
2017-10-09
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.
Robust automatic measurement of 3D scanned models for the human body fat estimation.
Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo
2015-03-01
In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans that is independent of pose and robust against topological noise. It is based on automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
Robust Portfolio Optimization Using Pseudodistances.
Toma, Aida; Leoni-Aubin, Samuela
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
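As a rough illustration of plugging robust estimates into mean-variance optimization, the sketch below computes two-asset minimum-variance weights from median-based location and comedian-style second-moment estimates. This is not the paper's minimum-pseudodistance estimator; the function name and the median-based plug-ins are illustrative stand-ins only.

```python
import statistics

def two_asset_min_variance(r1, r2):
    """Minimum-variance weights for two assets, with medians and
    comedian-style second moments as robust plug-ins in place of
    the classical mean/covariance estimators."""
    m1, m2 = statistics.median(r1), statistics.median(r2)
    d1 = [x - m1 for x in r1]
    d2 = [x - m2 for x in r2]
    v1 = statistics.median([x * x for x in d1])   # robust variance proxy
    v2 = statistics.median([x * x for x in d2])
    c12 = statistics.median([a * b for a, b in zip(d1, d2)])  # robust covariance proxy
    w1 = (v2 - c12) / (v1 + v2 - 2.0 * c12)
    return w1, 1.0 - w1
```

Because medians replace means, a single wild return moves the weights far less than it would under the classical estimators.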
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured point cloud, acquired, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. To minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
Valsecchi, M G; Silvestri, D; Sasieni, P
1996-12-30
We consider methodological problems in evaluating long-term survival in clinical trials. In particular, we examine the use of several methods that extend the basic Cox regression analysis. With long-term observation, the proportional hazards (PH) assumption may easily be violated, and a few long-term survivors may have a large effect on parameter estimates. We consider both model selection and robust estimation in a data set of 474 ovarian cancer patients enrolled in a clinical trial and followed for between 7 and 12 years after randomization. Two diagnostic plots for assessing goodness of fit are introduced. One shows the variation in time of parameter estimates and is an alternative to PH checking based on time-dependent covariates. The other takes advantage of the martingale residual process in time to represent the lack of fit with a metric of the type 'observed minus expected' number of events. Robust estimation is carried out by maximizing a weighted partial likelihood which downweights the contribution of influential observations. This type of complementary analysis of long-term results of clinical studies is useful in assessing the soundness of conclusions on treatment effect. In the example analysed here, the difference in survival between treatments was mostly confined to those individuals who survived at least two years beyond randomization.
The wisdom of the commons: ensemble tree classifiers for prostate cancer prognosis
Koziol, James A.; Feng, Anne C.; Jia, Zhenyu; Wang, Yipeng; Goodison, Seven; McClelland, Michael; Mercola, Dan
2009-01-01
Motivation: Classification and regression trees have long been used for cancer diagnosis and prognosis. Nevertheless, instability and variable selection bias, as well as overfitting, are well-known problems of tree-based methods. In this article, we investigate whether ensemble tree classifiers can ameliorate these difficulties, using data from two recent studies of radical prostatectomy in prostate cancer. Results: Using time to progression following prostatectomy as the relevant clinical endpoint, we found that ensemble tree classifiers robustly and reproducibly identified three subgroups of patients in the two clinical datasets: non-progressors, early progressors and late progressors. Moreover, the consensus classifications were independent predictors of time to progression compared to known clinical prognostic factors. Contact: dmercola@uci.edu PMID:18628288
Consensus on surgical aspects of managing osteomyelitis in the diabetic foot
Allahabadi, Sachin; Haroun, Kareem B.; Musher, Daniel M.; Lipsky, Benjamin A.; Barshes, Neal R.
2016-01-01
Background The aim of this study was to develop consensus statements that may help share or even establish ‘best practices’ in the surgical aspects of managing diabetic foot osteomyelitis (DFO) that can be applied in appropriate clinical situations pending the publication of more high-quality data. Methods We asked 14 panelists with expertise in DFO management to participate. Delphi methodology was used to develop consensus statements. First, a questionnaire elicited practices and beliefs concerning various aspects of the surgical management of DFO. Thereafter, we constructed 63 statements for analysis and, using a nine-point Likert scale, asked the panelists to indicate the extent to which they agreed or disagreed with the statements. We defined consensus as a mean score of greater than 7.0. Results The panelists reached consensus on 38 items after three rounds. Among these, seven provide guidance on initial diagnosis of DFO and selection of patients for surgical management. Another 15 statements provide guidance on specific aspects of operative management, including the timing of operations and the type of specimens to be obtained. Ten statements provide guidance on postoperative management, including wound closure and offloading, and six statements summarize the panelists’ agreement on general principles for surgical management of DFO. Conclusions Consensus statement on the perioperative management of DFO were formed with an expert panel comprised of a variety of surgical specialties. We believe these statements may serve as ‘best practice’ guidelines until properly performed studies provide more robust evidence to support or refute specific surgical management steps in DFO. PMID:27414481
Wheeler, Russell L.
2009-01-01
Most probabilistic seismic-hazard assessments require an estimate of Mmax, the magnitude (M) of the largest earthquake that is thought possible within a specified area. In seismically active areas such as some plate boundaries, large earthquakes occur frequently enough that Mmax might have been observed directly during the historical period. In less active regions like most of the Central and Eastern United States and adjacent Canada, large earthquakes are much less frequent and generally Mmax must be estimated indirectly. The indirect-estimation methods are many, their results vary widely, and opinions differ as to which methods are valid. This lack of consensus about Mmax estimation increases the uncertainty of hazard assessments for planned nuclear power reactors and increases design and construction costs. Accordingly, the U.S. Geological Survey and the U.S. Nuclear Regulatory Commission held an open workshop on Mmax estimation in the Central and Eastern United States and adjacent Canada. The workshop was held on Monday and Tuesday, September 8 and 9, 2008, at the U.S. Geological Survey offices in Golden, Colorado. Thirty-five people attended. The workshop goals were to reach consensus on one or more of: (1) the relative merits of the various methods of Mmax estimation, (2) which methods are invalid, (3) which methods are promising but not yet ready for use, and (4) what research is needed to reach consensus on the values and relative importance of the individual estimation methods.
Shortt, S E D; Guillemette, Jean-Marc; Duncan, Anne Marie; Kirby, Frances
2010-01-01
The rapid increase in the use of the Internet for continuing education by physicians suggests the need to define quality criteria for accredited online modules. Continuing medical education (CME) directors from Canadian medical schools and academic researchers participated in a consensus process, Modified Nominal Group Technique, to develop agreement on the most important quality criteria to guide module development. Rankings were compared to responses to a survey of a subset of Canadian Medical Association (CMA) members. A list of 17 items was developed, of which 10 were deemed by experts to be important and 7 were considered secondary. A quality module would: be needs-based; presented in a clinical format; utilize evidence-based information; permit interaction with content and experts; facilitate and attempt to document practice change; be accessible for later review; and include a robust course evaluation. There was less agreement among CMA members on criteria ranking, with consensus on ranking reached on only 12 of 17 items. In contrast to experts, members agreed that the need to assess performance change as a result of an educational experience was not important. This project identified 10 quality criteria for accredited online CME modules that representatives of Canadian organizations involved in continuing education believe should be taken into account when developing learning products. The lack of practitioner support for documentation of change in clinical behavior may suggest that they favor traditional attendance- or completion-based CME; this finding requires further research.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
Kebir, Sied; Khurshid, Zain; Gaertner, Florian C.; Essler, Markus; Hattingen, Elke; Fimmers, Rolf; Scheffler, Björn; Herrlinger, Ulrich; Bundschuh, Ralph A.; Glas, Martin
2017-01-01
Rationale Timely detection of pseudoprogression (PSP) is crucial for the management of patients with high-grade glioma (HGG) but remains difficult. Textural features of O-(2-[18F]fluoroethyl)-L-tyrosine positron emission tomography (FET-PET) mirror tumor uptake heterogeneity; some of them may be associated with tumor progression. Methods Fourteen patients with HGG and suspected of PSP underwent FET-PET imaging. A set of 19 conventional and textural FET-PET features were evaluated and subjected to unsupervised consensus clustering. The final diagnosis of true progression vs. PSP was based on follow-up MRI using RANO criteria. Results Three robust clusters have been identified based on 10 predominantly textural FET-PET features. None of the patients with PSP fell into cluster 2, which was associated with high values for textural FET-PET markers of uptake heterogeneity. Three out of 4 patients with PSP were assigned to cluster 3 that was largely associated with low values of textural FET-PET features. By comparison, tumor-to-normal brain ratio (TNRmax) at the optimal cutoff 2.1 was less predictive of PSP (negative predictive value 57% for detecting true progression, p=0.07 vs. 75% with cluster 3, p=0.04). Principal Conclusions Clustering based on textural O-(2-[18F]fluoroethyl)-L-tyrosine PET features may provide valuable information in assessing the elusive phenomenon of pseudoprogression. PMID:28030820
Fine-tuning structural RNA alignments in the twilight zone.
Bremges, Andreas; Schirmer, Stefanie; Giegerich, Robert
2010-04-30
A widely used method to find conserved secondary structure in RNA is to first construct a multiple sequence alignment and then fold the alignment, optimizing a score based on thermodynamics and covariance. This method works best around 75% sequence similarity. However, in a "twilight zone" below 55% similarity, the sequence alignment tends to obscure the covariance signal used in the second phase. Therefore, while the overall shape of the consensus structure may still be found, the degree of conservation cannot be estimated reliably. Based on a combination of available methods, we present a method named planACstar for improving structure conservation in structural alignments in the twilight zone. After constructing a consensus structure by alignment folding, planACstar abandons the original sequence alignment, refolds the sequences individually but consistently with the consensus, aligns the structures irrespective of sequence by a pure structure alignment method, and derives an improved sequence alignment from the alignment of structures, to be re-submitted to alignment folding. This cycle may be iterated as long as structural conservation improves, but normally one step suffices. Employing the tools ClustalW, RNAalifold, and RNAforester, we find that for sequences with 30-55% sequence identity, structural conservation can be improved by 10% on average, with large variation, measured in terms of RNAalifold's own criterion, the structure conservation index.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
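The weighted-capacity-factor approximation mentioned above can be sketched as follows. The function name, the load-based weighting, and the top-hours cutoff are simplifying assumptions standing in for the loss-of-load-probability (LOLP) weights used by full reliability-based methods.

```python
def pv_capacity_value(load, pv_output, pv_capacity, top_hours):
    """Weighted-capacity-factor approximation: the plant's capacity
    factor over the highest-load hours, each hour weighted by its
    load as a simple stand-in for reliability (LOLP) weights."""
    # Keep only the top_hours hours of highest system load
    ranked = sorted(zip(load, pv_output), reverse=True)[:top_hours]
    weighted_cf = sum(l * (p / pv_capacity) for l, p in ranked)
    return weighted_cf / sum(l for l, _ in ranked)
```

A plant producing at full capacity in every peak hour gets a capacity value of 1.0; one idle during peaks gets 0.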
A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Zhu, Youding; Fujimura, Kikuo
2010-01-01
This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty to recover from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternately, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
Generalized Distributed Consensus-based Algorithms for Uncertain Systems and Networks
2010-01-01
[Abstract not available: the indexed text consists only of reference and table-of-contents fragments concerning discrete-time Markovian jump linear systems with additive disturbances, state estimation under delayed mode observations, and discrete-time coupled matrix equations.]
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Probabilistic consensus scoring improves tandem mass spectrometry peptide identification.
Nahnsen, Sven; Bertsch, Andreas; Rahnenführer, Jörg; Nordheim, Alfred; Kohlbacher, Oliver
2011-08-05
Database search is a standard technique for identifying peptides from their tandem mass spectra. To increase the number of correctly identified peptides, we suggest a probabilistic framework that allows the combination of scores from different search engines into a joint consensus score. Central to the approach is a novel method to estimate scores for peptides not found by an individual search engine. This approach allows the estimation of p-values for each candidate peptide and their combination across all search engines. The consensus approach works better than any single search engine across all different instrument types considered in this study. Improvements vary strongly from platform to platform and from search engine to search engine. Compared to the industry standard MASCOT, our approach can identify up to 60% more peptides. The software for consensus predictions is implemented in C++ as part of OpenMS, a software framework for mass spectrometry. The source code is available in the current development version of OpenMS and can easily be used as a command line application or via a graphical pipeline designer TOPPAS.
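One standard way to combine independent per-engine p-values into a joint score, shown here purely for illustration, is Fisher's method; this is not necessarily the exact combination rule of the consensus framework above. For an even number of degrees of freedom the chi-square survival function has a closed form, so the sketch needs only the standard library.

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: -2*sum(ln p_i) is chi-square with 2n degrees
    of freedom under H0; for even dof the survival function has the
    closed form exp(-x/2) * sum_{k<n} (x/2)^k / k!."""
    n = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(n))
```

A single p-value passes through unchanged, while several small p-values combine into a much smaller consensus p-value.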
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Tipton, Elizabeth
2014-01-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
An application of robust ridge regression model in the presence of outliers to real data problem
NASA Astrophysics Data System (ADS)
Shariff, N. S. Md.; Ferdaos, N. A.
2017-09-01
Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. A well-known procedure that is robust to the multicollinearity problem is ridge regression. This method, however, is affected by the presence of outliers. The combination of GM-estimation and the ridge parameter, which is robust to both problems, is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, a data set suspected of involving both multicollinearity and outlier problems. Four macroeconomic factors are selected for this study: Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR), and Money Supply (M1). The results demonstrate that the proposed procedure produces reliable results in the presence of multicollinearity and outliers in real data.
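A minimal one-predictor sketch of combining a ridge penalty with robust reweighting is given below, using Huber weights inside an iteratively reweighted least squares loop. The function name, the univariate restriction, and the Huber weighting are illustrative assumptions, not the paper's exact GM-ridge estimator.

```python
def robust_ridge_1d(x, y, ridge_k, c=1.345, iters=25):
    """One-predictor ridge slope with Huber IRLS reweighting:
    residuals larger than c get weight c/|r|, so outliers are
    down-weighted, while ridge_k shrinks the slope."""
    beta = 0.0
    for _ in range(iters):
        resid = [yi - beta * xi for xi, yi in zip(x, y)]
        # Huber weights: 1 inside the threshold, c/|r| outside
        w = [1.0 if abs(r) <= c else c / abs(r) for r in resid]
        num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        den = sum(wi * xi * xi for wi, xi in zip(w, x)) + ridge_k
        beta = num / den
    return beta
```

On clean data the estimate matches ordinary least squares; a gross outlier is down-weighted rather than dragging the slope.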
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
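For reference, the DerSimonian and Laird moment estimator discussed above can be written in a few lines; the function name is ours, and the sketch assumes known within-study variances.

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, truncated at zero."""
    k = len(effects)
    w = [1.0 / v for v in variances]        # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)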
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side effects of the methods used for EEG phase calculation, especially during low-analytic-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used to extract EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features with standard K-nearest-neighbors and random forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
Summary The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
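The standard doubly robust (augmented inverse-probability-weighted) ACE estimator that GLiDeR's selected models feed into can be sketched as follows; the function name is ours, and the propensity scores and outcome-model predictions are assumed to come from already-fitted models.

```python
def aipw_ace(y, a, e, m1, m0):
    """Augmented IPW (doubly robust) estimate of the average causal
    effect: outcome-model difference plus inverse-probability-weighted
    residual corrections. y: outcomes, a: binary treatment indicators,
    e: propensity scores P(A=1|X), m1/m0: predicted outcomes under
    treatment and control."""
    terms = [
        m1i - m0i
        + ai * (yi - m1i) / ei
        - (1 - ai) * (yi - m0i) / (1.0 - ei)
        for yi, ai, ei, m1i, m0i in zip(y, a, e, m1, m0)
    ]
    return sum(terms) / len(terms)
```

The correction terms vanish whenever the outcome models fit exactly, which is why consistency survives misspecification of either (but not both) model.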
Covariate selection with group lasso and doubly robust estimation of causal effects.
Koch, Brandon; Vock, David M; Wolfson, Julian
2018-03-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and is highly robust. Our approach extends both kernel density estimation (KDE) and the self-organizing incremental neural network (SOINN); therefore, we call it KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or matches the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
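The kernel density estimation half of KDESOINN can be illustrated with a plain fixed-bandwidth Gaussian KDE; the SOINN prototype network and its online learning are beyond this sketch, and the function below is a generic textbook form, not the authors' code.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Fixed-bandwidth KDE: average of Gaussian kernels centred on the samples."""
    samples = np.asarray(samples, dtype=float)[:, None]
    grid = np.asarray(grid, dtype=float)[None, :]
    z = (grid - samples) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    # mean over samples, rescaled so the estimate integrates to one
    return kernels.mean(axis=0) / bandwidth
```

KDESOINN can be thought of as replacing the raw samples here with learned prototypes, which keeps the estimator tractable on massive streams.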
Tire Force Estimation using a Proportional Integral Observer
NASA Astrophysics Data System (ADS)
Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben
2017-01-01
This paper presents a method for detecting critical stability situations in lateral vehicle dynamics by estimating the non-linear part of the tire forces, which indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach that minimizes the sensitivity of the residual to disturbances and uncertainties. It consists of designing a Proportional Integral Observer (PIO) that minimizes the well-known H∞ norm for worst-case uncertainty and disturbance attenuation while meeting a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim™ is presented to illustrate the developed method.
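A minimal discrete-time, scalar proportional-integral observer illustrates the PIO idea: the state estimate is corrected proportionally by the output residual, while an integral term on the same residual tracks an unknown input. The gains below are hand-picked for a toy plant, not synthesized from the paper's LMI conditions.

```python
def pi_observer(y_meas, u, A, B, C, E, Lp, Li):
    """Discrete-time PI observer: jointly estimates the state x and a
    slowly varying unknown input d from measurements y = C*x."""
    xh, dh = 0.0, 0.0                 # state and disturbance estimates
    history = []
    for y in y_meas:
        r = y - C * xh                # output residual
        xh = A * xh + B * u + E * dh + Lp * r   # proportional correction
        dh = dh + Li * r              # integral action tracks d
        history.append((xh, dh))
    return history

# Toy plant x+ = 0.9 x + d with constant unknown input d = 0.5.
A, B, C, E = 0.9, 1.0, 1.0, 1.0
d_true, u = 0.5, 0.0
x, ys = 0.0, []
for _ in range(200):
    ys.append(C * x)
    x = A * x + B * u + E * d_true
est = pi_observer(ys, u, A, B, C, E, Lp=0.5, Li=0.3)
```

With Lp = 0.5 and Li = 0.3 the error dynamics have eigenvalues inside the unit circle, so both estimates converge; in the tire-force setting, the tracked input d plays the role of the unmeasurable force.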
Meta-analysis of alcohol price and income elasticities – with corrections for publication bias
2013-01-01
Background This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
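The outlier adjustment via trimmed samples mentioned above can be sketched with a symmetric trimmed mean; the trim fraction and the numbers in the test are illustrative, not the study's elasticity estimates.

```python
def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding the smallest and largest trim_fraction of observations."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)
```

A single extreme study-level estimate can dominate an untrimmed average; trimming bounds its influence without modelling it explicitly.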
Multi-locus phylogenetic analysis reveals the pattern and tempo of bony fish evolution
Broughton, Richard E.; Betancur-R., Ricardo; Li, Chenhong; Arratia, Gloria; Ortí, Guillermo
2013-01-01
Over half of all vertebrates are “fishes”, which exhibit enormous diversity in morphology, physiology, behavior, reproductive biology, and ecology. Investigation of fundamental areas of vertebrate biology depend critically on a robust phylogeny of fishes, yet evolutionary relationships among the major actinopterygian and sarcopterygian lineages have not been conclusively resolved. Although a consensus phylogeny of teleosts has been emerging recently, it has been based on analyses of various subsets of actinopterygian taxa, but not on a full sample of all bony fishes. Here we conducted a comprehensive phylogenetic study on a broad taxonomic sample of 61 actinopterygian and sarcopterygian lineages (with a chondrichthyan outgroup) using a molecular data set of 21 independent loci. These data yielded a resolved phylogenetic hypothesis for extant Osteichthyes, including 1) reciprocally monophyletic Sarcopterygii and Actinopterygii, as currently understood, with polypteriforms as the first diverging lineage within Actinopterygii; 2) a monophyletic group containing gars and bowfin (= Holostei) as sister group to teleosts; and 3) the earliest diverging lineage among teleosts being Elopomorpha, rather than Osteoglossomorpha. Relaxed-clock dating analysis employing a set of 24 newly applied fossil calibrations reveals divergence times that are more consistent with paleontological estimates than previous studies. Establishing a new phylogenetic pattern with accurate divergence dates for bony fishes illustrates several areas where the fossil record is incomplete and provides critical new insights on diversification of this important vertebrate group. PMID:23788273
Aqil, Muhammad; Jeong, Myung Yung
2018-04-24
The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near-infrared spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on the noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
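The recursive least squares update at the heart of such time-varying parameter tracking can be sketched as follows, assuming a generic linear regression model rather than the authors' fNIRS hemodynamic basis; `lam` is the forgetting factor that lets the estimate track drifting parameters.

```python
import numpy as np

def rls(phi_seq, y_seq, n_params, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam.

    phi_seq : sequence of regressor vectors
    y_seq   : sequence of scalar observations y = phi @ theta_true (+ noise)
    """
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)          # large prior covariance
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (y - phi @ theta)   # correct by prediction error
        P = (P - np.outer(k, phi) @ P) / lam    # inflate old information
    return theta
```

With lam = 1 this reduces to ordinary recursive least squares; lam < 1 discounts old samples exponentially, which is what allows time-varying hemodynamic parameters to be tracked.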
NASA Astrophysics Data System (ADS)
Martinsson, J.
2013-03-01
We propose methods for robust Bayesian inference of the hypocentre in presence of poor, inconsistent and insufficient phase arrival times. The objectives are to increase the robustness, the accuracy and the precision by introducing heavy-tailed distributions and an informative prior distribution of the seismicity. The effects of the proposed distributions are studied under real measurement conditions in two underground mine networks and validated using 53 blasts with known hypocentres. To increase the robustness against poor, inconsistent or insufficient arrivals, a Gaussian Mixture Model is used as a hypocentre prior distribution to describe the seismically active areas, where the parameters are estimated based on previously located events in the region. The prior is truncated to constrain the solution to valid geometries, for example below the ground surface, excluding known cavities, voids and fractured zones. To reduce the sensitivity to outliers, different heavy-tailed distributions are evaluated to model the likelihood distribution of the arrivals given the hypocentre and the origin time. Among these distributions, the multivariate t-distribution is shown to produce the overall best performance, where the tail-mass adapts to the observed data. Hypocentre and uncertainty region estimates are based on simulations from the posterior distribution using Markov Chain Monte Carlo techniques. Velocity graphs (equivalent to traveltime graphs) are estimated using blasts from known locations, and applied to reduce the main uncertainties and thereby the final estimation error. To focus on the behaviour and the performance of the proposed distributions, a basic single-event Bayesian procedure is considered in this study for clarity. 
Estimation results are shown with different distributions, with and without prior distribution of seismicity, with wrong prior distribution, with and without error compensation, with and without error description, with insufficient arrival times and in presence of significant outliers. A particular focus is on visual results and comparisons to give a better understanding of the Bayesian advantage and to show the effects of heavy-tailed distributions and informative prior information on real data.
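The benefit of a heavy-tailed likelihood can be illustrated with a one-dimensional analogue: maximum-likelihood location estimation under a Student-t model via iteratively reweighted averaging, which automatically down-weights outlying observations much as outlying arrival times are down-weighted here. This is a toy sketch, not the paper's MCMC hypocentre procedure; the degrees of freedom `nu` and the scale are assumed fixed.

```python
def t_location(values, nu=3.0, scale=1.0, iters=50):
    """ML location under a Student-t likelihood via iteratively
    reweighted averaging; heavy tails down-weight outliers."""
    mu = sum(values) / len(values)        # start from the plain mean
    for _ in range(iters):
        # t-model weights: large residuals get weights near zero
        w = [(nu + 1.0) / (nu + ((v - mu) / scale) ** 2) for v in values]
        mu = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mu
```

Under a Gaussian likelihood all weights would be equal, so a single gross outlier drags the estimate; under the t likelihood its weight collapses and the estimate stays near the bulk of the data.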
Lack of consensus in social systems
NASA Astrophysics Data System (ADS)
Benczik, I. J.; Benczik, S. Z.; Schmittmann, B.; Zia, R. K. P.
2008-05-01
We propose an exactly solvable model for the dynamics of voters in a two-party system. The opinion formation process is modeled on a random network of agents. The dynamical nature of interpersonal relations is also reflected in the model, as the connections in the network evolve with the dynamics of the voters. In the infinite time limit, an exact solution predicts the emergence of consensus, for arbitrary initial conditions. However, before consensus is reached, two different metastable states can persist for exponentially long times. One state reflects a perfect balancing of opinions, the other reflects a completely static situation. An estimate of the associated lifetimes suggests that lack of consensus is typical for large systems.
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
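As a minimal illustration of why robust regression resists the artifacts described above, the sketch below uses the Theil-Sen estimator (median of pairwise slopes), a classical high-breakdown alternative to least squares; the study's own robust estimator may differ, so treat this as an analogy only.

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(x, y):
    """Median of all pairwise slopes: a classical high-breakdown
    robust regression estimator for a single predictor."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    return median(slopes)
```

A single grossly corrupted subject shifts the least-squares slope arbitrarily far, but leaves the median of pairwise slopes untouched as long as fewer than about 29% of points are outliers.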
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of moderately-shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, the diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded intermediate fHe value (0.49) compared to ALHA77005 (0.97). For less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. 
Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values that are slightly higher than, or consistent with, estimates from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-man
2012-01-01
Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…
Progress toward Consensus Estimates of Regional Glacier Mass Balances for IPCC AR5
NASA Astrophysics Data System (ADS)
Arendt, A. A.; Gardner, A. S.; Cogley, J. G.
2011-12-01
Glaciers are potentially large contributors to rising sea level. Since the last IPCC report in 2007 (AR4), there has been a widespread increase in the use of geodetic observations from satellite and airborne platforms to complement field observations of glacier mass balance, as well as significant improvements in the global glacier inventory. Here we summarize our ongoing efforts to integrate data from multiple sources to arrive at a consensus estimate for each region, and to quantify uncertainties in those estimates. We will use examples from Alaska to illustrate methods for combining Gravity Recovery and Climate Experiment (GRACE), elevation differencing and field observations into a single time series with related uncertainty estimates. We will pay particular attention to reconciling discrepancies between GRACE estimates from multiple processing centers. We will also investigate the extent to which improvements in the glacier inventory affect the accuracy of our regional mass balances.
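A consensus estimate that integrates independent regional mass-balance estimates together with their uncertainties can be sketched with standard inverse-variance weighting; the numbers in the test are made up for illustration, not Alaskan values.

```python
import math

def combine_estimates(estimates, std_errors):
    """Inverse-variance weighted consensus of independent estimates.

    Returns the pooled estimate and its standard error; more precise
    inputs (smaller standard errors) receive proportionally more weight.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, math.sqrt(1.0 / total)
```

This is the simplest way to reconcile, say, GRACE, elevation-differencing, and field estimates into one number; reconciling systematic discrepancies between processing centers requires more than weighting, which is part of what the effort above addresses.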
A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.
Yu, Fei; Lv, Chongyang; Dong, Qianhui
2016-03-18
Owing to their numerous merits, such as compactness, autonomy, and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. Moreover, because of the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which causes its output to be discontinuous; the resulting uncertainty from lost measurements reduces system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experimental results show that attitude errors can be reduced effectively by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF), the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and the availability of the proposed robust H∞ filter.
Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow
Layton, Oliver W.; Fajen, Brett R.
2016-01-01
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading. PMID:27341686
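The soft winner-take-all idea, recurrent amplification of the strongest heading signal combined with divisive (global) inhibition, can be caricatured in a few lines; this is a discrete toy dynamic, not the model's MSTd circuit.

```python
import numpy as np

def soft_wta(activity, steps=10, power=2.0):
    """Repeated nonlinear amplification plus divisive normalization:
    the most active unit gradually suppresses its competitors."""
    a = np.asarray(activity, dtype=float)
    for _ in range(steps):
        a = a ** power         # self-excitation (supralinear gain)
        a = a / a.sum()        # global divisive inhibition
    return a
```

Because the winning unit's advantage compounds across iterations, transient perturbations of the input (e.g. a moving object briefly crossing the flow field) are absorbed rather than propagated into the heading estimate, which is the stabilizing role competition plays in the model.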
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.
Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco
2016-12-07
Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are highly stochastic, the power generated by the wind turbine, and hence its forecast, is of the same nature. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. The observations used for the statistical analysis are obtained with the binning method, and in each bin the outliers are eliminated by a censorship process based on robust statistical techniques. The observations that are not outliers are then divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
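The binning-plus-censoring step can be sketched as follows, using the median and the scaled MAD as the robust statistics for outlier censoring; the paper's censorship process may use different robust rules, so this is an assumption-laden illustration rather than the authors' procedure.

```python
import numpy as np

def binned_power_curve(speed, power, bin_edges, k=3.0):
    """Per-bin robust summary of a power curve: censor points farther than
    k scaled-MADs from the bin median, then average the survivors."""
    speed, power = np.asarray(speed, float), np.asarray(power, float)
    curve = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        p = power[(speed >= lo) & (speed < hi)]
        if p.size == 0:
            curve.append(np.nan)          # empty bin
            continue
        med = np.median(p)
        mad = 1.4826 * np.median(np.abs(p - med))   # robust scale estimate
        keep = p if mad == 0 else p[np.abs(p - med) <= k * mad]
        curve.append(keep.mean())
    return curve
```

The factor 1.4826 makes the MAD consistent with the standard deviation under normality, so k plays the role of a conventional "k-sigma" cut while remaining insensitive to the outliers themselves.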
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.
2016-12-01
Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus and wrong estimation of the uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
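The subset-selection idea, choosing the ensemble whose mean minimizes RMSE against an observation product, can be sketched as an exhaustive search; this brute-force form is tractable only for small ensembles (the CMIP5 case would need a smarter search), and the inputs are illustrative toy fields, not climate data.

```python
import numpy as np
from itertools import combinations

def best_subset(models, obs, max_size=None):
    """Exhaustive search for the model subset whose ensemble mean has
    the lowest RMSE against the observations."""
    models, obs = np.asarray(models, float), np.asarray(obs, float)
    n = len(models)
    max_size = max_size or n
    best, best_rmse = None, np.inf
    for size in range(1, max_size + 1):
        for idx in combinations(range(n), size):
            mean = models[list(idx)].mean(axis=0)   # ensemble mean field
            rmse = np.sqrt(np.mean((mean - obs) ** 2))
            if rmse < best_rmse:
                best, best_rmse = idx, rmse
    return best, best_rmse
```

Note how the winning subset can contain individually biased members whose errors cancel in the mean, which is exactly the error-cancellation effect described above, and why per-model performance ranking can pick a worse ensemble.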
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
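The state-estimation half of such a scheme can be illustrated with a minimal scalar Kalman filter tracking a nearly constant quantity from noisy measurements; the joint input-force (unbalance) estimation and the SEREP model reduction used in the paper are beyond this sketch, and the noise covariances below are arbitrary.

```python
def kalman_constant(zs, q=1e-5, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed as z = x + v.

    q : process noise covariance, r : measurement noise covariance.
    """
    x, p = x0, p0
    for z in zs:
        p = p + q                 # predict: state drifts as a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # measurement update
        p = (1.0 - k) * p         # covariance update
    return x
```

The process noise covariance q sets the predict/track trade-off the abstract alludes to: a small q makes the estimate smooth and noise-robust, a large q lets it follow fast parameter changes.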
Robust estimators for speech enhancement in real environments
NASA Astrophysics Data System (ADS)
Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly
2015-09-01
Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life owing to the nonstationary characteristics of speech and noise processes. We propose new estimators, derived from existing ones by incorporating rank-order statistics. The proposed estimators are better adapted to the non-stationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield a better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.
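Rank-order statistics enter such estimators, for example, as running medians, which suppress impulsive deviations that a mean-based estimator would smear across neighbouring frames; a minimal sliding-median smoother (an illustration, not the authors' estimator) looks like this:

```python
def median_smooth(signal, half_window=1):
    """Rank-order (running median) smoothing: replaces each sample by
    the median of its neighbourhood, suppressing impulsive outliers."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])   # middle order statistic
    return out
```

Applied to per-frame gain or noise-power estimates, this kind of rank-order smoothing is far less sensitive to short nonstationary bursts than a moving average.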
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to overfitting when test data are insufficient. To address this issue, the proposed histogram equalization technique employs Bayesian estimation when estimating the test cumulative distribution function. A previous study conducted on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on Gaussian mixture model-hidden Markov model acoustic modeling. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
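The core of histogram equalization, mapping each test feature to the reference-distribution quantile at its empirical CDF position, can be sketched as quantile mapping; the Bayesian CDF estimation proposed in the paper is not shown here, and the mid-rank CDF convention is one of several common choices.

```python
import numpy as np

def histogram_equalize(test, reference):
    """Map each test sample to the reference-distribution quantile
    matching its empirical CDF position (quantile-mapping HEQ)."""
    test = np.asarray(test, dtype=float)
    ref = np.sort(np.asarray(reference, dtype=float))
    ranks = np.argsort(np.argsort(test))   # 0..n-1 rank of each sample
    q = (ranks + 0.5) / len(test)          # mid-rank CDF positions
    return np.quantile(ref, q)
```

After this mapping, the transformed test features follow the reference distribution by construction, which is what removes the train/test mismatch; the overfitting problem discussed above arises because the empirical CDF `q` is unreliable when the test utterance is short.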
Generalized Ordinary Differential Equation Models.
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate or, more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available for optimization of robust controller performance and stability.
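The spectral estimate h = P_uy/P_uu can be illustrated with a bare-bones averaged cross-periodogram. This is a generic sketch of the quantity, not the methodology's implementation; the plant here is a hypothetical pure gain so the answer is known exactly:

```python
import numpy as np

def transfer_estimate(u, y, nseg=8):
    """Estimate H(f) = P_uy / P_uu by averaging cross-periodograms over
    nseg non-overlapping segments (a bare-bones Welch-style estimate)."""
    n = len(u) // nseg
    Puu = np.zeros(n // 2 + 1)
    Puy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        U = np.fft.rfft(u[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Puu += (U.conj() * U).real   # auto-spectrum accumulation
        Puy += U.conj() * Y          # cross-spectrum accumulation
    return Puy / Puu

# Hypothetical plant: y = 0.5*u (a pure gain), driven by white noise.
rng = np.random.default_rng(1)
u = rng.normal(size=4096)
y = 0.5 * u
H = transfer_estimate(u, y)
print(np.allclose(H, 0.5))  # gain recovered at every frequency
```

With noisy outputs (y = p·u + noise) the same ratio gives a consistent estimate of the plant's frequency response, which is then curve-fit to a rational transfer function as the abstract describes.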
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of the LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for the residual frequency offset (RFO) and the sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
NASA Astrophysics Data System (ADS)
Kim, Y.; Chung, E. S.
2014-12-01
This study suggests a robust prioritization framework for climate change adaptation strategies under multiple climate change scenarios, with a case study of selecting sites for reusing treated wastewater (TWW) in a Korean urban watershed. The framework utilizes various multi-criteria decision making techniques, including the VIKOR method and Shannon entropy-based weights. In this case study, the sustainability of TWW use is quantified with indicator-based approaches under the DPSIR framework, which considers both the hydro-environmental and socio-economic aspects of watershed management. Under the various climate change scenarios, the hydro-environmental responses to reusing TWW in potential alternative sub-watersheds are determined using the Hydrologic Simulation Program-Fortran (HSPF). The socio-economic indicators are obtained from statistical databases. Sustainability scores for the multiple scenarios are estimated individually and then integrated with the proposed approach. Finally, the suggested framework makes it possible to prioritize adaptation strategies in a robust manner with varying levels of compromise between utility-based and regret-based strategies.
Klett, Hagen; Fuellgraf, Hannah; Levit-Zerdoun, Ella; Hussung, Saskia; Kowar, Silke; Küsters, Simon; Bronsert, Peter; Werner, Martin; Wittel, Uwe; Fritsch, Ralph; Busch, Hauke; Boerries, Melanie
2018-01-01
Late diagnosis and systemic dissemination contribute essentially to the invariably poor prognosis of pancreatic ductal adenocarcinoma (PDAC). Therefore, the development of diagnostic biomarkers for PDAC is urgently needed to improve patient stratification and outcome in the clinic. By studying the transcriptomes of independent PDAC patient cohorts of tumor and non-tumor tissues, we identified 81 robustly regulated genes through a novel, generally applicable meta-analysis. Consensus clustering on co-expression values revealed four distinct clusters with genes originating from exocrine/endocrine pancreas, stromal, and tumor cells. Three clusters were strongly associated with survival of PDAC patients based on the TCGA database, underlining the prognostic potential of the identified genes. With the added information of impact on survival and robustness within the meta-analysis, we extracted a 17-gene subset for further validation. We show that it not only discriminated PDAC from non-tumor tissue and stroma in fresh-frozen as well as formalin-fixed paraffin-embedded samples, but also detected pancreatic precursor lesions and singled out pancreatitis samples. Moreover, the classifier discriminated PDAC from other cancers in the TCGA database. In addition, we experimentally validated the classifier in PDAC patients at the transcript level using qPCR and exemplify its usage at the protein level for three proteins (AHNAK2, LAMC2, TFF1) using immunohistochemistry and for two secreted proteins (TFF1, SERPINB5) using ELISA-based protein detection in blood plasma. In conclusion, we present a novel robust diagnostic and prognostic gene signature for PDAC with potential future applicability in the clinic.
The Impact of Missing Data on Species Tree Estimation.
Xi, Zhenxiang; Liu, Liang; Davis, Charles C
2016-03-01
Phylogeneticists are increasingly assembling genome-scale data sets that include hundreds of genes to resolve their focal clades. Although these data sets commonly include a moderate to high amount of missing data, there remains no consensus on their impact to species tree estimation. Here, using several simulated and empirical data sets, we assess the effects of missing data on species tree estimation under varying degrees of incomplete lineage sorting (ILS) and gene rate heterogeneity. We demonstrate that concatenation (RAxML), gene-tree-based coalescent (ASTRAL, MP-EST, and STAR), and supertree (matrix representation with parsimony [MRP]) methods perform reliably, so long as missing data are randomly distributed (by gene and/or by species) and that a sufficiently large number of genes are sampled. When data sets are indecisive sensu Sanderson et al. (2010. Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 10:155) and/or ILS is high, however, high amounts of missing data that are randomly distributed require exhaustive levels of gene sampling, likely exceeding most empirical studies to date. Moreover, missing data become especially problematic when they are nonrandomly distributed. We demonstrate that STAR produces inconsistent results when the amount of nonrandom missing data is high, regardless of the degree of ILS and gene rate heterogeneity. Similarly, concatenation methods using maximum likelihood can be misled by nonrandom missing data in the presence of gene rate heterogeneity, which becomes further exacerbated when combined with high ILS. In contrast, ASTRAL, MP-EST, and MRP are more robust under all of these scenarios. These results underscore the importance of understanding the influence of missing data in the phylogenomics era.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
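The voxelwise fusion of the two conditional probabilities into a posterior can be illustrated with a toy calculation. The density bins and probability values below are invented for illustration and are not from the study, which works with continuous probability density functions:

```python
import numpy as np

# Hypothetical discrete electron-density bins (illustrative values only),
# loosely standing in for air, soft tissue, and bone.
density_bins = np.array([0.2, 1.0, 1.8])

# For one voxel: p(density | MR intensity) and p(density | atlas location).
p_given_intensity = np.array([0.1, 0.7, 0.2])
p_given_location = np.array([0.05, 0.25, 0.70])

# Bayesian-style fusion: multiply the two conditionals and renormalize.
post = p_given_intensity * p_given_location
post /= post.sum()

# Posterior-mean density estimate for this voxel.
estimate = np.sum(density_bins * post)
print(post.round(3), round(estimate, 3))
```

The point of the fusion is visible even in this toy case: intensity alone favors soft tissue, but the atlas location shifts the posterior toward bone, and the combined estimate lands between the two.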
Spatiotemporal multistage consensus clustering in molecular dynamics studies of large proteins.
Kenn, Michael; Ribarics, Reiner; Ilieva, Nevena; Cibena, Michael; Karch, Rudolf; Schreiner, Wolfgang
2016-04-26
The aim of this work is to find semi-rigid domains within large proteins to serve as reference structures for fitting molecular dynamics trajectories. We propose an algorithm, multistage consensus clustering (MCC), based on minimum variation of distances between pairs of Cα-atoms as the target function. The whole dataset (trajectory) is split into sub-segments. For a given sub-segment, spatial clustering is repeatedly started from different random seeds, and we adopt the specific spatial clustering with the minimum target function; the process described so far is stage 1 of MCC. Then, in stage 2, the results of spatial clustering are consolidated to arrive at domains stable over the whole dataset. We found that MCC is robust with regard to the choice of parameters and yields relevant information on functional domains of the major histocompatibility complex (MHC) studied in this paper: the α-helices and β-floor of the protein proved to be most flexible and did not contribute to clusters of significant size. Three alleles of the MHC, each in complex with the ABCD3 peptide and the LC13 T-cell receptor (TCR), yielded different patterns of motion. Those alleles causing immunological allo-reactions showed distinct correlations of motion between parts of the peptide, the binding cleft, and the complementarity-determining region (CDR) loops of the TCR. Multistage consensus clustering reflected functional differences between MHC alleles and provides a methodological basis for increasing the sensitivity of functional analyses of bio-molecules. Due to the generality of the approach, MCC lends itself as a potent tool also for the analysis of other kinds of big data.
Anandarajah, Gowri; Craigie, Frederic; Hatch, Robert; Kliewer, Stephen; Marchand, Lucille; King, Dana; Hobbs, Richard; Daaleman, Timothy P
2010-12-01
Spiritual care is increasingly recognized as an important component of medical care. Although many primary care residency programs incorporate spiritual care into their curricula, there are currently no consensus guidelines regarding core competencies necessary for primary care training. In 2006, the Society of Teachers of Family Medicine's Interest Group on Spirituality undertook a three-year initiative to address this need. The project leader assembled a diverse panel of eight educators with dual expertise in (1) spirituality and health and (2) family medicine. The multidisciplinary panel members represented different geographic regions and diverse faith traditions and were nationally recognized senior faculty. They underwent three rounds of a modified Delphi technique to achieve initial consensus regarding spiritual care competencies (SCCs) tailored for family medicine residency training, followed by an iterative process of external validation, feedback, and consensus modifications of the SCCs. Panel members identified six knowledge, nine skills, and four attitude core SCCs for use in training and linked these to competencies of the Accreditation Council for Graduate Medical Education. They identified three global competencies for use in promotion and graduation criteria. Defining core competencies in spiritual care clarifies training goals and provides the basis for robust curricula evaluation. Given the breadth of family medicine, these competencies may be adaptable to other primary care fields, to medical and surgical specialties, and to medical student education. Effective training in this area may enhance physicians' ability to attend to the physical, mental, and spiritual needs of patients and better maintain sustainable healing relationships.
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors often exist and hinder the estimation performance of the attitude determination and control system (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. The strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors than the traditional unscented Kalman filter (UKF).
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
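A stripped-down illustration of robust ODE parameter estimation: instead of the paper's nested penalized-smoothing scheme, this sketch fits a single logistic-growth rate by minimizing a Huber loss over a grid, which already shows the resistance to outliers. All data are simulated:

```python
import numpy as np

def simulate(r, x0=0.1, dt=0.05, n=100):
    """Euler integration of the logistic ODE dx/dt = r*x*(1-x)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = x[t] + dt * r * x[t] * (1.0 - x[t])
    return x

def huber(res, c=0.05):
    """Summed Huber loss: quadratic near zero, linear in the tails,
    so gross outliers get bounded influence on the fit."""
    a = np.abs(res)
    return np.where(a <= c, 0.5 * a**2, c * (a - 0.5 * c)).sum()

rng = np.random.default_rng(2)
true = simulate(r=1.5)
obs = true + rng.normal(0, 0.01, true.size)
obs[::10] += 0.5                     # gross outliers every 10th sample

# Grid search over the growth rate with the robust loss.
grid = np.linspace(0.5, 2.5, 201)
r_hat = grid[np.argmin([huber(obs - simulate(r)) for r in grid])]
print(r_hat)  # close to the true rate 1.5 despite the outliers
```

A squared-error fit on the same data would be pulled upward by the contaminated samples; the linear tails of the Huber loss cap their influence, mimicking the behavior the robust penalized-smoothing method achieves in a more principled way.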
NREL module energy rating methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, C.; Newmiller, J.; Kroposki, B.
1995-11-01
The goals of this project were to develop a tool for evaluating one module in different climates and for comparing different modules; to provide a quick-and-dirty (Q&D) method for estimating periodic energy production; to provide an achievable module rating; to provide an incentive for manufacturers to optimize modules for non-STC conditions; and to make this a consensus-based, NREL-sponsored activity. The approach taken was to simulate module energy production for five reference days covering various weather conditions. A performance model was developed.
USDA-ARS?s Scientific Manuscript database
Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. This paper describes a robust but relatively simple thermal-based energy balance model that parameterizes the key soil/s...
Data Driven Model Development for the SuperSonic SemiSpan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
In this report, we investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational-fluid-dynamics-based aerodynamic models, and a data-driven system identification procedure. It is shown via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data that, by using a system identification approach, it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
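Segmentation by random sampling in this spirit can be sketched as a RANSAC-style line fit that keeps the candidate supported by the most inliers. This is a generic illustration on synthetic points, not the paper's exact algorithm or its feasibility-domain computation:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, rng=None):
    """RANSAC-style line estimation: repeatedly fit a line through two
    random points and keep the hypothesis with the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(d[0] * (points[:, 1] - p[1])
                      - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(3)
x = np.linspace(0, 50, 60)
line_pts = np.stack([x, 2 * x + rng.normal(0, 0.2, x.size)], axis=1)
noise_pts = rng.uniform(0, 100, size=(20, 2))   # 25% clutter points
pts = np.vstack([line_pts, noise_pts])
mask = ransac_line(pts)
print(mask[:60].sum(), mask.sum())  # most line points recovered
```

The consensus-counting step is what makes the segmentation tolerant to large noise fractions, consistent with the 50 percent bound the abstract cites for long primitives.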
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen appear relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and other factors. Because of these effects, quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth-intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust to illumination changes. Three data fusion methods are proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy of the proposed system is assessed by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves an accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is derived by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effects of the channel and the integration period on TOA estimation are evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that the ELM-based technique has a lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
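A generic energy-detection TOA baseline (block energies compared against a noise-floor threshold) can be sketched as below. The block size, the factor k, and the noise-floor rule are illustrative choices, not the paper's EMD/ELM-derived threshold:

```python
import numpy as np

def ed_toa(signal, block=16, k=4.0):
    """Energy-detection TOA: integrate energy per block and return the
    start of the first block whose energy exceeds k times a noise floor
    estimated from the earliest blocks. A generic baseline, not the
    paper's learned threshold."""
    e = np.add.reduceat(signal**2, np.arange(0, len(signal), block))
    floor = np.median(e[:4])            # assume the signal starts late
    idx = np.argmax(e > k * floor)      # first threshold crossing
    return idx * block

rng = np.random.default_rng(6)
sig = rng.normal(0, 0.1, 1024)              # receiver noise
sig[400:432] += 2.0 * rng.normal(size=32)   # pulse arriving at sample 400
print(ed_toa(sig))  # → 400
```

The main design tension, which the learned threshold addresses, is choosing k: too low and noise blocks trigger false early arrivals, too high and weak multipath-attenuated pulses are missed.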
Robust, automatic GPS station velocities and velocity time series
NASA Astrophysics Data System (ADS)
Blewitt, G.; Kreemer, C.; Hammond, W. C.
2014-12-01
Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij=(xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years (N-δt)<(tj-ti )<(N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
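The proposed estimator can be sketched directly from its definition: the median of pairwise slopes restricted to separations near one year (N=1). The data below are synthetic, a linear trend plus an annual cycle and gross outliers:

```python
import numpy as np

def theil_sen_annual(t, x, dt=0.1):
    """Median of pairwise slopes (x_j - x_i)/(t_j - t_i) over pairs with
    1-dt < t_j - t_i < 1+dt. Near-integer-year separation cancels the
    seasonal cycle; the median resists outliers and discontinuities."""
    sep = t[None, :] - t[:, None]
    i, j = np.where((sep > 1 - dt) & (sep < 1 + dt))
    return np.median((x[j] - x[i]) / (t[j] - t[i]))

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 6, 500))                  # 6 years of epochs
x = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
x[rng.choice(t.size, 25, replace=False)] += 20.0     # 5% gross outliers
slope = theil_sen_annual(t, x)
print(round(slope, 2))  # close to the true velocity of 3.0
```

An ordinary least-squares line through the same series would be biased by the 20-unit outliers; the restricted-pair median recovers the trend without any step detection or seasonal modeling, which is the point of the method.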
van Dierendonk, Roland C.H.; van Egmond, Maria A.N.E.; ten Hagen, Sjang L.; Kreuning, Jippe
2017-01-01
The dodo (Raphus cucullatus) might be the most enigmatic bird of all time. It is, therefore, highly remarkable that no consensus has yet been reached on its body mass; previous scientific estimates of its mass vary by more than 100%. Until now, the vast amount of bones stored at the Natural History Museum in Mauritius had not been studied morphometrically nor in relation to body mass. Here, a new estimate of the dodo's mass is presented based on the largest sample of dodo femora ever measured (n = 174). To do this, we used the regression method and chose our variables based on biological, mathematical and physical arguments. The results indicate that the mean mass of the dodo was circa 12 kg, which is approximately five times as heavy as the largest living Columbidae (pigeons and doves), the clade to which the dodo belongs. PMID:29230358
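The regression method reduces to fitting an allometric power law in log-log space on extant birds and extrapolating to the fossil measurements. The calibration numbers below are invented for illustration and are not the paper's data or its chosen variables:

```python
import numpy as np

# Hypothetical extant-bird calibration data (illustrative numbers only):
# femur least-shaft circumference (mm) and body mass (g).
circ = np.array([10.0, 14.0, 18.0, 24.0, 30.0, 38.0])
mass = np.array([150.0, 380.0, 800.0, 1900.0, 3600.0, 7400.0])

# The allometric model mass = a * circ^b is linear in log-log space.
b, log_a = np.polyfit(np.log(circ), np.log(mass), 1)

def predict_mass(c):
    """Predicted body mass (g) for a femur circumference c (mm)."""
    return np.exp(log_a) * c ** b

print(round(b, 2), round(predict_mass(55.0) / 1000, 1))
```

The usual caveat applies, and is why variable choice matters so much in such studies: the fossil measurement lies outside the calibration range, so the extrapolated mass inherits any error in the fitted exponent.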
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
Tsiatis, Anastasios A.; Davidian, Marie; Cao, Weihua
2010-01-01
Summary A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to drop out, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, that have the appealing property of yielding consistent inferences if only one of these models is correctly specified. Doubly robust estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a doubly robust estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing doubly robust methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial. PMID:20731640
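For intuition, a cross-sectional analogue of a doubly robust (augmented inverse-probability-weighted) estimator of a mean under missingness is sketched below. The monotone-coarsening estimator in the paper generalizes this considerably; the function name is mine:

```python
def aipw_mean(y, r, pi, m):
    """Doubly robust estimate of E[Y] from observations (y_i, r_i), with
    missingness-model probabilities pi_i ~ P(R_i = 1 | X_i) and
    outcome-model predictions m_i ~ E[Y | X_i].  y_i is only used
    where r_i == 1 (observed)."""
    total = 0.0
    for yi, ri, pii, mi in zip(y, r, pi, m):
        obs = yi if ri else 0.0          # unobserved y is never touched
        total += ri * obs / pii - (ri - pii) / pii * mi
    return total / len(y)
```

Two algebraic properties convey the "doubly robust" idea: with no missingness the estimator reduces to the sample mean whatever the outcome model says, and with a perfect outcome model the augmentation term cancels the weighting for any pi.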
Xia, Peng; Hu, Jie; Peng, Yinghong
2017-10-25
A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
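A toy forward pass conveying the architecture (convolutional features extracted per time-frequency frame, then recurrently integrated over time) is sketched below in plain Python. Real implementations use a deep-learning framework, learned multi-channel weights, and far larger networks, all of which are elided here:

```python
import math

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over one frame."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One recurrent update: new hidden state from old state and input."""
    return math.tanh(w_h * h + w_x * x + b)

def decode(frames, kernel):
    """CNN feature per frame (conv + max-pool), RNN integration over frames."""
    h = 0.0
    for frame in frames:
        pooled = max(conv1d(frame, kernel))   # max-pool the conv features
        h = rnn_step(h, pooled)
    return h                                  # scalar kinematic estimate
```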
Inferring Centrality from Network Snapshots
Shao, Haibin; Mesbahi, Mehran; Li, Dewei; Xi, Yugeng
2017-01-01
The topology and dynamics of a complex network shape its functionality. However, the topologies of many large-scale networks are either unavailable or incomplete. Without the explicit knowledge of network topology, we show how the data generated from the network dynamics can be utilised to infer the tempo centrality, which is proposed to quantify the influence of nodes in a consensus network. We show that the tempo centrality can be used to construct an accurate estimate of both the propagation rate of influence exerted on consensus networks and the Kirchhoff index of the underlying graph. Moreover, the tempo centrality also encodes the disturbance rejection of nodes in a consensus network. Our findings provide an approach to infer the performance of a consensus network from its temporal data. PMID:28098166
Graph Lasso-Based Test for Evaluating Functional Brain Connectivity in Sickle Cell Disease.
Coloigner, Julie; Phlypo, Ronald; Coates, Thomas D; Lepore, Natasha; Wood, John C
2017-09-01
Sickle cell disease (SCD) is a vascular disorder that is often associated with recurrent ischemia-reperfusion injury, anemia, vasculopathy, and strokes. These cerebral injuries are associated with neurological dysfunction, limiting the full developing potential of the patient. However, recent large studies of SCD have demonstrated that cognitive impairment occurs even in the absence of brain abnormalities on conventional magnetic resonance imaging (MRI). These observations support an emerging consensus that brain injury in SCD is diffuse and that conventional neuroimaging often underestimates the extent of injury. In this article, we postulated that alterations in the cerebral connectivity may constitute a sensitive biomarker of SCD severity. Using functional MRI, a connectivity study analyzing the SCD patients individually was performed. First, a robust learning scheme based on graphical lasso model and Fréchet mean was used for estimating a consistent descriptor of healthy brain connectivity. Then, we tested a statistical method that provides an individual index of similarity between this healthy connectivity model and each SCD patient's connectivity matrix. Our results demonstrated that the reference connectivity model was not appropriate to model connectivity for only 4 out of 27 patients. After controlling for the gender, two separate predictors of this individual similarity index were the anemia (p = 0.02) and white matter hyperintensities (WMH) (silent stroke) (p = 0.03), so that patients with low hemoglobin level or with WMH have the least similarity to the reference connectivity model. Further studies are required to determine whether the resting-state connectivity changes reflect pathological changes or compensatory responses to chronic anemia.
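The individual similarity index idea (a distance between each patient's connectivity matrix and a healthy reference model) can be conveyed with a simplified stand-in: plain Pearson correlation matrices and a Frobenius distance, instead of the paper's graphical-lasso/Fréchet-mean machinery. Function names are illustrative:

```python
import math

def corr_matrix(channels):
    """Pairwise Pearson correlations between regional time series."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / math.sqrt(va * vb)
    k = len(channels)
    return [[corr(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]

def dissimilarity(c_ref, c_pat):
    """Frobenius distance to the reference connectivity (0 = identical)."""
    k = len(c_ref)
    return math.sqrt(sum((c_ref[i][j] - c_pat[i][j]) ** 2
                         for i in range(k) for j in range(k)))
```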
Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. 
These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.
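The modern analogue technique used above as an alternative transfer-function model is simple enough to sketch: find the k modern assemblages most similar to a fossil assemblage and average their water depths. The dissimilarity choice (Bray-Curtis) and all names are illustrative assumptions:

```python
def mat_depth(fossil, modern_samples, modern_depths, k=3):
    """Modern analogue technique: average the water depths of the k modern
    assemblages most similar to the fossil assemblage, with similarity
    measured by Bray-Curtis dissimilarity on relative abundances."""
    def bray_curtis(a, b):
        num = sum(abs(x - y) for x, y in zip(a, b))
        den = sum(x + y for x, y in zip(a, b))
        return num / den if den else 0.0
    ranked = sorted(range(len(modern_samples)),
                    key=lambda i: bray_curtis(fossil, modern_samples[i]))
    return sum(modern_depths[i] for i in ranked[:k]) / k
```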
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship’s roll and pitch. Major error sources for velocity and attitude matching are lever arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and dominant Y-axis flexure, and by augmenting the delay state and flexure state into conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.
2006-08-01
Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is very robust against incomplete data and noise. The rationale of using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results with synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, showing that the results obtained here are accurate and robust to noise degradations. The method has also been tested and evaluated on real cardiac magnetic resonance images.
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun
2018-06-01
The traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, the models are built merely on variable space instead of geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. Then, the new method was applied to the source apportionment of soil metal elements in a region of Wuhan City, China as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model had better performance than the APCS-MLR model in the identification of the major sources of soil metal elements, and (ii) source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. It is shown that the proposed RAPCS-RGWR model is a more effective source apportionment method than APCS-MLR (i.e., a non-robust and global model) in dealing with regional geochemical datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
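The geographically weighted regression component can be sketched for a single predictor: each target location gets its own fit, with observations down-weighted by a Gaussian kernel of distance. The robust pieces (robust PCA, robust loss) of the paper's RAPCS-RGWR are omitted, and all names and values are illustrative:

```python
import math

def gwr_slope(x, y, dists, bandwidth):
    """Locally weighted least-squares slope at one target location.
    dists[i] is the distance from sample i to the target; weights
    follow a Gaussian kernel w_i = exp(-(d_i / bandwidth)**2)."""
    w = [math.exp(-(d / bandwidth) ** 2) for d in dists]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den
```

When the relation is spatially homogeneous the local slope matches the global one; spatial variation in the data shows up as location-to-location variation in the fitted slopes.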
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosztyla, Robert, E-mail: rkosztyla@bccancer.bc.ca; Chan, Elisa K.; Hsu, Fred
Purpose: The objective of this study was to compare recurrent tumor locations after radiation therapy with pretreatment delineations of high-grade gliomas from magnetic resonance imaging (MRI) and 3,4-dihydroxy-6-[¹⁸F]fluoro-L-phenylalanine (¹⁸F-FDOPA) positron emission tomography (PET) using contours delineated by multiple observers. Methods and Materials: Nineteen patients with newly diagnosed high-grade gliomas underwent computed tomography (CT), gadolinium contrast-enhanced MRI, and ¹⁸F-FDOPA PET/CT. The image sets (CT, MRI, and PET/CT) were registered, and 5 observers contoured gross tumor volumes (GTVs) using MRI and PET. Consensus contours were obtained by simultaneous truth and performance level estimation (STAPLE). Interobserver variability was quantified by the percentage of volume overlap. Recurrent tumor locations after radiation therapy were contoured by each observer using CT or MRI. Consensus recurrence contours were obtained with STAPLE. Results: The mean interobserver volume overlap for PET GTVs (42% ± 22%) and MRI GTVs (41% ± 22%) was not significantly different (P=.67). The mean consensus volume was significantly larger for PET GTVs (58.6 ± 52.4 cm³) than for MRI GTVs (30.8 ± 26.0 cm³, P=.003). More than 95% of the consensus recurrence volume was within the 95% isodose surface for 11 of 12 (92%) cases with recurrent tumor imaging. Ten (91%) of these cases extended beyond the PET GTV, and 9 (82%) were contained within a 2-cm margin on the MRI GTV. One recurrence (8%) was located outside the 95% isodose surface. Conclusions: High-grade glioma contours obtained with ¹⁸F-FDOPA PET had similar interobserver agreement to volumes obtained with MRI. 
Although PET-based consensus target volumes were larger than MRI-based volumes, treatment planning using PET-based volumes may not have yielded better treatment outcomes, given that all but 1 recurrence extended beyond the PET GTV and most were contained by a 2-cm margin on the MRI GTV.
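STAPLE estimates observer performance with an EM algorithm; as a much simpler stand-in, a per-voxel majority vote and a percentage-overlap metric can be sketched as follows (the Jaccard-style overlap here is an assumption about the metric used, and the flat voxel lists are a simplification of 3-D masks):

```python
def majority_vote(masks):
    """Consensus binary mask: a voxel is in if most observers included it."""
    n_obs = len(masks)
    return [1 if 2 * sum(col) > n_obs else 0 for col in zip(*masks)]

def percent_overlap(a, b):
    """Intersection over union of two binary masks, as a percentage."""
    inter = sum(1 for u, v in zip(a, b) if u and v)
    union = sum(1 for u, v in zip(a, b) if u or v)
    return 100.0 * inter / union if union else 100.0
```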
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
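The estimator itself rests on generalized empirical likelihood with a constrained weighted residual sum; as an accessible illustration of the efficiency-versus-breakdown trade-off it targets, here is a standard Huber-type iteratively reweighted least-squares line fit (a classical robust baseline, not the authors' method):

```python
def huber_line_fit(x, y, c=1.345, n_iter=50):
    """Simple linear regression with Huber weights: full weight for small
    residuals, down-weighting |r| > c * scale to resist outliers."""
    def wls(w):
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / \
            sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        return my - b * mx, b
    a, b = wls([1.0] * len(x))                       # start from OLS
    for _ in range(n_iter):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        scale = sorted(abs(ri) for ri in r)[len(r) // 2] / 0.6745 or 1e-9
        w = [1.0 if abs(ri) <= c * scale else c * scale / abs(ri) for ri in r]
        a, b = wls(w)
    return a, b
```

On clean data the weights stay at 1 and the fit coincides with OLS (high efficiency); a gross outlier is progressively down-weighted (outlier resistance).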
NASA Astrophysics Data System (ADS)
Huijse, Pablo; Estévez, Pablo A.; Förster, Francisco; Daniel, Scott F.; Connolly, Andrew J.; Protopapas, Pavlos; Carrasco, Rodrigo; Príncipe, José C.
2018-05-01
The Large Synoptic Survey Telescope (LSST) will produce an unprecedented amount of light curves using six optical bands. Robust and efficient methods that can aggregate data from multidimensional sparsely sampled time-series are needed. In this paper we present a new method for light curve period estimation based on quadratic mutual information (QMI). The proposed method does not assume a particular model for the light curve nor its underlying probability density and it is robust to non-Gaussian noise and outliers. By combining the QMI from several bands the true period can be estimated even when no single-band QMI yields the period. Period recovery performance as a function of average magnitude and sample size is measured using 30,000 synthetic multiband light curves of RR Lyrae and Cepheid variables generated by the LSST Operations and Catalog simulators. The results show that aggregating information from several bands is highly beneficial in LSST sparsely sampled time-series, obtaining an absolute increase in period recovery rate up to 50%. We also show that the QMI is more robust to noise and light curve length (sample size) than the multiband generalizations of the Lomb–Scargle and AoV periodograms, recovering the true period in 10%–30% more cases than its competitors. A python package containing efficient Cython implementations of the QMI and other methods is provided.
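QMI itself is beyond a short sketch, but the underlying task (scoring trial periods on irregularly sampled light curves and picking the best) can be illustrated with a simple phase-dispersion-minimization baseline, one of the classical single-band methods this line of work competes with; all names here are illustrative:

```python
import math

def phase_dispersion(t, y, period, n_bins=10):
    """Sum of within-bin variances after folding at a trial period;
    the true period yields tight, low-variance phase bins."""
    bins = [[] for _ in range(n_bins)]
    for ti, yi in zip(t, y):
        phase = (ti % period) / period
        bins[min(int(phase * n_bins), n_bins - 1)].append(yi)
    total = 0.0
    for b in bins:
        if len(b) > 1:
            m = sum(b) / len(b)
            total += sum((v - m) ** 2 for v in b)
    return total

def best_period(t, y, trial_periods):
    """Pick the trial period minimizing the phase dispersion."""
    return min(trial_periods, key=lambda p: phase_dispersion(t, y, p))
```

For multiband data one could sum the dispersion across bands before minimizing, mirroring the paper's idea of aggregating information from several bands.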
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Liu, Hao; Zhu, Xiaocong
2014-07-01
Friction compensation is particularly important for motion trajectory tracking control of pneumatic cylinders at low speed movement. However, most of the existing model-based friction compensation schemes use simple classical models, which are not enough to address applications with high-accuracy position requirements. Furthermore, the friction force in the cylinder is time-varying, and there exist rather severe unmodelled dynamics and unknown disturbances in the pneumatic system. To deal with these problems effectively, an adaptive robust controller with LuGre model-based dynamic friction compensation is constructed. The proposed controller employs on-line recursive least squares estimation (RLSE) to reduce the extent of parametric uncertainties, and utilizes the sliding mode control method to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. In addition, in order to realize LuGre model-based friction compensation, the modified dual-observer structure for estimating immeasurable friction internal state is developed. Therefore, a prescribed motion tracking transient performance and final tracking accuracy can be guaranteed. Since the system model uncertainties are unmatched, the recursive backstepping design technology is applied. In order to solve the conflicts between the sliding mode control design and the adaptive control design, the projection mapping is used to condition the RLSE algorithm so that the parameter estimates are kept within a known bounded convex set. Finally, the proposed controller is tested for tracking sinusoidal trajectories and smooth square trajectory under different loads and sudden disturbance. The testing results demonstrate that the achievable performance of the proposed controller is excellent and is much better than most other studies in literature. Especially when a 0.5 Hz sinusoidal trajectory is tracked, the maximum tracking error is 0.96 mm and the average tracking error is 0.45 mm. 
This paper constructs an adaptive robust controller that compensates for the friction force in the cylinder.
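The LuGre friction model at the heart of the compensation scheme is compact enough to sketch. Parameter values below are arbitrary illustrations, and the adaptive/robust controller and observer built around the model are omitted:

```python
import math

def lugre_step(v, z, sigma0=100.0, sigma1=1.0, sigma2=0.4,
               Fc=1.0, Fs=1.5, vs=0.01, dt=1e-3):
    """One Euler step of the LuGre model: the internal bristle state z
    evolves as dz/dt = v - sigma0*|v|*z/g(v), the friction force is
    F = sigma0*z + sigma1*dz/dt + sigma2*v, and the Stribeck curve is
    g(v) = Fc + (Fs - Fc)*exp(-(v/vs)**2)."""
    g = Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)
    dz = v - sigma0 * abs(v) * z / g
    z_new = z + dt * dz
    F = sigma0 * z_new + sigma1 * dz + sigma2 * v
    return F, z_new
```

At constant velocity the state settles at z = g(v)·sign(v)/sigma0, so the steady friction is g(v)·sign(v) + sigma2·v, which a model-based compensator can cancel by feedforward.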
Safe Maneuvering Envelope Estimation Based on a Physical Approach
NASA Technical Reports Server (NTRS)
Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.
2013-01-01
This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others since it is physically inspired. This more transparent approach allows interpreting data in each step, and it is assumed that these physical models based upon flight dynamics theory will therefore facilitate certification for future real life applications.