Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
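The abstract states that the filter optimization reduces to an eigenvalue problem. A minimal sketch of one plausible formulation, assuming a generalized Rayleigh-quotient objective over hypothetical clean/distorted training windows (the matrices S and N and all data here are illustrative, not the authors' exact construction):

```python
import numpy as np
from scipy.linalg import eigh

# Hedged sketch: maximize w^T S w / w^T N w, where S captures signal
# structure and N the energy of training distortions after filtering.
rng = np.random.default_rng(0)
clean = rng.standard_normal((200, 32))                 # windows of clean training series
noisy = clean + 0.3 * rng.standard_normal((200, 32))   # distorted counterparts

S = clean.T @ clean / len(clean)                       # signal second-moment matrix
N = (noisy - clean).T @ (noisy - clean) / len(clean) + 1e-6 * np.eye(32)

# Generalized eigenproblem S w = lambda N w; the top eigenvector is the
# filter maximizing signal-to-distortion energy after filtering.
eigvals, eigvecs = eigh(S, N)
w = eigvecs[:, -1]                                     # optimal FIR filter taps

filtered = np.convolve(noisy[0], w, mode="same")       # filter before extrema coding
```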
A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.
Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong
2015-01-01
In current molecular biology, it is becoming increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and Lp-norm, a novel p-norm robust feature extraction method is proposed to identify the differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracies than the competitive methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data.
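The two norms driving the method are straightforward to compute. A small illustration, assuming synthetic low-rank-plus-outliers data in place of real expression matrices (the paper's full regularized optimization is not reproduced here):

```python
import numpy as np

def schatten_p_norm(X, p):
    """Schatten p-norm: the l_p norm of the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

def lp_norm(X, p):
    """Element-wise l_p norm, used here as a robust error measure."""
    return (np.abs(X) ** p).sum() ** (1.0 / p)

# Toy gene expression matrix: genes x samples, low-rank signal plus
# sparse outliers (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
L = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 20))  # low rank
E = np.where(rng.random((100, 20)) < 0.02, 8.0, 0.0)              # outliers
X = L + E

# With p < 1 both terms penalize large singular values / residuals less
# aggressively than the nuclear/Frobenius norms, improving outlier robustness.
print(schatten_p_norm(X, 0.8), lp_norm(X - L, 0.8))
```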
Re-thinking our understanding of immunity: Robustness in the tissue reconstruction system.
Truchetet, Marie-Elise; Pradeu, Thomas
2018-04-01
Robustness, understood as the maintenance of specific functionalities of a given system against internal and external perturbations, is pervasive in today's biology. Yet precise applications of this notion to the immune system have been scarce. Here we show that the concept of robustness sheds light on tissue repair, and particularly on the crucial role the immune system plays in this process. We describe the specific mechanisms, including plasticity and redundancy, by which robustness is achieved in the tissue reconstruction system (TRS). In turn, tissue repair offers a very important test case for assessing the usefulness of the concept of robustness, and identifying different varieties of robustness. Copyright © 2018 Elsevier Ltd. All rights reserved.
An Intercompany Perspective on Biopharmaceutical Drug Product Robustness Studies.
Morar-Mitrica, Sorina; Adams, Monica L; Crotts, George; Wurth, Christine; Ihnat, Peter M; Tabish, Tanvir; Antochshuk, Valentyn; DiLuzio, Willow; Dix, Daniel B; Fernandez, Jason E; Gupta, Kapil; Fleming, Michael S; He, Bing; Kranz, James K; Liu, Dingjiang; Narasimhan, Chakravarthy; Routhier, Eric; Taylor, Katherine D; Truong, Nobel; Stokes, Elaine S E
2018-02-01
The Biophorum Development Group (BPDG) is an industry-wide consortium enabling networking and sharing of best practices for the development of biopharmaceuticals. To gain a better understanding of current industry approaches for establishing biopharmaceutical drug product (DP) robustness, the BPDG-Formulation Point Share group conducted an intercompany collaboration exercise, which included a bench-marking survey and extensive group discussions around the scope, design, and execution of robustness studies. The results of this industry collaboration revealed several key common themes: (1) overall DP robustness is defined by both the formulation and the manufacturing process robustness; (2) robustness integrates the principles of quality by design (QbD); (3) DP robustness is an important factor in setting critical quality attribute control strategies and commercial specifications; (4) most companies employ robustness studies, along with prior knowledge, risk assessments, and statistics, to develop the DP design space; (5) studies are tailored to commercial development needs and the practices of each company. Three case studies further illustrate how a robustness study design for a biopharmaceutical DP balances experimental complexity, statistical power, scientific understanding, and risk assessment to provide the desired product and process knowledge. The BPDG-Formulation Point Share discusses identified industry challenges with regard to biopharmaceutical DP robustness and presents some recommendations for best practices. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to establish protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
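The error-bar dose distribution summarizes per-voxel dose spread across recomputed error scenarios. A toy sketch, assuming synthetic scenario doses and a hypothetical brainstem mask in place of recomputed IMPT dose cubes:

```python
import numpy as np

# Hedged sketch of an ebDD: per voxel, summarize the spread of dose over
# error scenarios (range shifts, set-up shifts). Scenario doses here are
# synthetic placeholders; a real workflow recomputes dose per scenario.
rng = np.random.default_rng(2)
nominal = rng.random((40, 40, 40)) * 60.0                 # Gy, nominal plan
scenarios = nominal[None] + rng.normal(0, 2.0, (8, 40, 40, 40))

ebdd = scenarios.max(axis=0) - scenarios.min(axis=0)      # per-voxel error bar

# A simple robustness metric per structure: fraction of voxels whose
# error bar stays below a tolerance (here 5% of the prescription).
brainstem_mask = np.zeros_like(nominal, dtype=bool)
brainstem_mask[10:15, 10:15, 10:15] = True
tolerance = 0.05 * 60.0
robust_fraction = (ebdd[brainstem_mask] < tolerance).mean()
print(f"brainstem voxels within tolerance: {robust_fraction:.2%}")
```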
The effectiveness of robust RMCD control chart as outliers’ detector
NASA Astrophysics Data System (ADS)
Darmanto; Astutik, Suci
2017-12-01
A well-known control chart to monitor a multivariate process is Hotelling's T², whose parameters are classically estimated; it is therefore very sensitive to, and marred by, the masking and swamping effects of outlying data. To overcome these situations, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which has the same robust characteristics as the MCD. In this paper, effectiveness refers to the accuracy of the RMCD control chart in detecting outliers as real outliers; in other words, how effectively this control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart by simulation under different scenarios: sample size n, proportion of outliers, and number of quality characteristics p. We found that in some scenarios this RMCD robust control chart works effectively.
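A minimal sketch of an RMCD-style chart, assuming scikit-learn's MinCovDet (which re-weights the raw MCD estimate by default) as a stand-in for the paper's estimator, with a chi-square control limit:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

# Hedged sketch: replace the classical mean/covariance in Hotelling's T^2
# with (re-weighted) MCD estimates, suppressing masking/swamping by outliers.
rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
X[:10] += 6.0                                   # contaminate 5% of samples

mcd = MinCovDet(random_state=0).fit(X)
t2 = mcd.mahalanobis(X)                         # robust T^2-type statistics

p = X.shape[1]
ucl = chi2.ppf(0.9973, df=p)                    # approximate 3-sigma control limit
flagged = np.where(t2 > ucl)[0]
print("out-of-control samples:", flagged)
```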
Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui
2017-08-17
It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named the Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. Firstly, based on the optimal mean and L2,1-norm, a novel feature extraction method called the Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block ideology into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
Robust Kriged Kalman Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo
2015-11-11
Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
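One way to read "exploiting outlier sparsity" is an l1-regularized outlier term in the measurement equation. A hedged single-update sketch, assuming soft-thresholding of the innovation (the paper's joint spatial-temporal estimator is richer than this):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm, promoting sparse outlier estimates."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hedged sketch of one robust Kalman update with an explicit sparse outlier
# term: y = H x + o + v, where o is estimated by soft-thresholding the innovation.
def robust_kf_update(x_pred, P_pred, y, H, R, lam):
    innov = y - H @ x_pred
    o_hat = soft_threshold(innov, lam)            # identified measurement outliers
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (innov - o_hat)          # update with cleaned innovation
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd, o_hat

x, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
y = np.array([0.2, 9.0])                          # second sensor hit by an outlier
x, P, o = robust_kf_update(x, P, y, H, R, lam=1.0)
print("outlier estimate:", o)
```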
Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2004-06-01
Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
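The RSM-plus-MCS workflow in the abstract can be illustrated compactly. A sketch assuming a toy stand-in for the finite-element forging simulation and a quadratic response surface:

```python
import numpy as np

# Hedged sketch: fit a quadratic surrogate to a few expensive simulations,
# then propagate parameter randomness through the cheap surrogate.
rng = np.random.default_rng(4)

def expensive_simulation(x):                       # stand-in for an FE forging run
    t_die, friction = x
    return 1.0 + 0.02 * t_die + 3.0 * friction**2 + 0.05 * t_die * friction

# Design of experiments: a small grid over die temperature and friction.
doe = np.array([[t, f] for t in (150, 200, 250) for f in (0.1, 0.2, 0.3)])
y = np.array([expensive_simulation(x) for x in doe])

def features(x):                                   # quadratic RSM basis
    t, f = x
    return [1.0, t, f, t * t, f * f, t * f]

coeffs, *_ = np.linalg.lstsq(np.array([features(x) for x in doe]), y, rcond=None)
rsm = lambda x: np.dot(features(x), coeffs)

# Monte Carlo on the surrogate: randomness in temperature and friction.
samples = np.column_stack([rng.normal(200, 10, 10000), rng.normal(0.2, 0.02, 10000)])
responses = np.array([rsm(x) for x in samples])
print(f"mean={responses.mean():.3f}, std={responses.std():.3f}")
```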
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edward Nichols
2002-05-03
In this quarter we continued the processing of the Safford IP survey data. The processing identified a time shift problem between the sites that was caused by a GPS firmware error. A software procedure was developed to identify and correct the shift, and this was applied to the data. Preliminary estimates were made of the remote referenced MT parameters, and initial data quality assessment showed the data quality was good for most of the line. The multi-site robust processing code of Egbert was linked to the new data and processing initiated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Lv, Youbin; Wang, Hong
Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on a good measurement of molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible using the available techniques. In this paper, a novel data-driven robust modeling is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers. This affects the modeling accuracy of RVFLNs. To solve this problem, a Cauchy distribution weighted M-estimation based robust RVFLNs is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their corresponding contribution on modeling can be properly distinguished. Thus robust and better modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupling variables, the data-driven canonical correlation analysis is employed to identify the most influential components from multitudinous factors that affect the MIQ indices to reduce the model dimension. Finally, experiments using industrial data and comparative studies have demonstrated that the obtained model produces a better modeling and estimating accuracy and stronger robustness than other modeling methods.
Enabling Rapid and Robust Structural Analysis During Conceptual Design
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu
2015-01-01
This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.
NASA Astrophysics Data System (ADS)
Zwickl, Titus; Carleer, Bart; Kubli, Waldemar
2005-08-01
In the past decade, sheet metal forming simulation became a well established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for the virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process — in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence/absence of a face in a particular region of interest.
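A hedged sketch of the described pipeline (skin-color thresholding, morphological processing, geometric screening) with OpenCV; the threshold band and size/aspect limits are common heuristics, not the authors' exact values:

```python
import cv2
import numpy as np

img = cv2.imread("crowd.jpg")                      # hypothetical input image
assert img is not None, "provide an input image"
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Skin color range in CrCb (a widely used heuristic band).
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
faces = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    aspect = h / float(w)
    # Geometrical analysis: faces are roughly elliptical and upright.
    if cv2.contourArea(c) > 500 and 0.8 <= aspect <= 2.0:
        faces.append((x, y, w, h))                 # candidates for eye/mouth checks
print(len(faces), "face candidates")
```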
Robust watermark technique using masking and Hermite transform.
Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo
2016-01-01
The following paper evaluates a watermark algorithm designed for digital images by using a perceptive mask and a normalization process, thus preventing human eye detection, as well as ensuring its robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image, while incorporating human visual system properties; moreover, it is based on derivatives of Gaussian functions. The applied watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits in the watermark can be modified while still permitting its adequate extraction.
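Two of the named metrics are easy to state precisely. A small sketch with synthetic images and watermark bits (assumed data, for illustration):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def bit_error_rate(wm_true, wm_extracted):
    """Fraction of watermark bits flipped during embedding/attack/extraction."""
    wm_true, wm_extracted = np.asarray(wm_true), np.asarray(wm_extracted)
    return np.mean(wm_true != wm_extracted)

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (64, 64))
marked = np.clip(img + rng.integers(-2, 3, (64, 64)), 0, 255)
wm = rng.integers(0, 2, 128)
wm_out = wm.copy(); wm_out[:4] ^= 1               # 4 corrupted watermark bits
print(f"PSNR={psnr(img, marked):.1f} dB, BER={bit_error_rate(wm, wm_out):.3f}")
```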
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate a good quality dynamic model of AUVs. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
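A hedged sketch of the Bayesian-robust idea: score candidate input sequences by their expected informativeness over a prior sample of model parameters, and search with a small hand-rolled PSO. The AUV response and information measure below are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)
theta_prior = rng.normal(1.0, 0.2, size=30)        # sampled uncertain parameters

def info(inputs, theta):                           # toy information measure
    y = theta * np.sin(np.cumsum(inputs))          # surrogate AUV response
    return np.var(y)                               # richer excitation -> more info

def robust_cost(inputs):
    inputs = np.clip(inputs, -1.0, 1.0)            # actuator constraints
    return -np.mean([info(inputs, th) for th in theta_prior])

# Minimal particle swarm (no external dependency).
n_particles, dim = 20, 50
pos = rng.uniform(-1, 1, (n_particles, dim)); vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([robust_cost(p) for p in pos])
gbest = pbest[pcost.argmin()]
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1.0, 1.0)
    cost = np.array([robust_cost(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[pcost.argmin()]
print("best robust cost:", pcost.min())
```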
NASA Astrophysics Data System (ADS)
Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.
2018-03-01
Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
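At the core of the framework sits the EnKF analysis step. A minimal stochastic-EnKF sketch, assuming a linear observation operator and a synthetic ensemble (the paper's factorial pre- and post-processing wraps around a step like this):

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n_state, n_ens) forecast ensemble; y: observations; H: obs operator."""
    n_ens = X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    Pf_Ht = Xm @ (H @ Xm).T / (n_ens - 1)              # P_f H^T from the ensemble
    S = (H @ Xm) @ (H @ Xm).T / (n_ens - 1) + R        # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)
    # Perturbed observations keep the analysis ensemble spread consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(7)
X = rng.normal(10.0, 2.0, (3, 50))                     # e.g. soil moisture states
H = np.array([[1.0, 0.0, 0.0]])                        # observe first state only
R = np.array([[0.5]])
y = np.array([12.0])                                   # streamflow-like observation
print("analysis mean:", enkf_analysis(X, y, H, R, rng).mean(axis=1))
```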
NASA Astrophysics Data System (ADS)
Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.
2012-12-01
Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimisation can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. So, it is proposed to virtually augment it by 25, 50, 100 and 160% which is the rate that would meet WMO requirements. Results suggest that for a given augmentation robust networks remain stable overall for the two time horizons.
NASA Astrophysics Data System (ADS)
Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.
2013-10-01
Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. Therefore, it is proposed to augment it by 25, 50, 100 and 160% virtually, which is the rate that would meet WMO requirements. Results suggest that for a given augmentation robust networks remain stable overall for the two time horizons.
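A hedged sketch of the search itself: simulated annealing over candidate gauge sites, minimizing a crude proxy for the mean spatial kriging variance (an exponential variogram of distance to the nearest station; the paper uses full cross-variogram kriging of Montana IDF parameters):

```python
import numpy as np

rng = np.random.default_rng(8)
grid = np.array([(x, y) for x in range(21) for y in range(21)], float)
existing = rng.uniform(0, 20, (13, 2))                 # stand-in for the 1973 network
candidates = rng.uniform(0, 20, (60, 2))               # feasible new sites

def gamma(h, sill=1.0, rng_par=6.0):                   # exponential variogram
    return sill * (1.0 - np.exp(-h / rng_par))

def mean_variance(new_idx):
    stations = np.vstack([existing, candidates[list(new_idx)]])
    d = np.linalg.norm(grid[:, None] - stations[None], axis=2).min(axis=1)
    return gamma(d).mean()                             # proxy objective

k = 4                                                  # gauges to add
state = set(rng.choice(len(candidates), k, replace=False).tolist())
cost, T = mean_variance(state), 1.0
for step in range(2000):
    trial = set(state); trial.remove(int(rng.choice(list(trial))))
    while len(trial) < k:                              # swap in a random candidate
        trial.add(int(rng.integers(len(candidates))))
    c = mean_variance(trial)
    if c < cost or rng.random() < np.exp((cost - c) / T):  # Metropolis rule
        state, cost = trial, c
    T *= 0.998                                         # geometric cooling
print("selected candidate sites:", sorted(state), "objective:", round(cost, 4))
```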
Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M
2013-06-01
Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large scale operation. The process conditions identified are finally translated to microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Social Cognition, Social Skill, and the Broad Autism Phenotype
ERIC Educational Resources Information Center
Sasson, Noah J.; Nowlin, Rachel B.; Pinkham, Amy E.
2013-01-01
Social-cognitive deficits differentiate parents with the "broad autism phenotype" from non-broad autism phenotype parents more robustly than other neuropsychological features of autism, suggesting that this domain may be particularly informative for identifying genetic and brain processes associated with the phenotype. The current study…
Genome-wide screen identifies a novel prognostic signature for breast cancer survival
Mao, Xuan Y.; Lee, Matthew J.; Zhu, Jeffrey; ...
2017-01-21
Large genomic datasets in combination with clinical data can be used as an unbiased tool to identify genes important in patient survival and discover potential therapeutic targets. We used a genome-wide screen to identify 587 genes significantly and robustly deregulated across four independent breast cancer (BC) datasets compared to normal breast tissue. Gene expression of 381 genes was significantly associated with relapse-free survival (RFS) in BC patients. We used a gene co-expression network approach to visualize the genetic architecture in normal breast and BCs. In normal breast tissue, co-expression cliques were identified enriched for cell cycle, gene transcription, cell adhesion, cytoskeletal organization and metabolism. In contrast, in BC, only two major co-expression cliques were identified enriched for cell cycle-related processes or blood vessel development, cell adhesion and mammary gland development processes. Interestingly, gene expression levels of 7 genes were found to be negatively correlated with many cell cycle related genes, highlighting these genes as potential tumor suppressors and novel therapeutic targets. A forward-conditional Cox regression analysis was used to identify a 12-gene signature associated with RFS. A prognostic scoring system was created based on the 12-gene signature. This scoring system robustly predicted BC patient RFS in 60 sampling test sets and was further validated in TCGA and METABRIC BC data. Our integrated study identified a 12-gene prognostic signature that could guide adjuvant therapy for BC patients and includes novel potential molecular targets for therapy.
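A hedged sketch of forward-conditional Cox selection and the derived prognostic score, assuming the lifelines library and synthetic data in place of the BC cohorts:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: 'time'/'event' stand in for relapse-free survival fields.
rng = np.random.default_rng(9)
n, genes = 200, [f"g{i}" for i in range(15)]
df = pd.DataFrame(rng.standard_normal((n, len(genes))), columns=genes)
df["time"] = rng.exponential(60, n) * np.exp(-0.5 * df["g0"])  # g0 is prognostic
df["event"] = rng.integers(0, 2, n)

# Forward-conditional selection: add the gene that most improves the partial
# log-likelihood, stopping at an approximate likelihood-ratio threshold.
selected, remaining, best_ll = [], list(genes), -np.inf
while remaining:
    scores = {}
    for g in remaining:
        cph = CoxPHFitter().fit(df[selected + [g, "time", "event"]],
                                duration_col="time", event_col="event")
        scores[g] = cph.log_likelihood_
    g_best = max(scores, key=scores.get)
    if scores[g_best] - best_ll < 1.92:        # ~ chi2(1)/2: stop adding genes
        break
    selected.append(g_best); remaining.remove(g_best); best_ll = scores[g_best]

final = CoxPHFitter().fit(df[selected + ["time", "event"]], "time", "event")
df["risk_score"] = final.predict_partial_hazard(df)   # prognostic score
print("selected genes:", selected)
```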
Comparing Four Instructional Techniques for Promoting Robust Knowledge
ERIC Educational Resources Information Center
Richey, J. Elizabeth; Nokes-Malach, Timothy J.
2015-01-01
Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…
Paolantonacci, Philippe; Appourchaux, Philippe; Claudel, Béatrice; Ollivier, Monique; Dennett, Richard; Siret, Laurent
2018-01-01
Polyvalent human normal immunoglobulins for intravenous use (IVIg), indicated for rare and often severe diseases, are complex plasma-derived protein preparations. A quality by design approach has been used to develop the Laboratoire Français du Fractionnement et des Biotechnologies new-generation IVIg, targeting a high level of purity to generate an enhanced safety profile while maintaining a high level of efficacy. A modular approach to quality by design was implemented, consisting of five consecutive steps to cover all the stages from product design to the final product control strategy. A well-defined target product profile was translated into 27 product quality attributes that formed the basis of the process design. In parallel, a product risk analysis was conducted and identified 19 critical quality attributes among the product quality attributes. Process risk analysis was carried out to establish the links between process parameters and critical quality attributes. Twelve critical steps were identified, and for each of these steps a risk mitigation plan was established. Among the different process risk mitigation exercises, five process robustness studies were conducted at qualified small scale with a design of experiment approach. For each process step, critical process parameters were identified and, for each critical process parameter, proven acceptable ranges were established. The quality risk management and risk mitigation outputs, including verification of proven acceptable ranges, were used to design the process verification exercise at industrial scale. Finally, the control strategy was established using a hybrid of the traditional approach and elements of the enhanced quality by design approach, as illustrated, to assign material and process controls more robustly and to securely meet product specifications. The advantages of this quality by design approach were improved process knowledge for industrial design and process validation, and a clear justification of the process and product specifications as a basis for the control strategy and future comparability exercises. © PDA, Inc. 2018.
Lachowiec, Jennifer; Queitsch, Christine; Kliebenstein, Daniel J.
2016-01-01
Background: Robustness to genetic and environmental perturbation is a salient feature of multicellular organisms. Loss of developmental robustness can lead to severe phenotypic defects and fitness loss. However, perfect robustness, i.e. no variation at all, is evolutionarily unfit as organisms must be able to change phenotype to properly respond to changing environments and biotic challenges. Plasticity is the ability to adjust phenotypes predictably in response to specific environmental stimuli, which can be considered a transient shift allowing an organism to move from one robust phenotypic state to another. Plants, as sessile organisms that undergo continuous development, are particularly dependent on an exquisite fine-tuning of the processes that balance robustness and plasticity to maximize fitness. Scope and Conclusions: This paper reviews recently identified mechanisms, both systems-level and molecular, that modulate robustness, and discusses their implications for the optimization of plant fitness. Robustness in living systems arises from the structure of genetic networks, the specific molecular functions of the underlying genes, and their interactions. This very same network responsible for the robustness of specific developmental states also has to be built such that it enables plastic yet robust shifts in response to environmental changes. In plants, the interactions and functions of signal transduction pathways activated by phytohormones and the tendency for plants to tolerate whole-genome duplications, tandem gene duplication and hybridization are emerging as major regulators of robustness in development. Despite their obvious implications for plant evolution and plant breeding, the mechanistic underpinnings by which plants modulate precise levels of robustness, plasticity and evolvability in networks controlling different phenotypes are under-studied.
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum...different groups. They can be termed as deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
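A minimal sketch of a PCA-based ("eigenface"-style) representation with nearest-neighbor matching, the kind of appearance model this line of work is associated with; the abstract does not spell out the exact model, and the face images below are random placeholders for aligned grayscale faces:

```python
import numpy as np

rng = np.random.default_rng(10)
faces = rng.random((50, 64 * 64))                # 50 training faces, flattened

mean_face = faces.mean(axis=0)
A = faces - mean_face
# Right singular vectors of the centered data matrix are the eigenfaces.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:20]                             # keep 20 principal components

def project(img):
    return eigenfaces @ (img - mean_face)        # coordinates in face space

gallery = np.array([project(f) for f in faces])

def identify(img):
    w = project(img)
    dists = np.linalg.norm(gallery - w, axis=1)
    return dists.argmin(), dists.min()           # nearest known face + distance

probe = faces[7] + 0.05 * rng.random(64 * 64)    # slightly distorted probe
print("identified as training face:", identify(probe)[0])
```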
Plint, Simon; Patterson, Fiona
2010-06-01
The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, is perceived to be fair by candidates, and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine-markable shortlisting assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control, rather than the imposition of change without perceived legitimate authority.
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
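A hedged sketch of the three-step pipeline, assuming statsmodels' ARIMA for the forecasting stage and scikit-learn's backpropagation-trained MLP standing in for the paper's modular EBP identifier; the transient signatures are synthetic:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(11)

def plant_variable(kind, n=120):                  # toy transient signatures
    t = np.arange(n, dtype=float)
    drift = {"LOCA": 0.05 * t, "SGTR": -0.03 * t, "normal": 0.0 * t}[kind]
    return 100.0 + drift + rng.normal(0, 0.5, n)

def arima_features(series, steps=10):
    model = ARIMA(series, order=(2, 1, 0)).fit()  # I(1) makes the series stationary
    return model.forecast(steps=steps)            # forecasted time series as features

kinds = ["LOCA", "SGTR", "normal"]
X = np.array([arima_features(plant_variable(k)) for k in kinds for _ in range(20)])
y = np.repeat(kinds, 20)

# Backpropagation-trained classifier over the forecasted series.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([arima_features(plant_variable("LOCA"))]))
```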
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on system instantaneous performance. In real SOFC systems, several parameters may vary with the variation of operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe operation of the SOFC system with maximum efficiency under load and uncertainty variations.
2016-01-01
Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes it suitable for partial-copy detection; that is, by processing only short segments of 1 second length.
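The multilevel filtering idea can be sketched with binary fingerprints and Hamming distances; the fingerprints below are random bits standing in for the paper's global and local features:

```python
import numpy as np

# Hedged sketch: a cheap global fingerprint prunes the database first, and
# the costlier local fingerprints are compared only on the survivors.
rng = np.random.default_rng(12)
n_db = 10_000
db_global = rng.integers(0, 2, (n_db, 64), dtype=np.uint8)   # coarse, per video
db_local = rng.integers(0, 2, (n_db, 1024), dtype=np.uint8)  # fine, per segment

query_id = 4242
q_global = db_global[query_id].copy(); q_global[:5] ^= 1     # mildly attacked copy
q_local = db_local[query_id].copy();  q_local[:40] ^= 1

def hamming(a, B):
    return (a[None] != B).sum(axis=1)                        # distance to each row

# Level 1: keep only candidates whose global distance is small.
cand = np.where(hamming(q_global, db_global) <= 10)[0]

# Level 2: exact matching with the expensive local fingerprints.
best = cand[hamming(q_local, db_local[cand]).argmin()]
print(f"{len(cand)} candidates after filtering; best match = {best}")
```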
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den
Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
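A toy contrast of the SIN and SIF noise models on a three-gene Boolean network (illustrative update rules, not the T-helper network):

```python
import numpy as np

rng = np.random.default_rng(13)
rules = [
    lambda s: s[2],                  # gene 0 activated by gene 2
    lambda s: s[0] and not s[2],     # gene 1 needs gene 0, repressed by gene 2
    lambda s: s[0] or s[1],          # gene 2 activated by gene 0 or gene 1
]

def step_sin(state, p):
    """SIN: every node's updated state flips with the same probability p."""
    nxt = np.array([int(r(state)) for r in rules])
    flips = rng.random(len(nxt)) < p
    return np.where(flips, 1 - nxt, nxt)

def step_sif(state, fail_ps):
    """SIF: regulatory function i fails (complemented output) with prob fail_ps[i],
    so noise attaches to functions, each with its own biologically motivated rate."""
    nxt = [int(r(state)) for r in rules]
    return np.array([1 - v if rng.random() < fp else v
                     for v, fp in zip(nxt, fail_ps)])

state_sin = np.array([1, 0, 0])
state_sif = np.array([1, 0, 0])
for _ in range(50):
    state_sin = step_sin(state_sin, p=0.01)
    state_sif = step_sif(state_sif, fail_ps=[0.01, 0.001, 0.001])
print("SIN:", state_sin, "SIF:", state_sif)
```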
ERIC Educational Resources Information Center
LaBeouf, Joanne P.
This paper argues that, as institutional revenues continue to decline, community college administrators must not only work at developing a robust foundation with an identifiable process similar to that of a private nonprofit foundation, but also provide professional stewardship in its operation. The author used a qualitative, evaluative approach…
Bioterrorism Preparedness in Public Health: Knowledge Needs for Robust Transformations
ERIC Educational Resources Information Center
Ipe, Minu
2007-01-01
The typical response of organizations dealing with external uncertainty is to develop strategies to adapt to the situation and focus on regaining a stable state. A crucial element of responding successfully to external uncertainties is to identify changes in knowledge needs within core organizational processes. This paper discusses the changing…
Lignosulfonate and elevated pH can enhance enzymatic saccharification of lignocelluloses
ZJ Wang; TQ Lan; JY Zhu
2013-01-01
Nonspecific (nonproductive) binding (adsorption) of cellulase by lignin has been identified as a key barrier to reduce cellulase loading for economical sugar and biofuel production from lignocellulosic biomass. Sulfite Pretreatment to Overcome Recalcitrance of Lignocelluloses (SPORL) is a relatively new process, but demonstrated robust performance for sugar and biofuel...
The design briefing process matters: a case study on telehealthcare device providers in the UK.
Yang, Fan; Renda, Gianni
2018-01-23
The telehealthcare sector has been expanding steadily in the UK. However, confusing, complex and unwieldy designs of telehealthcare devices are, at best, less effective than they could be and, at worst, potentially dangerous to their users. This study investigated the factors within the new product development process that hindered satisfactory product design outcomes, through working collaboratively with a leading provider based in the UK. This study identified that there are too many costly late-stage design changes; a critical and persistent problem area ripe for improvement. The findings from analyzing 30 recent devices, interviewing key stakeholders and observing on-going projects further revealed that one major cause of the issue was poor practice in defining and communicating the product design criteria and requirements. Addressing the characteristics of the telehealthcare industry, such as multiple design commissioners and frequent deployment of design subcontracts, this paper argues that undertaking a robust process of creating the product design brief is the key to improving the outcomes of telehealthcare device design, particularly for the small and medium-sized enterprises dominating the sector. Implications for Rehabilitation: Product design criteria and requirements are frequently ill-defined and ineffectively communicated to the designers within the processes of developing new telehealthcare devices. The absence of a (robust) process of creating the design brief is the root cause of the identified issues in defining and communicating the design task. Deploying a formal process of creating the product design brief is particularly important for the telehealthcare sector.
Evaluation of eco-friendly zwitterionic detergents for enveloped virus inactivation.
Conley, Lynn; Tao, Yinying; Henry, Alexis; Koepf, Edward; Cecchini, Douglas; Pieracci, John; Ghose, Sanchayita
2017-04-01
Inclusion of a detergent in protein biotherapeutic purification processes is a simple and very robust method for inactivating enveloped viruses. The detergent Triton X-100 has been used for many years and is part of the production process of several commercial therapeutic proteins. However, recent ecological studies have suggested that Triton X-100 and its break-down products can potentially behave as endocrine disrupters in aquatic organisms, raising concerns from an environmental impact perspective. As such, discharge of Triton X-100 into wastewater treatment plants is regulated in some jurisdictions, and alternative detergents for viral inactivation are required. In this work, we report on the identification and evaluation of more eco-friendly detergents as viable replacements for Triton X-100. Five detergent candidates with low to moderate environmental impact were initially identified and evaluated with respect to protein stability, followed by proof-of-concept virus inactivation studies using a model enveloped virus. From the set of candidates, lauryldimethylamine N-oxide (LDAO) was identified as the most promising detergent due to its low ecotoxicity, robust anti-viral activity (LRV >4 at validation set-point conditions with X-MuLX), and absence of any negative impact on protein function. This detergent exhibited effective and robust virus inactivation across a broad range of protein concentrations, solution conductivities and pH values, and in several different cell culture fluid matrices. The only process parameter that correlated with reduced virus inactivation potency was LDAO concentration, and then only when the concentration was reduced to below the detergent's critical micelle concentration (CMC). Additionally, this work also demonstrated that LDAO was cleared to below detectable levels after Protein A affinity chromatography, making it suitable for use in a platform process that utilizes this chromatographic mode for protein capture. All these findings suggest that LDAO may be a practical alternative to Triton X-100 for use in protein therapeutic production processes for inactivating enveloped viruses. Biotechnol. Bioeng. 2017;114: 813-820. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
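Of the techniques named above, rank correlation analysis is the easiest to sketch: sample the uncertain parameters, run the model, and rank-correlate each parameter with the prediction. A schematic example follows, in which the `model` function is a hypothetical stand-in, not PCHEPM:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical stand-in for the fate model: a prediction as a function
# of three uncertain parameters (PCHEPM itself is far more complex).
def model(theta):
    return theta[0] ** 2 + 0.1 * theta[1] + 0.01 * np.sin(theta[2])

thetas = rng.uniform(0.5, 1.5, size=(500, 3))
preds = np.apply_along_axis(model, 1, thetas)

# Parameters whose sampled values correlate strongly (in rank) with
# the prediction dominate the output uncertainty.
for j in range(thetas.shape[1]):
    rho, _ = spearmanr(thetas[:, j], preds)
    print(f"parameter {j}: |rho| = {abs(rho):.2f}")
```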
Comparison of fMRI paradigms assessing visuospatial processing: Robustness and reproducibility
Herholz, Peer; Zimmermann, Kristin M.; Westermann, Stefan; Frässle, Stefan; Jansen, Andreas
2017-01-01
The development of brain imaging techniques, in particular functional magnetic resonance imaging (fMRI), made it possible to non-invasively study the hemispheric lateralization of cognitive brain functions in large cohorts. Comprehensive models of hemispheric lateralization are, however, still missing and should not only account for the hemispheric specialization of individual brain functions, but also for the interactions among different lateralized cognitive processes (e.g., language and visuospatial processing). This calls for robust and reliable paradigms to study hemispheric lateralization for various cognitive functions. While numerous reliable imaging paradigms have been developed for language, which represents the most prominent left-lateralized brain function, the reliability of imaging paradigms investigating typically right-lateralized brain functions, such as visuospatial processing, has received comparatively less attention. In the present study, we aimed to establish an fMRI paradigm that robustly and reliably identifies right-hemispheric activation evoked by visuospatial processing in individual subjects. In a first study, we therefore compared three frequently used paradigms for assessing visuospatial processing and evaluated their utility to robustly detect right-lateralized brain activity on a single-subject level. In a second study, we then assessed the test-retest reliability of the so-called Landmark task, the paradigm that yielded the most robust results in study 1. At the single-voxel level, we found poor reliability of the brain activation underlying visuospatial attention. This suggests that poor signal-to-noise ratios can become a limiting factor for test-retest reliability. This represents a common detriment of fMRI paradigms investigating visuospatial attention in general and therefore highlights the need for careful consideration of both the possibilities and limitations of the respective fMRI paradigm, in particular when one is interested in effects at the single-voxel level. Notably, however, when focusing on the reliability of measures of hemispheric lateralization (which was the main goal of study 2), we show that hemispheric dominance (quantified by the lateralization index, LI, with |LI| > 0.4) of the evoked activation could be robustly determined in more than 62% and, if considering only two categories (i.e., left, right), in more than 93% of our subjects. Furthermore, the reliability of the lateralization strength (LI) was “fair” to “good”. In conclusion, our results suggest that the degree of right-hemispheric dominance during visuospatial processing can be reliably determined using the Landmark task, both at the group and single-subject level, while at the same time stressing the need for future refinements of experimental paradigms and more sophisticated fMRI data acquisition techniques. PMID:29059201
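The lateralization index referenced above is conventionally computed as LI = (L - R)/(L + R) from homologous left- and right-hemisphere activation measures; a minimal sketch of that bookkeeping, using the |LI| > 0.4 dominance cutoff quoted in the abstract:

```python
def lateralization_index(act_left, act_right):
    """LI = (L - R) / (L + R), computed e.g. from suprathreshold voxel
    counts or summed activation in homologous left/right ROIs."""
    return (act_left - act_right) / (act_left + act_right)

def dominance(li, cutoff=0.4):
    # |LI| > 0.4 is the abstract's criterion for clear hemispheric
    # dominance; the sign indicates the dominant hemisphere.
    if li > cutoff:
        return "left-dominant"
    if li < -cutoff:
        return "right-dominant"
    return "bilateral"

print(dominance(lateralization_index(120.0, 480.0)))  # right-dominant
```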
Lewis, Ashley Glen; Schriefers, Herbert; Bastiaansen, Marcel; Schoffelen, Jan-Mathijs
2018-05-21
Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (on average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. The reliability of these statistical inferences was, however, critically dependent on the approach used for multiple comparisons correction. We conclude that the effects are not robust enough to be used as a reliable index of lexical activation during language processing.
Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method
NASA Astrophysics Data System (ADS)
Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.
2013-09-01
An innovative methodology to detect and track cells in microscope images, enhanced by optical cross-correlation techniques, is proposed in this paper. In order to increase the tracking sensitivity, image pre-processing has been implemented as a morphological operator on the microscope image. Results show that the pre-processing step allows cells to be tracked over additional frames, thereby increasing the method's robustness. The proposed methodology can be used to analyze problems such as mitosis, cell collisions, and cell overlapping, and is ultimately intended to help identify and treat illnesses and malignancies.
Design-Based School Improvement: A Practical Guide for Education Leaders
ERIC Educational Resources Information Center
Mintrop, Rick
2016-01-01
At the heart of the effort to enact and scale up successful school reforms is the need for more robust links between research and practice. One promising approach is design development, a methodology widely used in other fields and only recently adapted to education, which offers a disciplined process for identifying practical problems, assessing…
Mechanisms for Robust Cognition.
Walsh, Matthew M; Gluck, Kevin A
2015-08-01
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question: how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. © 2014 Cognitive Science Society, Inc.
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. Weed detection processes included were normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filter, morphological feature calculation and Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illuminations. A machine vision implementing field robot captured field images under outdoor illuminations and the image processing algorithm automatically processed them without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants, and considered the rest as weeds. However, the ANN identification rates for crop plants were improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against soil background under the uncontrolled outdoor illuminations, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant specific direct applications (PSDA).
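A minimal sketch of the first two stages named above, normalized excess green conversion followed by thresholding; the fixed cutoff and function names are illustrative, since the published algorithm estimates the threshold statistically and segments adaptively:

```python
import numpy as np

def normalized_excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized channels, a common
    index for separating green vegetation from soil background."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

def segment_plants(rgb, threshold=0.05):
    # Fixed cutoff for illustration only; the published algorithm
    # estimates the threshold statistically and segments adaptively.
    return normalized_excess_green(rgb) > threshold
```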
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
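A toy illustration of the robust formulation underlying such work, optimizing a mean-plus-k-sigma objective over a noise variable; the response function is a hypothetical stand-in for the FE metamodel, and the paper's sequential improvement machinery is omitted:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def process_response(x, z):
    """Hypothetical forming response: design variable x, noise
    variable z (e.g. material scatter); stands in for the FE
    metamodel."""
    return (x - 2.0) ** 2 + 0.5 * z * x

z_samples = rng.normal(0.0, 1.0, 2000)  # common random numbers

def robust_objective(x, k=3.0):
    # Mean-plus-k-sigma formulation: penalize poor nominal performance
    # and sensitivity to the noise variable simultaneously.
    y = process_response(x, z_samples)
    return y.mean() + k * y.std()

res = minimize_scalar(robust_objective, bounds=(0.0, 4.0), method="bounded")
print(f"robust optimum x* = {res.x:.3f}")
```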
Engineering Elegant Systems: Postulates, Principles, and Hypotheses of Systems Engineering
NASA Technical Reports Server (NTRS)
Watson, Michael D.
2018-01-01
Definition: System Engineering is the engineering discipline which integrates the system functions, system environment, and the engineering disciplines necessary to produce and/or operate an elegant system; Elegant System - A system that is robust in application, fully meeting specified and adumbrated intent, is well structured, and is graceful in operation. Primary Focus: System Design and Integration: Identify system couplings and interactions; Identify system uncertainties and sensitivities; Identify emergent properties; Manage the effectiveness of the system. Engineering Discipline Integration: Manage flow of information for system development and/or operations; Maintain system activities within budget and schedule. Supporting Activities: Process application and execution.
Applying quality by design (QbD) concept for fabrication of chitosan coated nanoliposomes.
Pandey, Abhijeet P; Karande, Kiran P; Sonawane, Raju O; Deshmukh, Prashant K
2014-03-01
In the present investigation, a quality by design (QbD) strategy was successfully applied to the fabrication of chitosan-coated nanoliposomes (CH-NLPs) encapsulating a hydrophilic drug. The effects of the processing variables on the particle size, encapsulation efficiency (%EE) and coating efficiency (%CE) of CH-NLPs (prepared using a modified ethanol injection method) were investigated. The concentrations of lipid, cholesterol, drug and chitosan; stirring speed, sonication time; organic:aqueous phase ratio; and temperature were identified as the key factors after risk analysis for conducting a screening design study. A separate study was designed to investigate the robustness of the predicted design space. The particle size, %EE and %CE of the optimized CH-NLPs were 111.3 nm, 33.4% and 35.2%, respectively. The observed responses were in accordance with the predicted response, which confirms the suitability and robustness of the design space for CH-NLP formulation. In conclusion, optimization of the selected key variables will help minimize the problems related to size, %EE and %CE that are generally encountered when scaling up processes for NLP formulations. The robustness of the design space will help minimize both intra-batch and inter-batch variations, which are quite common in the pharmaceutical industry.
Garland, Ellen C; Rendell, Luke; Lilley, Matthew S; Poole, M Michael; Allen, Jenny; Noad, Michael J
2017-07-01
Identifying and quantifying variation in vocalizations is fundamental to advancing our understanding of processes such as speciation, sexual selection, and cultural evolution. The song of the humpback whale (Megaptera novaeangliae) presents an extreme example of complexity and cultural evolution. It is a long, hierarchically structured vocal display that undergoes constant evolutionary change. Obtaining robust metrics to quantify song variation at multiple scales (from a sound through to population variation across the seascape) is a substantial challenge. Here, the authors present a method to quantify song similarity at multiple levels within the hierarchy. To incorporate the complexity of these multiple levels, the calculation of similarity is weighted by measurements of sound units (lower levels within the display) to bridge the gap in information between upper and lower levels. Results demonstrate that the inclusion of weighting provides a more realistic and robust representation of song similarity at multiple levels within the display. This method permits robust quantification of cultural patterns and processes that will also contribute to the conservation management of endangered humpback whale populations, and is applicable to any hierarchically structured signal sequence.
Bellenguez, Céline; Strange, Amy; Freeman, Colin; Donnelly, Peter; Spencer, Chris C A
2012-01-01
High-throughput genotyping arrays provide an efficient way to survey single nucleotide polymorphisms (SNPs) across the genome in large numbers of individuals. Downstream analysis of the data, for example in genome-wide association studies (GWAS), often involves statistical models of genotype frequencies across individuals. The complexities of the sample collection process and the potential for errors in the experimental assay can lead to biases and artefacts in an individual's inferred genotypes. Rather than attempting to model these complications, it has become a standard practice to remove individuals whose genome-wide data differ from the sample at large. Here we describe a simple, but robust, statistical algorithm to identify samples with atypical summaries of genome-wide variation. Its use as a semi-automated quality control tool is demonstrated using several summary statistics, selected to identify different potential problems, and it is applied to two different genotyping platforms and sample collections. The algorithm is written in R and is freely available at www.well.ox.ac.uk/chris-spencer. Contact: chris.spencer@well.ox.ac.uk. Supplementary data are available at Bioinformatics online.
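A schematic of the idea (the published algorithm, implemented in R, differs in detail): flag samples whose genome-wide summary statistic is atypical using median/MAD-based robust z-scores, which resist the very outliers being sought; the cutoff here is arbitrary:

```python
import numpy as np

def flag_atypical(summaries, cutoff=6.0):
    """Flag samples whose genome-wide summary statistic (e.g.
    heterozygosity or missingness) is atypical for the cohort.
    Median/MAD z-scores resist the very outliers being sought."""
    x = np.asarray(summaries, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # consistent with sigma
    return np.abs(x - med) / (mad + 1e-12) > cutoff

het = np.r_[np.random.default_rng(3).normal(0.32, 0.01, 500), 0.45]
print(np.where(flag_atypical(het))[0])  # -> [500]
```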
SU-E-J-212: Identifying Bones From MRI: A Dictionary Learning and Sparse Regression Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruan, D; Yang, Y; Cao, M
2014-06-01
Purpose: To develop an efficient and robust scheme to identify bony anatomy based on MRI-only simulation images. Methods: MRI offers important soft tissue contrast and functional information, yet its lack of correlation to electron density has placed it as an auxiliary modality to CT in radiotherapy simulation and adaptation. An effective scheme to identify bony anatomy is an important first step towards an MR-only simulation/treatment paradigm and would satisfy most practical purposes. We utilize a UTE acquisition sequence to achieve visibility of the bone. In contrast to manual + bulk or registration-based approaches to identifying bones, we propose a novel learning-based approach for improved robustness to MR artefacts and environmental changes. Specifically, local information is encoded with MR image patches, and the corresponding label is extracted (during training) from simulation CT aligned to the UTE. Within each class (bone vs. nonbone), an overcomplete dictionary is learned so that typical patches within the proper class can be represented as a sparse combination of the dictionary entries. For testing, an acquired UTE-MRI is divided into patches using a sliding scheme, where each patch is sparsely regressed against both bone and nonbone dictionaries, and subsequently assigned to the class with the smaller residual. Results: The proposed method has been applied to a pilot brain imaging site and has shown generally good performance, with a Dice similarity coefficient greater than 0.9 in a cross-validation study using 4 datasets. Importantly, it is robust to consistent foreign objects (e.g., a headset) and to artefacts related to Gibbs ringing and field heterogeneity. Conclusion: A learning perspective has been developed for inferring bone structures based on UTE MRI. The imaging setting is subject to minimal motion effects and the post-processing is efficient. The improved efficiency and robustness enable a first translation to an MR-only routine. The scheme generalizes to multiple tissue classes.
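A condensed sketch of the classification step, assigning each patch to the class whose dictionary yields the smaller sparse-coding residual; dictionary learning itself (e.g., scikit-learn's MiniBatchDictionaryLearning on labeled training patches) is assumed already done, and the shapes and names are illustrative:

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def classify_patches(patches, dict_bone, dict_nonbone, k=5):
    """Assign each row of `patches` to the class whose dictionary
    reconstructs it with the smaller OMP residual. Dictionaries have
    shape (n_atoms, patch_dim), learned beforehand on labeled UTE
    patches."""
    residuals = []
    for D in (dict_bone, dict_nonbone):
        codes = sparse_encode(patches, D, algorithm="omp",
                              n_nonzero_coefs=k)
        residuals.append(np.linalg.norm(patches - codes @ D, axis=1))
    return residuals[0] < residuals[1]  # True -> claimed as bone
```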
Design of forging process variables under uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2005-02-01
Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.
A robust real-time abnormal region detection framework from capsule endoscopy images
NASA Astrophysics Data System (ADS)
Cheng, Yanfen; Liu, Xu; Li, Huiping
2009-02-01
In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' reviewing process expensive. This review involves identifying images containing abnormal regions (tumor, bleeding, etc.) within this large image sequence. We therefore construct a framework for robust, real-time abnormal region detection from large numbers of capsule endoscopy images. The detected potential abnormal regions can be labeled automatically for further physician review, thereby shortening the overall reviewing process. The framework has the following advantages: 1) Trainable. Users can define and label any type of abnormal region they want to find; abnormal regions, such as tumor or bleeding, can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient. Given the large volume of image data, detection speed is very important; our system detects very efficiently at different scales thanks to the integral image features we use. 3) Robust. After feature selection, we use a cascade of classifiers to further improve detection accuracy.
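The efficiency claim rests on integral image features, which reduce any box sum to a constant-time lookup; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border:
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in four lookups, O(1) at any scale --
    the property that makes Haar-like window features cheap enough
    for scanning tens of thousands of frames."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```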
A Robust Framework for Microbial Archaeology
Warinner, Christina; Herbig, Alexander; Mann, Allison; Yates, James A. Fellows; Weiβ, Clemens L.; Burbano, Hernán A.; Orlando, Ludovic; Krause, Johannes
2017-01-01
Microbial archaeology is flourishing in the era of high-throughput sequencing, revealing the agents behind devastating historical plagues, identifying the cryptic movements of pathogens in prehistory, and reconstructing the ancestral microbiota of humans. Here, we introduce the fundamental concepts and theoretical framework of the discipline, then discuss applied methodologies for pathogen identification and microbiome characterization from archaeological samples. We give special attention to the process of identifying, validating, and authenticating ancient microbes using high-throughput DNA sequencing data. Finally, we outline standards and precautions to guide future research in the field. PMID:28460196
Mining manufacturing data for discovery of high productivity process characteristics.
Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou
2010-06-01
Modern manufacturing facilities for bioproducts are highly automated with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
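A schematic analogue of the predictive modeling step using a generic kernel support vector regressor on synthetic stand-in data; the paper's kernel integrates temporal parameters across stages, and its parameter-ranking step is not shown:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)

# Synthetic stand-in: rows are production runs, columns are summaries
# of temporal process parameters; y is the final performance measure.
X = rng.normal(size=(108, 20))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0.0, 0.1, size=108)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:80], y[:80])
print("held-out R^2:", round(model.score(X[80:], y[80:]), 3))
```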
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
NASA Astrophysics Data System (ADS)
Tan, Zhen-ya; He, Feng
2018-04-01
With the incorporation of digital beamforming (DBF) processing, azimuth multichannel synthetic aperture radar (SAR) systems show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method that first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. This novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
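A compact sketch of the two steps described above for a uniform linear array standing in for the multichannel SAR geometry: estimate the Capon spatial spectrum over the ambiguous directions, rebuild the interference-plus-noise covariance from its definition, and form the adaptive filter; the geometry, names, and diagonal loading are illustrative:

```python
import numpy as np

def steering(theta, n_elem, d=0.5):
    """Uniform-linear-array steering vector (spacing d in wavelengths),
    a stand-in for the azimuth multichannel SAR geometry."""
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

def reconstructed_filter(R_sample, theta0, ambiguous_thetas, n_elem):
    """Capon spectrum over the ambiguous directions, covariance rebuilt
    from its definition, then the adaptive weight vector."""
    R_inv = np.linalg.inv(R_sample)
    R_rec = 1e-3 * np.eye(n_elem, dtype=complex)  # small loading
    for th in ambiguous_thetas:  # caller excludes theta0
        a = steering(th, n_elem)
        p = 1.0 / np.real(a.conj() @ R_inv @ a)   # Capon power estimate
        R_rec += p * np.outer(a, a.conj())
    a0 = steering(theta0, n_elem)
    w = np.linalg.solve(R_rec, a0)
    return w / (a0.conj() @ w)  # distortionless toward theta0
```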
Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina
2015-04-01
An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
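The Hodges-Lehmann statistic at the core of the equivalence tests is simply the median of all pairwise differences between two feature samples; a sketch on made-up feature values (the intra- versus intersystem comparison logic is omitted):

```python
import numpy as np

def hodges_lehmann_shift(x, y):
    """Hodges-Lehmann estimate of the location shift between two
    samples: the median of all pairwise differences x_i - y_j."""
    return np.median(np.subtract.outer(x, y))

# Toy comparison of one texture feature measured on the same phantom
# by two FFDM systems (values are made up).
feat_sys_a = np.random.default_rng(5).normal(1.00, 0.05, 200)
feat_sys_b = np.random.default_rng(6).normal(1.02, 0.05, 200)
print(f"intersystem shift: {hodges_lehmann_shift(feat_sys_b, feat_sys_a):.3f}")
```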
ERIC Educational Resources Information Center
Jones, Deborah J.; Lewis, Terri; Litrownik, Alan; Thompson, Richard; Proctor, Laura J.; Isbell, Patricia; Dubowitz, Howard; English, Diana; Jones, Bobby; Nagin, Daniel; Runyan, Desmond
2013-01-01
A robust literature links childhood sexual abuse (CSA) to later substance use and sexual risk behavior; yet, relatively little empirical attention has been devoted to identifying the mechanisms linking CSA to risky behavior among youth, with even less work examining such processes in boys. With the aim of addressing this gap in the literature, the…
Functional Groups Based on Leaf Physiology: Are they Spatially and Temporally Robust?
NASA Technical Reports Server (NTRS)
Foster, Tammy E.; Brooks, J. Renee
2004-01-01
The functional grouping hypothesis, which suggests that complexity in ecosystem function can be simplified by grouping species with similar responses, was tested in the Florida scrub habitat. Functional groups were identified based on how species in fire-maintained Florida scrub regulate exchange of carbon and water with the atmosphere, as indicated by both instantaneous gas exchange measurements and integrated measures of function (%N, delta C-13, delta N-15, C-N ratio). Using cluster analysis, five distinct physiologically-based functional groups were identified in the fire-maintained scrub. These functional groups were tested to determine if they were robust spatially, temporally, and with management regime. Analysis of Similarities (ANOSIM), a non-parametric multivariate analysis, indicated that these five physiologically-based groupings were not altered by plot differences (R = -0.115, p = 0.893) or by the three different management regimes: prescribed burn, mechanically treated and burn, and fire-suppressed (R = 0.018, p = 0.349). The physiological groupings also remained robust between the two climatically different years 1999 and 2000 (R = -0.027, p = 0.725). Easy-to-measure morphological characteristics indicating functional groups would be more practical for scaling and modeling ecosystem processes than detailed gas-exchange measurements; therefore, we tested a variety of morphological characteristics as functional indicators. A combination of non-parametric multivariate techniques (hierarchical cluster analysis, non-metric Multi-Dimensional Scaling, and ANOSIM) was used to compare the ability of life form, leaf thickness, and specific leaf area classifications to identify the physiologically-based functional groups. Life form classifications (ANOSIM; R = 0.629, p = 0.001) were able to depict the physiological groupings more adequately than either specific leaf area (ANOSIM; R = 0.426, p = 0.001) or leaf thickness (ANOSIM; R = 0.344, p = 0.001). The ability of life forms to depict the physiological groupings was improved by separating the parasitic Ximenia americana from the shrub category (ANOSIM; R = 0.794, p = 0.001). Therefore, a life form classification including parasites was determined to be a good indicator of the physiological processes of scrub species, and would be a useful method of grouping for scaling physiological processes to the ecosystem level.
NASA Astrophysics Data System (ADS)
Liu, Weiqiang; Chen, Rujun; Cai, Hongzhu; Luo, Weibin
2016-12-01
In this paper, we investigate the robust processing of noisy spread spectrum induced polarization (SSIP) data. SSIP is a new frequency-domain induced polarization method that transmits a pseudo-random m-sequence as the source current, where the m-sequence is a broadband signal. Potential information at multiple frequencies can be obtained through measurement. Removing noise is a crucial problem in SSIP data processing. Because the ordinary mean stack and digital filtering are not capable of reducing impulse noise effectively, the impact of impulse noise would remain in the complex resistivity spectrum and affect the interpretation of profile anomalies. We implemented a robust statistical method for SSIP data processing. Robust least-squares regression is used to fit and remove the linear trend from the original data before stacking. The robust M-estimate is used to stack the data of all periods. A robust smoothing filter is used to suppress the residual noise after stacking. For the robust statistical scheme, the most appropriate influence function and iterative algorithm are chosen by testing on simulated data to suppress the influence of outliers. We tested the benefits of the robust SSIP data processing using examples of SSIP data recorded at a test site beside a mine in Gansu province, China.
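A minimal sketch of the robust M-estimate stack, using iteratively reweighted Huber weights per sample so that impulse-contaminated periods are down-weighted rather than averaged in; the tuning constant 1.345 is the conventional Huber choice, not necessarily the paper's:

```python
import numpy as np

def huber_stack(periods, k=1.345, n_iter=10):
    """Robust stack of repeated m-sequence periods (rows = periods).
    Iteratively reweighted Huber M-estimate of location per sample."""
    x = np.asarray(periods, dtype=float)
    mu = np.median(x, axis=0)
    scale = 1.4826 * np.median(np.abs(x - mu), axis=0) + 1e-12
    for _ in range(n_iter):
        r = np.abs(x - mu) / scale
        w = np.minimum(1.0, k / np.maximum(r, 1e-12))  # Huber weights
        mu = (w * x).sum(axis=0) / w.sum(axis=0)
    return mu
```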
Qiu, Shi; Yang, Wen-Zhi; Yao, Chang-Liang; Qiu, Zhi-Dong; Shi, Xiao-Jian; Zhang, Jing-Xian; Hou, Jin-Jun; Wang, Qiu-Rong; Wu, Wan-Ying; Guo, De-An
2016-07-01
A key segment in the authentication of herbal medicines is the establishment of robust biomarkers that embody the intrinsic metabolite differences independent of the growing environment or processing techniques. We present a strategy that combines nontargeted metabolomics and "Commercial-homophyletic" comparison-induced biomarker verification with new bioinformatic vehicles to improve the efficiency and reliability of herbal medicine authentication. The chemical differentiation of five different parts (root, leaf, flower bud, berry, and seed) of Panax ginseng was illustrated as a case study. First, an optimized ultra-performance liquid chromatography/quadrupole time-of-flight-MS(E) (UPLC/QTOF-MS(E)) approach was established for global metabolite profiling. Second, UNIFI™ combined with search of an in-house library was employed to automatically characterize the metabolites. Third, pattern recognition multivariate statistical analyses of the MS(E) data of different parts of commercial and homophyletic samples were performed separately to explore potential biomarkers. Fourth, potential biomarkers deduced from commercial and homophyletic root and leaf samples were cross-compared to infer robust biomarkers. Fifth, discriminating models by artificial neural network (ANN) were established to identify different parts of P. ginseng. Consequently, 164 compounds were characterized, and 11 robust biomarkers enabling differentiation among root, leaf, flower bud, and berry were discovered by removing those that are structurally unstable or possibly processing-related. The ANN models using the robust biomarkers managed to exactly discriminate the four different parts, as well as root adulterated with leaf. In conclusion, biomarker verification using homophyletic samples is conducive to the discovery of robust biomarkers. The integrated strategy facilitates the authentication of herbal medicines in a more efficient and more intelligent manner. Copyright © 2016 Elsevier B.V. All rights reserved.
A General Framework of Persistence Strategies for Biological Systems Helps Explain Domains of Life
Yafremava, Liudmila S.; Wielgos, Monica; Thomas, Suravi; Nasir, Arshan; Wang, Minglei; Mittenthal, Jay E.; Caetano-Anollés, Gustavo
2012-01-01
The nature and cause of the division of organisms in superkingdoms is not fully understood. Assuming that environment shapes physiology, here we construct a novel theoretical framework that helps identify general patterns of organism persistence. This framework is based on Jacob von Uexküll’s organism-centric view of the environment and James G. Miller’s view of organisms as matter-energy-information processing molecular machines. Three concepts describe an organism’s environmental niche: scope, umwelt, and gap. Scope denotes the entirety of environmental events and conditions to which the organism is exposed during its lifetime. Umwelt encompasses an organism’s perception of these events. The gap is the organism’s blind spot, the scope that is not covered by umwelt. These concepts bring organisms of different complexity to a common ecological denominator. Ecological and physiological data suggest organisms persist using three strategies: flexibility, robustness, and economy. All organisms use umwelt information to flexibly adapt to environmental change. They implement robustness against environmental perturbations within the gap generally through redundancy and reliability of internal constituents. Both flexibility and robustness improve survival. However, they also incur metabolic matter-energy processing costs, which otherwise could have been used for growth and reproduction. Lineages evolve unique tradeoff solutions among strategies in the space of what we call “a persistence triangle.” Protein domain architecture and other evidence support the preferential use of flexibility and robustness properties. Archaea and Bacteria gravitate toward the triangle’s economy vertex, with Archaea biased toward robustness. Eukarya trade economy for survivability. Protista occupy a saddle manifold separating akaryotes from multicellular organisms. Plants and the more flexible Fungi share an economic stratum, and Metazoa are locked in a positive feedback loop toward flexibility. PMID:23443991
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
A robust automated system elucidates mouse home cage behavioral structure
Goulding, Evan H.; Schenk, A. Katrin; Juneja, Punita; MacKay, Adrienne W.; Wade, Jennifer M.; Tecott, Laurence H.
2008-01-01
Patterns of behavior exhibited by mice in their home cages reflect the function and interaction of numerous behavioral and physiological systems. Detailed assessment of these patterns thus has the potential to provide a powerful tool for understanding basic aspects of behavioral regulation and their perturbation by disease processes. However, the capacity to identify and examine these patterns in terms of their discrete levels of organization across diverse behaviors has been difficult to achieve and automate. Here, we describe an automated approach for the quantitative characterization of fundamental behavioral elements and their patterns in the freely behaving mouse. We demonstrate the utility of this approach by identifying unique features of home cage behavioral structure and changes in distinct levels of behavioral organization in mice with single gene mutations altering energy balance. The robust, automated, reproducible quantification of mouse home cage behavioral structure detailed here should have wide applicability for the study of mammalian physiology, behavior, and disease. PMID:19106295
Power independent EMG based gesture recognition for robotics.
Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P
2011-01-01
A novel method for detecting muscle contraction is presented. This method is further developed to identify four different gestures to facilitate a hand-gesture-controlled robot system. It is achieved using surface electromyography (EMG) measurements from groups of arm muscles. Cross-channel information is preserved through simultaneous processing of the EMG channels using a recent multivariate extension of Empirical Mode Decomposition (EMD). Next, phase synchrony measures are employed to make the system robust to different power levels caused by electrode placements and impedances. The multiple pairwise muscle synchronies are used as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations on real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.
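The power independence comes from using phase synchrony rather than amplitude; a sketch of the phase-locking value between two channels (the paper first decomposes the channels with multivariate EMD, which is omitted here):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase synchrony between two EMG channels: magnitude of the mean
    phase-difference vector, in [0, 1]. Amplitude cancels out, which
    is why the measure is insensitive to power differences caused by
    electrode placement or impedance."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```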
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process using about 400 data samples. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
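A sketch of the basic recursive least squares update that ERLS extends; the paper's inner iteration estimating the unmeasured intermediate Wiener signal, and its robust modification, are not shown:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One exponentially weighted recursive least-squares step.
    theta: parameter estimate, P: inverse correlation matrix,
    phi: regressor vector, y: new output sample."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P
```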
Project risk management in the construction of high-rise buildings
NASA Astrophysics Data System (ADS)
Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya
2018-03-01
This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is quantitative risk analysis, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the tasks of regression analysis of project data. The suggested algorithms used to assess the parameters in statistical models allow reliable estimates to be obtained. The theoretical problems of developing robust models built on the methodology of minimax estimates are reviewed, and an algorithm for the situation of asymmetric "contamination" is developed.
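A small illustration of Huber-type robust regression on contaminated data, here via statsmodels' RLM; the data are synthetic, and the paper's expansion of Huber's approach for project risk analysis is not reproduced:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic project data: cost driver x vs. observed cost y, with a
# few contaminated records of the kind that distort OLS estimates.
x = rng.uniform(0.0, 10.0, 60)
y = 3.0 + 1.5 * x + rng.normal(0.0, 1.0, 60)
y[:5] += 25.0  # asymmetric "contamination"

X = sm.add_constant(x)
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
ols = sm.OLS(y, X).fit()
print("Huber slope:", round(huber.params[1], 2),
      "| OLS slope:", round(ols.params[1], 2))
```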
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process fuzzy logic is used to predict the seriousness of the fault that has occurred.
Zischg, Jonatan; Goncalves, Mariana L R; Bacchin, Taneha Kuzniecow; Leonhardt, Günther; Viklander, Maria; van Timmeren, Arjan; Rauch, Wolfgang; Sitzenfrei, Robert
2017-09-01
In the urban water cycle, there are different ways of handling stormwater runoff. Traditional systems mainly rely on underground piped, sometimes named 'gray' infrastructure. New and so-called 'green/blue' ambitions aim for treating and conveying the runoff at the surface. Such concepts are mainly based on ground infiltration and temporal storage. In this work a methodology to create and compare different planning alternatives for stormwater handling on their pathways to a desired system state is presented. Investigations are made to assess the system performance and robustness when facing the deeply uncertain spatial and temporal developments in the future urban fabric, including impacts caused by climate change, urbanization and other disruptive events, like shifts in the network layout and interactions of 'gray' and 'green/blue' structures. With the Info-Gap robustness pathway method, three planning alternatives are evaluated to identify critical performance levels at different stages over time. This novel methodology is applied to a real case study problem where a city relocation process takes place during the upcoming decades. In this case study it is shown that hybrid systems including green infrastructures are more robust with respect to future uncertainties, compared to traditional network design.
A model to assess the Mars Telecommunications Network relay robustness
NASA Technical Reports Server (NTRS)
Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.
2005-01-01
The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.
NASA Astrophysics Data System (ADS)
Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III
2005-11-01
Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as those that occur with dose or focus variation - to understand and manipulate the final pattern correction toward a more process-robust configuration. The study first examines and validates the process of generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary results show success in both off-target model production and process-robust correction. With these off-target models as tools, mask production cycle times can be reduced.
Gupta, Munish; Kaplan, Heather C
2017-09-01
Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
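A minimal sketch of the individuals (XmR) chart, a common control chart for QI time series: control limits at the mean plus or minus 2.66 times the mean moving range, with points outside the limits signaling special cause variation (data values are made up):

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (XmR) chart limits: center line at the
    mean, control limits at +/- 2.66 times the mean moving range
    (2.66 = 3/d2 with d2 = 1.128 for subgroups of size 2)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()
    cl = x.mean()
    return cl - 2.66 * mr_bar, cl, cl + 2.66 * mr_bar

monthly_rate = [4.1, 3.9, 4.0, 4.2, 4.0, 3.8, 4.1, 4.0, 3.9, 4.1, 7.5, 4.0]
lcl, cl, ucl = individuals_chart_limits(monthly_rate)
print([v for v in monthly_rate if not lcl <= v <= ucl])  # special cause
```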
Characterizing Cancer Drug Response and Biological Correlates: A Geometric Network Approach.
Pouryahya, Maryam; Oh, Jung Hun; Mathews, James C; Deasy, Joseph O; Tannenbaum, Allen R
2018-04-23
In the present work, we apply a geometric network approach to study common biological features of anticancer drug response. We use for this purpose the panel of 60 human cell lines (NCI-60) provided by the National Cancer Institute. Our study suggests that mathematical tools for network-based analysis can provide novel insights into drug response and cancer biology. We adopted a discrete notion of Ricci curvature to measure, via a link between Ricci curvature and network robustness established by the theory of optimal mass transport, the robustness of biological networks constructed with a pre-treatment gene expression dataset and coupled the results with the GI50 response of the cell lines to the drugs. Based on the resulting drug response ranking, we assessed the impact of genes that are likely associated with individual drug response. For genes identified as important, we performed a gene ontology enrichment analysis using a curated bioinformatics database which resulted in biological processes associated with drug response across cell lines and tissue types which are plausible from the point of view of the biological literature. These results demonstrate the potential of using the mathematical network analysis in assessing drug response and in identifying relevant genomic biomarkers and biological processes for precision medicine.
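A sketch of a discrete (Ollivier-type) Ricci curvature computation for one network edge, using optimal transport between lazy random-walk neighbor measures via the POT library; the paper's precise curvature notion, network construction, and robustness coupling are not reproduced:

```python
import networkx as nx
import numpy as np
import ot  # POT: Python Optimal Transport

def ollivier_ricci(G, x, y, alpha=0.5):
    """Coarse Ricci curvature of edge (x, y):
    kappa = 1 - W1(m_x, m_y) / d(x, y), where m_v keeps mass alpha at
    v and spreads the rest uniformly over its neighbors."""
    def measure(v):
        nbrs = list(G.neighbors(v))
        mass = [alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs)
        return [v] + nbrs, np.array(mass)
    sx, mx = measure(x)
    sy, my = measure(y)
    cost = np.array([[nx.shortest_path_length(G, u, v) for v in sy]
                     for u in sx], dtype=float)
    w1 = ot.emd2(mx, my, cost)  # 1-Wasserstein distance
    return 1.0 - w1 / nx.shortest_path_length(G, x, y)

G = nx.karate_club_graph()
print(round(ollivier_ricci(G, 0, 1), 3))
```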
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Analytical redundancy and the design of robust failure detection systems
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653
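A minimal numerical sketch of parity-space residual generation as described above: parity vectors are taken from the null space of the stacked observability matrix, so the residual is (near) zero whenever the model holds. The system matrices and window length are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative discrete-time model x[k+1] = A x[k] + B u[k], y[k] = C x[k].
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
s = 2                                        # parity window length

# Stacked relation over the window: Y = O x_k + H U.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
ny, nu = C.shape[0], B.shape[1]
H = np.zeros(((s + 1) * ny, (s + 1) * nu))
for i in range(s + 1):
    for j in range(i):
        H[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, i-j-1) @ B

V = null_space(O.T).T                        # parity space: V @ O = 0

def residual(Y_window, U_window):
    """Parity residual: ~0 fault-free, nonzero under sensor/actuator faults."""
    return V @ (Y_window - H @ U_window)

# Fault-free check: propagate the model and stack one measurement window.
x = np.array([1.0, -0.5]); U, Y = [], []
for k in range(s + 1):
    u = np.array([np.sin(k)])
    U.append(u); Y.append(C @ x)
    x = A @ x + B @ u
print(np.round(residual(np.concatenate(Y), np.concatenate(U)), 12))
```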
Jin, Suoqin; MacLean, Adam L; Peng, Tao; Nie, Qing
2018-02-05
Single-cell RNA-sequencing (scRNA-seq) offers unprecedented resolution for studying cellular decision-making processes. Robust inference of cell state transition paths and probabilities is an important yet challenging step in the analysis of these data. Here we present scEpath, an algorithm that calculates energy landscapes and probabilistic directed graphs in order to reconstruct developmental trajectories. We quantify the energy landscape using "single-cell energy" and distance-based measures, and find that the combination of these enables robust inference of the transition probabilities and lineage relationships between cell states. We also identify marker genes and gene expression patterns associated with cell state transitions. Our approach produces pseudotemporal orderings that are - in combination - more robust and accurate than current methods, and offers higher resolution dynamics of the cell state transitions, leading to new insight into key transition events during differentiation and development. Moreover, scEpath is robust to variation in the size of the input gene set, and is broadly unsupervised, requiring few parameters to be set by the user. Applications of scEpath led to the identification of a cell-cell communication network implicated in early human embryo development, and novel transcription factors important for myoblast differentiation. scEpath allows us to identify common and specific temporal dynamics and transcriptional factor programs along branched lineages, as well as the transition probabilities that control cell fates. A MATLAB package of scEpath is available at https://github.com/sqjin/scEpath. qnie@uci.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
RootGraph: a graphic optimization tool for automated image analysis of plant roots
Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N.; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J.
2015-01-01
This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions. PMID:26224880
Detecting phenotype-driven transitions in regulatory network structure.
Padi, Megha; Quackenbush, John
2018-01-01
Complex traits and diseases like human height or cancer are often not caused by a single mutation or genetic variant, but instead arise from functional changes in the underlying molecular network. Biological networks are known to be highly modular and contain dense "communities" of genes that carry out cellular processes, but these structures change between tissues, during development, and in disease. While many methods exist for inferring networks and analyzing their topologies separately, there is a lack of robust methods for quantifying differences in network structure. Here, we describe ALPACA (ALtered Partitions Across Community Architectures), a method for comparing two genome-scale networks derived from different phenotypic states to identify condition-specific modules. In simulations, ALPACA leads to more nuanced, sensitive, and robust module discovery than currently available network comparison methods. As an application, we use ALPACA to compare transcriptional networks in three contexts: angiogenic and non-angiogenic subtypes of ovarian cancer, human fibroblasts expressing transforming viral oncogenes, and sexual dimorphism in human breast tissue. In each case, ALPACA identifies modules enriched for processes relevant to the phenotype. For example, modules specific to angiogenic ovarian tumors are enriched for genes associated with blood vessel development, and modules found in female breast tissue are enriched for genes involved in estrogen receptor and ERK signaling. The functional relevance of these new modules suggests that not only can ALPACA identify structural changes in complex networks, but also that these changes may be relevant for characterizing biological phenotypes.
Chung, Bongjae; Cebral, Juan Raul
2015-01-01
Computational fluid dynamics (CFD) has been used for several years to identify mechanical risk factors associated with aneurysmal evolution and rupture, as well as to understand flow characteristics before and after surgical treatments, in order to help the clinical decision-making process. We used the keywords "CFD" and "aneurysms" to search recent publications since about 2000, and categorized them into (i) studies of rupture risk factors and (ii) investigations of pre- and post-evaluations of surgical treatment with devices like coils and flow diverters (FD). This search enables us to examine the current status of CFD as a clinical tool and to determine if CFD can potentially become an important part of routine clinical practice for the evaluation and treatment of aneurysms in the near future. According to previous reports, it has been argued that CFD has become a quite robust non-invasive tool for the evaluation of surgical devices, especially in the early stages of device design, and it has also been applied successfully to the study of rupture risk assessment. However, we find that, due to the large number of pre-processing inputs, further efforts toward validation and reproducibility of CFD with larger clinical datasets are still essential to identify standardized mechanical risk factors. As a result, we identify the following needs for a robust CFD tool for clinical use: (i) more reliability tests through validation studies, (ii) analyses of larger generalized clinical datasets to find converging universal risk parameters, (iii) fluid-structure interaction (FSI) analyses to better understand the detailed vascular remodeling processes associated with aneurysm growth, evolution and rupture, and (iv) better coordinated and organized communications and collaborations between engineers and clinicians.
Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup
2009-01-01
For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
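As a small sketch of the DWT-based compression such a neural-signal pipeline relies on, the snippet below decomposes a noisy spike-like trace with PyWavelets and hard-thresholds small coefficients; the wavelet choice, level, and threshold are illustrative, not the paper's design values.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
# Synthetic trace: one spike-like transient plus background noise.
signal = np.exp(-((t - 0.5) / 0.01) ** 2) + 0.05 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)          # multilevel DWT
# Keep only large coefficients: crude compression preserving the spike shape.
thresh = 0.1 * max(np.abs(c).max() for c in coeffs)
coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
kept = sum(int(np.count_nonzero(c)) for c in coeffs)
reconstructed = pywt.waverec(coeffs, "db4")

err = np.max(np.abs(reconstructed[:signal.size] - signal))
print(f"kept {kept} significant coefficients out of ~{signal.size}; max error {err:.3f}")
```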
Talluri, Murali V N Kumar; Kalariya, Pradipbhai D; Dharavath, Shireesha; Shaikh, Naeem; Garg, Prabha; Ramisetti, Nageswara Rao; Ragampeta, Srinivas
2016-09-01
A novel ultra high performance liquid chromatography method development strategy was ameliorated by applying quality by design approach. The developed systematic approach was divided into five steps (i) Analytical Target Profile, (ii) Critical Quality Attributes, (iii) Risk Assessments of Critical parameters using design of experiments (screening and optimization phases), (iv) Generation of design space, and (v) Process Capability Analysis (Cp) for robustness study using Monte Carlo simulation. The complete quality-by-design-based method development was made automated and expedited by employing sub-2 μm particles column with an ultra high performance liquid chromatography system. Successful chromatographic separation of the Coenzyme Q10 from its biotechnological process related impurities was achieved on a Waters Acquity phenyl hexyl (100 mm × 2.1 mm, 1.7 μm) column with gradient elution of 10 mM ammonium acetate buffer (pH 4.0) and a mixture of acetonitrile/2-propanol (1:1) as the mobile phase. Through this study, fast and organized method development workflow was developed and robustness of the method was also demonstrated. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance to the International Conference on Harmonization, Q2 (R1) guidelines. The impurities were identified by atmospheric pressure chemical ionization-mass spectrometry technique. Further, the in silico toxicity of impurities was analyzed using TOPKAT and DEREK software. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
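A minimal sketch of the Monte Carlo process-capability step described above: method parameters are jittered around their set points and Cp/Cpk are computed against spec limits. The response function, variations, and limits are invented placeholders, not the published method's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical chromatographic response as a function of two method
# parameters (buffer pH and gradient slope); coefficients are illustrative.
def resolution(pH, slope):
    return 2.0 + 0.8 * (pH - 4.0) - 0.5 * (slope - 1.0) ** 2

# Monte Carlo: jitter parameters around set points within expected variation.
pH = rng.normal(4.0, 0.05, 100_000)
slope = rng.normal(1.0, 0.03, 100_000)
r = resolution(pH, slope)

LSL, USL = 1.8, 2.4                          # illustrative spec limits
Cp = (USL - LSL) / (6 * r.std())
Cpk = min(USL - r.mean(), r.mean() - LSL) / (3 * r.std())
print(f"Cp ~ {Cp:.2f}, Cpk ~ {Cpk:.2f}")
```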
Fugitive Methane Gas Emission Monitoring in oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Levente
Identifying fugitive methane leaks allows optimization of the extraction process, can extend gas extraction equipment lifetime, and eliminates hazardous work conditions. We demonstrate a wireless sensor network based on cost-effective and robust chemi-resistive methane sensors combined with real-time analytics to identify leaks from 2 scfh to 10000 scfh. The chemi-resistive sensors were validated for sensitivity better than 1 ppm for methane plume detection. The real-time chemical sensor and wind data are integrated into an inversion model to identify the location and magnitude of the methane leak. This integrated solution can be deployed in outdoor environments for long-term monitoring of chemical plumes.
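A toy sketch of the inversion step described above, assuming a deliberately simplified downwind dispersion kernel and recovering non-negative source strengths by least squares; the sensor layout, readings, and the kernel itself are illustrative stand-ins for the real plume model, which is not described in the record.

```python
import numpy as np
from scipy.optimize import nnls

# Deliberately simplified downwind kernel: concentration ~ alignment / distance^2.
# A stand-in for a real dispersion model, which this sketch does not attempt.
def kernel(sensor, source, wind):
    d = np.asarray(sensor, float) - np.asarray(source, float)
    dist = np.hypot(*d) + 1e-6
    return max(d @ wind, 0.0) / dist ** 2

sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]
candidates = [(2, 3), (7, 8), (5, 5)]          # hypothetical candidate leak sites
wind = np.array([1.0, 0.0])                    # wind blowing toward +x

A = np.array([[kernel(s, c, wind) for c in candidates] for s in sensors])
readings = np.array([0.00, 0.30, 0.00, 0.12])  # illustrative ppm readings

rates, _ = nnls(A, readings)                   # non-negative source strengths
best = int(np.argmax(rates))
print("most likely source:", candidates[best], "estimated rate:", round(rates[best], 2))
```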
Manufacturing Execution Systems: Examples of Performance Indicator and Operational Robustness Tools.
Gendre, Yannick; Waridel, Gérard; Guyon, Myrtille; Demuth, Jean-François; Guelpa, Hervé; Humbert, Thierry
Manufacturing Execution Systems (MES) are computerized systems used to measure production performance in terms of productivity, yield, and quality. The first part describes performance indicators, including overall equipment effectiveness (OEE), along with process robustness tools and statistical process control. The second part details tools that help operators maintain process robustness and control by preventing deviations from target control charts. MES was developed by Syngenta together with CIMO for automation.
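For reference, the OEE figure mentioned above is the product of availability, performance, and quality; a minimal sketch with invented shift numbers:

```python
def oee(planned_min, downtime_min, ideal_cycle_min, total_units, good_units):
    """Overall equipment effectiveness = availability x performance x quality."""
    run_time = planned_min - downtime_min
    availability = run_time / planned_min
    performance = (ideal_cycle_min * total_units) / run_time
    quality = good_units / total_units
    return availability * performance * quality

# Illustrative shift: 480 planned minutes, 45 down, 1.2 min ideal cycle time.
print(f"OEE = {oee(480, 45, 1.2, 330, 312):.1%}")
```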
Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T
2015-01-01
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
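A minimal sketch of the cutoff-plus-resampling idea described above: scan candidate expression cutoffs for the best group separation, then estimate a p-value by permutation. The separation statistic (difference in mean survival) is a crude stand-in for the survival analysis RSA actually performs, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: miRNA expression and survival months for 120 patients.
expr = rng.lognormal(0.0, 0.6, 120)
survival = 40 - 6 * (expr > np.median(expr)) + rng.normal(0, 8, 120)

def separation(expr, outcome, cutoff):
    """Crude two-group separation; a real pipeline would use a log-rank test."""
    hi = expr >= cutoff
    if hi.sum() < 10 or (~hi).sum() < 10:      # require reasonably sized groups
        return 0.0
    return abs(outcome[hi].mean() - outcome[~hi].mean())

# Optimal cutoff: scan candidate cutoffs for maximal group separation.
cutoffs = np.quantile(expr, np.linspace(0.2, 0.8, 25))
stats = [separation(expr, survival, c) for c in cutoffs]
best_cut, best_stat = cutoffs[int(np.argmax(stats))], max(stats)

# Resampling p-value: permute expression to break any real association.
null = [max(separation(rng.permutation(expr), survival, c) for c in cutoffs)
        for _ in range(500)]
p = (1 + sum(s >= best_stat for s in null)) / 501
print(f"cutoff={best_cut:.2f}, separation={best_stat:.1f} months, p~{p:.3f}")
```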
Temperature-Robust Neural Function from Activity-Dependent Ion Channel Regulation.
O'Leary, Timothy; Marder, Eve
2016-11-07
Many species of cold-blooded animals experience substantial and rapid fluctuations in body temperature. Because biological processes are differentially temperature dependent, it is difficult to understand how physiological processes in such animals can be temperature robust [1-8]. Experiments have shown that core neural circuits, such as the pyloric circuit of the crab stomatogastric ganglion (STG), exhibit robust neural activity in spite of large (20°C) temperature fluctuations [3, 5, 7, 8]. This robustness is surprising because (1) each neuron has many different kinds of ion channels with different temperature dependencies (Q10s) that interact in a highly nonlinear way to produce firing patterns and (2) across animals there is substantial variability in conductance densities that nonetheless produce almost identical firing properties. The high variability in conductance densities in these neurons [9, 10] appears to contradict the possibility that robustness is achieved through precise tuning of key temperature-dependent processes. In this paper, we develop a theoretical explanation for how temperature robustness can emerge from a simple regulatory control mechanism that is compatible with highly variable conductance densities [11-13]. The resulting model suggests a general mechanism for how nervous systems and excitable tissues can exploit degenerate relationships among temperature-sensitive processes to achieve robust function. Copyright © 2016 Elsevier Ltd. All rights reserved.
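The Q10 temperature scaling underlying this argument is simple to state: a rate at temperature T is the reference rate times Q10^((T - Tref)/10). A small sketch (values illustrative) showing how two processes with different Q10s drift apart over a 20°C swing:

```python
import numpy as np

def q10_scale(rate_ref, q10, T, T_ref=11.0):
    """Temperature scaling of a rate: r(T) = r_ref * Q10**((T - T_ref) / 10)."""
    return rate_ref * q10 ** ((T - T_ref) / 10.0)

# Two channel processes with different Q10s change their relative timing
# across a 20 C range, which is why fixed conductance tuning is fragile.
T = np.linspace(5, 25, 5)
ratio = q10_scale(1.0, 3.0, T) / q10_scale(1.0, 1.5, T)
print(np.round(ratio, 2))   # relative speed of the two processes vs temperature
```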
NASA Astrophysics Data System (ADS)
Nath, Nayani Kishore
2017-08-01
Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liners are made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of process parameters of the tape winding process to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The work also studied the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was back-wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of four different control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise (S/N) ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back-wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of back-wall temperature on the performance of the throat back-up liners was verified by a ground firing test.
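A minimal sketch of the smaller-the-better signal-to-noise calculation used in such Taguchi analyses, S/N = -10 log10(mean(y^2)); the replicate temperatures below are invented, not the study's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response:
    S/N = -10 * log10(mean(y^2)); larger S/N means a better, less noisy row."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical back-wall temperatures (C) from replicated runs of two L9 rows.
runs = {
    "row1": [152.0, 149.5, 151.2],
    "row2": [141.3, 143.8, 140.9],
}
for row, temps in runs.items():
    print(row, round(sn_smaller_is_better(temps), 2), "dB")
```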
Simaria, Ana S; Hassan, Sally; Varadaraju, Hemanthram; Rowley, Jon; Warren, Kim; Vanek, Philip; Farid, Suzanne S
2014-01-01
For allogeneic cell therapies to reach their therapeutic potential, challenges related to achieving scalable and robust manufacturing processes will need to be addressed. A particular challenge is producing lot-sizes capable of meeting commercial demands of up to 10^9 cells/dose for large patient numbers due to the current limitations of expansion technologies. This article describes the application of a decisional tool to identify the most cost-effective expansion technologies for different scales of production as well as current gaps in the technology capabilities for allogeneic cell therapy manufacture. The tool integrates bioprocess economics with optimization to assess the economic competitiveness of planar and microcarrier-based cell expansion technologies. Visualization methods were used to identify the production scales where planar technologies will cease to be cost-effective and where microcarrier-based bioreactors become the only option. The tool outputs also predict that for the industry to be sustainable for high demand scenarios, significant increases will likely be needed in the performance capabilities of microcarrier-based systems. These data are presented using a technology S-curve as well as windows of operation to identify the combination of cell productivities and scale of single-use bioreactors required to meet future lot sizes. The modeling insights can be used to identify where future R&D investment should be focused to improve the performance of the most promising technologies so that they become a robust and scalable option that enables the cell therapy industry to reach commercially relevant lot sizes. The tool outputs can facilitate decision-making very early on in development and be used to predict, and better manage, the risk of process changes needed as products proceed through the development pathway. Biotechnol. Bioeng. 2014;111: 69–83. © 2013 Wiley Periodicals, Inc. PMID:23893544
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, there is still no unifying framework able to embrace all the proposed metrics. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). First, we normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several failure percentages and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
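A minimal sketch of the PCA step at the core of the robustness surface, assuming a pre-scaled matrix of robustness metrics for one failure percentage; the data here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Placeholder data: rows are failure realizations at one failure percentage,
# columns are robustness metrics already dimensioned (scaled) to [0, 1].
metrics = rng.random((50, 4))

pca = PCA(n_components=1)
scores = pca.fit_transform(metrics).ravel()   # one R*-like value per realization
weights = pca.components_[0]                  # how much each metric drives variance
print("metric weights:", np.round(weights, 2))
print("first scores:", np.round(scores[:5], 2))
# Repeating this for each failure percentage and stacking the score rows
# yields the surface whose shape is then inspected visually.
```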
Shah, Dheeraj; Choudhury, Panna; Gupta, Piyush; Mathew, Joseph L; Gera, Tarun; Gogia, Siddhartha; Mohan, Pavitra; Panda, Rajmohan; Menon, Subhadra
2012-08-01
Scaling up of evidence-based management and prevention of childhood diarrhea is a public health priority in India, and necessitates robust literature review, for advocacy and action. To identify, synthesize and summarize current evidence to guide scaling up of management of diarrhea among under-five children in India, and identify existing knowledge gaps. A set of questions pertaining to the management (prevention, treatment, and control) of childhood diarrhea was identified through a consultative process. A modified systematic review process developed a priori was used to identify, synthesize and summarize research evidence and operational information pertaining to the problem in India. Areas with limited or no evidence were identified as knowledge gaps. Childhood diarrhea is a significant public health problem in India; the point (two-week) prevalence is 9 to 20%. Diarrhea accounts for 14% of the total deaths in under-five children in India. Infants aged 6 to 24 months are at the highest risk of diarrhea. There is a lack of robust nation-wide data on etiology; rotavirus and diarrheagenic E. coli are the most common organisms identified. The current National Guidelines are sufficient for case-management of childhood diarrhea. Exclusive breastfeeding, handwashing and point-of-use water treatment are effective strategies for prevention of all-cause diarrhea; rotavirus vaccines are efficacious in preventing rotavirus-specific diarrhea. ORS and zinc are the mainstay of management during an episode of childhood diarrhea but have low coverage in India due to policy and programmatic barriers, whereas indiscriminate use of antibiotics and other drugs is common. Zinc therapy given during diarrhea can be scaled up through existing infrastructure by introducing a training component and information, education and communication activities. This systematic review summarizes current evidence on childhood diarrhea and provides evidence to inform child health programs in India.
Vehicle active steering control research based on two-DOF robust internal model control
NASA Astrophysics Data System (ADS)
Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun
2016-07-01
Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee the robustness of the control algorithm; therefore, an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to reach high yaw-rate tracking performance with a certain degree of robustness. The proposed algorithm inherits the good model tracking ability of IMC control and guarantees robustness to model uncertainties. In order to separate the design process of model tracking from the robustness design process, an improved two-degree-of-freedom (2-DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of a double lane change maneuver and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a Magic Formula tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations of the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness of the active steering control algorithm is resolved, and higher control performance with robustness to model uncertainties is obtained.
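A minimal sketch of the IMC design pattern referenced above for a first-order plant, using the python-control package: the controller is the inverted minimum-phase model times a robustness filter, so with a perfect model the reference-to-output map equals the filter. Plant values and the filter constant are illustrative; the paper's 2-DOF structure adds a second, separately tuned filter for disturbance rejection.

```python
import numpy as np
import control

# First-order yaw-rate plant G(s) = K / (tau*s + 1); values are illustrative.
K, tau = 2.0, 0.3
G = control.tf([K], [tau, 1])

# IMC controller Q(s) = G^{-1}(s) * F(s), with filter F(s) = 1/(lam*s + 1).
# Smaller lam gives faster tracking but less robustness to model mismatch.
lam = 0.1
Q = control.tf([tau, 1], [K * lam, K])

# With a perfect internal model, the reference-to-output map is G*Q = F.
T_ry = control.series(Q, G)
t, y = control.step_response(T_ry, T=np.linspace(0, 2, 400))
print(f"final value {y[-1]:.3f} (target 1.0)")
```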
Robustness Elasticity in Complex Networks
Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu
2012-01-01
Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
Schirmer, Emily B; Golden, Kathryn; Xu, Jin; Milling, Jesse; Murillo, Alec; Lowden, Patricia; Mulagapati, Srihariraju; Hou, Jinzhao; Kovalchin, Joseph T; Masci, Allyson; Collins, Kathryn; Zarbis-Papastoitsis, Gregory
2013-08-01
By tracking product quality in parallel through fermentation and purification development, a robust process was designed to reduce the levels of product-related species. Three biochemically similar product-related species were identified as byproducts of host-cell enzymatic activity. To modulate intracellular proteolytic activity, key fermentation parameters (temperature, pH, trace metals, EDTA levels, and carbon source) were evaluated through bioreactor optimization, while balancing negative effects on growth, productivity, and oxygen demand. The purification process was based on three non-affinity steps and resolved product-related species by exploiting small charge differences. Using statistical design of experiments for elution conditions, a high-resolution cation exchange capture column was optimized for resolution and recovery. Further reduction of product-related species was achieved by evaluating a matrix of conditions for a ceramic hydroxyapatite column. The optimized fermentation process was transferred from the 2-L laboratory scale to the 100-L pilot scale and the purification process was scaled accordingly to process the fermentation harvest. The laboratory- and pilot-scale processes resulted in similar process recoveries of 60 and 65%, respectively, and in a product that was of equal quality and purity to that of small-scale development preparations. The parallel approach for up- and downstream development was paramount in achieving a robust and scalable clinical process. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Varnum, Susan M.; Brown, Joseph N.; Riensche, Roderick M.; Adkins, Joshua N.; Jacobs, Jon M.; Hoidal, John R.; Scholand, Mary Beth; Pounds, Joel G.; Blackburn, Michael R.; Rodland, Karin D.; McDermott, Jason E.
2013-01-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches: a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification. PMID:24223463
Mentalized affectivity: A new model and assessment of emotion regulation
Kolasi, Jonela; Hegsted, Camilla P.; Berkowitz, Yoni; Jurist, Elliot L.
2017-01-01
Here we introduce a new assessment of emotion regulation called the Mentalized Affectivity Scale (MAS). A large online adult sample (N = 2,840) completed the 60-item MAS along with a battery of psychological measures. Results revealed a robust three-component structure underlying mentalized affectivity, which we labeled: Identifying emotions (the ability to identify emotions and to reflect on the factors that influence them); Processing emotions (the ability to modulate and distinguish complex emotions); and Expressing emotions (the tendency to express emotions outwardly or inwardly). Hierarchical modeling suggested that Processing emotions delineates from Identifying them, and Expressing emotions delineates from Processing them. We then showed how these components are associated with personality traits, well-being, trauma, and 18 different psychological disorders (including mood, neurological, and personality disorders). Notably, those with anxiety, mood, and personality disorders showed a profile of high Identifying and low Processing compared to controls. Further, results showed how mentalized affectivity scores varied across psychological treatment modalities and years spent in therapy. Taken together, the model of mentalized affectivity advances prior theory and research on emotion regulation and the MAS is a useful and reliable instrument that can be used in both clinical and non-clinical settings in psychology, psychiatry, and neuroscience. PMID:29045403
Mathew, Joseph L; Patwari, Ashok K; Gupta, Piyush; Shah, Dheeraj; Gera, Tarun; Gogia, Siddhartha; Mohan, Pavitra; Panda, Rajmohan; Menon, Subhadra
2011-03-01
Scaling up of evidence-based management of childhood acute respiratory infection/pneumonia is a public health priority in India, and necessitates robust literature review, for advocacy and action. To identify, synthesize and summarize current evidence to guide scaling up of management of childhood acute respiratory infection/pneumonia in India, and identify existing knowledge gaps. A set of ten questions pertaining to the management (prevention, treatment, and control) of childhood ARI/pneumonia was identified through a consultative process. A modified systematic review process developed a priori was used to identify, synthesize and summarize research evidence and operational information pertaining to the problem in India. Areas with limited or no evidence were identified as knowledge gaps. Childhood ARI/pneumonia is a significant public health problem in India, although robust epidemiological data are not available on its incidence. Mortality due to pneumonia accounts for approximately one-fourth of the total deaths in under-five children in India. Pneumonia affects children irrespective of socioeconomic status, with higher risk among young infants, malnourished children, non-exclusively breastfed children and those with exposure to solid fuel use. There is a lack of robust nation-wide data on etiology; bacteria (including Pneumococcus, H. influenzae, S. aureus and Gram-negative bacilli), viruses (especially RSV) and Mycoplasma are the common organisms identified. In-vitro resistance to cotrimoxazole is high. Wheezing is commonly associated with ARI/pneumonia in children, but difficult to appreciate without auscultation. The current WHO guidelines, as modified by the IndiaCLEN Task Force on Pneumonia (2010), are sufficient for case-management of childhood pneumonia. Other important interventions to prevent mortality are oxygen therapy for those with severe or very severe pneumonia and measles vaccination for all infants. There is insufficient evidence for a protective or curative effect of vitamin A; zinc supplementation could be beneficial to prevent pneumonia, although it has no therapeutic benefit. There is insufficient evidence on the potential effectiveness and cost-effectiveness of Hib and pneumococcal vaccines for reduction of ARI-specific mortality. Case-finding and community-based management are effective management strategies, but have low coverage in India due to policy and programmatic barriers. There is a significant gap in the utilization of existing services, provider practices as well as family practices in seeking care. The systematic review summarizes current evidence on childhood ARI and pneumonia management and provides evidence to inform child health programs in India.
On adaptive robustness approach to Anti-Jam signal processing
NASA Astrophysics Data System (ADS)
Poberezhskiy, Y. S.; Poberezhskiy, G. Y.
An effective approach to exploiting statistical differences between desired and jamming signals, termed adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luther, Erik; Rooyen, Isabella van; Leckie, Rafael
2015-03-01
In an effort to explore fuel systems that are more robust under accident scenarios, the DOE-NE has identified the need to resume transient testing. The Transient Reactor Test (TREAT) facility has been identified as the preferred option for the resumption of transient testing of nuclear fuel in the United States. In parallel, NNSA's Global Threat Reduction Initiative (GTRI) Convert program is exploring the need to replace the existing highly enriched uranium (HEU) core with a low enriched uranium (LEU) core. In order to construct a new LEU core, materials and fabrication processes similar to those used in the initial core fabrication must be identified, developed, and characterized. In this research, graphite matrix fuel blocks were extruded and their material properties were measured. Initially the extrusion process followed the historic route; however, the project was expanded to explore methods to increase the graphite content of the fuel blocks and to explore modern resins. Material properties relevant to fuel performance, including density, heat capacity, and thermal diffusivity, were measured. The relationship between process defects and material properties will be discussed.
Integrated Process Modeling-A Process Validation Life Cycle Companion.
Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-17
During regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
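A minimal sketch of the integrated-process-model idea, assuming two toy unit operations whose parameters are Monte Carlo-sampled so that upstream variation propagates into a final CQA and an OOS rate; all relationships and numbers are invented placeholders, not the CMO study's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Unit operation 1 (fermentation): titer depends on temperature variation.
temp = rng.normal(37.0, 0.3, n)               # process parameter (C)
titer = 5.0 - 0.4 * (temp - 37.0) ** 2 + rng.normal(0, 0.1, n)

# Unit operation 2 (purification): column load depends on upstream titer,
# so the two operations interact rather than acting in isolation.
load = titer * 0.9
purity = 98.0 - 0.8 * np.maximum(load - 4.5, 0) + rng.normal(0, 0.2, n)

spec = 97.5                                   # CQA specification on purity (%)
oos_rate = float(np.mean(purity < spec))
print(f"predicted OOS rate ~ {oos_rate:.2%}")
```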
Panaceas, uncertainty, and the robust control framework in sustainability science
Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan
2007-01-01
A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
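A minimal sketch of the sensitivity the article highlights, assuming the standard logistic (Gordon-Schaefer-type) stock dynamics: an effort policy tuned to the nominal growth rate is stress-tested against parameter uncertainty. All parameter values are illustrative.

```python
import numpy as np

def harvest_sim(r, K, effort, x0=0.5, T=200):
    """Logistic stock under constant harvest effort (catch = effort * stock)."""
    x = x0
    for _ in range(T):
        x = max(x + r * x * (1 - x / K) - effort * x, 0.0)
    return x

# Policy tuned to nominal r = 1.0: effort = r/2 maximizes sustainable yield.
effort = 0.5
rng = np.random.default_rng(5)
finals = np.array([harvest_sim(r, 1.0, effort) for r in rng.normal(1.0, 0.2, 1000)])
collapse = np.mean(finals < 0.05)
print(f"collapse probability under +/-20% growth-rate uncertainty: {collapse:.1%}")
```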
Redundant Design in Interdependent Networks
2016-01-01
Modern infrastructure networks are often coupled together and thus can be modeled as interdependent networks. Overload and interdependency effects make interdependent networks more fragile when suffering from attacks. Existing research has primarily concentrated on the cascading failure process of interdependent networks without load, or on the robustness of isolated networks with load. Only limited research has been done on the cascading failure process caused by overload in interdependent networks. Redundant design is a primary approach to enhance the reliability and robustness of a system. In this paper, we propose two redundant methods, node back-up and dependency redundancy, and the experimental results indicate that the two measures are effective and low-cost. Two detailed models of redundant design are introduced based on the non-linear load-capacity model. Based on the attributes and historical failure distribution of nodes, we introduce three static selection strategies (random-based, degree-based, and initial-load-based) and a dynamic strategy, HFD (historical failure distribution), to identify which nodes could have a back-up with priority. In addition, we consider the cost and efficiency of different redundant proportions to determine the best proportion with maximal enhancement and minimal cost. Experiments on interdependent networks demonstrate that the combination of HFD and dependency redundancy is an effective and preferred measure to implement redundant design on interdependent networks. The results suggest that the redundant design proposed in this paper permits construction of highly robust interactive networked systems. PMID:27764174
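A minimal sketch of a load-capacity cascade of the kind discussed above, simplified to a single network with linear capacities (the paper's model is non-linear and interdependent); it shows the mechanics of overload propagation after one node is attacked.

```python
import networkx as nx

def cascade(G, alpha=0.2, attacked=0):
    """Betweenness-load cascade with linear capacity C = (1 + alpha) * L0.

    Simplified single-network stand-in for the paper's non-linear
    load-capacity model on interdependent networks.
    """
    load = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load[v] for v in G}
    G = G.copy()
    G.remove_node(attacked)
    failed = True
    while failed and G.number_of_nodes() > 0:
        load = nx.betweenness_centrality(G)          # loads redistribute
        over = [v for v in G if load[v] > capacity[v]]
        failed = bool(over)
        G.remove_nodes_from(over)                     # overloaded nodes fail
    return G.number_of_nodes()

G = nx.barabasi_albert_graph(200, 2, seed=7)
print("surviving nodes after cascade:", cascade(G))
```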
Park, Yongjin; Yoon, Sang Sun
2011-01-01
Pseudomonas aeruginosa, a gram-negative bacterium of clinical importance, forms more robust biofilm during anaerobic respiration, a mode of growth presumed to occur in the abnormally thickened mucus layer lining the cystic fibrosis (CF) patient airway. However, the molecular basis of this anaerobiosis-triggered robust biofilm formation is not yet clearly defined. Here, we identified a morphological change naturally accompanying anaerobic respiration in P. aeruginosa and investigated its effect on biofilm formation in vitro. A standard laboratory strain, PAO1, was highly elongated during anaerobic respiration compared with bacteria grown aerobically. Microscopic analysis demonstrated that cell elongation likely occurred as a consequence of defective cell division. Cell elongation was dependent on the presence of nitrite reductase (NIR), which reduces nitrite (NO2−) to nitric oxide (NO), and was repressed in PAO1 in the presence of carboxy-PTIO, a NO antagonist, demonstrating that cell elongation involves a process that responds to NO, a spontaneous byproduct of anaerobic respiration. Importantly, the non-elongated NIR-deficient mutant failed to form biofilm, while a mutant of nitrate reductase (NAR) and wild-type PAO1, both of which were highly elongated, formed robust biofilm. Taken together, our data reveal a role of a previously undescribed cell biological event in P. aeruginosa biofilm formation and suggest NIR as a key player involved in this process. PMID:21267455
Oscillatory Protein Expression Dynamics Endows Stem Cells with Robust Differentiation Potential
Kaneko, Kunihiko
2011-01-01
The lack of understanding of stem cell differentiation and proliferation is a fundamental problem in developmental biology. Although gene regulatory networks (GRNs) for stem cell differentiation have been partially identified, the nature of differentiation dynamics and their regulation leading to robust development remain unclear. Herein, using a dynamical-systems approach to modeling cells, we performed simulations of the developmental process using all possible GRNs with a few genes, and screened GRNs that could generate cell type diversity through cell-cell interactions. We found that model stem cells that both proliferated and differentiated always exhibited oscillatory expression dynamics, and the differentiation frequency of such stem cells was regulated, resulting in a robust number distribution. Moreover, we uncovered common regulatory motifs for stem cell differentiation, in which a combination of regulatory motifs that generated oscillatory expression dynamics and stabilized distinct cellular states played an essential role. These findings may explain the recently observed heterogeneity and dynamic equilibrium in cellular states of stem cells, and can be used to predict regulatory networks responsible for differentiation in stem cell systems. PMID:22073296
Tracking transcriptional activities with high-content epifluorescent imaging
NASA Astrophysics Data System (ADS)
Hua, Jianping; Sima, Chao; Cypert, Milana; Gooden, Gerald C.; Shack, Sonsoles; Alla, Lalitamba; Smith, Edward A.; Trent, Jeffrey M.; Dougherty, Edward R.; Bittner, Michael L.
2012-04-01
High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.
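A minimal sketch of part (1) of the procedure, assuming a standard threshold-label-measure pipeline in scikit-image; the synthetic frame and parameter choices are illustrative, not the authors' segmentation algorithm.

```python
import numpy as np
from skimage import filters, measure, morphology

def quantify_cells(image):
    """Segment fluorescent cells and report per-cell area and mean intensity."""
    thresh = filters.threshold_otsu(image)                 # global threshold
    mask = morphology.remove_small_objects(image > thresh, min_size=30)
    labels = measure.label(mask)                           # connected components
    props = measure.regionprops(labels, intensity_image=image)
    return [(p.label, p.area, p.mean_intensity) for p in props]

# Synthetic frame: two bright reporter-expressing blobs on a dim background.
img = np.zeros((100, 100))
img[20:30, 20:30] = 0.9
img[60:75, 55:70] = 0.6
img += 0.05 * np.random.default_rng(8).random(img.shape)

for label, area, inten in quantify_cells(img):
    print(f"cell {label}: area={area}, mean intensity={float(inten):.3f}")
```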
NASA Astrophysics Data System (ADS)
Reed, Patrick; Trindade, Bernardo; Herman, Jonathan; Zeff, Harrison; Characklis, Gregory
2016-04-01
Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as of the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
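A minimal sketch of the deep-uncertainty sampling step in such a MORDM analysis: Latin hypercube states of the world are generated and a candidate policy is scored by the fraction of worlds in which it avoids shortfall. The factor ranges, the shortfall function, and the trigger values are all invented placeholders.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of deeply uncertain factors (ranges illustrative):
# demand growth (%/yr), drought severity multiplier, restriction effectiveness.
sampler = qmc.LatinHypercube(d=3, seed=9)
lows, highs = [0.5, 0.8, 0.6], [3.0, 1.5, 1.0]
worlds = qmc.scale(sampler.random(n=1000), lows, highs)   # states of the world

def shortfall(policy_trigger, world):
    """Toy stand-in for a utility's shortfall magnitude in one world."""
    growth, drought, restrict = world
    stress = growth * drought * (1 - 0.5 * restrict)
    return max(stress - policy_trigger, 0.0)

# Domain-criterion robustness: fraction of sampled worlds with no shortfall.
for trigger in (1.0, 1.5, 2.0):
    ok = np.mean([shortfall(trigger, w) == 0 for w in worlds])
    print(f"trigger {trigger}: robust in {ok:.0%} of sampled worlds")
```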
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.
2015-12-01
Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as of the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management should be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
Automatic identification of bacterial types using statistical imaging methods
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon
2003-05-01
The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology is enabling an increase in the number of phages used for typing. The statistical methodology presented in this work provides an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.
Terzić, Jelena; Popović, Igor; Stajić, Ana; Tumpa, Anja; Jančić-Stojanović, Biljana
2016-06-05
This paper deals with the development of a hydrophilic interaction liquid chromatography (HILIC) method for the analysis of bilastine and its degradation impurities following the Analytical Quality by Design approach. It is the first time that a method for bilastine and its impurities has been proposed. The main objective was to identify the conditions where an adequate separation in minimal analysis duration could be achieved within a robust region. The critical process parameters with the greatest influence on method performance were defined as the acetonitrile content in the mobile phase, the pH of the aqueous phase, and the ammonium acetate concentration in the aqueous phase. A Box-Behnken design was applied to establish a relationship between critical process parameters and critical quality attributes. The defined mathematical models and Monte Carlo simulations were used to identify the design space. A fractional factorial design was applied for experimental robustness testing, and the method was validated to verify the adequacy of the selected optimal conditions: analytical column Luna® HILIC (100 mm × 4.6 mm, 5 μm particle size); mobile phase consisting of acetonitrile-aqueous phase (50 mM ammonium acetate, pH adjusted to 5.3 with glacial acetic acid) (90.5:9.5, v/v); column temperature 30 °C; mobile phase flow rate 1 mL min⁻¹; detection wavelength 275 nm. Copyright © 2016 Elsevier B.V. All rights reserved.
Problem-Solving Phase Transitions During Team Collaboration.
Wiltshire, Travis J; Butner, Jonathan E; Fiore, Stephen M
2018-01-01
Multiple theories of problem-solving hypothesize that there are distinct qualitative phases exhibited during effective problem-solving. However, limited research has attempted to identify when transitions between phases occur. We integrate theory on collaborative problem-solving (CPS) with dynamical systems theory suggesting that when a system is undergoing a phase transition it should exhibit a peak in entropy and that entropy levels should also relate to team performance. Communications from 40 teams that collaborated on a complex problem were coded for occurrence of problem-solving processes. We applied a sliding window entropy technique to each team's communications and specified criteria for (a) identifying data points that qualify as peaks and (b) determining which peaks were robust. We used multilevel modeling, and provide a qualitative example, to evaluate whether phases exhibit distinct distributions of communication processes. We also tested whether there was a relationship between entropy values at transition points and CPS performance. We found that a proportion of entropy peaks was robust and that the relative occurrence of communication codes varied significantly across phases. Peaks in entropy thus corresponded to qualitative shifts in teams' CPS communications, providing empirical evidence that teams exhibit phase transitions during CPS. Also, lower average levels of entropy at the phase transition points predicted better CPS performance. We specify future directions to improve understanding of phase transitions during CPS, and collaborative cognition, more broadly. Copyright © 2017 Cognitive Science Society, Inc.
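To make the sliding-window entropy idea concrete, the following is a minimal sketch (not the authors' implementation): communications coded into discrete categories, Shannon entropy computed over a moving window, and candidate transition points screened by peak prominence. The window size, prominence threshold, and category labels are illustrative assumptions.

```python
# Sketch: sliding-window Shannon entropy over coded team communications,
# with prominence-based peak screening. All parameter values are assumed.
import numpy as np
from collections import Counter
from scipy.signal import find_peaks

def window_entropy(codes, window=20, step=1):
    """Shannon entropy (bits) of the communication codes in each sliding window."""
    entropies = []
    for start in range(0, len(codes) - window + 1, step):
        counts = Counter(codes[start:start + window])
        p = np.array(list(counts.values()), dtype=float) / window
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

def robust_peaks(entropies, min_prominence=0.5):
    """Indices of entropy peaks exceeding an assumed prominence criterion."""
    peaks, _ = find_peaks(entropies, prominence=min_prominence)
    return peaks

# Example: one team's communications coded into discrete CPS process categories.
rng = np.random.default_rng(0)
codes = list(rng.choice(["knowledge", "plan", "execute", "monitor"], size=200))
H = window_entropy(codes)
print(robust_peaks(H))  # candidate phase-transition points
```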
Prioritization of Stockpile Maintenance with Layered Pareto Fronts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Sarah E.; Anderson-Cook, Christine M.; Lu, Lu
Difficult choices are required for a decision-making process where resources and budgets are increasingly constrained. This study demonstrates a structured decision-making approach using layered Pareto fronts to identify priorities about how to allocate funds between munitions stockpiles based on their estimated reliability, the urgency of needing available units, and the consequences if adequate numbers of units are not available. This case study, while specific to the characteristics of a group of munitions stockpiles, illustrates the general process of structured decision-making based on first identifying appropriate metrics that summarize the important dimensions of the decision, and then objectively eliminating non-contenders from further consideration. The final subjective stage then incorporates user priorities to select the four stockpiles to receive additional maintenance and surveillance funds based on understanding the trade-offs and robustness to various user priorities.
Prioritization of Stockpile Maintenance with Layered Pareto Fronts
Burke, Sarah E.; Anderson-Cook, Christine M.; Lu, Lu; ...
2017-10-11
Difficult choices are required for a decision-making process where resources and budgets are increasingly constrained. This study demonstrates a structured decision-making approach using layered Pareto fronts to identify priorities about how to allocate funds between munitions stockpiles based on their estimated reliability, the urgency of needing available units, and the consequences if adequate numbers of units are not available. This case study, while specific to the characteristics of a group of munitions stockpiles, illustrates the general process of structured decision-making based on first identifying appropriate metrics that summarize the important dimensions of the decision, and then objectively eliminating non-contenders from further consideration. The final subjective stage then incorporates user priorities to select the four stockpiles to receive additional maintenance and surveillance funds based on understanding the trade-offs and robustness to various user priorities.
Robustness surfaces of complex networks
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-01-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared. PMID:25178402
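A minimal sketch of the surface-construction loop described above, using synthetic metric values: at each failure level, a PCA step selects the most informative metric across realizations, roughly in the spirit of the R*-value. The exact normalization and weighting of the paper are not reproduced; all numbers are illustrative.

```python
# Sketch: build a "robustness surface" (realizations x failure levels) from
# synthetic robustness metrics, selecting the most informative metric via PCA.
import numpy as np

def most_informative_metric(X):
    """Index of the metric with the largest loading on the first principal component."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return np.argmax(np.abs(Vt[0]))

rng = np.random.default_rng(1)
n_real, n_metrics = 50, 4
fail_levels = np.linspace(0.05, 0.5, 10)
surface = np.empty((n_real, len(fail_levels)))
for j, f in enumerate(fail_levels):
    # rows = failure realizations, columns = robustness metrics,
    # all normalized so the intact network scores 1
    X = 1.0 - f * (0.5 + rng.random((n_real, n_metrics)))
    k = most_informative_metric(X)
    surface[:, j] = X[:, k]
print(surface.shape)  # the robustness surface: realizations x failure levels
```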
Chung, Eun-Sung; Kim, Yeonjoo
2014-12-15
This study proposed a robust prioritization framework to identify the priorities of treated wastewater (TWW) use locations with consideration of the various uncertainties inherent in climate change scenarios and the decision-making process. First, a fuzzy concept was applied because future forecast precipitation and the associated hydrological impact analysis results displayed significant variances when considering various climate change scenarios and long periods (e.g., 2010-2099). Second, various multi-criteria decision making (MCDM) techniques, including the weighted sum method (WSM), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and fuzzy TOPSIS, were introduced for robust prioritization because different MCDM methods use different decision philosophies. Third, decision making methods under complete uncertainty (DMCU), including maximin, maximax, minimax regret, Hurwicz, and equal likelihood, were used to find robust final rankings. The framework was then applied to a Korean urban watershed. As a result, clearly different rankings appeared between fuzzy TOPSIS and the non-fuzzy MCDMs (e.g., WSM and TOPSIS) because the inter-annual variability in effectiveness was considered only with fuzzy TOPSIS. Robust prioritizations were then derived based on 18 rankings from nine decadal periods of RCP4.5 and RCP8.5. For more robust rankings, five DMCU approaches were applied to the rankings from fuzzy TOPSIS. This framework combining fuzzy TOPSIS with DMCU approaches can render prioritization less controversial among stakeholders under the complete uncertainty of changing environments. Copyright © 2014 Elsevier Ltd. All rights reserved.
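The combination of scenario-wise MCDM rankings with a decision rule under complete uncertainty can be sketched as follows. The TOPSIS implementation is standard; the site scores, weights, and the 18-scenario structure are illustrative stand-ins for the paper's climate-scenario effectiveness data.

```python
# Sketch: TOPSIS scores per scenario, then a maximin (pessimistic) DMCU rule
# over the per-scenario rankings to obtain a robust final ordering.
import numpy as np

def topsis(decision, weights):
    """TOPSIS closeness scores; rows = alternatives, cols = benefit criteria."""
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

rng = np.random.default_rng(2)
n_sites, n_criteria, n_scenarios = 6, 4, 18  # e.g., 9 decadal periods x 2 RCPs
weights = np.full(n_criteria, 1 / n_criteria)
ranks = np.empty((n_scenarios, n_sites), dtype=int)
for s in range(n_scenarios):
    scores = topsis(rng.random((n_sites, n_criteria)), weights)
    ranks[s] = scores.argsort()[::-1].argsort() + 1  # 1 = best under scenario s

# Maximin rule: judge each site by its worst rank across all scenarios.
worst = ranks.max(axis=0)
print("robust order of sites:", np.argsort(worst))
```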
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
Galí, A; García-Montoya, E; Ascaso, M; Pérez-Lozano, P; Ticó, J R; Miñarro, M; Suñé-Negre, J M
2016-09-01
Although tablet coating processes are widely used in the pharmaceutical industry, they often lack adequate robustness. Up-scaling can be challenging, as minor changes in parameters can lead to varying quality results. The aim was to select critical process parameters (CPPs) using retrospective data for a commercial product and to establish a design of experiments (DoE) that would improve the robustness of the coating process. A retrospective analysis was performed on data from 36 commercial batches. Batches were selected based on the quality results generated during batch release, some of which revealed quality deviations concerning the appearance of the coated tablets. The product is already marketed and belongs to the portfolio of a multinational pharmaceutical company. The Statgraphics 5.1 software was used for data processing to determine the critical process parameters and to propose new working ranges. This study confirms that it is possible to determine the critical process parameters and create design spaces based on retrospective data from commercial batches. This type of analysis is thus converted into a tool for optimizing the robustness of existing processes. Our results show that a design space can be established with minimum investment in experiments, since current commercial batch data are processed statistically.
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.
2017-06-01
Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
The Role of Design-of-Experiments in Managing Flow in Compact Air Vehicle Inlets
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Miller, Daniel N.; Gridley, Marvin C.; Agrell, Johan
2003-01-01
It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments methodologies to arrive at microscale secondary flow control array designs that maintain optimal inlet performance over a wide range of the mission variables and to explore how these statistical methods provide a better understanding of the management of flow in compact air vehicle inlets. These statistical design concepts were used to investigate the robustness properties of low unit strength micro-effector arrays. Low unit strength micro-effectors are micro-vanes set at very low angles-of-incidence with very long chord lengths. They were designed to influence the near wall inlet flow over an extended streamwise distance, and their advantage lies in low total pressure loss and high effectiveness in managing engine face distortion. The term robustness is used in this paper in the same sense as it is used in the industrial problem solving community. It refers to minimizing the effects of the hard-to-control factors that influence the development of a product or process. In Robustness Engineering, the effects of the hard-to-control factors are often called noise, and the hard-to-control factors themselves are referred to as the environmental variables or sometimes as the Taguchi noise variables. Hence Robust Optimization refers to minimizing the effects of the environmental or noise variables on the development (design) of a product or process. In the management of flow in compact inlets, the environmental or noise variables can be identified with the mission variables. Therefore this paper formulates a statistical design methodology that minimizes the impact of variations in the mission variables on inlet performance and demonstrates that these statistical design concepts can lead to simpler inlet flow management systems.
Nagashima, Hiroaki; Watari, Akiko; Shinoda, Yasuharu; Okamoto, Hiroshi; Takuma, Shinya
2013-12-01
This case study describes the application of Quality by Design elements to the process of culturing Chinese hamster ovary cells in the production of a monoclonal antibody. All steps in the cell culture process and all process parameters in each step were identified by using a cause-and-effect diagram. Prospective risk assessment using failure mode and effects analysis identified the following four potential critical process parameters in the production culture step: initial viable cell density, culture duration, pH, and temperature. These parameters and lot-to-lot variability in raw material were then evaluated by process characterization utilizing a design of experiments approach consisting of a face-centered central composite design integrated with a full factorial design. Process characterization was conducted using a scaled down model that had been qualified by comparison with large-scale production data. Multivariate regression analysis was used to establish statistical prediction models for performance indicators and quality attributes; with these, we constructed contour plots and conducted Monte Carlo simulation to clarify the design space. The statistical analyses, especially for raw materials, identified set point values, which were most robust with respect to the lot-to-lot variability of raw materials while keeping the product quality within the acceptance criteria. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study.
Wu, Jia; Gensheimer, Michael F; Dong, Xinzhe; Rubin, Daniel L; Napel, Sandy; Diehn, Maximilian; Loo, Billy W; Li, Ruijiang
2016-08-01
To develop an intratumor partitioning framework for identifying high-risk subregions from ¹⁸F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel intratumor partitioning method was developed, based on a 2-stage clustering process: first, at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index (CI) of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types. Copyright © 2016 Elsevier Inc. All rights reserved.
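A minimal sketch of the 2-stage clustering pipeline, with random feature vectors standing in for co-registered PET/CT voxel features. The cluster counts (40 superpixels per tumor, 3 population-level subregions) follow the spirit of the description but are otherwise assumptions.

```python
# Sketch: per-patient k-means over-segmentation into superpixels, then
# population-level hierarchical clustering of superpixel centroids.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(3)
patients = [rng.random((1000, 2)) for _ in range(5)]  # columns: PET SUV, CT HU (scaled)

# Stage 1: over-segment each tumor into superpixels (here: feature-space k-means).
centroids = []
for voxels in patients:
    km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(voxels)
    centroids.append(km.cluster_centers_)
centroids = np.vstack(centroids)

# Stage 2: merge superpixels across the population into 3 subregions.
merge = AgglomerativeClustering(n_clusters=3).fit(centroids)
print(np.bincount(merge.labels_))  # superpixels per population-level subregion
```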
Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jia; Gensheimer, Michael F.; Dong, Xinzhe
2016-08-01
Purpose: To develop an intratumor partitioning framework for identifying high-risk subregions from ¹⁸F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. Methods and Materials: In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel intratumor partitioning method was developed, based on a 2-stage clustering process: first, at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Results: Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index (CI) of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). Conclusion: We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardenas, Ibsen C., E-mail: c.cardenas@utwente.nl; Halman, Johannes I.M., E-mail: J.I.M.Halman@utwente.nl
Uncertainty is virtually unavoidable in environmental impact assessments (EIAs). From the literature related to treating and managing uncertainty, we have identified specific techniques for coping with uncertainty in EIAs. Here, we have focused on basic steps in the decision-making process that take place within an EIA setting. More specifically, we have identified uncertainties involved in each decision-making step and discussed the extent to which these can be treated and managed in the context of an activity or project that may have environmental impacts. To further demonstrate the relevance of the techniques identified, we have examined the extent to which the EIA guidelines currently used in Colombia consider and provide guidance on managing the uncertainty involved in these assessments. Some points that should be considered in order to provide greater robustness in impact assessments in Colombia have been identified. These include the management of stakeholder values, the systematic generation of project options and their associated impacts as well as the associated management actions, and the evaluation of uncertainties and assumptions. We believe that the relevant and specific techniques reported here can be a reference for future evaluations of other EIA guidelines in different countries. - Highlights: • Uncertainty is unavoidable in environmental impact assessments (EIAs). • We have identified specific techniques for treating and managing uncertainty in these assessments. • Points for improvement that should be considered in order to provide greater robustness in EIAs in Colombia have been identified. • The paper provides a substantiated reference for future examinations of EIA guidelines in other countries.
On-Line Robust Modal Stability Prediction using Wavelet Processing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
Chen, Bor-Sen; Lin, Ying-Po
2011-01-01
In the evolutionary process, the random transmission and mutation of genes provide biological diversities for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to keep network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus the environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563
Identification of novel peptides for horse meat speciation in highly processed foodstuffs.
Claydon, Amy J; Grundy, Helen H; Charlton, Adrian J; Romero, M Rosario
2015-01-01
There is a need for robust analytical methods to support enforcement of food labelling legislation. Proteomics is emerging as a complementary methodology to existing tools such as DNA and antibody-based techniques. Here we describe the development of a proteomics strategy for the determination of meat species in highly processed foods. A database of specific peptides for nine relevant animal species was used to enable semi-targeted species determination. This principle was tested for horse meat speciation, and a range of horse-specific peptides were identified as heat stable marker peptides for the detection of low levels of horse meat in mixtures with other species.
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Koedinger, Kenneth R; Corbett, Albert T; Perfetti, Charles
2012-07-01
Despite the accumulation of substantial cognitive science research relevant to education, there remains confusion and controversy in the application of research to educational practice. In support of a more systematic approach, we describe the Knowledge-Learning-Instruction (KLI) framework. KLI promotes the emergence of instructional principles of high potential for generality, while explicitly identifying constraints of and opportunities for detailed analysis of the knowledge students may acquire in courses. Drawing on research across domains of science, math, and language learning, we illustrate the analyses of knowledge, learning, and instructional events that the KLI framework affords. We present a set of three coordinated taxonomies of knowledge, learning, and instruction. For example, we identify three broad classes of learning events (LEs): (a) memory and fluency processes, (b) induction and refinement processes, and (c) understanding and sense-making processes, and we show how these can lead to different knowledge changes and constraints on optimal instructional choices. Copyright © 2012 Cognitive Science Society, Inc.
Robust Learning Control Design for Quantum Unitary Transformations.
Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi
2017-12-01
Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.
ERIC Educational Resources Information Center
Fisher, Anna V.
2011-01-01
Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…
Statistical modeling of SRAM yield performance and circuit variability
NASA Astrophysics Data System (ADS)
Cheng, Qi; Chen, Yijian
2015-03-01
In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of the 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability of failing in the reading/writing operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch, and more robust SRAM circuit design needs to be researched.
Development of an Integrated Metabolomic Profiling Approach for Infectious Diseases Research
Lv, Haitao; Hung, Chia S.; Chaturvedi, Kaveri S.; Hooton, Thomas M.; Henderson, Jeffrey P.
2013-01-01
Metabolomic profiling offers direct insights into the chemical environment and metabolic pathway activities at sites of human disease. During infection, this environment may receive important contributions from both host and pathogen. Here we apply an untargeted metabolomics approach to identify compounds associated with an E. coli urinary tract infection population. Correlative and structural data from minimally processed samples were obtained using an optimized LC-MS platform capable of resolving ∼2300 molecular features. Principal components analysis readily distinguished patient groups, and multiple supervised chemometric analyses resolved robust metabolomic shifts between groups. These analyses revealed nine compounds whose provisional structures suggest candidate infection-associated endocrine, catabolic, and lipid pathways. Several of these metabolite signatures may derive from microbial processing of host metabolites. Overall, this study highlights the ability of metabolomic approaches to directly identify compounds encountered by, and produced from, bacterial pathogens within human hosts. PMID:21922104
Nessen, Merel A; van der Zwaan, Dennis J; Grevers, Sander; Dalebout, Hans; Staats, Martijn; Kok, Esther; Palmblad, Magnus
2016-05-11
Proteomics methodology has seen increased application in food authentication, including tandem mass spectrometry of targeted species-specific peptides in raw, processed, or mixed food products. We have previously described an alternative principle that uses untargeted data acquisition and spectral library matching, essentially spectral counting, to compare and identify samples without the need for genomic sequence information in food species populations. Here, we present an interlaboratory comparison demonstrating how a method based on this principle performs in a realistic context. We also increasingly challenge the method by using data from different types of mass spectrometers, by trying to distinguish closely related and commercially important flatfish, and by analyzing heavily contaminated samples. The method was found to be robust in different laboratories, and 94-97% of the analyzed samples were correctly identified, including all processed and contaminated samples.
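The spectral-counting principle (untargeted acquisition plus library matching) can be sketched as a voting scheme over binned spectra. Cosine similarity and the species names below are illustrative stand-ins for the actual spectral comparison used in the study.

```python
# Sketch: identify a sample's species by counting best-matching library spectra.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(query_spectra, libraries):
    """Vote per query spectrum for the species whose library spectrum matches best."""
    votes = {species: 0 for species in libraries}
    for q in query_spectra:
        best = max(libraries, key=lambda s: max(cosine(q, ref) for ref in libraries[s]))
        votes[best] += 1
    return max(votes, key=votes.get), votes

rng = np.random.default_rng(4)
libraries = {"sole": [rng.random(500) for _ in range(20)],     # binned m/z vectors
             "plaice": [rng.random(500) for _ in range(20)]}
sample = [libraries["sole"][i] + 0.1 * rng.random(500) for i in range(10)]
print(identify(sample, libraries)[0])  # expected: "sole"
```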
An Improved Strong Tracking Cubature Kalman Filter for GPS/INS Integrated Navigation Systems.
Feng, Kaiqiang; Li, Jie; Zhang, Xi; Zhang, Xiaoming; Shen, Chong; Cao, Huiliang; Yang, Yanyu; Liu, Jun
2018-06-12
The cubature Kalman filter (CKF) is widely used in the application of GPS/INS integrated navigation systems. However, its accuracy may decline and the filter may even diverge in the presence of process uncertainties. To solve this problem, a new algorithm named the improved strong tracking seventh-degree spherical simplex-radial cubature Kalman filter (IST-7thSSRCKF) is proposed in this paper. In the proposed algorithm, the effect of process uncertainty is mitigated by using an improved strong tracking Kalman filter technique, in which a hypothesis testing method is adopted to identify the process uncertainty and the prior state estimate covariance in the CKF is further modified online according to the change in vehicle dynamics. In addition, a new seventh-degree spherical simplex-radial rule is employed to further improve the estimation accuracy of the strong tracking cubature Kalman filter. In this way, the proposed comprehensive algorithm integrates the advantages of the 7thSSRCKF's high accuracy and the strong tracking filter's robustness against process uncertainties. The GPS/INS integrated navigation problem with significant dynamic model errors is utilized to validate the performance of the proposed IST-7thSSRCKF. Results demonstrate that the improved strong tracking cubature Kalman filter can achieve higher accuracy than the existing CKF and ST-CKF, and is more robust for the GPS/INS integrated navigation system.
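For orientation, the following sketches the standard third-degree spherical-radial cubature rule at the core of a CKF prediction step; the paper's seventh-degree simplex-radial rule and strong-tracking modification are more elaborate and are not reproduced here. The toy dynamics are an assumption.

```python
# Sketch: third-degree cubature rule (2n equally weighted points at +/- sqrt(n)
# along the covariance square root) and the resulting CKF time update.
import numpy as np

def cubature_points(x, P):
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # n x 2n unit directions
    return x[:, None] + S @ xi                            # n x 2n cubature points

def ckf_predict(x, P, f, Q):
    """Propagate cubature points through dynamics f; recover predicted mean/cov."""
    pts = cubature_points(x, P)
    prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    x_pred = prop.mean(axis=1)
    d = prop - x_pred[:, None]
    return x_pred, d @ d.T / pts.shape[1] + Q

f = lambda s: np.array([s[0] + 0.1 * s[1], 0.99 * s[1]])  # toy INS-like dynamics
x, P = np.zeros(2), np.eye(2)
print(ckf_predict(x, P, f, 0.01 * np.eye(2)))
```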
Definition and characterization of an extended social-affective default network.
Amft, Maren; Bzdok, Danilo; Laird, Angela R; Fox, Peter T; Schilbach, Leonhard; Eickhoff, Simon B
2015-03-01
Recent evidence suggests considerable overlap between the default mode network (DMN) and regions involved in social, affective and introspective processes. We considered these overlapping regions as the social-affective part of the DMN. In this study, we established a robust mapping of the underlying brain network formed by these regions and those strongly connected to them (the extended social-affective default network). We first seeded meta-analytic connectivity modeling and resting-state analyses in the meta-analytically defined DMN regions that showed statistical overlap with regions associated with social and affective processing. Consensus connectivity of each seed was subsequently delineated by a conjunction across both connectivity analyses. We then functionally characterized the ensuing regions and performed several cluster analyses. Among the identified regions, the amygdala/hippocampus formed a cluster associated with emotional processes and memory functions. The ventral striatum, anterior cingulum, subgenual cingulum and ventromedial prefrontal cortex formed a heterogeneous subgroup associated with motivation, reward and cognitive modulation of affect. Posterior cingulum/precuneus and dorsomedial prefrontal cortex were associated with mentalizing, self-reference and autobiographic information. The cluster formed by the temporo-parietal junction and anterior middle temporal sulcus/gyrus was associated with language and social cognition. Taken together, the current work highlights a robustly interconnected network that may be central to introspective, socio-affective, that is, self- and other-related mental processes.
NASA Astrophysics Data System (ADS)
Shauly, Eitan N.; Levi, Shimon; Schwarzband, Ishai; Adan, Ofer; Latinsky, Sergey
2015-04-01
A fully automated silicon-based methodology for systematic analysis of electrical features is shown. The system was developed for process monitoring and electrical variability reduction. A mapping step was created using dedicated structures such as a static-random-access-memory (SRAM) array or a standard cell library, or by using a simple design rule checking run-set. The resulting database was then used as an input for choosing locations for critical dimension scanning electron microscope images and for specific layout parameter extraction, which was then input to SPICE compact modeling simulation. Based on the experimental data, we identified two items that must be checked and monitored using the method described here: the transistor's sensitivity to the distance between the poly end cap and the edge of the active area (AA) due to AA rounding, and SRAM leakage due to an N-well placed too close to a P-well. Building on this example, we used the method extensively for process monitoring and variability analyses of transistor gates having different shapes. In addition, analysis of a large area of a high-density standard cell library was done. Another set of monitoring runs, focused on a high-density SRAM array, is also presented. These examples provided information on the poly and AA layers, using transistor parameters such as leakage current and drive current. We successfully defined "robust" and "less-robust" transistor configurations included in the library and identified unsymmetrical transistors in the SRAM bit-cells. These data were compared to data extracted from the same devices at the end of the line. Another set of analyses was done on samples after Cu M1 etch. Process monitoring information on M1-enclosed contacts was extracted based on contact resistance as feedback. Guidelines for the optimal M1 space for different layout configurations were also extracted. All these data showed the successful in-field implementation of our methodology as a useful process monitoring method.
Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh
2009-02-20
Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOE) is a tool that has been used by many industries for the purpose of optimizing processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge for implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were the coating, detection antibody, and streptavidin-HRP concentrations. The inter-plate factors included incubation times for each step. The objective was to maximize the log signal-to-background ratio (logS/B) of the low standard relative to the blank. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the logS/B prediction was compared against the observed logS/B during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20%, which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for logS/B were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and a hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. The approach allows for the identification of interactions between factors, consistency in optimal parameter determination, and reduced method development time.
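A minimal sketch of the design-and-optimize loop: generate a face-centered CCD in coded units for the three intra-plate factors, fit a quadratic response-surface model to logS/B, and locate the fitted optimum. The simulated response and grid search stand in for the JMP analysis; all numbers are illustrative.

```python
# Sketch: face-centered central composite design (CCD) for 3 factors, quadratic
# response-surface fit, and selection of the predicted-optimal coded settings.
import itertools
import numpy as np

def face_centered_ccd():
    corners = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
    axials = np.vstack([v for i in range(3) for v in (np.eye(3)[i], -np.eye(3)[i])])
    center = np.zeros((1, 3))
    return np.vstack([corners, axials, center])  # 15 runs in coded units [-1, 1]

def quadratic_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
    return np.column_stack(cols)

X = face_centered_ccd()  # factors: coating, detection Ab, streptavidin-HRP (coded)
rng = np.random.default_rng(5)
y = 1.0 - 0.3 * X[:, 0] ** 2 + 0.2 * X[:, 1] + 0.05 * rng.standard_normal(len(X))
beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

grid = np.array(list(itertools.product(np.linspace(-1, 1, 11), repeat=3)))
pred = quadratic_features(grid) @ beta
print("predicted optimum (coded units):", grid[np.argmax(pred)])
```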
Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.
2017-01-01
Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
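The VEPART resampling loop can be sketched as a bootstrap over the sample population. The protocol names, effect sizes, and response model below are purely illustrative assumptions, not the paper's fitted tumor-growth model.

```python
# Sketch: create virtual populations by bootstrap resampling, find the best
# protocol in each, and call a protocol robust if it wins in nearly all of them.
import numpy as np

rng = np.random.default_rng(6)
sample = rng.normal(1.0, 0.2, size=30)           # per-mouse treatment sensitivity
protocols = ["OV-high/DC-low", "OV-low/DC-high", "split-dose"]

def response(protocol, sensitivity):
    # Stand-in for the fitted model's predicted tumor control under each protocol.
    effect = {"OV-high/DC-low": 0.8, "OV-low/DC-high": 1.1, "split-dose": 0.9}
    return effect[protocol] * sensitivity         # higher = better tumor control

wins = {p: 0 for p in protocols}
for _ in range(1000):                             # virtual populations
    virtual = rng.choice(sample, size=len(sample), replace=True)
    best = max(protocols, key=lambda p: response(p, virtual).mean())
    wins[best] += 1
print(wins)  # a protocol winning across almost all virtual populations is robust
```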
Wang, Yilong; Zhang, Yun; Hu, Yunfeng
2016-11-01
One novel microbial esterase, PHE21, was cloned from the genome of Pseudomonas oryzihabitans HUP022, identified from the deep sea of the Western Pacific. PHE21 was heterologously expressed and functionally characterized as a robust esterase exhibiting high resistance to various metal ions, organic solvents, surfactants, and NaCl. Although the two enantiomers of ethyl 3-hydroxybutyrate were previously difficult to resolve enzymatically, we successfully resolved racemic ethyl 3-hydroxybutyrate through direct hydrolysis reactions and generated chiral ethyl (S)-3-hydroxybutyrate using esterase PHE21. After process optimization, the enantiomeric excess, the conversion rate, and the yield of the desired product ethyl (S)-3-hydroxybutyrate reached 99, 65, and 87%, respectively. PHE21 is a novel marine microbial esterase with great potential in asymmetric synthesis as well as in other industries.
A quantitative description for efficient financial markets
NASA Astrophysics Data System (ADS)
Immonen, Eero
2015-09-01
In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. It is a difficult task to perform due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the great number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research a High-Order Sliding-Mode Control is proposed. It is applied to two well known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patients' parameter variability. The controller designed based on the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the hypoglycemia risk, the glucose target is a dynamical profile.
Robust information propagation through noisy neural circuits
Pouget, Alexandre
2017-01-01
Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. PMID:28419098
Incoherence-Mediated Remote Synchronization
NASA Astrophysics Data System (ADS)
Zhang, Liyue; Motter, Adilson E.; Nishikawa, Takashi
2017-04-01
In previously identified forms of remote synchronization between two nodes, the intermediate portion of the network connecting the two nodes is not synchronized with them but generally exhibits some coherent dynamics. Here we report on a network phenomenon we call incoherence-mediated remote synchronization (IMRS), in which two noncontiguous parts of the network are identically synchronized while the dynamics of the intermediate part is statistically and information-theoretically incoherent. We identify mirror symmetry in the network structure as a mechanism allowing for such behavior, and show that IMRS is robust against dynamical noise as well as against parameter changes. IMRS may underlie neuronal information processing and potentially lead to network solutions for encryption key distribution and secure communication.
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
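As an illustration of bounded in-degree approximation, the sketch below greedily selects up to d parents per node. Gaussian mutual information computed from empirical covariances is used here as a stand-in for the directed-information quantities the paper's algorithms actually optimize; the greedy rule and toy data are assumptions.

```python
# Sketch: greedy bounded in-degree parent selection, maximizing a Gaussian
# mutual-information proxy for dependence explained by the chosen parents.
import numpy as np

def gaussian_mi(X, y):
    """MI between predictor set X (n x k) and target y under a Gaussian model."""
    if X.shape[1] == 0:
        return 0.0
    full = np.cov(np.column_stack([X, y]).T)
    return 0.5 * np.log(np.linalg.det(full[:-1, :-1]) * full[-1, -1]
                        / np.linalg.det(full))

def greedy_parents(data, target, d):
    chosen = []
    rest = [i for i in range(data.shape[1]) if i != target]
    while len(chosen) < d and rest:
        gain = {c: gaussian_mi(data[:, chosen + [c]], data[:, target]) for c in rest}
        best = max(gain, key=gain.get)
        chosen.append(best)
        rest.remove(best)
    return chosen

rng = np.random.default_rng(7)
z = rng.standard_normal((2000, 1))
data = np.column_stack([z + 0.1 * rng.standard_normal((2000, 1)),   # driven by z
                        rng.standard_normal((2000, 1)),             # independent
                        z + 0.1 * rng.standard_normal((2000, 1))])  # driven by z
print(greedy_parents(data, target=2, d=1))  # expect [0]: column 0 explains column 2
```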
Liu, Yixin; Zhou, Kai; Lei, Yu
2015-01-01
High temperature gas sensors have long been in demand for combustion process optimization and toxic emissions control, but they usually suffer from poor selectivity. In order to solve this selectivity issue and identify unknown reducing gas species (CO, CH4, and C3H8) and concentrations, a high temperature resistive sensor array data set was built in this study based on 5 reported sensors. As each sensor showed specific responses towards different types of reducing gas at certain concentrations, calibration curves were fitted on this basis, providing a benchmark sensor array response database. A Bayesian inference framework was then utilized to process the sensor array data and build a sample selection program to simultaneously identify gas species and concentration, by formulating a proper likelihood between the measured sensor array response pattern of an unknown gas and each sampled sensor array response pattern in the benchmark database. This algorithm shows good robustness and can accurately identify gas species and predict gas concentration with a small error of less than 10% based on a limited amount of experimental data. These features indicate that the Bayesian probabilistic approach is a simple and efficient way to process sensor array data, which can significantly reduce the required computational overhead and training data.
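A minimal sketch of the inference step: given assumed calibration slopes for five sensors, a Gaussian likelihood is evaluated over a (species, concentration) grid and the posterior mode identifies the gas. All calibration numbers, noise levels, and the linear response model are illustrative.

```python
# Sketch: Bayesian identification of gas species and concentration from a
# 5-sensor array, via a grid posterior under a Gaussian likelihood.
import numpy as np

rng = np.random.default_rng(8)
species = ["CO", "CH4", "C3H8"]
gains = {"CO":   np.array([1.0, 0.4, 0.2, 0.7, 0.1]),  # assumed calibration slopes
         "CH4":  np.array([0.3, 1.1, 0.5, 0.2, 0.6]),
         "C3H8": np.array([0.2, 0.3, 1.2, 0.4, 0.8])}
concs = np.linspace(10, 500, 50)   # ppm grid
sigma = 2.0                        # assumed sensor noise (response units)

truth = gains["CH4"] * 120 + rng.normal(0, sigma, 5)  # unknown gas: CH4 at 120 ppm

# Log-posterior (flat prior) over the (species, concentration) grid.
log_post = np.array([[-np.sum((truth - gains[s] * c) ** 2) / (2 * sigma ** 2)
                      for c in concs] for s in species])
i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
print(species[i], round(concs[j], 1), "ppm")
```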
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.
2017-05-01
The particle filtering techniques have been receiving increasing attention from the hydrologic community due to its ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
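The Gaussian anamorphosis step that bridges the PMCMC posterior and the Gaussian-germ chaos expansion can be sketched as an empirical-CDF transform; the gamma-distributed "posterior" below is a synthetic stand-in for actual PMCMC output.

```python
# Sketch: Gaussian anamorphosis, i.e., mapping posterior samples to standard-
# normal space via their empirical CDF so that a PCE with Gaussian germs applies.
import numpy as np
from scipy.stats import norm, rankdata

def anamorphosis(samples):
    """Empirical-CDF transform of samples to N(0, 1) scores."""
    u = rankdata(samples) / (len(samples) + 1)  # keeps u strictly inside (0, 1)
    return norm.ppf(u)

posterior = np.random.default_rng(9).gamma(2.0, 1.5, size=5000)  # skewed posterior
z = anamorphosis(posterior)
print(round(z.mean(), 3), round(z.std(), 3))  # approximately 0 and 1
```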
Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong
2017-07-01
In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with additional tuning flexibility is first proposed. A subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of freedom for tuning, so that improved tracking control can be achieved; this is important since uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Damage localization by statistical evaluation of signal-processed mode shapes
NASA Astrophysics Data System (ADS)
Ulriksen, M. D.; Damkilde, L.
2015-07-01
Due to their inherent ability to provide structural information on a local level, mode shapes and their derivatives are utilized extensively for structural damage identification. Typically, more or less advanced mathematical methods are implemented to identify damage-induced discontinuities in the spatial mode shape signals, hereby potentially facilitating damage detection and/or localization. However, because these methods rely on distinguishing damage-induced discontinuities from other signal irregularities, their intrinsic deficiency is a high sensitivity towards measurement noise. The present article introduces a damage localization method which, compared to the conventional mode shape-based methods, has greatly enhanced robustness towards measurement noise. The method is based on signal processing of spatial mode shapes by means of continuous wavelet transformation (CWT) and subsequent application of a generalized discrete Teager-Kaiser energy operator (GDTKEO) to identify damage-induced mode shape discontinuities. In order to evaluate whether the identified discontinuities are in fact damage-induced, outlier analysis of principal components of the signal-processed mode shapes is conducted on the basis of T2-statistics. The proposed method is demonstrated in the context of analytical work with a free-vibrating Euler-Bernoulli beam under noisy conditions.
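A minimal sketch of the signal-processing stage, using a plain discrete Teager-Kaiser operator on a Mexican-hat-smoothed mode shape as a stand-in for the full CWT/GDTKEO pipeline; the beam mode, damage kink, and noise level are illustrative.

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def mexican_hat(width, a):
    """Unnormalized Mexican-hat (Ricker) kernel for curvature-like smoothing."""
    t = np.arange(width) - width // 2
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

rng = np.random.default_rng(0)
n = np.linspace(0, 1, 200)
shape = np.sin(np.pi * n)                  # first bending mode of a beam
shape[120:] += 0.05 * (n[120:] - n[120])   # small slope kink at the damage site
shape += rng.normal(0, 1e-4, n.size)       # measurement noise

smoothed = np.convolve(shape, mexican_hat(21, 4.0), mode="same")
energy = tkeo(smoothed)
dev = np.abs(energy - np.median(energy))
print(21 + np.argmax(dev[20:-20]))         # deviation peaks near index 120
```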
Advanced Control Synthesis for Reverse Osmosis Water Desalination Processes.
Phuc, Bui Duc Hong; You, Sam-Sang; Choi, Hyeung-Six; Jeong, Seok-Kwon
2017-11-01
In this study, robust control synthesis has been applied to a reverse osmosis desalination plant whose product water flow and salinity are chosen as the two controlled variables. The reverse osmosis process was selected for study since it typically uses less energy than thermal distillation. The aim of the robust design is to overcome the limitations of classical controllers in dealing with large parametric uncertainties, external disturbances, sensor noise, and unmodeled process dynamics. The analyzed desalination process is modeled as a multi-input multi-output (MIMO) system with varying parameters. The control system is decoupled using a feedforward decoupling method to reduce the interactions between control channels. Both nominal and perturbed reverse osmosis systems have been analyzed using structured singular values for their stability and performance. Simulation results show that the system responses meet all the control requirements against various uncertainties. Finally, the reduced-order controller provides excellent robust performance, achieving decoupling, disturbance attenuation, and noise rejection. It can help to reduce membrane cleanings, increase robustness against uncertainties, and lower the energy consumption for process monitoring.
Functional Groups Based on Leaf Physiology: Are they Spatially and Temporally Robust?
NASA Technical Reports Server (NTRS)
Foster, Tammy E.; Brooks, J. Renee; Quincy, Charles (Technical Monitor)
2002-01-01
The functional grouping hypothesis, which suggests that complexity in function can be simplified by grouping species with similar responses, was tested in the Florida scrub habitat. Functional groups were identified based on how species in fire-maintained Florida scrub function in terms of carbon, water, and nitrogen dynamics. The suite of physiological parameters measured to determine function included both instantaneous gas exchange measurements obtained from photosynthetic light response curves and integrated measures of function. Using cluster analysis, five distinct physiologically based functional groups were identified. Using non-parametric multivariate analyses, it was determined that these five groupings were not altered by plot differences or by the three different management regimes: prescribed burn, mechanically treated and burned, and fire-suppressed. The physiological groupings also remained robust between the two years 1999 and 2000. For these groupings to be of use for scaling ecosystem processes, there needs to be an easy-to-measure morphological indicator of function. Life form classifications were able to depict the physiological groupings more adequately than either specific leaf area or leaf thickness. The ability of life forms to depict the groupings was improved by separating the parasitic Ximenia americana from the shrub category.
Biomining of MoS2 with Peptide-based Smart Biomaterials.
Cetinel, Sibel; Shen, Wei-Zheng; Aminpour, Maral; Bhomkar, Prasanna; Wang, Feng; Borujeny, Elham Rafie; Sharma, Kumakshi; Nayebi, Niloofar; Montemagno, Carlo
2018-02-20
Biomining of valuable metals using a target-specific approach promises increased purification yields and decreased cost. Target specificity can be implemented with proteins/peptides, biological molecules responsible for various structural and functional pathways in living organisms, by virtue of their specific recognition abilities towards both organic and inorganic materials. Phage display libraries are used to identify peptides capable of specifically recognizing and binding organic/inorganic materials of interest with high affinities. Using combinatorial approaches, these molecular recognition elements can be converted into smart hybrid biomaterials and harnessed for biotechnological applications. Herein, we used a commercially available phage-display library to identify peptides with specific binding affinity to molybdenite (MoS2) and used them to decorate magnetic nanoparticles (NPs). These peptide-coupled NPs could capture MoS2 under a variety of environmental conditions. The same batch of NPs could be re-used multiple times to harvest MoS2, clearly suggesting that this hybrid material is robust and recyclable. The advantages of this smart hybrid biomaterial with respect to its MoS2-binding specificity, robust performance under environmentally challenging conditions, and its recyclability suggest its potential application in harvesting MoS2 from tailing ponds and downstream mining processes.
2-DE analysis indicates that Acinetobacter baumannii displays a robust and versatile metabolism
Soares, Nelson C; Cabral, Maria P; Parreira, José R; Gayoso, Carmen; Barba, Maria J; Bou, Germán
2009-01-01
Background Acinetobacter baumannii is a nosocomial pathogen that has been associated with outbreak infections in hospitals. Despite increasing awareness of this bacterium, its proteome remains poorly characterised; however, the complete genome of A. baumannii reference strain ATCC 17978 has recently been sequenced. Here, we have used a 2-DE and MALDI-TOF/TOF approach to characterise the proteome of this strain. Results The membrane and cytoplasmatic protein extracts were analysed separately; these analyses revealed the reproducible presence of 239 membrane and 511 cytoplasmatic protein spots, respectively. MALDI-TOF/TOF characterisation identified a total of 192 protein spots (37 membrane and 155 cytoplasmatic) and revealed that the identified membrane proteins were mainly transport-related, whereas the cytoplasmatic proteins were of diverse nature, although mainly related to metabolic processes. Conclusion This work indicates that A. baumannii has a versatile and robust metabolism and also reveals a number of proteins that may play a key role in the mechanisms of drug resistance and virulence. The data obtained complement earlier reports of the A. baumannii proteome and provide new tools to increase our knowledge of the protein expression profile of this pathogen. PMID:19785748
A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.
Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E
2016-06-21
We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
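A minimal sketch of the scenario-aware probabilistic comparison, assuming lognormal input uncertainty and a 90% resolution cutoff; the impact numbers and scenario definitions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# scenario -> mean impact score of alternative B (A stays at 100)
scenarios = {"low-traffic": 108.0, "high-traffic": 140.0}
THRESHOLD = 0.9                                  # resolution cutoff

for name, mean_b in scenarios.items():
    # lognormal input uncertainty on each alternative's impact score
    impact_a = rng.lognormal(np.log(100.0), 0.10, 10000)
    impact_b = rng.lognormal(np.log(mean_b), 0.10, 10000)
    p_a = np.mean(impact_a < impact_b)
    status = "resolved" if max(p_a, 1 - p_a) >= THRESHOLD else "unresolved"
    print(f"{name}: P(A preferable) = {p_a:.2f} -> {status}")
```

When a scenario of interest comes out unresolved, the methodology's next step is a sensitivity analysis to find the parameters whose refinement would resolve it.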
A frequency-domain estimator for use in adaptive control systems
NASA Technical Reports Server (NTRS)
Lamaire, Richard O.; Valavani, Lena; Athans, Michael; Stein, Gunter
1991-01-01
This paper presents a frequency-domain estimator that can identify both a parametrized nominal model of a plant as well as a frequency-domain bounding function on the modeling error associated with this nominal model. This estimator, which we call a robust estimator, can be used in conjunction with a robust control-law redesign algorithm to form a robust adaptive controller.
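One generic way to picture such an estimator is an empirical transfer-function estimate with a residual-based error envelope, sketched below on a toy first-order plant; this construction is an assumption in the spirit of the abstract, not the authors' exact robust estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
u = rng.standard_normal(n)                 # persistently exciting input
y = np.zeros(n)
for k in range(1, n):                      # toy plant: y[k] = 0.9 y[k-1] + u[k-1]
    y[k] = 0.9 * y[k - 1] + u[k - 1]
y += 0.05 * rng.standard_normal(n)         # measurement noise

U, Y = np.fft.rfft(u), np.fft.rfft(y)
G_hat = Y / U                              # empirical transfer-function estimate
# crude per-bin error envelope: deviation from a locally smoothed estimate
G_smooth = np.convolve(G_hat, np.ones(9) / 9, mode="same")
err_bound = np.abs(G_hat - G_smooth)
print(np.round(np.abs(G_hat[1:4]), 2), np.round(err_bound[1:4], 2))
```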
He, Fei; Fromion, Vincent; Westerhoff, Hans V
2013-11-21
Metabolic control analysis (MCA) and supply-demand theory have led to appreciable understanding of the systems properties of metabolic networks that are subject exclusively to metabolic regulation. Supply-demand theory has not yet considered gene-expression regulation explicitly whilst a variant of MCA, i.e. Hierarchical Control Analysis (HCA), has done so. Existing analyses based on control engineering approaches have not been very explicit about whether metabolic or gene-expression regulation would be involved, but designed different ways in which regulation could be organized, with the potential of causing adaptation to be perfect. This study integrates control engineering and classical MCA augmented with supply-demand theory and HCA. Because gene-expression regulation involves time integration, it is identified as a natural instantiation of the 'integral control' (or near integral control) known in control engineering. This study then focuses on robustness against and adaptation to perturbations of process activities in the network, which could result from environmental perturbations, mutations or slow noise. It is shown however that this type of 'integral control' should rarely be expected to lead to the 'perfect adaptation': although the gene-expression regulation increases the robustness of important metabolite concentrations, it rarely makes them infinitely robust. For perfect adaptation to occur, the protein degradation reactions should be zero order in the concentration of the protein, which may be rare biologically for cells growing steadily. A proposed new framework integrating the methodologies of control engineering and metabolic and hierarchical control analysis, improves the understanding of biological systems that are regulated both metabolically and by gene expression. In particular, the new approach enables one to address the issue whether the intracellular biochemical networks that have been and are being identified by genomics and systems biology, correspond to the 'perfect' regulatory structures designed by control engineering vis-à-vis optimal functions such as robustness. To the extent that they are not, the analyses suggest how they may become so and this in turn should facilitate synthetic biology and metabolic engineering.
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
Kitano, Hiroaki
2004-11-01
Robustness is a ubiquitously observed property of biological systems. It is considered to be a fundamental feature of complex evolvable systems. It is attained by several underlying principles that are universal to both biological organisms and sophisticated engineering systems. Robustness facilitates evolvability and robust traits are often selected by evolution. Such a mutually beneficial process is made possible by specific architectural features observed in robust systems. But there are trade-offs between robustness, fragility, performance and resource demands, which explain system behaviour, including the patterns of failure. Insights into inherent properties of robust systems will provide us with a better understanding of complex diseases and a guiding principle for therapy design.
Vestibular blueprint in early vertebrates.
Straka, Hans; Baker, Robert
2013-11-19
Central vestibular neurons form identifiable subgroups within the boundaries of classically outlined octavolateral nuclei in primitive vertebrates that are distinct from those processing lateral line, electrosensory, and auditory signals. Each vestibular subgroup exhibits a particular morpho-physiological property that receives origin-specific sensory inputs from semicircular canal and otolith organs. Behaviorally characterized phenotypes send discrete axonal projections to extraocular, spinal, and cerebellar targets including other ipsi- and contralateral vestibular nuclei. The anatomical locations of vestibuloocular and vestibulospinal neurons correlate with genetically defined hindbrain compartments that are well conserved throughout vertebrate evolution though some variability exists in fossil and extant vertebrate species. The different vestibular subgroups exhibit a robust sensorimotor signal processing complemented with a high degree of vestibular and visual adaptive plasticity.
Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins
NASA Technical Reports Server (NTRS)
Brenner, Marty; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
A Baseline for the Multivariate Comparison of Resting-State Networks
Allen, Elena A.; Erhardt, Erik B.; Damaraju, Eswar; Gruner, William; Segall, Judith M.; Silva, Rogers F.; Havlicek, Martin; Rachakonda, Srinivas; Fries, Jill; Kalyanam, Ravi; Michael, Andrew M.; Caprihan, Arvind; Turner, Jessica A.; Eichele, Tom; Adelsheim, Steven; Bryan, Angela D.; Bustillo, Juan; Clark, Vincent P.; Feldstein Ewing, Sarah W.; Filbey, Francesca; Ford, Corey C.; Hutchison, Kent; Jung, Rex E.; Kiehl, Kent A.; Kodituwakku, Piyadasa; Komesu, Yuko M.; Mayer, Andrew R.; Pearlson, Godfrey D.; Phillips, John P.; Sadek, Joseph R.; Stevens, Michael; Teuscher, Ursina; Thoma, Robert J.; Calhoun, Vince D.
2011-01-01
As the size of functional and structural MRI datasets expands, it becomes increasingly important to establish a baseline from which diagnostic relevance may be determined, a processing strategy that efficiently prepares data for analysis, and a statistical approach that identifies important effects in a manner that is both robust and reproducible. In this paper, we introduce a multivariate analytic approach that optimizes sensitivity and reduces unnecessary testing. We demonstrate the utility of this mega-analytic approach by identifying the effects of age and gender on the resting-state networks (RSNs) of 603 healthy adolescents and adults (mean age: 23.4 years, range: 12–71 years). Data were collected on the same scanner, preprocessed using an automated analysis pipeline based in SPM, and studied using group independent component analysis. RSNs were identified and evaluated in terms of three primary outcome measures: time course spectral power, spatial map intensity, and functional network connectivity. Results revealed robust effects of age on all three outcome measures, largely indicating decreases in network coherence and connectivity with increasing age. Gender effects were of smaller magnitude but suggested stronger intra-network connectivity in females and more inter-network connectivity in males, particularly with regard to sensorimotor networks. These findings, along with the analysis approach and statistical framework described here, provide a useful baseline for future investigations of brain networks in health and disease. PMID:21442040
Sanaie, Nooshafarin; Cecchini, Douglas; Pieracci, John
2012-10-01
Micro-scale chromatography formats are becoming more routinely used in purification process development because of their ability to rapidly screen a large number of process conditions at a time with minimal material. Given the usual constraints on development timelines and resources, these systems can provide a means to maximize process knowledge and process robustness compared to traditional packed column formats. In this work, a high-throughput, 96-well filter plate format was used in the development of the cation exchange and hydrophobic interaction chromatography steps of a purification process designed to alter the glycoform distribution of a small protein. The significant input parameters affecting process performance were rapidly identified for both steps, and preliminary operating conditions were established. These ranges were verified in a packed chromatography column in order to assess the ability of the 96-well plate to predict packed column performance. In both steps, the 96-well plate format consistently underestimated glycoform-enrichment levels and overestimated product recovery rates compared to the column-based approach. These studies demonstrate that the plate format can be used as a screening tool to narrow the operating ranges prior to further optimization on packed chromatography columns. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Robust geostatistical analysis of spatial data
NASA Astrophysics Data System (ADS)
Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.
2012-04-01
Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, particularly in environmental data sets. Outlying observations may result from errors (e.g., in data transcription) or from local perturbations in the processes that are responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by emissions of local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation, the so-called external drift or trend, the estimation of the spatial dependence of the residual variation, and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Earlier studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation. Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and unsampled locations, and kriging variances. The method has been implemented in an R package. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of the Tarrawarra soil moisture data set [3].
Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer
2006-03-01
The curriculum prepares engineers able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes. Topics include uncertainty modeling for robust control; robust closed-loop stability and performance; robust H-infinity control; and robustness checks using mu-analysis. The curriculum development also drew on controlled feedback (to reduce noise) and statistical group response (to reduce pressure toward conformity) as tools for studying a complex problem.
Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A
2010-12-15
The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.
Multi-wavelength approach towards on-product overlay accuracy and robustness
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John
2018-03-01
The success of the diffraction-based overlay (DBO) technique [1,4,5] in the industry rests not only on its good precision and low tool-induced shift, but also on the measurement accuracy [2] and robustness that DBO can provide. Significant effort has been invested to capitalize on the potential that DBO has to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) that govern how these wavelengths are used are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All of these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method to make overlay measurements immune to process stack variations and also to report health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported by both measurement data and simulations from many product stacks.
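One plausible instance of combining per-wavelength overlay readings is inverse-variance weighting with a simple per-point health KPI, sketched below; the weighting rule and the numbers are assumptions for illustration, not the actual production algorithm.

```python
import numpy as np

wavelengths_nm = np.array([450, 500, 600, 700])
overlay_nm = np.array([1.10, 1.05, 1.40, 1.08])   # per-wavelength overlay reading
var_nm2 = np.array([0.01, 0.01, 0.09, 0.02])      # per-wavelength variance estimate

w = 1.0 / var_nm2                                 # inverse-variance weights
combined = np.sum(w * overlay_nm) / np.sum(w)     # final overlay measurement
health = var_nm2.min() / var_nm2                  # simple per-wavelength health KPI
print(f"combined overlay = {combined:.3f} nm", health.round(2))
```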
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Cao, Gang
2009-11-23
In a feature-based geometrically robust watermarking system, it is a challenging task to detect geometric-invariant regions (GIRs) that can survive a broad range of image processing operations. Instead of the commonly used Harris detector or Mexican hat wavelet method, a more robust corner detector named multi-scale curvature product (MSCP) is adopted to extract salient features in this paper. Based on such features, disk-like GIRs are found in three steps. First, robust edge contours are extracted. Then, MSCP is utilized to detect the centers of the GIRs. Third, characteristic scale selection is performed to calculate the radius of each GIR. A novel sector-shaped partitioning method for the GIRs is designed, which divides a GIR into several sector discs with the help of the most important corner (MIC). The watermark message is then embedded bit by bit in each sector by using Quantization Index Modulation (QIM). The GIRs and the divided sector discs are invariant to geometric transforms, so the watermarking method inherently has high robustness against geometric attacks. Experimental results show that the scheme has better robustness against various image processing operations, including common processing attacks, affine transforms, cropping, and random bending attack (RBA), than previous approaches.
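The embedding step can be sketched with scalar Quantization Index Modulation, where each bit selects one of two interleaved quantizer lattices; the step size and coefficients below are illustrative, and the sector-disc feature extraction is omitted.

```python
import numpy as np

DELTA = 1.0  # quantizer step size (illustrative)

def qim_embed(coeffs, bits):
    """Quantize each coefficient onto the lattice selected by its bit."""
    offset = np.asarray(bits) * DELTA / 2.0
    return DELTA * np.round((coeffs - offset) / DELTA) + offset

def qim_decode(marked):
    """Recover each bit as the lattice lying nearest to the coefficient."""
    d0 = np.abs(marked - DELTA * np.round(marked / DELTA))
    shifted = marked - DELTA / 2.0
    d1 = np.abs(shifted - DELTA * np.round(shifted / DELTA))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
coeffs = rng.uniform(-4, 4, size=8)            # features from one sector disc
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])      # watermark payload
marked = qim_embed(coeffs, bits)
noisy = marked + rng.uniform(-0.2, 0.2, 8)     # mild distortion (< DELTA/4)
assert np.array_equal(qim_decode(noisy), bits)
```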
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessment through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Advancements in robust algorithm formulation for speaker identification of whispered speech
NASA Astrophysics Data System (ADS)
Fan, Xing
Whispered speech is an alternative speech production mode to neutral speech, used intentionally by talkers in natural conversational scenarios to protect privacy and to avoid certain content being overheard or made public. Due to the profound differences between whispered and neutral speech in the production mechanism, and the absence of whispered adaptation data, the performance of speaker identification systems trained with neutral speech degrades significantly. This dissertation therefore focuses on developing a robust closed-set speaker recognition system for whispered speech using no or limited whispered adaptation data from non-target speakers. This dissertation proposes the concept of "high"- and "low"-performance whispered data for the purpose of speaker identification. A variety of acoustic properties are identified that contribute to the quality of whispered data. An acoustic analysis is also conducted to compare the phoneme/speaker dependency of the differences between whispered and neutral data in the feature domain. The observations from this acoustic analysis are new in this area and also serve as guidance for developing robust speaker identification systems for whispered speech. This dissertation further proposes two systems for speaker identification of whispered speech. One system focuses on front-end processing: a two-dimensional feature space is proposed to search for "low"-performance whispered utterances, and separate feature mapping functions are applied to vowels and consonants in order to retain the speaker information shared between whispered and neutral speech. The other system focuses on speech-mode-independent model training: the proposed method generates pseudo-whispered features from neutral features using the statistical information contained in a whispered universal background model (UBM) trained on whispered data collected from non-target speakers. Four modeling methods are proposed for the transformation estimation used to generate the pseudo-whispered features. Both systems demonstrate a significant improvement over the baseline system on the evaluation data. This dissertation has therefore contributed to a scientific understanding of the differences between whispered and neutral speech, as well as improved front-end processing and modeling methods for speaker identification of whispered speech. Such advancements will ultimately help improve the robustness of speech processing systems.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answering a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
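A minimal sketch of the circuit motif the study infers: excitatory firing-rate units coupled to lateral-inhibitory units, integrated to a steady-state code per odorant. The connectivity here is random placeholder data, not the fitted connectome.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i = 10, 4                          # projection (excitatory) and inhibitory units
W_ee = 0.1 * rng.random((n_e, n_e))       # excitatory -> excitatory
W_ei = 0.8 * rng.random((n_e, n_i))       # inhibitory -> excitatory (subtracted below)
W_ie = 0.5 * rng.random((n_i, n_e))       # excitatory -> inhibitory

def simulate(odor_input, steps=300, dt=0.01, tau=0.05):
    """Euler-integrate rectified firing-rate dynamics to a steady-state code."""
    r_e, r_i = np.zeros(n_e), np.zeros(n_i)
    for _ in range(steps):
        drive_e = W_ee @ r_e - W_ei @ r_i + odor_input
        drive_i = W_ie @ r_e
        r_e += dt / tau * (-r_e + np.maximum(drive_e, 0.0))
        r_i += dt / tau * (-r_i + np.maximum(drive_i, 0.0))
    return r_e

code_a = simulate(rng.random(n_e))
code_b = simulate(rng.random(n_e))
print(np.corrcoef(code_a, code_b)[0, 1])  # distinct odors yield distinct codes
```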
A multipurpose model of Hermes-Columbus docking mechanism
NASA Technical Reports Server (NTRS)
Gonzalez-Vallejo, J. J.; Fehse, W.; Tobias, A.
1992-01-01
One of the foreseen missions of the HERMES space vehicle is the servicing of the Columbus Free Flying Laboratory (MTFF). Docking between the two spacecraft is a critical operation in which the Docking Mechanism (DM) has a major role. In order to analyze and assess the robustness of initially selected concepts, and to identify suitable implementation solutions through the investigation of the main parameters involved in the docking functions, a multipurpose model of the DM was developed and tested. This paper describes the main design features as well as the process of calibrating and testing the model.
Targeted exploration and analysis of large cross-platform human transcriptomic compendia
Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.
2016-01-01
We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801
CARDS: A blueprint and environment for domain-specific software reuse
NASA Technical Reports Server (NTRS)
Wallnau, Kurt C.; Solderitsch, Anne Costa; Smotherman, Catherine
1992-01-01
CARDS (Central Archive for Reusable Defense Software) exploits advances in domain analysis and domain modeling to identify, specify, develop, archive, retrieve, understand, and reuse domain-specific software components. An important element of CARDS is to provide visibility into the domain model artifacts produced by, and services provided by, commercial computer-aided software engineering (CASE) technology. The use of commercial CASE technology is important to provide rich, robust support for the varied roles involved in a reuse process. We refer to this kind of use of knowledge representation systems as supporting 'knowledge-based integration.'
NASA Astrophysics Data System (ADS)
Mannar, Kamal; Ceglarek, Darek
2005-11-01
Customer feedback in the form of warranty/field performance is an important and direct indicator of the quality and robustness of a product. Linking warranty information to manufacturing measurements can identify key design parameters and process variables (DPs and PVs) that are related to warranty failures. Warranty data have traditionally been used in reliability studies to determine failure distributions and warranty cost. This paper proposes a novel Fault Region Localization (FRL) methodology to map warranty failures to manufacturing measurements (and hence to DPs/PVs) in order to diagnose warranty failures and perform tolerance re-evaluation. The FRL methodology consists of two parts: 1. Identifying relations between warranty failures and DPs and PVs using the Generalized Rough Set (GRS) method. GRS is a supervised learning technique that identifies the specific DPs and PVs related to given warranty failures and then determines the corresponding Warranty Fault Regions (WFR), Normal Region (NR), and Boundary Region (BND). GRS extends the traditional Rough Set method by allowing for the noise and uncertainty of warranty data classes. 2. Re-evaluating the original tolerances of DPs/PVs based on the identified WFR and BND regions. The FRL methodology is illustrated using case studies based on two warranty failures from the electronics industry.
Robustness of the Process of Nucleoid Exclusion of Protein Aggregates in Escherichia coli
Neeli-Venkata, Ramakanth; Martikainen, Antti; Gupta, Abhishekh; Gonçalves, Nadia; Fonseca, Jose
2016-01-01
Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. Combined with cell divisions, this generates heterogeneous aggregate distributions in subsequent cell generations. We studied the robustness of this process with differing medium richness and antibiotics stress, which affect nucleoid size, using multimodal, time-lapse microscopy of live cells expressing both a fluorescently tagged chaperone (IbpA), which identifies in vivo the location of aggregates, and HupA-mCherry, a fluorescent variant of a nucleoid-associated protein. We find that the relative sizes of the nucleoid's major and minor axes change widely, in a positively correlated fashion, with medium richness and antibiotic stress. The aggregate's distribution along the major cell axis also changes between conditions and in agreement with the nucleoid exclusion phenomenon. Consequently, the fraction of aggregates at the midcell region prior to cell division differs between conditions, which will affect the degree of asymmetries in the partitioning of aggregates between cells of future generations. Finally, from the location of the peak of anisotropy in the aggregate displacement distribution, the nucleoid relative size, and the spatiotemporal aggregate distribution, we find that the exclusion of detectable aggregates from midcell is most pronounced in cells with mid-sized nucleoids, which are most common under optimal conditions. We conclude that the aggregate management mechanisms of E. coli are significantly robust but are not immune to stresses due to the tangible effect that these have on nucleoid size. IMPORTANCE Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. From live single-cell microscopy studies of the robustness of this process to various stresses known to affect nucleoid size, we find that nucleoid size and aggregate preferential locations change concordantly between conditions. Also, the degree of influence of the nucleoid on aggregate positioning differs between conditions, causing aggregate numbers at midcell to differ in cell division events, which will affect the degree of asymmetries in the partitioning of aggregates between cells of future generations. Finally, we find that aggregate segregation to the cell poles is most pronounced in cells with mid-sized nucleoids. We conclude that the energy-free process of the midcell exclusion of aggregates partially loses effectiveness under stressful conditions. PMID:26728194
Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis
Dijkstra, Annereinou R.; Alkema, Wynand; Starrenburg, Marjo J. C.; van Hijum, Sacha A. F. T.; Bron, Peter A.
2016-01-01
Recently, we demonstrated that fermentation conditions have a strong impact on subsequent survival of Lactococcus lactis strain MG1363 during heat and oxidative stress, two important parameters during spray drying. Moreover, employment of a transcriptome-phenotype matching approach revealed groups of genes associated with robustness towards heat and/or oxidative stress. To investigate whether other strains have similar or distinct transcriptome signatures for robustness, we applied an identical transcriptome-robustness phenotype matching approach to the L. lactis strains IL1403, KF147 and SK11, which have previously been demonstrated to display highly diverse robustness phenotypes. These strains were subjected to an identical fermentation regime as was performed earlier for strain MG1363, consisting of twelve conditions varying in the level of salt and/or oxygen, as well as fermentation temperature and pH. In the exponential phase of growth, cells were harvested for transcriptome analysis and assessment of heat and oxidative stress survival phenotypes. The variation in fermentation conditions resulted in differences in heat and oxidative stress survival of up to five 10-log units. Effects of the fermentation conditions on stress survival of the L. lactis strains were typically strain-dependent, although the fermentation conditions had mainly similar effects on the growth characteristics of the different strains. By associating the transcriptomes with the robustness phenotypes, highly strain-specific transcriptome signatures for robustness towards heat and oxidative stress were identified, indicating that multiple mechanisms exist to increase robustness and that, as a consequence, the robustness of each strain requires individual optimization. However, a relatively small overlap in the transcriptome responses of the strains was also identified, and this generic transcriptome signature included genes previously associated with stress (ctsR and lplL) and novel genes, including nanE and genes encoding transport proteins. The transcript levels of these genes can function as indicators of robustness and could aid in the selection of fermentation parameters, potentially resulting in improved robustness during spray drying. PMID:27973578
Fernandez-Leon, Jose A; Acosta, Gerardo G; Rozenfeld, Alejandro
2014-10-01
Researchers in diverse fields, such as neuroscience, systems biology and autonomous robotics, have been intrigued by the origin of and mechanisms for biological robustness. Darwinian evolution, in general, has suggested that adaptive mechanisms, as a way of reaching robustness, could evolve by natural selection acting successively on numerous heritable variations. However, is this understanding enough to explain how biological systems remain robust during their interactions with the surroundings? Here, we describe selected studies of bio-inspired systems that show behavioral robustness. From neurorobotics, cognitive, self-organizing and artificial immune system perspectives, our discussions focus mainly on how robust behaviors evolve or emerge in systems that have the capacity to interact with their surroundings. These descriptions are twofold. First, we introduce examples from autonomous robotics to illustrate how the process of designing robust control can be idealized in complex environments for autonomous navigation by terrain and underwater vehicles. We also include descriptions of bio-inspired self-organizing systems. Then, we introduce other studies that contextualize experimental evolution with simulated organisms and physical robots to exemplify how the process of natural selection can lead to the evolution of robustness by means of adaptive behaviors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Kojima, Kyoko; Bowersock, Gregory J; Kojima, Chinatsu; Klug, Christopher A; Grizzle, William E; Mobley, James A
2012-11-01
A number of reports have recently emerged with a focus on the extraction of proteins from formalin-fixed paraffin-embedded (FFPE) tissues for MS analysis; however, reproducibility and robustness as compared to flash-frozen controls are generally overlooked. The goal of this study was to identify and validate a practical and highly robust approach for the proteomic analysis of FFPE tissues. FFPE and matched frozen pancreatic tissues obtained from mice (n = 8) were analyzed using 1D-nanoLC-MS(MS)(2) following work-up with commercially available kits. The chosen approach for FFPE tissues was found to be highly comparable to that for frozen tissues. In addition, the total number of unique peptides identified in the two groups was highly similar, with 958 identified for FFPE and 1070 identified for frozen, with protein identifications that corresponded by approximately 80%. This approach was then applied to archived human FFPE pancreatic cancer specimens (n = 11) as compared to uninvolved tissues (n = 8), where 47 potential pancreatic ductal adenocarcinoma markers were identified as significantly increased, of which 28 were previously reported. Further, these proteins share strongly overlapping pathway associations with pancreatic cancer that include estrogen receptor α. Together, these data support the validation of an approach for the proteomic analysis of FFPE tissues that is straightforward and highly robust, and that can also be effectively applied toward translational studies of disease. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Robust functional regression model for marginal mean and subject-specific inferences.
Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo
2017-01-01
We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.
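The robustness mechanism can be sketched with Student-t errors fitted by iteratively reweighted least squares, an EM-style stand-in for the heavy-tailed-process machinery in the paper; the degrees of freedom and data below are illustrative.

```python
import numpy as np

def t_regression(X, y, nu=4.0, iters=50):
    """Linear regression with Student-t errors via IRLS (EM-style updates)."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)   # downweight heavy-tailed residuals
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        sigma2 = np.sum(w * r ** 2) / n
    return beta

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.random(100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
y[:5] += 10.0                                     # gross outliers
print(t_regression(X, y).round(2))                # close to [1.0, 2.0]
```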
Robust non-parametric one-sample tests for the analysis of recurrent events.
Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia
2010-12-30
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
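A minimal sketch of the unweighted statistic, assuming a Poisson-motivated scale and a normal approximation; the weight functions and the robust and stratified variants from the paper are omitted, and the data are invented.

```python
import numpy as np
from scipy.stats import norm

def one_sample_recurrent_test(events, followup, ref_rate):
    """events[i]: recurrences for subject i; followup[i]: observation time."""
    observed = np.sum(events)
    expected = ref_rate * np.sum(followup)
    z = (observed - expected) / np.sqrt(expected)   # standardized distance
    return z, 2 * norm.sf(abs(z))                   # two-sided p-value

events = np.array([0, 2, 1, 0, 3, 1, 0, 0, 2, 1])   # e.g., infections per child
followup = np.full(10, 1.5)                          # years on study
print(one_sample_recurrent_test(events, followup, ref_rate=0.4))
```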
Discovery and problem solving: Triangulation as a weak heuristic
NASA Technical Reports Server (NTRS)
Rochowiak, Daniel
1987-01-01
Recently the artificial intelligence community has turned its attention to the process of discovery and found that the history of science is a fertile source for what Darden has called compiled hindsight. Such hindsight generates weak heuristics for discovery that do not guarantee that discoveries will be made but do have proven worth in leading to discoveries. Triangulation is one such heuristic that is grounded in historical hindsight. This heuristic is explored within the general framework of the BACON, GLAUBER, STAHL, DALTON, and SUTTON programs. In triangulation different bases of information are compared in an effort to identify gaps between the bases. Thus, assuming that the bases of information are relevantly related, the gaps that are identified should be good locations for discovery and robust analysis.
MSL's Widgets: Adding Robustness to Martian Sample Acquisition, Handling, and Processing
NASA Technical Reports Server (NTRS)
Roumeliotis, Chris; Kennedy, Brett; Lin, Justin; DeGrosse, Patrick; Cady, Ian; Onufer, Nicholas; Sigel, Deborah; Jandura, Louise; Anderson, Robert; Katz, Ira;
2013-01-01
Mars Science Laboratory's (MSL) Sample Acquisition, Sample Processing and Handling (SA-SPaH) system is one of the most ambitious terrain interaction and manipulation systems ever built and successfully used outside of planet Earth. Mars has a ruthless environment that has surprised many who have tried to explore there. The robustness widget program was implemented by the MSL project to help ensure the SA-SPaH system would be robust enough to withstand the surprises of this ruthless Martian environment. The robustness widget program was carried out under extreme schedule pressure and responsibility, and was accomplished with resounding success. This paper offers a behind-the-scenes look at MSL's robustness widgets: the particle fun zone, the wind guards, and the portioner pokers.
Sulfite pretreatment (SPORL) for robust enzymatic saccharification of spruce and red pine
J.Y. Zhu; X.J. Pan; G.S. Wang; R. Gleisner
2009-01-01
This study established a novel process using sulfite pretreatment to overcome recalcitrance of lignocellulose (SPORL) for robust and efficient bioconversion of softwoods. The process consists of sulfite treatment of wood chips under acidic conditions followed by mechanical size reduction using disk refining. The results indicated that after the SPORL pretreatment of...
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
To cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error, and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method together with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control ensuring that the steady-state tracking error converges rapidly. An application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
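A minimal sketch of the underlying idea of combining iterative learning control with within-batch feedback, assuming a toy first-order plant and fixed illustrative gains rather than the LMI-derived gains of the paper:

    import numpy as np

    def run_batch(u_ff, ref, a=0.9, b=0.5, Kp=0.8):
        # One batch of a toy first-order plant x+ = a*x + b*u with
        # within-batch proportional feedback on the tracking error.
        x, y = 0.0, np.zeros_like(ref)
        for k in range(len(ref)):
            u = u_ff[k] + Kp * (ref[k] - x)   # feedforward + feedback
            x = a * x + b * u
            y[k] = x
        return y

    ref = np.ones(50)
    u_ff = np.zeros(50)                        # learned feedforward input
    L = 0.6                                    # illustrative learning gain
    for iteration in range(20):
        y = run_batch(u_ff, ref)
        u_ff = u_ff + L * (ref - y)            # ILC update between batches
    print(float(np.abs(ref - y).max()))        # tracking error shrinks batch by batch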
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification (FDI) system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
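The generalized parity-space idea behind the residual generation can be illustrated numerically; the state-space model below is a toy assumption, and the residual is exact only in the noise-free, fault-free case:

    import numpy as np

    # Toy model x(k+1) = A x(k) + B u(k), y(k) = C x(k). Stacking s+1 outputs
    # gives Y = O x + T U; any v with v O = 0 yields a residual
    # r = v (Y - T U) that is zero for the fault-free, noise-free model.
    A = np.array([[0.9, 1.0], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    s = 2
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
    T = np.zeros((s + 1, s + 1))
    for i in range(1, s + 1):
        for j in range(i):                     # impulse-response (Toeplitz) terms
            T[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B)[0, 0]
    v = np.linalg.svd(O.T)[2][-1]              # parity vector: v @ O = 0

    rng = np.random.default_rng(0)
    x = np.array([0.5, -0.2])
    ys, us = [], []
    for k in range(s + 1):                     # simulate a fault-free window
        u = rng.normal()
        ys.append((C @ x)[0])
        us.append(u)
        x = A @ x + B[:, 0] * u
    r = float(v @ (np.array(ys) - T @ np.array(us)))
    print(round(r, 12))                        # ~0 in the fault-free case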
NASA Astrophysics Data System (ADS)
Hsieh, Fu-Shiung
2011-03-01
Design of robust supervisory controllers for manufacturing systems with unreliable resources has received significant attention recently. Robustness analysis provides an alternative way to analyse a perturbed system so as to quickly respond to resource failures. Although we have analysed the robustness properties of several subclasses of ordinary Petri nets (PNs), analysis for non-ordinary PNs has not been done. Non-ordinary PNs have weighted arcs and have the advantage of compactly modelling operations requiring multiple parts or resources. In this article, we consider a class of flexible assembly/disassembly manufacturing systems and propose a non-ordinary flexible assembly/disassembly Petri net (NFADPN) model for this class of systems. As the class of flexible assembly/disassembly manufacturing systems can be regarded as the integration and interactions of a set of assembly/disassembly subprocesses, a bottom-up approach is adopted in this article to construct the NFADPN models. Due to the routing flexibility in NFADPN, there may exist different ways to accomplish the tasks. To characterise these, we propose the concept of completely connected subprocesses. As long as there exists a set of completely connected subprocesses for a certain type of product, the production of that type of product can still be maintained without requiring the whole NFADPN to be live. To take advantage of the alternative routes without enforcing liveness for the whole system, we generalise the previously proposed concept of persistent production to NFADPN. We propose a condition for persistent production based on the concept of completely connected subprocesses. We extend robustness analysis to NFADPN by exploiting its structure. We identify several patterns of resource failures and characterise the conditions to maintain operation in the presence of resource failures.
Automated robust registration of grossly misregistered whole-slide images with varying stains
NASA Astrophysics Data System (ADS)
Litjens, G.; Safferling, K.; Grabe, N.
2016-03-01
Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slide images to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.
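A hedged sketch of the coarse-alignment step only, using plain optical-density thresholding and centroid matching in place of the paper's HSD conversion, connected component analysis, NMI template matching, and affine refinement; function names and the threshold are illustrative:

    import numpy as np

    def tissue_mask(rgb, od_threshold=0.15):
        # Optical density is low for bright (background) pixels and high for
        # stained tissue, largely regardless of the stain color.
        od = -np.log10(np.maximum(rgb.astype(float), 1.0) / 255.0)
        return od.mean(axis=2) > od_threshold

    def coarse_translation(fixed_rgb, moving_rgb):
        # Align the tissue masks by their centroids; returns (dy, dx) to
        # apply to the moving slide before any finer registration.
        def centroid(mask):
            ys, xs = np.nonzero(mask)
            return np.array([ys.mean(), xs.mean()])
        return centroid(tissue_mask(fixed_rgb)) - centroid(tissue_mask(moving_rgb))

    # Toy usage: a bright slide with a dark blob shifted between "stains".
    fixed = np.full((100, 100, 3), 240, dtype=np.uint8)
    moving = fixed.copy()
    fixed[30:50, 30:50] = 90
    moving[40:60, 45:65] = 120                 # same tissue, different stain level
    print(coarse_translation(fixed, moving))   # approximately [-10, -15]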
Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds
NASA Astrophysics Data System (ADS)
Roynard, X.; Deschaud, J.-E.; Goulette, F.
2016-06-01
Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The appeal of working on images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest classifier for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.
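As a rough stand-in for the octree-based region growing (the descriptor extraction and Random Forest classification are omitted), the following sketch voxelizes a point cloud and splits it into 26-connected components; the voxel size and the toy cloud are assumptions:

    import numpy as np
    from scipy import ndimage

    def segment(points, voxel=0.3):
        # Snap points to a 3D grid and label connected occupied voxels.
        ijk = np.floor(points / voxel).astype(int)
        ijk -= ijk.min(axis=0)
        grid = np.zeros(ijk.max(axis=0) + 1, dtype=bool)
        grid[tuple(ijk.T)] = True
        structure = ndimage.generate_binary_structure(3, 3)   # 26-connectivity
        labels, n = ndimage.label(grid, structure=structure)
        return labels[tuple(ijk.T)], n         # per-point segment ids, segment count

    rng = np.random.default_rng(0)
    cloud = np.vstack([rng.normal([0, 0, 0], 0.2, (100, 3)),   # object 1
                       rng.normal([5, 0, 0], 0.2, (100, 3))])  # object 2
    ids, n = segment(cloud)
    print(n)                                   # expect 2 well-separated segments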
Strauss, Daniel; Goldstein, Joshua; Hongo-Hirasaki, Tomoko; Yokoyama, Yoshiro; Hirotomi, Naokatsu; Miyabayashi, Tomoyuki; Vacante, Dominick
2017-09-01
Virus filtration provides robust removal of potential viral contaminants and is a critical step during the manufacture of biotherapeutic products. However, recent studies have shown that small virus removal can be impacted by low operating pressure and depressurization. To better understand the impact of these conditions and to define robust virus filtration design spaces, we conducted multivariate analyses to evaluate parvovirus removal over wide ranges of operating pressure, solution pH, and conductivity for three mAb products on Planova™ BioEX and 20N filters. Pressure ranges from 0.69 to 3.43 bar (10.0-49.7 psi) for Planova BioEX filters and from 0.50 to 1.10 bar (7.3-16.0 psi) for Planova 20N filters were identified as ranges over which effective removal of parvovirus is achieved for different products over wide ranges of pH and conductivity. Viral clearance data at operating pressures below the robust pressure range suggest that effective parvovirus removal can be achieved at low pressure, but that Minute virus of mice (MVM) logarithmic reduction value (LRV) results may be impacted by product and solution conditions. These results establish robust design spaces for Planova BioEX and 20N filters where high parvovirus clearance can be expected for most antibody products and provide further understanding of viral clearance mechanisms. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1294-1302, 2017. © 2017 American Institute of Chemical Engineers.
Cross-reference identification within a PDF document
NASA Astrophysics Data System (ADS)
Li, Sida; Gao, Liangcai; Tang, Zhi; Yu, Yinyan
2015-01-01
Cross-references, such as footnotes, endnotes, figure/table captions, and references, are a common and useful type of page element that further explains the corresponding entities in the target document. In this paper, we focus on cross-reference identification in a PDF document, and present a robust method as a case study of identifying footnotes and figure references. The proposed method first extracts footnotes and figure captions, and then matches them with their corresponding references within a document. A number of novel features within a PDF document, i.e., page layout, font information, and lexical and linguistic features of cross-references, are utilized for the task. Clustering is adopted to handle the features that are stable in one document but vary across different kinds of documents, so that the identification process adapts to document types. In addition, this method leverages results from the matching process to provide feedback to the identification process and further improve the algorithm's accuracy. Preliminary experiments on real document sets show that the proposed method is promising for identifying cross-references in a PDF document.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgor, R.J.; Feehery, W.F.; Tolsma, J.E.
The batch process development problem serves as a good candidate to guide the development of process modeling environments. It demonstrates that very robust numerical techniques are required within an environment that can collect, organize, and maintain the data and models required to address the batch process development problem. This paper focuses on improving the robustness and efficiency of the numerical algorithms required in such a modeling environment through the development of hybrid numerical and symbolic strategies.
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, along with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, was developed and tested. The evaluation results show high robustness and high reliability for various motion analysis tasks in technical and biomechanical applications.
NASA Astrophysics Data System (ADS)
Dos Santos Ferreira, Olavio; Sadat Gousheh, Reza; Visser, Bart; Lie, Kenrick; Teuwen, Rachel; Izikson, Pavel; Grzela, Grzegorz; Mokaberi, Babak; Zhou, Steve; Smith, Justin; Husain, Danish; Mandoy, Ram S.; Olvera, Raul
2018-03-01
The ever-increasing need for tighter on-product overlay (OPO), as well as enhanced accuracy in overlay metrology and methodology, is driving the semiconductor industry's technologists to innovate new approaches to OPO measurements. In the case of High Volume Manufacturing (HVM) fabs, it is often critical to strive for both accuracy and robustness. Robustness, in particular, can be challenging in metrology since overlay targets can be impacted by the proximity of other structures next to the overlay target (asymmetric effects), as well as by symmetric stack changes such as photoresist height variations. Both symmetric and asymmetric contributors have an impact on robustness. Furthermore, tweaking or optimizing wafer processing parameters for maximum yield may have an adverse effect on physical target integrity. As a result, measuring and monitoring physical changes or process abnormalities/artefacts in terms of new Key Performance Indicators (KPIs) is crucial for the end goal of minimizing true in-die overlay of the integrated circuits (ICs). IC manufacturing fabs often relied on CD-SEM in the past to capture true in-die overlay. Due to the destructive and intrusive nature of CD-SEMs on certain materials, it is desirable to characterize asymmetry effects for overlay targets via inline KPIs utilizing YieldStar (YS) metrology tools. These KPIs can also be integrated as part of μDBO target evaluation and selection for the final recipe flow. In this publication, the Holistic Metrology Qualification (HMQ) flow was extended to account for process-induced (asymmetric) effects such as Grating Imbalance (GI) and Bottom Grating Asymmetry (BGA). Local GI typically contributes to the intrafield OPO, whereas BGA typically impacts the interfield OPO, predominantly at the wafer edge. Stack height variations highly impact overlay metrology accuracy, in particular in the case of a multi-layer Litho-Etch Litho-Etch (LELE) overlay control scheme. Introducing a KPI check for the GI impact on overlay (in nm) quantifies the grating imbalance impact on overlay, whereas optimizing for accuracy using self-reference captures the bottom grating asymmetry effect. Measuring BGA after each process step before exposure of the top grating helps to identify which specific step introduces the asymmetry in the bottom grating. By applying this set of KPIs to a BEOL LELE overlay scheme, we can enhance the robustness of recipe selection and target selection. Furthermore, these KPIs can be utilized to highlight process and equipment abnormalities. In this work, we also quantified OPO results with a self-contained methodology called the Triangle Method. This method can be utilized for LELE layers with a common target and reference. This allows general μDBO accuracy to be validated, hence reducing the need for CD-SEM verification.
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on). Optimization methods have been widely applied in sheet metal forming, and proper design methods to reduce time and costs have to be developed, mostly based on computer-aided procedures. At the same time, variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping processes. The Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate that the proposed method is feasible for multi-response robust design.
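A simplified sketch of the Kriging-plus-Monte-Carlo idea, with an invented quadratic response standing in for the finite element simulations and plain Monte Carlo replacing the paper's adaptive importance sampling:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Fit a Kriging surrogate of a defect measure y over a process parameter x
    # and a noise factor z, then score each candidate x by Monte Carlo over z.
    rng = np.random.default_rng(0)
    X = rng.uniform([0, -1], [1, 1], size=(40, 2))       # (x, z) training design
    y = (X[:, 0] - 0.6) ** 2 + 0.3 * X[:, 1] * X[:, 0] + rng.normal(0, 0.01, 40)
    gp = GaussianProcessRegressor(kernel=RBF([0.2, 0.5]), alpha=1e-4).fit(X, y)

    xs = np.linspace(0, 1, 101)
    z_mc = rng.normal(0, 0.5, 200).clip(-1, 1)            # noise-factor samples
    scores = []
    for x in xs:
        preds = gp.predict(np.column_stack([np.full_like(z_mc, x), z_mc]))
        scores.append(preds.mean() + 2.0 * preds.std())   # mean + 2 sigma criterion
    print(float(xs[int(np.argmin(scores))]))              # robust setting of x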
Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration
NASA Astrophysics Data System (ADS)
Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.
2017-12-01
Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because models combine many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to determine exactly why a model produces the results it does or to identify which model assumptions are key. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We argue that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g., the Predictive Ecosystem Analyzer, PEcAn) and statistical methods (e.g., sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
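The automated combination of alternative process representations can be sketched in a few lines; the two process hypotheses and their parameter values below are toy stand-ins, not MAAT's actual model set:

    from itertools import product

    # Alternative mathematical representations of two processes are combined
    # exhaustively and run as one ensemble, mirroring the combination idea.
    photosynthesis = {
        "michaelis": lambda light: 10 * light / (light + 5),
        "linear_sat": lambda light: min(10.0, 2 * light),
    }
    respiration = {
        "q10": lambda temp: 1.0 * 2.0 ** ((temp - 20) / 10),
        "linear": lambda temp: 0.5 + 0.05 * temp,
    }

    for (p_name, p), (r_name, r) in product(photosynthesis.items(), respiration.items()):
        npp = p(8.0) - r(25.0)                 # net flux under one forcing scenario
        print(f"{p_name:12s} x {r_name:7s} -> NPP = {npp:.2f}")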
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
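One member of this class of robust estimators can be sketched for a Rasch model with a Huber-type weight on the standardized residual; the item difficulties, responses, and tuning constant are illustrative:

    import numpy as np
    from scipy.optimize import brentq

    def p_correct(theta, b):
        return 1 / (1 + np.exp(-(theta - b)))

    def robust_score(theta, x, b, k=1.0):
        # Weighted ML score equation: large residuals (e.g., lucky guesses)
        # are down-weighted instead of pulling the ability estimate.
        p = p_correct(theta, b)
        r = (x - p) / np.sqrt(p * (1 - p))     # standardized residual
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))
        return np.sum(w * (x - p))

    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.5])  # item difficulties
    x = np.array([1, 1, 0, 0, 1])              # last response looks like a guess
    theta_hat = brentq(lambda t: robust_score(t, x, b), -6, 6)
    print(round(theta_hat, 3))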
The Stochastic Evolutionary Game for a Population of Biological Networks Under Natural Selection
Chen, Bor-Sen; Ho, Shih-Ju
2014-01-01
In this study, a population of evolutionary biological networks is described by a stochastic dynamic system with intrinsic random parameter fluctuations due to genetic variations and external disturbances caused by environmental changes in the evolutionary process. Since information on environmental changes is unavailable and their occurrence is unpredictable, they can be considered as a game player with the potential to destroy phenotypic stability. The biological network needs to develop an evolutionary strategy to improve phenotypic stability as much as possible, so it can be considered as another game player in the evolutionary process, i.e., a stochastic Nash game of minimizing the maximum network evolution level caused by the worst environmental disturbances. Based on the nonlinear stochastic evolutionary game strategy, we find that some genetic variations can be used in natural selection to construct negative feedback loops, efficiently improving network robustness. This provides larger genetic robustness as a buffer against neutral genetic variations, as well as larger environmental robustness to resist environmental disturbances and maintain network phenotypic traits in the evolutionary process. In this situation, the robust phenotypic traits of stochastic biological networks can be more frequently selected by natural selection in evolution. However, if the harbored neutral genetic variations accumulate to a sufficiently large degree, and environmental disturbances are strong enough that the network robustness can no longer confer enough genetic and environmental robustness, then the phenotype robustness might break down. In this case, a network phenotypic trait may be pushed from one equilibrium point to another, changing the phenotypic trait and starting a new phase of network evolution through the hidden neutral genetic variations harbored in network robustness by adaptive evolution. Further, the proposed evolutionary game is extended to an n-tuple evolutionary game of stochastic biological networks with m players (competitive populations) and k environmental dynamics. PMID:24558296
Jovanović, Marko; Rakić, Tijana; Tumpa, Anja; Jančić Stojanović, Biljana
2015-06-10
This study presents the development of a hydrophilic interaction liquid chromatography method for the analysis of iohexol, its endo-isomer, and three impurities following the Quality by Design (QbD) approach. The main objective of the method was to identify the conditions where adequate separation quality in minimal analysis duration could be achieved within a robust region that guarantees the stability of method performance. The relationship between critical process parameters (acetonitrile content in the mobile phase, pH of the water phase, and ammonium acetate concentration in the water phase) and critical quality attributes is established by applying design of experiments methodology. The defined mathematical models and Monte Carlo simulation are used to evaluate the risk of uncertainty in the models' predictions and of imprecision in adjusting the process parameters, and to identify the design space. The borders of the design space are experimentally verified, confirming that the quality of the method is preserved in this region. Moreover, a Plackett-Burman design is applied for experimental robustness testing, and the method is fully validated to verify the adequacy of the selected optimal conditions: a ZIC HILIC analytical column (100 mm × 4.6 mm, 5 μm particle size); a mobile phase consisting of acetonitrile and water phase (72 mM ammonium acetate, pH adjusted to 6.5 with glacial acetic acid) (86.7:13.3, v/v); column temperature 25 °C; mobile phase flow rate 1 mL/min; detection wavelength 254 nm. Copyright © 2015 Elsevier B.V. All rights reserved.
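A minimal sketch of the Monte Carlo design-space step, assuming a made-up quadratic model for one critical quality attribute (resolution) and illustrative setting errors; the real study fits the models from designed experiments:

    import numpy as np

    rng = np.random.default_rng(1)

    def rs_model(acn, ph):
        # Invented fitted response surface for resolution Rs.
        return 5.0 - 0.08 * (acn - 85) ** 2 - 0.9 * (ph - 6.5) ** 2

    acn_grid, ph_grid = np.meshgrid(np.linspace(83, 90, 15), np.linspace(5.5, 7.5, 15))
    prob = np.zeros_like(acn_grid)
    for i in range(acn_grid.shape[0]):
        for j in range(acn_grid.shape[1]):
            acn = acn_grid[i, j] + rng.normal(0, 0.3, 500)   # setting imprecision
            ph = ph_grid[i, j] + rng.normal(0, 0.05, 500)
            prob[i, j] = np.mean(rs_model(acn, ph) >= 2.0)   # P(Rs meets criterion)
    design_space = prob >= 0.95                              # high-assurance region
    print(design_space.sum(), "of", design_space.size, "grid points qualify")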
Visual control of flight speed in Drosophila melanogaster.
Fry, Steven N; Rohrseitz, Nicola; Straw, Andrew D; Dickinson, Michael H
2009-04-01
Flight control in insects depends on self-induced image motion (optic flow), which the visual system must process to generate appropriate corrective steering maneuvers. Classic experiments in tethered insects applied rigorous system identification techniques for the analysis of turning reactions in the presence of rotating pattern stimuli delivered in open-loop. However, the functional relevance of these measurements for visual free-flight control remains equivocal due to the largely unknown effects of the highly constrained experimental conditions. To perform a systems analysis of the visual flight speed response under free-flight conditions, we implemented a 'one-parameter open-loop' paradigm using 'TrackFly' in a wind tunnel equipped with real-time tracking and virtual reality display technology. Upwind-flying flies were stimulated with sine gratings of varying temporal and spatial frequencies, and the resulting flight speed responses were measured. To control flight speed, the visual system of the fruit fly extracts linear pattern velocity robustly over a broad range of spatio-temporal frequencies. The speed signal is used for a proportional control of flight speed within locomotor limits. The extraction of pattern velocity over a broad spatio-temporal frequency range may require more sophisticated motion processing mechanisms than those identified in flies so far. In Drosophila, the neuromotor pathways underlying flight speed control may be suitably explored by applying advanced genetic techniques, for which our data can serve as a baseline. Finally, the high-level control principles identified in the fly can be meaningfully transferred into a robotic context, such as for the robust and efficient control of autonomous flying micro air vehicles.
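The proportional control interpretation can be sketched with a toy first-order model; the set point, lag, and gain are illustrative numbers, not fitted parameters from the experiments:

    # The fly adjusts its speed so that perceived pattern velocity tracks a
    # set point; a first-order lag stands in for the neuromotor response.
    def simulate(set_point=0.3, pattern_offset=0.1, Kp=2.0, dt=0.01, steps=3000, tau=0.2):
        v = 0.0                                # flight speed (m/s)
        for _ in range(steps):
            perceived = v - pattern_offset     # pattern velocity seen by the fly
            error = set_point - perceived
            v += dt / tau * Kp * error         # proportional speed correction
        return v

    print(round(simulate(), 3))                # settles near set_point + pattern_offset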
Vortex line topology during vortex tube reconnection
NASA Astrophysics Data System (ADS)
McGavin, P.; Pontin, D. I.
2018-05-01
This paper addresses reconnection of vortex tubes, with particular focus on the topology of the vortex lines (field lines of the vorticity). This analysis of vortex line topology reveals key features of the reconnection process, such as the generation of many small flux rings, formed when reconnection occurs in multiple locations in the vortex sheet between the tubes. Consideration of three-dimensional reconnection principles leads to a robust measurement of the reconnection rate, even once instabilities break the symmetry. It also allows us to identify internal reconnection of vortex lines within the individual vortex tubes. Finally, the introduction of a third vortex tube is shown to render the vortex reconnection process fully three-dimensional, leading to a fundamental change in the topological structure of the process. An additional interesting feature is the generation of vorticity null points.
Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong
2014-12-01
We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of cytoplasm for quantification. We tested the approach on real images and found that it can achieve high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jia, Ningning; Y Lam, Edmund
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
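A one-dimensional toy sketch of the stochastic-gradient idea: the defocus blur width is resampled at every step so the mask is trained against focus variation (a Gaussian blur stands in for a real lithography model, and the resist threshold step is omitted to keep the gradient simple):

    import numpy as np

    rng = np.random.default_rng(4)
    target = np.zeros(64)
    target[24:40] = 1.0                        # desired printed pattern
    m = target.copy()                          # mask, initialized at the target

    def blur(x, sigma):
        # Gaussian "aerial image" model; sigma plays the role of defocus.
        k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
        return np.convolve(x, k / k.sum(), mode="same")

    lr = 0.5
    for step in range(500):
        sigma = abs(rng.normal(2.0, 0.5)) + 0.5            # sampled focus condition
        err = blur(m, sigma) - target
        # Gradient of 0.5*||blur(m) - target||^2; the symmetric kernel makes
        # the adjoint another blur with the same sigma.
        m = np.clip(m - lr * blur(err, sigma), 0, 1)
    print(round(float(np.abs(blur(m, 2.0) - target).mean()), 4))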
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency domain analysis are also used to identify and correlate key flow structures with the time variation of lift.
Electronic whiteboards: review of the literature.
Randell, Rebecca; Greenhalgh, Joanne; Wyatt, Jeremy; Gardner, Peter; Pearman, Alan; Honey, Stephanie; Dowding, Dawn
2015-01-01
Electronic whiteboards are being introduced into hospitals to communicate real-time patient information instantly to staff. This paper provides a preliminary review of the current state of evidence for the effect of electronic whiteboards on care processes and patient outcomes. A literature search was performed for the dates 1996 to 2014 on MEDLINE, EMBASE, IEEE Xplore, Science Direct, and the ACM Digital Library. Thirteen papers, describing 11 studies, that met the inclusion criteria were identified. The majority of studies took place in the Emergency Department. While studies looked at the impact of electronic whiteboards on the process of care, there is an absence of evidence concerning their impact on patient outcomes. There is a need for robust research measuring the impact of electronic whiteboards on inpatient care.
Optimizing spacecraft design - optimization engine development : progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, W; Riyahi, S; Lu, W
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume. Methods: The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres placed at the upper lobe and lower lobe, respectively, in the normal lung (contralateral to the tumor). 27 texture features (9 from the intensity histogram, 8 from the grey-level co-occurrence matrix [GLCM], and 10 from the grey-level run-length matrix [GLRM]) were extracted from [normal lung-GTV]. To measure the variability of a feature F, the relative difference D = |Fref - Fsim| / Fref * 100% was calculated, where Fref was for the entire normal lung and Fsim was for [normal lung-GTV]. A feature was considered robust if the largest non-outlier (Q3 + 1.5*IQR) D was less than 5%, and considered not correlated with normal lung volume when their Pearson correlation was lower than 0.50. Results: Only 11 features were robust. All first-order intensity-histogram features (mean, max, etc.) were robust, while most higher-order features (skewness, kurtosis, etc.) were unrobust. Only two of the GLCM and four of the GLRM features were robust. Larger GTVs resulted in greater feature variation; this was particularly true for unrobust features. All robust features were uncorrelated with normal lung volume, while three unrobust features showed high correlation. Excessive variations were observed in two low grey-level run features and were later identified to come from one patient with local lung disease (atelectasis) in the normal lung. There was no dependence on GTV location. Conclusion: We identified 11 robust normal lung CT texture features that can be further examined for the prediction of radiation-induced lung disease. Interestingly, low grey-level run features identified normal lung diseases. This work was supported in part by the National Cancer Institute Grant R01CA172638.
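The robustness screen defined above translates directly into code; the numbers in the demonstration call are invented:

    import numpy as np

    def is_robust(f_ref, f_sims, tol=5.0):
        # D = |Fref - Fsim| / Fref * 100% per simulated GTV; keep the feature
        # when the largest non-outlier D (Q3 + 1.5*IQR fence) stays below tol.
        d = np.abs(f_ref - np.asarray(f_sims)) / abs(f_ref) * 100.0
        q1, q3 = np.percentile(d, [25, 75])
        fence = q3 + 1.5 * (q3 - q1)
        return bool(d[d <= fence].max() < tol)

    print(is_robust(100.0, [99.2, 101.5, 98.9, 100.8, 140.0]))  # outlier ignored -> True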
NASA Astrophysics Data System (ADS)
Chou, Shuo-Ju
2011-12-01
In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems that provide critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise" or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies and the noise variables are tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and in program development budget and schedule requirements. Unfortunately, the Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty at critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level metric employed by the TRA to describe the uncertainties of program technology elements can provide only a qualitative and non-descriptive estimate of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision-makers with the ability to assess or measure the robustness of program requirements against such uncertainties. A literature review of techniques for forecasting technology performance and development uncertainties and their impacts on capability, budget, and schedule requirements led to the conclusion that coupling a probabilistic analysis technique such as Monte Carlo simulation with quantitative, parametric models of technology performance impact and of technology development time and cost requirements would allow the probabilities of meeting specific constraints on these requirements to be established. These probability-of-success metrics can then be used as a quantitative and probabilistic measure of program requirements robustness against technology uncertainties. Combined with a Multi-Objective Genetic Algorithm optimization process and a computer-based Decision Support System, critical information regarding requirements robustness against technology uncertainties can be captured and quantified for acquisition decision-makers. This results in a more informed and justifiable selection of program technologies during initial program definition, as well as in the formulation of program development and risk management strategies.
To meet the stated research objective, the ENhanced TEchnology Robustness Prediction and RISk Evaluation (ENTERPRISE) methodology was formulated to provide a structured and transparent process for integrating these enabling techniques into a probabilistic and quantitative assessment of acquisition program requirements robustness against technology performance and development uncertainties. In order to demonstrate the capabilities of the ENTERPRISE method and test the research hypotheses, a demonstration application of this method was performed on a notional program for acquiring the Carrier-based Suppression of Enemy Air Defenses (SEAD) capability using Unmanned Combat Aircraft Systems (UCAS) and their enabling technologies. The results of this implementation provided valuable insights regarding the benefits and inner workings of this methodology, as well as its limitations that should be addressed in the future to narrow the gap between the current state and the desired state.
Modeling the temporal periodicity of growth increments based on harmonic functions
Morales-Bojórquez, Enrique; González-Peláez, Sergio Scarry; Bautista-Romero, J. Jesús; Lluch-Cota, Daniel Bernardo
2018-01-01
Age estimation methods based on hard structures require a process of validation to confirm the periodical pattern of growth marks. Among such processes, one of the most widely used is the marginal increment ratio (MIR), which is assumed to follow a sinusoidal cycle in a population. Despite its utility, in most cases its implementation has lacked robust statistical analysis. Accordingly, we propose a modeling approach for the temporal periodicity of growth increments based on first- and second-order harmonic functions. For illustrative purposes, the MIR periodicities for two geoduck species (Panopea generosa and Panopea globosa) were modeled to identify the periodical pattern of growth increments in the shell. This model identified an annual periodicity for both species but described different temporal patterns. The proposed procedure can be broadly used to objectively define the timing of the peak, the degree of symmetry, and therefore the synchrony of band deposition of different species on the basis of MIR data. PMID:29694381
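A minimal least-squares sketch of the first-order harmonic fit on synthetic monthly MIR data (a second-order fit would simply add cosine and sine columns at twice the frequency):

    import numpy as np

    # Fit MIR(t) = a0 + a1*cos(2*pi*t/12) + b1*sin(2*pi*t/12) over month t;
    # a clear first harmonic supports annual periodicity of band deposition.
    rng = np.random.default_rng(2)
    t = np.arange(1, 13, dtype=float)          # month of capture
    mir = 0.5 + 0.3 * np.cos(2 * np.pi * (t - 3) / 12) + rng.normal(0, 0.03, 12)

    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12)])
    coef, *_ = np.linalg.lstsq(X, mir, rcond=None)
    amp = np.hypot(coef[1], coef[2])
    peak_month = (np.arctan2(coef[2], coef[1]) * 12 / (2 * np.pi)) % 12
    print(f"amplitude={amp:.2f}, peak near month {peak_month:.1f}")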
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and used it to predict the time to clinical onset for subjects carrying a genetic mutation.
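A greedy one-step sketch of compositional kernel construction scored by BIC, assuming scikit-learn's GP implementation and a toy dataset; the paper's energy function additionally folds in the explained variance score:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, DotProduct

    rng = np.random.default_rng(3)
    X = np.sort(rng.uniform(0, 10, 40))[:, None]
    y = 0.5 * X.ravel() + np.sin(X.ravel()) + rng.normal(0, 0.1, 40)

    def bic(gp, n):
        # BIC-style score from the fitted GP's log marginal likelihood.
        k = len(gp.kernel_.theta)
        return -2.0 * gp.log_marginal_likelihood_value_ + k * np.log(n)

    base = [RBF(), RationalQuadratic(), DotProduct()]
    best_kernel, best_score = None, np.inf
    for k1 in base:                            # all sums and products of base pairs
        for combine in (lambda a, b: a + b, lambda a, b: a * b):
            for k2 in base:
                gp = GaussianProcessRegressor(kernel=combine(k1, k2),
                                              alpha=1e-2, normalize_y=True).fit(X, y)
                score = bic(gp, len(y))
                if score < best_score:
                    best_kernel, best_score = gp.kernel_, score
    print(best_kernel)                         # selected kernel composition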
Mitochondrial Protein Interaction Mapping Identifies Regulators of Respiratory Chain Function.
Floyd, Brendan J; Wilkerson, Emily M; Veling, Mike T; Minogue, Catie E; Xia, Chuanwu; Beebe, Emily T; Wrobel, Russell L; Cho, Holly; Kremer, Laura S; Alston, Charlotte L; Gromek, Katarzyna A; Dolan, Brendan K; Ulbrich, Arne; Stefely, Jonathan A; Bohl, Sarah L; Werner, Kelly M; Jochem, Adam; Westphall, Michael S; Rensvold, Jarred W; Taylor, Robert W; Prokisch, Holger; Kim, Jung-Ja P; Coon, Joshua J; Pagliarini, David J
2016-08-18
Mitochondria are essential for numerous cellular processes, yet hundreds of their proteins lack robust functional annotation. To reveal functions for these proteins (termed MXPs), we assessed condition-specific protein-protein interactions for 50 select MXPs using affinity enrichment mass spectrometry. Our data connect MXPs to diverse mitochondrial processes, including multiple aspects of respiratory chain function. Building upon these observations, we validated C17orf89 as a complex I (CI) assembly factor. Disruption of C17orf89 markedly reduced CI activity, and its depletion is found in an unresolved case of CI deficiency. We likewise discovered that LYRM5 interacts with and deflavinates the electron-transferring flavoprotein that shuttles electrons to coenzyme Q (CoQ). Finally, we identified a dynamic human CoQ biosynthetic complex involving multiple MXPs whose topology we map using purified components. Collectively, our data lend mechanistic insight into respiratory chain-related activities and prioritize hundreds of additional interactions for further exploration of mitochondrial protein function. Copyright © 2016 Elsevier Inc. All rights reserved.
Baumann, Andrea; Holness, D Linn; Norman, Patrica; Idriss-Wheeler, Dina; Boucher, Patricia
2012-07-01
This article presents a health and safety intervention model and the use of process evaluation to assess a participatory ergonomic intervention. The effectiveness of the Ergonomic Program Implementation Continuum (EPIC) was assessed at six healthcare pilot sites in Ontario, Canada. The model provided a framework to demonstrate evaluation findings. Participants reported that EPIC was thorough and identified improvements related to its use. Participants believed the program contributed to advancing an organizational culture of safety (COS). Main barriers to program uptake included resistance to change and need for adequate funding and resources. The dedication of organizational leaders and consultant coaches was identified as essential to the program's success. In terms of impact on industry, findings contribute to the evidence-based knowledge of health and safety interventions and support use of the framework for creating a robust infrastructure to advance organizational COS and link staff safety and wellness with patient safety in healthcare. Copyright © 2012 National Safety Council and Elsevier Ltd. All rights reserved.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy, as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach breaks down with the problem of size (combinatorial explosion in experimentation and model building with the number of variables), and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery
Sivaraks, Haemwaan
2015-01-01
Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially for ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance demands on physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284
Bakal, Tomas; Janata, Jiri; Sabova, Lenka; Grabic, Roman; Zlabek, Vladimir; Najmanova, Lucie
2018-06-16
A robust and widely applicable method for sampling of aquatic microbial biofilm and further sample processing is presented. The method is based on next-generation sequencing of the V4-V5 variable regions of the 16S rRNA gene and further statistical analysis of the sequencing data, which could be useful not only to investigate the taxonomic composition of biofilm bacterial consortia but also to assess aquatic ecosystem health. Five artificial materials commonly used for biofilm growth (glass, stainless steel, aluminum, polypropylene, polyethylene) were tested to determine the one giving the most robust and reproducible results. The effect of the sampler material used on total microbial composition was not statistically significant; however, the non-plastic materials (glass, metal) gave more stable outputs without irregularities among sample parallels. The bias of the method is assessed with respect to the employment of a non-quantitative step (PCR amplification) to obtain quantitative results (relative abundance of identified taxa). This aspect is often overlooked in ecological and medical studies. We document that sequencing a mixture of three merged primary PCR reactions for each sample, and further evaluating median values from three technical replicates per sample, makes it possible to overcome this bias and gives robust, repeatable results that distinguish well among sampling localities and seasons.
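The replicate-handling rule described above reduces to a per-taxon median; the abundance table below is invented:

    import numpy as np

    # Per sample, three technical (sequencing) replicates of relative
    # abundances are reduced by the per-taxon median, damping PCR or
    # sequencing irregularities confined to a single replicate.
    replicates = np.array([                    # rows: replicates, cols: taxa
        [0.42, 0.30, 0.18, 0.10],
        [0.45, 0.28, 0.17, 0.10],
        [0.30, 0.31, 0.29, 0.10],              # one replicate with an irregularity
    ])
    median_profile = np.median(replicates, axis=0)
    median_profile /= median_profile.sum()     # renormalize to relative abundances
    print(np.round(median_profile, 3))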
McCormack, James L; Sittig, Dean F; Wright, Adam; McMullen, Carmit; Bates, David W
2012-01-01
Objective: Computerized provider order entry (CPOE) with clinical decision support (CDS) can help hospitals improve care. Little is known about what CDS is presently in use and how it is managed, however, especially in community hospitals. This study sought to address this knowledge gap by identifying standard practices related to CDS in US community hospitals with mature CPOE systems. Materials and Methods: Representatives of 34 community hospitals, each of which had over 5 years' experience with CPOE, were interviewed to identify standard practices related to CDS. Data were analyzed with a mix of descriptive statistics and qualitative approaches to the identification of patterns, themes, and trends. Results: This broad sample of community hospitals had robust levels of CDS despite their small size and the independent nature of many of their physician staff members. The hospitals uniformly used medication alerts and order sets, had sophisticated governance procedures for CDS, and employed staff to customize CDS. Discussion: The level of customization needed for most CDS before implementation was greater than expected. Customization requires skilled individuals who represent an emerging manpower need at this type of hospital. Conclusion: These results bode well for robust diffusion of CDS to similar hospitals in the process of adopting CDS and suggest that national policies to promote CDS use may be successful. PMID:22707744
2D approaches to 3D watermarking: state-of-the-art and perspectives
NASA Astrophysics Data System (ADS)
Mitrea, M.; Duţă, S.; Prêteux, F.
2006-02-01
With the advent of the Information Society, video, audio, speech, and 3D media represent the source of huge economic benefits. Consequently, there is a continuously increasing demand for protecting their related intellectual property rights. The solution can be provided by robust watermarking, a research field which has exploded in the last 7 years. However, the largest part of the scientific effort was devoted to video and audio protection, while 3D objects have been comparatively neglected. In the absence of any standardisation attempt, the paper starts by summarising the approaches developed in this respect and by further identifying the main challenges to be addressed in the next years. Then, it describes an original oblivious watermarking method devoted to the protection of 3D objects represented by NURBS (Non-Uniform Rational B-Spline) surfaces. Applied to both free-form objects and CAD models, the method exhibited very good transparency (no visible differences between the marked and the unmarked model) and robustness (with respect to both traditional attacks and NURBS processing).
Lu, Tao
2016-01-01
A gene regulatory network (GRN) describes the interactions between genes, and models are sought to capture gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of the experimental duration. We illustrate the usefulness of the model and associated statistical methods through a simulation example and a real application example.
A Pub/Sub Message Distribution Architecture for Disruption Tolerant Networks
NASA Astrophysics Data System (ADS)
Carrilho, Sergio; Esaki, Hiroshi
Access to information is taken for granted in urban areas covered by a robust communication infrastructure. Nevertheless, most areas in the world are not covered by such infrastructures. We propose a DTN publish/subscribe system called Hikari, which uses nodes' mobility in order to distribute messages without relying on a robust infrastructure. The area of Disruption/Delay Tolerant Networks (DTN) focuses on providing connectivity to locations separated by networks with disruptions and delays. The Hikari system does not use node identifiers for message forwarding, thus eliminating the complexity of routing associated with many forwarding schemes in DTN. Hikari uses nodes' path information, advertised by special nodes in the system or predicted by the system itself, to optimize the message dissemination process. We have used the Paris subway system, due to its complexity, to validate Hikari and to analyze its performance. We have shown that Hikari achieves a superior delivery rate while keeping redundant messages in the system low, which is ideal when using devices with limited resources for message dissemination.
Emergence of robust growth laws from optimal regulation of ribosome synthesis.
Scott, Matthew; Klumpp, Stefan; Mateescu, Eduard M; Hwa, Terence
2014-08-22
Bacteria must constantly adapt their growth to changes in nutrient availability; yet despite large-scale changes in protein expression associated with sensing, adaptation, and processing different environmental nutrients, simple growth laws connect the ribosome abundance and the growth rate. Here, we investigate the origin of these growth laws by analyzing the features of ribosomal regulation that coordinate proteome-wide expression changes with cell growth in a variety of nutrient conditions in the model organism Escherichia coli. We identify supply-driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments. The growth laws emerge naturally from the robust regulatory strategy underlying growth rate control, irrespective of the details of the molecular implementation. The study highlights the interplay between phenomenological modeling and molecular mechanisms in uncovering fundamental operating constraints, with implications for endogenous and synthetic design of microorganisms. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.
Jopp, Eilin; Scheffler, Christiane; Hermanussen, Michael
2014-01-01
Screening is an important issue in medicine and is used to identify unrecognised diseases early in persons who are apparently in good health. Screening strongly relies on the concept of "normal values". Normal values are defined as values that are frequently observed in a population and usually range within certain statistical limits. Screening for obesity should start early, as the prevalence of obesity consolidates already at early school age. Though widely practiced, measuring BMI is not the ultimate solution for detecting obesity. Children with high BMI may be "robust" in skeletal dimensions. Assessing skeletal robustness and, in particular, assessing developmental tempo in adolescents are also important issues in health screening. Yet, in spite of the necessity of screening investigations, appropriate reference values are often missing. Meanwhile, new concepts of growth diagrams have been developed. Stage line diagrams are useful for tracking developmental processes over time. Functional data analyses have been used efficiently for analysing longitudinal growth in height and assessing the tempo of maturation. Convenient low-cost statistics have also been developed for generating synthetic national references.
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that required by traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
Anomaly Detection of Electromyographic Signals.
Ijaz, Ahsan; Choi, Jongeun
2018-04-01
In this paper, we provide a robust framework to detect anomalous electromyographic (EMG) signals and identify contamination types. As a first step for feature selection, an optimally selected Lawton wavelet transform is applied. Robust principal component analysis (rPCA) is then performed on these wavelet coefficients to obtain features in a lower dimension. The rPCA-based features are used to construct a self-organizing map (SOM). Finally, hierarchical clustering is applied to the SOM, separating anomalous signals residing in the smaller clusters and breaking them into logical units for contamination identification. The proposed methodology is tested using synthetic and real-world EMG signals. The synthetic EMG signals are generated using a heteroscedastic process mimicking desired experimental setups, and anomalies are introduced into a sub-part of these signals. These experiments are followed by tests on real EMG signals with synthetic anomalies introduced. Finally, a heterogeneous real-world data set with known quality issues is used under an unsupervised setting. The framework provides a recall of 90% (±3.3) and a precision of 99% (±0.4).
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that required by traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
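The dynamic programming step can be illustrated with a minimal, unguided stand-in: a classic minimal-cost top-to-bottom path search through a 2-D cost map. The guidance described above (restricting the search to automatically chosen start/end points) is omitted, and the cost map is a toy assumption:

```python
import numpy as np

def min_cost_vertical_path(cost):
    """Classic dynamic programming: find the minimal-cost top-to-bottom
    path through a 2-D cost map, each step moving down and at most one
    column sideways -- a stand-in for a separation-boundary search."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 1, cols - 1)
            prev = acc[r - 1, lo:hi + 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Trace the optimal path back from the cheapest end point.
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]

# Toy "juncture" cost map with a cheap channel along column 2.
cost = np.full((6, 5), 5.0)
cost[:, 2] = 1.0
print(min_cost_vertical_path(cost))   # expected: [2, 2, 2, 2, 2, 2]
```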
Robust Control Design for Systems With Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
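The reliability-based ingredient alone can be sketched by Monte Carlo: estimate the probability that a design requirement is violated under a probabilistic plant parameter. The plant, requirement function, and all numbers below are hypothetical, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def overshoot(p, k):
    """Hypothetical requirement function: 'overshoot' of a toy closed
    loop grows with plant gain p and shrinks with compensator gain k;
    the design requirement is overshoot <= 0.2."""
    return 0.3 * p / k

def violation_probability(k, n=100_000):
    p = rng.normal(1.0, 0.1, size=n)        # sample the uncertain plant
    return np.mean(overshoot(p, k) > 0.2)   # fraction violating the spec

for k in (1.0, 1.5, 2.0):
    print(f"k = {k}: P(violation) = {violation_probability(k):.4f}")
```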
Disease-specific health-related quality of life instruments for IgE-mediated food allergy.
Salvilla, S A; Dubois, A E J; Flokstra-de Blok, B M J; Panesar, S S; Worth, A; Patel, S; Muraro, A; Halken, S; Hoffmann-Sommergruber, K; DunnGalvin, A; Hourihane, J O'B; Regent, L; de Jong, N W; Roberts, G; Sheikh, A
2014-07-01
This is one of seven interlinked systematic reviews undertaken on behalf of the European Academy of Allergy and Clinical Immunology as part of their Guidelines for Food Allergy and Anaphylaxis, which focuses on instruments developed for IgE-mediated food allergy. Disease-specific questionnaires are significantly more sensitive than generic ones in measuring the response to interventions or future treatments, as well as estimating the general burden of food allergy. The aim of this systematic review was therefore to identify which disease-specific, validated instruments can be employed to enable assessment of the impact of, and investigations and interventions for, IgE-mediated food allergy on health-related quality of life (HRQL). Using a sensitive search strategy, we searched seven electronic bibliographic databases to identify disease-specific quality of life (QOL) tools relating to IgE-mediated food allergy. From the 17 eligible studies, we identified seven disease-specific HRQL instruments, which were then subjected to detailed quality appraisal. This revealed that these instruments have undergone formal development and validation processes and have robust psychometric properties, and they therefore provide a robust means of establishing the impact of food allergy on QOL. Suitable instruments are now available for use in children, adolescents, parents/caregivers, and adults. Further work must continue to develop a minimal clinically important difference for food allergy and to make these instruments available in a wider range of European languages. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
NASA Astrophysics Data System (ADS)
Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.
2018-02-01
Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify features that are robust across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T, and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions, by cancer status and field strength of acquisition, were compared using a Kolmogorov-Smirnov test under the null hypothesis that the two groups being compared came from the same distribution, with p-values corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strengths of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in the static and radiofrequency fields could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strengths.
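The robustness test described above can be sketched with synthetic feature values in place of the study's radiomics data; `scipy.stats.ks_2samp` and a step-down Holm-Bonferroni loop do the statistical work:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Synthetic stand-in: six "features", each observed in 241 cases at
# 1.5 T and 124 cases at 3.0 T, with an increasing artificial shift.
features = {f"feature_{i}": (rng.normal(0.0, 1.0, 241),
                             rng.normal(0.1 * i, 1.0, 124))
            for i in range(6)}

pvals = {name: ks_2samp(a, b).pvalue for name, (a, b) in features.items()}

# Holm-Bonferroni step-down: compare the i-th smallest p to alpha/(m-i)
# and stop rejecting at the first failure.
alpha, m = 0.05, len(pvals)
rejecting = True
for i, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    rejecting = rejecting and (p < alpha / (m - i))
    label = "different across field strengths" if rejecting else "potentially robust"
    print(f"{name}: p = {p:.4f} -> {label}")
```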
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
Covariate selection with group lasso and doubly robust estimation of causal effects.
Koch, Brandon; Vock, David M; Wolfson, Julian
2018-03-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
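The standard doubly robust (AIPW) estimator that consumes the selected variables can be sketched on simulated data; plain logistic and linear regressions stand in here for GLiDeR's adaptive group lasso:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)

# Simulated observational data with a known true ACE of 2.0.
n = 2000
X = rng.normal(size=(n, 3))                     # covariates (confounders)
ps_true = 1.0 / (1.0 + np.exp(-X[:, 0]))        # true propensity score
A = rng.binomial(1, ps_true)                    # treatment assignment
Y = 2.0 * A + X[:, 0] + rng.normal(size=n)      # outcome

e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]    # treatment model
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X) # E[Y|A=1,X]
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X) # E[Y|A=0,X]

# AIPW: consistent if either the outcome or the treatment model is right.
ace = (np.mean(A * (Y - m1) / e + m1)
       - np.mean((1 - A) * (Y - m0) / (1 - e) + m0))
print(f"doubly robust ACE estimate: {ace:.3f} (truth 2.0)")
```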
DOT National Transportation Integrated Search
2010-06-01
The purpose of this project is to conduct a pilot application of the Network Robustness Index (NRI) for the Chittenden County Regional Transportation Model. Using the results, improvements to the method to increase its effectiveness for more wi...
Robust detection-isolation-accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.
1985-01-01
The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as a decentralization of the FDI process. A general structure for decentralized FDI is proposed, and robustness metrics are used for determining various parameters of the algorithm.
Dynamic genome-scale metabolic modeling of the yeast Pichia pastoris.
Saitua, Francisco; Torres, Paulina; Pérez-Correa, José Ricardo; Agosin, Eduardo
2017-02-21
Pichia pastoris shows physiological advantages in producing recombinant proteins, compared to other commonly used cell factories. This yeast is mostly grown in dynamic cultivation systems, where the cell's environment is continuously changing and many variables influence process productivity. In this context, a model capable of explaining and predicting cell behavior for the rational design of bioprocesses is highly desirable. Currently, there are five genome-scale metabolic reconstructions of P. pastoris which have been used to predict extracellular cell behavior in stationary conditions. In this work, we assembled a dynamic genome-scale metabolic model for glucose-limited, aerobic cultivations of Pichia pastoris. Starting from an initial model structure for batch and fed-batch cultures, we performed pre/post regression diagnostics to ensure that model parameters were identifiable, significant and sensitive. Once identified, the non-relevant ones were iteratively fixed until a priori robust modeling structures were found for each type of cultivation. Next, the robustness of these reduced structures was confirmed by calibrating the model with new datasets, where no sensitivity, identifiability or significance problems appeared in their parameters. Afterwards, the model was validated for the prediction of batch and fed-batch dynamics in the studied conditions. Lastly, the model was employed as a case study to analyze the metabolic flux distribution of a fed-batch culture and to unravel genetic and process engineering strategies to improve the production of recombinant Human Serum Albumin (HSA). Simulation of single knock-outs indicated that deviation of carbon towards cysteine and tryptophan formation improves HSA production. The deletion of methylene tetrahydrofolate dehydrogenase could increase the HSA volumetric productivity by 630%. Moreover, given specific bioprocess limitations and strain characteristics, the model suggests that implementation of a decreasing specific growth rate during the feed phase of a fed-batch culture results in a 25% increase of the volumetric productivity of the protein. In this work, we formulated a dynamic genome scale metabolic model of Pichia pastoris that yields realistic metabolic flux distributions throughout dynamic cultivations. The model can be calibrated with experimental data to rationally propose genetic and process engineering strategies to improve the performance of a P. pastoris strain of interest.
NASA Astrophysics Data System (ADS)
Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos; Tramutoli, Valerio
2016-01-01
Real-time integration of multi-parametric observations is expected to accelerate the process toward improved, and operationally more effective, systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and short-term (from days to weeks) earthquake forecasting. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation for major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal InfraRed spectral region) is considered for its possible correlation with M ≥ 4 earthquakes that occurred in Greece between 2004 and 2013. The Robust Satellite Technique (RST) data analysis approach and Robust Estimator of TIR Anomalies (RETIRA) index were used to preliminarily define, and then to identify, significant sequences of TIR anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager on board the Meteosat Second Generation satellite. Taking into account the physical models proposed for justifying the existence of a correlation among TIR anomalies and earthquake occurrences, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability—CSEP—Project) have been defined to drive a retrospective correlation analysis process. The analysis shows that more than 93% of all identified SSTAs occur within the prefixed space-time window around the time and location of occurrence of (M ≥ 4) earthquakes, with a false-positive rate smaller than 7%. Molchan error diagram analysis shows that such a correlation is far from being achievable by chance, notwithstanding the large number of missed events due to frequent space/time data gaps produced by the presence of clouds over the scene. The achieved results, and particularly the very low rate of false positives registered over such a long testing period, seem already sufficient (at least) to qualify TIR anomalies (identified by the RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to t-DASH.
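A RETIRA-style index can be sketched on synthetic imagery: normalize each pixel's TIR excursion (pixel value minus scene mean) by that pixel's historical mean and standard deviation, and flag large values. All numbers are illustrative assumptions, not SEVIRI data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Build a 10-year synthetic history of 50x50 TIR scenes.
years, ny, nx = 10, 50, 50
hist = rng.normal(280.0, 2.0, size=(years, ny, nx))      # historical images
dT_hist = hist - hist.mean(axis=(1, 2), keepdims=True)   # remove scene means

mu, sigma = dT_hist.mean(axis=0), dT_hist.std(axis=0)    # per-pixel reference

# Tonight's scene, with an implanted 3x3 thermal anomaly.
tonight = rng.normal(280.0, 2.0, size=(ny, nx))
tonight[20:23, 20:23] += 12.0
dT = tonight - tonight.mean()

retira = (dT - mu) / sigma                               # RETIRA-style index
print("flagged pixels:", np.argwhere(retira > 4.0))      # mostly the 3x3 block
```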
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, aiming to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are prepared for display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
NASA Technical Reports Server (NTRS)
Ryan, Robert
1993-01-01
The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded with functional concepts, materials, and fabrication approach against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, formulation of criteria for the required robustness, concept selection, detail design, manufacturing and verification, and operations.
Redundancy relations and robust failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.
1984-01-01
All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations that are optimally robust, in a sense that includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures was studied. Lattice filters are used widely in the areas of speech and signal processing. Herein, they are used to identify the structural dynamics model of the flexible structures. This identified model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and control is engaged only when the model passes them. This type of validation scheme prevents instability when the overall loop is closed. The results obtained from simulation were compared to those obtained from experiments. In this regard, the flexible beam and grid apparatus at the Aerospace Control Research Lab (ACRL) of NASA Langley Research Center were used as the principal candidates for carrying out the above tasks. Another important area of research, namely that of robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods.
NASA Astrophysics Data System (ADS)
Gupta, Lokesh Kumar
2012-11-01
Seven process-related impurities in the atorvastatin calcium drug substance were identified by LC-MS. The structures of the impurities were confirmed by modern spectroscopic techniques such as 1H NMR and IR, together with physicochemical studies using synthesized authentic reference compounds. The synthesized reference samples of the impurity compounds were used for quantitative HPLC determination. The impurities were detected by a newly developed gradient reverse-phase high-performance liquid chromatography (HPLC) method. The system suitability of the HPLC analysis established the validity of the separation. The analytical method was validated according to International Conference on Harmonisation (ICH) guidelines with respect to specificity, precision, accuracy, linearity, robustness, and stability of analytical solutions to demonstrate the power of the newly developed HPLC method.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
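The first of the three examples, the pure birth (Yule) process observed at discrete time points, is straightforward to simulate; the birth rate and initial population below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def yule_at_times(b, z0, obs_times):
    """Simulate a continuous-time pure birth (Yule) process observed at
    discrete times: each of the z current individuals divides at rate b,
    so inter-birth waiting times are Exponential(b * z)."""
    t, z, observed = 0.0, z0, []
    for t_obs in obs_times:
        while True:
            dt = rng.exponential(1.0 / (b * z))   # time to the next birth
            if t + dt > t_obs:
                break          # memorylessness lets us discard the residual
            t += dt
            z += 1
        observed.append(z)
        t = t_obs
    return observed

obs = yule_at_times(b=0.7, z0=10, obs_times=[1.0, 2.0, 3.0])
# Since E[Z(t)] = z0 * exp(b * t), a simple moment estimator of b uses the
# average log-growth between successive observations.
print("observed counts:", obs)
```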
Standardization Process for Space Radiation Models Used for Space System Design
NASA Technical Reports Server (NTRS)
Barth, Janet; Daly, Eamonn; Brautigam, Donald
2005-01-01
The space system design community has three concerns related to models of the radiation belts and plasma: 1) AP-8 and AE-8 models are not adequate for modern applications; 2) Data that have become available since the creation of AP-8 and AE-8 are not being fully exploited for modeling purposes; 3) When new models are produced, there is no authorizing organization identified to evaluate the models or their datasets for accuracy and robustness. This viewgraph presentation provided an overview of the roadmap adopted by the Working Group Meeting on New Standard Radiation Belt and Space Plasma Models.
Lam, Philippe; Stern, Al
2010-01-01
We developed several techniques for visualizing the fit between a stopper and a vial in the critical flange area, a location typically hidden from view. Using these tools, it is possible to identify surfaces involved in forming the initial seal immediately after stopper insertion. We present examples illustrating important design elements that can contribute to forming a robust primary package. These techniques can also be used for component screening by facilitating the identification of combinations that do not fit well together so that they can be eliminated early in the selection process.
Wavelet Applications for Flight Flutter Testing
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.
1999-01-01
Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.
On the robustness of Herlihy's hierarchy
NASA Technical Reports Server (NTRS)
Jayanti, Prasad
1993-01-01
A wait-free hierarchy maps object types to levels in Z+ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.
Many-objective robust decision making for water allocation under climate change.
Yan, Dan; Ludwig, Fulco; Huang, He Qing; Werners, Saskia E
2017-12-31
Water allocation is facing profound challenges due to climate change uncertainties. To identify adaptive water allocation strategies that are robust to climate change uncertainties, a model framework combining many-objective robust decision making and biophysical modeling is developed for large rivers. The framework was applied to the Pearl River basin (PRB), China, where sufficient flow to the delta is required to reduce saltwater intrusion in the dry season. Before identifying and assessing robust water allocation plans for the future, the performance of ten state-of-the-art MOEAs (multi-objective evolutionary algorithms) is evaluated for the water allocation problem in the PRB. The Borg multi-objective evolutionary algorithm (Borg MOEA), which is a self-adaptive optimization algorithm, has the best performance during the historical periods. Therefore, it is selected to generate new water allocation plans for the future (2079-2099). This study shows that robust decision making using carefully selected MOEAs can help limit saltwater intrusion in the Pearl River Delta. However, the framework could perform poorly due to larger-than-expected climate change impacts on water availability. Results also show that subjective design choices from the researchers and/or water managers could potentially limit the effectiveness of the model framework and cause even the most robust water allocation plans to fail under future climate change. Developing robust allocation plans in a river basin suffering from increasing water shortage requires researchers and water managers to thoroughly characterize future climate change in the study regions and the vulnerabilities of their tools. Copyright © 2017 Elsevier B.V. All rights reserved.
Overview of intercalibration of satellite instruments
Chander, G.; Hewison, T.J.; Fox, N.; Wu, X.; Xiong, X.; Blackwell, W.J.
2013-01-01
Inter-calibration of satellite instruments is critical for detection and quantification of changes in the Earth’s environment, weather forecasting, understanding climate processes, and monitoring climate and land cover change. These applications use data from many satellites; for the data to be inter-operable, the instruments must be cross-calibrated. To meet the stringent needs of such applications requires that instruments provide reliable, accurate, and consistent measurements over time. Robust techniques are required to ensure that observations from different instruments can be normalized to a common scale that the community agrees on. The long-term reliability of this process needs to be sustained in accordance with established reference standards and best practices. Furthermore, establishing physical meaning to the information through robust Système International d'unités (SI) traceable Calibration and Validation (Cal/Val) is essential to fully understand the parameters under observation. The processes of calibration, correction, stability monitoring, and quality assurance need to be underpinned and evidenced by comparison with “peer instruments” and, ideally, highly calibrated in-orbit reference instruments. Inter-calibration between instruments is a central pillar of the Cal/Val strategies of many national and international satellite remote sensing organizations. Inter-calibration techniques as outlined in this paper not only provide a practical means of identifying and correcting relative biases in radiometric calibration between instruments but also enable potential data gaps between measurement records in a critical time series to be bridged. Use of a robust set of internationally agreed upon and coordinated inter-calibration techniques will lead to significant improvement in the consistency between satellite instruments and facilitate accurate monitoring of the Earth’s climate at uncertainty levels needed to detect and attribute the mechanisms of change. This paper summarizes the state-of-the-art of post-launch radiometric calibration of remote sensing satellite instruments, through inter-calibration.
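One common inter-calibration ingredient, normalizing a monitored instrument to a reference via collocated measurements and a linear radiance mapping, can be sketched with synthetic numbers (the gain/offset below are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic collocations: the monitored instrument reads the same scenes
# as the reference, but with a multiplicative bias, an offset, and noise.
L_ref = rng.uniform(20.0, 120.0, size=500)                # reference radiances
L_mon = 1.03 * L_ref - 1.5 + rng.normal(0.0, 0.4, 500)    # monitored radiances

# Least-squares linear mapping from monitored to reference radiances.
gain, offset = np.polyfit(L_mon, L_ref, 1)
print(f"correction: L_corrected = {gain:.3f} * L_mon + {offset:+.3f}")

L_corr = gain * L_mon + offset
print(f"residual bias after correction: {np.mean(L_corr - L_ref):+.4f}")
```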
Hemispheric differences in recognizing upper and lower facial displays of emotion.
Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D
2001-01-01
To determine if there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g. anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used by persons to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. 30 strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: 1) without instructions and 2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, for the left visual field they robustly identified the emotion displayed on the upper face. For the right visual field, they continued to identify the emotion displayed on the lower face, but to a lesser degree. Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability to recognize deceitful behaviors compared with the left hemisphere. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.
Baronsky-Probst, J; Möltgen, C-V; Kessler, W; Kessler, R W
2016-05-25
Hot melt extrusion (HME) is a well-known process within the plastic and food industries that has been utilized for the past several decades and is increasingly accepted by the pharmaceutical industry for continuous manufacturing. For tamper-resistant formulations of e.g. opioids, HME is the most efficient production technique. The focus of this study is thus to evaluate the manufacturability of the HME process for tamper-resistant formulations. Parameters such as the specific mechanical energy (SME), as well as the melt pressure and its standard deviation, are important and will be discussed in this study. In the first step, the existing process data are analyzed by means of multivariate data analysis. Key critical process parameters such as feed rate, screw speed, and the concentration of the API in the polymers are identified, and critical quality parameters of the tablet are defined. In the second step, a relationship between the critical material, product, and process quality attributes is established by means of Design of Experiments (DoEs). The resulting SME and the temperature at the die are essential data points needed to indirectly qualify the degradation of the API, which should be minimal. NIR spectroscopy is used to monitor the material during the extrusion process. In contrast to most applications in which the probe is directly integrated into the die, the optical sensor is integrated into the cooling line of the strands. This saves costs in probe design and maintenance and increases the robustness of the chemometric models. Finally, a process measurement system is installed to monitor and control all of the critical attributes in real-time by means of first principles, DoE models, soft sensor models, and spectroscopic information. Overall, the process is very robust as long as the screw speed is kept low. Copyright © 2015 Elsevier B.V. All rights reserved.
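The specific mechanical energy mentioned above is the mechanical screw power divided by the mass throughput, SME = 2π·N·τ / ṁ. A sketch with illustrative settings (not the study's operating points) shows why a low screw speed keeps SME, and hence API degradation risk, down:

```python
import math

def specific_mechanical_energy(screw_speed_rpm, torque_Nm, feed_rate_kg_h):
    """SME in kJ/kg: mechanical power 2*pi*N*tau divided by throughput."""
    power_W = 2.0 * math.pi * (screw_speed_rpm / 60.0) * torque_Nm
    m_dot_kg_s = feed_rate_kg_h / 3600.0
    return power_W / m_dot_kg_s / 1000.0

# Illustrative settings only: 8 N*m net torque, 2 kg/h feed rate.
for rpm in (100, 200, 400):
    sme = specific_mechanical_energy(rpm, torque_Nm=8.0, feed_rate_kg_h=2.0)
    print(f"{rpm} rpm -> SME = {sme:.0f} kJ/kg")
```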
THREaD Mapper Studio: a novel, visual web server for the estimation of genetic linkage maps
Cheema, Jitender; Ellis, T. H. Noel; Dicks, Jo
2010-01-01
The estimation of genetic linkage maps is a key component in plant and animal research, providing both an indication of the genetic structure of an organism and a mechanism for identifying candidate genes associated with traits of interest. Because of this importance, several computational solutions to genetic map estimation exist, mostly implemented as stand-alone software packages. However, the estimation process is often largely hidden from the user. Consequently, problems such as a program crashing may occur that leave a user baffled. THREaD Mapper Studio (http://cbr.jic.ac.uk/threadmapper) is a new web site that implements a novel, visual and interactive method for the estimation of genetic linkage maps from DNA markers. The rationale behind the web site is to make the estimation process as transparent and robust as possible, while also allowing users to use their expert knowledge during analysis. Indeed, the 3D visual nature of the tool allows users to spot features in a data set, such as outlying markers and potential structural rearrangements that could cause problems with the estimation procedure and to account for them in their analysis. Furthermore, THREaD Mapper Studio facilitates the visual comparison of genetic map solutions from third party software, aiding users in developing robust solutions for their data sets. PMID:20494977
Bonilauri Ferreira, Ana Paula Ribeiro; Ferreira, Rodrigo Fernando; Rajgor, Dimple; Shah, Jatin; Menezes, Andrea; Pietrobon, Ricardo
2010-04-20
Little is known about the reasoning mechanisms used by physicians in decision-making and how this compares to diagnostic clinical practice guidelines. We explored the clinical reasoning process in a real-life environment. This is a qualitative study evaluating transcriptions of sixteen physicians' reasoning during appointments with patients, clinical discussions between specialists, and personal interviews with physicians affiliated to a hospital in Brazil. Four main themes were identified: simple and robust heuristics, extensive use of social environment rationality, attempts to prove diagnostic and therapeutic hypothesis while refuting potential contradictions using positive test strategy, and reaching the saturation point. Physicians constantly attempted to prove their initial hypothesis while trying to refute any contradictions. While social environment rationality was the main factor in the determination of all steps of the clinical reasoning process, factors such as referral letters and number of contradictions associated with the initial hypothesis had influence on physicians' confidence and determination of the threshold to reach a final decision. Physicians rely on simple heuristics associated with environmental factors. This model allows for robustness, simplicity, and cognitive energy saving. Since this model does not fit into current diagnostic clinical practice guidelines, we make some propositions to help its integration.
Wireless Sensors and Networks for Advanced Energy Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J.E.
Numerous national studies and working groups have identified low-cost, very low-power wireless sensors and networks as a critical enabling technology for increasing energy efficiency, reducing waste, and optimizing processes. Research areas for developing such sensor and network platforms include microsensor arrays, ultra-low power electronics and signal conditioning, data/control transceivers, and robust wireless networks. A review of some of the research in the following areas is presented: (1) low-cost, flexible multi-sensor array platforms (CO2, NOx, CO, humidity, NH3, O2, occupancy, etc.) that enable energy and emission reductions in applications such as buildings and manufacturing; (2) modeling investments (energy usage and savings to drive capital investment decisions) and estimated uptime improvements through pervasive gathering of equipment and process health data and its effects on energy; (3) robust, self-configuring wireless sensor networks for energy management; and (4) quality-of-service for secure and reliable data transmission from widely distributed sensors. Wireless communications is poised to support technical innovations in the industrial community, with widespread use of wireless sensors forecasted to improve manufacturing production and energy efficiency and reduce emissions. Progress being made in wireless system components, as described in this paper, is helping bring these projected improvements to reality.
Bonilauri Ferreira, Ana Paula Ribeiro; Ferreira, Rodrigo Fernando; Rajgor, Dimple; Shah, Jatin; Menezes, Andrea; Pietrobon, Ricardo
2010-01-01
Background: Little is known about the reasoning mechanisms used by physicians in decision-making and how this compares to diagnostic clinical practice guidelines. We explored the clinical reasoning process in a real-life environment. Method: This is a qualitative study evaluating transcriptions of sixteen physicians' reasoning during appointments with patients, clinical discussions between specialists, and personal interviews with physicians affiliated to a hospital in Brazil. Results: Four main themes were identified: simple and robust heuristics, extensive use of social environment rationality, attempts to prove diagnostic and therapeutic hypothesis while refuting potential contradictions using positive test strategy, and reaching the saturation point. Physicians constantly attempted to prove their initial hypothesis while trying to refute any contradictions. While social environment rationality was the main factor in the determination of all steps of the clinical reasoning process, factors such as referral letters and number of contradictions associated with the initial hypothesis had influence on physicians' confidence and determination of the threshold to reach a final decision. Discussion: Physicians rely on simple heuristics associated with environmental factors. This model allows for robustness, simplicity, and cognitive energy saving. Since this model does not fit into current diagnostic clinical practice guidelines, we make some propositions to help its integration. PMID:20421920
Robust Inference of Genetic Exchange Communities from Microbial Genomes Using TF-IDF.
Cong, Yingnan; Chan, Yao-Ban; Phillips, Charles A; Langston, Michael A; Ragan, Mark A
2017-01-01
Bacteria and archaea can exchange genetic material across lineages through processes of lateral genetic transfer (LGT). Collectively, these exchange relationships can be modeled as a network and analyzed using concepts from graph theory. In particular, densely connected regions within an LGT network have been defined as genetic exchange communities (GECs). However, it has been problematic to construct networks in which edges solely represent LGT. Here we apply term frequency-inverse document frequency (TF-IDF), an alignment-free method originating from document analysis, to infer regions of lateral origin in bacterial genomes. We examine four empirical datasets of different size (number of genomes) and phyletic breadth, varying a key parameter (word length k) within bounds established in previous work. We map the inferred lateral regions to genes in recipient genomes, and construct networks in which the nodes are groups of genomes, and the edges natively represent LGT. We then extract maximum and maximal cliques (i.e., GECs) from these graphs, and identify nodes that belong to GECs across a wide range of k. Most surviving lateral transfer has happened within these GECs. Using Gene Ontology enrichment tests we demonstrate that biological processes associated with metabolism, regulation and transport are often over-represented among the genes affected by LGT within these communities. These enrichments are largely robust to changes in k.
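The core scoring idea can be sketched without any bioinformatics tooling: treat each genome as a document of overlapping k-mers and weight each k-mer by TF-IDF. The sequences and k below are toy values, not the study's datasets:

```python
import math
from collections import Counter

def kmers(seq, k):
    """All overlapping k-mers ('words') of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy "genomes"; a k-mer common to g1/g2 but absent from g3 gets a
# nonzero TF-IDF weight there, while a g3-private k-mer scores highest.
genomes = {
    "g1": "ATGGCGTACGTTAGC",
    "g2": "ATGGCGTACGTTAAA",
    "g3": "TTTTGCGCATATAGC",
}
k = 4
docs = {name: Counter(kmers(seq, k)) for name, seq in genomes.items()}
n_docs = len(docs)

def tf_idf(name, kmer):
    tf = docs[name][kmer] / sum(docs[name].values())      # term frequency
    df = sum(1 for d in docs.values() if kmer in d)       # document frequency
    return tf * math.log(n_docs / df)                     # TF-IDF weight

for kmer in ("GCGT", "TATA"):
    print(kmer, {name: round(tf_idf(name, kmer), 4) for name in docs})
```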
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
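A minimal ODE sketch of feedback via molecular sequestration (an antithetic-style loop with hypothetical rate constants, not the paper's RNA circuit): a reference species and a feedback species annihilate each other, and the residual free reference species drives the output, which is steered toward a level set by the reference production rate when sequestration is fast and controller turnover is slow:

```python
# Species: u (reference), y (feedback, produced in proportion to the
# output x); u and y sequester each other at rate eta; free u drives x.

k_ref = 10.0     # reference production rate of u (assumed)
theta = 1.0      # production of y per unit output (assumed)
eta = 100.0      # sequestration (annihilation) rate (assumed)
k_out = 2.0      # output production per unit free u (assumed)
delta_x = 1.0    # dilution of the output (assumed)
delta_c = 0.01   # slow turnover of the controller species (assumed)

u = y = x = 0.0
dt = 1e-3
for _ in range(int(100 / dt)):                   # forward-Euler integration
    du = k_ref - eta * u * y - delta_c * u
    dy = theta * x - eta * u * y - delta_c * y
    dx = k_out * u - delta_x * x
    u, y, x = u + dt * du, y + dt * dy, x + dt * dx

# Fast sequestration and slow controller turnover force theta * x toward
# k_ref, i.e. x close to k_ref / theta = 10 for these assumed parameters.
print(f"steady-state output: {x:.2f} (reference {k_ref / theta:.2f})")
```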
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) scheme to control vibration caused by the vehicle engine using an electromagnetic type of active engine mount. Vibration isolation performances of the active mount associated with the robust controller are evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental test. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robust behavior of the controller in the face of large uncertainties. The obtained results show that the proposed controller provides robust vibration control performance even in the presence of large uncertainties, achieving effective vibration isolation.
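A gradient-based MRAC law with σ-modification can be sketched on a first-order toy plant (all numbers hypothetical, not the engine-mount model); the leakage term -σθ is what keeps the adaptive gains bounded when uncertainty would otherwise make them drift:

```python
# Unknown (unstable) plant x' = a_p*x + k_p*u tracked against reference
# model xm' = a_m*xm + k_m*r via adaptive gains th_x, th_r.
a_p, k_p = 1.0, 1.0
a_m, k_m = -2.0, 2.0
gamma, sigma = 10.0, 0.1      # adaptation gain and sigma-mod leakage

dt, T = 1e-3, 20.0
x = xm = th_x = th_r = 0.0
for i in range(int(T / dt)):
    t = i * dt
    r = 1.0 if (t % 8.0) < 4.0 else -1.0       # square-wave reference
    u = th_x * x + th_r * r                    # adaptive control law
    e = x - xm                                 # tracking error
    # Gradient update with sigma-modification (leakage):
    th_x += dt * (-gamma * e * x - gamma * sigma * th_x)
    th_r += dt * (-gamma * e * r - gamma * sigma * th_r)
    x += dt * (a_p * x + k_p * u)              # plant (forward Euler)
    xm += dt * (a_m * xm + k_m * r)            # reference model

# Ideal matching gains: th_x* = (a_m - a_p)/k_p = -3, th_r* = k_m/k_p = 2;
# leakage trades a small bias for bounded gains under uncertainty.
print(f"theta_x = {th_x:.2f} (ideal -3), theta_r = {th_r:.2f} (ideal 2)")
```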
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly smaller number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
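The orthogonal-array and signal-to-noise analysis can be sketched with made-up response data; Taguchi's larger-the-better ratio is S/N = -10·log10(mean(1/y²)), and the most robust level of each factor is the one with the higher mean S/N:

```python
import numpy as np

# L4(2^3) orthogonal array: three two-level factors studied in 4 runs.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Made-up replicated responses per run (columns = repeats under noise).
y = np.array([[12.1, 11.7],
              [14.0, 13.2],
              [ 9.8, 10.5],
              [15.1, 14.6]])

# Larger-the-better signal-to-noise ratio for each run.
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Average S/N per factor level; pick the more robust level of each factor.
for f in range(L4.shape[1]):
    means = [sn[L4[:, f] == lvl].mean() for lvl in (0, 1)]
    best = int(np.argmax(means))
    print(f"factor {f}: level-0 S/N {means[0]:.2f}, "
          f"level-1 S/N {means[1]:.2f} -> choose level {best}")
```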
Robust fusion-based processing for military polarimetric imaging systems
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin
2017-05-01
Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
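One APR ingredient, splitting a coarse particle into children while conserving mass and momentum and respecting the half-radius robustness constraint recalled above, can be sketched as follows; the stencil half-width and radius ratio are assumed values, not the paper's refinement parameters:

```python
import numpy as np

def split_particle(pos, mass, vel, h_coarse, eps=0.35, alpha=0.6):
    """Replace one coarse particle by 4 children on a 2-D square stencil
    of half-width eps*h_coarse. Each child carries mass/4, the parent
    velocity (conserving mass and momentum), and smoothing radius
    alpha*h_coarse; alpha > 0.5 enforces the robustness constraint that
    refined radii exceed half the unrefined radius."""
    assert alpha > 0.5, "refined radius must exceed half the coarse radius"
    offsets = eps * h_coarse * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
    children_pos = pos + offsets
    return children_pos, mass / 4.0, np.tile(vel, (4, 1)), alpha * h_coarse

pos, m, v, h = np.array([0.0, 0.0]), 1.0, np.array([2.0, 0.0]), 0.1
c_pos, c_m, c_v, c_h = split_particle(pos, m, v, h)
print("children at:\n", c_pos)
print("child mass:", c_m, "child smoothing radius:", c_h)
```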
Shape detection of Gaborized outline versions of everyday objects
Sassi, Michaël; Machilsen, Bart; Wagemans, Johan
2012-01-01
We previously tested the identifiability of six versions of Gaborized outlines of everyday objects, differing in the orientations assigned to elements inside and outside the outline. We found significant differences in identifiability between the versions, and related a number of stimulus metrics to identifiability [Sassi, M., Vancleef, K., Machilsen, B., Panis, S., & Wagemans, J. (2010). Identification of everyday objects on the basis of Gaborized outline versions. i-Perception, 1(3), 121–142]. In this study, after retesting the identifiability of new variants of three of the stimulus versions, we tested their robustness to local orientation jitter in a detection experiment. In general, our results replicated the key findings from the previous study, and allowed us to substantiate our earlier interpretations of the effects of our stimulus metrics and of the performance differences between the different stimulus versions. The results of the detection task revealed a different ranking order of stimulus versions than the identification task. By examining the parallels and differences between the effects of our stimulus metrics in the two tasks, we found evidence for a trade-off between shape detectability and identifiability. The generally simple and smooth shapes that yield the strongest contour integration and most robust detectability tend to lack the distinguishing features necessary for clear-cut identification. Conversely, contours that do contain such identifying features tend to be inherently more complex and, therefore, yield weaker integration and less robust detectability. PMID:23483752
Acceptance testing for PACS: from methodology to design to implementation
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Huang, H. K.
2004-04-01
Acceptance Testing (AT) is a crucial step in the implementation process of a PACS within a clinical environment. AT determines whether the PACS is ready for clinical use and marks the official sign-off of the PACS product. Most PACS vendors have Acceptance Testing (AT) plans; however, these plans do not provide a complete and robust evaluation of the full system. In addition, different sites will have different special requirements that vendor AT plans do not cover. The purpose of this paper is to introduce a protocol for AT design and present case studies of AT performed on clinical PACS. A methodology is presented that includes identifying testing components within PACS, quality assurance for both functionality and performance, and technical testing focusing on key single points-of-failure within the PACS product. Tools and resources that provide assistance in performing AT are discussed. In addition, implementation of the AT within the clinical environment and the overall implementation timeline of the PACS process are presented. Finally, case studies of actual AT of clinical PACS performed in the healthcare environment are reviewed. The methodology for designing and implementing a robust AT plan for PACS was documented and has been used in PACS acceptance tests at several sites. This methodology can be applied to any PACS and can be used as a validation for the PACS product being acquired by radiology departments and hospitals. A methodology for AT design and implementation was presented that can be applied to future PACS installations. A robust AT plan for a PACS installation can increase both the utilization and satisfaction of a successful implementation of a PACS product that benefits both vendor and customer.
Lee, Won Seok; Won, Sejeong; Park, Jeunghee; Lee, Jihye; Park, Inkyu
2012-06-07
Controlled alignment and mechanically robust bonding between nanowires (NWs) and electrodes are essential requirements for reliable operation of functional NW-based electronic devices. In this work, we developed a novel process for the alignment and bonding between NWs and metal electrodes by using thermo-compressive transfer printing. In this process, bottom-up synthesized NWs were aligned in parallel by shear loading onto the intermediate substrate and then finally transferred onto the target substrate with low melting temperature metal electrodes. In particular, multi-layer (e.g. Cr/Au/In/Au and Cr/Cu/In/Au) metal electrodes are softened at low temperatures (below 100 °C) and facilitate submergence of aligned NWs into the surface of electrodes at a moderate pressure (∼5 bar). By using this thermo-compressive transfer printing process, robust electrical and mechanical contact between NWs and metal electrodes can be realized. This method is believed to be very useful for the large-area fabrication of NW-based electrical devices with improved mechanical robustness, reduced electrical contact resistance, and enhanced reliability.
NASA Astrophysics Data System (ADS)
Vora, V. P.; Mahmassani, H. S.
2002-02-01
This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data are examined through exploratory analysis and compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate the research.
M4SF-17LL010301071: Thermodynamic Database Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavarin, M.; Wolery, T. J.
2017-09-05
This progress report (Level 4 Milestone Number M4SF-17LL010301071) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number M4SF-17LL01030107. The DR Argillite Disposal R&D control account is focused on the evaluation of important processes in the analysis of disposal design concepts and related materials for nuclear fuel disposal in clay-bearing repository media. The objectives of this work package are to develop model tools for evaluating impacts of THMC processes on long-term disposal of spent fuel in argillite rocks, and to establish the scientific basis for high thermal limits. This work is contributing to the GDSA model activities to identify gaps, develop process models, provide parameter feeds, and support requirements, providing the capability for a robust repository performance assessment model by 2020.
Integrative analyses of human reprogramming reveal dynamic nature of induced pluripotency
Cacchiarelli, Davide; Trapnell, Cole; Ziller, Michael J.; Soumillon, Magali; Cesana, Marcella; Karnik, Rahul; Donaghey, Julie; Smith, Zachary D.; Ratanasirintrawoot, Sutheera; Zhang, Xiaolan; Ho Sui, Shannan J.; Wu, Zhaoting; Akopian, Veronika; Gifford, Casey A.; Doench, John; Rinn, John L.; Daley, George Q.; Meissner, Alexander; Lander, Eric S.; Mikkelsen, Tarjei S.
2015-01-01
Summary Induced pluripotency is a promising avenue for disease modeling and therapy, but the molecular principles underlying this process, particularly in human cells, remain poorly understood due to donor-to-donor variability and intercellular heterogeneity. Here we constructed and characterized a clonal, inducible human reprogramming system that provides a reliable source of cells at any stage of the process. This system enabled integrative transcriptional and epigenomic analysis across the human reprogramming timeline at high resolution. We observed distinct waves of gene network activation, including the ordered reactivation of broad developmental regulators followed by early embryonic patterning genes and culminating in the emergence of a signature reminiscent of pre-implantation stages. Moreover, complementary functional analyses allowed us to identify and validate novel regulators of the reprogramming process. Altogether, this study sheds light on the molecular underpinnings of induced pluripotency in human cells and provides a robust cell platform for further studies. PMID:26186193
2009-10-01
phase and factors which may cause accelerated growth rates is key to achieving a reliable and robust bearing design. The end goal is to identify control parameters for optimizing bearing materials for improved...25.0 nm and were each fabricated from same material heats respectively to a custom design print to ABEC 5 quality and had split inner rings. Each had
Identifying Pre-Seismic TIR Anomalies: A Long Term (2004-2015) Of RST Analysis Over Turkish Area
NASA Astrophysics Data System (ADS)
Perrone, A.; Tramutoli, V.; Corrado, A.; Filizzola, C.; Genzano, N.; Lisi, M.; Paciello, R.; Pergola, N.
2017-12-01
Since the eighties, fluctuations of Earth's thermally emitted radiation, measured by satellite sensors operating in the thermal infrared (TIR) spectral range (i.e., 10-12 µm), have been associated with the complex process of preparation of earthquakes. Several theories have been proposed to explain their origin and their space-time evolution. In this paper, the Earth's emitted radiation in the TIR spectral region is considered for its possible correlation with M≥4 earthquakes that occurred in Turkey between 2004 and 2015. The Robust Satellite Technique (RST) and the RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in the period 1 April 2004 to 31 October 2015 (12 years) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. The performed analysis shows that more than 67% of all identified SSTAs occur in the pre-fixed space-time window around the occurrence time and location of earthquakes (M≥4), with a false positive rate smaller than 33%. Moreover, Molchan error diagram analysis gave a clear indication that such a correlation is non-casual, in comparison with the random guess function. Notwithstanding the large number of missed events, due to frequent space/time data gaps produced by the presence of clouds over the scene, the achieved results, and particularly the low rate of false positives registered over such a long testing period, seem sufficient (at least) to qualify TIR anomalies (identified by the RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to time-Dependent Assessment of Seismic Hazard (t-DASH).
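For readers unfamiliar with this class of estimators, the sketch below illustrates, in Python, the general form of a RETIRA-like index: each pixel's TIR excess over the spatial scene average is standardized against pixel-wise temporal reference fields (mean and standard deviation) computed from homogeneous historical acquisitions. All names are illustrative and the stack is assumed cloud-masked and co-registered; this is a sketch of the published definition, not the authors' operational code.

    import numpy as np

    def retira_like_index(tir_stack, cloud_mask):
        # tir_stack:  (n_times, ny, nx) co-registered TIR images from the
        #             same calendar period and overpass time (assumption).
        # cloud_mask: boolean array of the same shape, True where cloudy.
        data = np.where(cloud_mask, np.nan, tir_stack)
        # Excess over the spatial scene average at each time, to remove
        # scene-wide effects (season, meteorology).
        dT = data - np.nanmean(data, axis=(1, 2), keepdims=True)
        # Pixel-wise temporal reference fields from the historical stack.
        mu = np.nanmean(dT, axis=0)
        sigma = np.nanstd(dT, axis=0)  # zero-variance pixels should be masked
        return (dT - mu) / sigma

Anomalies persisting in space and time above a chosen index threshold would then be grouped into candidate SSTAs.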
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on GRNs is a multi-variable, multi-objective, and multi-peak optimization problem for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for guiding the design of GRNs with superior robust adaptation.
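As an illustration of the initialization step described above, the Python sketch below couples Latin hypercube sampling with a plain global-best particle swarm update. It deliberately omits the paper's best-neighbor selection, particle crossover, and elitist preservation, and all hyperparameter values are assumptions, so it is a generic baseline rather than the proposed algorithm.

    import numpy as np
    from scipy.stats import qmc

    def pso_lhs(objective, bounds, n_particles=50, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T  # bounds: (low, high) per parameter
        dim = len(bounds)
        # Latin hypercube sampling spreads the initial population evenly
        # over the parameter box (12-dimensional for the GRN model).
        x = qmc.scale(qmc.LatinHypercube(d=dim, seed=seed).random(n_particles),
                      lo, hi)
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)]
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)]
        return gbest, float(pbest_f.min())

Here objective(p) would scalarize the two conflicting adaptation indices (e.g., as a weighted sum) evaluated by simulating the Michaelis-Menten model at parameter vector p.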
Harnessing QbD, Programming Languages, and Automation for Reproducible Biology.
Sadowski, Michael I; Grant, Chris; Fell, Tim S
2016-03-01
Building robust manufacturing processes from biological components is a task that is highly complex and requires sophisticated tools to describe processes, inputs, and measurements and to administer the management of knowledge, data, and materials. We argue that for bioengineering to fully access biological potential, it will require the application of statistically designed experiments to derive detailed empirical models of underlying systems. This requires execution of large-scale structured experimentation, for which laboratory automation is necessary. This in turn requires the development of expressive, high-level languages that allow reusability of protocols, characterization of their reliability, and a change in focus from implementation details to functional properties. We review recent developments in these areas and identify what we believe is an exciting trend that promises to revolutionize biotechnology. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Modeling lung cancer evolution and preclinical response by orthotopic mouse allografts.
Ambrogio, Chiara; Carmona, Francisco J; Vidal, August; Falcone, Mattia; Nieto, Patricia; Romero, Octavio A; Puertas, Sara; Vizoso, Miguel; Nadal, Ernest; Poggio, Teresa; Sánchez-Céspedes, Montserrat; Esteller, Manel; Mulero, Francisca; Voena, Claudia; Chiarle, Roberto; Barbacid, Mariano; Santamaría, David; Villanueva, Alberto
2014-11-01
Cancer evolution is a process that is still poorly understood because of the lack of versatile in vivo longitudinal studies. By generating murine non-small cell lung cancer (NSCLC) orthoallobanks and paired primary cell lines, we provide a detailed description of an in vivo, time-dependent cancer malignization process. We identify the acquisition of metastatic dissemination potential, the selection of co-driver mutations, and the appearance of naturally occurring intratumor heterogeneity, thus recapitulating the stochastic nature of human cancer development. This approach combines the robustness of genetically engineered cancer models with the flexibility of allograft methodology. We have applied this tool for the preclinical evaluation of therapeutic approaches. This system can be implemented to improve the design of future treatments for patients with NSCLC. ©2014 American Association for Cancer Research.
Thalamocortical mechanisms for integrating musical tone and rhythm
Musacchia, Gabriella; Large, Edward
2014-01-01
Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called “Core” and “Matrix” thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate tonal content within rhythm context in auditory cortex using classical types of “driving” and “modulatory” mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms. PMID:24103509
Deficits in facial affect recognition among antisocial populations: a meta-analysis.
Marsh, Abigail A; Blair, R J R
2008-01-01
Individuals with disorders marked by antisocial behavior frequently show deficits in recognizing displays of facial affect. Antisociality may be associated with specific deficits in identifying fearful expressions, which would implicate dysfunction in neural structures that subserve fearful expression processing. A meta-analysis of 20 studies was conducted to assess: (a) if antisocial populations show any consistent deficits in recognizing six emotional expressions; (b) beyond any generalized impairment, whether specific fear recognition deficits are apparent; and (c) if deficits in fear recognition are a function of task difficulty. Results show a robust link between antisocial behavior and specific deficits in recognizing fearful expressions. This impairment cannot be attributed solely to task difficulty. These results suggest dysfunction among antisocial individuals in specified neural substrates, namely the amygdala, involved in processing fearful facial affect.
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
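A minimal sketch of the winsorize-then-fit idea in Python, assuming statsmodels and scipy are available; the synthetic series, winsorization limits, and SARIMA orders are placeholders, not the values selected by the DESA algorithm:

    import numpy as np
    from scipy.stats.mstats import winsorize
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic monthly counts with one additive outlier (placeholder data).
    rng = np.random.default_rng(1)
    t = np.arange(120)
    y = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)
    y[60] += 80

    # Winsorizing caps the most extreme observations so additive outliers
    # exert less leverage on the parameter estimates.
    y_w = np.asarray(winsorize(y, limits=(0.05, 0.05)))

    # Seasonal ARIMA with a 12-month cycle; orders are illustrative.
    fit = SARIMAX(y_w, order=(1, 0, 1),
                  seasonal_order=(1, 1, 1, 12)).fit(disp=False)
    forecast = fit.forecast(steps=12)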
Development of EarthCube Governance: An Agile Approach
NASA Astrophysics Data System (ADS)
Pearthree, G.; Allison, M. L.; Patten, K.
2013-12-01
Governance of geosciences cyberinfrastructure is a complex and essential undertaking, critical in enabling distributed knowledge communities to collaborate and communicate across disciplines, distances, and cultures. Advancing science with respect to "grand challenges," such as global climate change, weather prediction, and core fundamental science, depends not just on technical cyber systems but also on social systems for strategic planning, decision-making, project management, learning, teaching, and building a community of practice. Simply put, a robust, agile technical system depends on an equally robust and agile social system. Cyberinfrastructure development is wrapped in social, organizational, and governance challenges, which may significantly impede progress. An agile development process is underway for governance of transformative investments in geosciences cyberinfrastructure through the NSF EarthCube initiative. Agile development is iterative and incremental, and promotes adaptive planning and rapid and flexible response. Such iterative deployment across a variety of EarthCube stakeholders encourages transparency, consensus, accountability, and inclusiveness. A project Secretariat acts as the coordinating body, carrying out duties for planning, organizing, communicating, and reporting. A broad coalition of stakeholder groups comprises an Assembly (Mainstream Scientists, Cyberinfrastructure Institutions, Information Technology/Computer Sciences, NSF EarthCube Investigators, Science Communities, EarthCube End-User Workshop Organizers, Professional Societies) to serve as a preliminary venue for identifying, evaluating, and testing potential governance models. To offer opportunity for broader end-user input, a crowd-source approach will engage stakeholders not otherwise involved. An Advisory Committee from the Earth, ocean, atmosphere, social, computer, and library sciences is guiding the process from a high-level policy point of view. Developmental evaluators from the social sciences embedded in the project provide real-time review and adjustments. While a large number of agencies and organizations have agreed to participate, in order to ensure an open and inclusive process, community-selected leaders, yet to be identified, will play key roles through an Assembly Advisory Council. Once consensus is reached on a governing framework, a community-selected demonstration governance pilot will help facilitate community convergence on system design.
Lotte, Fabien; Larrue, Florian; Mühl, Christian
2013-01-01
While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories. The main reason is their lack of robustness. Indeed, with current BCIs, mental state recognition is usually slow and often incorrect. Spontaneous BCIs (i.e., mental imagery-based BCIs) often rely on mutual learning efforts by the user and the machine, with BCI users learning to produce stable ElectroEncephaloGraphy (EEG) patterns (spontaneous BCI control being widely acknowledged as a skill) while the computer learns to automatically recognize these EEG patterns, using signal processing. Most research so far has focused on signal processing, mostly neglecting the human in the loop. However, how well the user masters the BCI skill is also a key element explaining BCI robustness. Indeed, if the user is not able to produce stable and distinct EEG patterns, then no signal processing algorithm will be able to recognize them. Unfortunately, despite the importance of BCI training protocols, they have been scarcely studied so far, and have been used mostly unchanged for years. In this paper, we advocate that current human training approaches for spontaneous BCI are most likely inappropriate. We notably study the instructional design literature in order to identify the key requirements and guidelines for a successful training procedure that promotes good and efficient skill learning. This literature study highlights that current spontaneous BCI user training procedures satisfy very few of these requirements and hence are likely to be suboptimal. We therefore identify the flaws in BCI training protocols according to instructional design principles, at several levels: in the instructions provided to the user, in the tasks he/she has to perform, and in the feedback provided. For each level, we propose new research directions that are theoretically expected to address some of these flaws and to help users learn the BCI skill more efficiently. PMID:24062669
More About Robustness of Coherence
NASA Astrophysics Data System (ADS)
Li, Pi-Yu; Liu, Feng; Xu, Yan-Qin; La, Dong-Sheng
2018-07-01
Quantum coherence is an important physical resource in quantum computation and quantum information processing. In this paper, the distribution of the robustness of coherence in multipartite quantum systems is considered. It is shown that the additivity of the robustness of coherence is not always valid for general quantum states, but the robustness of coherence is decreasing under partial trace for any bipartite quantum system. The ordering of states under the coherence measures RoC (robustness of coherence), the l1-norm of coherence C_l1, and the relative entropy of coherence C_r is also discussed.
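For concreteness, the two comparison measures have simple closed forms (RoC itself is the optimal value of a semidefinite program and is not sketched here). A minimal Python illustration, checked on the maximally coherent qubit state:

    import numpy as np

    def l1_coherence(rho):
        # C_l1: sum of absolute values of the off-diagonal entries.
        off = rho - np.diag(np.diag(rho))
        return float(np.abs(off).sum())

    def entropy(rho):
        # von Neumann entropy in bits.
        ev = np.linalg.eigvalsh(rho)
        ev = ev[ev > 1e-12]
        return float(-(ev * np.log2(ev)).sum())

    def relative_entropy_coherence(rho):
        # C_r(rho) = S(diag(rho)) - S(rho).
        return entropy(np.diag(np.diag(rho))) - entropy(rho)

    rho_plus = 0.5 * np.ones((2, 2))   # |+><+|, maximally coherent qubit
    assert np.isclose(l1_coherence(rho_plus), 1.0)
    assert np.isclose(relative_entropy_coherence(rho_plus), 1.0)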
Haebig, Eileen; Leonard, Laurence; Usler, Evan; Deevy, Patricia; Weber, Christine
2018-03-15
Previous behavioral studies have found deficits in lexical-semantic abilities in children with specific language impairment (SLI), including reduced depth and breadth of word knowledge. This study explored the neural correlates of early emerging familiar word processing in preschoolers with SLI and typical development. Fifteen preschoolers with typical development and 15 preschoolers with SLI were presented with pictures followed after a brief delay by an auditory label that did or did not match. Event-related brain potentials were time locked to the onset of the auditory labels. Children provided verbal judgments of whether the label matched the picture. There were no group differences in the accuracy of identifying when pictures and labels matched or mismatched. Event-related brain potential data revealed that mismatch trials elicited a robust N400 in both groups, with no group differences in mean amplitude or peak latency. However, the typically developing group demonstrated a more robust late positive component, elicited by mismatch trials. These initial findings indicate that lexical-semantic access of early acquired words, indexed by the N400, does not differ between preschoolers with SLI and typical development when highly familiar words are presented in isolation. However, the typically developing group demonstrated a more mature profile of postlexical reanalysis and integration, indexed by an emerging late positive component. The findings lay the necessary groundwork for better understanding processing of newly learned words in children with SLI.
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
Racial bias in implicit danger associations generalizes to older male targets.
Lundberg, Gustav J W; Neel, Rebecca; Lassetter, Bethany; Todd, Andrew R
2018-01-01
Across two experiments, we examined whether implicit stereotypes linking younger (~28-year-old) Black versus White men with violence and criminality extend to older (~68-year-old) Black versus White men. In Experiment 1, participants completed a sequential priming task wherein they categorized objects as guns or tools after seeing briefly-presented facial images of men who varied in age (younger versus older) and race (Black versus White). In Experiment 2, we used different face primes of younger and older Black and White men, and participants categorized words as 'threatening' or 'safe.' Results consistently revealed robust racial biases in object and word identification: Dangerous objects and words were identified more easily (faster response times, lower error rates), and non-dangerous objects and words were identified less easily, after seeing Black face primes than after seeing White face primes. Process dissociation procedure analyses, which aim to isolate the unique contributions of automatic and controlled processes to task performance, further indicated that these effects were driven entirely by racial biases in automatic processing. In neither experiment did prime age moderate racial bias, suggesting that the implicit danger associations commonly evoked by younger Black versus White men appear to generalize to older Black versus White men.
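The process dissociation procedure mentioned here separates controlled (C) and automatic (A) contributions from accuracy on congruent versus incongruent trials. A minimal Python sketch of the standard two-equation formulation (the numbers are illustrative, not the study's data):

    def pdp_estimates(p_correct_congruent, p_error_incongruent):
        # Congruent trials:   P(correct) = C + (1 - C) * A
        # Incongruent trials: P(error)   = (1 - C) * A
        C = p_correct_congruent - p_error_incongruent
        A = p_error_incongruent / (1.0 - C) if C < 1.0 else float("nan")
        return C, A

    # e.g., 92% correct on congruent trials, 24% errors on incongruent trials
    C, A = pdp_estimates(0.92, 0.24)   # C = 0.68, A = 0.75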
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) designed for multiple piecewise linear (PWL) models constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm-bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.
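A rough Python sketch of the multiple-model residual logic: a bank of innovation-form filters, one per piecewise linear model (or fault hypothesis), runs in parallel, and the hypothesis with the smallest residual energy is selected. The steady-state gains K are taken as given; deriving the robust gains from the AREs/LMI conditions is beyond this sketch.

    import numpy as np

    def kf_bank_residual_energies(models, y_seq, u_seq):
        # models: list of dicts with keys "A", "B", "C", "K" (predictor gain).
        energies = []
        for m in models:
            A, B, C, K = m["A"], m["B"], m["C"], m["K"]
            x = np.zeros(A.shape[0])
            e = 0.0
            for y, u in zip(y_seq, u_seq):
                r = y - C @ x                # innovation (residual)
                e += float(r @ r)
                x = A @ x + B @ u + K @ r    # one-step-ahead predictor update
            energies.append(e)
        return np.asarray(energies)

    # Isolation: best = int(np.argmin(kf_bank_residual_energies(models, ys, us)))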
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
Dimitrova, N; Nagaraj, A B; Razi, A; Singh, S; Kamalakaran, S; Banerjee, N; Joseph, P; Mankovich, A; Mittal, P; DiFeo, A; Varadan, V
2017-04-27
Characterizing the complex interplay of cellular processes in cancer would enable the discovery of key mechanisms underlying its development and progression. Published approaches to decipher driver mechanisms do not explicitly model tissue-specific changes in pathway networks and the regulatory disruptions related to genomic aberrations in cancers. We therefore developed InFlo, a novel systems biology approach for characterizing complex biological processes using a unique multidimensional framework integrating transcriptomic, genomic and/or epigenomic profiles for any given cancer sample. We show that InFlo robustly characterizes tissue-specific differences in activities of signalling networks on a genome scale using unique probabilistic models of molecular interactions on a per-sample basis. Using large-scale multi-omics cancer datasets, we show that InFlo exhibits higher sensitivity and specificity in detecting pathway networks associated with specific disease states when compared to published pathway network modelling approaches. Furthermore, InFlo's ability to infer the activity of unmeasured signalling network components was also validated using orthogonal gene expression signatures. We then evaluated multi-omics profiles of primary high-grade serous ovarian cancer tumours (N=357) to delineate mechanisms underlying resistance to frontline platinum-based chemotherapy. InFlo was the only algorithm to identify hyperactivation of the cAMP-CREB1 axis as a key mechanism associated with resistance to platinum-based therapy, a finding that we subsequently experimentally validated. We confirmed that inhibition of CREB1 phosphorylation potently sensitized resistant cells to platinum therapy and was effective in killing ovarian cancer stem cells that contribute to both platinum-resistance and tumour recurrence. Thus, we propose InFlo to be a scalable and widely applicable and robust integrative network modelling framework for the discovery of evidence-based biomarkers and therapeutic targets.
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
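To make the setting concrete, the Python sketch below (using cvxpy, with purely illustrative data) writes the robust counterpart of a single linear constraint under a box uncertainty set: each coefficient a_j may deviate from its nominal value by up to â_j, and the worst case adds the protection term â^T|x|.

    import cvxpy as cp
    import numpy as np

    a = np.array([2.0, 3.0])      # nominal constraint coefficients
    a_hat = np.array([0.2, 0.5])  # half-widths of the coefficient boxes
    b = 10.0
    c = np.array([1.0, 1.0])

    x = cp.Variable(2)
    # Box robust counterpart: a@x + a_hat@|x| <= b holds for every
    # realization of the coefficients in the box (worst case).
    prob = cp.Problem(cp.Maximize(c @ x),
                      [a @ x + a_hat @ cp.abs(x) <= b, x >= 0])
    prob.solve()

For an ellipsoidal set the protection term becomes a weighted 2-norm of x instead, which is what produces the different probability bounds on constraint satisfaction compared in the paper.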
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
Stochastic simulation and robust design optimization of integrated photonic filters
NASA Astrophysics Data System (ADS)
Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca
2017-01-01
Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
NASA Astrophysics Data System (ADS)
Szelag, Bertrand; Abraham, Alexis; Brision, Stéphane; Gindre, Paul; Blampey, Benjamin; Myko, André; Olivier, Segolene; Kopp, Christophe
2017-05-01
Silicon photonics is becoming a reality for next-generation communication systems, addressing the increasing needs of HPC (High Performance Computing) systems and datacenters. CMOS-compatible photonic platforms integrating passive and active devices are developed in many foundries. The use of existing and qualified microelectronics processes guarantees cost-efficient and mature photonic technologies. Meanwhile, photonic devices have their own fabrication constraints, not similar to those of CMOS devices, which can affect their performances. In this paper, we address the integration of a PN junction Mach-Zehnder modulator in a 200 mm CMOS-compatible photonic platform. Implantation-based device characteristics are impacted by many process variations, among which are screening layer thickness, dopant diffusion, and implantation mask overlay. CMOS devices are generally quite robust with respect to these processes thanks to dedicated design rules. For photonic devices, the situation is different since, most of the time, doped areas must be carefully located within waveguides, and CMOS solutions like self-alignment to the gate cannot be applied. In this work, we present different robust integration solutions for junction-based modulators. A simulation setup has been built in order to optimize the process conditions. It consists of a Matlab interface coupling process and electro-optic device simulators in order to run many iterations. Variations of modulator characteristics with process parameters are illustrated using this simulation setup. Parameters under study are, for instance, X- and Y-direction lithography shifts and screening oxide and slab thicknesses. A robust process and design approach leading to a PN junction Mach-Zehnder modulator insensitive to lithography misalignment is then proposed. Simulation results are compared with experimental data. Indeed, various modulators have been fabricated with different process conditions and integration schemes. Extensive electro-optic characterization of these components will be presented.
Semantic richness effects in lexical decision: The role of feedback.
Yap, Melvin J; Lim, Gail Y; Pexman, Penny M
2015-11-01
Across lexical processing tasks, it is well established that words with richer semantic representations are recognized faster. This suggests that the lexical system has access to meaning before a word is fully identified, and is consistent with a theoretical framework based on interactive and cascaded processing. Specifically, semantic richness effects are argued to be produced by feedback from semantic representations to lower-level representations. The present study explores the extent to which richness effects are mediated by feedback from lexical- to letter-level representations. In two lexical decision experiments, we examined the joint effects of stimulus quality and four semantic richness dimensions (imageability, number of features, semantic neighborhood density, semantic diversity). With the exception of semantic diversity, robust additive effects of stimulus quality and richness were observed for the targeted dimensions. Our results suggest that semantic feedback does not typically reach earlier levels of representation in lexical decision, and further reinforces the idea that task context modulates the processing dynamics of early word recognition processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Ying; Piehowski, Paul D.; Zhao, Rui
Nanoscale or single cell technologies are critical for biomedical applications. However, current mass spectrometry (MS)-based proteomic approaches require samples comprising a minimum of thousands of cells to provide in-depth profiling. Here, we report the development of a nanoPOTS (Nanodroplet Processing in One pot for Trace Samples) platform as a major advance in overall sensitivity. NanoPOTS dramatically enhances the efficiency and recovery of sample processing by downscaling processing volumes to <200 nL to minimize surface losses. When combined with ultrasensitive LC-MS, nanoPOTS allows identification of ~1500 to ~3000 proteins from ~10 to ~140 cells, respectively. By incorporating the Match Between Runs algorithm of MaxQuant, >3000 proteins were consistently identified from as few as 10 cells. Furthermore, we demonstrate robust quantification of ~2400 proteins from single human pancreatic islet thin sections from type 1 diabetic and control donors, illustrating the application of nanoPOTS for spatially resolved proteome measurements from clinical tissues.
Schlägel, Ulrike E; Lewis, Mark A
2016-12-01
Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
Processes contributing to resilience of coastal wetlands to sea-level rise
Stagg, Camille L.; Krauss, Ken W.; Cahoon, Donald R.; Cormier, Nicole; Conner, William H.; Swarzenski, Christopher M.
2016-01-01
The objectives of this study were to identify processes that contribute to resilience of coastal wetlands subject to rising sea levels and to determine whether the relative contribution of these processes varies across different wetland community types. We assessed the resilience of wetlands to sea-level rise along a transitional gradient from tidal freshwater forested wetland (TFFW) to marsh by measuring processes controlling wetland elevation. We found that, over 5 years of measurement, TFFWs were resilient, though some only marginally, and oligohaline marshes exhibited robust resilience to sea-level rise. We identified fundamental differences in how resilience is maintained across wetland community types, which have important implications for management activities that aim to restore or conserve resilient systems. We showed that the relative importance of surface and subsurface processes in controlling wetland surface elevation change differed between TFFWs and oligohaline marshes. The marshes had significantly higher rates of surface accretion than the TFFWs, and in the marshes, surface accretion was the primary contributor to elevation change. In contrast, elevation change in TFFWs was more heavily influenced by subsurface processes, such as root zone expansion or compaction, which played an important role in determining resilience of TFFWs to rising sea level. When root zone contributions were removed statistically from comparisons between relative sea-level rise and surface elevation change, sites that previously had elevation rate deficits showed a surplus. Therefore, assessments of wetland resilience that do not include subsurface processes will likely misjudge vulnerability to sea-level rise.
Bridge to shared governance: developing leadership of frontline nurses.
Dearmon, Valorie A; Riley, Bettina H; Mestas, Lisa G; Buckner, Ellen B
2015-01-01
Transforming health care systems to improve quality is the responsibility of nurse executives and frontline nurses alike, yet frontline nurses are often ill-prepared to share leadership and accountability needed for transformation. The aim of this qualitative study was to describe the process used to build leadership capacity of frontline nurses engaged in resolving operational failures interrupting nursing care. The leadership development process served to bridge staff transition to shared governance. This institutional review board-approved qualitative research was designed to identify the effects of mentoring by the chief nursing officer and faculty partners on leadership development of frontline nurses working to find solutions to operational failures. Twelve nurses from 4 medical surgical units participated in a Frontline Innovations' nurse-led interdisciplinary group, which met over 18 months. Transcriptions of audiotaped meetings were analyzed for emerging process and outcome themes. The transcripts revealed a robust leadership development journey of frontline nurses engaged in process improvement. Themes that emerged from the mentoring process included engagement, collaboration, empowerment, confidence, and lifelong learning. The mentoring process provided frontline nurses the leadership foundation necessary to initiate shared governance.
Mandillo, Silvia; Tucci, Valter; Hölter, Sabine M.; Meziane, Hamid; Banchaabouchi, Mumna Al; Kallnik, Magdalena; Lad, Heena V.; Nolan, Patrick M.; Ouagazzal, Abdel-Mouttalib; Coghill, Emma L.; Gale, Karin; Golini, Elisabetta; Jacquot, Sylvie; Krezel, Wojtek; Parker, Andy; Riet, Fabrice; Schneider, Ilka; Marazziti, Daniela; Auwerx, Johan; Brown, Steve D. M.; Chambon, Pierre; Rosenthal, Nadia; Tocchini-Valentini, Glauco; Wurst, Wolfgang
2008-01-01
Establishing standard operating procedures (SOPs) as tools for the analysis of behavioral phenotypes is fundamental to mouse functional genomics. It is essential that the tests designed provide reliable measures of the process under investigation but most importantly that these are reproducible across both time and laboratories. For this reason, we devised and tested a set of SOPs to investigate mouse behavior. Five research centers were involved across France, Germany, Italy, and the UK in this study, as part of the EUMORPHIA program. All the procedures underwent a cross-validation experimental study to investigate the robustness of the designed protocols. Four inbred reference strains (C57BL/6J, C3HeB/FeJ, BALB/cByJ, 129S2/SvPas), reflecting their use as common background strains in mutagenesis programs, were analyzed to validate these tests. We demonstrate that the operating procedures employed, which includes open field, SHIRPA, grip-strength, rotarod, Y-maze, prepulse inhibition of acoustic startle response, and tail flick tests, generated reproducible results between laboratories for a number of the test output parameters. However, we also identified several uncontrolled variables that constitute confounding factors in behavioral phenotyping. The EUMORPHIA SOPs described here are an important start-point for the ongoing development of increasingly robust phenotyping platforms and their application in large-scale, multicentre mouse phenotyping programs. PMID:18505770
Click-On-Diagram Questions: a New Tool to Study Conceptions Using Classroom Response Systems
NASA Astrophysics Data System (ADS)
LaDue, Nicole D.; Shipley, Thomas F.
2018-06-01
Geoscience instructors depend upon photos, diagrams, and other visualizations to depict geologic structures and processes that occur over a wide range of temporal and spatial scales. This proof-of-concept study tests click-on-diagram (COD) questions, administered using a classroom response system (CRS), as a research tool for identifying spatial misconceptions. First, we propose a categorization of spatial conceptions associated with geoscience concepts. Second, we implemented the COD questions in an undergraduate introductory geology course. Each question was implemented three times: pre-instruction, post-instruction, and at the end of the course to evaluate the stability of students' conceptual understanding. We classified each instance as (1) a false belief that was easily remediated, (2) a flawed mental model that was not fully transformed, or (3) a robust misconception that persisted despite targeted instruction. Geographic Information System (GIS) software facilitated spatial analysis of students' answers. The COD data confirmed known misconceptions about Earth's structure, geologic time, and base level and revealed a novel robust misconception about hot spot formation. Questions with complex spatial attributes were less likely to change following instruction and more likely to be classified as a robust misconception. COD questions provided efficient access to students' conceptual understanding. CRS-administered COD questions present an opportunity to gather spatial conceptions with large groups of students, immediately, building the knowledge base about students' misconceptions and providing feedback to guide instruction.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and the SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block, using the largest singular value of each sub-block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression, and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
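A condensed Python sketch of the pipeline just described; the block size, quantization step, sub-block size, and shift frequency threshold are assumed values for illustration, not the paper's tuned settings:

    import numpy as np
    from collections import Counter
    from scipy.fft import dctn

    def copy_move_features(img, B=8, q=16, sub=4):
        # img: 2-D grayscale array. One feature vector per overlapping block:
        # the largest singular value of each quantized-DCT sub-block.
        h, w = img.shape
        feats, coords = [], []
        for i in range(h - B + 1):
            for j in range(w - B + 1):
                d = np.round(dctn(img[i:i + B, j:j + B], norm="ortho") / q)
                f = [np.linalg.svd(d[r:r + sub, c:c + sub], compute_uv=False)[0]
                     for r in range(0, B, sub) for c in range(0, B, sub)]
                feats.append(f)
                coords.append((i, j))
        feats, coords = np.asarray(feats), np.asarray(coords)
        order = np.lexsort(feats.T[::-1])   # lexicographic sort of features
        return feats[order], coords[order]

    def match_duplicates(feats, coords, min_shift_count=50):
        shifts, pairs = Counter(), []
        for k in range(len(feats) - 1):
            if np.array_equal(feats[k], feats[k + 1]):  # neighbors after sorting
                s = tuple(np.abs(coords[k + 1] - coords[k]))
                shifts[s] += 1
                pairs.append((tuple(coords[k]), tuple(coords[k + 1]), s))
        # Keep only pairs whose shift vector occurs frequently enough.
        return [p for p in pairs if shifts[p[2]] >= min_shift_count]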
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that can perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subject to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
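As an illustration of the second approach: for a spectral-norm-bounded data perturbation ||ΔA|| ≤ ρ, the worst-case residual of least squares equals ||Ax − b|| + ρ||x||, and its minimizer is known to lie on a Tikhonov (ridge) path. A crude Python sketch that searches that path (a full treatment would solve the equivalent second-order cone program):

    import numpy as np

    def robust_least_squares(A, b, rho, lambdas=np.logspace(-6, 3, 200)):
        # Evaluate the worst-case objective along the ridge path and keep
        # the best point found.
        AtA, Atb = A.T @ A, A.T @ b
        I = np.eye(A.shape[1])
        best_x, best_val = None, np.inf
        for lam in lambdas:
            x = np.linalg.solve(AtA + lam * I, Atb)
            val = np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val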
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.
2003-01-01
Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult, as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the differentially variable component analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations, we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1 in an awake, behaving macaque monkey.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable-fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. The Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when solutions fall into the trust region, the analytical model is used instead. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Distribution path robust optimization of electric vehicle with multiple distribution centers
Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi
2018-01-01
To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the distribution path problem of EVs with multiple distribution centers, considering charging facilities, is necessary. With minimum transport time as the goal, a robust optimization model of EV distribution paths with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme contains all road-by-road path data from the outset, using the three-segment mixed coding and decoding method. During genetic manipulation, different interlacing and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169
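In Bertsimas' model, the adjustable robustness enters through a budget Γ: for a fixed route, at most Γ road segments simultaneously take their worst-case delay, with a fractional Γ protecting a fraction of one more segment. A small Python sketch of this protection function for a fixed route (illustrative numbers; the paper embeds this evaluation inside the genetic search rather than scoring fixed routes):

    def robust_path_time(nominal, deviation, gamma):
        # nominal, deviation: per-segment nominal travel times and maximum
        # delays along a fixed route; gamma: robustness budget.
        devs = sorted(deviation, reverse=True)
        k = int(gamma)
        protection = sum(devs[:k])
        if k < len(devs):
            protection += (gamma - k) * devs[k]  # fractional part of the budget
        return sum(nominal) + protection

    # Five-segment route, at most 2.5 segments delayed at once -> 24.5
    t = robust_path_time([4, 6, 3, 5, 2], [1.0, 2.5, 0.5, 1.5, 0.8], 2.5)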
Evaluating the value and impact of the Victorian Audit of Surgical Mortality.
Retegan, Claudia; Russell, Colin; Harris, Darren; Andrianopoulos, Nick; Beiles, C Barry
2013-10-01
Since the Victorian Audit of Surgical Mortality (VASM) commenced in 2007, 95% of Victorian Fellows have agreed to participate and have provided data on the deaths of patients receiving surgical care. All public, and the majority of private, hospitals involved in the delivery of surgical services in Victoria have been submitting data on deaths associated with surgery. De-identified reports on this data are distributed in regular annual reports and case note review booklets. Although informal feedback on the perceived value of the audit was encouraging, a formal review of all aspects of the audit was felt necessary. An independent formal review of VASM governance, documentation, datasets and data analysis was performed, in addition to a survey of 257 individuals (surgeons and other stakeholders) on the perceived impact of VASM. The review confirmed increasing participation and acceptance by surgeons since the inception of the project. Governance mechanisms were found to be effective and acknowledged by stakeholders and collaborators. Robust participation rates have been achieved, and stakeholders were generally satisfied with the quality of feedback. Suggestions for improvement were provided by some surgeons and hospitals. External review of VASM processes and procedures confirmed that the audit was operating effectively, with robust quality control and achieving the trust of stakeholders. The educational value of the audit to the surgical community was acknowledged and areas for future improvement have been identified. © 2013 Royal Australasian College of Surgeons.
Alibhai, Sky; Jewell, Zoe; Evans, Jonah
2017-01-01
Acquiring reliable data on large felid populations is crucial for effective conservation and management. However, large felids, typically solitary, elusive, and nocturnal, are difficult to survey. Tagging and following individuals with VHF or GPS technology is the standard approach, but costs are high and these methodologies can compromise animal welfare. Such limitations can restrict the use of these techniques at population or landscape levels. In this paper we describe a robust technique to identify and sex individual pumas from footprints. We used a standardized image collection protocol to collect a reference database of 535 footprints from 35 captive pumas across 10 facilities: 19 females (300 footprints) and 16 males (235 footprints), ranging in age from 1 to 20 years. Images were processed in JMP data visualization software, generating one hundred and twenty-three measurements from each footprint. Data were analyzed using a customized model based on a pairwise trail comparison using robust cross-validated discriminant analysis with a Ward's clustering method. Classification accuracy was consistently > 90% for individuals and for the correct classification of footprints within trails, and > 99% for sex classification. The technique has the potential to greatly augment the methods available for studying puma and other elusive felids, and is amenable to both citizen-science and opportunistic/local community data collection efforts, particularly as the data collection protocol is inexpensive and intuitive.
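The study's customized pairwise trail comparison and Ward clustering are not reproduced here, but the core idea of cross-validated discriminant classification of footprint measurements can be sketched with standard tools. A generic Python baseline with placeholder data standing in for the 123 JMP-derived measurements:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(535, 123))    # one row per footprint (placeholder)
    y = rng.integers(0, 2, size=535)   # e.g., 0 = female, 1 = male

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    accuracy = cross_val_score(clf, X, y, cv=10).mean()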
Ausseil, Frederic; Samson, Arnaud; Aussagues, Yannick; Vandenberghe, Isabelle; Creancier, Laurent; Pouny, Isabelle; Kruczynski, Anna; Massiot, Georges; Bailly, Christian
2007-02-01
To discover original inhibitors of the ubiquitin-proteasome pathway, the authors have developed a cell-based bioluminescent assay and used it to screen collections of plant extracts and chemical compounds. They first established a DLD-1 human colon cancer cell line that stably expresses a 4Ubiquitin-Luciferase (4Ub-Luc) reporter protein, efficiently targeted to the ubiquitin-proteasome degradation pathway. The assay was then adapted to 96- and 384-well plate formats and calibrated with reference proteasome inhibitors. Assay robustness was carefully assessed, particularly cell toxicity, and the statistical Z factor was calculated to be 0.83, demonstrating a good performance level of the assay. A total of 18,239 molecules and 15,744 plant extracts and fractions thereof were screened for their capacity to increase the luciferase activity in DLD-1 4Ub-Luc cells, and 21 molecules and 66 extracts inhibiting the ubiquitin-proteasome pathway were identified. The fractionation of an active methanol extract of Physalis angulata L. aerial parts was performed to isolate 2 secosteroids known as physalins B and C. Accumulation of ubiquitinated proteins after physalin treatment was confirmed in a cell-based Western blot assay, validating the screening process. The method reported here thus provides a robust approach to identify novel ubiquitin-proteasome pathway inhibitors in large collections of chemical compounds and natural products.
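[Editor's note] The Z factor quoted above is the standard assay-quality statistic of Zhang et al. (1999), computed from the means and standard deviations of positive and negative control wells. A minimal sketch, with purely illustrative control readings:

```python
import numpy as np

def z_factor(pos, neg):
    """Z factor for assay quality (Zhang et al., 1999):
    Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values between 0.5 and 1 indicate an excellent assay."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Illustrative luminescence readings: proteasome-inhibited (positive)
# and untreated (negative) control wells.
rng = np.random.default_rng(4)
print(f"Z = {z_factor(rng.normal(10000, 500, 48), rng.normal(2000, 300, 48)):.2f}")
```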
A signature of 12 microRNAs is robustly associated with growth rate in a variety of CHO cell lines.
Klanert, Gerald; Jadhav, Vaibhav; Shanmukam, Vinoth; Diendorfer, Andreas; Karbiener, Michael; Scheideler, Marcel; Bort, Juan Hernández; Grillari, Johannes; Hackl, Matthias; Borth, Nicole
2016-10-10
As Chinese Hamster Ovary (CHO) cells are the cell line of choice for the production of human-like recombinant proteins, there is interest in genetic optimization of host cell lines to overcome certain limitations in their growth rate and protein secretion. At the same time, a detailed understanding of these processes could be used to advantage by identification of marker transcripts that characterize states of performance. In this context, microRNAs (miRNAs) that exhibit a robust correlation to the growth rate of CHO cells were determined by analyzing miRNA expression profiles in a comprehensive collection of 46 samples including CHO-K1, CHO-S and CHO-DUKXB11, which were adapted to various culture conditions, and analyzed in different growth stages using microarrays. By applying Spearman or Pearson correlation coefficient criteria of > |0.6|, miRNAs with high correlation to the overall growth, or growth rates observed in exponential, serum-free, and serum-free exponential phase were identified. An overlap of twelve miRNAs common to all sample sets was revealed, with nine positively and three negatively correlating miRNAs. The panel of miRNAs identified here can help to understand growth regulation in CHO cells and contains putative engineering targets as well as biomarkers for cell lines with advantageous growth characteristics. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
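[Editor's note] The correlation screen described above is straightforward to reproduce in outline. A minimal sketch of the |rho| > 0.6 filter, assuming an expression matrix of miRNAs by samples and a matching vector of growth rates (all data and names here are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import spearmanr

def correlated_mirnas(expr, growth, names, threshold=0.6):
    """Screen miRNAs whose expression correlates with growth rate beyond
    |rho| > threshold (Spearman), split into positive and negative sets."""
    pos, neg = [], []
    for row, name in zip(expr, names):
        rho, _ = spearmanr(row, growth)
        if rho > threshold:
            pos.append(name)
        elif rho < -threshold:
            neg.append(name)
    return pos, neg

# Illustrative data: 20 miRNAs x 46 samples, plus per-sample growth rates.
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 46))
growth = rng.normal(size=46)
up, down = correlated_mirnas(expr, growth, [f"miR-{i}" for i in range(20)])
# Repeating the screen per sample set and intersecting the results yields
# the common panel.
```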
NASA Technical Reports Server (NTRS)
Zanchettin, Davide; Khodri, Myriam; Timmreck, Claudia; Toohey, Matthew; Schmidt, Anja; Gerber, Edwin P.; Hegerl, Gabriele; Robock, Alan; Pausata, Francesco; Ball, William T.;
2016-01-01
The enhancement of the stratospheric aerosol layer by volcanic eruptions induces a complex set of responses causing global and regional climate effects on a broad range of timescales. Uncertainties exist regarding the climatic response to strong volcanic forcing identified in coupled climate simulations that contributed to the fifth phase of the Coupled Model Intercomparison Project (CMIP5). In order to better understand the sources of these model diversities, the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP) has defined a coordinated set of idealized volcanic perturbation experiments to be carried out in alignment with the CMIP6 protocol. VolMIP provides a common stratospheric aerosol data set for each experiment to minimize differences in the applied volcanic forcing. It defines a set of initial conditions to assess how internal climate variability contributes to determining the response. VolMIP will assess to what extent volcanically forced responses of the coupled ocean-atmosphere system are robustly simulated by state-of-the-art coupled climate models and identify the causes that limit robust simulated behavior, especially differences in the treatment of physical processes. This paper illustrates the design of the idealized volcanic perturbation experiments in the VolMIP protocol and describes the common aerosol forcing input data sets to be used.
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
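[Editor's note] The three steps described, a Lorentzian-loss fit, FDR-based outlier flagging, and an ordinary least-squares refit, can be approximated as below for an illustrative exponential-decay model. This is a sketch in the spirit of ROUT, not Motulsky and Brown's exact implementation; the model function and the p-value construction are assumptions:

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

def exp_decay(p, t):
    return p[0] * np.exp(-p[1] * t) + p[2]

def rout_like_fit(t, y, p0, q=0.01):
    """ROUT-style fit (sketch): robust regression under a Lorentzian
    (Cauchy) scatter assumption, FDR-based outlier flagging, then
    ordinary least squares on the retained points."""
    robust = least_squares(lambda p: exp_decay(p, t) - y, p0, loss="cauchy")
    resid = exp_decay(robust.x, t) - y
    sigma = np.median(np.abs(resid)) / 0.6745            # robust scale via MAD
    pvals = 2 * stats.t.sf(np.abs(resid) / sigma, df=len(y) - len(p0))
    order = np.argsort(pvals)                            # Benjamini-Hochberg step-up
    passed = np.nonzero(pvals[order] <= q * np.arange(1, len(y) + 1) / len(y))[0]
    outliers = np.zeros(len(y), dtype=bool)
    if passed.size:
        outliers[order[:passed.max() + 1]] = True
    keep = ~outliers
    final = least_squares(lambda p: exp_decay(p, t[keep]) - y[keep], robust.x)
    return final.x, outliers
```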
ERIC Educational Resources Information Center
Li, Ming
2013-01-01
The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…
Smith predictor-based multiple periodic disturbance compensation for long dead-time processes
NASA Astrophysics Data System (ADS)
Tan, Fang; Li, Han-Xiong; Shen, Ping
2018-05-01
Many disturbance rejection methods have been proposed for processes with dead time, but these existing methods may not work well under multiple periodic disturbances. In this paper, a multiple periodic disturbance rejection scheme is proposed under the Smith predictor configuration for processes with long dead time. One feedback loop is added to compensate for periodic disturbances while retaining the advantage of the Smith predictor. With information on the disturbance spectrum, the added feedback loop can remove multiple periodic disturbances effectively. Robust stability can be maintained, as shown through rigorous analysis. Finally, simulation examples demonstrate the effectiveness and robustness of the proposed method for processes with long dead time.
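[Editor's note] For readers unfamiliar with the baseline structure, a minimal Smith predictor can be sketched with the python-control package. The paper's added feedback loop for periodic-disturbance compensation is not reproduced here, and all plant and controller values are illustrative:

```python
import control as ct

# Process: first-order lag with long dead time, G(s) = K/(tau*s + 1) * e^{-L*s}.
K, tau, L = 1.0, 10.0, 8.0
num_d, den_d = ct.pade(L, 4)              # 4th-order Pade approximation of the delay
G0 = ct.tf([K], [tau, 1])                 # delay-free part of the model
G = G0 * ct.tf(num_d, den_d)              # full process with (approximate) dead time
C = ct.tf([2.0, 0.2], [1, 0])             # PI controller tuned against G0, not G

# Smith predictor: the predictor loop wraps (G0 - G) around C, so the
# equivalent forward-path controller is C / (1 + C*(G0 - G)).
C_sp = ct.feedback(C, G0 - G)
T = ct.feedback(C_sp * G, 1)              # closed-loop setpoint response
```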
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and batch process scheduling problems are presented. PMID:21935263
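[Editor's note] As a concrete illustration of the simplest case, the robust counterpart of a linear constraint under an interval (box) uncertainty set replaces each uncertain coefficient by its worst case, which adds an absolute-value term. A minimal sketch with cvxpy, using made-up numbers:

```python
import numpy as np
import cvxpy as cp

# Nominal constraint a^T x <= b, with each coefficient a_j known only to
# lie in [a_j - ahat_j, a_j + ahat_j]. The worst case over the interval
# set gives the robust counterpart: a^T x + ahat^T |x| <= b.
a = np.array([4.0, 3.0])
ahat = np.array([0.4, 0.3])     # uncertainty half-widths (illustrative)
b = 12.0
c = np.array([3.0, 5.0])

x = cp.Variable(2, nonneg=True)
prob = cp.Problem(cp.Maximize(c @ x), [a @ x + ahat @ cp.abs(x) <= b])
prob.solve()
# A pure ellipsoidal set would instead add Omega * ||diag(ahat) x||_2
# on the left-hand side, a second-order cone constraint.
```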
An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.
Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin
2015-08-01
This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness of needle tracking, and integration of visual servoing with the mechatronics system.
Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.
Clark, D E
2001-09-01
This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.
[Responsible research and development? Translating absences from nanotechnology in Portugal].
Fonseca, Paulo F C; Pereira, Tiago Santos
2017-01-01
This article analyzes how responsible innovation has been discussed and implemented in the context of one of the Portuguese government's main activities to foster nanotechnology. Through the actor-network theory and the sociology of absences, we investigate the process of coproduction at the International Iberian Nanotechnology Laboratory to identify how concerns about responsible development have been implemented or ignored in the rules and practices. The institute emerged from a sociotechnical imagination that views it as an autonomous unit for producing technological innovations aimed exclusively at increasing competitiveness in a global market, which has been an obstacle to the materialization of robust responsible development practices.
Magnetoencephalographic Signals Identify Stages in Real-Life Decision Processes
Braeutigam, Sven; Stins, John F.; Rose, Steven P. R.; Swithenby, Stephen J.; Ambler, Tim
2001-01-01
We used magnetoencephalography (MEG) to study the dynamics of neural responses in eight subjects engaged in shopping for day-to-day items from supermarket shelves. This behavior not only has personal and economic importance but also provides an example of an experience that is both personal and shared between individuals. The shopping experience enables the exploration of neural mechanisms underlying choice based on complex memories. Choosing among different brands of closely related products activated a robust sequence of signals within the first second after the presentation of the choice images. This sequence engaged first the visual cortex (80-100 ms), then as the images were analyzed, predominantly the left temporal regions (310-340 ms). At longer latency, characteristic neural activation was found in motor speech areas (500-520 ms) for images requiring low salience choices with respect to previous (brand) memory, and in right parietal cortex for high salience choices (850-920 ms). We argue that the neural processes associated with the particular brand-choice stimulus can be separated into identifiable stages through observation of MEG responses and knowledge of functional anatomy. PMID:12018772
Using a Red Team to devise countermeasures
NASA Astrophysics Data System (ADS)
Swedenburg, R. L.
1995-01-01
The ability of a defense system to operate effectively when deployed in battle is dependent on designs able to deal with countermeasures against the defense. The formation of a technical Red Team to stress the preliminary designs of the defensive system with technologically feasible and effective potential countermeasures provides a means to identify such potential countermeasures. This paper describes the experience of the U.S. Ballistic Missile Defense Organization's (BMDO) Theater Missile Defense Red Team since the Gulf War in 1991, the Red-Blue Exchange process, and the value it has provided to the designers of the U.S. Theater Missile Defense systems for developing robust systems. A wide range of technologically feasible countermeasures has been devised, analyzed, tested for feasibility, and provided to the system developers for mitigation design. The process for independently analyzing possible susceptibilities of preliminary designs and exploiting those susceptibilities to identify possible countermeasures is explained, as are the design and characterization of the Red Team's countermeasures, the determination of their feasibility, and the analysis of their potential effectiveness against the defense. A technique for the Blue Team's designers to deal with a wide range of potential countermeasures is also described.
Joining and Integration of Silicon Carbide-Based Materials for High Temperature Applications
NASA Technical Reports Server (NTRS)
Halbig, Michael C.; Singh, Mrityunjay
2016-01-01
Advanced joining and integration technologies for silicon carbide-based ceramics and ceramic matrix composites are enabling for their implementation into wide-scale aerospace and ground-based applications. Robust joining and integration technologies allow large and complex shapes to be fabricated and integrated with the larger system. Potential aerospace applications include lean-direct fuel injectors, thermal actuators, turbine vanes, blades, shrouds, combustor liners and other hot section components. Ground-based applications include components for energy and environmental systems. Performance requirements and processing challenges are identified for the successful implementation of different joining technologies. An overview is provided of several joining approaches developed for high temperature applications. In addition, various characterization approaches were pursued to provide an understanding of the processing-microstructure-property relationships. Microstructural analysis of the joint interfaces was conducted using optical, scanning electron, and transmission electron microscopy to identify phases and evaluate the bond quality. Mechanical testing results are presented along with the need for new standardized test methods. The critical need for tailoring interlayer compositions for optimum joint properties is also highlighted.
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images, with a small training set generated from less than 10% of the population. This approach is based on histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is correlated with the homologous region belonging to the training dataset that best matches the test image using feature vector correlation techniques. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image. The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth region of interest of the test images manually annotated by experts. This tool successfully identified a mask around the LV and RV, and furthermore the minimal region of interest around the LV that fully enclosed the left ventricle, in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 +/- 0.09.
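[Editor's note] A rough sketch of the motion-weighted descriptor stage, using scikit-image's HOG implementation. The weighting scheme and parameters here are simplifications assumed for illustration, not the authors' exact pipeline:

```python
import numpy as np
from skimage.feature import hog

def motion_weighted_hog(frames):
    """HOG features of a mean cine frame weighted by the frame-to-frame
    motion map, so regions with the most cardiac motion dominate the
    descriptor. `frames` is a (n_frames, H, W) array."""
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=0)
    weighted = frames.mean(axis=0) * (motion / (motion.max() + 1e-9))
    return hog(weighted, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Matching a test image's descriptor against training descriptors can then
# use simple feature-vector correlation, e.g. np.corrcoef.
```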
Supervisor Expertise, Teacher Autonomy and Environmental Robustness.
ERIC Educational Resources Information Center
Street, Sue; Licata, Joseph W.
This study examines the collective perspective that teachers in schools have about the relationship between the supervisory expertise of the principal, teacher work autonomy, and school environmental robustness. Supervisory expertise, and teachers' satisfaction with the supervisory process, is measured with the "Fidelity of Supervision…
Temporal assessment of radiomic features on clinical mammography in a high-risk population
NASA Astrophysics Data System (ADS)
Mendel, Kayla R.; Li, Hui; Lan, Li; Chan, Chun-Wai; King, Lauren M.; Tayob, Nabihah; Whitman, Gary; El-Zein, Randa; Bedrosian, Isabelle; Giger, Maryellen L.
2018-02-01
Extraction of high-dimensional quantitative data from medical images has become necessary in disease risk assessment, diagnostics and prognostics. Radiomic workflows for mammography typically involve a single medical image for each patient although medical images may exist for multiple imaging exams, especially in screening protocols. Our study takes advantage of the availability of mammograms acquired over multiple years for the prediction of cancer onset. This study included 841 images from 328 patients who developed subsequent mammographic abnormalities, which were confirmed as either cancer (n=173) or non-cancer (n=155) through diagnostic core needle biopsy. Quantitative radiomic analysis was conducted on antecedent FFDMs acquired a year or more prior to diagnostic biopsy. Analysis was limited to the breast contralateral to that in which the abnormality arose. Novel metrics were used to identify robust radiomic features. The most robust features were evaluated in the task of predicting future malignancies on a subset of 72 subjects (23 cancer cases and 49 non-cancer controls) with mammograms over multiple years. Using linear discriminant analysis, the robust radiomic features were merged into predictive signatures by using: (i) features from only the most recent contralateral mammogram, (ii) the change in feature values between mammograms, and (iii) the ratio of feature values over time, yielding AUCs of 0.57 (SE=0.07), 0.63 (SE=0.06), and 0.66 (SE=0.06), respectively. The AUCs for temporal radiomics (ratio) statistically differed from chance, suggesting that changes in radiomics over time may be critical for risk assessment. Overall, we found that our two-stage process of robustness assessment followed by performance evaluation served well in our investigation on the role of temporal radiomics in risk assessment.
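[Editor's note] A minimal sketch of the temporal-signature step with scikit-learn's LDA, assuming per-subject feature matrices from two exams. The data, feature count, and cross-validation protocol are illustrative, not those of the study:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins: radiomic features from an earlier and a more
# recent contralateral exam for 72 subjects, and future cancer status.
rng = np.random.default_rng(1)
X_prior = rng.normal(loc=5.0, scale=1.0, size=(72, 5))
X_recent = X_prior + rng.normal(scale=0.3, size=(72, 5))
y = rng.integers(0, 2, size=72)

X_ratio = X_recent / X_prior          # temporal "ratio" features, signature (iii)
lda = LinearDiscriminantAnalysis()
auc = cross_val_score(lda, X_ratio, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```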
Buss, Aaron T.; Fox, Nicholas; Boas, David A.; Spencer, John P.
2013-01-01
Visual working memory (VWM) is a core cognitive system with a highly limited capacity. The present study is the first to examine VWM capacity limits in early development using functional neuroimaging. We recorded optical neuroimaging data while 3- and 4-year-olds completed a change detection task where they detected changes in the shapes of objects after a brief delay. Near-infrared sources and detectors were placed over the following 10–20 positions: F3 and F5 in left frontal cortex, F4 and F6 in right frontal cortex, P3 and P5 in left parietal cortex, and P4 and P6 in right parietal cortex. The first question was whether we would see robust task-specific activation of the frontal-parietal network identified in the adult fMRI literature. This was indeed the case: three left frontal channels and 11 of 12 parietal channels showed a statistically robust difference between the concentration of oxygenated and deoxygenated hemoglobin following the presentation of the sample array. Moreover, four channels in the left hemisphere near P3, P5, and F5 showed a robust increase as the working memory load increased from 1–3 items. Notably, the hemodynamic response did not asymptote at 1–2 items as expected from previous fMRI studies with adults. Finally, 4-year-olds showed a more robust parietal response relative to 3-year-olds, and an increasing sensitivity to the memory load manipulation. These results demonstrate that fNIRS is an effective tool to study the neural processes that underlie the early development of VWM capacity. PMID:23707803
Dietzel, Matthias; Baltzer, Pascal A T; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P; Bogdan, Martin; Kaiser, Werner A
2012-07-01
Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test the robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The MathWorks, Inc.; feed-forward architecture; resilient back-propagation algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of "Training Epochs" (TE), "Hidden Layers" (HL), "Learning Rate" (LR) and "Neurons" (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results, applying 4-fold cross validation. The best network architecture was identified by comparing the corresponding area under the ROC curve (AUC). Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase the diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean AUC 0.888; P < 0.001) and robust (CI: 0.885-0.892; range: 0.880-0.898). The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
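[Editor's note] A loose analogue of the evaluation protocol can be sketched with scikit-learn; the MLP below stands in for the paper's MATLAB resilient-backprop network, and the data are random placeholders for the 18 descriptors:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins for the database: 18 descriptors per lesion,
# benign (0) vs malignant (1) as the classification variable.
rng = np.random.default_rng(3)
X = rng.normal(size=(1084, 18))
y = rng.integers(0, 2, size=1084)

# One hidden layer with 5 neurons, evaluated over repeated cycles (n: 9)
# of 4-fold cross-validated ROC analysis, echoing the study design.
aucs = [cross_val_score(MLPClassifier(hidden_layer_sizes=(5,), max_iter=500),
                        X, y, cv=4, scoring="roc_auc").mean()
        for _ in range(9)]
print(f"mean AUC {np.mean(aucs):.3f} (range {min(aucs):.3f}-{max(aucs):.3f})")
```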
An index-based robust decision making framework for watershed management in a changing climate.
Kim, Yeonjoo; Chung, Eun-Sung
2014-03-01
This study developed an index-based robust decision making framework for watershed management dealing with water quantity and quality issues in a changing climate. It consists of two parts: management alternative development and alternative analysis. The first part, alternative development, consists of six steps: 1) understand the watershed components and processes using the HSPF model; 2) identify the spatial vulnerability ranking using two indices, potential streamflow depletion (PSD) and potential water quality deterioration (PWQD); 3) quantify residents' preferences on water management demands and calculate the watershed evaluation index, a weighted combination of PSD and PWQD; 4) set quantitative targets for water quantity and quality; 5) develop a list of feasible alternatives; and 6) eliminate the unacceptable alternatives. The second part, alternative analysis, has three steps: 7) analyze all selected alternatives with a hydrologic simulation model considering various climate change scenarios; 8) quantify the alternative evaluation index, which includes social and hydrologic criteria, utilizing multi-criteria decision analysis methods; and 9) prioritize all options based on a minimax regret strategy for robust decisions. This framework addresses the uncertainty inherent in climate models and climate change scenarios by utilizing the minimax regret strategy, a decision making strategy under deep uncertainty, and thus derives a robust prioritization based on the multiple utilities of alternatives from various scenarios. In this study, the proposed procedure was applied to a Korean urban watershed which has suffered from streamflow depletion and water quality deterioration. Our application shows that the framework provides a useful watershed management tool for incorporating quantitative and qualitative information into the evaluation of various policies with regard to water resource planning and management. Copyright © 2013 Elsevier B.V. All rights reserved.
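[Editor's note] The final prioritization step (step 9 above) reduces to a small matrix computation. A minimal sketch of the minimax-regret choice over alternatives and climate scenarios, with illustrative index values:

```python
import numpy as np

# Rows = management alternatives, columns = climate scenarios,
# entries = alternative evaluation index (higher is better).
utility = np.array([[0.62, 0.55, 0.70],
                    [0.58, 0.60, 0.66],
                    [0.65, 0.48, 0.72]])

regret = utility.max(axis=0) - utility        # regret vs. best option per scenario
worst_regret = regret.max(axis=1)             # each alternative's worst case
robust_choice = int(np.argmin(worst_regret))  # minimax regret alternative
```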
Buss, Aaron T; Fox, Nicholas; Boas, David A; Spencer, John P
2014-01-15
Visual working memory (VWM) is a core cognitive system with a highly limited capacity. The present study is the first to examine VWM capacity limits in early development using functional neuroimaging. We recorded optical neuroimaging data while 3- and 4-year-olds completed a change detection task where they detected changes in the shapes of objects after a brief delay. Near-infrared sources and detectors were placed over the following 10-20 positions: F3 and F5 in left frontal cortex, F4 and F6 in right frontal cortex, P3 and P5 in left parietal cortex, and P4 and P6 in right parietal cortex. The first question was whether we would see robust task-specific activation of the frontal-parietal network identified in the adult fMRI literature. This was indeed the case: three left frontal channels and 11 of 12 parietal channels showed a statistically robust difference between the concentration of oxygenated and deoxygenated hemoglobin following the presentation of the sample array. Moreover, four channels in the left hemisphere near P3, P5, and F5 showed a robust increase as the working memory load increased from 1 to 3 items. Notably, the hemodynamic response did not asymptote at 1-2 items as expected from previous fMRI studies with adults. Finally, 4-year-olds showed a more robust parietal response relative to 3-year-olds, and an increasing sensitivity to the memory load manipulation. These results demonstrate that fNIRS is an effective tool to study the neural processes that underlie the early development of VWM capacity. Copyright © 2013 Elsevier Inc. All rights reserved.
Robust, open-source removal of systematics in Kepler data
NASA Astrophysics Data System (ADS)
Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.
2017-10-01
We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for Kepler prime mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
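[Editor's note] A ridge (Gaussian-prior MAP) fit of the co-trending basis vectors gives the flavour of the shrinkage approach, though ARC2's actual Bayesian treatment is more sophisticated. A minimal sketch:

```python
import numpy as np

def detrend_with_shrinkage(flux, basis_vectors, lam=1.0):
    """Fit co-trending basis vectors to a light curve with an L2
    (Gaussian-prior) penalty and subtract the fitted trend. The penalty
    shrinks the weights toward zero, limiting overfitting of genuine
    astrophysical variability."""
    B = np.column_stack(basis_vectors)            # (n_cadences, n_vectors)
    # MAP / ridge solution: w = (B^T B + lam*I)^{-1} B^T flux
    w = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ flux)
    return flux - B @ w
```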
The current status of clinical proteomics and the use of MRM and MRM(3) for biomarker validation.
Lemoine, Jérôme; Fortin, Tanguy; Salvador, Arnaud; Jaffuel, Aurore; Charrier, Jean-Philippe; Choquet-Kastylevsky, Geneviève
2012-05-01
The transfer of biomarkers from the discovery field to clinical use is still, despite progress, on a road filled with pitfalls. Since the emergence of proteomics, thousands of putative biomarkers have been published, often with overlapping diagnostic capacities. The strengthening of the robustness of discovery technologies, particularly in mass spectrometry, has been followed by intense discussions on establishing well-defined evaluation procedures for the identified targets, to ultimately allow the clinical validation and then the clinical use of some of these biomarkers. One obstacle to the evaluation process has been the lack of quick-to-develop, easy-to-use, robust, specific and sensitive alternative quantitative methods for cases where immunoaffinity-based tests are unavailable. Multiple reaction monitoring (MRM; also called selected reaction monitoring) is currently proving its capabilities as a complementary or alternative technique to ELISA for large biomarker panel evaluation. Here, we present how MRM(3) can overcome the lack of specificity and sensitivity often encountered by MRM when tracking minor proteins diluted in complex biological matrices.
Understanding the science of portion control and the art of downsizing.
Hetherington, Marion M; Blundell-Birtill, Pam; Caton, Samantha J; Cecil, Joanne E; Evans, Charlotte E; Rolls, Barbara J; Tang, Tang
2018-05-24
Offering large portions of high-energy-dense (HED) foods increases overall intake in children and adults. This is known as the portion size effect (PSE). It is robust, reliable and enduring. Over time, the PSE may facilitate overeating and ultimately positive energy balance. Therefore, it is important to understand what drives the PSE and what might be done to counter the effects of an environment promoting large portions, especially in children. Explanations for the PSE are many and diverse, ranging from consumer error in estimating portion size to simple heuristics such as cleaning the plate or eating in accordance with consumption norms. However, individual characteristics and hedonic processes influence the PSE, suggesting a more complex explanation than error or heuristics. Here PSE studies are reviewed to identify interventions that can be used to downsize portions of HED foods, with a focus on children who are still learning about social norms for portion size. Although the scientific evidence for the PSE is robust, there is still a need for creative downsizing solutions to facilitate portion control as children and adolescents establish their eating habits.
An improved finger-vein recognition algorithm based on template matching
NASA Astrophysics Data System (ADS)
Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping
2016-10-01
Finger-vein recognition has become one of the most popular biometric identification methods, and recognition algorithms remain the key research topic in this field. Many applicable algorithms have been developed so far. However, some problems remain in practice: variance in finger position may lead to image distortion and shifting, and matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on the problems mentioned above, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct oblique fingers. During feature extraction, a local adaptive threshold method is adopted. For the matching scores, we optimize the translation preferences as well as the matching distance between input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method improves robustness effectively under finger shifting and rotation conditions.
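[Editor's note] A translation-tolerant matching score in the spirit of Miura matching can be sketched as follows; this is a simplified stand-in (binary vein patterns, wrap-around shifts), not the paper's optimized algorithm:

```python
import numpy as np

def match_score(enrolled, probe, max_shift=10):
    """Slide the probe vein pattern over the enrolled one and keep the
    best overlap ratio. Binary images assumed; np.roll wraps at the
    borders, a simplification of true translation."""
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(probe, dy, axis=0), dx, axis=1)
            overlap = np.logical_and(enrolled, shifted).sum()
            best = max(best, overlap / max(int(probe.sum()), 1))
    return best
```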
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.
Barlow, Euan; Mulholland, Anthony J
2011-11-01
The fractional Fourier transform is proposed here as a model-based signal processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory, which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of time-domain cross correlation and is shown to be far more robust at correctly sizing the bubble, coping with much lower signal-to-noise ratios.
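[Editor's note] The matching step, comparing two time-frequency representations by two-dimensional cross-correlation, can be sketched with SciPy. Computing the fractional Fourier transform itself is omitted here, and the normalization choice is an assumption:

```python
import numpy as np
from scipy.signal import correlate2d

def response_similarity(experimental, theoretical):
    """Peak of the normalized 2-D cross-correlation between two
    time-frequency maps (e.g. fractional-Fourier-domain representations
    of a measured and a modelled bubble response)."""
    def norm(a):
        a = a - a.mean()
        return a / (np.linalg.norm(a) + 1e-12)
    return correlate2d(norm(experimental), norm(theoretical), mode="same").max()

# The estimated bubble size is then the radius whose theoretical response
# maximizes this score over a bank of candidate model responses.
```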
Sarah, S A; Faradalila, W N; Salwani, M S; Amin, I; Karsani, S A; Sazili, A Q
2016-05-15
The purpose of this study was to identify porcine-specific peptide markers from thermally processed meat that could differentiate pork from beef, chevon and chicken meat. In the initial stage, markers from tryptic digests of chilled, boiled and autoclaved pork were identified using LC-QTOF-MS. An MRM method was then established for verification. A thorough investigation of the LC-QTOF-MS data showed that only seven porcine-specific peptides were consistently detected. Among these peptides, two were derived from lactate dehydrogenase, one from creatine kinase, and four from serum albumin. However, MRM could only detect four peptides (EVTEFAK, LVVITAGAR, FVIER and TVLGNFAAFVQK) that were consistently present in pork samples. In conclusion, meat species determination on a tandem mass spectrometry platform shows high potential for providing scientifically valid and reliable results even at the peptide level. Moreover, the specificity and selectivity offered by the proteomics approach provide a robust platform for Halal authentication. Copyright © 2015 Elsevier Ltd. All rights reserved.
Use of similarity scoring in the development of oral solid dosage forms.
Ferreira, Ana P; Olusanmi, Dolapo; Sprockel, Omar; Abebe, Admassu; Nikfar, Faranak; Tobyn, Mike
2016-12-05
In the oral solid dosage form space, material physical properties have a strong impact on the behaviour of the formulation during processing. The ability to identify materials with similar characteristics (and thus expected to exhibit similar behaviour) within the company's portfolio can help accelerate drug development by enabling early assessment and prediction of potential challenges associated with the powder properties of a new active pharmaceutical ingredient. Such developments will aid the production of robust dosage forms in an efficient manner. Similarity scoring metrics are widely used in a number of scientific fields. This study proposes a practical implementation of this methodology within pharmaceutical development. The developed similarity metric is based on the Mahalanobis distance. Scanning electron microscopy was used to confirm morphological similarity between the reference material and the closest matches identified by the proposed metric. The results show that the proposed metric successfully identifies materials with similar physical properties. Copyright © 2015 Elsevier B.V. All rights reserved.
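[Editor's note] A minimal sketch of a Mahalanobis-distance similarity ranking over a portfolio of materials, assuming each material is described by a vector of physical properties. The pseudo-inverse is a common stabilization; the paper's exact metric construction may differ:

```python
import numpy as np

def rank_by_mahalanobis(reference, portfolio):
    """Rank portfolio materials by the Mahalanobis distance of their
    physical-property vectors from a reference material; smaller
    distance means more similar expected processing behaviour."""
    cov = np.cov(portfolio, rowvar=False)
    inv_cov = np.linalg.pinv(cov)            # pseudo-inverse for numerical stability
    diff = portfolio - reference
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    return np.argsort(d), d                  # indices of closest matches first
```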
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, J; Gensheimer, M; Dong, X
Purpose: To develop an intra-tumor partitioning framework for identifying high-risk subregions from 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and CT imaging, and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. Methods: In this institutional review board-approved retrospective study, we analyzed the pre-treatment FDG-PET and CT scans of 44 lung cancer patients treated with radiotherapy. A novel, intra-tumor partitioning method was developed based on a two-stage clustering process: first at patient-level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Results: Three spatially distinct subregions were identified within each tumor, which were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index or CI = 0.66–0.67. When restricting the analysis to patients with stage III disease (n = 32), the same subregion achieved an even higher CI = 0.75 (HR = 3.93, logrank p = 0.002) for predicting OS, and a CI = 0.76 (HR = 4.84, logrank p = 0.002) for predicting OFP. In comparison, conventional imaging markers including tumor volume, SUVmax and MTV50 were not predictive of OS or OFP, with CI mostly below 0.60 (p < 0.001). Conclusion: We propose a robust intra-tumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.
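[Editor's note] A minimal sketch of the two-stage clustering idea with scikit-learn, on made-up voxel data; the feature construction and cluster counts are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def superpixel_centroids(voxel_features, n_superpixels=50):
    """Stage 1 (patient level): over-segment one tumor by k-means on
    joint PET/CT voxel intensities; return one centroid per superpixel."""
    km = KMeans(n_clusters=n_superpixels, n_init=10).fit(voxel_features)
    return km.cluster_centers_

# Stage 2 (population level): pool superpixel centroids from all patients
# and merge them hierarchically into a few subregion classes.
rng = np.random.default_rng(2)
all_centroids = np.vstack([superpixel_centroids(rng.normal(size=(500, 2)))
                           for _ in range(5)])          # 5 illustrative patients
subregion_labels = AgglomerativeClustering(n_clusters=3).fit_predict(all_centroids)
```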
Enhanced echolocation via robust statistics and super-resolution of sonar images
NASA Astrophysics Data System (ADS)
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. Particularly, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other important features that are known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed application of robust statistics to the acoustic images produced by forward looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sampling Consensus (RANSAC) is widely used for image registration; here, the RANSAC-based registration algorithm is optimized for sonar images and its performance is studied. The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman filter-like manner, and that the resulting expression is identical to a W-estimator with a specific weight function.
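[Editor's note] For concreteness, an L-estimator is a weighted combination of order statistics. The sketch below shows the general form, defaulting to a trimmed mean; the dissertation's specific variant and weights are not reproduced here:

```python
import numpy as np

def l_estimate(window, weights=None):
    """L-estimator: a weighted combination of order statistics
    (Koenker & Bassett's class). With uniform weights on the central
    order statistics this reduces to a trimmed mean, which suppresses
    impulsive noise in echo envelopes."""
    x = np.sort(np.asarray(window, dtype=float))
    if weights is None:                     # default: 20%-trimmed mean
        k = max(1, len(x) // 5)
        weights = np.zeros(len(x))
        weights[k:-k] = 1.0
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights / weights.sum(), x))
```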
H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion
NASA Astrophysics Data System (ADS)
Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak
2014-01-01
This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model, designed for tracing the distribution of H.264/AVC video content and protecting it from unauthorized redistribution. The embedding process is performed during coding, in selected Intra 4x4 macroblocks within I-frames. The method uses a spread-spectrum technique to obtain robustness against collusion attacks, and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and nonlinear collusion attacks are performed to show the robustness of the proposed technique while keeping visual quality unchanged.
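[Editor's note] The core embedding rule, spread-spectrum addition scaled by the JND profile, can be sketched in a few lines; the coefficient layout, strength parameter, and JND values here are illustrative assumptions:

```python
import numpy as np

def embed_fingerprint(coeffs, jnd, user_code, alpha=0.8):
    """Spread-spectrum embedding with per-coefficient strength capped by
    the just-noticeable-distortion profile, keeping the mark imperceptible."""
    return coeffs + alpha * jnd * user_code

rng = np.random.default_rng(7)
coeffs = rng.normal(size=16)            # stand-in for an Intra 4x4 block's coefficients
jnd = np.full(16, 0.5)                  # per-coefficient visibility thresholds
code = rng.choice([-1.0, 1.0], size=16) # pseudo-random fingerprint for one user
marked = embed_fingerprint(coeffs, jnd, code)
```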
Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš
2017-05-01
The purpose of the paper is to achieve constrained estimation of process state variables using an anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time-delay model of the process; moreover, the process model is identified as not only delayed but also non-linear. This model is developed to describe a material flow process. The root locus technique, combined with the magnitude optimum method, is utilized to investigate the estimation process, and the resulting dominant root location serves as a measure of estimation performance: the higher the dominant (natural) frequency, with roots in the leftmost position of the complex plane, the better the performance and robustness achieved. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands; this application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental) or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This raises the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research undertaking contributes to the state-of-the-art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composite, natural laminar flow, thrust vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Reliability of an fMRI Paradigm for Emotional Processing in a Multisite Longitudinal Study
Gee, Dylan G.; McEwen, Sarah C.; Forsyth, Jennifer K.; Haut, Kristen M.; Bearden, Carrie E.; Addington, Jean; Goodyear, Bradley; Cadenhead, Kristin S.; Mirzakhanian, Heline; Cornblatt, Barbara A.; Olvet, Doreen; Mathalon, Daniel H.; McGlashan, Thomas H.; Perkins, Diana O.; Belger, Aysenil; Seidman, Larry J.; Thermenos, Heidi; Tsuang, Ming T.; van Erp, Theo G.M.; Walker, Elaine F.; Hamann, Stephan; Woods, Scott W.; Constable, Todd; Cannon, Tyrone D.
2015-01-01
Multisite neuroimaging studies can facilitate the investigation of brain-related changes in many contexts, including patient groups that are relatively rare in the general population. Though multisite studies have characterized the reliability of brain activation during working memory and motor functional magnetic resonance imaging tasks, emotion processing tasks, pertinent to many clinical populations, remain less explored. A traveling participants study was conducted with eight healthy volunteers scanned twice on consecutive days at each of the eight North American Longitudinal Prodrome Study sites. Tests derived from generalizability theory showed excellent reliability in the amygdala (Eρ2=0.82), inferior frontal gyrus (IFG; Eρ2=0.83), anterior cingulate cortex (ACC; Eρ2=0.76), insula (Eρ2=0.85), and fusiform gyrus (Eρ2=0.91) for maximum activation and fair to excellent reliability in the amygdala (Eρ2=0.44), IFG (Eρ2=0.48), ACC (Eρ2=0.55), insula (Eρ2=0.42), and fusiform gyrus (Eρ2=0.83) for mean activation across sites and test days. For the amygdala, habituation (Eρ2=0.71) was more stable than mean activation. In a second investigation, data from 111 healthy individuals across sites were aggregated in a voxelwise, quantitative meta-analysis. When compared with a mixed effects model controlling for site, both approaches identified robust activation in regions consistent with expected results based on prior single-site research. Overall, regions central to emotion processing showed strong reliability in the traveling participants study and robust activation in the aggregation study. These results support the reliability of blood oxygen level-dependent signal in emotion processing areas across different sites and scanners and may inform future efforts to increase efficiency and enhance knowledge of rare conditions in the population through multisite neuroimaging paradigms. PMID:25821147
On the applicability of brain reading for predictive human-machine interfaces in robotics.
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT
NASA Astrophysics Data System (ADS)
Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael
2010-05-01
The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSATs Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system which is flexible and robust enough to identify and react to anomalies which may be previously unknown to the system, as well as to provide all means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data allows near-real-time monitoring of operational products and instrument health to be performed in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows for a database structure designed in such a way that different data can be analyzed using the same business intelligence functionality. An interactive analysis software implementing modern data mining techniques is also provided for a detailed look into the data. The system is effectively used for day-to-day monitoring, long-term reporting and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searching for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses and other events, significantly reduce the time necessary to monitor the processing on a day-to-day basis.
Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT
NASA Astrophysics Data System (ADS)
Livschitz, Y.; Munro, R.; Lang, R.; Fiedler, L.; Dyer, R.; Eisinger, M.
2009-12-01
The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSATs Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system which is flexible and robust enough to identify and react to anomalies which may be previously unknown to the system, as well as to provide all means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data allows near-real-time monitoring of operational products and instrument health to be performed in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows for a database structure designed in such a way that different data can be analyzed using the same business intelligence functionality. An interactive analysis software implementing modern data mining techniques is also provided for a detailed look into the data. The system is effectively used for day-to-day monitoring, long-term reporting and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searching for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses and other events, significantly reduce the time necessary to monitor the processing on a day-to-day basis.
The difficulties of systematic reviews.
Westgate, Martin J; Lindenmayer, David B
2017-10-01
The need for robust evidence to support conservation actions has driven the adoption of systematic approaches to research synthesis in ecology. However, applying systematic review to complex or open questions remains challenging, and this task is becoming more difficult as the quantity of scientific literature increases. We drew on the science of linguistics for guidance as to why the process of identifying and sorting information during systematic review remains so labor intensive, and to provide potential solutions. Several linguistic properties of peer-reviewed corpora (including nonrandom selection of review topics, small-world properties of semantic networks, and spatiotemporal variation in word meaning) greatly increase the effort needed to complete the systematic review process. Conversely, the resolution of these semantic complexities is a common motivation for narrative reviews, but this process is rarely enacted with the rigor applied during linguistic analysis. Therefore, linguistics provides a unifying framework for understanding some key challenges of systematic review and highlights 2 useful directions for future research. First, in cases where semantic complexity generates barriers to synthesis, ecologists should consider drawing on existing methods, such as natural language processing or the construction of research thesauri and ontologies, that provide tools for mapping and resolving that complexity. These tools could help individual researchers classify research material in a more robust manner and provide valuable guidance for future researchers on that topic. Second, a linguistic perspective highlights that scientific writing is a rich resource worthy of detailed study, an observation that can sometimes be lost during the search for data during systematic review or meta-analysis. For example, mapping semantic networks can reveal redundancy and complementarity among scientific concepts, leading to new insights and research questions. Consequently, wider adoption of linguistic approaches may facilitate improved rigor and richness in research synthesis. © 2017 Society for Conservation Biology.
On the Applicability of Brain Reading for Predictive Human-Machine Interfaces in Robotics
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG, like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors. PMID:24358125
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
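For readers who want the mechanics, below is a minimal sketch of one common LSH variant (random-hyperplane signatures with banded buckets). It is illustrative only: the hash family, band/row counts, and dataset are stand-ins, and the paper's Hadoop implementation and optimizations are not reproduced.

```python
# Random-hyperplane LSH: similar vectors get similar bit signatures, and
# banding turns near-matching signatures into shared hash buckets.
import numpy as np

rng = np.random.default_rng(0)

def signatures(X, n_bits=64):
    """One bit per random hyperplane: the sign of the projection."""
    planes = rng.normal(size=(X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def candidate_pairs(sig, n_bands=8):
    """Split signatures into bands; items sharing any band become candidates."""
    n, n_bits = sig.shape
    rows = n_bits // n_bands
    buckets = {}
    for b in range(n_bands):
        for i in range(n):
            key = (b, sig[i, b * rows:(b + 1) * rows].tobytes())
            buckets.setdefault(key, []).append(i)
    return {(i, j) for v in buckets.values() if len(v) > 1
            for i in v for j in v if i < j}

X = rng.normal(size=(100, 32))
X[1] = X[0] + 0.01 * rng.normal(size=32)     # plant a near-duplicate pair
print((0, 1) in candidate_pairs(signatures(X)))  # collides in some band (w.h.p.)
```

The banding trade-off is the tunable part: more bands with fewer rows each raises recall but also the number of candidate pairs to verify, which is the recall/space balance the abstract alludes to.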
Fujimoto, Shinta
2015-01-01
Marine tardigrades of the family Halechiniscidae (Heterotardigrada: Arthrotardigrada) are reported from Oura Bay, Okinawajima, one of the Ryukyu Islands, Japan, including Dipodarctus sp., Florarctus wunai sp. n., Halechiniscus churakaagii sp. n., Halechiniscus yanakaagii sp. n. and Styraconyx sp. The attributes distinguishing Florarctus wunai sp. n. from its congeners are a combination of two characters, the smooth dorsal cuticle and two small projections of the caudal alae caestus. Halechiniscus churakaagii sp. n. is differentiated from its congeners by the combination of two characters, the robust cephalic cirrophores and the scapular processes with flat oval tips, while Halechiniscus yanakaagii sp. n. can be identified by the laterally protruded arched double processes with acute tips situated dorsally at the level of leg I. A list of marine tardigrades reported from the Ryukyu Islands is provided. PMID:25755627
siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens.
Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A; Kim, Hyun Seok
2013-03-01
Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute.
siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens
Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A.
2013-01-01
Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute. PMID:23613684
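As background for the Z-scores mentioned above, the robust variant is typically the median/MAD analogue of the classical score. The sketch below contrasts the two on a synthetic plate; it is not siMacro's implementation, which additionally corrects position and batch effects and censors outlier samples.

```python
# Classical vs. robust Z-scores on a synthetic 384-well plate with hits.
import numpy as np

def z_scores(x):
    return (x - x.mean()) / x.std(ddof=1)

def robust_z_scores(x):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # scaled MAD ~ sigma for normal data
    return (x - med) / mad

rng = np.random.default_rng(1)
readings = rng.normal(100, 10, size=384)
readings[:4] = 500.0                           # a few strong screen hits/outliers
print(z_scores(readings)[:4].round(1))         # deflated: outliers inflate mean/SD
print(robust_z_scores(readings)[:4].round(1))  # outliers stand out clearly
```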
Velez, Lady; Sokoloff, Greta; Miczek, Klaus A; Palmer, Abraham A; Dulawa, Stephanie C
2010-03-01
Some BALB/c substrains exhibit different levels of aggression. We compared aggression levels between male BALB/cJ and BALB/cByJ substrains using the resident intruder paradigm. These substrains were also assessed in other tests of emotionality and information processing including the open field, forced swim, fear conditioning, and prepulse inhibition tests. We also evaluated single nucleotide polymorphisms (SNPs) previously reported between these BALB/c substrains. Finally, we compared BALB/cJ and BALB/cByJ mice for genomic deletions or duplications, collectively termed copy number variants (CNVs), to identify candidate genes that might underlie the observed behavioral differences. BALB/cJ mice showed substantially higher aggression levels than BALB/cByJ mice; however, only minor differences in other behaviors were observed. None of the previously reported SNPs were verified. Eleven CNV regions were identified between the two BALB/c substrains. Our findings identify a robust difference in aggressive behavior between BALB/cJ and BALB/cByJ substrains, which could be the result of the identified CNVs.
NASA Astrophysics Data System (ADS)
Székely, B.; Karátson, D.; Koma, Zs.; Dorninger, P.; Wörner, G.; Brandmeier, M.; Nothegger, C.
2012-04-01
The Western slope of the Central Andes between 22° and 17°S is characterized by large, quasi-planar landforms with tilted ignimbrite surfaces and overlying younger sedimentary deposits (e.g. Nazca, Oxaya, Huaylillas ignimbrites). These surfaces were only modified by tectonic uplift and tilting of the Western Cordillera, preserving minor, now fossilized drainage systems. Several deep canyons started to form about 5 Ma ago. Due to tectonic oversteepening in an arid region of very low erosion rates, gravitational collapses and landslides additionally modified the Andean slope and valley flanks. Large areas of fossil surfaces, however, remain. The age of these surfaces has been dated between 11 Ma and 25 Ma, at elevations of 3500 m in the Precordillera and of c. 1000 m near the coast. Due to their excellent preservation, our aim is to identify, delineate, and reconstruct these original ignimbrite and sediment surfaces via a sophisticated evaluation of SRTM DEMs. The technique we use here is a robust morphological segmentation method that is insensitive to a certain amount of outliers, even if they are spatially correlated. This paves the way to identifying common local planar features and combining these into larger areas of a particular surface segment. Erosional dissection, faulting, tilting, and folding define subdomains, and thus the original quasi-planar surfaces are modified. Additional processes may create younger surfaces, such as sedimentary floodplains and salt pans. The procedure is tuned to provide a distinction of these features. The technique is based on the evaluation of local normal vectors (perpendicular to the actual surface) that are obtained by determining locally fitting planes. This initial set of normal vectors is then gradually classified into groups with similar properties, providing candidate point clouds that are quasi co-planar. The quasi co-planar sets of points are analysed further against other criteria, such as minimum number of points, maximum standard deviation of spatial scatter, maximum point-to-plane distance, etc. SRTM DEMs of selected areas of the Western slope of the Central Andes have been processed with various parameter sets. The resulting domain structure shows strong correlation with tectonic features (e.g. faulting) and younger depositional surfaces, whereas other segmentation features appear or disappear depending on the parameters of the analysis. For example, a fine segmentation results, for a given study area, in ca. 2500 planar features (of course, not all are geologically meaningful), whereas a more meaningful result has an order of magnitude fewer planes, ca. 270. The latter segmentation still covers the key areas, and the dissecting features (e.g., large incised canyons) are typically identified. For the fine segmentation version, an area of 3863 km2 is covered by fitted planes for the ignimbrite surfaces, whereas for the more robust segmentation this area is 2555 km2. The corresponding values for the sedimentary surfaces are 3162 km2 and 2080 km2, respectively. The total processed area was 14498 km2. As the previous numbers and the 18.1% and 18.6% decrease in coverage suggest, the robust segmentation remains meaningful for large parts of the area while the number of planar features decreased by an order of magnitude. This result also emphasizes the importance of the initial parameters.
To verify the results in more detail, residuals (differences between measured and modelled elevations) are also evaluated, and the results are fed back into the segmentation procedure. Steeper landscapes (young volcanic edifices) are clearly separated from higher-order (long-wavelength) structures. This method allows uniform surface segments to be identified quantitatively and related to geologically and morphologically meaningful parameters (type of depositional surface, rock type, surface age).
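The core primitive of such a segmentation, estimating a local surface normal by fitting a plane in a moving window, can be sketched compactly. The code below is a toy version under simplifying assumptions: ordinary least squares on a synthetic tilted DEM, with no robust down-weighting of outliers and no grouping/merging stage.

```python
# Local normal vectors from least-squares plane fits in a moving DEM window.
import numpy as np

def local_normals(dem, half=2):
    """Fit z = a*x + b*y + c in each (2*half+1)^2 window; normal = (-a, -b, 1)."""
    n_rows, n_cols = dem.shape
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    normals = np.zeros((n_rows, n_cols, 3))
    for r in range(half, n_rows - half):
        for c in range(half, n_cols - half):
            z = dem[r - half:r + half + 1, c - half:c + half + 1].ravel()
            (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
            n = np.array([-a, -b, 1.0])
            normals[r, c] = n / np.linalg.norm(n)
    return normals

dem = np.fromfunction(lambda r, c: 0.1 * c + 0.02 * r, (40, 40))  # tilted plane
print(local_normals(dem)[20, 20].round(3))  # constant normal on a planar surface
```

Grouping cells whose normals (and plane offsets) agree is then what yields the quasi co-planar candidate point clouds described above.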
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
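To make the two ingredients concrete (a robust data-fidelity term plus an ODE-fidelity penalty), here is a deliberately reduced toy: a discretized trajectory stands in for the basis expansion, finite differences for the derivative, a Huber loss for robustness, and the trajectory and the parameter theta of x' = theta*x are fit jointly rather than via the paper's nested two-level scheme.

```python
# Toy robust penalized estimation for x' = theta * x from outlier-contaminated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 50)
y = np.exp(0.8 * t) + 0.05 * rng.normal(size=t.size)  # true theta = 0.8
y[10] += 3.0                                          # one gross outlier
dt = t[1] - t[0]

def huber(r, delta=0.1):
    return np.where(np.abs(r) <= delta, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

def objective(p, lam=10.0):
    x, theta = p[:-1], p[-1]
    fit = huber(y - x).sum()                           # robust fidelity to data
    ode = ((np.gradient(x, dt) - theta * x)**2).sum()  # fidelity to the ODE model
    return fit + lam * ode

p0 = np.concatenate([y, [0.5]])
res = minimize(objective, p0, method="L-BFGS-B")
print(round(res.x[-1], 2))   # theta estimate; should land near the true 0.8
```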
Robust path planning for flexible needle insertion using Markov decision processes.
Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong
2018-05-11
The flexible needle has the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and a higher probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer the flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.
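The MDP machinery underneath such a planner can be sketched with plain value iteration on a grid whose motion model is stochastic: the commanded step succeeds with probability 0.8, otherwise the agent slips sideways. This toy stands in for, and does not reproduce, the paper's needle kinematics and tissue-interaction models.

```python
# Value iteration on a grid MDP with uncertain motion and obstacles.
import numpy as np

N = 10
goal, obstacles = (9, 9), {(4, i) for i in range(7)}
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(s, a):
    """Deterministic transition: stay put if leaving the grid or hitting an obstacle."""
    r, c = s[0] + a[0], s[1] + a[1]
    if not (0 <= r < N and 0 <= c < N) or (r, c) in obstacles:
        return s
    return (r, c)

V = np.zeros((N, N))
for _ in range(200):                      # in-place (Gauss-Seidel) sweeps
    for r in range(N):
        for c in range(N):
            if (r, c) == goal or (r, c) in obstacles:
                continue
            q = []
            for a in actions:
                slips = [(a[1], a[0]), (-a[1], -a[0])]   # perpendicular slips
                outcomes = [(0.8, step((r, c), a))] + \
                           [(0.1, step((r, c), s)) for s in slips]
                q.append(sum(p * (-1 + 0.95 * V[ns]) for p, ns in outcomes))
            V[r, c] = max(q)              # Bellman optimality backup
print(V[0, 0].round(2))  # expected discounted cost-to-go from the start cell
```

The greedy policy with respect to V then steers around the obstacle row while hedging against slips, which is the sense in which the resulting control policy is "secure".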
Non-rigid Reconstruction of Casting Process with Temperature Feature
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu
2017-09-01
Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flowing features of the casting are parameterized in the manner of a TSDF (truncated signed distance field), a volumetric block; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows dynamic tracking of the casting's temperature during the deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantees robust tracking of the casting's flowing motion. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the process of non-rigid reconstruction of the casting.
A Hybrid Interval–Robust Optimization Model for Water Quality Management
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-01-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
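To make the coverage probability (CP) notion concrete, here is a deliberately reduced one-dimensional sketch: given a CTV, a margin-expanded treated region, and a Gaussian model of setup error, CP is estimated by Monte Carlo sampling. Clinical CP evaluation is three-dimensional, fractionated, and dose-based, none of which is modeled here.

```python
# Monte Carlo estimate of target coverage probability in 1-D.
import numpy as np

rng = np.random.default_rng(3)
ctv = (-2.0, 2.0)        # CTV extent in cm
treated = (-2.7, 2.7)    # treated region, i.e. CTV plus a 7 mm margin
sigma = 0.4              # setup error standard deviation in cm

shifts = rng.normal(0.0, sigma, size=100_000)
covered = (ctv[0] + shifts >= treated[0]) & (ctv[1] + shifts <= treated[1])
print(f"coverage probability ~ {covered.mean():.3f}")
```

Margin selection in the classical recipe amounts to growing the treated region until this probability clears the desired threshold (e.g. >95%); robust optimization instead folds the same probability model directly into the plan objectives.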
Zhou, Ping; Guo, Dongwei; Wang, Hong; Chai, Tianyou
2017-09-29
Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to the modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effects caused by fluctuations in the BF process data. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Guo, Dongwei; Wang, Hong
Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to the modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effects caused by fluctuations in the BF process data. In conclusion, this indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.
Zhou, Ping; Guo, Dongwei; Wang, Hong; ...
2017-09-29
Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to the modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effects caused by fluctuations in the BF process data. In conclusion, this indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.
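The M-estimation idea these entries rely on is easiest to see in the simplest setting: robust linear regression by iteratively reweighted least squares (IRLS) with Huber weights, where the weight function down-weights large residuals exactly as described above. This sketch does not reproduce the paper's M-LS-SVR, NARX structure, or NSGA-II tuning.

```python
# IRLS with Huber weights: outliers get small weights, inliers full weight.
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.uniform(0, 10, 200)])
y = X @ np.array([2.0, 1.5]) + 0.3 * rng.normal(size=200)
y[:10] += 20.0                                       # gross outliers

def huber_weights(r, k=1.345):
    s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust residual scale
    u = np.abs(r) / (s + 1e-12)
    return np.where(u <= k, 1.0, k / u)

beta = np.linalg.lstsq(X, y, rcond=None)[0]           # ordinary LS start
for _ in range(20):
    w = huber_weights(y - X @ beta)
    Xw = X * w[:, None]                               # apply weights
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)        # weighted normal equations
print(beta.round(2))                                  # close to [2.0, 1.5]
```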
Diffusion pseudotime robustly reconstructs lineage branching.
Haghverdi, Laleh; Büttner, Maren; Wolf, F Alexander; Buettner, Florian; Theis, Fabian J
2016-10-01
The temporal order of differentiating cells is intrinsically encoded in their single-cell expression profiles. We describe an efficient way to robustly estimate this order according to diffusion pseudotime (DPT), which measures transitions between cells using diffusion-like random walks. Our DPT software implementations make it possible to reconstruct the developmental progression of cells and identify transient or metastable states, branching decisions and differentiation endpoints.
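A heavily simplified sketch of the diffusion idea behind DPT follows: build a Gaussian-kernel transition matrix over cells, take its leading nontrivial eigenvectors (a diffusion map), and order cells by diffusion distance from a chosen root cell. The actual DPT metric, its accumulation over all walk lengths, and branch detection are more involved than this.

```python
# Toy diffusion-based pseudotime on a simulated 1-D trajectory in expression space.
import numpy as np

rng = np.random.default_rng(5)
cells = np.cumsum(rng.normal(0, 0.3, size=(150, 10)), axis=0)  # noisy trajectory

d2 = ((cells[:, None] - cells[None, :])**2).sum(-1)
K = np.exp(-d2 / np.median(d2))                   # Gaussian kernel
T = K / K.sum(1, keepdims=True)                   # row-stochastic transition matrix

vals, vecs = np.linalg.eig(T)
idx = np.argsort(-vals.real)
phi = vecs[:, idx[1:4]].real * vals[idx[1:4]].real  # skip the trivial eigenvector

root = 0
pseudotime = np.linalg.norm(phi - phi[root], axis=1)
print(np.corrcoef(pseudotime, np.arange(150))[0, 1].round(2))  # roughly monotone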
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
Crop Row Detection in Maize Fields Inspired on the Human Visual Perception
Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.
2012-01-01
This paper proposes a new method, oriented to real-time image processing, for identifying crop rows in images of maize fields. The vision system is designed to be installed onboard a mobile agricultural vehicle, that is, subjected to gyrations, vibrations, and undesired movements. The images are captured under perspective projection and are affected by the above undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which allows the threshold to be obtained for application during the normal operation process. The crop row detection applies a method based on image perspective projection that searches for maximum accumulation of segmented green pixels along straight alignments. These determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects. It compares favorably against the well-tested Hough transformation for line detection. PMID:22623899
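A compact sketch of the two-stage pipeline (vegetation segmentation, then straight-line search) is given below, with two stand-ins flagged explicitly: a fixed excess-green threshold replaces the paper's fuzzy-clustering threshold, and the Hough transform baseline replaces the paper's perspective-based accumulation.

```python
# Excess-green segmentation followed by Hough line detection on a synthetic field.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(6)
img = rng.uniform(0.0, 0.2, size=(120, 160, 3))   # soil-like background
for col in (30, 80, 130):                         # three vertical crop rows
    img[:, col - 1:col + 2, 1] += 0.6             # boost the green channel

exg = 2 * img[..., 1] - img[..., 0] - img[..., 2]  # excess green index
mask = exg > 0.3                                   # fixed threshold (sketch only)

h, angles, dists = hough_line(mask)
_, peak_angles, peak_dists = hough_line_peaks(h, angles, dists, num_peaks=3)
print(sorted(int(round(d)) for d in peak_dists))   # approximately [30, 80, 130]
```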
Successful photoresist removal: incorporating chemistry, conditions, and equipment
NASA Astrophysics Data System (ADS)
Moore, John C.
2002-07-01
The material make-up of photoresists spans a wide range of polarities and chemistries. Resists contain reactive components which are photochemically triggered to convert and condense to forms that result in a solubility change. When designing a cleaning process, a knowledge of the resist chemistry is fundamental. A DNQ/novolak system may follow a simple dissolution model under normal conditions. However, when the same resist is sent through a dry etch process, crosslinking and metallic impregnation occur to form a residue that is insoluble by simple dissolution. The same applies for negative-tone resists, where bonds must be broken and a high chemical interaction is needed to facilitate solvent penetration. Negative resists of different chemistries, such as the benzoin/acrylic, triazine/novolak, and azide/isoprene systems, must be addressed separately for specific polarity and reactant requirements. When dissolving and removing these crosslinked systems, the benefits of formulated chemistries such as GenSolve™ and GenClean™ are immediately observed. Once the chemistry is identified, conditions can be optimized with process design using temperature, agitation, and rinsing to achieve a robust process with a wide process latitude.
Quality control of inkjet technology for DNA microarray fabrication.
Pierik, Anke; Dijksman, Frits; Raaijmakers, Adrie; Wismans, Ton; Stapert, Henk
2008-12-01
A robust manufacturing process is essential to make high-quality DNA microarrays, especially for use in diagnostic tests. We investigated different failure modes of the inkjet printing process used to manufacture low-density microarrays. A single nozzle inkjet spotter was provided with two optical imaging systems, monitoring in real time the flight path of every droplet. If a droplet emission failure is detected, the printing process is automatically stopped. We analyzed over 1.3 million droplets. This information was used to investigate the performance of the inkjet system and to obtain detailed insight into the frequency and causes of jetting failures. Of all the substrates investigated, 96.2% were produced without any system or jetting failures. In 1.6% of the substrates, droplet emission failed and was correctly identified. Appropriate measures could then be taken to get the process back on track. In 2.2%, the imaging systems failed while droplet emission occurred correctly. In 0.1% of the substrates, droplet emission failure that was not timely detected occurred. Thus, the overall yield of the microarray manufacturing process was 99.9%, which is highly acceptable for prototyping.
Robust Inference of Genetic Exchange Communities from Microbial Genomes Using TF-IDF
Cong, Yingnan; Chan, Yao-ban; Phillips, Charles A.; Langston, Michael A.; Ragan, Mark A.
2017-01-01
Bacteria and archaea can exchange genetic material across lineages through processes of lateral genetic transfer (LGT). Collectively, these exchange relationships can be modeled as a network and analyzed using concepts from graph theory. In particular, densely connected regions within an LGT network have been defined as genetic exchange communities (GECs). However, it has been problematic to construct networks in which edges solely represent LGT. Here we apply term frequency-inverse document frequency (TF-IDF), an alignment-free method originating from document analysis, to infer regions of lateral origin in bacterial genomes. We examine four empirical datasets of different size (number of genomes) and phyletic breadth, varying a key parameter (word length k) within bounds established in previous work. We map the inferred lateral regions to genes in recipient genomes, and construct networks in which the nodes are groups of genomes, and the edges natively represent LGT. We then extract maximum and maximal cliques (i.e., GECs) from these graphs, and identify nodes that belong to GECs across a wide range of k. Most surviving lateral transfer has happened within these GECs. Using Gene Ontology enrichment tests we demonstrate that biological processes associated with metabolism, regulation and transport are often over-represented among the genes affected by LGT within these communities. These enrichments are largely robust to change of k. PMID:28154557
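The core TF-IDF computation over k-mer "words" can be sketched briefly; the toy sequences, k, and scoring below are stand-ins, and the genome partitioning, thresholding of lateral regions, and clique analysis described above are not shown.

```python
# TF-IDF over k-mer "words": high scores flag words frequent in one genome
# but rare across the collection, the signal used to spot lateral regions.
import math
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

genomes = {
    "A": "ATGGCGTACGTTAGCATGGCGTACGTT",
    "B": "ATGGCGTACGTTAGCTTTTTTTTTTTT",
    "C": "CCCCCCGGGGGGAAAAAATTTTTTCCGG",
}
k = 6
docs = {g: Counter(kmers(s, k)) for g, s in genomes.items()}
n_docs = len(docs)

def tf_idf(word, doc):
    tf = docs[doc][word] / sum(docs[doc].values())       # term frequency
    df = sum(word in d for d in docs.values())           # document frequency
    return tf * math.log(n_docs / df) if df else 0.0

w = "GGCGTA"
for g in genomes:
    print(g, round(tf_idf(w, g), 3))  # shared by A and B only; absent from C
```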
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important for accurately determining those characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
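A minimal sketch of the segmentation step is shown below: SLIC superpixels summarized by their mean color and classified plant/background with a Random Forest. The image and training labels are synthetic stand-ins; the real pipeline learns from annotated plant images and richer superpixel features.

```python
# Superpixel-based plant segmentation with a Random Forest classifier.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
img = rng.uniform(0.2, 0.4, size=(100, 100, 3))   # soil-like background
img[30:70, 30:70] = [0.2, 0.7, 0.2]               # a green "plant" patch

segments = slic(img, n_segments=100, compactness=10)
seg_ids = np.unique(segments)
feats = np.array([img[segments == s].mean(axis=0) for s in seg_ids])

labels = (feats[:, 1] > 0.5).astype(int)          # synthetic labels: green = plant
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)

pred = clf.predict(feats)
plant_area = sum((segments == s).sum() for s, p in zip(seg_ids, pred) if p)
print(plant_area)    # roughly the 40x40 = 1600 pixels of the plant patch
```

Tracking this segmented area per image over time is then what yields the plant-growth trends the system visualizes.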
Pandemic influenza preparedness: an ethical framework to guide decision-making.
Thompson, Alison K; Faith, Karen; Gibson, Jennifer L; Upshur, Ross E G
2006-12-04
Planning for the next pandemic influenza outbreak is underway in hospitals across the world. The global SARS experience has taught us that ethical frameworks to guide decision-making may help to reduce collateral damage and increase trust and solidarity within and between health care organisations. Good pandemic planning requires reflection on values because science alone cannot tell us how to prepare for a public health crisis. In this paper, we present an ethical framework for pandemic influenza planning. The ethical framework was developed with expertise from clinical, organisational and public health ethics and validated through a stakeholder engagement process. The ethical framework includes both substantive and procedural elements for ethical pandemic influenza planning. The incorporation of ethics into pandemic planning can be helped by senior hospital administrators sponsoring its use, by having stakeholders vet the framework, and by designing or identifying decision review processes. We discuss the merits and limits of an applied ethical framework for hospital decision-making, as well as the robustness of the framework. The need for reflection on the ethical issues raised by the spectre of a pandemic influenza outbreak is great. Our efforts to address the normative aspects of pandemic planning in hospitals have generated interest from other hospitals and from the governmental sector. The framework will require re-evaluation and refinement and we hope that this paper will generate feedback on how to make it even more robust.
Discrete Walsh Hadamard transform based visible watermarking technique for digital color images
NASA Astrophysics Data System (ADS)
Santhi, V.; Thangavelu, Arunkumar
2011-10-01
As the Internet grows enormously in size, illegal manipulation of digital multimedia data has become very easy with the advancement of technology tools. In order to protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper a new Discrete Walsh-Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiling manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization, and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks and that the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
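A minimal sketch of transform-domain visible watermarking with the Walsh-Hadamard transform is given below: each 8x8 block is transformed, host and watermark coefficients are combined as alpha*host + beta*watermark, and the block is inverse-transformed. The scaling factors and per-block scheme are simplified stand-ins for the paper's tiling across frequency ranges.

```python
# Blockwise Walsh-Hadamard visible watermark embedding.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8) / np.sqrt(8)                 # orthonormal 8x8 WHT basis

def wht2(block):   return H @ block @ H.T    # forward 2-D transform
def iwht2(coeffs): return H.T @ coeffs @ H   # inverse 2-D transform

def embed(host, mark, alpha=0.9, beta=0.1):
    out = np.empty_like(host, dtype=float)
    for r in range(0, host.shape[0], 8):
        for c in range(0, host.shape[1], 8):
            hb = wht2(host[r:r+8, c:c+8].astype(float))
            wb = wht2(mark[r:r+8, c:c+8].astype(float))
            out[r:r+8, c:c+8] = iwht2(alpha * hb + beta * wb)
    return out

rng = np.random.default_rng(8)
host = rng.integers(0, 256, size=(64, 64)).astype(float)
mark = np.zeros((64, 64)); mark[24:40, 24:40] = 255.0   # a square "logo"
marked = embed(host, mark)
psnr = 10 * np.log10(255**2 / ((host - marked)**2).mean())
print(round(psnr, 1), "dB PSNR")
```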
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1993-01-01
Research activities performed during the period of 29 June 1993 through 31 Aug. 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found; this technique, called 'stability by linear process,' directly yields an algorithm. In analysis, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, as well as a result for the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design and provide a framework for solving ACS. Finally, copies of an outline of our results are provided in the appendix, along with a note on an administrative issue.
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
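The least-trimmed-squares criterion is easiest to see for a pure location filter: in each window, output the mean of the h samples whose squared residuals about their own mean are smallest. The sketch below uses the fact that, for a location fit, the optimal h-subset is a contiguous run of the sorted window; the paper's filters fit richer structural models than a constant.

```python
# Least-trimmed-squares location filter: robust to impulses, edge-preserving.
import numpy as np

def lts_filter(x, window=9, h=None):
    half = window // 2
    h = h or (window // 2 + 1)            # keep just over half the samples
    y = x.copy()
    for i in range(half, len(x) - half):
        w = np.sort(x[i - half:i + half + 1])
        # for a location fit, the best h-subset is contiguous in sorted order
        sse = [((w[j:j + h] - w[j:j + h].mean())**2).sum()
               for j in range(window - h + 1)]
        j = int(np.argmin(sse))
        y[i] = w[j:j + h].mean()
    return y

rng = np.random.default_rng(9)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.normal(size=100)
signal[20] = 5.0                                   # an impulse
out = lts_filter(signal)
print(round(out[20], 2), round(out[75], 2))        # impulse removed, step preserved
```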
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise, and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
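The source of the robustness is visible in the EM-style weights a Student-t likelihood induces: each point is down-weighted by w = (nu + 1)/(nu + r^2) for standardized residual r, so gross outliers contribute almost nothing, whereas a Gaussian likelihood weights all points equally. A minimal illustration, with unit scale assumed, follows; the full variational mixture-of-experts machinery is not reproduced.

```python
# Student-t EM weights: heavy tails translate into automatic outlier down-weighting.
import numpy as np

def t_weights(r, nu=3.0):
    """Per-point weight under a t(nu) likelihood (unit scale assumed)."""
    return (nu + 1.0) / (nu + r**2)

residuals = np.array([-0.5, 0.1, 0.8, 6.0, 12.0])   # last two are outliers
print(t_weights(residuals).round(3))  # ~ (nu+1)/nu for inliers, near zero for outliers
```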
NASA Astrophysics Data System (ADS)
Yang, Chao; Jiao, Xiaohong; Li, Liang; Zhang, Yuanbo; Chen, Zheng
2018-01-01
To realize a fast and smooth operating mode transition from electric driving mode to engine-on driving mode, this paper presents a novel robust hierarchical mode transition control method for a plug-in hybrid electric bus (PHEB) with a pre-transmission parallel hybrid powertrain. Firstly, the mode transition process is divided into five stages to clearly describe the powertrain dynamics. Based on the dynamics models of the powertrain and the clutch actuating mechanism, a hierarchical control structure including robust H∞ controllers in both the upper and lower layers is proposed. In the upper layer, the demand clutch torque is calculated by a robust H∞ controller considering the clutch engaging time and the vehicle jerk, while in the lower layer a robust tracking controller with L2-gain is designed to perform accurate position tracking control, especially when parameter uncertainties and external disturbances occur in the clutch actuating mechanism. Simulation and hardware-in-the-loop (HIL) tests are carried out for a typical driving condition of the PHEB. Results show that the proposed hierarchical control approach obtains good control performance: the mode transition time is greatly reduced with acceptable jerk. Meanwhile, the designed control system shows clear robustness under uncertain parameters and disturbances. Therefore, the proposed approach may offer a theoretical reference for the actual vehicle controller.
Towards a commercial process for the manufacture of genetically modified T cells for therapy
Kaiser, A D; Assenmacher, M; Schröder, B; Meyer, M; Orentas, R; Bethke, U; Dropulic, B
2015-01-01
The recent successes of adoptive T-cell immunotherapy for the treatment of hematologic malignancies have highlighted the need for manufacturing processes that are robust and scalable for product commercialization. Here we review some of the more outstanding issues surrounding commercial scale manufacturing of personalized-adoptive T-cell medicinal products. These include closed system operations, improving process robustness and simplifying work flows, reducing labor intensity by implementing process automation, scalability and cost, as well as appropriate testing and tracking of products, all while maintaining strict adherence to Current Good Manufacturing Practices and regulatory guidelines. A decentralized manufacturing model is proposed, where in the future patients' cells could be processed at the point-of-care in the hospital. PMID:25613483
Conceptual information processing: A robust approach to KBS-DBMS integration
NASA Technical Reports Server (NTRS)
Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond
1987-01-01
Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.
Less can be more: How to make operations more flexible and robust with fewer resources
NASA Astrophysics Data System (ADS)
Haksöz, Çağrı; Katsikopoulos, Konstantinos; Gigerenzer, Gerd
2018-06-01
We review empirical evidence from practice and general theoretical conditions under which simple rules of thumb can help to make operations flexible and robust. An operation is flexible when it responds adaptively to adverse events such as natural disasters; an operation is robust when it is less affected by adverse events in the first place. We illustrate the relationship between flexibility and robustness in the context of supply chain risk. In addition to increasing flexibility and robustness, simple rules simultaneously reduce the need for resources such as time, money, information, and computation. We illustrate the simple-rules approach with an easy-to-use graphical aid for diagnosing and managing supply chain risk. More generally, we recommend a four-step process for determining the amount of resources that decision makers should invest in so as to increase flexibility and robustness.
Robust optimization of front members in a full frontal car impact
NASA Astrophysics Data System (ADS)
Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar
2013-03-01
In the search for lightweight automobile designs, it is necessary to assure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization where all design evaluations include an evaluation of robustness. Metamodel approximations are applied both to the design space and the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example, and further demonstrated in a large-scale design example of the front side members of a car. Different optimization formulations are applied and it is shown that the proposed approach works well. It is also concluded that a robust optimization puts higher demands on the finite element model performance than usual.
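The nested structure (metamodel inside, robustness evaluation outside) can be illustrated in one dimension. In this toy, a piecewise-linear interpolant stands in for the paper's neural-network metamodels, the "simulation" is an analytic function with one sharp and one broad minimum, and each candidate design is scored by mean plus a multiple of the standard deviation under input dispersion, which moves the optimum to the flatter, more robust region.

```python
# Robust optimization on a cheap metamodel: mean + k*std under input scatter.
import numpy as np

def simulate(x):   # stand-in for an expensive crash simulation
    return 1.0 - 1.2 * np.exp(-50 * (x + 1)**2) - 1.0 * np.exp(-2 * (x - 1)**2)

x_train = np.linspace(-2, 2, 81)
y_train = simulate(x_train)
model = lambda x: np.interp(x, x_train, y_train)   # cheap metamodel

rng = np.random.default_rng(10)
noise = rng.normal(0, 0.3, size=4000)              # design-variable dispersion

def robust_score(x, k=1.0):
    y = model(x + noise)
    return y.mean() + k * y.std()

grid = np.linspace(-1.8, 1.8, 361)
det = grid[np.argmin(model(grid))]
rob = grid[np.argmin([robust_score(x) for x in grid])]
print(round(det, 2), round(rob, 2))   # sharp deterministic optimum vs broad robust one
```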
NASA Astrophysics Data System (ADS)
Ryan, R.
1993-03-01
Robustness is a buzz word common to all newly proposed space system designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must be sure he understands the definition and goals of robustness. This paper deals with these issues as well as the need for a robustness requirement.
NASA Technical Reports Server (NTRS)
Ryan, R.
1993-01-01
Robustness is a buzz word common to all newly proposed space system designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must be sure he understands the definition and goals of robustness. This paper deals with these issues as well as the need for a robustness requirement.
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
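The SVD-based construction above lends itself to a compact numerical illustration. The following is a minimal sketch under stated assumptions, not the authors' procedure: the matrix H and its interpretation are invented, and the toy ordering simply ranks left singular vectors by singular value, treating directions with small singular values as the least sensitive to model uncertainty.

```python
import numpy as np

# Hypothetical constructed matrix: rows span candidate redundancy directions,
# columns stack model/measurement data over an ensemble (purely illustrative).
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 10))

# SVD of the constructed matrix; left singular vectors serve as candidate
# redundancy relations, ordered by their singular values.
U, s, Vt = np.linalg.svd(H)

# Smaller singular value -> direction least excited by model uncertainty,
# i.e., a more robust redundancy relation in this toy ordering.
for rank, i in enumerate(np.argsort(s)):
    print(f"relation {rank}: sigma = {s[i]:.3f}, vector = {U[:, i].round(2)}")
```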
Pimperl, Alexander F; Rodriguez, Hector P; Schmittdiel, Julie A; Shortell, Stephen M
2018-06-01
To identify positive deviant (PD) physician organizations of Accountable Care Organizations (ACOs) with robust performance management systems (PMSYS). Third National Survey of Physician Organizations (NSPO3, n = 1,398). Organizational and external factors from NSPO3 were analyzed. Linear regression estimated the association of internal and contextual factors with PMSYS. Two cutpoints (75th/90th percentiles) identified PDs with the largest residuals and highest PMSYS scores. A total of 65 and 41 PDs were identified using the 75th and 90th percentile cutpoints, respectively. The 90th percentile more strongly differentiated PDs from non-PDs. Having a high proportion of vulnerable patients appears to constrain PMSYS development. Our PD identification method increases the likelihood that PD organizations selected for in-depth inquiry are high-performing organizations that exceed expectations. © Health Research and Educational Trust.
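The residual-plus-cutpoint selection is easy to sketch. The snippet below uses invented data standing in for NSPO3 (covariates, coefficients, and PMSYS scores are all synthetic), and it applies only the large-residual criterion; the study additionally requires the highest raw PMSYS scores.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: X = organizational/contextual factors, y = PMSYS score.
rng = np.random.default_rng(1)
X = rng.normal(size=(1398, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=1.0, size=1398)

model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

# Positive deviants: organizations whose residuals exceed a percentile cutpoint.
for pct in (75, 90):
    cut = np.percentile(residuals, pct)
    print(f"{pct}th percentile cutpoint -> {(residuals > cut).sum()} candidate PDs")
```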
McElearney, Kyle; Ali, Amr; Gilbert, Alan; Kshirsagar, Rashmi; Zang, Li
2016-01-01
Chemically defined media have been widely used in the biopharmaceutical industry to enhance cell culture productivities and ensure process robustness. These media, which are quite complex, often contain a mixture of many components such as vitamins, amino acids, metals and other chemicals. Some of these components are known to be sensitive to various stress factors including photodegradation. Previous work has shown that small changes in impurity concentrations induced by these potential stresses can have a large impact on the cell culture process including growth and product quality attributes. Furthermore, it has been shown to be difficult to detect these modifications analytically due to the complexity of the cell culture media and the trace level of the degradant products. Here, we describe work performed to identify the specific chemical(s) in photodegraded medium that affect cell culture performance. First, we developed a model system capable of detecting changes in cell culture performance. Second, we used these data and applied an LC-MS analytical technique to characterize the cell culture media and identify degradant products which affect cell culture performance. Riboflavin limitation and N-formylkynurenine (NFK), a tryptophan oxidation catabolite, were identified as chemicals which result in a reduction in cell culture performance. © 2015 American Institute of Chemical Engineers.
Combining Digital Watermarking and Fingerprinting Techniques to Identify Copyrights for Color Images
Hsieh, Shang-Lin; Chen, Chun-Che; Shen, Wen-Shan
2014-01-01
This paper presents a copyright identification scheme for color images that takes advantage of the complementary nature of watermarking and fingerprinting. It utilizes an authentication logo and the extracted features of the host image to generate a fingerprint, which is then stored in a database and also embedded in the host image to produce a watermarked image. When a dispute over the copyright of a suspect image occurs, the image is first processed by watermarking. If the watermark can be retrieved from the suspect image, the copyright can then be confirmed; otherwise, the watermark then serves as the fingerprint and is processed by fingerprinting. If a match in the fingerprint database is found, then the suspect image will be considered a duplicated one. Because the proposed scheme utilizes both watermarking and fingerprinting, it is more robust than those that only adopt watermarking, and it can also obtain the preliminary result more quickly than those that only utilize fingerprinting. The experimental results show that when the watermarked image suffers slight attacks, watermarking alone is enough to identify the copyright. The results also show that when the watermarked image suffers heavy attacks that render watermarking ineffective, fingerprinting can successfully identify the copyright, hence demonstrating the effectiveness of the proposed scheme. PMID:25114966
The Robust Beauty of Ordinary Information
ERIC Educational Resources Information Center
Katsikopoulos, Konstantinos V.; Schooler, Lael J.; Hertwig, Ralph
2010-01-01
Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues…
Gowin, Joshua L; Ball, Tali M; Wittmann, Marc; Tapert, Susan F; Paulus, Martin P
2015-07-01
Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Sixty-eight methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, a combined model) to generate predictions for each participant regarding their relapse likelihood. Eighteen individuals relapsed. There were significant group-by-reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics, such that a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. Published by Elsevier Ireland Ltd.
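The reported likelihood ratios follow directly from a classifier's sensitivity and specificity. In the sketch below, the sensitivity/specificity pair is an assumption, back-solved so that the outputs approximate the abstract's LR+ of 2.63 and LR- of 0.48; the paper does not state these exact inputs.

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive and negative likelihood ratios from test characteristics."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative values only, chosen to reproduce the reported ratios.
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.64, specificity=0.757)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # ~2.63 and ~0.48
```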
Xiong, Shisheng; Wan, Lei; Ishida, Yoshihito; Chapuis, Yves-Andre; Craig, Gordon S W; Ruiz, Ricardo; Nealey, Paul F
2016-08-23
Directed self-assembly (DSA) of block copolymers (BCPs) is a leading strategy to pattern at sublithographic resolution in the technology roadmap for semiconductors and is the only known solution to fabricate nanoimprint templates for the production of bit-patterned media. While great progress has been made to implement block copolymer lithography with features in the range of 10-20 nm, patterning solutions below 10 nm are still not mature. Many BCP systems self-assemble at this length scale, but challenges remain in simultaneously tuning the interfacial energy atop the film to control the orientation of BCP domains, designing materials, templates, and processes for ultra-high-density DSA, and establishing a robust pattern transfer strategy. Among the various solutions to achieve domains that are perpendicular to the substrate, solvent annealing is advantageous because it is a versatile method that can be applied to a diversity of materials. Here we report a DSA process based on chemical contrast templates and solvent annealing to fabricate 8 nm features on a 16 nm pitch. To make this possible, a number of innovations were brought in concert with a common platform: (1) assembling the BCP in the phase-separated, solvated state, (2) identifying a larger process window for solvated triblock vs diblock BCPs as a function of solvent volume fraction, (3) employing templates for sub-10-nm BCP systems accessible by lithography, and (4) integrating a robust pattern transfer strategy by vapor infiltration of organometallic precursors for selective metal oxide synthesis to prepare an inorganic hard mask.
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real networks regarded as interdependent networks are always coupled together by multiple networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on the model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and we analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed decrease the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
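Counting minimum driver nodes is commonly reduced to a maximum matching on the network's bipartite representation (the structural-controllability result of Liu, Slotine, and Barabási). The sketch below assumes that framework and a random test graph; it does not reproduce the paper's cascading-failure simulation or backup strategies.

```python
import networkx as nx

def minimum_driver_nodes(G: nx.DiGraph) -> int:
    """N_D = N - |maximum matching| on the bipartite out/in representation."""
    B = nx.Graph()
    out_nodes = [("out", u) for u in G]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from((("in", v) for v in G), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
    matching = nx.bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched_edges = len(matching) // 2  # the dict stores both directions
    return max(len(G) - matched_edges, 1)

G = nx.gnp_random_graph(20, 0.1, directed=True, seed=2)
print("minimum driver nodes:", minimum_driver_nodes(G))
```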
Asfaw, Abay
2011-02-01
Overweight/obesity, caused by the 'nutrition transition', is identified as one of the leading risk factors for non-communicable mortality. The nutrition transition in developing countries is associated with a major shift from the consumption of staple crops and whole grains to highly and partially processed foods. This study examines the contribution of processed foods consumption to the prevalence of overweight/obesity in Guatemala using generalized method of moments (GMM) regression. The results show that, all other things remaining constant, a 10-percentage-point increase in the share of partially processed foods in the total household food expenditure increases the BMI of family members (aged 10 years and above) by 3.95%. The impact of highly processed foods is much stronger. A 10-percentage-point increase in the share of highly processed food items increases the BMI of individuals by 4.25%, ceteris paribus. The results are robust when body weight is measured by overweight/obesity indicators. These findings suggest that increasing shares of partially and highly processed foods in the total consumption expenditure could be one of the major risk factors for the high prevalence of overweight/obesity in the country.
What Is Robustness?: Problem Framing Challenges for Water Systems Planning Under Change
NASA Astrophysics Data System (ADS)
Herman, J. D.; Reed, P. M.; Zeff, H. B.; Characklis, G. W.
2014-12-01
Water systems planners have long recognized the need for robust solutions capable of withstanding deviations from the conditions for which they were designed. Faced with a set of alternatives to choose from—for example, resulting from a multi-objective optimization—existing analysis frameworks offer competing definitions of robustness under change. Robustness analyses have moved from expected utility to exploratory "bottom-up" approaches in which vulnerable scenarios are identified prior to assigning likelihoods; examples include Robust Decision Making (RDM), Decision Scaling, Info-Gap, and Many-Objective Robust Decision Making (MORDM). We propose a taxonomy of robustness frameworks to compare and contrast these approaches, based on their methods of (1) alternative selection, (2) sampling of states of the world, (3) quantification of robustness measures, and (4) identification of key uncertainties using sensitivity analysis. Using model simulations from recent work in multi-objective urban water supply portfolio planning, we illustrate the decision-relevant consequences that emerge from each of these choices. Results indicate that the methodological choices in the taxonomy lead to substantially different planning alternatives, underscoring the importance of an informed definition of robustness. We conclude with a set of recommendations for problem framing: that alternatives should be searched rather than prespecified; dominant uncertainties should be discovered rather than assumed; and that a multivariate satisficing measure of robustness allows stakeholders to achieve their problem-specific performance requirements. This work highlights the importance of careful problem formulation, and provides a common vocabulary to link the robustness frameworks widely used in the field of water systems planning.
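The multivariate satisficing measure recommended in the conclusions can be computed as the fraction of sampled states of the world in which every performance requirement is met simultaneously. The samples and thresholds below are invented for illustration.

```python
import numpy as np

# Toy scenario ensemble: rows = sampled states of the world,
# columns = performance criteria (e.g., reliability, cost, restrictions).
rng = np.random.default_rng(3)
performance = rng.uniform(size=(1000, 3))

# Stakeholder-specific requirements (illustrative thresholds).
meets = np.column_stack([
    performance[:, 0] >= 0.5,   # reliability must be high enough
    performance[:, 1] <= 0.6,   # cost must stay below a limit
    performance[:, 2] <= 0.7,   # use restrictions must stay below a limit
])

# Satisficing robustness: fraction of states of the world where all hold.
print(f"satisficing robustness = {meets.all(axis=1).mean():.3f}")
```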
Subedi, Amit; Futamura, Yushi; Nishi, Mayuko; Ryo, Akihide; Watanabe, Nobumoto; Osada, Hiroyuki
2016-09-02
Cancer stem cells (CSCs) have robust systems to maintain cancer stemness and drug resistance. Thus, targeting such robust systems instead of focusing on individual signaling pathways should be the approach allowing the identification of selective CSC inhibitors. Here, we used the alkaline phosphatase (ALP) assay to identify inhibitors of cancer stemness in induced cancer stem-like (iCSCL) cells. We screened several compounds from a natural product chemical library and evaluated hit compounds for their efficacy on cancer stemness in iCSCL tumorspheres. We identified artesunate, an antimalarial drug, as a selective inhibitor of cancer stemness. Artesunate induced mitochondrial dysfunction that selectively inhibited cancer stemness of iCSCL cells, indicating an essential role of mitochondrial metabolism in cancer stemness. Copyright © 2016 Elsevier Inc. All rights reserved.
Mitsui, Jun; Fukuda, Yoko; Azuma, Kyo; Tozaki, Hirokazu; Ishiura, Hiroyuki; Takahashi, Yuji; Goto, Jun; Tsuji, Shoji
2010-07-01
We have recently found that multiple rare variants of the glucocerebrosidase gene (GBA) confer a robust risk for Parkinson disease, supporting the 'common disease-multiple rare variants' hypothesis. To develop an efficient method of identifying rare variants in a large number of samples, we applied multiplexed resequencing using a next-generation sequencer to the identification of rare variants of GBA. Sixteen sets of pooled DNAs from six pooled DNA samples were prepared. Each set of pooled DNAs was subjected to polymerase chain reaction to amplify the target gene (GBA) covering 6.5 kb, pooled into one tube with barcode indexing, and then subjected to extensive sequence analysis using the SOLiD System. Individual samples were also subjected to direct nucleotide sequence analysis. With the optimization of data processing, we were able to extract all the variants from 96 samples with acceptable rates of false-positive single-nucleotide variants.
Using cluster analysis to organize and explore regional GPS velocities
Simpson, Robert W.; Thatcher, Wayne; Savage, James C.
2012-01-01
Cluster analysis offers a simple visual exploratory tool for the initial investigation of regional Global Positioning System (GPS) velocity observations, which are providing increasingly precise mappings of actively deforming continental lithosphere. The deformation fields from dense regional GPS networks can often be concisely described in terms of relatively coherent blocks bounded by active faults, although the choice of blocks, their number and size, can be subjective and is often guided by the distribution of known faults. To illustrate our method, we apply cluster analysis to GPS velocities from the San Francisco Bay Region, California, to search for spatially coherent patterns of deformation, including evidence of block-like behavior. The clustering process identifies four robust groupings of velocities that we identify with four crustal blocks. Although the analysis uses no prior geologic information other than the GPS velocities, the cluster/block boundaries track three major faults, both locked and creeping.
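A minimal sketch of the idea of clustering velocities alone (no coordinates, faults, or other geologic priors): k-means applied to synthetic east/north velocity components drawn around four invented block motions, echoing the four robust groupings reported for the Bay Region.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a regional GPS velocity field (mm/yr).
rng = np.random.default_rng(4)
block_means = np.array([[20, 5], [15, 8], [10, 12], [5, 15]])
velocities = np.vstack([m + rng.normal(scale=1.0, size=(50, 2))
                        for m in block_means])

# Cluster on the velocity components only.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(velocities)
for k in range(4):
    mean_v = velocities[labels == k].mean(axis=0).round(1)
    print(f"cluster {k}: mean east/north velocity = {mean_v}")
```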
Real-time Bayesian anomaly detection in streaming environmental data
NASA Astrophysics Data System (ADS)
Hill, David J.; Minsker, Barbara S.; Amir, Eyal
2009-04-01
With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
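A heavily simplified sketch of the filtering idea: a one-dimensional Kalman filter with a random-walk process model flags observations whose innovation exceeds a multiple of its predicted standard deviation. This omits the paper's dynamic Bayesian network machinery (no Rao-Blackwellized particle filtering, no multivariate coupling), and every parameter below is an assumption.

```python
import numpy as np

def kalman_anomaly_flags(z, q=1e-3, r=0.5, n_sigma=4.0):
    """Flag points whose innovation exceeds n_sigma predicted std devs."""
    x, p = z[0], 1.0
    flags = [False]
    for zt in z[1:]:
        p_pred = p + q                    # predict (random-walk state model)
        innov, s = zt - x, p_pred + r     # innovation and its variance
        flags.append(abs(innov) > n_sigma * np.sqrt(s))
        k = p_pred / s                    # Kalman gain and update
        x, p = x + k * innov, (1 - k) * p_pred
    return np.array(flags)

rng = np.random.default_rng(5)
temps = 20 + np.cumsum(rng.normal(scale=0.05, size=500))  # synthetic stream
temps[250] += 8.0                                         # injected spike
print("flagged indices:", np.flatnonzero(kalman_anomaly_flags(temps)))
```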
Container weld identification using portable laser scanners
NASA Astrophysics Data System (ADS)
Taddei, Pierluigi; Boström, Gunnar; Puig, David; Kravtchenko, Victor; Sequeira, Vítor
2015-03-01
Identification and integrity verification of sealed containers for security applications can be obtained by employing noninvasive portable optical systems. We present a portable laser range imaging system capable of identifying welds, a byproduct of a container's physical sealing, with micrometer accuracy. It is based on the assumption that each weld has a unique three-dimensional (3-D) structure which cannot be copied or forged. We process the 3-D surface to generate a normalized depth map which is invariant to mechanical alignment errors and that is used to build compact signatures representing the weld. A weld is identified by performing cross correlations of its signature against a set of known signatures. The system has been tested on realistic datasets, containing hundreds of welds, yielding no false positives or false negatives and thus showing the robustness of the system and the validity of the chosen signature.
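The matching step can be illustrated with zero-lag normalized cross-correlation of signatures, assuming depth maps that are already registered (the real system's normalization against mechanical alignment errors is omitted); all data below are synthetic.

```python
import numpy as np

def weld_signature(depth_map):
    """Zero-mean, unit-norm signature of a weld depth map (simplified)."""
    sig = depth_map - depth_map.mean()
    return sig / np.linalg.norm(sig)

def identify(query, database):
    """Return the database entry with the highest correlation score."""
    q = weld_signature(query)
    scores = {name: float((q * weld_signature(ref)).sum())
              for name, ref in database.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(6)
db = {f"weld_{i}": rng.normal(size=(64, 64)) for i in range(100)}
noisy_rescan = db["weld_42"] + rng.normal(scale=0.1, size=(64, 64))
print(identify(noisy_rescan, db))  # expected: weld_42
```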
Two-photon calcium imaging from head-fixed Drosophila during optomotor walking behavior.
Seelig, Johannes D; Chiappe, M Eugenia; Lott, Gus K; Dutta, Anirban; Osborne, Jason E; Reiser, Michael B; Jayaraman, Vivek
2010-07-01
Drosophila melanogaster is a model organism rich in genetic tools to manipulate and identify neural circuits involved in specific behaviors. Here we present a technique for two-photon calcium imaging in the central brain of head-fixed Drosophila walking on an air-supported ball. The ball's motion is tracked at high resolution and can be treated as a proxy for the fly's own movements. We used the genetically encoded calcium sensor, GCaMP3.0, to record from important elements of the motion-processing pathway, the horizontal-system lobula plate tangential cells (LPTCs) in the fly optic lobe. We presented motion stimuli to the tethered fly and found that calcium transients in horizontal-system neurons correlated with robust optomotor behavior during walking. Our technique allows both behavior and physiology in identified neurons to be monitored in a genetic model organism with an extensive repertoire of walking behaviors.
Robust Kalman filter design for predictive wind shear detection
NASA Technical Reports Server (NTRS)
Stratton, Alexander D.; Stengel, Robert F.
1991-01-01
Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear and avoid false warning. Comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.
Galán, Chardée A; Shaw, Daniel S; Dishion, Thomas J; Wilson, Melvin N
2017-07-01
The tremendous negative impact of conduct problems on the individual and society has provided the impetus for identifying risk factors, particularly in early childhood. Exposure to neighborhood deprivation in early childhood is a robust predictor of conduct problems in middle childhood. Efforts to identify and test mediating mechanisms by which neighborhood deprivation confers increased risk for behavioral problems have predominantly focused on peer relationships and community-level social processes. Less attention has been dedicated to potential cognitive mediators of this relationship, such as aggressive response generation, which refers to the tendency to generate aggressive solutions to ambiguous social stimuli with negative outcomes. In this study, we examined aggressive response generation, a salient component of social information processing, as a mediating process linking neighborhood deprivation to later conduct problems at age 10.5. Participants (N = 731; 50.5% male) were drawn from a multisite randomized prevention trial that includes an ethnically diverse and low-income sample of male and female children and their primary caregivers followed prospectively from toddlerhood to middle childhood. Results indicated that aggressive response generation partially mediated the relationship between neighborhood deprivation and parent- and teacher-report of conduct problems, but not youth-report. Results suggest that the detrimental effects of neighborhood deprivation on youth adjustment may occur by altering the manner in which children process social information.
Galán, Chardée A.; Shaw, Daniel S.; Dishion, Thomas J.; Wilson, Melvin N.
2018-01-01
The tremendous negative impact of conduct problems on the individual and society has provided the impetus for identifying risk factors, particularly in early childhood. Exposure to neighborhood deprivation in early childhood is a robust predictor of conduct problems in middle childhood. Efforts to identify and test mediating mechanisms by which neighborhood deprivation confers increased risk for behavioral problems have predominantly focused on peer relationships and community-level social processes. Less attention has been dedicated to potential cognitive mediators of this relationship, such as aggressive response generation, which refers to the tendency to generate aggressive solutions to ambiguous social stimuli with negative outcomes. In this study, we examined aggressive response generation, a salient component of social information processing, as a mediating process linking neighborhood deprivation to later conduct problems at age 10.5. Participants (N = 731; 50.5% male) were drawn from a multisite randomized prevention trial that includes an ethnically diverse and low-income sample of male and female children and their primary caregivers followed prospectively from toddlerhood to middle childhood. Results indicated that aggressive response generation partially mediated the relationship between neighborhood deprivation and parent- and teacher-report of conduct problems, but not youth-report. Results suggest that the detrimental effects of neighborhood deprivation on youth adjustment may occur by altering the manner in which children process social information. PMID:27696324
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
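The back-calculation anti-windup scheme mentioned above can be sketched generically: whenever the output saturates, a tracking term proportional to the saturation excess bleeds charge off the integrator. The gains, limits, and open-loop error trace below are assumptions, not the article's GSA-tuned values.

```python
import numpy as np

def antiwindup_pi(errors, kp=2.0, ki=1.0, kt=1.0, u_min=-1.0, u_max=1.0, dt=0.01):
    """PI control with back-calculation anti-windup on an error trace."""
    integral, out = 0.0, []
    for e in errors:
        u = kp * e + ki * integral
        u_sat = float(np.clip(u, u_min, u_max))
        # Back-calculation: kt * (u_sat - u) unwinds the integrator
        # whenever the actuator is saturated (u_sat != u).
        integral += dt * (e + kt * (u_sat - u))
        out.append(u_sat)
    return out

# A large sustained error drives the actuator into saturation; the
# tracking term keeps the integral state from winding up meanwhile.
u = antiwindup_pi([1.5] * 200 + [0.0] * 200)
print("during saturation:", u[199], "| just after error clears:", u[210])
```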
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a very robust and established three-dimensional hydrological model to develop a simpler parameterization that represents the aquifer to land surface interactions. The main goal of our developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness") to allow for easy implementation in ESMs globally. Our study focuses primarily on the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while model deviations from the full 3D model remain small for these processes.
Conservation planning under uncertainty in urban development and vegetation dynamics
Troupin, David; Carmel, Yohay
2018-01-01
Systematic conservation planning is a framework for optimally locating and prioritizing areas for conservation. An often-noted shortcoming of most conservation planning studies is that they do not address future uncertainty. The selection of protected areas that are intended to ensure the long-term persistence of biodiversity is often based on a snapshot of the current situation, ignoring processes such as climate change. Scenarios, in the sense of being accounts of plausible futures, can be utilized to identify conservation area portfolios that are robust to future uncertainty. We compared three approaches for utilizing scenarios in conservation area selection: considering a full set of scenarios (all-scenarios portfolio), assuming the realization of specific scenarios, and a reference strategy based on the current situation (current distributions portfolio). Our objective was to compare the robustness of these approaches in terms of their relative performance across future scenarios. We focused on breeding bird species in Israel’s Mediterranean region. We simulated urban development and vegetation dynamics scenarios 60 years into the future using DINAMICA-EGO, a cellular-automata simulation model. For each scenario, we mapped the target species’ available habitat distribution, identified conservation priority areas using the site-selection software MARXAN, and constructed conservation area portfolios using the three aforementioned strategies. We then assessed portfolio performance based on the number of species for which representation targets were met in each scenario. The all-scenarios portfolio consistently outperformed the other portfolios, and was more robust to ‘errors’ (e.g., when an assumed specific scenario did not occur). On average, the all-scenarios portfolio achieved representation targets for five additional species compared with the current distributions portfolio (approximately 33 versus 28 species). Our findings highlight the importance of considering a broad and meaningful set of scenarios, rather than relying on the current situation, the expected occurrence of specific scenarios, or the worst-case scenario. PMID:29621330
Conservation planning under uncertainty in urban development and vegetation dynamics.
Troupin, David; Carmel, Yohay
2018-01-01
Systematic conservation planning is a framework for optimally locating and prioritizing areas for conservation. An often-noted shortcoming of most conservation planning studies is that they do not address future uncertainty. The selection of protected areas that are intended to ensure the long-term persistence of biodiversity is often based on a snapshot of the current situation, ignoring processes such as climate change. Scenarios, in the sense of being accounts of plausible futures, can be utilized to identify conservation area portfolios that are robust to future uncertainty. We compared three approaches for utilizing scenarios in conservation area selection: considering a full set of scenarios (all-scenarios portfolio), assuming the realization of specific scenarios, and a reference strategy based on the current situation (current distributions portfolio). Our objective was to compare the robustness of these approaches in terms of their relative performance across future scenarios. We focused on breeding bird species in Israel's Mediterranean region. We simulated urban development and vegetation dynamics scenarios 60 years into the future using DINAMICA-EGO, a cellular-automata simulation model. For each scenario, we mapped the target species' available habitat distribution, identified conservation priority areas using the site-selection software MARXAN, and constructed conservation area portfolios using the three aforementioned strategies. We then assessed portfolio performance based on the number of species for which representation targets were met in each scenario. The all-scenarios portfolio consistently outperformed the other portfolios, and was more robust to 'errors' (e.g., when an assumed specific scenario did not occur). On average, the all-scenarios portfolio achieved representation targets for five additional species compared with the current distributions portfolio (approximately 33 versus 28 species). Our findings highlight the importance of considering a broad and meaningful set of scenarios, rather than relying on the current situation, the expected occurrence of specific scenarios, or the worst-case scenario.
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large amount of planetary images is being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are needed for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.
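The energy-minimization step can be shown with a bare-bones simulated annealing loop. This sketch omits the reversible-jump moves (birth and death of ellipses) and the prior/likelihood split; the 'configuration' is reduced to a single ellipse centre and the energy to a squared distance, purely for illustration.

```python
import math
import random

def simulated_annealing(energy, propose, x0, t0=1.0, cooling=0.995, steps=20000):
    """Accept worse configurations with probability exp(-dE/T), cooling T."""
    x, e, t = x0, energy(x0), t0
    for _ in range(steps):
        x_new = propose(x)
        e_new = energy(x_new)
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
        t *= cooling
    return x, e

# Toy configuration: one ellipse centre pulled toward a fixed target.
target = (30.0, 40.0)
energy = lambda c: (c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2
propose = lambda c: (c[0] + random.gauss(0, 1), c[1] + random.gauss(0, 1))
print(simulated_annealing(energy, propose, (0.0, 0.0)))
```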
Kaneko, Kunihiko
2011-06-01
Here I present and discuss a model that, among other things, appears able to describe the dynamics of cancer cell origin from the perspective of stable and unstable gene expression profiles. In identifying such aberrant gene expression profiles as lying outside the normal stable states attracted through development and normal cell differentiation, the hypothesis explains why cancer cells accumulate mutations, to which they are not robust, and why these mutations create a new stable state far from the normal gene expression profile space. Such cells are in strong contrast with normal cell types that appeared as an attractor state in the gene expression dynamical system under cell-cell interaction and achieved robustness to noise through evolution, which in turn also conferred robustness to mutation. In complex gene regulation networks, other aberrant cellular states lacking such high robustness are expected to remain, which would correspond to cancer cells. Copyright © 2011 WILEY Periodicals, Inc.
Emergence of robustness in networks of networks
NASA Astrophysics Data System (ADS)
Roth, Kevin; Morone, Flaviano; Min, Byungjoon; Makse, Hernán A.
2017-06-01
A model of interdependent networks of networks (NONs) was introduced recently [Proc. Natl. Acad. Sci. (USA) 114, 3849 (2017), 10.1073/pnas.1620808114] in the context of brain activation to identify the neural collective influencers in the brain NON. Here we investigate the emergence of robustness in such a model, and we develop an approach to derive an exact expression for the random percolation transition in Erdős-Rényi NONs of this kind. Analytical calculations are in agreement with numerical simulations, and highlight the robustness of the NON against random node failures, which thus presents a new robust universality class of NONs. The key aspect of this robust NON model is that a node can be activated even if it does not belong to the giant mutually connected component, thus allowing the NON to be built from below the percolation threshold, which is not possible in previous models of interdependent networks. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NON is most vulnerable, thereby marking the boundary above which the robustness of the system improves with increasing dependency connections.
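Robustness against random failures can be probed numerically: remove a fraction of nodes at random and measure the relative size of the largest connected component. The sketch below does this for an ordinary Erdős-Rényi graph, not the interdependent NON model, whose activation rule differs as described above.

```python
import networkx as nx
import numpy as np

def giant_component_fraction(G, f, rng):
    """Relative giant-component size after removing a fraction f of nodes."""
    H = G.copy()
    remove = rng.choice(list(H.nodes), size=int(f * len(H)), replace=False)
    H.remove_nodes_from(remove)
    if len(H) == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / len(G)

rng = np.random.default_rng(9)
G = nx.gnp_random_graph(2000, 3.0 / 2000, seed=9)   # mean degree ~ 3
for f in (0.2, 0.4, 0.6, 0.8):
    frac = giant_component_fraction(G, f, rng)
    print(f"f = {f}: giant component fraction = {frac:.3f}")
```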
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker while the attacker's valuation of targets, being the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To the best of our knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with budget of uncertainty and price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
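A toy rendering of allocation under interval-valued attacker attributes: the defender enumerates budget splits and minimizes the worst-case expected loss with valuations pushed to their interval upper bounds. The attenuation function, intervals, and budget are all invented, and the article's budget-of-uncertainty and equilibrium machinery is not represented.

```python
import itertools
import numpy as np

hi = np.array([4.0, 8.0, 12.0])   # interval upper bounds on target valuations
budget = 10

def worst_case_loss(alloc):
    # Attack success decays with defensive investment (assumed form);
    # the attacker hits the target most valuable in the worst case.
    success = np.exp(-0.5 * np.array(alloc))
    return float((hi * success).max())

allocs = [a for a in itertools.product(range(budget + 1), repeat=3)
          if sum(a) == budget]
best = min(allocs, key=worst_case_loss)
print("robust allocation:", best,
      "worst-case loss:", round(worst_case_loss(best), 2))
```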
Lee, Ji Min; Park, Sung Hwan; Kim, Jong Shik
2013-01-01
A robust control scheme is proposed for the position control of the electrohydrostatic actuator (EHA) when considering hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. To reduce overshoot due to saturation of the electric motor and to realize robustness against load disturbance and lumped system uncertainties such as varying parameters and modeling error, this paper proposes an adaptive anti-windup PID sliding mode scheme as a robust position controller for the EHA system. An optimal PID controller and an optimal anti-windup PID controller are also designed to compare control performance. An EHA prototype is developed, carrying out system modeling and parameter identification in designing the position controller. The simply identified linear model serves as the basis for the design of the position controllers, while the robustness of the control systems is compared by experiments. The adaptive anti-windup PID sliding mode controller has been found to have the desired performance and to be robust against hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. PMID:23983640
Stochastic Noise and Synchronisation during Dictyostelium Aggregation Make cAMP Oscillations Robust
Kim, Jongrae; Heslop-Harrison, Pat; Postlethwaite, Ian; Bates, Declan G
2007-01-01
Stable and robust oscillations in the concentration of adenosine 3′, 5′-cyclic monophosphate (cAMP) are observed during the aggregation phase of starvation-induced development in Dictyostelium discoideum. In this paper we use mathematical modelling together with ideas from robust control theory to identify two factors which appear to make crucial contributions to ensuring the robustness of these oscillations. Firstly, we show that stochastic fluctuations in the molecular interactions play an important role in preserving stable oscillations in the face of variations in the kinetics of the intracellular network. Secondly, we show that synchronisation of the aggregating cells through the diffusion of extracellular cAMP is a key factor in ensuring robustness of the oscillatory waves of cAMP observed in Dictyostelium cell cultures to cell-to-cell variations. A striking and quite general implication of the results is that the robustness analysis of models of oscillating biomolecular networks (circadian clocks, Ca2+ oscillations, etc.) can only be done reliably by using stochastic simulations, even in the case where molecular concentrations are very high. PMID:17997595
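The claim that robustness analysis requires stochastic simulation can be illustrated with a minimal Gillespie (stochastic simulation algorithm) loop for a toy birth-death reaction; the species, rates, and setup are invented, not the Dictyostelium cAMP network.

```python
import numpy as np

# Toy birth-death process: 0 -> X at rate k1, X -> 0 at rate k2 * n.
def gillespie(k1=100.0, k2=1.0, n0=0, t_end=20.0, seed=8):
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a1, a2 = k1, k2 * n               # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)    # time to next reaction
        n += 1 if rng.uniform() < a1 / a0 else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie()
# Intrinsic noise keeps the copy number fluctuating around k1/k2 = 100.
print(f"mean = {counts.mean():.1f}, std = {counts.std():.1f}")
```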
Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L
2015-10-01
Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Department at an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps and 76 failures in 42 process steps at the community hospital and academic medical center, respectively. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level, to develop robust and sustainable solutions to reduce DTN time. © 2015 American Heart Association, Inc.
This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...
Robustness, evolvability, and the logic of genetic regulation.
Payne, Joshua L; Moore, Jason H; Wagner, Andreas
2014-01-01
In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene's cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: For the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, so that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype.
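A reduced sketch of enumerating signal-integration functions: all Boolean functions of k = 3 inputs are listed and scored with a simplified robustness measure, the fraction of single-input flips that leave the output unchanged. This input-perturbation measure is an illustrative stand-in; the paper defines robustness over mutations of the truth table itself within circuit dynamics.

```python
import itertools

def input_robustness(truth_table, k):
    """Fraction of (input state, single-bit flip) pairs with unchanged output."""
    same = 0
    for i in range(2 ** k):
        for bit in range(k):
            j = i ^ (1 << bit)          # index of the flipped input state
            same += truth_table[i] == truth_table[j]
    return same / (2 ** k * k)

k = 3
functions = list(itertools.product((0, 1), repeat=2 ** k))  # all 256 genotypes
robust = [f for f in functions if input_robustness(f, k) >= 0.75]
print(f"{len(robust)} of {len(functions)} functions have robustness >= 0.75")
```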
Robustness, Evolvability, and the Logic of Genetic Regulation
Payne, Joshua L.; Moore, Jason H.; Wagner, Andreas
2014-01-01
In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene’s cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: for the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, such that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype. PMID:23373974
NASA Astrophysics Data System (ADS)
Koiter, A. J.; Owens, P. N.; Petticrew, E. L.; Lobb, D. A.
2013-10-01
Sediment fingerprinting is a technique that is increasingly being used to improve the understanding of sediment dynamics within river basins. At present, one of the main limitations of the technique is the ability to link sediment back to its sources due to the non-conservative nature of many sediment properties. The processes that occur between the sediment source locations and the point of collection downstream are not well understood or quantified and currently represent a black box in the sediment fingerprinting approach. The literature on sediment fingerprinting tends to assume that there is a direct connection between sources and sinks, while much of the broader environmental sedimentology literature identifies that numerous chemical, biological and physical transformations and alterations can occur as sediment moves through the landscape. The focus of this paper is on the processes that drive particle size and organic matter selectivity and biological, geochemical and physical transformations and alterations, and how understanding these processes can be used to guide sampling protocols, fingerprint selection and data interpretation. The application of statistical approaches without consideration of how unique sediment fingerprints have developed and how robust they are within the environment is a major limitation of many recent studies. This review summarises the current information, identifies areas that need further investigation and provides recommendations for sediment fingerprinting that should be considered for adoption in future studies if the full potential and utility of the approach are to be realised.
Neuropsychological Profiles on the WAIS-IV of Adults With ADHD.
Theiling, Johanna; Petermann, Franz
2016-11-01
The aim of the study was to investigate the pattern of neuropsychological profiles on the Wechsler Adult Intelligence Scale-IV (WAIS-IV) for adults with ADHD relative to randomly matched controls and to assess overall intellectual ability discrepancies between the Full Scale Intelligence Quotient (FSIQ) and the General Ability Index (GAI). In all, 116 adults with ADHD and 116 controls between 16 and 71 years were assessed. Relative to controls, adults with ADHD show significant decrements in subtests with working memory and processing speed demands, with moderate to large effect sizes, and a higher GAI in comparison with the FSIQ. This suggests, first, that deficits identified with previous WAIS versions are robust in adults with ADHD and persist when assessed with the WAIS-IV; second, that the WAIS-IV reliably differentiates between patients and controls; and third, that a reduction of the FSIQ is most likely due to a decrement in working memory and processing speed abilities. The findings have essential implications for the diagnostic process. © The Author(s) 2014.
[INVITED] Computational intelligence for smart laser materials processing
NASA Astrophysics Data System (ADS)
Casalino, Giuseppe
2018-03-01
Computational intelligence (CI) involves using a computer algorithm to capture hidden knowledge from data and to use it for training an 'intelligent machine' to make complex decisions without human intervention. As simulation is becoming more prevalent from design and planning to manufacturing and operations, laser material processing can also benefit from computers generating knowledge through soft computing. This work is a review of the state of the art on the methodology and applications of CI in laser materials processing (LMP), which is nowadays receiving increasing interest from world-class manufacturers and Industry 4.0. The focus is on the methods that have been proven effective and robust in solving several problems in welding, cutting, drilling, surface treating and additive manufacturing using the laser beam. After a basic description of the most common computational intelligences employed in manufacturing, four sections, namely laser joining, machining, surface treatment, and additive manufacturing, cover the most recent applications in the already extensive literature regarding CI in LMP. Finally, emerging trends and future challenges are identified and discussed.
Potential Signatures of Semi-volatile Compounds Associated With Nuclear Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Probasco, Kathleen M.; Birnbaum, Jerome C.; Maughan, A. D.
2002-06-01
Semi-volatile chemicals associated with nuclear processes (e.g., the reprocessing of uranium to produce plutonium for nuclear weapons, or the separation of actinides from processing waste streams) can provide sticky residues or signatures that will attach to piping, ducting, soil, water, or other surface media. Volatile compounds, which are more suitable for electro-optical sensing, have been well studied. However, the semi-volatile compounds have not been well documented or studied. A majority of these semi-volatile chemicals are more robust than typical gaseous or liquid chemicals and can have lifetimes of several weeks, months, or years in the environment. However, large data gaps exist concerning these potential signature compounds and more research is needed to fill these data gaps so that important signature information is not overlooked or discarded. This report investigates key semi-volatile compounds associated with nuclear separations, identifies available chemical and physical properties, and discusses the degradation products that would result from hydrolysis, radiolysis and oxidation reactions on these compounds.
Ni, Xiao Yu; Drengstig, Tormod; Ruoff, Peter
2009-09-02
Organisms have the property to adapt to a changing environment and keep certain components within a cell regulated at the same level (homeostasis). "Perfect adaptation" describes an organism's response to an external stepwise perturbation by regulating some of its variables/components precisely to their original preperturbation values. Numerous examples of perfect adaptation/homeostasis have been found, as for example, in bacterial chemotaxis, photoreceptor responses, MAP kinase activities, or in metal-ion homeostasis. Two concepts have evolved to explain how perfect adaptation may be understood: In one approach (robust perfect adaptation), the adaptation is a network property, which is mostly, but not entirely, independent of rate constant values; in the other approach (nonrobust perfect adaptation), a fine-tuning of rate constant values is needed. Here we identify two classes of robust molecular homeostatic mechanisms, which compensate for environmental variations in a controlled variable's inflow or outflow fluxes, and allow for the presence of robust temperature compensation. These two classes of homeostatic mechanisms arise due to the fact that concentrations must have positive values. We show that the concept of integral control (or integral feedback), which leads to robust homeostasis, is associated with a control species that has to work under zero-order flux conditions and does not necessarily require the presence of a physico-chemical feedback structure. There are interesting links between the two identified classes of homeostatic mechanisms and molecular mechanisms found in mammalian iron and calcium homeostasis, indicating that homeostatic mechanisms may underlie similar molecular control structures.
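The integral-control principle can be demonstrated in a few lines: a controller species u integrates the deviation of the regulated variable A from its set point, so A returns exactly to the set point after a stepwise perturbation of its inflow. The equations and constants below are a generic sketch, not the authors' specific mechanisms; the clamp on u mimics the requirement that concentrations stay non-negative.

```python
# dA/dt = k_in - k_out*A - u   (regulated variable with perturbed inflow)
# du/dt = k_i * (A - A_set)    (integral controller)
A, u, A_set, k_out, k_i = 1.0, 0.0, 1.0, 1.0, 0.5
dt, T = 0.01, 60.0
for step in range(int(T / dt)):
    k_in = 1.0 if step * dt < 30.0 else 2.0   # stepwise inflow perturbation
    A += dt * (k_in - k_out * A - u)
    u = max(0.0, u + dt * k_i * (A - A_set))  # concentrations stay positive
print(f"A settles back to {A:.3f} (set point {A_set}) despite doubled inflow")
```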
LMI Based Robust Blood Glucose Regulation in Type-1 Diabetes Patient with Daily Multi-meal Ingestion
NASA Astrophysics Data System (ADS)
Mandal, S.; Bhattacharjee, A.; Sutradhar, A.
2014-04-01
This paper illustrates the design of a robust output-feedback H∞ controller for the nonlinear glucose-insulin (GI) process in a type-1 diabetes patient, delivering insulin through an intravenous infusion device. The H∞ design specifications have been realized using the concept of linear matrix inequalities (LMIs), and the LMI approach has been used to quadratically stabilize the GI process via the output-feedback H∞ controller. The controller has been designed on the basis of a full 19th-order linearized state-space model generated from the modified Sorensen nonlinear model of the GI process. The resulting controller has been tested with the nonlinear patient model (the modified Sorensen model) in the presence of patient parameter variations and other uncertainty conditions. The performance of the controller was assessed in terms of its ability to track the normoglycemic set point of 81 mg/dl under a typical multi-meal disturbance throughout a day, yielding robust performance and noise rejection.
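The LMI machinery referenced here can be illustrated with a toy feasibility problem. The sketch below (a deliberately simplified assumption, not the paper's 19th-order design) uses cvxpy to search for a Lyapunov matrix P certifying quadratic stability of a small linear system, which is the basic building block of LMI-based H∞ synthesis.

```python
# Toy LMI feasibility check: find P > 0 with A'P + PA < 0, certifying that
# dx/dt = Ax is quadratically stable. Real H-infinity synthesis adds further
# LMI blocks for the disturbance channel; this only shows the core mechanic.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])      # illustrative stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # 'optimal' indicates a feasible Lyapunov certificate
print(P.value)
```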
Sheldon, E M; Downar, J B
2000-08-15
Novel approaches to the development of analytical procedures for monitoring incoming starting material in support of chemical/pharmaceutical processes are described. High-technology solutions were utilized for timely process development and preparation of high-quality clinical supplies. A single robust HPLC method was developed and characterized for the analysis of the key starting material from three suppliers. Each supplier used a different process for the preparation of this material, and therefore each supplier's material exhibited a unique impurity profile. The HPLC method utilized standard techniques acceptable for release testing in a QC/manufacturing environment. An automated experimental design protocol was used to characterize the robustness of the HPLC method. The method was evaluated for linearity, limit of quantitation, solution stability, and precision of replicate injections. An LC-MS method that emulated the release HPLC method was developed, and the identities of impurities were mapped between the two methods.
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points, but the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image, generated from each matching point and its K nearest matching points. A local transformation model for each matching point is then estimated from its K nearest matching points, and the error of each matching point is computed using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
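A compact sketch of the iterative idea described above follows; it is a simplified reconstruction under stated assumptions (an affine local model, least-squares fitting, fixed drop count), not the authors' exact KGD implementation.

```python
# Simplified KGD-style false-match pruning (illustrative reconstruction).
# For each match, fit an affine transform from its K nearest matches and
# score the match by its residual; repeatedly drop the worst L matches.
import numpy as np

def knn_graph_filter(src, dst, k=8, drop=2, tol=3.0):
    """src, dst: (N, 2) arrays of matched point coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    keep = np.arange(len(src))
    while True:
        s, d = src[keep], dst[keep]
        errors = np.empty(len(keep))
        for i in range(len(keep)):
            nn = np.argsort(np.linalg.norm(s - s[i], axis=1))[1:k + 1]
            A = np.hstack([s[nn], np.ones((len(nn), 1))])   # affine design matrix
            T, *_ = np.linalg.lstsq(A, d[nn], rcond=None)   # local transform
            pred = np.array([*s[i], 1.0]) @ T
            errors[i] = np.linalg.norm(pred - d[i])
        if errors.max() <= tol or len(keep) <= k + 1:
            return keep                                     # surviving match indices
        keep = np.delete(keep, np.argsort(errors)[-drop:])  # drop worst L matches
```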
NASA Astrophysics Data System (ADS)
Li, Wei; Xiao, Chuan; Liu, Yaduo
2013-12-01
Audio identification via fingerprinting has been an active research field for years. However, most previously reported methods work on the raw audio format, despite the fact that compressed-format audio, especially MP3 music, has become the dominant way to store music on personal computers and to transmit it over the Internet. It would therefore be attractive if an unknown compressed audio fragment could be recognized directly against the database without first decompressing it into the wave format. So far, very few algorithms for music information retrieval run directly in the compressed domain, and most of them take advantage of the modified discrete cosine transform coefficients or derived cepstrum and energy features. As a first attempt, we propose in this paper utilizing the compressed-domain auditory Zernike moment, adapted from image processing, as the key feature in a novel robust audio identification algorithm. Such a fingerprint exhibits strong robustness, owing to its statistically stable nature, against various audio signal distortions such as recompression, noise contamination, echo adding, equalization, band-pass filtering, pitch shifting, and slight time-scale modification. Experimental results show that in a music database composed of 21,185 MP3 songs, a 10-s music segment is able to identify its original near-duplicate recording, with an average top-5 hit rate of 90% or above even under severe audio signal distortions.
NASA Astrophysics Data System (ADS)
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei
2016-04-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters, which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration relies on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints on model states and fluxes, and to combine these sources of information to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment is used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constraints on model states.
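The constraint-combination step described here reduces, in its simplest form, to checking each simulated state or flux against an interval and accepting only models that satisfy every interval simultaneously. A minimal sketch of that acceptance test (with hypothetical variable names and bounds, not the PIHM workflow) follows.

```python
# Minimal behavioural-model acceptance test: each qualitative insight is
# encoded as an interval (or one-sided inequality) on a simulated quantity;
# a parameter set is behavioural only if all constraints hold at once,
# i.e. the simulation falls inside the hyper-volume the intervals define.
import math

constraints = {
    # quantity name: (lower bound, upper bound); names and bounds are illustrative
    "outlet_discharge_mm_day": (0.5, 4.0),
    "peat_water_table_m":      (-0.3, 0.0),
    "gw_level_diff_m":         (0.0, math.inf),   # lowland peat wetter than slopes
}

def is_behavioural(simulated: dict) -> bool:
    return all(lo <= simulated[name] <= hi
               for name, (lo, hi) in constraints.items())

candidate = {"outlet_discharge_mm_day": 2.1,
             "peat_water_table_m": -0.1,
             "gw_level_diff_m": 0.4}
print(is_behavioural(candidate))   # True: inside the constraint hyper-volume
```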
DOT National Transportation Integrated Search
2010-05-31
In this research project, transportation flexibility and reliability concepts are extended and applied : to a new method for identifying the most critical links in a road network. Current transportation : management practices typically utilize locali...
Ji, Xiaoting; Niu, Yifeng; Shen, Lincheng
2016-01-01
This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem whose objective is to maximize robustness while satisfying desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system, and the complex mission requirements are specified with Linear Temporal Logic (LTL). A robust satisficing policy is obtained to maximize the robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy consisting of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second stage, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, and an associated robustness evaluation algorithm is presented to evaluate the robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty, where the resulting policy maximizes robustness while reaching the desired performance level. Furthermore, by comparing the proposed method with other robust decision-making methods, we conclude that our policy can tolerate higher uncertainty while guaranteeing the desired performance level, which indicates that the proposed method is much more effective in real applications. PMID:27835670
Best Practices for Reliable and Robust Spacecraft Structures
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Murthy, P. L. N.; Patel, Naresh R.; Bonacuse, Peter J.; Elliott, Kenny B.; Gordon, S. A.; Gyekenyesi, J. P.; Daso, E. O.; Aggarwal, P.; Tillman, R. F.
2007-01-01
A study was undertaken to capture the best practices for the development of reliable and robust spacecraft structures for NASA's next-generation cargo and crewed launch vehicles. In this study, NASA heritage programs such as Mercury, Gemini, Apollo, and the Space Shuttle program were examined, and a series of lessons learned during the NASA and DoD heritage programs are captured. The processes that "make the right structural system" are examined along with the processes that "make the structural system right". The impact of technology advancements in materials, analysis, and testing methods on the reliability and robustness of spacecraft structures is studied. The best practices and lessons learned are extracted from these studies. Since the first human space flight, the best practices for reliable and robust spacecraft structures appear to be well established, understood, and articulated by each generation of designers and engineers. However, these best practices apparently have not always been followed. When best practices are ignored or shortcuts are taken, risks accumulate and reliability suffers. Thus, program managers need to be vigilant of circumstances and situations that tend to violate best practices. Adherence to the best practices may help develop spacecraft systems with high reliability and robustness against certain anomalies and unforeseen events.
PS3-21: Extracting Utilization Data from Clarity into VDW Using Oracle and SAS
Chimmula, Srivardhan
2013-01-01
Background/Aims: The purpose of this presentation is to demonstrate how we use SAS and Oracle to load the VDW_Utilization, VDW_DX, and VDW_PX tables from Clarity at the Kaiser Permanente Northern California (KPNC) Division of Research (DOR) site. Methods: DOR uses the best of Oracle PL/SQL and SAS capabilities in building Extract, Transform and Load (ETL) processes. These processes extract patient encounter, diagnosis, and procedure data from Teradata-based Clarity. The data are then transformed to fit HMORN's VDW definitions of the tables, loaded into the Oracle-based VDW tables on DOR's research database, and finally copied into SAS datasets. Results: DOR builds robust and efficient ETL processes that refresh the VDW Utilization table on a monthly basis, processing millions of records/observations. The ETL processes can also identify daily changes in Clarity and update the VDW tables on a daily basis. Conclusions: KPNC DOR combines the best of both the Oracle and SAS worlds to build ETL processes that load data into the VDW Utilization tables efficiently.
A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes
NASA Astrophysics Data System (ADS)
Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria
In this paper we discuss the importance of ensuring that business processes are at the same time robust and agile. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through self-healing mechanisms. Such changes may raise exceptions at run-time if not properly reflected in these processes. We therefore propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized by a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Postcondition-Post event). This formalism allows translating a process into a graph of rules that can be analyzed in terms of reliability and flexibility.
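To make the rule-based idea concrete, here is a minimal sketch of an ECA-style rule record and dispatcher in Python; the ECAPE field names and the sample rule are hypothetical illustrations, not the authors' formal semantics.

```python
# Minimal ECA-style rule engine sketch. Each rule fires on an event when its
# condition holds, runs an action, then checks a postcondition; a postcondition
# failure raises a compensating "post event", a simple self-healing hook.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcapeRule:
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    postcondition: Callable[[dict], bool]
    post_event: str                      # emitted if the postcondition fails

def dispatch(rules, event, state):
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action(state)
            if not rule.postcondition(state):
                dispatch(rules, rule.post_event, state)   # self-healing hook

# Hypothetical example: approve an order unless a policy limit blocks it.
rules = [EcapeRule(
    event="order_received",
    condition=lambda s: s["amount"] <= s["approval_limit"],
    action=lambda s: s.update(status="approved"),
    postcondition=lambda s: s["status"] == "approved",
    post_event="manual_review",
)]
state = {"amount": 120, "approval_limit": 500, "status": "new"}
dispatch(rules, "order_received", state)
print(state["status"])   # approved
```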
NASA Astrophysics Data System (ADS)
Taner, M. U.; Ray, P.; Brown, C.
2016-12-01
Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While strategies that are flexible or adaptive hold intuitive appeal, developing well-performing strategies requires rigorous quantitative analysis that addresses uncertainties directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, given that methods to reliably assign such probabilities are not well developed. We present a method that overcomes this challenge by creating a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and instead employs them after optimization to aid selection amongst competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each under an associated set of probabilistic assumptions. In the final phase, the vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and most likely conditional on the climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.
Climbing with adhesion: from bioinspiration to biounderstanding
Cutkosky, Mark R.
2015-01-01
Bioinspiration is an increasingly popular design paradigm, especially as robots venture out of the laboratory and into the world. Animals are adept at coping with the variability that the world imposes. With advances in scientific tools for understanding biological structures in detail, we are increasingly able to identify design features that account for animals' robust performance. In parallel, advances in fabrication methods and materials are allowing us to engineer artificial structures with similar properties. The resulting robots become useful platforms for testing hypotheses about which principles are most important. Taking gecko-inspired climbing as an example, we show that the process of extracting principles from animals and adapting them to robots provides insights for both robotics and biology. PMID:26464786
Conservation paleobiology: putting the dead to work.
Dietl, Gregory P; Flessa, Karl W
2011-01-01
Geohistorical data and analyses are playing an increasingly important role in conservation biology practice and policy. In this review, we discuss examples of how the near-time and deep-time fossil record can be used to understand the ecological and evolutionary responses of species to changes in their environment. We show that beyond providing crucial baseline data, the conservation paleobiology perspective helps us to identify which species will be most vulnerable and what kinds of responses will be most common. We stress that inclusion of geohistorical data in our decision-making process provides a more scientifically robust basis for conservation policies than those dependent on short-term observations alone. © 2010 Elsevier Ltd. All rights reserved.
Raff, Adam B.; Seiler, Theo G.; Apiou-Sbirlea, Gabriela
2017-01-01
The ‘Bridging medicine and biomedical technology’ special all-congress session took place for the first time at the OSA Biophotonics Congress: Optics in Life Sciences in 2017 (http://www.osa.org/enus/meetings/osa_meetings/optics_in_the_life_sciences/bridging_medicine_and_biomedical_technology_specia/). The purpose was to identify key challenges the biomedical scientists in academia have to overcome to translate their discoveries into clinical practice through robust collaborations with industry and discuss best practices to facilitate and accelerate the process. Our paper is intended to complement the session by providing a deeper insight into the concept behind the structure and the content we developed. PMID:29296473
Discovery and process development of a novel TACE inhibitor for the topical treatment of psoriasis.
Boiteau, Jean-Guy; Ouvry, Gilles; Arlabosse, Jean-Marie; Astri, Stéphanie; Beillard, Audrey; Bhurruth-Alcor, Yushma; Bonnary, Laetitia; Bouix-Peter, Claire; Bouquet, Karine; Bourotte, Marilyne; Cardinaud, Isabelle; Comino, Catherine; Deprez, Benoît; Duvert, Denis; Féret, Angélique; Hacini-Rachinel, Feriel; Harris, Craig S; Luzy, Anne-Pascale; Mathieu, Arnaud; Millois, Corinne; Orsini, Nicolas; Pascau, Jonathan; Pinto, Artur; Piwnica, David; Polge, Gaëlle; Reitz, Arnaud; Reversé, Kevin; Rodeville, Nicolas; Rossio, Patricia; Spiesse, Delphine; Tabet, Samuel; Taquet, Nathalie; Tomas, Loïc; Vial, Emmanuel; Hennequin, Laurent F
2018-02-15
Targeting the TNFα pathway is a validated approach to the treatment of psoriasis. In this pathway, TACE stands out as a druggable target and has been the focus of in-house research programs. In this article, we present the discovery of clinical candidate 26a. Starting from hits plagued with poor solubility or genotoxicity, 26a was identified through thorough multiparameter optimisation. Showing robust in vivo activity in an oxazolone-mediated inflammation model, the compound was selected for development. Following a polymorph screen, the hydrochloride salt was selected and the synthesis was efficiently developed to yield the API in 47% overall yield. Copyright © 2017. Published by Elsevier Ltd.
Design principles for robust oscillatory behavior.
Castillo-Hair, Sebastian M; Villota, Elizabeth R; Coronado, Alberto M
2015-09-01
Oscillatory responses are ubiquitous in regulatory networks of living organisms, a fact that has led to extensive efforts to study and replicate the circuits involved. However, to date, design principles that underlie the robustness of natural oscillators are not completely known. Here we study a three-component enzymatic network model in order to determine the topological requirements for robust oscillation. First, by simulating every possible topological arrangement and varying their parameter values, we demonstrate that robust oscillators can be obtained by augmenting the number of both negative feedback loops and positive autoregulations while maintaining an appropriate balance of positive and negative interactions. We then identify network motifs, whose presence in more complex topologies is a necessary condition for obtaining oscillatory responses. Finally, we pinpoint a series of simple architectural patterns that progressively render more robust oscillators. Together, these findings can help in the design of more reliable synthetic biomolecular networks and may also have implications in the understanding of other oscillatory systems.
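The kind of negative-feedback ring the authors enumerate can be illustrated with a classic three-node repressive loop; the Hill-type equations and parameter values below are illustrative assumptions, not the paper's enzymatic model.

```python
# Three-node negative-feedback ring (repressilator-like) integrated with
# forward Euler; each node is repressed by the previous one, closing a loop
# with net negative feedback, the core motif for robust oscillation.
import numpy as np

def simulate(t_end=100.0, dt=0.005, beta=10.0, n=4, gamma=1.0):
    x = np.array([1.0, 1.5, 2.0])                 # asymmetric start breaks symmetry
    trace = []
    for _ in range(int(t_end / dt)):
        repress = beta / (1.0 + np.roll(x, 1) ** n)   # x[0] repressed by x[2], etc.
        x = x + dt * (repress - gamma * x)
        trace.append(x.copy())
    return np.array(trace)

trace = simulate()
tail = trace[-4000:, 0]                           # last 20 time units of node 0
print(f"min {tail.min():.2f}, max {tail.max():.2f}")  # a wide gap indicates sustained oscillation
```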
Advanced manufacturing—A transformative enabling capability for fusion
Nygren, Richard E.; Dehoff, Ryan R.; Youchison, Dennis L.; ...
2018-05-24
Additive Manufacturing (AM) can create novel and complex engineered material structures. Features such as controlled porosity, micro-fibers and/or nano-particles, transitions in materials, and integral robust coatings can be important in developing solutions for fusion subcomponents. A realistic understanding of this capability would be particularly valuable in identifying development paths. Major concerns for using AM processes with lasers or electron beams that melt powder to make refractory parts are the power required and the residual stresses arising in fabrication. A related issue is the required combination of lasers or e-beams to continue heating deposited material (to reduce stresses) and to deposit new material at a reasonable build rate while providing adequate surface finish and resolution for meso-scale features. In conclusion, some Direct Write processes that can make suitable preforms and be cured to an acceptable density may offer another approach for PFCs.
Sex differences in the development of brain mechanisms for processing biological motion.
Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D
2013-12-01
Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. © 2013 Elsevier Inc. All rights reserved.
Impact of Pathogen Population Heterogeneity and Stress-Resistant Variants on Food Safety.
Abee, T; Koomen, J; Metselaar, K I; Zwietering, M H; den Besten, H M W
2016-01-01
This review elucidates the state-of-the-art knowledge about pathogen population heterogeneity and describes the genotypic and phenotypic analyses of persister subpopulations and stress-resistant variants. The molecular mechanisms underlying the generation of persister phenotypes and genetic variants are identified. Zooming in on Listeria monocytogenes, a comparative whole-genome sequence analysis of wild types and variants that enabled the identification of mutations in variants obtained after a single exposure to lethal food-relevant stresses is described. Genotypic and phenotypic features are compared to those for persistent strains isolated from food processing environments. Inactivation kinetics, models used for fitting, and the concept of kinetic modeling-based schemes for detection of variants are presented. Furthermore, robustness and fitness parameters of L. monocytogenes wild type and variants are used to model their performance in food chains. Finally, the impact of stress-resistant variants and persistence in food processing environments on food safety is discussed.
NASA Astrophysics Data System (ADS)
Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing
2015-09-01
In this paper, a novel image encryption system using a fingerprint as a secret key is proposed, based on a phase retrieval algorithm and the RSA public-key algorithm. In the system, the encryption keys are the fingerprint and the public key of the RSA algorithm, while the decryption keys are the fingerprint and the private key of the RSA algorithm. Provided the users share the fingerprint, the system meets the basic requirements of asymmetric cryptography. The system is also applicable to information authentication: because the fingerprint is used as a secret key in both the encryption and decryption processes, the receiver can verify the authenticity of the ciphertext by using the fingerprint during decryption. Finally, simulation results show the validity of the encryption scheme and its high robustness against attacks based on the phase retrieval technique.
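As a loose analogy to the two-key arrangement described (not the optical phase-retrieval scheme itself), the sketch below derives a symmetric mask from a fingerprint digest and wraps a session secret with textbook RSA; the tiny primes, single-byte mask, and XOR step are deliberately toy-sized assumptions for exposition only.

```python
# Toy illustration of the two-key arrangement: a shared fingerprint acts as a
# symmetric secret while an RSA key pair supplies the asymmetric part.
# Textbook RSA with tiny primes -- insecure, for exposition only (Python 3.8+).
import hashlib

p, q, e = 61, 53, 17                      # toy primes and public exponent
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                       # private exponent

fingerprint = b"minutiae-template-bytes"  # stand-in for a real template
mask = hashlib.sha256(fingerprint).digest()[0]

secret = 42                               # toy session value, < n
ciphertext = pow(secret ^ mask, e, n)     # mask with fingerprint, then RSA-encrypt

# Receiver: RSA-decrypt with the private key, then unmask with the fingerprint.
recovered = pow(ciphertext, d, n) ^ mask
assert recovered == secret
print(recovered)
```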
Activation and synchronization of the oscillatory morphodynamics in multicellular monolayer
Lin, Shao-Zhen; Li, Bo; Lan, Ganhui; Feng, Xi-Qiao
2017-01-01
Oscillatory morphodynamics provides necessary mechanical cues for many multicellular processes. Owing to their collective nature, these processes require robustly coordinated dynamics of individual cells, which are often separated too distantly to communicate with each other through biomaterial transport. Although it is known that mechanical balance generally plays a significant role in the systems' morphologies, it remains elusive whether and how the mechanical components may contribute to the systems' collective morphodynamics. Here, we study the collective oscillations in the Drosophila amnioserosa tissue to elucidate the regulatory roles of the mechanical components. We identify the tensile stress as the key activator that switches the collective oscillations on and off. This regulatory role is shown analytically using Hopf bifurcation theory. We find that the physical properties of the tissue boundary are directly responsible for synchronizing the oscillatory intensity and polarity of all inner cells and for orchestrating the spatial oscillation patterns in the tissue. PMID:28716911
NASA Astrophysics Data System (ADS)
Almuhammadi, Khaled; Selvakumaran, Lakshmi; Alfano, Marco; Yang, Yang; Bera, Tushar Kanti; Lubineau, Gilles
2015-12-01
Electrical impedance tomography (EIT) is a low-cost, fast, and effective structural health monitoring technique that can be used on carbon fiber reinforced polymers (CFRP). Electrodes are a key component of any EIT system, and as such they should feature low resistivity as well as high robustness and reproducibility. Surface preparation is required prior to the bonding of electrodes. Currently this task is mostly carried out by traditional sanding; however, this is a time-consuming procedure that can also damage surface fibers and lead to spurious electrode properties. Here we propose an alternative processing technique based on pulsed laser irradiation. The processing parameters that result in selective removal of the electrically insulating resin with minimum surface fiber damage are identified. A quantitative analysis of the electrical contact resistance is presented, and the results are compared with those obtained using sanding.
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in an image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' lessened ability to detect and identify features in X-ray radiographs.
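For readers wanting to reproduce the kind of grayscale feature experiment described, a minimal OpenCV baseline looks like the following; the file names are placeholders, and ORB is chosen only as one representative detector among those the cited toolkits offer.

```python
# Minimal grayscale feature-matching baseline with OpenCV's ORB detector.
# Radiographs are single-channel, so detection runs on grayscale directly.
import cv2

img1 = cv2.imread("radiograph_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("radiograph_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching with cross-checking for stability.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(kp1)} and {len(kp2)} keypoints, {len(matches)} cross-checked matches")
```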
Sex Differences in the Development of Brain Mechanisms for Processing Biological Motion
Anderson, L.C.; Bolling, D.Z.; Schelinski, S.; Coffman, M.C.; Pelphrey, K.A.; Kaiser, M.D.
2013-01-01
Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. PMID:23876243
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall than "vanilla" LSH, even when using the same amount of space. PMID:29346410
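The core LSH primitive these variants build on can be sketched in a few lines: random-hyperplane signatures bucket similar vectors together so candidate neighbors are found without scanning the whole database. The implementation below is a generic single-machine illustration, not the paper's Hadoop variants.

```python
# Random-hyperplane LSH sketch: vectors whose signatures collide in a hash
# table are near-neighbour candidates; only candidates are scored exactly.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_planes = 64, 16
planes = rng.normal(size=(n_planes, dim))          # one random hyperplane per bit

def signature(v):
    return tuple((planes @ v > 0).astype(int))     # sign pattern = hash key

database = rng.normal(size=(10_000, dim))
table = defaultdict(list)
for idx, v in enumerate(database):
    table[signature(v)].append(idx)

query = database[123] + 0.05 * rng.normal(size=dim)   # noisy copy of item 123
candidates = table[signature(query)] or range(len(database))  # fall back to full scan
best = max(candidates,
           key=lambda i: database[i] @ query /
                         (np.linalg.norm(database[i]) * np.linalg.norm(query)))
print(best)   # with high probability, 123
```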
Cause-and-effect mapping of critical events.
Graves, Krisanne; Simmons, Debora; Galley, Mark D
2010-06-01
Health care errors are routinely reported in the scientific and public press and have become a major concern for most Americans. In learning to identify and analyze errors, health care can develop some of the skills of a learning organization, including the concept of systems thinking. Modern experts in improving quality have been working in other high-risk industries since the 1920s, making structured organizational changes through various frameworks for quality methods, including continuous quality improvement and total quality management. When using these tools, it is important to understand systems thinking and the concept of processes within organizations. Within these frameworks of improvement, several tools can be used in the analysis of errors. This article introduces a robust tool with a broad analytical view consistent with systems thinking, called CauseMapping (ThinkReliability, Houston, TX, USA), which can be used to systematically analyze the process and the problem at the same time. Copyright 2010 Elsevier Inc. All rights reserved.
JSD: Parallel Job Accounting on the IBM SP2
NASA Technical Reports Server (NTRS)
Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)
1995-01-01
The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
We've Come a Long Way, Baby (But We're Not There Yet): Gender Past, Present, and Future.
Liben, Lynn S
2016-01-01
Gender has long been, and continues to be, a powerful predictor of developmental experiences and outcomes. Observations drawn from personal history, developmental science, and life beyond the academy show that historically, gender constraints have diminished in some ways, but remain robust in others. Reviewed are children's constructive processes that--in interaction with the embedding ecology--foster the emergence and persistence of gendered phenomena. Reviews of interventions designed to increase girls' science participation demonstrate the need to evaluate both intended and unintended program consequences. Discussion of the single-sex schooling debate shows the importance of foundational conceptualizations of gender, and illuminates research-to-policy processes. After identifying newly emerging gender conceptualizations, the concluding section highlights the need to consider how gender conceptualizations do and should affect science and society. © 2016 The Author. Child Development © 2016 Society for Research in Child Development, Inc.
Thomassen, Yvonne E; van Sprang, Eric N M; van der Pol, Leo A; Bakker, Wilfried A M
2010-09-01
Historical manufacturing data can potentially harbor a wealth of information for process optimization and enhancement of efficiency and robustness. To extract useful information, multivariate data analysis (MVDA) using projection methods is often applied. In this contribution, the results obtained from applying MVDA to data from inactivated polio vaccine (IPV) production runs are described. Data from over 50 batches at two different production scales (700-L and 1,500-L) were available. The explorative analysis performed on single unit operations indicated consistent manufacturing. Known outliers (e.g., rejected batches) were identified using principal component analysis (PCA). The source of operational variation was pinpointed to variation of inputs such as media. Other relevant process parameters were in control and, using this manufacturing data, could not be correlated to product quality attributes. The knowledge gained about the IPV production process, not only from the MVDA but also from digitizing the available historical data, has proven useful for troubleshooting, understanding the limitations of the available data, and identifying opportunities for improvement. 2010 Wiley Periodicals, Inc.
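The PCA-based outlier screening mentioned here follows a standard recipe: project standardized batch data onto a few principal components and flag batches with extreme scores. A generic sketch (with synthetic batch data, not the IPV dataset) is shown below.

```python
# PCA-based screening of manufacturing batches: standardize batch descriptors,
# project onto principal components, and flag batches with extreme scores
# (a simple stand-in for Hotelling's T2 monitoring).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))            # 50 batches x 8 process parameters (synthetic)
X[7] += 4.0                             # plant a known outlier batch

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
dist = np.linalg.norm(scores, axis=1)   # distance from the origin in score space
threshold = dist.mean() + 3 * dist.std()
print(np.where(dist > threshold)[0])    # expected to include batch 7
```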
A Scenario-Based Process for Requirements Development: Application to Mission Operations Systems
NASA Technical Reports Server (NTRS)
Bindschadler, Duane L.; Boyles, Carole A.
2008-01-01
The notion of using operational scenarios as part of requirements development during mission formulation (Phases A & B) is widely accepted as good system engineering practice. In the context of developing a Mission Operations System (MOS), there are numerous practical challenges to translating that notion into the cost-effective development of a useful set of requirements. These challenges can include such issues as a lack of Project-level focus on operations issues, insufficient or improper flowdown of requirements, flowdown of immature or poor-quality requirements from Project level, and MOS resource constraints (personnel expertise and/or dollars). System engineering theory must be translated into a practice that provides enough structure and standards to serve as guidance, but that retains sufficient flexibility to be tailored to the needs and constraints of a particular MOS or Project. We describe a detailed, scenario-based process for requirements development. Identifying a set of attributes for high quality requirements, we show how the portions of the process address many of those attributes. We also find that the basic process steps are robust, and can be effective even in challenging Project environments.
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework that enables the analysis of sensors in a multimodal fashion. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector has been developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
Zhu, Yikang; Hu, Xiaochen; Wang, Jijun; Chen, Jue; Guo, Qian; Li, Chunbo; Enck, Paul
2012-11-01
The characteristics of the cognitive processing of food, body and emotional information in patients with anorexia nervosa (AN) are debatable. We reviewed functional magnetic resonance imaging studies to assess whether there were consistent neural basis and networks in the studies to date. Searching PubMed, Ovid, Web of Science, The Cochrane Library and Google Scholar between January 1980 and May 2012, we identified 17 relevant studies. Activation likelihood estimation was used to perform a quantitative meta-analysis of functional magnetic resonance imaging studies. For both food stimuli and body stimuli, AN patients showed increased hemodynamic response in the emotion-related regions (frontal, caudate, uncus, insula and temporal) and decreased activation in the parietal region. Although no robust brain activation has been found in response to emotional stimuli, emotion-related neural networks are involved in the processing of food and body stimuli among AN. It suggests that negative emotional arousal is related to cognitive processing bias of food and body stimuli in AN. Copyright © 2012 John Wiley & Sons, Ltd and Eating Disorders Association.
TU-H-CAMPUS-JeP3-01: Towards Robust Adaptive Radiation Therapy Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boeck, M; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To set up a framework combining robust treatment planning with adaptive reoptimization in order to maintain high treatment quality, to respond to interfractional variations, and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: We propose adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan should be able to handle anticipated systematic and random errors and is applied during the first fractions. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient who receives a dose that does not satisfy specified plan quality criteria, the plan is reoptimized based on these individual measurements using one of three adaptive strategies. The reoptimized plan is then applied during future fractions until a new scheduled adaptation becomes necessary. In the first adaptive strategy, the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust reoptimization. The focus of the second strategy lies on varying the fraction of worst scenarios taken into account during robust reoptimization. In the third strategy, the uncertainty margins around the target are recalculated from the measured errors. Results: By studying the effect of the three adaptive strategies combined with various adaptation schedules on the same patient population, the group that benefits from adaptation is identified, together with the most suitable strategy and schedule. Preliminary computational results indicate when and how best to adapt for the three different strategies. Conclusion: A workflow is presented that provides robust adaptation of the treatment plan throughout the course of treatment and useful measures to identify patients in need of an adaptive treatment strategy.
ERIC Educational Resources Information Center
Nushi, Musa
2016-01-01
Han's (2009, 2013) selective fossilization hypothesis (SFH) claims that L1 markedness and L2 input robustness determine the fossilizability (and learnability) of an L2 feature. To test the validity of the model, a pseudo-longitudinal study was designed in which the errors in the argumentative essays of 52 Iranian EFL learners were identified and…
Urich, Christian; Rauch, Wolfgang
2014-12-01
Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections; if a projection stays unfulfilled, this can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view: their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions does not yet exist for urban water management. This paper presents a first step towards a new generation of such strategic planning tools by combining innovative modelling tools, which co-evolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is spatially explicitly evolved 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper are that: (1) such an approach can successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, which is an important indicator for the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy. Copyright © 2014 Elsevier Ltd. All rights reserved.
De-identification of health records using Anonym: effectiveness and robustness across datasets.
Zuccon, Guido; Kotzur, Daniel; Nguyen, Anthony; Bergheim, Anton
2014-07-01
We evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random field classifiers informed by linguistic and lexical features, as well as features extracted by pattern-matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data, and de-identification tools that adapt to different sources of clinical data are attractive as they require minimal intervention to guarantee high effectiveness. The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, the source of data, and their quality, with one of the datasets containing optical character recognition errors. Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. The findings show that Anonym compares to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type, and quality in the presence of sufficient training data. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Xu, Yanjun; Li, Feng; Wu, Tan; Xu, Yingqi; Yang, Haixiu; Dong, Qun; Zheng, Meiyu; Shang, Desi; Zhang, Chunlong; Zhang, Yunpeng; Li, Xia
2017-02-28
Long non-coding RNAs (lncRNAs) play important roles in various biological processes, including the development of many diseases. Pathway analysis is a valuable aid for understanding the cellular functions of these transcripts. We have developed and characterized LncSubpathway, a novel method that integrates lncRNA and protein-coding gene (PCG) expression with interactome data to identify disease-risk subpathways that are functionally associated with risk lncRNAs. LncSubpathway identifies the most relevant regions, those related to the risk lncRNA set and implicated in the study conditions, by simultaneously considering the dysregulation extent of lncRNAs and PCGs and their correlations. Simulation studies demonstrated that the sensitivity and false positive rates of LncSubpathway were within acceptable ranges, and that LncSubpathway could accurately identify dysregulated regions related to disease-risk lncRNAs within pathways. When LncSubpathway was applied to colorectal carcinoma and breast cancer subtype datasets, it identified cancer type- and breast cancer subtype-related meaningful subpathways. Further, analysis of its robustness and reproducibility indicated that LncSubpathway is a reliable means of identifying subpathways that are functionally associated with lncRNAs. LncSubpathway is freely available at http://www.bio-bigdata.com/lncSubpathway/.
Zhou, Haiying; Purdie, Jennifer; Wang, Tongtong; Ouyang, Anli
2010-01-01
The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process and to select cell lines with high productivity. It is highly desirable to establish a high-throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as scale-down models for large-scale bioreactors. However, one limitation of these two systems is the inability to measure and control pH in a high-throughput manner. As pH is an important process parameter for cell culture, this can limit the applications of these scale-down vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company employing SNARF-4F 5-(and-6)-carboxylic acid. The method demonstrated the ability to measure the pH values of cell culture samples in a high-throughput manner. Based upon the chemical equilibrium of CO2, HCO3-, and the buffer system (i.e., HEPES), we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the required %CO2 in the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying pH effects on cell culture in 96-well plates and shake flasks was also demonstrated in this study. This work sheds light on mini-bioreactor scale-down model construction and paves the way for cell culture process development to improve productivity or product quality using high-throughput systems. Copyright 2009 American Institute of Chemical Engineers
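The CO2/bicarbonate equilibrium underlying such a model is usually written in Henderson-Hasselbalch form, pH = pKa + log10([HCO3-]/(s*pCO2)); the sketch below inverts that relation to estimate the incubator %CO2 needed for a target pH. The constants and the ideal-equilibrium assumptions are illustrative, not the authors' validated model.

```python
# Henderson-Hasselbalch sketch for a CO2/bicarbonate-buffered medium:
# pH = pKa + log10([HCO3-] / (s * pCO2)). Solving for pCO2 gives the incubator
# %CO2 needed for a target pH at a given bicarbonate concentration.
# Constants assume ~37 C aqueous medium and ideal equilibrium (illustrative).
PKA = 6.1          # apparent pKa of the CO2/HCO3- system
S = 0.0307         # CO2 solubility, mmol/L per mmHg
P_ATM = 760.0      # total pressure, mmHg

def percent_co2_for_ph(target_ph, hco3_mmol_per_l):
    pco2 = hco3_mmol_per_l / (S * 10.0 ** (target_ph - PKA))  # mmHg
    return 100.0 * pco2 / P_ATM

# Example: 24 mmol/L bicarbonate, target pH 7.2 -> roughly 8% CO2
print(f"{percent_co2_for_ph(7.2, 24.0):.1f}% CO2")
```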
Standard and Robust Methods in Regression Imputation
ERIC Educational Resources Information Center
Moraveji, Behjat; Jafarian, Koorosh
2014-01-01
The aim of this paper is to introduce new imputation algorithms for estimating missing values in large official-statistics data sets during data pre-processing, including in the presence of outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
Using Multi-Objective Optimization to Explore Robust Policies in the Colorado River Basin
NASA Astrophysics Data System (ADS)
Alexander, E.; Kasprzyk, J. R.; Zagona, E. A.; Prairie, J. R.; Jerla, C.; Butler, A.
2017-12-01
The long term reliability of water deliveries in the Colorado River Basin has degraded due to the imbalance of growing demand and dwindling supply. The Colorado River meanders 1,450 miles across a watershed that covers seven US states and Mexico and is an important cultural, economic, and natural resource for nearly 40 million people. Its complex operating policy is based on the "Law of the River," which has evolved since the Colorado River Compact in 1922. Recent (2007) refinements to address shortage reductions and coordinated operations of Lakes Powell and Mead were negotiated with stakeholders in which thousands of scenarios were explored to identify operating guidelines that could ultimately be agreed on. This study explores a different approach to searching for robust operating policies to inform the policy making process. The Colorado River Simulation System (CRSS), a long-term water management simulation model implemented in RiverWare, is combined with the Borg multi-objective evolutionary algorithm (MOEA) to solve an eight objective problem formulation. Basin-wide performance metrics are closely tied to system health through incorporating critical reservoir pool elevations, duration, frequency and quantity of shortage reductions in the objective set. For example, an objective to minimize the frequency that Lake Powell falls below the minimum power pool elevation of 3,490 feet for Glen Canyon Dam protects a vital economic and renewable energy source for the southwestern US. The decision variables correspond to operating tiers in Lakes Powell and Mead that drive the implementation of various shortage and release policies, thus affecting system performance. The result will be a set of non-dominated solutions that can be compared with respect to their trade-offs based on the various objectives. These could inform policy making processes by eliminating dominated solutions and revealing robust solutions that could remain hidden under conventional analysis.
NASA Astrophysics Data System (ADS)
Liu, Feng; Wang, Shuliang; Zhang, Ming; Ma, Miaolian; Wang, Chengyu; Li, Jian
2013-09-01
Improvement of the robustness of superhydrophobic surfaces is crucial for achieving commercial applications of these surfaces in areas as varied as self-cleaning, water repellency, and corrosion resistance. We have investigated the fabrication of a polyvinyl alcohol (PVA)/silica (SiO2) composite polymer coating on wooden substrates with super repellency toward water, low sliding angles, low contact angle hysteresis, and relatively good mechanical robustness. The composite polymer slurry, consisting of well-mixed SiO2 particles and PVA, is prepared simply and subsequently coated over wooden substrates with good adhesion. In this study, the mechanical robustness of superhydrophobic wood surfaces was evaluated. The effect of petaloid structures of the composite polymer on robustness was investigated using an abrasion test, and the results were compared with those of superhydrophobic wood surfaces fabricated by other processes. The produced wood surfaces exhibited promising superhydrophobic properties, with a contact angle of 159°, a sliding angle of 4°, and relatively good mechanical robustness.
Relationship of cranial robusticity to cranial form, geography and climate in Homo sapiens.
Baab, Karen L; Freidline, Sarah E; Wang, Steven L; Hanson, Timothy
2010-01-01
Variation in cranial robusticity among modern human populations is widely acknowledged but not well-understood. While the use of "robust" cranial traits in hominin systematics and phylogeny suggests that these characters are strongly heritable, this hypothesis has not been tested. Alternatively, cranial robusticity may be a response to differences in diet/mastication or it may be an adaptation to cold, harsh environments. This study quantifies the distribution of cranial robusticity in 14 geographically widespread human populations, and correlates this variation with climatic variables, neutral genetic distances, cranial size, and cranial shape. With the exception of the occipital torus region, all traits were positively correlated with each other, suggesting that they should not be treated as individual characters. While males are more robust than females within each of the populations, among the independent variables (cranial shape, size, climate, and neutral genetic distances), only shape is significantly correlated with inter-population differences in robusticity. Two-block partial least-squares analysis was used to explore the relationship between cranial shape (captured by three-dimensional landmark data) and robusticity across individuals. Weak support was found for the hypothesis that robusticity was related to mastication as the shape associated with greater robusticity was similar to that described for groups that ate harder-to-process diets. Specifically, crania with more prognathic faces, expanded glabellar and occipital regions, and (slightly) longer skulls were more robust than those with rounder vaults and more orthognathic faces. However, groups with more mechanically demanding diets (hunter-gatherers) were not always more robust than groups practicing some form of agriculture.
Song, Jing-Zheng; Li, Song-Lin; Zhou, Yan; Qiao, Chun-Feng; Chen, Shi-Lin; Xu, Hong-Xi
2010-11-02
In a well-controlled experiment, outliers discriminated by robust principal component analysis (RPCA) represent samples whose quality is distinguishable from that of the majority; therefore, the chemical constituents of a natural product that cause discrimination between outliers and the majority of samples can be considered analytical markers for quality control. Based on this strategy, a novel approach for rapidly exploring characteristic analytical markers was proposed for the quality control of extract granules of Radix Salviae Miltiorrhizae (EGRSM). In this study, large numbers of samples were analyzed via high-throughput ultra-high performance liquid chromatography-ultraviolet-quadrupole time-of-flight mass spectrometry (UHPLC-UV-Q-Tof MS). RPCA was first performed on three groups of samples: RSM (the raw material), the in-house prepared aqueous extract of Radix Salviae Miltiorrhizae (AERSM), and the commercial product of EGRSM, to determine the variation of specific constituents between the raw material and the final products as well as the effect of the manufacturing process on overall quality. Then RPCA was performed on the commercial products of EGRSM to explore the applicability of the identified characteristic markers for the quality control of EGRSM. Candidate markers were extracted by RPCA, and their molecular formulae were determined by high-resolution electrospray ionization-mass spectrometric (ESI-MS) analysis. The suitability of the identified markers was then evaluated by determining the relationship between the quantities of the identified markers and their antioxidant activities, and further confirmed in a variety of samples. In conclusion, the combination of RPCA with UHPLC-UV-Q-Tof MS is a reliable means to identify chemical markers for evaluating the quality of herbal medicines. Copyright (c) 2010 Elsevier B.V. All rights reserved.
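For readers unfamiliar with RPCA, the sketch below implements one common formulation, principal component pursuit via an inexact augmented Lagrangian, in which gross outliers are captured in a sparse term. This is a generic illustration under our own naming; the chemometric ROBPCA variant often used in such studies is a different algorithm, and the paper does not specify this code.

```python
import numpy as np

def robust_pca_pcp(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Principal component pursuit: split M into L (low rank) + S (sparse
    gross deviations) via an inexact augmented Lagrangian scheme. Rows of
    M with large entries in S are candidate outlying samples."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else m * n / (4.0 * np.abs(M).sum())
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # singular-value thresholding gives the low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        S = shrink(M - L + Y / mu, lam / mu)  # soft-threshold the residual
        resid = M - L - S
        Y += mu * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(M):
            break
    return L, S
```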
Processing Robustness for A Phenylethynyl Terminated Polyimide Composite
NASA Technical Reports Server (NTRS)
Hou, Tan-Hung
2004-01-01
The processability of a phenylethynyl terminated imide resin matrix (designated as PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated as PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one made by NASA in-house, and the other from an industrial source. The composite processing robustness is investigated with respect to prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 °C to 300 °C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. Good consolidation is achievable at 371 °C (700 °F)/100 psi, which yields a room-temperature OHC strength of 62 ksi. However, processability declines dramatically at temperatures below 350 °C (662 °F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.
The Legacy of Space Shuttle Flight Software
NASA Technical Reports Server (NTRS)
Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.
2011-01-01
The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.
An fMRI study of magnitude comparison and exact addition in children.
Meintjes, Ernesta M; Jacobson, Sandra W; Molteno, Christopher D; Gatenby, J Christopher; Warton, Christopher; Cannistraci, Christopher J; Gore, John C; Jacobson, Joseph L
2010-04-01
By contrast to the adult literature, in which a consistent parietofrontal network for number processing has been identified, the data from studies of number processing in children have been less consistent, probably due to differences in study design and control conditions. Number processing was examined using functional magnetic resonance imaging in 18 right-handed children (8-12 years) from the Cape Coloured community in Cape Town, South Africa, using Proximity Judgment and Exact Addition (EA) tasks. The findings were consistent with the hypothesis that, as in adults, the anterior horizontal intraparietal sulcus (HIPS) plays a major role in the representation and manipulation of quantity in children. The posterior medial frontal cortex, believed to be involved in performance monitoring in more complex arithmetic manipulations in adults, was extensively activated even for relatively simple symbolic number processing in the children. Other areas activated to a greater degree in the children included the left precentral sulcus, which may mediate number knowledge and, for EA, the head of the caudate nucleus, which is part of a fronto-subcortical circuit involved in the behavioral execution of sequences. Two regions that have been linked to number processing in adults - the angular gyrus and posterior superior parietal lobule - were not activated in the children. The data are consistent with the inference that although the functional specialization of the anterior HIPS may increase as symbolic number processing becomes increasingly automatic, this region and other elements of the parietofrontal network identified in adults are already reliably and robustly activated by middle childhood. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melintescu, A.; Galeriu, D.; Diabate, S.
2015-03-15
The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms, and a fully mechanistic model is difficult to develop due to the complexity of these processes and of environmental conditions. First, existing models (ORYZA2000, CROPTRIT, and WOFOST) are reviewed, presenting their features and limits. Second, the preparatory steps for a robust model are discussed, considering the role of dry matter and the contribution of photosynthesis to the dynamics of OBT (Organically Bound Tritium) in crops.
Robust, directed assembly of fluorescent nanodiamonds.
Kianinia, Mehran; Shimoni, Olga; Bendavid, Avi; Schell, Andreas W; Randolph, Steven J; Toth, Milos; Aharonovich, Igor; Lobo, Charlene J
2016-10-27
Arrays of fluorescent nanoparticles are highly sought after for applications in sensing, nanophotonics and quantum communications. Here we present a simple and robust method of assembling fluorescent nanodiamonds into macroscopic arrays. Remarkably, the yield of this directed assembly process is greater than 90% and the assembled patterns withstand ultra-sonication for more than three hours. The assembly process is based on covalent bonding of carboxyl to amine functional carbon seeds and is applicable to any material, and to non-planar surfaces. Our results pave the way to directed assembly of sensors and nanophotonics devices.
Integration of system identification and robust controller designs for flexible structures in space
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Lew, Jiann-Shiun
1990-01-01
An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. There are three steps involved in the approach. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm (ERA). Second, the model error is calculated and described in the frequency domain in terms of the H∞ norm. Third, a pole placement technique in combination with an H∞ control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
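The first step, the Eigensystem Realization Algorithm, admits a compact statement: stack impulse-response Markov parameters into Hankel matrices, truncate an SVD, and read off a balanced state-space realization. The single-input, single-output sketch below follows those standard conventions and is not the authors' implementation.

```python
import numpy as np

def era(markov, n_states, p=20, q=20):
    """Eigensystem Realization Algorithm, single-input single-output
    sketch. `markov` holds impulse-response Markov parameters h(1),
    h(2), ... and must have at least p + q entries."""
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]  # truncate
    s_sqrt = np.sqrt(s)
    # balanced realization recovered from the shifted Hankel matrix
    A = (U / s_sqrt).T @ H1 @ (Vt.T / s_sqrt)
    B = (s_sqrt[:, None] * Vt)[:, :1]   # first column -> input matrix
    C = (U * s_sqrt)[:1, :]             # first row    -> output matrix
    return A, B, C
```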
The 32nd CDC: System identification using interval dynamic models
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.
1992-01-01
Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is presented. The technique takes frequency-domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency-domain behavior with respect to the test signals. The interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict worst-case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18-bay Mini-Mast truss structure.
Reducing the overlay metrology sensitivity to perturbations of the measurement stack
NASA Astrophysics Data System (ADS)
Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen
2017-03-01
Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multi-wafer measurements, it is hard to predict the metrology accuracy and robustness against process variations, which naturally occur from wafer to wafer and lot to lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation when the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented as an improvement of the overlay metrology setup flow. An extensive analysis of Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible with the image-plane detection of µDBO rather than the pupil-plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied to a wide range of applications, independent of layers and devices.
Pharmacist-led discharge medication counselling: A scoping review.
Bonetti, Aline F; Reis, Wálleri C; Lombardi, Natália Fracaro; Mendes, Antonio M; Netto, Harli Pasquini; Rotta, Inajara; Fernandez-Llimos, Fernando; Pontarolo, Roberto
2018-06-01
Discharge medication counselling has produced improved quality of care and health outcomes, especially by reducing medication errors and readmission rates, and improving medication adherence. However, no studies have assembled an evidence-based discharge counselling process for clinical pharmacists. Thus, the present study aims to map the components of the pharmacist-led discharge medication counselling process. We performed a scoping review by searching electronic databases (Pubmed, Scopus, and DOAJ) and conducting a manual search to identify studies published up to July 2017. Studies that addressed pharmacist-led discharge medication counselling, regardless of the population, clinical conditions, and outcomes evaluated, were included. A total of 1563 studies were retrieved, with 75 matching the inclusion criteria. Thirty-two different components were identified, and the most prevalent were the indication of the medications and adverse drug reactions, which were reported in more than 50% of the studies. The components were reported similarly by studies from the USA and the rest of the world, and over the years. However, 2 differences were identified: the use of a dosage schedule, which was more frequent in studies published in 2011 or before and in studies outside the USA; and the teach-back technique, which was used more frequently in the USA. Poor quality reporting was also observed, especially regarding the duration of the counselling, the number of patients, and the medical condition. Mapping the components of the pharmacist-led discharge counselling studies through a scoping review allowed us to reveal how this service is performed around the world. Wide variability in this process and poor reporting were identified. Future studies are needed to define the core outcome set of this clinical pharmacy service to allow the generation of robust evidence and reproducibility in clinical practice. © 2018 John Wiley & Sons, Ltd.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini difference have asymptotically normal distributions and bounded influence functions and are B-robust estimators; hence, unlike the standard deviation, they are protected from the presence of outliers in the sample. Results comparing the scale estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini difference is considered.
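Both estimators mentioned in the abstract are straightforward to compute. The sketch below gives the Gaussian-consistent median absolute deviation and a closed-form Gini mean difference over the order statistics; the scaling constants are standard, while the function names are ours.

```python
import numpy as np

def mad_scale(x):
    """Median absolute deviation, scaled by 1.4826 so it estimates the
    standard deviation consistently under a Gaussian model."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def gini_mean_difference(x):
    """Mean absolute difference over all sample pairs, computed in
    O(n log n) from the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    weights = 2.0 * np.arange(n) - n + 1.0  # coefficient of each x_(i)
    return (weights @ x) * 2.0 / (n * (n - 1))

# Under a Gaussian model, sigma is approximately
# gini_mean_difference(x) * sqrt(pi) / 2.
```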
Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition
NASA Astrophysics Data System (ADS)
Malcangi, Mario
2009-08-01
Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.
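To make the fusion idea concrete, here is a toy example of fuzzy decision logic combining a voiceprint score and a speech-recognition score; the membership functions, rule base, and weights are illustrative inventions, not the system described in the paper.

```python
import numpy as np

def fuzzy_accept(voiceprint_score: float, speech_score: float) -> float:
    """Toy fuzzy fusion of two match scores in [0, 1]. Rules:
    1) both scores high           -> strong accept
    2) one high, the other medium -> weak accept
    anything else contributes nothing and the output stays near 0."""
    high = lambda s: float(np.clip((s - 0.5) / 0.4, 0.0, 1.0))
    med = lambda s: float(np.clip(1.0 - abs(s - 0.5) / 0.3, 0.0, 1.0))
    v, w = voiceprint_score, speech_score
    r1 = min(high(v), high(w))                       # fuzzy AND = min
    r2 = max(min(high(v), med(w)), min(med(v), high(w)))
    # weighted defuzzification into a single acceptance confidence
    return (1.0 * r1 + 0.6 * r2) / max(r1 + r2, 1e-9)

print(fuzzy_accept(0.9, 0.85))  # both scores high -> confidence near 1
```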
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Reframing implementation as an organisational behaviour problem.
Clay-Williams, Robyn; Braithwaite, Jeffrey
2015-01-01
The purpose of this paper is to report on a process evaluation of a randomised controlled trial (RCT) intervention study that tested the effectiveness of classroom- and simulation-based crew resource management courses, alone and in combination, and to identify organisational barriers and facilitators to the implementation of team training programmes in healthcare. The RCT design consisted of a before and after study with a team training intervention. Quantitative data were gathered on utility and affective reactions to training, and on teamwork knowledge, attitudes, and behaviours of the learners. A sample of participants was interviewed at the conclusion of the study. Interview responses were analysed, alongside qualitative elements of the classroom course critique, to search for evidence, context, and facilitation clues to the implementation process. The RCT method provided scientifically robust data that supported the benefits of classroom training. Qualitative data identified a number of facilitators to implementation of team training, and shed light on some of the ways that learning was diffused throughout the organisation. Barriers to successful implementation were also identified, including hospital time and resource constraints and poor organisational communication. Quantitative randomised methods have intermittently been used to evaluate team training interventions in healthcare. Despite two decades of team training trials, however, the authors do not know as well as they would like what goes on inside the "black box" of such RCTs. While results are usually centred on outcomes, this study also provides insight into the context and mechanisms associated with those outcomes and identifies barriers and facilitators to successful intervention implementation.
Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D
2017-04-01
Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.
A database de-identification framework to enable direct queries on medical data for secondary use.
Erdal, B S; Liu, J; Ding, J; Chen, J; Marsh, C B; Kamal, J; Clymer, B D
2012-01-01
To qualify the use of patient clinical records as non-human-subjects research, electronic medical record data must be de-identified so that there is minimal risk of protected health information exposure. This study demonstrates a robust framework for structured data de-identification that can be applied to any relational data source that needs to be de-identified. Using a real-world clinical data warehouse, a pilot implementation covering limited subject areas was used to demonstrate and evaluate this new de-identification process. Query results and performance are compared between the source and target systems to validate data accuracy and usability. The combination of hashing, pseudonyms, and a session-dependent randomizer provides a rigorous de-identification framework to guard against 1) source identifier exposure; 2) an internal data analyst manually linking to source identifiers; and 3) identifier cross-linking among different researchers or multiple query sessions by the same researcher. In addition, a query rejection option is provided to refuse queries resulting in fewer than preset numbers of subjects and total records, to prevent users from accidental subject identification due to low volumes of data. This framework does not prevent subject re-identification based on prior knowledge and sequences of events. Also, it does not deal with medical free-text de-identification, although text de-identification using natural language processing can be included due to its modular design. We demonstrated a framework resulting in HIPAA-compliant databases that can be directly queried by researchers. This technique can be augmented to facilitate inter-institutional research data sharing through existing middleware such as caGrid.
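The hashing-plus-session-randomizer idea can be illustrated with standard-library primitives: keyed hashing of the source identifier with a per-session salt yields pseudonyms that are stable within a query session but unlinkable across sessions. This is a conceptual sketch under our own naming, not the framework's actual code.

```python
import hmac, hashlib, secrets

class SessionPseudonymizer:
    """Keyed hashing with a session-dependent randomizer: the same source
    identifier maps to a stable pseudonym within one query session, but
    pseudonyms cannot be linked across sessions or across analysts."""
    def __init__(self, secret_key: bytes):
        self._key = secret_key                 # institution-held secret
        self._salt = secrets.token_bytes(16)   # fresh for each session

    def pseudonym(self, source_id: str) -> str:
        msg = self._salt + source_id.encode("utf-8")
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()[:16]

# Two sessions yield unlinkable pseudonyms for the same record identifier.
s1, s2 = SessionPseudonymizer(b"key"), SessionPseudonymizer(b"key")
assert s1.pseudonym("MRN-0042") == s1.pseudonym("MRN-0042")  # stable in-session
assert s1.pseudonym("MRN-0042") != s2.pseudonym("MRN-0042")  # unlinkable across
```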
Grouin, Cyril; Zweigenbaum, Pierre
2013-01-01
In this paper, we present a comparison of two approaches to automatically de-identify medical records written in French: a rule-based system and a machine-learning based system using a conditional random fields (CRF) formalism. Both systems have been designed to process nine identifiers in a corpus of medical records in cardiology. We performed two evaluations: first, on 62 documents in cardiology, and on 10 documents in foetopathology - produced by optical character recognition (OCR) - to evaluate the robustness of our systems. We achieved a 0.843 (rule-based) and 0.883 (machine-learning) exact match overall F-measure in cardiology. While the rule-based system allowed us to achieve good results on nominative (first and last names) and numerical data (dates, phone numbers, and zip codes), the machine-learning approach performed best on more complex categories (postal addresses, hospital names, medical devices, and towns). On the foetopathology corpus, although our systems have not been designed for this corpus and despite OCR character recognition errors, we obtained promising results: a 0.681 (rule-based) and 0.638 (machine-learning) exact-match overall F-measure. This demonstrates that existing tools can be applied to process new documents of lower quality.
Schneid, Stefan C; Stärtzel, Peter M; Lettner, Patrick; Gieseler, Henning
2011-01-01
The recent US Food and Drug Administration (FDA) legislation has introduced the evaluation of the Design Space of critical process parameters in manufacturing processes. In freeze-drying, a "formulation" is expected to be robust when minor deviations of the product temperature do not negatively affect the final product quality attributes. The objective was to evaluate "formulation" robustness by investigating the effect of elevated product temperature on product quality using a bacterial vaccine solution. The vaccine solution was characterized by freeze-dry microscopy to determine the critical formulation temperature. A conservative cycle was developed using the SMART™ mode of a Lyostar II freeze dryer. Product temperature was elevated to imitate intermediate and aggressive cycle conditions. The final product was analyzed using X-ray powder diffraction (XRPD), scanning electron microscopy (SEM), Karl Fischer titration, and modulated differential scanning calorimetry (MDSC), and the live cell count (LCC) was monitored during accelerated stability testing. The cakes processed at intermediate and aggressive conditions displayed larger pores with microcollapse of walls and a stronger loss in LCC than the conservatively processed product, especially during stability testing. For all process conditions, a loss of the majority of cells was observed during storage. For freeze-drying of live bacterial vaccine solutions, the product temperature profile during primary drying appeared to be inter-related with product quality attributes.
Robust Prediction for Stationary Processes. 2D Enriched Version.
1987-11-24
…the absence of data outliers. Important performance characteristics studied include the breakdown point and the influence function. Included are numerical results for some autoregressive nominal processes.
Canales, Javier; Moyano, Tomás C.; Villarroel, Eva; Gutiérrez, Rodrigo A.
2014-01-01
Nitrogen (N) is an essential macronutrient for plant growth and development. Plants adapt to changes in N availability partly through changes in global gene expression. We integrated publicly available root microarray data under contrasting nitrate conditions to identify new genes and functions important for adaptive nitrate responses in Arabidopsis thaliana roots. Overall, more than 2000 genes exhibited changes in expression in response to nitrate treatments in Arabidopsis thaliana root organs. Global regulation of gene expression by nitrate depends largely on the experimental context. However, despite significant differences from experiment to experiment in the identity of regulated genes, there is a robust nitrate response of specific biological functions. Integrative gene network analysis uncovered relationships between nitrate-responsive genes and 11 highly co-expressed gene clusters (modules). Four of these gene network modules have robust nitrate-responsive functions such as transport, signaling, and metabolism. Network analysis suggested that G2-like transcription factors are key regulatory factors controlling transport and signaling functions. Our meta-analysis highlights the role of biological processes not studied before in the context of the nitrate response, such as root hair development, and provides testable hypotheses to advance our understanding of nitrate responses in plants. PMID:24570678
Ding, Zihao; Karkare, Siddharth; Feng, Jun; ...
2017-11-09
K-Cs-Sb bialkali antimonide photocathodes grown by a triple-element codeposition method have been found to have excellent quantum efficiency (QE) and outstanding near-atomic surface smoothness, and have been employed in the VHF gun in the Advanced Photoinjector Experiment (APEX); however, their robustness in terms of lifetime at elevated photocathode temperature has not yet been investigated. In this paper, the relationship between the lifetime of the K-Cs-Sb photocathode and the photocathode temperature is investigated. The origin of the significant QE degradation at photocathode temperatures over 70 °C has been identified as the loss of cesium atoms from the K-Cs-Sb photocathode, based on in situ x-ray analysis of the photocathode film during the decay process. The findings from this work will not only further the understanding of the behavior of K-Cs-Sb photocathodes at elevated temperature and help develop more temperature-robust cathodes, but will also serve as an important guide to the design and operation of future high-field rf guns employing such photocathodes.
RIDES: Robust Intrusion Detection System for IP-Based Ubiquitous Sensor Networks
Amin, Syed Obaid; Siddiqui, Muhammad Shoaib; Hong, Choong Seon; Lee, Sungwon
2009-01-01
The IP-based Ubiquitous Sensor Network (IP-USN) is an effort to build the “Internet of things”. By utilizing IP for low power networks, we can benefit from existing well established tools and technologies of IP networks. Along with many other unresolved issues, securing IP-USN is of great concern for researchers so that future market satisfaction and demands can be met. Without proper security measures, both reactive and proactive, it is hard to envisage an IP-USN realm. In this paper we present a design of an IDS (Intrusion Detection System) called RIDES (Robust Intrusion DEtection System) for IP-USN. RIDES is a hybrid intrusion detection system, which incorporates both Signature and Anomaly based intrusion detection components. For signature based intrusion detection this paper only discusses the implementation of distributed pattern matching algorithm with the help of signature-code, a dynamically created attack-signature identifier. Other aspects, such as creation of rules are not discussed. On the other hand, for anomaly based detection we propose a scoring classifier based on the SPC (Statistical Process Control) technique called CUSUM charts. We also investigate the settings and their effects on the performance of related parameters for both of the components. PMID:22412321
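The anomaly-based component builds on CUSUM control charts from statistical process control. A one-sided tabular CUSUM is simple to state; the sketch below is a generic version of the chart (the slack k and threshold h are the conventional tuning parameters), not the RIDES scoring classifier itself.

```python
import numpy as np

def cusum_alarms(x, target, sigma, k=0.5, h=5.0):
    """One-sided tabular CUSUM chart: accumulate standardized deviations
    beyond a slack of k and raise an alarm when either cumulative sum
    exceeds h. Returns the indices of the alarmed observations."""
    s_hi = s_lo = 0.0
    alarms = []
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        z = (xi - target) / sigma
        s_hi = max(0.0, s_hi + z - k)   # drift above the target
        s_lo = max(0.0, s_lo - z - k)   # drift below the target
        if s_hi > h or s_lo > h:
            alarms.append(i)            # flag and restart the chart
            s_hi = s_lo = 0.0
    return alarms
```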
Li, Zi-An; Fontaíña-Troitiño, N.; Kovács, A.; Liébana-Viñas, S.; Spasova, M.; Dunin-Borkowski, R. E.; Müller, M.; Doennig, D.; Pentcheva, R.; Farle, M.; Salgueiriño, V.
2015-01-01
Polar oxide interfaces are an important focus of research due to their novel functionality, which is not available in the bulk constituents. So far, research has focused mainly on heterointerfaces derived from the perovskite structure. It is important to extend our understanding of electronic reconstruction phenomena to a broader class of materials and structure types. Here we report, from high-resolution transmission electron microscopy and quantitative magnetometry, a robust (above room temperature, Curie temperature TC ≫ 300 K), environmentally stable, ferromagnetically coupled interface layer between the antiferromagnetic rocksalt CoO core and a 2-4 nm thick antiferromagnetic spinel Co3O4 surface layer in octahedron-shaped nanocrystals. Density functional theory calculations with an on-site Coulomb repulsion parameter identify the origin of the experimentally observed ferromagnetic phase as a charge transfer process (partial reduction) of Co3+ to Co2+ at the CoO/Co3O4 interface, with Co2+ being in the low spin state, unlike the high spin state of its counterpart in CoO. This finding may serve as a guideline for designing new functional nanomagnets based on oxidation-resistant antiferromagnetic transition metal oxides. PMID:25613569
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used because of their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with a maximum error below 2.5%, which proves the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation, and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
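The processing chain in this abstract (differentiating the charge curve, Gaussian smoothing, locating a feature of interest, and a linear capacity map) can be sketched in a few lines. The snippet below is a schematic of that chain; the smoothing width, the peak-based FOI choice, and the coefficients of the linear map are placeholders to be fitted on reference ageing data, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ic_curve(voltage, capacity, sigma=3.0):
    """Incremental capacity dQ/dV from a charging curve, denoised with a
    Gaussian filter; sigma (in samples) is a tuning choice."""
    dq_dv = np.gradient(np.asarray(capacity, float), np.asarray(voltage, float))
    return gaussian_filter1d(dq_dv, sigma=sigma)

def foi_position(voltage, capacity):
    """Voltage at the dominant IC peak, one possible feature of interest."""
    return np.asarray(voltage, float)[np.argmax(ic_curve(voltage, capacity))]

def estimate_capacity(a, b, voltage, capacity_partial):
    """Linear SoH map: capacity ~ a * FOI position + b, with a and b
    fitted beforehand on a reference cell's ageing data."""
    return a * foi_position(voltage, capacity_partial) + b
```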
Large-Scale SRM Screen of Urothelial Bladder Cancer Candidate Biomarkers in Urine.
Duriez, Elodie; Masselon, Christophe D; Mesmin, Cédric; Court, Magali; Demeure, Kevin; Allory, Yves; Malats, Núria; Matondo, Mariette; Radvanyi, François; Garin, Jérôme; Domon, Bruno
2017-04-07
Urothelial bladder cancer is a condition associated with high recurrence and substantial morbidity and mortality. Noninvasive urinary tests that detect bladder cancer and tumor recurrence are required to significantly improve patient care. Over the past decade, numerous bladder cancer candidate biomarkers have been identified in the context of extensive proteomics or transcriptomics studies. To translate these findings into clinically useful biomarkers, the systematic evaluation of these candidates remains the bottleneck. Such evaluation involves large-scale quantitative LC-SRM (liquid chromatography-selected reaction monitoring) measurements, targeting hundreds of signature peptides by monitoring thousands of transitions in a single analysis. The design of highly multiplexed SRM analyses is driven by several factors: throughput, robustness, selectivity, and sensitivity. Because of the complexity of the samples to be analyzed, some measurements (transitions) can suffer interference from coeluting isobaric species, resulting in biased or inconsistent estimated peptide/protein levels. The assessment of the quality of SRM data is therefore critical to allow such inconsistent data to be flagged. We describe an efficient and robust method to process large SRM data sets, including the processing of the raw data, the detection of low-quality measurements, the normalization of the signals for each protein, and the estimation of protein levels. Using this methodology, a variety of proteins previously associated with bladder cancer were assessed through the analysis of urine samples from a large cohort of cancer patients and corresponding controls, in an effort to establish a priority list of the most promising candidates to guide subsequent clinical validation studies.
Pandemic influenza preparedness: an ethical framework to guide decision-making
Thompson, Alison K; Faith, Karen; Gibson, Jennifer L; Upshur, Ross EG
2006-01-01
Background Planning for the next pandemic influenza outbreak is underway in hospitals across the world. The global SARS experience has taught us that ethical frameworks to guide decision-making may help to reduce collateral damage and increase trust and solidarity within and between health care organisations. Good pandemic planning requires reflection on values because science alone cannot tell us how to prepare for a public health crisis. Discussion In this paper, we present an ethical framework for pandemic influenza planning. The ethical framework was developed with expertise from clinical, organisational and public health ethics and validated through a stakeholder engagement process. The ethical framework includes both substantive and procedural elements for ethical pandemic influenza planning. The incorporation of ethics into pandemic planning can be helped by senior hospital administrators sponsoring its use, by having stakeholders vet the framework, and by designing or identifying decision review processes. We discuss the merits and limits of an applied ethical framework for hospital decision-making, as well as the robustness of the framework. Summary The need for reflection on the ethical issues raised by the spectre of a pandemic influenza outbreak is great. Our efforts to address the normative aspects of pandemic planning in hospitals have generated interest from other hospitals and from the governmental sector. The framework will require re-evaluation and refinement and we hope that this paper will generate feedback on how to make it even more robust. PMID:17144926
Standardizing evaluation process: Necessary for achieving SDGs - A case study of India.
Srivastava, Alok
2018-05-09
A set of 17 Sustainable Development Goals (SDGs) adopted by the United Nations General Assembly in September 2015 are to be implemented and achieved in every country from the year 2016 to 2030. In the Indian context, all these goals are very relevant and critical, as India missed the target on many components of the Millennium Development Goals (MDGs). The author strongly feels that one of the key reasons was the lack of an in-built robust system for measuring the progress and achievements of the MDGs. Robust and regular monitoring and evaluation of programmes and schemes aiming at different SDGs is therefore the need of the hour. A National Evaluation Policy (NEP) would set the tone in the right direction from the very beginning for achieving the SDGs. Taking India as a case study, the paper discusses the critical factors pertinent to having a well-defined national-level policy for standardizing evaluation. Using real examples under different components of an evaluation policy, the paper discusses and questions the credibility and acceptance of the present evaluation system. The paper identifies five core mantras, or pre-requisites, of a national evaluation guideline. The paper emphasizes the importance of an evaluation policy in India, and in other countries as well, to provide authentic data gathered through a well-designed evaluation process and to allow corrective measures to be taken on time to achieve the SDGs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
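A minimal version of the horsetail-matching objective can be written as the integrated squared difference between a kernel-smoothed empirical CDF of the quantity of interest and a target CDF. The sketch below makes our own choices of kernel, bandwidth, and quadrature; the paper's formulation is more general.

```python
import numpy as np
from scipy.stats import norm

def horsetail_metric(q_samples, target_cdf, bandwidth=0.05):
    """Integrated squared difference between a kernel-smoothed empirical
    CDF of the quantity of interest q and a target CDF; the smoothing
    makes the objective differentiable in the samples."""
    q = np.asarray(q_samples, dtype=float)
    grid = np.linspace(q.min() - 3 * bandwidth, q.max() + 3 * bandwidth, 200)
    # average of Gaussian-smoothed step functions, one per sample
    ecdf = norm.cdf((grid[:, None] - q[None, :]) / bandwidth).mean(axis=1)
    diff = ecdf - target_cdf(grid)
    return float(np.sum(diff ** 2) * (grid[1] - grid[0]))

# A simple risk-averse target: all probability mass at or below q0 = 1.0.
step_target = lambda t, q0=1.0: (t >= q0).astype(float)
```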
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
To balance computational precision and efficiency in the robust optimization of complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to blade-tip clearance; a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
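At both levels of such a scheme, the workhorse is an ordinary quadratic response surface fitted by least squares. The sketch below shows that surrogate step only (fit and predict); the hierarchy, the collaborative decomposition, and the robust objective built from separate mean and standard-deviation surfaces are layered on top of it, and the function names are ours.

```python
import numpy as np

def _quad_features(X):
    """Design matrix for a full quadratic surface in d variables:
    intercept, linear terms, and all square/cross terms."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of y ~ quadratic surface in the design variables."""
    beta, *_ = np.linalg.lstsq(_quad_features(X), np.asarray(y, float),
                               rcond=None)
    return beta

def predict_response_surface(beta, X):
    return _quad_features(X) @ beta

# Robust use: fit one surface to the response mean and one to its standard
# deviation, then minimize, e.g., mean + w * std over the cheap surrogates.
```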