Science.gov

Sample records for identifying robust process

  1. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is increasingly important to identify differentially expressed genes that are closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. Results on simulated data show that our method obtains higher identification accuracy than competing methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
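
The Lp-norm error term is the ingredient that confers outlier robustness: for p < 1, large residuals are penalized far less steeply than under the usual L2 loss. A minimal sketch of that idea only (not the paper's full Schatten-p-norm low-rank factorization, which additionally requires a singular value decomposition):

```python
def lp_norm(v, p):
    """Elementwise Lp-norm of a vector: (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

# Residuals with one gross outlier: under L2 the outlier dominates the
# total error almost entirely; under p = 0.5 its relative influence shrinks.
residuals = [0.1] * 9 + [10.0]
l2 = lp_norm(residuals, 2.0)
l_half = lp_norm(residuals, 0.5)
```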

  2. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environmental fluctuations), and operational approaches. These must be traded, along with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability, including manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as the requirement for robustness.

  3. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    Conventional prediction of milling stability has been extensively studied based on the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process at the shop floor level, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking into account the uncertain parameters of the milling system. The time finite element method, a milling stability theory, is adopted as the conventional deterministic model. The uncertain dynamics parameters are handled by a non-probabilistic model in which the uncertain parameters are assumed to be bounded, with no need for probability density functions. By doing so, interval stability lobes are obtained instead of deterministic ones, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from changes in the modal parameters of the spindle-tool system and in the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and obtains satisfactory bounds on the lobe diagrams. The proposed method helps practitioners on the shop floor make decisions on machining parameter selection.
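
The interval (non-probabilistic) treatment of uncertainty can be illustrated with a much simpler stand-in for the time finite element model: when the stability limit is monotonic in each bounded parameter, its lower and upper bounds over the parameter box can be obtained by evaluating the box vertices. A hedged sketch; the function and intervals below are illustrative, not the paper's milling model:

```python
import itertools

def interval_bounds(f, intervals):
    """Bound f over a box of bounded uncertain parameters by evaluating
    every vertex; valid when f is monotonic in each parameter."""
    vals = [f(*vertex) for vertex in itertools.product(*intervals)]
    return min(vals), max(vals)

# Toy stability limit ~ stiffness / mass, both parameters known only
# as intervals (a placeholder, not a real lobe computation).
lo, hi = interval_bounds(lambda k, m: k / m, [(1.0, 2.0), (4.0, 5.0)])
```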

  4. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Kajiya, J.; Youngberg, J.; Petersen, T. L.; Ravindra, H.; Done, W.; Cox, B. V.; Cohen, E.

    1981-04-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during the reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure, compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the research and results in the areas of noise suppression using dual-input adaptive noise cancellation, articulation rate change techniques, and spectral subtraction, and describes an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded, helicopter speech by 10.6 points. In addition, summaries are included of prior studies in constant-Q signal analysis and synthesis, perceptual modelling, speech activity detection, and pole-zero modelling of noisy signals. Three recent studies in speech modelling using the critical band analysis-synthesis transform and using splines are then presented. Finally, a list of major publications generated under this contract is given.
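
Spectral subtraction, the technique credited above with the 10.6-point intelligibility gain, amounts to subtracting an estimated noise magnitude spectrum from the noisy magnitude spectrum, bin by bin, and flooring the result so magnitudes never go negative. A minimal sketch operating on precomputed magnitude spectra; the `alpha` over-subtraction factor and spectral floor are common later refinements, not parameters from this report:

```python
def spectral_subtraction(noisy_mag, noise_mag, alpha=1.0, floor=0.01):
    """Per-bin magnitude-domain noise subtraction with a spectral floor.
    The enhanced magnitudes would be recombined with the noisy phase."""
    return [max(n - alpha * d, floor * d) for n, d in zip(noisy_mag, noise_mag)]
```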

  5. Identifying Robust and Sensitive Frequency Bands for Interrogating Neural Oscillations

    PubMed Central

    Shackman, Alexander J.; McMenamin, Brenton W.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2010-01-01

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic (“resting” or “spontaneous”) electroencephalogram (EEG) into five bands: delta (1–5Hz), alpha-low (6–9Hz), alpha-high (10–11Hz), beta (12–19Hz), and gamma (>21Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization / desynchronization) domains. PMID
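
Indexing oscillatory activity by mean power density in a predefined band reduces, in the simplest case, to averaging the power spectral density (PSD) values whose frequencies fall inside the band. A sketch under that assumption; the frequency grid and PSD below are placeholders, not EEG data:

```python
def band_power(freqs, psd, band):
    """Mean power density over the PSD bins whose frequency lies in band."""
    lo, hi = band
    vals = [p for f, p in zip(freqs, psd) if lo <= f <= hi]
    return sum(vals) / len(vals)

# Placeholder spectrum: a 1-10 Hz grid with power equal to frequency,
# queried for the 6-9 Hz alpha-low band defined in the abstract.
freqs = [float(f) for f in range(1, 11)]
alpha_low = band_power(freqs, freqs, (6.0, 9.0))
```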

  6. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability
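
Many-objective optimization as used here returns a set of Pareto-approximate plans, i.e. plans not dominated on all objectives by any other candidate. A minimal dominance filter, assuming for illustration that all objectives are minimized (the study's five real objectives and 38 decisions are of course far richer):

```python
def dominates(a, b):
    """a dominates b when a is no worse on every objective (all
    objectives minimized here) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```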

  7. Nonlinear filtering for robust signal processing

    SciTech Connect

    Palmieri, F.

    1987-01-01

    A generalized framework for the description and design of a large class of nonlinear filters is proposed. This family includes, among others, the newly defined Ll-estimators, which generalize the order statistic filters (L-filters) and the nonrecursive linear filters (FIR). Such estimators are particularly efficient in filtering signals that do not follow Gaussian distributions, and they can be designed to restore signals and images corrupted by impulsive noise. These filters are very appealing since they can be made robust against perturbations of the assumed model, or insensitive to the presence of spurious outliers in the data. The linear part of the filter characterizes their essential spectral behavior; it can be constrained to a given shape to obtain nonlinear filters that combine given frequency characteristics with noise immunity. The generalized nonlinear filters can also be used adaptively, with the coefficients computed dynamically via LMS or RLS algorithms.
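
An L-filter, one of the two families the Ll-estimators generalize, outputs a weighted sum of the *sorted* samples in a sliding window: weights [0, 1, 0] give the median filter (robust to impulses), while uniform weights recover an ordinary moving-average FIR. A sketch:

```python
def l_filter(signal, weights):
    """Order statistic (L-)filter: a weighted sum of the sorted samples
    in each sliding window of length len(weights)."""
    k = len(weights)
    out = []
    for i in range(len(signal) - k + 1):
        window = sorted(signal[i:i + k])
        out.append(sum(w * x for w, x in zip(weights, window)))
    return out

# [0, 1, 0] -> running median: the impulse at 100 is rejected.
median_filtered = l_filter([1, 100, 2, 3], [0, 1, 0])
```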

  8. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, termed adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches, which are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for the realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the faster and more intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  9. Identifying core features of adaptive metabolic mechanisms for chronic heat stress attenuation contributing to systems robustness.

    PubMed

    Gu, Jenny; Weber, Katrin; Klemp, Elisabeth; Winters, Gidon; Franssen, Susanne U; Wienpahl, Isabell; Huylmans, Ann-Kathrin; Zecher, Karsten; Reusch, Thorsten B H; Bornberg-Bauer, Erich; Weber, Andreas P M

    2012-05-01

    The contribution of metabolism to heat stress may play a significant role in defining the robustness and recovery of systems, either by providing the energy and metabolites required for cellular homeostasis or through the generation of protective osmolytes. However, the mechanisms by which heat stress attenuation could be adapted through metabolic processes as a stabilizing strategy against thermal stress are still largely unclear. We address this issue through metabolomic and transcriptomic profiles of populations along a thermal cline where two seagrass species, Zostera marina and Zostera noltii, were found in close proximity. Significant changes were detected in these profile comparisons, with a larger response magnitude to heat stress observed in northern populations. Sucrose, fructose, and myo-inositol were identified as the most responsive of the 29 analyzed organic metabolites. Many key enzymes in the Calvin cycle, glycolysis, and pentose phosphate pathways also showed significant differential expression. The reported comparison suggests that adaptive mechanisms involving metabolic pathways dampen the impacts of heat stress, and that interactions between the metabolome and proteome should be further investigated in systems biology to understand robust design features against abiotic stress. PMID:22402787

  10. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
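
The nonparametric wavelet step, attenuating disturbances before estimation, can be illustrated with a single-level Haar transform and detail-coefficient thresholding. This is a generic textbook sketch, not the flight test processing chain:

```python
def haar_step(x):
    """One level of the (orthogonal, unnormalized) Haar wavelet
    transform; x must have even length."""
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def denoise(x, thresh):
    """Zero small detail coefficients, then invert the transform."""
    approx, detail = haar_step(x)
    detail = [0.0 if abs(d) < thresh else d for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```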

  11. Modelling System Processes to Support Uncertainty Analysis and Robustness Evaluation

    NASA Technical Reports Server (NTRS)

    Blackwell, Charles; Cuzzi, Jeffrey (Technical Monitor)

    1996-01-01

    The use of advanced systems control techniques in the development of a dynamic system requires results from effective mathematical modelling. Historically, in some cases the use of a model which reflects only the "expected" or "nominal" information about the system's internal processes has resulted in acceptable system performance, but it should be recognized that in those cases success was due to a combination of the remarkable inherent potential of feedback control for robustness and fortuitously wide margins between system performance requirements and system performance capability. In the case of CELSS development, no such fortuitous combinations should be expected; rather, the uncertainty in the information on the system's processes will have to be taken into account in order to generate a performance-robust design. In this paper, we develop one perspective on the issue of providing robustness as mathematical modelling impacts it, and present some examples of model formats which serve the needed purpose.

  12. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation, the robustness of the method planning, and the springback itself are considered the main factors which influence the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first. These principles, which consist of an accurate full-cycle simulation with final validation settings together with robust process design and optimization, are discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy based on the analysis of springback is put forward, and simulation-based springback compensation is introduced in the process design phase. Finally, verification and comparison in tryout and production are given, which confirm that the robust springback compensation methodology is effective during die development.
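
A common compensation strategy (often called the displacement adjustment method; the abstract does not name its exact scheme) iteratively morphs the die face opposite to the predicted springback until the sprung part matches the nominal shape. A sketch with a toy forward model standing in for the springback simulation:

```python
def compensate(nominal, springback, alpha=1.0, iters=5):
    """Iteratively adjust the die surface against the predicted
    springback deviation. `springback(die) -> part shape` is a
    hypothetical forward simulation, not a real FE solver."""
    die = list(nominal)
    for _ in range(iters):
        part = springback(die)
        die = [d - alpha * (p - n) for d, p, n in zip(die, part, nominal)]
    return die

# Toy forward model: the part always springs up by a constant 0.5.
die = compensate([1.0, 2.0], lambda shape: [v + 0.5 for v in shape])
```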

  13. Consistent Robustness Analysis (CRA) Identifies Biologically Relevant Properties of Regulatory Network Models

    PubMed Central

    Saithong, Treenut; Painter, Kevin J.; Millar, Andrew J.

    2010-01-01

    Background A number of studies have previously demonstrated that “goodness of fit” is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Results Here, we propose a novel robustness analysis that aims to determine the “common robustness” of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Conclusions Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model. PMID:21179566
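
The notion of "common robustness", scoring a model across several biologically plausible parameter sets rather than a single one, can be sketched as follows. The perturbation size, tolerance, and scoring function here are illustrative assumptions, not CRA's actual criteria:

```python
import random

def consistent_robustness(score, parameter_sets, perturb=0.1, trials=50, seed=1):
    """Average, over several plausible parameter sets, the fraction of
    random perturbations that keep the model score within 10% of its
    unperturbed value (a simplified stand-in for CRA)."""
    rng = random.Random(seed)
    per_set = []
    for params in parameter_sets:
        ref = score(params)
        ok = sum(
            1 for _ in range(trials)
            if abs(score([v * (1 + rng.uniform(-perturb, perturb)) for v in params]) - ref)
            <= 0.1 * abs(ref)
        )
        per_set.append(ok / trials)
    return sum(per_set) / len(per_set)
```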

  14. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    NASA Astrophysics Data System (ADS)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    The Pearl Harbor aquifer is the most important freshwater resource on Oahu (Hawaii, U.S.A), providing water to nearly half a million people. Recent studies show that current water use is reaching or exceeding sustainable yield. Climate change and increasing resident and tourist populations are predicted to further stress the aquifer. The island has lost huge tracts of freshwater and estuarine wetlands since human settlement; the dependence of many endemic, endangered species on these wetlands, as well as ecosystem benefits from wetlands, link humans and wildlife through water management. After the collapse of the sugar industry on Oahu (mid-1990s), the Waiahole ditch--a massive stream diversion bringing water from the island's windward to the leeward side--became a hotly disputed resource. Commercial interests and traditional farmers have clashed over the water, which could also serve to support the Pearl Harbor aquifer. Considering competing interests, impending scarcity, and uncertain future conditions, how can groundwater be managed most effectively? Complex water networks like this are characterized by conflicts between stakeholders, coupled human-natural systems, and future uncertainty. The Water Diplomacy Framework offers a model for analyzing such complex issues by integrating multiple disciplinary perspectives, identifying intervention points, and proposing sustainable solutions. The Water Diplomacy Framework is a theory and practice of implementing adaptive water management for complex problems by shifting the discussion from 'allocation of water' to 'benefit from water resources'. This is accomplished through an interactive process that includes stakeholder input, joint fact finding, collaborative scenario development, and a negotiated approach to value creation. Presented here are the results of the initial steps in a long term project to resolve water limitations on Oahu. 
We developed a conceptual model of the Pearl Harbor Aquifer system and identified

  15. Processing Robustness for A Phenylethynyl Terminated Polyimide Composite

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2004-01-01

    The processability of a phenylethynyl terminated imide resin matrix (designated as PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated as PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one is made by NASA in-house, and the other is from an industrial source. The composite processing robustness is investigated with respect to the prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that the PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 deg C to 300 deg C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of the response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. A good consolidation is achievable at 371 deg C (700 deg F)/100 Psi, which yields an RT OHC strength of 62 Ksi. However, processability declines dramatically at temperatures below 350 deg C (662 deg F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.

  16. Robust Read Channel System Directly Processing Asynchronous Sampling Data

    NASA Astrophysics Data System (ADS)

    Yamamoto, Akira; Mouri, Hiroki; Yamamoto, Takashi

    2006-02-01

    In this study, we describe a robust read channel employing a novel timing recovery system and a unique Viterbi detector which extracts channel timing and channel data directly from asynchronous sampling data. The timing recovery system in the proposed read channel has feed-forward architecture and consists entirely of digital circuits. Thus, it enables robust timing recovery at high-speed and has no performance deterioration caused by variations in analog circuits. The Viterbi detector not only detects maximum-likelihood data using a reference level generator, but also transforms asynchronous data into pseudosynchronous data using two clocks, such as an asynchronous clock generated by a frequency synthesizer and a pseudosynchronous clock generated by a timing detector. The proposed read channel has achieved a constant and fast frequency acquisition time against initial frequency error and has improved its bit error rate performance. This robust read channel system can be used for high-speed signal processing and LSIs using nanometer-scale semiconductor processes.

  17. Combining structured decision making and value-of-information analyses to identify robust management strategies.

    PubMed

    Moore, Joslin L; Runge, Michael C

    2012-10-01

    Structured decision making and value-of-information analyses can be used to identify robust management strategies even when uncertainty about the response of the system to management is high. We used these methods in a case study of management of the non-native invasive species gray sallow willow (Salix cinerea) in alpine Australia. Establishment of this species is facilitated by wildfire. Managers are charged with developing a management strategy despite extensive uncertainty regarding the frequency of fires, the willow's demography, and the effectiveness of management actions. We worked with managers in Victoria to conduct a formal decision analysis. We used a dynamic model to identify the best management strategy for a range of budgets. We evaluated the robustness of the strategies to uncertainty with value-of-information analyses. Results of the value-of-information analysis indicated that reducing uncertainty would not change which management strategy was identified as the best unless budgets increased substantially. This outcome suggests there would be little value in implementing adaptive management for the problem we analyzed. The value-of-information analyses also highlighted that the main driver of gray sallow willow invasion (i.e., fire frequency) is not necessarily the same factor that is most important for decision making (i.e., willow seed dispersal distance). Value-of-information analysis enables managers to better target monitoring and research efforts toward the factors critical to making the decision and to assess the need for adaptive management. PMID:22862796
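
The core value-of-information quantity, the expected value of perfect information (EVPI), is the gap between the expected payoff if the uncertain state were known before acting and the expected payoff of the best single action. When EVPI is small, reducing uncertainty cannot change the preferred strategy, which is the paper's finding. A minimal sketch with a hypothetical payoff table, not the willow model:

```python
def evpi(outcomes, probs):
    """outcomes[action][state] -> payoff; probs are state probabilities.
    EVPI = E[best action per state] - best action's expected payoff."""
    best_expected = max(
        sum(p * o for p, o in zip(probs, row)) for row in outcomes
    )
    expected_best = sum(
        p * max(row[s] for row in outcomes) for s, p in enumerate(probs)
    )
    return expected_best - best_expected
```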

  18. Process Architecture for Managing Digital Object Identifiers

    NASA Astrophysics Data System (ADS)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first three years, ESDIS evolved the process, involving the data provider community in the development of procedures for creating and assigning DOIs and of guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014, and the other recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually driven system to one that largely automates the DOI process. The new automated features include: a) reviewing the DOI metadata, b) assigning an opaque DOI name if the data provider so chooses, and c) reserving, registering, and updating the DOIs. The flexibility of reserving a DOI allows data providers to embed and test the DOI in the data product metadata before formally registering it with EZID. The DOI update process allows changing any DOI metadata except the DOI name once the name has been registered. Currently, ESDIS has processed a total of 557 DOIs, of which 379 are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page. 
This poster will describe in detail the unique automated process and
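
The reserve/register/update life cycle described above, including the rule that a DOI's name is frozen once registered, can be sketched as a toy registry. This is hypothetical illustration code, not the ESDIS system or the EZID API:

```python
class DoiRegistry:
    """Toy DOI life cycle: reserve -> register; metadata stays
    updatable, but a registered DOI's name can never change."""

    def __init__(self):
        self.reserved = {}
        self.registered = {}

    def reserve(self, name, metadata):
        # Reserved DOIs can be embedded and tested before registration.
        self.reserved[name] = dict(metadata)

    def register(self, name):
        # Promote a reserved DOI to registered (e.g. with EZID).
        self.registered[name] = self.reserved.pop(name)

    def update(self, name, **changes):
        if "name" in changes and name in self.registered:
            raise ValueError("a registered DOI's name cannot change")
        store = self.registered if name in self.registered else self.reserved
        store[name].update(changes)
```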

  19. A robust sinusoidal signal processing method for interferometers

    NASA Astrophysics Data System (ADS)

    Wu, Xiang-long; Zhang, Hui; Tseng, Yang-Yu; Fan, Kuang-Chao

    2013-10-01

    Laser interferometers are widely used as a reference for length measurement. Reliable bidirectional optical fringe counting is normally obtained by using two orthogonal sinusoidal signals derived from the two outputs of an interferometer with a path difference. These signals are subject to disturbance by geometrical errors of the moving target, which cause the separation and shift of the two interfering light spots on the detector. This results in typical Heydemann errors, including DC drift, amplitude variation, and loss of orthogonality of the two sinusoidal signals, which seriously reduce the accuracy of fringe counting. This paper presents a robust sinusoidal signal processing method that corrects the distorted waveforms in hardware, and a corresponding circuit board has been designed. A linear stage equipped with a laser displacement interferometer and a height gauge equipped with a linear grating interferometer are used as the test beds. Experimental results show that, even with a seriously disturbed input waveform, the output Lissajous circle can always be stabilized after signal correction. This robust method increases the stability and reliability of the sinusoidal signals supplied to the data acquisition device for pulse counting and phase subdivision.
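
Heydemann-type correction maps the distorted signal pair, with offsets p and q, unequal amplitudes a and b, and orthogonality error phi, back onto a unit Lissajous circle. A sketch of the standard correction formulas, assuming the distortion model x = a·cos(t) + p, y = b·sin(t + phi) + q (parameter names are generic, not those of the paper's circuit):

```python
import math

def correct_quadrature(x, y, p, q, a, b, phi):
    """Recover (cos t, sin t) from distorted quadrature signals:
    remove offsets, rescale amplitudes, then undo the phase error."""
    u = (x - p) / a
    v = ((y - q) / b - u * math.sin(phi)) / math.cos(phi)
    return u, v
```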

  20. Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models

    NASA Astrophysics Data System (ADS)

    Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.

    2016-06-01

    Regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on combining the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability across RCMs, the aims of this work are to evaluate the performance of two novel approaches, based on the reliability ensemble averaging (REA) method, for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) and consider variability at different levels of information: on the one hand annual and seasonal rainfall, and on the other hand monthly rainfall. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated. The plausible future rainfall scenario for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved and the uncertainties involved are decreased, allowing them to drive hydrological impact models and thus reduce cumulative errors in the modeling chain.
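
A reliability-weighted ensemble in the spirit of REA down-weights models with large historical bias. The full REA method also iterates a convergence criterion against the ensemble mean; this sketch keeps only the bias criterion, a loudly simplified assumption:

```python
def rea_ensemble(projections, biases, eps=1e-9):
    """Reliability-weighted ensemble mean: each model's projection is
    weighted by the inverse magnitude of its historical bias
    (simplified single-criterion stand-in for REA)."""
    weights = [1.0 / (abs(b) + eps) for b in biases]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, projections)) / total
```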

  1. Application of NMR Methods to Identify Detection Reagents for Use in the Development of Robust Nanosensors

    SciTech Connect

    Cosman, M; Krishnan, V V; Balhorn, R

    2004-04-29

    Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low-affinity binders are identified, they can be linked together to produce multidentate synthetic high-affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site as, or a different site than, a known 'marker' ligand, doxorubicin, on the surface of TeNT fragment C (TetC). Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

  2. Product and Process Improvement Using Mixture-Process Variable Designs and Robust Optimization Techniques

    SciTech Connect

    Sahni, Narinder S.; Piepel, Gregory F.; Naes, Tormod

    2009-04-01

    The quality of an industrial product depends on the raw material proportions and the process variable levels, both of which need to be taken into account in designing a product. This article presents a case study from the food industry in which both kinds of variables were studied by combining a constrained mixture experiment design and a central composite process variable design. Based on the natural structure of the situation, a split-plot experiment was designed and models involving the raw material proportions and process variable levels (separately and combined) were fitted. Combined models were used to study: (i) the robustness of the process to variations in raw material proportions, and (ii) the robustness of the raw material recipes with respect to fluctuations in the process variable levels. Further, the expected variability in the robust settings was studied using the bootstrap.

  3. Robust Density-Based Clustering To Identify Metastable Conformational States of Proteins.

    PubMed

    Sittel, Florian; Stock, Gerhard

    2016-05-10

    A density-based clustering method is proposed that is deterministic, computationally efficient, and self-consistent in its parameter choice. By calculating a geometric coordinate space density for every point of a given data set, a local free energy is defined. On the basis of these free energy estimates, the frames are lumped into local free energy minima, ultimately forming microstates separated by local free energy barriers. The algorithm is embedded into a complete workflow to robustly generate Markov state models from molecular dynamics trajectories. It consists of (i) preprocessing of the data via principal component analysis in order to reduce the dimensionality of the problem, (ii) proposed density-based clustering to generate microstates, and (iii) dynamical clustering via the most probable path algorithm to construct metastable states. To characterize the resulting state-resolved conformational distribution, dihedral angle content color plots are introduced which identify structural differences of protein states in a concise way. To illustrate the performance of the method, three well-established model problems are adopted: conformational transitions of hepta-alanine, folding of villin headpiece, and functional dynamics of bovine pancreatic trypsin inhibitor. PMID:27058020
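    The core of the clustering step can be sketched in a few lines: estimate a local density from neighbour counts, convert it to a free energy, and let each point join its nearest neighbour of lower free energy, so that chains of links terminate at local free-energy minima. This is a minimal illustration of the idea only, not the authors' exact algorithm or its parameter choices:

```python
import numpy as np

def free_energy_cluster(X, r):
    """Lump points into basins of a local free-energy estimate.

    density = number of neighbours within radius r; F = -ln(density)
    (in units of kT). Each point links to its nearest neighbour of
    lower F (ties broken by index); link chains end at local minima,
    which serve as the microstate labels."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = (d < r).sum(axis=1)
    F = -np.log(density)
    order = np.arange(n)
    parent = order.copy()
    for i in range(n):
        cand = np.where((F < F[i]) | ((F == F[i]) & (order < i)))[0]
        if len(cand):
            j = cand[np.argmin(d[i, cand])]
            if d[i, j] < r:          # only link within the neighbourhood
                parent[i] = j
    labels = np.empty(n, dtype=int)
    for i in range(n):               # follow links to the basin minimum
        j = i
        while parent[j] != j:
            j = parent[j]
        labels[i] = j
    return labels
```

    On well-separated point clouds each cloud collapses onto its own density peak, which is the deterministic, parameter-light behaviour the abstract emphasises.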

  4. Whole-Embryo Modeling of Early Segmentation in Drosophila Identifies Robust and Fragile Expression Domains

    PubMed Central

    Bieler, Jonathan; Pozzorini, Christian; Naef, Felix

    2011-01-01

    Segmentation of the Drosophila melanogaster embryo results from the dynamic establishment of spatial mRNA and protein patterns. Here, we exploit recent temporal mRNA and protein expression measurements on the full surface of the blastoderm to calibrate a dynamical model of the gap gene network on the entire embryo cortex. We model the early mRNA and protein dynamics of the gap genes hunchback, Kruppel, giant, and knirps, taking as regulatory inputs the maternal Bicoid and Caudal gradients, plus the zygotic Tailless and Huckebein proteins. The model captures the expression patterns faithfully, and its predictions are assessed from gap gene mutants. The inferred network shows an architecture based on reciprocal repression between gap genes that can stably pattern the embryo on a realistic geometry but requires complex regulations such as those involving the Hunchback monomer and dimers. Sensitivity analysis identifies the posterior domain of giant as among the most fragile features of an otherwise robust network, and hints at redundant regulations by Bicoid and Hunchback, possibly reflecting recent evolutionary changes in the gap-gene network in insects. PMID:21767480

  5. Stretching the limits of forming processes by robust optimization: A demonstrator

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Atzema, E. H.; van den Boogaard, A. H.

    2013-12-01

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.

  6. Stretching the limits of forming processes by robust optimization: A demonstrator

    SciTech Connect

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    2013-12-16

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.

  7. Robust syntaxin-4 immunoreactivity in mammalian horizontal cell processes

    PubMed Central

    HIRANO, ARLENE A.; BRANDSTÄTTER, JOHANN HELMUT; VILA, ALEJANDRO; BRECHA, NICHOLAS C.

    2009-01-01

    Horizontal cells mediate inhibitory feed-forward and feedback communication in the outer retina; however, mechanisms that underlie transmitter release from mammalian horizontal cells are poorly understood. Toward determining whether the molecular machinery for exocytosis is present in horizontal cells, we investigated the localization of syntaxin-4, a SNARE protein involved in targeting vesicles to the plasma membrane, in mouse, rat, and rabbit retinae using immunocytochemistry. We report robust expression of syntaxin-4 in the outer plexiform layer of all three species. Syntaxin-4 occurred in processes and tips of horizontal cells, with regularly spaced, thicker sandwich-like structures along the processes. Double labeling with syntaxin-4 and calbindin antibodies, a horizontal cell marker, demonstrated syntaxin-4 localization to horizontal cell processes; whereas, double labeling with PKC antibodies, a rod bipolar cell (RBC) marker, showed a lack of co-localization, with syntaxin-4 immunolabeling occurring just distal to RBC dendritic tips. Syntaxin-4 immunolabeling occurred within VGLUT-1-immunoreactive photoreceptor terminals and underneath synaptic ribbons, labeled by CtBP2/RIBEYE antibodies, consistent with localization in invaginating horizontal cell tips at photoreceptor triad synapses. Vertical sections of retina immunostained for syntaxin-4 and peanut agglutinin (PNA) established that the prominent patches of syntaxin-4 immunoreactivity were adjacent to the base of cone pedicles. Horizontal sections through the OPL indicate a one-to-one co-localization of syntaxin-4 densities at likely all cone pedicles, with syntaxin-4 immunoreactivity interdigitating with PNA labeling. Pre-embedding immuno-electron microscopy confirmed the subcellular localization of syntaxin-4 labeling to lateral elements at both rod and cone triad synapses. 
Finally, co-localization with SNAP-25, a possible binding partner of syntaxin-4, indicated co-expression of these SNARE proteins in

  8. Decisional tool to assess current and future process robustness in an antibody purification facility.

    PubMed

    Stonier, Adam; Simaria, Ana Sofia; Smith, Martin; Farid, Suzanne S

    2012-07-01

    Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database-driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete-event simulation engine so as to model the dynamic features of biopharmaceutical manufacture and impact of resource constraints. For a given titer, the tool performed brute force optimization so as to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch-to-batch titer variability and extended this to assess the long-term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie-lines connecting the position of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations and the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load. PMID:22641562
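    In miniature, the brute-force sizing search reads like the sketch below: enumerate candidate configurations, discard those that break the schedule, and keep the cheapest. The cost and schedule models here are invented placeholders, not the tool's database-driven models:

```python
import itertools

# Hypothetical models (illustrative only): size a capture column by
# diameter and cycles per batch for a given titer.
def batch_cost(diameter_m, cycles):
    resin = 500.0 * diameter_m ** 2 * cycles   # resin/buffer cost per batch
    labour = 1200.0 / cycles                   # fewer cycles, less labour
    return resin + labour

def batch_time_h(diameter_m, cycles, titer_g_l):
    return 3.0 * cycles + 2.0 * titer_g_l / diameter_m ** 2

def optimise(titer_g_l, max_time_h=24.0):
    """Brute force: cheapest (diameter, cycles) that keeps the schedule."""
    options = itertools.product([0.6, 0.8, 1.0, 1.2], [1, 2, 3, 4])
    feasible = [(d, n) for d, n in options
                if batch_time_h(d, n, titer_g_l) <= max_time_h]
    return min(feasible, key=lambda dn: batch_cost(*dn))
```

    Sweeping `optimise` over a titer range reproduces the qualitative behaviour described above: as the titer rises, small-column configurations drop out of the feasible set and the optimum migrates to larger equipment.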

  9. Combining Dynamical Decoupling with Robust Optimal Control for Improved Quantum Information Processing

    NASA Astrophysics Data System (ADS)

    Grace, Matthew D.; Witzel, Wayne M.; Carroll, Malcolm S.

    2010-03-01

    Constructing high-fidelity control pulses that are robust to control and system/environment fluctuations is a crucial objective for quantum information processing (QIP). We combine dynamical decoupling (DD) with optimal control (OC) to identify control pulses that achieve this objective numerically. Previous DD work has shown that general errors up to (but not including) third order can be removed from π- and π/2-pulses without concatenation. By systematically integrating DD and OC, we are able to increase pulse fidelity beyond this limit. Our hybrid method of quantum control incorporates a newly-developed algorithm for robust OC, providing a nested DD-OC approach to generate robust controls. Motivated by solid-state QIP, we also incorporate relevant experimental constraints into this DD-OC formalism. To demonstrate the advantage of our approach, the resulting quantum controls are compared to previous DD results in open and uncertain model systems. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  10. Commonsense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    2005-01-01

    This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in…

  11. Identifying Fragilities in Biochemical Networks: Robust Performance Analysis of Fas Signaling-Induced Apoptosis

    PubMed Central

    Shoemaker, Jason E.; Doyle, Francis J.

    2008-01-01

    Proper control of apoptotic signaling is critical to immune response and development in multicellular organisms. Two tools from control engineering are applied to a mathematical model of Fas ligand signaling-induced apoptosis. Structured singular value analysis determines the volume in parameter space within which the system parameters may exist and still maintain efficacious signaling, but is limited to linear behaviors. Sensitivity analysis can be applied to nonlinear systems but is difficult to relate to performance criteria. Thus, structured singular value analysis is used to quantify performance during apoptosis rejection, ensuring that the system remains sensitive but not overly so to apoptotic stimuli. Sensitivity analysis is applied when the system has switched to the death-inducing, apoptotic steady state to determine parameters significant to maintaining the bistability. The analyses reveal that the magnitude of the death signal is fragile to perturbations in degradation parameters (failures in the ubiquitin/proteasome mechanism) while the timing of signal expression can be tuned by manipulating local parameters. Simultaneous parameter uncertainty highlights apoptotic fragility to disturbances in the ubiquitin/proteasome system. Sensitivity analysis reveals that the robust signaling characteristics of the apoptotic network are due to the network architecture, and the apoptotic signaling threshold is best manipulated by interactions upstream of the apoptosome. PMID:18539637

  12. Advanced process monitoring and feedback control to enhance cell culture process production and robustness.

    PubMed

    Zhang, An; Tsang, Valerie Liu; Moore, Brandon; Shen, Vivian; Huang, Yao-Ming; Kshirsagar, Rashmi; Ryll, Thomas

    2015-12-01

    It is a common practice in biotherapeutic manufacturing to define a fixed-volume feed strategy for nutrient feeds, based on historical cell demand. However, once the feed volumes are defined, they are inflexible to batch-to-batch variations in cell growth and physiology and can lead to inconsistent productivity and product quality. In an effort to control critical quality attributes and to apply process analytical technology (PAT), a fully automated cell culture feedback control system has been explored in three different applications. The first study illustrates that frequent monitoring and automatically controlling the complex feed based on a surrogate (glutamate) level improved protein production. More importantly, the resulting feed strategy was translated into a manufacturing-friendly manual feed strategy without impact on product quality. The second study demonstrates the improved process robustness of an automated feed strategy based on online bio-capacitance measurements for cell growth. In the third study, glucose and lactate concentrations were measured online and were used to automatically control the glucose feed, which in turn changed lactate metabolism. These studies suggest that the auto-feedback control system has the potential to significantly increase productivity and improve robustness in manufacturing, with the goal of ensuring process performance and product quality consistency. PMID:26108810
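    The feedback idea in these studies (measure online, adjust the feed) can be caricatured with a proportional controller on glucose. The one-compartment model and every number below are purely illustrative, not taken from the article:

```python
# Toy fed-batch simulation: proportional feedback holds glucose at a
# setpoint despite constant consumption (all values are illustrative).
setpoint = 4.0          # g/L target glucose concentration
glucose = 6.0           # g/L starting concentration
uptake = 0.8            # g/L per hour consumed by the culture
gain = 1.0              # proportional controller gain
history = []
for hour in range(48):
    glucose -= uptake                   # cells consume glucose
    error = setpoint - glucose          # online measurement vs. target
    feed = max(0.0, gain * error)       # feed only, never withdraw
    glucose += feed
    history.append(glucose)
```

    Unlike a fixed-volume feed, the feed amount here tracks the culture's actual demand, which is the batch-to-batch robustness argument made above.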

  13. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  14. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set.
It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use
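    Stripped of Hadoop and cloud storage, the core tiling step is just splitting a large image array into keyed tiles; in the MapReduce framing, the map phase would emit these key/tile pairs for downstream workers. A toy sketch:

```python
import numpy as np

def tile(image, tile_size):
    """Yield ((tile_row, tile_col), tile_array) pairs for an image,
    the way a MapReduce-style tiling job would key its map output.
    Edge tiles are simply smaller when the size does not divide evenly."""
    rows, cols = image.shape[:2]
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            yield (r // tile_size, c // tile_size), \
                  image[r:r + tile_size, c:c + tile_size]
```

    Because each tile is produced independently, the work parallelises trivially across a cluster, which is where the hours-to-minutes speedup comes from.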

  15. Identifying robust communities and multi-community nodes by combining top-down and bottom-up approaches to clustering

    PubMed Central

    Gaiteri, Chris; Chen, Mingming; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Chaibub Neto, Elias; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-01-01

    Biological functions are carried out by groups of interacting molecules, cells or tissues, known as communities. Membership in these communities may overlap when biological components are involved in multiple functions. However, traditional clustering methods detect non-overlapping communities. These detected communities may also be unstable and difficult to replicate, because traditional methods are sensitive to noise and parameter settings. These aspects of traditional clustering methods limit our ability to detect biological communities, and therefore our ability to understand biological functions. To address these limitations and detect robust overlapping biological communities, we propose an unorthodox clustering method called SpeakEasy which identifies communities using top-down and bottom-up approaches simultaneously. Specifically, nodes join communities based on their local connections, as well as global information about the network structure. This method can quantify the stability of each community, automatically identify the number of communities, and quickly cluster networks with hundreds of thousands of nodes. SpeakEasy shows top performance on synthetic clustering benchmarks and accurately identifies meaningful biological communities in a range of datasets, including: gene microarrays, protein interactions, sorted cell populations, electrophysiology and fMRI brain imaging. PMID:26549511
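    A toy propagation rule in the spirit of SpeakEasy (not the published algorithm) shows how local and global information can combine: each node adopts the neighbour label that is most over-represented relative to that label's global frequency, so local connections (bottom-up) and network-wide statistics (top-down) both enter the update:

```python
def communities(adj, sweeps=5):
    """Label propagation with a global-frequency correction.

    adj is an adjacency matrix (list of lists of 0/1). A node adopts
    the neighbour label whose observed count most exceeds its expected
    count (degree * global label frequency); ties go to the smallest
    label, so the procedure is deterministic."""
    n = len(adj)
    labels = list(range(n))
    for _ in range(sweeps):
        for i in range(n):
            nbrs = [j for j in range(n) if adj[i][j]]
            if not nbrs:
                continue
            best_lab, best_score = None, None
            for lab in sorted({labels[j] for j in nbrs}):
                observed = sum(labels[j] == lab for j in nbrs)
                expected = len(nbrs) * labels.count(lab) / n
                score = observed - expected
                if best_score is None or score > best_score:
                    best_lab, best_score = lab, score
            labels[i] = best_lab
    return labels
```

    On two cliques joined by a single bridge edge, the global correction penalises the already-common label on the far side of the bridge, so the bridge does not merge the two communities, a plain local-majority rule can.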

  16. Identifying influential factors of business process performance using dependency analysis

    NASA Astrophysics Data System (ADS)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
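    The dependency-tree construction can be illustrated at depth one: scan every metric/threshold pair and keep the split that best separates KPI violations. A real implementation, like the decision-tree learners such frameworks use, would recurse on each branch; the metric names and data below are invented:

```python
import numpy as np

def best_split(metrics, kpi_violated, names):
    """Find the single metric/threshold that best explains KPI violations.

    metrics      : array (n_samples, n_metrics) of process/QoS metrics
    kpi_violated : boolean array (n_samples,), True where the KPI missed
    Returns (metric_name, threshold, misclassification_rate)."""
    best = (None, None, 1.0)
    for j, name in enumerate(names):
        for t in np.unique(metrics[:, j]):
            predicted = metrics[:, j] > t
            err = float(np.mean(predicted != kpi_violated))
            if err < best[2]:
                best = (name, float(t), err)
    return best
```

    The returned split is exactly the kind of statement a drill-down analysis surfaces: "the KPI is missed when this metric exceeds this threshold."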

  17. Phosphoproteomic profiling of tumor tissues identifies HSP27 Ser82 phosphorylation as a robust marker of early ischemia

    PubMed Central

    Zahari, Muhammad Saddiq; Wu, Xinyan; Pinto, Sneha M.; Nirujogi, Raja Sekhar; Kim, Min-Sik; Fetics, Barry; Philip, Mathew; Barnes, Sheri R.; Godfrey, Beverly; Gabrielson, Edward; Nevo, Erez; Pandey, Akhilesh

    2015-01-01

    Delays between tissue collection and tissue fixation result in ischemia and ischemia-associated changes in protein phosphorylation levels, which can misguide the examination of signaling pathway status. To identify a biomarker that serves as a reliable indicator of ischemic changes that tumor tissues undergo, we subjected harvested xenograft tumors to room temperature for 0, 2, 10 and 30 minutes before freezing in liquid nitrogen. Multiplex TMT-labeling was conducted to achieve precise quantitation, followed by TiO2 phosphopeptide enrichment and high resolution mass spectrometry profiling. LC-MS/MS analyses revealed phosphorylation level changes of a number of phosphosites in the ischemic samples. The phosphorylation of one of these sites, S82 of the heat shock protein 27 kDa (HSP27), was especially abundant and consistently upregulated in tissues with delays in freezing as short as 2 minutes. In order to eliminate effects of ischemia, we employed a novel cryogenic biopsy device which begins freezing tissues in situ before they are excised. Using this device, we showed that the upregulation of phosphorylation of S82 on HSP27 was abrogated. We thus demonstrate that our cryogenic biopsy device can eliminate ischemia-induced phosphoproteome alterations, and measurements of S82 on HSP27 can be used as a robust marker of ischemia in tissues. PMID:26329039

  18. The Robustness of Pathway Analysis in Identifying Potential Drug Targets in Non-Small Cell Lung Carcinoma

    PubMed Central

    Dalby, Andrew; Bailey, Ian

    2014-01-01

    The identification of genes responsible for causing cancers from gene expression data has had varied success. Often the genes identified depend on the methods used for detecting expression patterns, or on the ways that the data had been normalized and filtered. The use of gene set enrichment analysis is one way to introduce biological information in order to improve the detection of differentially expressed genes and pathways. In this paper we show that the use of network models while still subject to the problems of normalization is a more robust method for detecting pathways that are differentially overrepresented in lung cancer data. Such differences may provide opportunities for novel therapeutics. In addition, we present evidence that non-small cell lung carcinoma is not a series of homogeneous diseases; rather that there is a heterogeny within the genotype which defies phenotype classification. This diversity helps to explain the lack of progress in developing therapies against non-small cell carcinoma and suggests that drug development may consider multiple pathways as treatment targets.

  19. Multiexperiment data processing in identifying model helicopter's yaw dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Haosheng; Chen, Darong

    2003-09-01

    Multi-experiment data are usually needed when identifying a model helicopter's yaw dynamics. In order to strengthen the information about the dynamics and reduce the effect of noise, a new least squares method using a weighted criterion is investigated to estimate the model parameters. The factors of the weighted criterion are determined automatically by a trained neural perceptron. The simulated outputs of the model derived by this method fit the measured outputs well, suggesting that this data processing method is useful for identifying the yaw dynamics and for processing multi-experiment data in system identification.
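    The weighted criterion amounts to weighted least squares. However the weights are produced (the paper trains a perceptron for this; below they are simply given), the estimate itself reduces to ordinary least squares after scaling each row by the square root of its weight:

```python
import numpy as np

def weighted_lstsq(X, y, w):
    """Minimise sum_i w_i * (y_i - X_i @ theta)^2 by rescaling rows
    with sqrt(w_i) and solving ordinary least squares."""
    s = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)
    return theta
```

    Down-weighting a noisy experiment lets the clean experiments dominate the fit, which is the point of the weighted criterion.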

  20. Robust Low Cost Aerospike/RLV Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Ellis, David; McKechnie

    1999-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. At the same time, fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of a shrinking NASA budget. In recent years, combustion chambers of equivalent size to the Aerospike chamber have been fabricated at NASA-Marshall Space Flight Center (MSFC) using innovative, relatively low-cost, vacuum-plasma-spray (VPS) techniques. Typically, such combustion chambers are made of the copper alloy NARloy-Z. However, current research and development conducted by NASA-Lewis Research Center (LeRC) has identified a Cu-8Cr-4Nb alloy which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. In fact, researchers at NASA-LeRC have demonstrated that powder metallurgy (P/M) Cu-8Cr-4Nb exhibits better mechanical properties at 1,200 F than NARloy-Z does at 1,000 F. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost, VPS process to deposit Cu-8Cr-4Nb with mechanical properties that match or exceed those of P/M Cu-8Cr-4Nb. In addition, oxidation resistant and thermal barrier coatings can be incorporated as an integral part of the hot wall of the liner during the VPS process. Tensile properties of Cu-8Cr-4Nb material produced by VPS are reviewed and compared to material produced previously by extrusion. VPS formed combustion chamber liners have also been prepared and will be reported on following scheduled hot firing tests at NASA-Lewis.

  1. Optical wafer metrology sensors for process-robust CD and overlay control in semiconductor device manufacturing

    NASA Astrophysics Data System (ADS)

    den Boef, Arie J.

    2016-06-01

    This paper presents three optical wafer metrology sensors that are used in lithography for robustly measuring the shape and position of wafers and device patterns on these wafers. The first two sensors are a level sensor and an alignment sensor that measure, respectively, a wafer height map and a wafer position before a new pattern is printed on the wafer. The third sensor is an optical scatterometer that measures critical dimension (CD) variations and overlay after the resist has been exposed and developed. These sensors are based on different optical concepts, but they share the same challenge: sub-nm precision is required at high throughput on a large variety of processed wafers and in the presence of unknown wafer processing variations. It is the purpose of this paper to explain these challenges in more detail and give an overview of the various solutions that have been introduced over the years to achieve process-robust optical wafer metrology.

  2. Identifying different types of stochastic processes with the same spectra

    NASA Astrophysics Data System (ADS)

    Kim, Jong U.; Kish, Laszlo B.; Schmera, Gabor

    2005-05-01

    We propose a new way of pattern recognition which can distinguish different stochastic processes even if they have the same power density spectrum. Known crosscorrelation techniques recognize only the same realizations of a stochastic process in the two signal channels. However, crosscorrelation techniques do not work for recognizing independent realizations of the same stochastic process because their crosscorrelation function and cross spectrum are zero. A method able to do that would have the potential to revolutionize identification and pattern recognition techniques, including sensing and security applications. The new method we are proposing is able to identify independent realizations of the same process, and at the same time, does not give false alarms for different processes which are very similar in nature. We demonstrate the method by using different realizations of two different types of random telegraph signals, which are indistinguishable with respect to power density spectra (PDS). We call this method the bispectrum correlation coefficient (BCC) technique.

  3. Hybrid image processing for robust extraction of lean tissue on beef cut surface

    NASA Astrophysics Data System (ADS)

    Hwang, Heon; Park, Bosoon; Nguyen, Minh D.; Chen, Yud-Ren

    1996-02-01

    A hybrid image processing system which automatically separates lean tissues from the beef cut surface image and generates the lean tissue contour has been developed. Because of the inhomogeneous distribution and fuzzy pattern of fat and lean tissues on the beef cut, conventional image segmentation and contour generation algorithms suffer from heavy computation, algorithmic complexity, and poor robustness. The proposed system utilizes an artificial neural network to enhance the robustness of processing. The system is composed of three procedures: a pre-network procedure, network-based lean tissue segmentation, and a post-network procedure. At the pre-network stage, gray-level images of beef cuts were segmented and resized to fit the network inputs. Features such as fat and bone were enhanced, and the enhanced input image was converted to a grid-pattern image with a grid cell size of 4 by 4 pixels. At the network stage, the normalized gray value of each grid cell was taken as the network input. The pre-trained network generated the grid image output of the isolated lean tissue. A sequence of post-network processing followed to obtain the detailed contour of the lean tissue. The training scheme of the network and its separation performance are presented and analyzed. The developed hybrid system shows the feasibility of human-like, robust object segmentation and contour generation for complex, fuzzy, and irregular images.

  4. Robust control chart for change point detection of process variance in the presence of disturbances

    NASA Astrophysics Data System (ADS)

    Huat, Ng Kooi; Midi, Habshah

    2015-02-01

    A conventional control chart for detecting shifts in process variance is typically developed in circumstances where the nominal value of the variance is unknown, under the essential assumption that the underlying distribution of the quality characteristic is normal. This is not always the case, and the statistical estimates used for these charts are very sensitive to occasional outliers. Robust control charts have therefore been put forward for situations where the underlying normality assumption is not met, serving as a remedial measure for contamination in process data. The existing approach, the Biweight A pooled-residuals method, is resistant to localized disturbances but lacks efficiency when disturbances are diffuse. Diffuse disturbances are those that have an equal chance of perturbing any observation, whereas a localized disturbance affects every member of a certain subsample or subsamples. The efficiency of an estimator in the presence of disturbances thus depends heavily on whether the disturbances are distributed throughout the observations or concentrated in a few subsamples. To this end, we propose in this paper a new robust MBAS control chart based on a subsample-based robust Modified Biweight A scale estimator of the process standard deviation. It has strong resistance to both localized and diffuse disturbances as well as high efficiency when no disturbances are present. The performance of the proposed robust chart was evaluated against several decision criteria through a Monte Carlo simulation study.
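    The paper's subsample-based Modified Biweight A estimator is not reproduced in the abstract; as a reference point, a standard Tukey biweight (A-type) scale estimator, the family it modifies, can be sketched as follows. The tuning constant c = 9 is a conventional choice, not taken from the paper.

```python
import numpy as np

def biweight_scale(x, c=9.0):
    """Tukey biweight scale estimate: smoothly downweights points far from
    the median, so occasional outliers barely inflate the estimate."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    u = (x - med) / (c * mad)
    w = np.abs(u) < 1.0                        # points beyond c*MAD get zero weight
    num = np.sum(((x - med) ** 2 * (1 - u**2) ** 4)[w])
    den = np.sum(((1 - u**2) * (1 - 5 * u**2))[w])
    return np.sqrt(len(x) * num) / np.abs(den)

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 200)
contaminated = np.append(clean, [50.0, -60.0])  # two gross outliers
s_clean = biweight_scale(clean)
s_cont = biweight_scale(contaminated)
```

    Unlike the ordinary standard deviation, which the two outliers inflate severalfold, the biweight estimate stays near the scale of the clean data.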

  5. Some Results on the Analysis of Stochastic Processes with Uncertain Transition Probabilities and Robust Optimal Control

    SciTech Connect

    Keyong Li; Seong-Cheol Kang; I. Ch. Paschalidis

    2007-09-01

    This paper investigates stochastic processes that are modeled by a finite number of states but whose transition probabilities are uncertain and possibly time-varying. The treatment of uncertain transition probabilities is important because there appears to be a disconnect between the practice and theory of stochastic processes, owing to the difficulty of assigning exact probabilities to real-world events. Also, when the finite-state process comes as a reduced model of one that is more complicated in nature (possibly with a continuous state space), existing results do not facilitate rigorous analysis. Two approaches are introduced here. The first focuses on processes with one terminal state and the properties that affect their convergence rates. When a process is on a complicated graph, the bound on the convergence rate is not trivially related to the bounds on the probabilities of individual transitions. Discovering the connection between the two led us to define two concepts, which we call 'progressivity' and 'sortedness', and to a new comparison theorem for stochastic processes. An optimality criterion for robust optimal control also derives from this comparison theorem. In addition, this result is applied to mission-oriented autonomous robot control to produce performance estimates within a control framework that we propose. The second approach is in the MDP framework. We introduce our preliminary work on optimistic robust optimization, which aims at finding solutions that guarantee upper bounds on the cumulative discounted cost with prescribed probabilities. The motivation here is to address the issue that the standard robust optimal solution tends to be overly conservative.

  6. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers

    PubMed Central

    Sehgal, Vasudha; Seviour, Elena G.; Moss, Tyler J.; Mills, Gordon B.; Azencott, Robert; Ram, Prahlad T.

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases. PMID:26505200

  8. Turning process monitoring using a robust and miniaturized non-incremental interferometric distance sensor

    NASA Astrophysics Data System (ADS)

    Günther, P.; Dreier, F.; Pfister, T.; Czarske, J.

    2011-05-01

    In-process shape measurements of rotating objects such as turning parts at a metal working lathe are of great importance for monitoring production processes and for enabling zero-error production. Contactless, compact sensors with high temporal resolution as well as high precision are therefore necessary; furthermore, the sensors must be robust enough to withstand the rough ambient conditions of a production environment. We therefore developed a miniaturized and robust non-incremental fiber-optic distance sensor with dimensions of only 30 x 40 x 90 mm3, which can be attached directly adjacent to the turning tool bit of a metal working lathe and allows precise in-process 3D shape measurements of turning parts. In this contribution we present the results of in-process shape measurements during the turning process at a metal working lathe using this miniaturized interferometric distance sensor. The absolute radius of the turning workpiece can be determined with micron precision. To prove the accuracy of the measurement results, comparative measurements with tactile sensors have to be performed.

  9. The limits of feedforward vision: recurrent processing promotes robust object recognition when objects are degraded.

    PubMed

    Wyatte, Dean; Curran, Tim; O'Reilly, Randall

    2012-11-01

    Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded by strengthening bottom-up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and the mask and contrast, such that the recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit of these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and degradations, indicating that masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs did not require much amplification and could be rapidly resolved, making them less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas. PMID:22905822
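    The amplification argument above, that weakened bottom-up signals need more recurrent iterations to reach a recognition threshold and are therefore more vulnerable to interruption by a mask, can be illustrated with a deliberately minimal one-unit recurrent model. This toy is an illustrative assumption, not the authors' full object recognition network; the gain, threshold, and drive values are arbitrary.

```python
import math

def steps_to_threshold(drive, gain=1.2, threshold=0.5, max_steps=1000):
    """Iterate x <- tanh(gain*x + drive) from rest and count the recurrent
    steps needed for activity to cross the recognition threshold."""
    x = 0.0
    for step in range(1, max_steps + 1):
        x = math.tanh(gain * x + drive)
        if x >= threshold:
            return step
    return max_steps

steps_intact = steps_to_threshold(drive=0.5)    # strong bottom-up input
steps_degraded = steps_to_threshold(drive=0.1)  # occluded / low-contrast input
```

    The degraded input still crosses threshold, but only after more recurrent steps, so a mask that cuts processing short selectively impairs it.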

  10. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration, and the need to fuse the capabilities of both knowledge base and data base management systems, motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  11. Primary Polymer Aging Processes Identified from Weapon Headspace Chemicals

    SciTech Connect

    Chambers, D M; Bazan, J M; Ithaca, J G

    2002-03-25

    A current focus of our weapon headspace sampling work is the interpretation of the volatile chemical signatures that we are collecting. To help validate our interpretation we have been developing a laboratory-based material aging capability to simulate the material decomposition chemistries identified. Key to establishing this capability has been the development of an automated approach to process, analyze, and quantify arrays of material combinations as a function of time and temperature. Our initial approach involves monitoring the formation and migration of volatile compounds produced when a material decomposes. This approach is advantageous in that it is nondestructive and provides a direct comparison with our weapon headspace surveillance initiative. Nevertheless, it requires us to identify volatile material residues and decomposition byproducts that are not typically monitored and reported in material aging studies. Similar to our weapon monitoring method, our principal laboratory-based method involves static headspace collection by solid phase microextraction (SPME) followed by gas chromatography/mass spectrometry (GC/MS). SPME is a sorbent collection technique that is ideally suited for preconcentration and delivery of trace gas-phase compounds for analysis by GC. When combined with MS, detection limits are routinely in the low- and sub-ppb ranges, even for semivolatile and polar compounds. To automate this process we incorporated a robotic sample processor configured for SPME collection. The completed system will thermally process, sample, and analyze a material sample. Quantification of the instrument response is another process that has been integrated into the system. The current system screens low-milligram quantities of material for the formation or outgassing of small compounds as initial indicators of chemical decomposition. This emerging capability offers us a new approach to identify and non-intrusively monitor decomposition mechanisms that are

  12. A novel predictive control algorithm and robust stability criteria for integrating processes.

    PubMed

    Zhang, Bin; Yang, Weimin; Zong, Hongyuan; Wu, Zhiyong; Zhang, Weidong

    2011-07-01

    This paper introduces a novel predictive controller for single-input/single-output (SISO) integrating systems which can be applied directly, without pre-stabilizing the process. The control algorithm is designed on the basis of the tested step response model. To produce a bounded system response along the finite prediction horizon, the effect of the integrating mode must be zeroed while unmeasured disturbances exist. Here, a novel predictive feedback error compensation method is proposed to eliminate the permanent offset between the setpoint and the process output when the integrating system is affected by load disturbances. Also, a rotator factor is introduced into the performance index, which contributes to improving the robustness of the closed-loop system. Then, on the basis of Jury's dominant coefficient criterion, a robust stability condition for the resulting closed-loop system is given. Only two parameters need to be tuned for the controller, and each has a clear physical meaning, which makes the control algorithm convenient to implement. Lastly, simulations illustrate that the proposed algorithm provides excellent closed-loop performance compared with some reported methods. PMID:21353217
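    Jury's dominant coefficient criterion mentioned above has a compact sufficient form: if the leading coefficient of the characteristic polynomial in z exceeds the sum of the magnitudes of the remaining coefficients, every root lies strictly inside the unit circle. A sketch, using an arbitrary illustrative polynomial rather than the paper's closed-loop characteristic polynomial:

```python
import numpy as np

def dominant_coefficient_stable(coeffs):
    """Sufficient Schur-stability test: for a(z) = a_n z^n + ... + a_0,
    |a_n| > sum(|a_i|, i < n) forces all roots inside the unit circle."""
    coeffs = np.asarray(coeffs, dtype=float)   # highest power first
    return abs(coeffs[0]) > np.sum(np.abs(coeffs[1:]))

# Example characteristic polynomial 5z^3 + z^2 + z + 1 (leading coefficient dominates).
poly = [5.0, 1.0, 1.0, 1.0]
is_stable = dominant_coefficient_stable(poly)
roots = np.roots(poly)
```

    The test is only sufficient: a polynomial that fails it may still be stable, which is why full Jury tables are used when the condition does not hold.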

  13. CORROSION PROCESS IN REINFORCED CONCRETE IDENTIFIED BY ACOUSTIC EMISSION

    NASA Astrophysics Data System (ADS)

    Kawasaki, Yuma; Kitaura, Misuzu; Tomoda, Yuichi; Ohtsu, Masayasu

    Deterioration of Reinforced Concrete (RC) due to salt attack is known as one of the most serious problems. Development of non-destructive evaluation (NDE) techniques is therefore important for assessing the corrosion process. Reinforcement in concrete normally does not corrode because of a passive film on the surface of the reinforcement. When the chloride concentration at the reinforcement exceeds the threshold level, the passive film is destroyed; maintenance is thus desirable at an early stage. In this study, continuous acoustic emission (AE) monitoring is applied to identify the onset of corrosion and the nucleation of corrosion-induced cracking in concrete due to expansion of corrosion products. Accelerated corrosion and cyclic wet-and-dry tests are performed in a laboratory. The SiGMA (Simplified Green's functions for Moment tensor Analysis) procedure is applied to AE waveforms to clarify the source kinematics of micro-cracks: their locations, types, and orientations. Results show that the onset of corrosion and the nucleation of corrosion-induced cracking in concrete are successfully identified. Additionally, cross-sections of the reinforcement are observed with a scanning electron microscope (SEM). These results demonstrate great promise for AE techniques in monitoring salt damage at an early stage in RC structures.

  14. Robustness Tests in Determining the Earthquake Rupture Process: The June 23, 2001 Mw 8.4 Peru Earthquake

    NASA Astrophysics Data System (ADS)

    Das, S.; Robinson, D. P.

    2006-12-01

    The non-uniqueness of the problem of determining the rupture process details from analysis of body-wave seismograms was first discussed by Kostrov in 1974. We discuss how to use robustness tests together with inversion of synthetic data to identify the reliable properties of the rupture process obtained from inversion of broadband body wave data. We apply it to the great 2001 Peru earthquake. Twice in the last 200 years, a great earthquake in this region has been followed by a great earthquake in the immediately adjacent plate boundary to the south within about 10 years, indicating the potential for a major earthquake in this area in the near future. By inverting 19 pure SH-seismograms evenly distributed in azimuth around the fault, we find that the rupture was held up by a barrier and then overcame it, thereby producing the world's third largest earthquake since 1965, and we show that the stalling of the rupture in this earthquake is a robust feature. The rupture propagated for ~70 km, then skirted around a ~6000 km2 area of the fault and continued propagating for another ~200 km, returning to rupture this barrier after a ~30 second delay. The barrier has relatively low rupture speed, slip and aftershock density compared to its surroundings, and the time of the main energy release in the earthquake coincides with its rupture. We identify this barrier as a fracture zone on the subducting oceanic plate. Robinson, D. P., S. Das, A. B. Watts (2006), Earthquake rupture stalled by subducting fracture zone, Science, 312(5777), 1203-1205.

  15. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. An automatic capability to design GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although it generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also sheds light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence, simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
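    The Monte Carlo robustness measure, i.e. the fraction of randomly perturbed parameter sets for which a network still exhibits the target behavior, can be sketched on a textbook two-gene toggle switch. The network, nominal parameters, and ±10% perturbation range below are illustrative assumptions, not the paper's GRN model.

```python
import numpy as np

def is_bistable(a, n, steps=2000, dt=0.05, tol=0.5):
    """Integrate the mutual-repression toggle switch
    dx/dt = a/(1+y^n) - x,  dy/dt = a/(1+x^n) - y
    from two opposite initial states; distinct steady states => bistable."""
    finals = []
    for x, y in [(3.0, 0.0), (0.0, 3.0)]:
        for _ in range(steps):
            x, y = x + dt * (a / (1 + y**n) - x), y + dt * (a / (1 + x**n) - y)
        finals.append(x)
    return abs(finals[0] - finals[1]) > tol

def robustness(a=4.0, n=2.0, trials=100, spread=0.1, seed=0):
    """Monte Carlo robustness: fraction of perturbed parameter sets
    for which the target behavior (bistability) survives."""
    rng = np.random.default_rng(seed)
    hits = sum(
        is_bistable(a * (1 + rng.uniform(-spread, spread)),
                    n * (1 + rng.uniform(-spread, spread)))
        for _ in range(trials)
    )
    return hits / trials

frac = robustness()
```

    The same scaffold works for any behavior check; replacing `is_bistable` with an oscillation test gives the oscillator case discussed in the abstract.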

  16. Robust Canonical Coherence for Quasi-Cyclostationary Processes: Geomagnetism and Seismicity in Peru

    NASA Astrophysics Data System (ADS)

    Lepage, K.; Thomson, D. J.

    2007-12-01

    Preliminary results suggesting a connection between long-period geomagnetic fluctuations and long-period seismic fluctuations are presented. Data from the seismic detector NNA, situated in Ñaña, Peru, is compared to geomagnetic data from HUA, located in Huancayo, Peru. The high-pass filtered data from the two stations exhibit quasi-cyclostationary pulsation with daily periodicity and suggest a correspondence. The pulsation contains power predominantly between 2000 μHz and 8000 μHz, with the geomagnetic pulses leading by approximately 4 to 5 hours. A many-data-section, multitaper, robust canonical coherence analysis of the two three-component data sets is performed. The method, an adaptation suitable for quasi-cyclostationary processes of the technique presented in "Robust estimation of power spectra" (Kleiner, Martin and Thomson, Journal of the Royal Statistical Society, Series B Methodological, 1979), is described. Simulations exploring the applicability of the method are presented. Canonical coherence is detected, predominantly between the geomagnetic field and the vertical component of seismic velocity, in the band of frequencies between 1500 μHz and 2500 μHz. Group delay estimates between the geomagnetic components and vertical seismic velocity are then computed at frequencies with large canonical coherence. The estimated group delays are 8 min between geomagnetic east and vertical seismic velocity, 16 min between geomagnetic north and vertical seismic velocity, and 11 min between geomagnetic vertical and vertical seismic velocity. Possible coupling mechanisms are discussed.

  17. Quantifying Community Assembly Processes and Identifying Features that Impose Them

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Chen, Xingyuan; Kennedy, David W.; Murray, Christopher J.; Rockhold, Mark L.; Konopka, Allan

    2013-06-06

    Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  18. Directed International Technological Change and Climate Policy: New Methods for Identifying Robust Policies Under Conditions of Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Molina-Perez, Edmundo

    It is widely recognized that international environmental technological change is key to reduce the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members, and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement arrived at the COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation as it is now necessary to align large scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options and ultimately the presence of climate and technological change deep uncertainty, climate financing institutions such as the GCF, require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that the application of new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003) to the study of international technological change and climate policy provides useful insights that can be used for designing a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM) which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios. These scenarios combine five sources of uncertainty

  19. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead-zone static nonlinearities and by second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of the output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions, applied to a simplified linear process model, involve a single design parameter specific to the extended symmetrical optimum (ESO) method, which offers the desired tradeoff among several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed to prevent integrator wind-up and to compensate for the dead-zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems, focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. The resulting tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system. PMID:25330468
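    A gravitational search algorithm of the kind used here can be sketched in its basic, non-adaptive form (the standard GSA of Rashedi et al.), applied to a toy sphere objective rather than the paper's control objective; the population size, gravity schedule, and bounds are illustrative assumptions.

```python
import numpy as np

def gsa_minimize(f, dim=2, agents=20, iters=100, lo=-5.0, hi=5.0,
                 g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm: agents attract each other
    with 'masses' derived from fitness; gravity decays over iterations."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (agents, dim))
    v = np.zeros((agents, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        i_best = int(np.argmin(fit))
        if fit[i_best] < best_f:                         # track best-so-far
            best_f, best_x = fit[i_best], x[i_best].copy()
        worst, best = fit.max(), fit.min()
        m = (fit - worst) / (best - worst + 1e-12)       # best -> 1, worst -> 0
        M = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)              # decaying gravity
        acc = np.zeros_like(x)
        for i in range(agents):
            for j in range(agents):
                if i == j:
                    continue
                r = np.linalg.norm(x[i] - x[j]) + 1e-12
                acc[i] += rng.random() * g * M[j] * (x[j] - x[i]) / r
        v = rng.random((agents, dim)) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

sphere = lambda z: float(np.sum(z**2))
best_x, best_f = gsa_minimize(sphere)
```

    In the paper's setting the sphere objective would be replaced by the constrained ITAE-plus-sensitivity cost, and the gravity schedule made adaptive.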

  20. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

  1. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  2. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA, a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation, unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter
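    The spike-event-based decoding described above can be sketched in miniature. The following is a hypothetical 1-D illustration of a Gaussian-approximation point process filter (random-walk state prior, log-linear conditional intensity), not the authors' actual decoder; the neuron count, tuning curves, and noise parameter q are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.001, 20.0
t = np.arange(0.0, T, dt)
x_true = np.sin(2 * np.pi * 0.2 * t)            # intended 1-D cursor state

n = 20
b0 = np.full(n, np.log(10.0))                   # ~10 Hz baseline firing rates
b1 = rng.choice([-1.0, 1.0], n)                 # tuning of each neuron to the state

# conditional intensity lambda_i(t) = exp(b0_i + b1_i * x(t)); Bernoulli spikes
lam = np.exp(b0[:, None] + b1[:, None] * x_true[None, :])
spikes = rng.random((n, t.size)) < lam * dt

# point process filter: predict with a random walk, update at every time bin
q = 1e-4                                        # random-walk variance per bin
x_hat = np.zeros(t.size)
var = 1.0
for k in range(1, t.size):
    x_pred, v_pred = x_hat[k - 1], var + q
    lam_k = np.exp(b0 + b1 * x_pred) * dt
    var = 1.0 / (1.0 / v_pred + np.sum(b1 ** 2 * lam_k))
    x_hat[k] = x_pred + var * np.sum(b1 * (spikes[:, k] - lam_k))

corr = np.corrcoef(x_true, x_hat)[0, 1]
```

    Because the posterior is updated at every millisecond bin, decoder parameters could likewise be adapted at each spike event rather than in batches, which is the time-scale advantage the abstract describes.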

  3. Processing and Properties of Fiber Reinforced Polymeric Matrix Composites. Part 2: Processing Robustness of IM7/PETI Polyimide Composites

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    1996-01-01

    The processability of a phenylethynyl terminated imide (PETI) resin matrix composite was investigated. Unidirectional prepregs were made by coating an N-methylpyrrolidone solution of the amide acid oligomer onto unsized IM7. Two batches of prepregs were used: one made by NASA in-house, and the other from an industrial source. The composite processing robustness was investigated with respect to the effect of B-staging conditions, the prepreg shelf life, and the optimal processing window. Rheological measurements indicated that PETI's processability was only slightly affected over a wide range of B-staging temperatures (from 250 C to 300 C). The open hole compression (OHC) strength values were statistically indistinguishable among specimens consolidated using various B-staging conditions. Prepreg rheology and OHC strengths were also found to be unaffected by prolonged (i.e., up to 60 days) ambient storage. An optimal processing window was established using response surface methodology. The IM7/PETI composite was found to be more sensitive to the consolidation temperature than to the consolidation pressure. Good consolidation was achievable at 371 C/100 psi, which yielded an OHC strength of 62 ksi at room temperature. However, processability declined dramatically at temperatures below 350 C.

  4. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background: Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool-aged children. Because sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods: Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST; Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results: There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01) when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion: Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information-processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol-affected children, particularly those without dysmorphia. PMID:22458372

  5. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
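    The five procedures correspond to standard statistics and can be sketched with SciPy; the synthetic factor, response, and bin count below are illustrative, not taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)                    # sampled input factor
y = np.exp(3.0 * x) + rng.normal(0.0, 1.0, 500)   # monotonic, nonlinear response

# (i) linear relationship: Pearson correlation coefficient
r, _ = stats.pearsonr(x, y)
# (ii) monotonic relationship: Spearman rank correlation coefficient
rho, _ = stats.spearmanr(x, y)
# (iii) trend in central tendency: Kruskal-Wallis test across bins of x
bins = np.array_split(np.argsort(x), 5)
_, p_kw = stats.kruskal(*(y[idx] for idx in bins))
```

    For this curved but monotonic relationship, the rank correlation exceeds the linear one, which is exactly the situation where procedure (ii) flags a factor that procedure (i) understates.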

  6. Identifying Process Variables in Career Counseling: A Research Agenda.

    ERIC Educational Resources Information Center

    Heppner, Mary J.; Heppner, P. Paul

    2003-01-01

    Outlines areas for career counseling process research: examining the working alliance; reconceptualizing career counseling as learning; investigating process/outcome differences due to client and counselor attributes; examining influential session events; using a common problem resolution metric; examining change longitudinally; examining…

  7. Chatter Stability in Turning and Milling with in Process Identified Process Damping

    NASA Astrophysics Data System (ADS)

    Kurata, Yusuke; Merdol, S. Doruk; Altintas, Yusuf; Suzuki, Norikazu; Shamoto, Eiji

    Process damping in metal cutting is caused by the contact between the flank face of the cutting tool and the wavy surface finish, which is known to damp chatter vibrations. An analytical model with process damping has already been developed and verified in earlier research, in which the damping coefficient is considered to be proportional to the ratio of the vibration and cutting velocities. This paper presents in-process identification of the process damping force coefficient derived from cutting tests. Plunge turning is used to create a continuous reduction in cutting speed as the tool reduces the diameter of a cylindrical workpiece. When chatter stops at a critical cutting speed, the process damping coefficient is estimated by inverse solution of the stability law. It is shown that the stability lobes constructed with the identified process damping coefficient agree with experiments conducted in both turning and milling.

  8. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method. PMID:22147299
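    The scalarized multiobjective gradient step at the core of such a method can be illustrated with toy quadratic objectives; the preference weight, step size, and objective shapes below are arbitrary stand-ins, not the models identified from mill data.

```python
import numpy as np

# two competing objectives over operational settings u (e.g. vacuum pressures):
# an energy cost f1 and a quality-deviation penalty f2 (toy quadratic stand-ins)
def f1(u): return np.sum((u - 1.0) ** 2)
def f2(u): return np.sum((u + 1.0) ** 2)
def grad(u, w): return w * 2 * (u - 1.0) + (1 - w) * 2 * (u + 1.0)

w = 0.7                        # preference weight favoring energy reduction
u = np.zeros(2)                # initial operational settings
for _ in range(200):
    u -= 0.05 * grad(u, w)     # gradient descent on the scalarized objective

energy, quality = f1(u), f2(u)
```

    The stationary point of the weighted sum w*f1 + (1-w)*f2 here is u = 2w - 1 in each coordinate, so sweeping the preference weight w traces out the Pareto trade-off between the two objectives.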

  9. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
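    The Gaussian-mixture waveform modeling step (independent of the deconvolution itself) can be sketched as follows; the echo positions, widths, and noise level are synthetic, not NEON data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian echoes, a common lidar waveform model."""
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

t = np.linspace(0.0, 100.0, 400)
# synthetic waveform: canopy echo at t=40, weaker ground echo at t=70
rng = np.random.default_rng(1)
w = two_gauss(t, 1.0, 40.0, 4.0, 0.4, 70.0, 3.0) + rng.normal(0.0, 0.01, t.size)

p0 = [0.8, 35.0, 5.0, 0.3, 75.0, 5.0]      # rough initial guesses
popt, _ = curve_fit(two_gauss, t, w, p0=p0)
canopy_t, ground_t = popt[1], popt[4]
echo_sep = ground_t - canopy_t             # echo separation: canopy-height proxy
```

    Each fitted mean is an echo time, and the separation between the canopy and ground echoes is the basis for canopy-height estimates.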

  10. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes

    SciTech Connect

    Stockel, Jana; Welsh, Eric A.; Liberton, Michelle L.; Kunnavakkam, Rangesh V.; Aurora, Rajeev; Pakrasi, Himadri B.

    2008-04-22

    Cyanobacteria are oxygenic photosynthetic organisms, and the only prokaryotes known to have a circadian cycle. Unicellular diazotrophic cyanobacteria such as Cyanothece 51142 can fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. Thus, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of two processes: photosynthesis during the day, and nitrogen fixation at night. While previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5000 genes in Cyanothece 51142 during two consecutive diurnal periods. We found that ~30% of these genes exhibited robustly oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes in numerous individual pathways, such as glycolysis, the pentose phosphate pathway and glycogen metabolism, were co-regulated and maximally expressed at distinct phases during the diurnal cycle. Our analyses suggest that the demands of nitrogen fixation greatly influence major metabolic activities inside Cyanothece cells and thus drive various cellular activities. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism in a photosynthetic bacterium.

  11. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes

    NASA Astrophysics Data System (ADS)

    Sobotta, B.; Söhn, M.; Alber, M.

    2012-12-01

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
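    A minimal numpy-only sketch of the idea, using kernel interpolation as the GP posterior mean: a hypothetical 1-D "patient model" is sampled at only seven setup shifts, and the Monte Carlo over geometric uncertainty then queries the cheap surrogate. The metric shape, kernel length scale, and uncertainty magnitude are invented for the demo.

```python
import numpy as np

# toy stand-in for the expensive patient/dose model: a quality metric
# as a function of a 1-D setup shift in mm (hypothetical shape)
def dose_metric(shift):
    return np.exp(-0.5 * (shift / 5.0) ** 2)

def rbf(a, b, ell=5.0):
    """Squared-exponential kernel between two 1-D sample sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# train the GP surrogate on only a few expensive model runs
xs = np.linspace(-10.0, 10.0, 7)
ys = dose_metric(xs)
K = rbf(xs, xs) + 1e-8 * np.eye(xs.size)       # jitter for numerical stability
alpha = np.linalg.solve(K, ys)

# Monte Carlo over geometric uncertainty now queries the cheap surrogate
rng = np.random.default_rng(0)
shifts = rng.normal(0.0, 3.0, 5000)            # sampled setup errors
outcomes = rbf(shifts, xs) @ alpha             # GP posterior mean predictions
mc_mean, mc_var = outcomes.mean(), outcomes.var()
```

    Each surrogate evaluation is a dot product instead of a full treatment simulation, which is where the order-of-magnitude speedup comes from.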

  12. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated through a 0.18 μm CMOS technology and has a 45 degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has high sensitivity across the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
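    The signal arithmetic can be sketched directly; the pixel values and the 3x4 correction matrix below are hypothetical placeholders (a real matrix would be fit to a color chart under a reference illuminant), and only the EGY construction follows the abstract.

```python
import numpy as np

# per-pixel signals after demosaicing the WRGB array (illustrative values)
R, G, B, W = 0.30, 0.45, 0.20, 1.10

# W responds across the whole visible band, so the residual after subtracting
# R + G + B isolates light in the emerald-green/yellow spectral valleys
EGY = W - (R + G + B)

# hypothetical 3x4 color-correction matrix acting on [R, G, B, EGY]
M = np.array([[ 1.6, -0.4, -0.2, 0.3],
              [-0.3,  1.5, -0.2, 0.4],
              [-0.1, -0.5,  1.7, 0.1]])
rgb_out = M @ np.array([R, G, B, EGY])
```

    The extra EGY column gives the linear matrix a degree of freedom precisely where the RGB spectral sensitivities are weakest.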

  13. Correlation analysis for long time series by robustly estimated autoregressive stochastic processes

    NASA Astrophysics Data System (ADS)

    Schuh, Wolf-Dieter; Brockmann, Jan-Martin; Kargoll, Boris

    2015-04-01

    Modern sensors and satellite missions deliver huge data sets and long time series of observations. These data sets have to be handled with care because of changing correlations, conspicuous data and possible outliers. Tailored concepts for data selection and robust techniques to estimate the correlation characteristics allow for a better/optimal exploitation of the information in these measurements. In this presentation we give an overview of standard techniques for estimating correlations occurring in long time series, in the time domain as well as in the frequency domain. We discuss the pros and cons, especially with a focus on the intensified occurrence of conspicuous data and outliers. We present a concept to classify the measurements and isolate conspicuous data. We propose to describe the varying correlation behavior of the measurement series by an autoregressive stochastic process and give some hints on how to construct adaptive filters to decorrelate the measurement series and to handle the huge covariance matrices. As study object we use time series of gravity gradient data collected during the GOCE low orbit operation campaign (LOOC). Due to the low orbit, these data from 13-Jun-2014 to 21-Oct-2014 have more or less the same potential to recover the Earth gravity field with the same accuracy as all the data from the rest of the entire mission. Therefore these data are extraordinarily valuable but hard to handle, because of conspicuous data due to maneuvers during the orbit lowering phases, an overall increase in drag, saturation of the ion thrusters and other (currently) unexplained effects.
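    A minimal sketch of the proposed workflow, Yule-Walker estimation of an AR model followed by a decorrelating (whitening) filter, applied to a simulated AR(1) series; the model order and coefficient are illustrative, not GOCE values.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients from the empirical autocovariance."""
    x = x - x.mean()
    n = x.size
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(2)
# simulate a correlated measurement series: x_t = 0.8 x_{t-1} + e_t
e = rng.normal(0.0, 1.0, 20000)
x = np.zeros_like(e)
for k in range(1, e.size):
    x[k] = 0.8 * x[k - 1] + e[k]

phi = yule_walker(x, order=1)
# decorrelation (whitening) filter: subtract the one-step AR prediction
resid = x[1:] - phi[0] * x[:-1]
```

    After filtering, the residuals are approximately white, so a diagonal weight matrix can replace the huge dense covariance matrix in subsequent least-squares adjustment.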

  14. Fabrication of robust micro-patterned polymeric films via static breath-figure process and vulcanization.

    PubMed

    Li, Lei; Zhong, Yawen; Gong, Jianliang; Li, Jian; Huang, Jin; Ma, Zhi

    2011-02-15

    Here, we present the preparation of thermally stable and solvent-resistant micro-patterned polymeric films via a static breath-figure process and subsequent vulcanization, with a commercially available triblock polymer, polystyrene-b-polyisoprene-b-polystyrene (SIS). The vulcanized honeycomb-structured SIS films became self-supported, resistant to a wide range of organic solvents, and thermally stable up to 350°C for 2 h, an increase of more than 300 K compared to the uncross-linked films. This superior robustness could be attributed to the high degree of polyisoprene cross-linking. The versatility of the methodology was demonstrated by applying it to another commercially available triblock polymer, polystyrene-b-polybutadiene-b-polystyrene (SBS). In particular, hydroxy groups were introduced into SBS by hydroboration. Functionalized two-dimensional micro-patterns feasible for site-directed grafting were created with the hydroxyl-containing polymers. In addition, the fixed microporous structures could be replicated to fabricate textured positive PDMS stamps. This simple technique offers new prospects in the fields of micro-patterning, soft lithography and templates. PMID:21168143

  15. Quantitative Morphometry of Electrophysiologically Identified CA3b Interneurons Reveals Robust Local Geometry and Distinct Cell Classes

    PubMed Central

    Ascoli, Giorgio A.; Brown, Kerry M.; Calixto, Eduardo; Card, J. Patrick; Galvan, E. J.; Perez-Rosello, T.; Barrionuevo, Germán

    2010-01-01

    The morphological and electrophysiological diversity of inhibitory cells in hippocampal area CA3 may underlie specific computational roles and is not yet fully elucidated. In particular, interneurons with somata in strata radiatum (R) and lacunosum-moleculare (L-M) receive converging stimulation from the dentate gyrus and entorhinal cortex as well as within CA3. Although these cells express different forms of synaptic plasticity, their axonal trees and connectivity are still largely unknown. We investigated the branching and spatial patterns, plus the membrane and synaptic properties, of rat CA3b R and L-M interneurons digitally reconstructed after intracellular labeling. We found considerable variability within but no difference between the two layers, and no correlation between morphological and biophysical properties. Nevertheless, two cell types were identified based on the number of dendritic bifurcations, with significantly different anatomical and electrophysiological features. Axons generally branched an order of magnitude more than dendrites. However, interneurons on both sides of the R/L-M boundary revealed surprisingly modular axo-dendritic arborizations with consistently uniform local branch geometry. Both axons and dendrites followed a lamellar organization, and axons displayed a spatial preference towards the fissure. Moreover, only a small fraction of the axonal arbor extended to the outer portion of the invaded volume, and tended to return towards the proximal region. In contrast, dendritic trees demonstrated more limited but isotropic volume occupancy. These results suggest a role of predominantly local feedforward and lateral inhibitory control for both R and L-M interneurons. Such role may be essential to balance the extensive recurrent excitation of area CA3 underlying hippocampal autoassociative memory function. PMID:19496174

  16. Method for processing seismic data to identify anomalous absorption zones

    DOEpatents

    Taner, M. Turhan

    2006-01-03

    A method is disclosed for identifying zones anomalously absorptive of seismic energy. The method includes jointly time-frequency decomposing seismic traces, low frequency bandpass filtering the decomposed traces to determine a general trend of the mean frequency and bandwidth of the seismic traces, and high frequency bandpass filtering the decomposed traces to determine local variations in the mean frequency and bandwidth of the seismic traces. Anomalous zones are determined where there is a difference between the general trend and the local variations.
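    The trend/local decomposition of the mean frequency can be sketched on a synthetic trace with an artificially absorptive zone; the window lengths and the anomaly threshold are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0.0, 4.0, 1 / fs)
rng = np.random.default_rng(3)
trace = rng.normal(0.0, 1.0, t.size)

# simulate an anomalously absorptive zone between 2 s and 3 s by damping
# high frequencies with a crude moving-average low-pass
zone = (t > 2.0) & (t < 3.0)
lowpassed = np.convolve(trace, np.ones(8) / 8, mode="same")
trace[zone] = lowpassed[zone]

# joint time-frequency decomposition, then mean frequency per time slice
f, tt, S = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
mean_f = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)

# general trend = wide moving average (edge-corrected); local = residual
k = 31
num = np.convolve(mean_f, np.ones(k), mode="same")
den = np.convolve(np.ones_like(mean_f), np.ones(k), mode="same")
trend = num / den
local = mean_f - trend

# flag slices whose mean frequency drops well below the general trend
anomalous = local < -1.5 * local.std()
```

    Preferential absorption of high frequencies pulls the local mean frequency below the slowly varying trend, which is the signature the method flags.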

  17. Identifying and tracking dynamic processes in social networks

    NASA Astrophysics Data System (ADS)

    Chung, Wayne; Savell, Robert; Schütt, Jan-Peter; Cybenko, George

    2006-05-01

    The detection and tracking of embedded malicious subnets in an active social network can be computationally daunting due to the quantity of transactional data generated in the natural interaction of large numbers of actors comprising a network. In addition, detection of illicit behavior may be further complicated by evasive strategies designed to camouflage the activities of the covert subnet. In this work, we move beyond traditional static methods of social network analysis to develop a set of dynamic process models which encode various modes of behavior in active social networks. These models will serve as the basis for a new application of the Process Query System (PQS) to the identification and tracking of covert dynamic processes in social networks. We present a preliminary result from application of our technique in a real-world data stream: the Enron email corpus.

  18. A Robust Power Remote Manipulator for Use in Waste Sorting, Processing, and Packaging - 12158

    SciTech Connect

    Cole, Matt; Martin, Scott

    2012-07-01

    Disposition of radioactive waste is one of the Department of Energy's (DOE's) highest priorities. A critical component of the waste disposition strategy is shipment of transuranic (TRU) waste from DOE's Oak Ridge Reservation to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This is the mission of the DOE TRU Waste Processing Center (TWPC). The remote-handled TRU waste at the Oak Ridge Reservation is currently in a mixed waste form that must be repackaged to meet the WIPP Waste Acceptance Criteria (WAC). Because this remote-handled legacy waste is very diverse, sorting, size reducing, and packaging will require equipment flexibility and strength that is not possible with standard master-slave manipulators. To perform the wide range of tasks necessary with such diverse, highly contaminated material, TWPC worked with S.A. Technology (SAT) to modify SAT's Power Remote Manipulator (PRM) technology to provide the processing center with an added degree of dexterity and high load handling capability inside its shielded cells. TWPC and SAT incorporated innovative technologies into the PRM design to better suit the operations required at TWPC and to increase the overall capability of the PRM system. Improving on an already proven PRM system will ensure that TWPC gains the capabilities necessary to efficiently complete its TRU waste disposition mission. The collaborative effort between TWPC and S.A. Technology has yielded an extremely capable and robust solution for performing the wide range of tasks necessary to repackage TRU waste containers at TWPC. Incorporating innovative technologies into a proven manipulator system, these PRMs are expected to be an important addition to the capabilities available to shielded cell operators. The PRMs provide operators with the ability to reach anywhere in the cell, lift heavy objects, and perform the size reduction associated with the disposition of noncompliant waste. Factory acceptance testing of the TWPC Powered Remote

  19. Robust Low Cost Liquid Rocket Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Elam, Sandra; Ellis, David L.; McKechnie, Timothy; Hickman, Robert; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. Fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of shrinking budgets. Three technologies have been combined to produce an advanced liquid rocket engine combustion chamber at NASA-Marshall Space Flight Center (MSFC) using relatively low-cost, vacuum-plasma-spray (VPS) techniques. Copper alloy NARloy-Z was replaced with a new high performance Cu-8Cr-4Nb alloy developed by NASA-Glenn Research Center (GRC), which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. Functional gradient technology, developed for building composite cartridges for space furnaces, was incorporated to add oxidation-resistant and thermal barrier coatings as an integral part of the hot wall of the liner during the VPS process. NiCrAlY, utilized to produce a durable protective coating for the space shuttle high pressure fuel turbopump (HPFTP) turbine blades, was used as the functional gradient material (FGM) coating. The FGM not only serves as protection from oxidation or blanching, the main cause of engine failure, but also serves as a thermal barrier because of its lower thermal conductivity, reducing the temperature of the combustion liner by 200 F, from 1000 F to 800 F, producing longer life. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost VPS process. VPS-formed combustion chamber test articles have been formed with the FGM hot wall built in and hot-fire tested, demonstrating for the first time a coating that will remain intact through the hot firing test, and with

  20. An Excel Workbook for Identifying Redox Processes in Ground Water

    USGS Publications Warehouse

    Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.

    2009-01-01

    The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. 
The number of samples that can be analyzed
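    The workbook's decision logic amounts to a cascade of threshold tests on the measured species. The sketch below uses illustrative threshold values approximating the McMahon and Chapelle (2008) framework, not the exact published criteria or the workbook's VBA code.

```python
def assign_redox(o2, no3_n, mn_ug, fe_ug, sulfide):
    """Assign a predominant redox process from water-quality data.

    o2, no3_n (as N) and sulfide in mg/L; mn_ug and fe_ug in micrograms/L.
    Thresholds are illustrative approximations, not the published criteria.
    """
    if o2 >= 0.5:
        return "oxic: O2 reduction"
    if no3_n >= 0.5:
        return "anoxic: NO3 reduction"
    if mn_ug >= 50.0:
        return "anoxic: Mn reduction"
    if fe_ug >= 100.0:
        # measured sulfide differentiates Fe(III)- from SO4-reducing conditions
        return "anoxic: SO4 reduction" if sulfide >= 0.1 else "anoxic: Fe(III) reduction"
    return "anoxic: methanogenesis"
```

    Applied sample by sample, this cascade reproduces the kind of systematic assignment the workbook automates for large regional datasets.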

  1. Identifying User Needs and the Participative Design Process

    NASA Astrophysics Data System (ADS)

    Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan; Bergvall-Kåreborn, Birgitta; Andersson, Anna-Lena

    As the number of persons with dementia increases, and with it the demands on care and support at home, additional solutions to support persons with dementia are needed. The COGKNOW project aims to develop an integrated, user-driven cognitive prosthetic device to help persons with dementia. The project focuses on support in the areas of memory, social contact, daily living activities and feelings of safety. The design process is user-participatory and consists of iterative cycles at three test sites across Europe. In the first cycle, persons with dementia and their carers (n = 17) actively participated in the developmental process. Based on their priorities of needs and solutions, on their disabilities, and after discussion within the team, a top-four list of Information and Communication Technology (ICT) solutions was made and now serves as the basis for development: in the area of remembering, day and time orientation support, a find-mobile service and a reminding service; in the area of social contact, telephone support by picture dialling; in the area of daily activities, media control support through a music playback and radio function; and finally, in the area of safety, a warning service to indicate when the front door is open and an emergency contact service to enhance feelings of safety. The results of this first project phase show that, in general, the people with mild dementia as well as their carers were able to express and prioritize their (unmet) needs and the kind of technological assistance they preferred in the selected areas. In the next phases it will be tested whether the user-participatory design and multidisciplinary approach employed in the COGKNOW project results in a user-friendly, useful device that positively impacts the autonomy and quality of life of persons with dementia and their carers.

  2. Integrated Process Monitoring based on Systems of Sensors for Enhanced Nuclear Safeguards Sensitivity and Robustness

    SciTech Connect

    Humberto E. Garcia

    2014-07-01

    This paper illustrates safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas the objective of NMA-based methods is often to statistically evaluate materials unaccounted for (MUF), computed by solving a given mass balance equation for a material balance area (MBA) at every material balance period (MBP), in order to infer the possible existence of proliferation-driven activities, a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within a MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or less reliably observed than others. The proposed similarity between NMA- and PM-based approaches is important because performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream via incomplete fuel dissolution in a dissolver unit operation was selected, as this diversion scenario is considered problematic for detection using NMA-based methods alone. Results demonstrate benefits of conducting PM under a system
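    The NMA bookkeeping and alarm rule described above can be sketched in a few lines. The inventories and the alarm level below are made-up illustrative numbers, not values from the paper:

```python
# Toy MUF (materials unaccounted for) computation for one material
# balance period, with a diversion alarm raised when MUF reaches the
# chosen alarm level (AL). All quantities are illustrative.
def muf(begin_inventory, receipts, shipments, end_inventory):
    # MUF = (beginning inventory + receipts) - (shipments + ending inventory)
    return (begin_inventory + receipts) - (shipments + end_inventory)

def diversion_alarm(muf_kg, alarm_level_kg):
    # alarm if the unaccounted mass is at or above the alarm level
    return muf_kg >= alarm_level_kg

balance = muf(begin_inventory=120.0, receipts=30.0,
              shipments=28.0, end_inventory=121.5)
print(balance, diversion_alarm(balance, alarm_level_kg=0.5))
```

In practice the alarm level is tuned so that measurement noise rarely triggers a false alarm while a significant-quantity diversion is still detected.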

  3. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, i.e., the quality of an algorithm with respect to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects. PMID:26191792
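    One way to turn such a benchmark into a single robustness number, sketched here with a hypothetical summary measure (the paper defines its own measures), is to score an algorithm at each distortion level and average the fraction of undistorted-image quality it retains:

```python
# Hypothetical robustness summary: given quality scores (e.g. segmentation
# accuracy) at increasing distortion levels, report the mean fraction of
# the undistorted baseline quality that is retained.
def robustness(qualities):
    baseline = qualities[0]          # quality on the undistorted image
    return sum(q / baseline for q in qualities[1:]) / (len(qualities) - 1)

# accuracy of some segmenter at shading levels 0 (none) through 3 (strong)
scores = [0.95, 0.90, 0.80, 0.60]
print(round(robustness(scores), 3))   # 0.807
```

A value near 1 means the algorithm is barely affected by the distortion; lower values indicate fragility.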

  5. Processing of Perceptual Information Is More Robust than Processing of Conceptual Information in Preschool-Age Children: Evidence from Costs of Switching

    ERIC Educational Resources Information Center

    Fisher, Anna V.

    2011-01-01

    Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…

  6. Deep transcriptome-sequencing and proteome analysis of the hydrothermal vent annelid Alvinella pompejana identifies the CvP-bias as a robust measure of eukaryotic thermostability

    PubMed Central

    2013-01-01

    Background Alvinella pompejana is an annelid worm that inhabits deep-sea hydrothermal vent sites in the Pacific Ocean. Living at a depth of approximately 2500 meters, these worms experience extreme environmental conditions, including high temperature and pressure as well as high levels of sulfide and heavy metals. A. pompejana is one of the most thermotolerant metazoans, making this animal a subject of great interest for studies of eukaryotic thermoadaptation. Results In order to complement existing EST resources we performed deep sequencing of the A. pompejana transcriptome. We identified several thousand novel protein-coding transcripts, nearly doubling the sequence data for this annelid. We then performed an extensive survey of previously established prokaryotic thermoadaptation measures to search for global signals of thermoadaptation in A. pompejana in comparison with mesophilic eukaryotes. In an orthologous set of 457 proteins, we found that the best indicator of thermoadaptation was the difference in frequency of charged versus polar residues (CvP-bias), which was highest in A. pompejana. CvP-bias robustly distinguished prokaryotic thermophiles from prokaryotic mesophiles, as well as the thermophilic fungus Chaetomium thermophilum from mesophilic eukaryotes. Experimental values for thermophilic proteins supported higher CvP-bias as a measure of thermal stability when compared to their mesophilic orthologs. Proteome-wide mean CvP-bias also correlated with the body temperatures of homeothermic birds and mammals. Conclusions Our work extends the transcriptome resources for A. pompejana and identifies the CvP-bias as a robust and widely applicable measure of eukaryotic thermoadaptation. Reviewers This article was reviewed by Sándor Pongor, L. Aravind and Anthony M. Poole. PMID:23324115
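    The CvP-bias itself is simple to compute. A minimal sketch, assuming the residue sets commonly used in the thermoadaptation literature (charged D, E, K, R versus polar N, Q, S, T); these sets are an assumption here, not taken from the article's methods:

```python
# CvP-bias sketch: percentage of charged residues minus percentage of
# polar residues in a protein sequence. Residue sets are the common
# choice in the thermophily literature (an assumption, see lead-in).
CHARGED = set("DEKR")
POLAR = set("NQST")

def cvp_bias(seq):
    seq = seq.upper()
    charged = sum(aa in CHARGED for aa in seq)
    polar = sum(aa in POLAR for aa in seq)
    return 100.0 * (charged - polar) / len(seq)

print(cvp_bias("DEKRNQ"))   # positive: charged residues outnumber polar ones
```

Higher values are expected for thermostable proteomes, since charged residues support stabilizing salt bridges.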

  7. Individualized relapse prediction: personality measures and striatal and insular activity during reward-processing robustly predict relapse*

    PubMed Central

    Gowin, Joshua L.; Ball, Tali M.; Wittmann, Marc; Tapert, Susan F.; Paulus, Martin P.

    2015-01-01

    Background Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Methods 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, a combined model) to generate predictions for each participant regarding their relapse likelihood. Results 18 individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics such that a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. Conclusions These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. PMID:25977206
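    The reported likelihood ratios follow directly from a test's sensitivity and specificity. A minimal sketch with made-up confusion-matrix counts (not the study's data):

```python
# Positive and negative likelihood ratios from a confusion matrix.
# The counts below are illustrative, not the study's data.
def likelihood_ratios(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # how much a positive test raises relapse odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative test lowers them
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(tp=80, fn=20, fp=30, tn=70)
print(lr_pos, lr_neg)
```

A likelihood ratio above 1 (like the study's 2.63) increases the post-test odds of relapse; one below 1 (like 0.48) decreases them.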

  8. A two-step patterning process increases the robustness of periodic patterning in the fly eye.

    PubMed

    Gavish, Avishai; Barkai, Naama

    2016-06-01

    Complex periodic patterns can self-organize through dynamic interactions between diffusible activators and inhibitors. In the biological context, self-organized patterning is challenged by spatial heterogeneities ('noise') inherent to biological systems. How spatial variability impacts the periodic patterning mechanism and how it can be buffered to ensure precise patterning is not well understood. We examine the effect of spatial heterogeneity on the periodic patterning of the fruit fly eye, an organ composed of ∼800 miniature eye units (ommatidia) whose periodic arrangement along a hexagonal lattice self-organizes during early stages of fly development. The patterning follows a two-step process, with an initial formation of evenly spaced clusters of ∼10 cells followed by a subsequent refinement of each cluster into a single selected cell. Using a probabilistic approach, we calculate the rate of patterning errors resulting from spatial heterogeneities in cell size, position and biosynthetic capacity. Notably, error rates were largely independent of the desired cluster size but followed the distributions of signaling speeds. Pre-formation of large clusters therefore greatly increases the reproducibility of the overall periodic arrangement, suggesting that the two-stage patterning process functions to guard the pattern against errors caused by spatial heterogeneities. Our results emphasize the constraints imposed on self-organized patterning mechanisms by the need to buffer stochastic effects. Author summary Complex periodic patterns are common in nature and are observed in physical, chemical and biological systems. Understanding how these patterns are generated in a precise manner is a key challenge. Biological patterns are especially intriguing, as they are generated in a noisy environment; cell position and cell size, for example, are subject to stochastic variations, as are the strengths of the chemical signals mediating cell-to-cell communication. The need

  9. Differential Allelic Expression in the Human Genome: A Robust Approach To Identify Genetic and Epigenetic Cis-Acting Mechanisms Regulating Gene Expression

    PubMed Central

    Serre, David; Gurd, Scott; Ge, Bing; Sladek, Robert; Sinnett, Donna; Harmsen, Eef; Bibikova, Marina; Chudin, Eugene; Barker, David L.; Dickinson, Todd; Fan, Jian-Bing; Hudson, Thomas J.

    2008-01-01

    The recent development of whole genome association studies has led to the robust identification of several loci involved in different common human diseases. Interestingly, some of the strongest signals of association observed in these studies arise from non-coding regions located in very large introns or far away from any annotated genes, raising the possibility that these regions are involved in the etiology of the disease through some unidentified regulatory mechanisms. These findings highlight the importance of better understanding the mechanisms leading to inter-individual differences in gene expression in humans. Most of the existing approaches developed to identify common regulatory polymorphisms are based on linkage/association mapping of gene expression to genotypes. However, these methods have some limitations, notably their cost and the requirement of extensive genotyping information from all the individuals studied, which limits their applications to a specific cohort or tissue. Here we describe a robust and high-throughput method to directly measure differences in allelic expression for a large number of genes using the Illumina Allele-Specific Expression BeadArray platform and quantitative sequencing of RT-PCR products. We show that this approach allows reliable identification of differences in the relative expression of the two alleles larger than 1.5-fold (i.e., deviations of the allelic ratio larger than 60∶40) and offers several advantages over the mapping of total gene expression, particularly for studying humans or outbred populations. Our analysis of more than 80 individuals for 2,968 SNPs located in 1,380 genes confirms that differential allelic expression is a widespread phenomenon affecting the expression of 20% of human genes and shows that our method successfully captures expression differences resulting from both genetic and epigenetic cis-acting mechanisms. PMID:18454203
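    The 1.5-fold threshold corresponds to a 60:40 allelic ratio, since 60/40 = 1.5. A toy check of that cutoff on hypothetical allele-specific read counts:

```python
# Flag differential allelic expression when one allele is expressed at
# least `fold` times the other (1.5-fold corresponds to a 60:40 ratio).
# Counts here are hypothetical, purely to illustrate the cutoff.
def allelic_imbalance(ref_count, alt_count, fold=1.5):
    hi, lo = max(ref_count, alt_count), min(ref_count, alt_count)
    return hi / lo >= fold

print(allelic_imbalance(60, 40))   # True: exactly 1.5-fold
print(allelic_imbalance(55, 45))   # False: below the detection threshold
```

Real pipelines would additionally model sampling noise in the read counts before calling imbalance.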

  10. New results on the robust stability of PID controllers with gain and phase margins for UFOPTD processes.

    PubMed

    Jin, Q B; Liu, Q; Huang, B

    2016-03-01

    This paper considers the problem of determining all the robust PID (proportional-integral-derivative) controllers in terms of the gain and phase margins (GPM) for open-loop unstable first order plus time delay (UFOPTD) processes. It is the first time that the feasible ranges of the GPM specifications provided by a PID controller are given for UFOPTD processes. A gain and phase margin tester is used to modify the original model, and the ranges of the margin specifications are derived such that the modified model can be stabilized by a stabilizing PID controller based on the Hermite-Biehler Theorem. Furthermore, we obtain all the controllers satisfying a given margin specification. Simulation studies show how to use the results to design a robust PID controller. PMID:26708658
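    Margin specifications of this kind can be checked numerically from the open-loop frequency response. The sketch below reads off a phase margin for an illustrative PID and UFOPTD plant; the gains and plant parameters are assumptions, not values from the paper, and for an unstable plant a positive margin alone does not certify closed-loop stability:

```python
# Numeric phase-margin check for a PID loop around an unstable
# first-order-plus-time-delay plant G(s) = K e^{-L s} / (T s - 1).
# Controller gains and plant parameters are illustrative assumptions.
import cmath
import math

def open_loop(w, kp, ki, kd, K=1.0, T=1.0, L=0.2):
    s = 1j * w
    controller = kp + ki / s + kd * s            # ideal PID
    plant = K * cmath.exp(-L * s) / (T * s - 1)  # UFOPTD plant
    return controller * plant

def phase_margin_deg(kp, ki, kd, freqs):
    # phase margin = 180 deg + open-loop phase at the first gain crossover
    for w in freqs:
        if abs(open_loop(w, kp, ki, kd)) <= 1.0:
            return 180.0 + math.degrees(cmath.phase(open_loop(w, kp, ki, kd)))
    return None   # no gain crossover found on the grid

freqs = [0.01 * k for k in range(1, 2000)]
pm = phase_margin_deg(3.0, 1.0, 0.5, freqs)
print(pm)
```

A gain-phase margin tester generalizes this idea by inserting a virtual block A·e^{-jφ} into the loop and asking for which (A, φ) the loop remains stabilizable.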

  11. Identifying robust large-scale flood risk mitigation strategies: A quasi-2D hydraulic model as a tool for the Po river

    NASA Astrophysics Data System (ADS)

    Castellarin, Attilio; Domeneghetti, Alessio; Brath, Armando

    2011-01-01

    This paper focuses on the identification of large-scale flood risk mitigation strategies for the middle-lower reach of River Po, the longest Italian river and the largest in terms of streamflow. This study develops and tests the applicability of a quasi-2D hydraulic model to aid the identification of large-scale flood risk mitigation strategies relative to a 500-year flood event other than levee heightening, which is neither technically viable nor economically conceivable for the case study. Different geometrical configurations of the embankment system are considered and modelled in the study: no overtopping; overtopping and levee breaching; overtopping without levee breaching. The quasi-2D model proved to be a very useful tool for (1) addressing the problem of flood risk mitigation from a global perspective (i.e., the entire middle-lower reach of River Po), (2) identifying critical reaches, inundation areas and corresponding overflow volumes, and (3) generating reliable boundary conditions for smaller scale studies aimed at further analyzing the hypothesized flood mitigation strategies using more complex modelling tools (e.g., fully 2D approaches). These are crucial tasks for institutions and public bodies in charge of formulating robust flood risk management strategies for large European rivers, in the light of the recent Directive 2007/60/EC on the assessment and management of flood risks (European Parliament, 2007).

  12. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
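    The leveling idea, a trend estimate in which outliers are automatically downweighted, can be illustrated on a single scan line with iteratively reweighted least squares and Huber weights (a simplification of the paper's 2-d local regression):

```python
# Robust leveling sketch: fit a line to one scan line with iteratively
# reweighted least squares and Huber weights, so spike artifacts barely
# influence the estimated trend that gets subtracted.
def robust_line_fit(y, iters=10, c=1.345):
    x = list(range(len(y)))
    w = [1.0] * len(y)
    a = b = 0.0
    for _ in range(iters):
        # weighted least-squares line via the 2x2 normal equations
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        b = (sw * sxy - sx * sy) / det
        a = (sy - b * sx) / sw
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # robust scale from the median absolute residual
        scale = sorted(abs(r) for r in resid)[len(y) // 2] / 0.6745
        if scale < 1e-12:
            break                      # residuals negligible: converged
        w = [1.0 if abs(r) <= c * scale else c * scale / abs(r) for r in resid]
    return a, b

# a tilted scan line with one spike artifact at pixel 7
scan = [0.5 + 0.1 * i for i in range(20)]
scan[7] += 5.0
a, b = robust_line_fit(scan)
leveled = [v - (a + b * i) for i, v in enumerate(scan)]
```

The constant 1.345 is the usual Huber tuning constant giving roughly 95% efficiency under Gaussian noise; after leveling, the spike survives while the tilt is removed.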

  13. Mechanisms for Robust Cognition.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A

    2015-08-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. PMID:25352094

  14. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice.

    PubMed

    Plint, Simon; Patterson, Fiona

    2010-06-01

    The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, and is perceived to be fair by candidates and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine markable short listing assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control rather than the imposition of change without perceived legitimate authority. PMID:20547597

  15. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited by small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The usage of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
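    The safeguarded Newton iteration highlighted in the abstract can be sketched generically: take the full Newton step, then backtrack until the residual actually decreases. This toy 1-D example stands in for the coupled hydrothermal system:

```python
# Generic Newton-Raphson iteration with a backtracking line search:
# halve the step until the residual decreases, which is the safeguard
# the abstract credits for robust convergence.
def newton_line_search(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / df(x)                 # full Newton step
        t = 1.0
        while abs(f(x + t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5                       # backtrack: demand residual decrease
        x += t * step
    return x

# classic test problem: root of x^3 - 2x - 5 near 2.09455
root = newton_line_search(lambda x: x**3 - 2*x - 5,
                          lambda x: 3*x**2 - 2, 2.0)
print(root)
```

In the monolithic setting, x becomes the full vector of pressures and temperatures and the division by df(x) becomes a PETSc linear solve with the coupled Jacobian.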

  16. Robustness and assortativity for diffusion-like processes in scale-free networks

    NASA Astrophysics Data System (ADS)

    D'Agostino, G.; Scala, A.; Zlatić, V.; Caldarelli, G.

    2012-03-01

    By analysing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of the assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before epidemic/failure spreads. Moreover, we study by computer simulations the sandpile cascade model, a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemic/failures, degree-targeted immunization policies increase their resilience to systemic risk.
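    The spectral threshold referred to above is the standard SIS-epidemic result: the threshold scales as 1/λmax of the adjacency matrix, so a smaller leading eigenvalue means a harder-to-invade, more robust network. A toy check (generic result, not the paper's code):

```python
# Leading adjacency eigenvalue by power iteration; the SIS epidemic
# threshold then scales as 1 / lambda_max.
def lambda_max(adj, iters=500):
    # iterate on (A + I): the +I shift avoids oscillation on bipartite
    # graphs, whose adjacency spectrum is symmetric about zero
    n = len(adj)
    v = [1.0] * n
    norm = 1.0
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return norm - 1.0

# star graph on 5 nodes: lambda_max = 2, so the threshold is about 1/2
star = [[0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0]]
print(1.0 / lambda_max(star))
```

Assortative rewiring tends to raise λmax for a fixed degree sequence, which is one way to see the lower epidemic threshold of assortative networks.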

  18. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
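    The outlier-sparsity idea can be illustrated on the simplest possible model: observations equal to an unknown level plus a sparse outlier vector, estimated by alternating a mean update with soft-thresholding of the residuals. This toy stands in for the paper's full l1-regularized kriged Kalman filter:

```python
# Minimal l1/outlier-sparsity sketch: model y_i = m + o_i + noise with a
# sparse outlier vector o, and solve
#   min_{m,o} 0.5*||y - m - o||^2 + lam*||o||_1
# by alternating the mean update with soft-thresholding of the residuals.
def soft(x, t):
    # soft-thresholding operator, the proximal map of the l1 norm
    return (abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0

def robust_mean(y, lam=1.0, iters=100):
    o = [0.0] * len(y)
    m = 0.0
    for _ in range(iters):
        m = sum(yi - oi for yi, oi in zip(y, o)) / len(y)
        o = [soft(yi - m, lam) for yi in y]
    return m, o

y = [1.0, 1.1, 0.9, 1.0, 8.0]        # last reading is anomalous
m, o = robust_mean(y)
print(round(m, 2))                    # 1.25: barely pulled by the outlier
print([i for i, oi in enumerate(o) if oi != 0])   # flagged outliers: [4]
```

The KKF replaces the scalar mean with a kriged spatial field and a Kalman temporal update, but the outlier vector is identified by the same sparsity-inducing penalty.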

  19. Process description language: an experiment in robust programming for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Spooner, Natalie R.; Creak, G. Alan

    1998-10-01

    Maintaining stable, robust, and consistent software is difficult in the face of the increasing rate of change of customers' preferences, materials, manufacturing techniques, computer equipment, and other characteristic features of manufacturing systems. It is argued that software is commonly difficult to keep up to date because many of the implications of these changing features on software details are obscure. A possible solution is to use a software generation system in which the transformation of system properties into system software is made explicit. The proposed generation system stores the system properties, such as machine properties, product properties and information on manufacturing techniques, in databases. As a result this information, on which system control is based, can also be made available to other programs. In particular, artificial intelligence programs such as fault diagnosis programs can benefit from using the same information as the control system, rather than a separate database which must be developed and maintained separately to ensure consistency. Experience in developing a simplified model of such a system is presented.

  20. Robust quantitative scratch assay

    PubMed Central

    Vargas, Andrea; Angeli, Marc; Pastrello, Chiara; McQuaid, Rosanne; Li, Han; Jurisicova, Andrea; Jurisica, Igor

    2016-01-01

    The wound healing assay (or scratch assay) is a technique frequently used to quantify the dependence of cell motility, a central process in tissue repair and evolution of disease, on various treatment conditions. However, processing the resulting data is a laborious task due to its high throughput and variability across images. The Robust Quantitative Scratch Assay (RQSA) algorithm introduces statistical outputs whereby migration rates are estimated, cellular behaviour is distinguished and outliers are identified among groups of unique experimental conditions. Furthermore, the RQSA decreased measurement errors and increased accuracy of the wound boundary at comparable processing times compared to a previously developed method (TScratch). Availability and implementation: The RQSA is freely available at: http://ophid.utoronto.ca/RQSA/RQSA_Scripts.zip. The image sets used for training and validation and the results are available at: (http://ophid.utoronto.ca/RQSA/trainingSet.zip, http://ophid.utoronto.ca/RQSA/validationSet.zip, http://ophid.utoronto.ca/RQSA/ValidationSetResults.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975Results.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip). Supplementary Material is provided for a detailed description of the development of the RQSA. Contact: juris@ai.utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26722119
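    A generic scratch-assay statistic of the kind the RQSA summarizes, not the RQSA algorithm itself, is the least-squares slope of wound area against time:

```python
# Migration rate as the least-squares slope of wound area versus time.
# The frame times and pixel areas below are made-up illustrative values.
def migration_rate(times, areas):
    n = len(times)
    tbar = sum(times) / n
    abar = sum(areas) / n
    num = sum((t - tbar) * (a - abar) for t, a in zip(times, areas))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den          # area change per unit time (negative = closing)

hours = [0, 6, 12, 18, 24]
area_px = [10000, 8500, 7000, 5500, 4000]
print(migration_rate(hours, area_px))   # -250.0 px^2 per hour
```

Comparing such per-well slopes across replicates is where robust statistics matter: a single mis-segmented frame can otherwise dominate the estimate.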

  1. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
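    The Laplacian pyramid mentioned above decomposes a signal into band-pass detail levels plus a coarse residual, and is exactly invertible. A minimal 1-D sketch with pair-averaging downsampling (an illustrative choice, not NASA's implementation):

```python
# Minimal 1-D Laplacian pyramid: each level stores the detail lost by
# downsampling, so summing details back up reconstructs the signal exactly.
def downsample(x):
    # average adjacent pairs (illustrative low-pass + decimate)
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x, n):
    # nearest-neighbour expansion back to length n
    out = []
    for v in x:
        out.extend([v, v])
    return out[:n]

def laplacian_pyramid(x, levels):
    pyramid = []
    for _ in range(levels):
        coarse = downsample(x)
        detail = [a - b for a, b in zip(x, upsample(coarse, len(x)))]
        pyramid.append(detail)
        x = coarse
    pyramid.append(x)              # coarsest residual
    return pyramid

def reconstruct(pyramid):
    x = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        x = [d + u for d, u in zip(detail, upsample(x, len(detail)))]
    return x

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
pyr = laplacian_pyramid(signal, 2)
print(reconstruct(pyr) == signal)   # True: the pyramid is perfectly invertible
```

For coding, the detail levels are cheap to compress because most of their entries are near zero, which is the efficiency argument behind combining image gathering with pyramid coding.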

  2. Identification, characterization and HPLC quantification of process-related impurities in Trelagliptin succinate bulk drug: Six identified as new compounds.

    PubMed

    Zhang, Hui; Sun, Lili; Zou, Liang; Hui, Wenkai; Liu, Lei; Zou, Qiaogen; Ouyang, Pingkai

    2016-09-01

    A sensitive, selective and stability indicating reversed-phase LC method was developed for the determination of process related impurities of Trelagliptin succinate in bulk drug. Six impurities were identified by LC-MS. Further, their structures were characterized and confirmed utilizing LC-MS/MS, IR and NMR spectral data. The most probable mechanisms for the formation of these impurities were also discussed. To the best of our knowledge, six structures among these impurities are new compounds and have not been reported previously. Superior separation was achieved on an InertSustain C18 (250 mm × 4.6 mm, 5 μm) column in a gradient mixture of acetonitrile and 20 mmol potassium dihydrogen phosphate with 0.25% triethylamine (pH adjusted to 3.5 with phosphoric acid). The method was validated as per regulatory guidelines to demonstrate system suitability, specificity, sensitivity, linearity, robustness, and stability. PMID:27209451

  3. Nuclear robustness of the r process in neutron-star mergers

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, Joel de Jesús; Wu, Meng-Ru; Langanke, Karlheinz; Martínez-Pinedo, Gabriel; Bauswein, Andreas; Janka, Hans-Thomas

    2015-11-01

    We have performed r-process calculations for matter ejected dynamically in neutron star mergers, based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamic simulation with a total ejected mass of ~1.7 × 10⁻³ M⊙. Our calculations consider an extended nuclear network, including spontaneous, β- and neutron-induced fission, and adopt fission yield distributions from the ABLA code. In particular, we have studied the sensitivity of the r-process abundances to nuclear masses by using different models. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion allowing for all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundance (the second and third peaks, the rare-earth peak, and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130. In all simulations, we find that the second peak around A ~ 130 is produced by the fission yields of the material that piles up in nuclei with A ≳ 250 due to the substantially longer β-decay half-lives found in this region. The third peak around A ~ 195 is generated in a competition between neutron captures and β decays during r-process freeze-out. The remaining trajectories, which contribute 10% by mass to the total integrated abundances, follow such a fast expansion that the r process does not use all the neutrons. This also leads to a larger variation of abundances among trajectories, as fission does not dominate the r-process dynamics. The resulting abundances are in between those associated to the r and s processes. The total integrated abundances are dominated by

  4. The Robustness of Proofreading to Crowding-Induced Pseudo-Processivity in the MAPK Pathway

    PubMed Central

    Ouldridge, Thomas E.; ten Wolde, Pieter Rein

    2014-01-01

    Double phosphorylation of protein kinases is a common feature of signaling cascades. This motif may reduce cross-talk between signaling pathways because the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be pseudo-processive in the crowded cellular environment, since rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e., when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signaling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif. PMID:25418311
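
    As a toy illustration of the proofreading intuition this abstract builds on (not the authors' model), suppose each independent enzyme-substrate encounter discriminates the intended substrate from an off-target one by a hypothetical factor f. A fully distributive double phosphorylation requires two independent encounters and so squares the discrimination, while a pseudo-processive second step, completed before the enzyme leaves, bypasses the second check:

```python
# Toy sketch with a hypothetical per-encounter discrimination factor f
f = 10.0

single_phosphorylation = f       # one discriminating encounter
distributive_double    = f ** 2  # two independent encounters: proofreading squares f
pseudo_processive      = f       # rebinding skips the second discrimination step

print(single_phosphorylation, distributive_double, pseudo_processive)  # 10.0 100.0 10.0
```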

  5. Evaluation of robust wave image processing methods for magnetic resonance elastography.

    PubMed

    Li, Bing Nan; Shan, Xiang; Xiang, Kui; An, Ning; Xu, Jinzhang; Huang, Weimin; Kobayashi, Etsuko

    2014-11-01

    Magnetic resonance elastography (MRE) is a promising modality for in vivo quantification and visualization of soft tissue elasticity. It involves three processing stages: (1) external excitation, (2) wave imaging and (3) elasticity reconstruction. One of the important issues to be addressed in MRE is wave image processing and enhancement. In this study we approach it in three ways: phase unwrapping, directional filtering and noise suppression. The relevant solutions are reviewed briefly; some were implemented and evaluated on both simulated and experimental MRE datasets. The results confirm that wave image enhancement is indispensable before carrying out MRE elasticity reconstruction. PMID:25222934
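
    Of the three enhancement steps, phase unwrapping is the easiest to illustrate: MRE wave images are phase maps wrapped into (−π, π], and the unwrapping step removes the 2π jumps. A minimal one-dimensional sketch on synthetic data (using NumPy's `unwrap`, not any of the specific methods evaluated in the paper):

```python
import numpy as np

# Synthetic 1-D displacement phase ramp exceeding pi, as a wave image would produce
true_phase = np.linspace(0, 6 * np.pi, 100)
wrapped = np.angle(np.exp(1j * true_phase))  # wrap into (-pi, pi]
unwrapped = np.unwrap(wrapped)               # remove the 2*pi discontinuities

print(np.allclose(unwrapped, true_phase, atol=1e-8))  # True
```

    This only works when neighbouring samples differ by less than π, which is why noise suppression and directional filtering matter for real MRE data.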

  6. Robust carrier formation process in low-band gap organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Yonezawa, Kouhei; Kamioka, Hayato; Yasuda, Takeshi; Han, Liyuan; Moritomo, Yutaka

    2013-10-01

    By means of femtosecond time-resolved spectroscopy, we investigated the carrier formation process as a function of film morphology and temperature (T) in the highly efficient organic photovoltaic polymer poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b']dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  7. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  8. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The work also studied the cogency and acceptability of Taguchi's methodology in the manufacture of throat back up liners. The quality characteristic identified was back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back wall temperature of the throat back up liners. The enhancement in performance of the liners was observed by carrying out oxy-acetylene tests, and the influence of back wall temperature on liner performance was verified by a ground firing test.
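
    The smaller-the-better signal-to-noise ratio used in such Taguchi analyses has a standard closed form, SN = −10 log10((1/n) Σ yᵢ²), computed per trial of the orthogonal array; the factor levels maximizing SN are then selected. A minimal sketch with hypothetical back wall temperatures (illustrative values, not the paper's data):

```python
import numpy as np

def sn_smaller_is_better(y):
    # Taguchi S/N ratio for "smaller the better" quality characteristics;
    # a larger (less negative) S/N ratio indicates a better trial
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical repeated back wall temperature responses for two trials
trial_a = [100.0, 100.0, 100.0]
trial_b = [80.0, 90.0, 85.0]

print(round(sn_smaller_is_better(trial_a), 2))  # -40.0
print(sn_smaller_is_better(trial_b) > sn_smaller_is_better(trial_a))  # True: cooler wall wins
```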

  9. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all plans and by defining metrics to aid plan assessment. Additionally, an example of how to clinically use the resulting robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified and the advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed identification of a specific patient who may have benefited from a more individualized treatment; a new beam arrangement proved preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. This process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. Such protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties; for these cases, different beam start conditions may improve plan robustness to set-up and range uncertainties.
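
    The error-bar dose distribution can be understood as a per-voxel spread of dose across recalculations of the plan under the error scenarios. A minimal sketch with a hypothetical 2 × 2 dose grid and three scenarios, using the max-minus-min spread as the error bar (the paper's exact ebDD definition may differ, e.g. using a standard deviation):

```python
import numpy as np

# Hypothetical dose grids (Gy) recalculated under range/set-up error scenarios;
# shape = (n_scenarios, nx, ny)
scenarios = np.array([
    [[60.0, 58.0], [20.0, 5.0]],
    [[59.0, 57.0], [25.0, 6.0]],
    [[61.0, 59.5], [18.0, 4.0]],
])

# Per-voxel max-min spread across scenarios: large values flag non-robust regions
ebdd = scenarios.max(axis=0) - scenarios.min(axis=0)
print(ebdd.tolist())  # [[2.0, 2.5], [7.0, 2.0]]
```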

  10. Filling the gaps: A robust description of adhesive birth-death-movement processes

    NASA Astrophysics Data System (ADS)

    Johnston, Stuart T.; Baker, Ruth E.; Simpson, Matthew J.

    2016-04-01

    Existing continuum descriptions of discrete adhesive birth-death-movement processes provide accurate predictions of the average discrete behavior for limited parameter regimes. Here we present an alternative continuum description in terms of the dynamics of groups of contiguous occupied and vacant lattice sites. Our method provides more accurate predictions, is valid in parameter regimes that could not be described by previous continuum descriptions, and provides information about the spatial clustering of occupied sites. Furthermore, we present a simple analytic approximation of the spatial clustering of occupied sites at late time, when the system reaches its steady-state configuration.

  11. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradients), the radiation through cloud coverage (the vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements in ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010), a 4-ICE (cloud ice, snow, graupel and hail) scheme, a spectral bin microphysics scheme, and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we present high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  12. PRIME--A Process for Remediating Identified Marginal Education Candidates Revisited

    ERIC Educational Resources Information Center

    Riley, Gena; Notar, Charles; Owens, Lynetta; Harper, Cynthia

    2011-01-01

    The article traces the history of PRIME--A Process for Remediating Identified Marginal Education Candidates since 1996. The philosophy has not changed from its inception. The procedure identifies individuals who may be at risk of not successfully completing their programs or who possess those traits associated with rapid attrition in the teaching…

  13. Uncertainties and robustness of the ignition process in type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Iapichino, L.; Lesaffre, P.

    2010-03-01

    Context. It is widely accepted that the onset of explosive carbon burning in the core of a carbon-oxygen white dwarf (CO WD) triggers the ignition of a type Ia supernova (SN Ia). The features of the ignition are among the few free parameters of SN Ia explosion theory. Aims: We explore two issues relevant to the ignition process: first, the ignition is studied in WD models resulting from different accretion histories; second, we estimate how a different reaction rate for C-burning can affect the ignition. Methods: Two-dimensional hydrodynamical simulations of temperature perturbations in the WD core ("bubbles") are performed with the FLASH code. In order to evaluate the impact of the C-burning reaction rate on the WD model, the evolution code FLASH_THE_TORTOISE from Lesaffre et al. (2006, MNRAS, 368, 187) is used. Results: In different WD models a key role is played by the different gravitational acceleration in the progenitor's core. As a consequence, ignition is disfavored at a large distance from the WD center in models with a larger central density, resulting from the evolution of initially more massive progenitors. Changes in the C reaction rate at T ≲ 5 × 10^8 K slightly influence the ignition density in the WD core, while the ignition temperature is almost unaffected. Recent measurements of new resonances in the C-burning reaction rate (Spillane et al. 2007, Phys. Rev. Lett., 98, 122501) do not significantly affect the core conditions of the WD. Conclusions: This simple analysis, performed on the features of the temperature perturbations in the WD core, should be extended with state-of-the-art numerical tools for studying turbulent convection and ignition in the WD core. Future measurements of the C-burning reaction cross section at low energy, though certainly useful, are not expected to change our current understanding of the ignition process dramatically.

  14. Testing the robustness, to changes in process, of a scaling relationship between soil grading and geomorphology using a pedogenesis model

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Welivitiya, W. D. D. P.; Hancock, G. R.

    2014-12-01

    Using the mARM1D pedogenesis model (which simulates armouring and weathering processes on a hillslope), previous work by Cohen found a strong log-log linear relationship between the size distribution of the soil (e.g. d50), the contributing area and the local slope. However, Cohen performed his simulations using only one set of grading data, one climate and one geology, and over a relatively limited range of area and slope combinations. A model based on mARM, called SSSPAM, which generalises the modelled processes, has been developed and calibrated against mARM. This calibration was used as the starting point for a parametric study of the robustness of the area-slope-d50 relationship to changes in environmental and weathering conditions, different initial soil gradings, different geology, and a broader range of area and slope combinations. This parametric study assessed the influence of changes in the model parameters on the soil evolution results. The simulations confirmed the robustness of the area-slope-d50 relationship discovered by Cohen using mARM. We also demonstrate that the area-slope-diameter relationship holds not only for d50 but for the entire grading range (e.g. d10, d90). The results strengthen our confidence in the generality of the log-log linear scaling relationship between area, slope and soil grading. The paper will present the results of our parametric study and will highlight the potential uses of the relationship for digital soil mapping and better characterization of soils in environmental models.
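
    A log-log linear scaling relationship of this kind, d50 = c · Aᵅ · Sᵝ, is fitted by ordinary least squares in log space. The sketch below uses synthetic data generated with hypothetical exponents (not SSSPAM output) purely to show the recovery of the scaling parameters:

```python
import numpy as np

# Synthetic soil-grading data following d50 = c * A^alpha * S^beta exactly,
# with hypothetical exponents; real values would come from model output
rng = np.random.default_rng(0)
area = 10 ** rng.uniform(2, 6, 50)       # contributing area (m^2)
slope = 10 ** rng.uniform(-3, -1, 50)    # local slope (-)
d50 = 2.0 * area ** -0.2 * slope ** 0.4  # median grain size (mm)

# Multiple linear regression in log-log space recovers the exponents
X = np.column_stack([np.ones(50), np.log10(area), np.log10(slope)])
coef, *_ = np.linalg.lstsq(X, np.log10(d50), rcond=None)
print(np.round(coef[1:], 3).tolist())  # [-0.2, 0.4]
```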

  15. Robust Suppression of HIV Replication by Intracellularly Expressed Reverse Transcriptase Aptamers Is Independent of Ribozyme Processing

    PubMed Central

    Lange, Margaret J; Sharma, Tarun K; Whatley, Angela S; Landon, Linda A; Tempesta, Michael A; Johnson, Marc C; Burke, Donald H

    2012-01-01

    RNA aptamers that bind human immunodeficiency virus 1 (HIV-1) reverse transcriptase (RT) also inhibit viral replication, making them attractive as therapeutic candidates and potential tools for dissecting viral pathogenesis. However, it is not well understood how aptamer-expression context and cellular RNA pathways govern aptamer accumulation and net antiviral bioactivity. Using a previously-described expression cassette in which aptamers were flanked by two “minimal core” hammerhead ribozymes, we observed only weak suppression of pseudotyped HIV. To evaluate the importance of the minimal ribozymes, we replaced them with extended, tertiary-stabilized hammerhead ribozymes with enhanced self-cleavage activity, in addition to noncleaving ribozymes with active site mutations. Both the active and inactive versions of the extended hammerhead ribozymes increased inhibition of pseudotyped virus, indicating that processing is not necessary for bioactivity. Clonal stable cell lines expressing aptamers from these modified constructs strongly suppressed infectious virus, and were more effective than minimal ribozymes at high viral multiplicity of infection (MOI). Tertiary stabilization greatly increased aptamer accumulation in viral and subcellular compartments, again regardless of self-cleavage capability. We therefore propose that the increased accumulation is responsible for increased suppression, that the bioactive form of the aptamer is one of the uncleaved or partially cleaved transcripts, and that tertiary stabilization increases transcript stability by reducing exonuclease degradation. PMID:22948672

  16. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    DOEpatents

    Riza, Nabeel Agha; Perez, Frank

    2012-01-17

    Silicon carbide (SiC) probe designs for extreme temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme temperature sensing: measuring the collective peak-to-peak (or null-to-null) spectral spread together with the wavelength peak/null shift forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity black-body radiator, and monitoring the shift in the radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  17. Robust fetal QRS detection from noninvasive abdominal electrocardiogram based on channel selection and simultaneous multichannel processing.

    PubMed

    Ghaffari, Ali; Mollakazemi, Mohammad Javad; Atyabi, Seyyed Abbas; Niknazar, Mohammad

    2015-12-01

    The purpose of this study is to provide a new method for detecting fetal QRS complexes from the non-invasive fetal electrocardiogram (fECG) signal. Unlike most current fECG processing methods, which are based on separation of the fECG from the maternal ECG (mECG), in this study the fetal heart rate (FHR) is extracted with high accuracy without separating the fECG from the mECG. Furthermore, in this new approach thoracic channels are not necessary. These two aspects reduce the required computational operations; consequently, the proposed approach can be efficiently applied to different real-time healthcare and medical devices. In this work, a new method is presented for selecting the best channel, i.e. the one carrying the strongest fECG. Each channel is scored on two criteria: noise distribution and good fetal heartbeat visibility. Another important aspect of this study is the simultaneous and combinatorial use of the available fECG channels according to the priority given by their scores. A combination of geometric features and wavelet-based techniques was adopted to extract the FHR. Based on fetal geometric features, fECG signals were divided into three categories, and different strategies were employed to analyze each category. The method was validated using three datasets: the Noninvasive Fetal ECG Database, DaISy, and the PhysioNet/Computing in Cardiology Challenge 2013. Finally, the obtained results were compared with other studies. The adopted strategies, such as multi-resolution analysis, not separating the fECG and mECG, intelligent channel scoring and simultaneous channel use, are the factors behind the promising performance of the method. PMID:26462679

  18. A robust and representative lower bound on object processing speed in humans.

    PubMed

    Bieniek, Magdalena M; Bennett, Patrick J; Sekuler, Allison B; Rousselet, Guillaume A

    2016-07-01

    How early does the brain decode object categories? Addressing this question is critical to constrain the type of neuronal architecture supporting object categorization. In this context, much effort has been devoted to estimating face processing speed. With onsets estimated from 50 to 150 ms, the timing of the first face-sensitive responses in humans remains controversial. This controversy is due partially to the susceptibility of dynamic brain measurements to filtering distortions and analysis issues. Here, using distributions of single-trial event-related potentials (ERPs), causal filtering, statistical analyses at all electrodes and time points, and effective correction for multiple comparisons, we present evidence that the earliest categorical differences start around 90 ms following stimulus presentation. These results were obtained from a representative group of 120 participants, aged 18-81, who categorized images of faces and noise textures. The results were reliable across testing days, as determined by test-retest assessment in 74 of the participants. Furthermore, a control experiment showed similar ERP onsets for contrasts involving images of houses or white noise. Face onsets did not change with age, suggesting that face sensitivity occurs within 100 ms across the adult lifespan. Finally, the simplicity of the face-texture contrast, and the dominant midline distribution of the effects, suggest the face responses were evoked by relatively simple image properties and are not face specific. Our results provide a new lower benchmark for the earliest neuronal responses to complex objects in the human visual system. PMID:26469359

  19. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
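
    The likelihood ratio test for a change in firing structure can be sketched in its simplest form, ignoring the paper's spiking-history terms: binned spike counts are modeled as Poisson, and twice the log-likelihood gain of fitting separate rates before and after an event is compared against the χ²(1) critical value. All data below are hypothetical:

```python
import numpy as np

def poisson_loglik(counts, rate, dt):
    # Log-likelihood of binned spike counts under a homogeneous Poisson rate
    # (the count-factorial term is dropped: it cancels in the likelihood ratio)
    lam = rate * dt
    return np.sum(counts * np.log(lam) - lam)

# Hypothetical spike counts in 10 ms bins before/after a task event
dt = 0.01
before = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0] * 10)  # ~30 Hz
after  = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1] * 10)  # ~70 Hz

# Null model: one shared rate; alternative: a separate rate per window
counts = np.concatenate([before, after])
rate0 = counts.mean() / dt
r1, r2 = before.mean() / dt, after.mean() / dt
lr = 2 * (poisson_loglik(before, r1, dt) + poisson_loglik(after, r2, dt)
          - poisson_loglik(counts, rate0, dt))
print(lr > 3.84)  # exceeds the chi-square(1) 5% critical value -> True
```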

  20. Discriminating sediment archives and sedimentary processes in the arid endorheic Ejina Basin, NW China using a robust geochemical approach

    NASA Astrophysics Data System (ADS)

    Yu, Kaifeng; Hartmann, Kai; Nottebaum, Veit; Stauch, Georg; Lu, Huayu; Zeeden, Christian; Yi, Shuangwen; Wünnemann, Bernd; Lehmkuhl, Frank

    2016-04-01

    Geochemical characteristics have been intensively used to assign sediment properties to paleoclimate and provenance. Nonetheless, in particular concerning the arid context, bulk geochemistry of different sediment archives and corresponding process interpretations are hitherto elusive. The Ejina Basin, with its suite of different sediment archives, is known as one of the main sources for the loess accumulation on the Chinese Loess Plateau. In order to understand mechanisms along this supra-regional sediment cascade, it is crucial to decipher the archive characteristics and formation processes. To address these issues, five profiles in different geomorphological contexts were selected. Analyses of X-ray fluorescence and diffraction, grain size, optically stimulated luminescence and radiocarbon dating were performed. Robust factor analysis was applied to reduce the attribute space to the process space of sedimentation history. Five sediment archives from three lithologic units exhibit geochemical characteristics as follows: (i) aeolian sands have high contents of Zr and Hf, whereas only Hf can be regarded as a valuable indicator to discriminate the coarse sand proportion; (ii) sandy loess has high Ca and Sr contents which both exhibit broad correlations with the medium to coarse silt proportions; (iii) lacustrine clays have high contents of felsic, ferromagnesian and mica source elements e.g., K, Fe, Ti, V, and Ni; (iv) fluvial sands have high contents of Mg, Cl and Na which may be enriched in evaporite minerals; (v) alluvial gravels have high contents of Cr which may originate from nearby Cr-rich bedrock. Temporal variations can be illustrated by four robust factors: weathering intensity, silicate-bearing mineral abundance, saline/alkaline magnitude and quasi-constant aeolian input. In summary, the bulk-composition of the late Quaternary sediments in this arid context is governed by the nature of the source terrain, weak chemical weathering, authigenic minerals

  1. A robust post-processing method to determine skin friction in turbulent boundary layers from the velocity profile

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2015-04-01

    The present paper describes a method to extrapolate the mean wall shear stress, , and the accurate relative position of a velocity probe with respect to the wall, , from an experimentally measured mean velocity profile in a turbulent boundary layer. Validation is performed against experimental and direct numerical simulation data of turbulent boundary layer flows with independent measurement of the shear stress. The set of parameters which minimizes the residual error with respect to the canonical description of the boundary layer profile is taken as the solution. Several methods are compared, testing different descriptions of the canonical mean velocity profile (with and without an overshoot over the logarithmic law) and different definitions of the residual function of the optimization. The von Kármán constant is used as a parameter of the fitting process in order to avoid any hypothesis regarding its value, which may be affected by different initial or boundary conditions of the flow. Results show that the best method provides an accuracy of for the estimation of the friction velocity and for the position of the wall. The robustness of the method is tested including unconverged near-wall measurements, pressure gradient, and a reduced number of points; the importance of the location of the first point is also tested. The method presents high robustness even in highly distorted flows, keeping the aforementioned accuracies if one acquires at least one data point in . The wake component and the thickness of the boundary layer are also simultaneously extrapolated from the mean velocity profile. This results in the first study, to the knowledge of the authors, where a five-parameter fitting is carried out without any assumption on the von Kármán constant or on the limits of the logarithmic layer beyond its existence.
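
    The core of such a fitting procedure can be sketched as follows. This reduced example (not the authors' five-parameter method) fits only the friction velocity and the von Kármán constant to synthetic log-layer data, holding the intercept B and the wall position fixed so that the fit stays identifiable from log-layer points alone; the full method also fits the wall offset, B and the wake component:

```python
import numpy as np

nu = 1.5e-5                       # kinematic viscosity of air (m^2/s)
B = 4.5                           # log-law intercept, held fixed in this sketch
y = np.geomspace(1e-3, 5e-2, 30)  # probe positions inside the log layer (m)

# Synthetic "measured" profile from known parameters
u_tau_true, kappa_true = 0.4, 0.39
U = u_tau_true * (np.log(y * u_tau_true / nu) / kappa_true + B)

# Brute-force residual minimisation over a (u_tau, kappa) grid,
# with the von Karman constant left free as in the paper
u_grid = np.linspace(0.3, 0.5, 201)
k_grid = np.linspace(0.35, 0.45, 101)
best = min(
    ((np.sum((U - u * (np.log(y * u / nu) / k + B)) ** 2), u, k)
     for u in u_grid for k in k_grid),
    key=lambda t: t[0],
)
print(round(best[1], 3), round(best[2], 3))  # 0.4 0.39
```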

  2. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    PubMed Central

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A−1 under 70% strain, is the largest to date, and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  3. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    NASA Astrophysics Data System (ADS)

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency - 70 cd A-1 under 70% strain - is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  4. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process.

    PubMed

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency - 70 cd A(-1) under 70% strain - is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  5. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  6. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  7. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  8. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  9. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  10. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  11. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false What processes are to be implemented to identify covered persons? 1010.300 Section 1010.300 Employees' Benefits OFFICE OF THE ASSISTANT SECRETARY FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority...

  12. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging.

    PubMed

    Schiller, Bastian; Gianotti, Lorena R R; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-03-01

    Why do people take longer to associate the word "love" with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition. PMID:26903643
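For readers unfamiliar with the IAT effect discussed above, it is simply the mean response-time slowdown in the incongruent condition relative to the congruent one; a minimal sketch with invented reaction times:

```python
import statistics

# Hypothetical reaction times (ms) for one participant; illustrative numbers only.
congruent   = [612, 588, 640, 601, 595]
incongruent = [701, 688, 730, 694, 710]

# The IAT effect is the slowdown in the incongruent condition.
iat_effect = statistics.mean(incongruent) - statistics.mean(congruent)
print(round(iat_effect, 1))  # 97.4
```

The study's contribution is to decompose this single number into the durations of the underlying mental processes via electrical neuroimaging.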

  13. Pilot-scale investigation of the robustness and efficiency of a copper-based treated wood wastes recycling process.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Gastonguay, Louis; Morris, Paul; Janin, Amélie; Reynier, Nicolas

    2013-10-15

    The disposal of metal-bearing treated wood wastes is becoming an environmental challenge. An efficient recycling process based on sulfuric acid leaching has been developed to remove metals from copper-based treated wood chips. This study explored the robustness of this technology in removing metals from copper-based treated wood wastes at a pilot plant scale (130-L reactor tank). After 3 × 2 h leaching steps followed by 3 × 7 min rinsing steps, up to 97.5% of As, 87.9% of Cr, and 96.1% of Cu were removed from CCA-treated wood wastes with different initial metal loadings (>7.3 kg m(-3)), and more than 94.5% of Cu was removed from ACQ-, CA- and MCQ-treated wood. The treatment of effluents by precipitation-coagulation was highly efficient, removing more than 93% of the As, Cr, and Cu contained in the effluent. The economic analysis included operating costs, indirect costs, and revenues related to remediated wood sales. It concluded that CCA-treated wood wastes remediation can lead to a benefit of 53.7 US$ t(-1) or a cost of 35.5 US$ t(-1), and that ACQ-, CA- and MCQ-treated wood wastes recycling leads to benefits ranging from 9.3 to 21.2 US$ t(-1). PMID:23954815

  14. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  15. On the estimation of robustness and filtering ability of dynamic biochemical networks under process delays, internal parametric perturbations and external disturbances.

    PubMed

    Chen, Bor-Sen; Chen, Po-Wei

    2009-12-01

    Inherently, biochemical regulatory networks suffer from process delays, internal parametric perturbations as well as external disturbances. Robustness is the property to maintain the functions of intracellular biochemical regulatory networks despite these perturbations. In this study, system and signal processing theories are employed for measurement of robust stability and filtering ability of linear and nonlinear time-delay biochemical regulatory networks. First, based on Lyapunov stability theory, the robust stability of a biochemical network is measured for the tolerance of additional process delays and additive internal parameter fluctuations. Then the filtering ability of attenuating additive external disturbances is estimated for time-delay biochemical regulatory networks. In order to overcome the difficulty of solving the Hamilton-Jacobi inequality (HJI), the global linearization technique is employed to simplify the measurement procedure to a simple linear matrix inequality (LMI) method. Finally, an example is given in silico to illustrate how to measure the robust stability and filtering ability of a nonlinear time-delay perturbative biochemical network. This robust stability and filtering ability measurement for biochemical networks has potential application to synthetic biology, gene therapy and drug design. PMID:19788895
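The Lyapunov-based stability measurement can be illustrated in miniature. The sketch below checks a two-state linear system dx/dt = Ax (the matrix is illustrative, not from the paper): the system is stable if the Lyapunov equation AᵀP + PA = −Q admits a positive definite P.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearized network dynamics dx/dt = A x (illustrative matrix, not from the paper).
A = np.array([[-2.0, 1.0],
              [ 0.5, -3.0]])

# Solve A^T P + P A = -Q with Q = I; if P is positive definite,
# V(x) = x^T P x is a Lyapunov function and the system is stable.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

eigvals = np.linalg.eigvalsh(P)
stable = bool(np.all(eigvals > 0))
print(stable)  # True
```

The paper's actual measure extends this idea to nonlinear time-delay networks via global linearization, turning the HJI into a family of such LMI-type conditions.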

  16. idTarget: a web server for identifying protein targets of small chemical molecules with robust scoring functions and a divide-and-conquer docking approach

    PubMed Central

    Wang, Jui-Chih; Chu, Pei-Ying; Chen, Chung-Ming; Lin, Jung-Hsin

    2012-01-01

    Identification of possible protein targets of small chemical molecules is an important step for unravelling the underlying causes of their actions at the molecular level. To this end, we construct a web server, idTarget, which can predict possible binding targets of a small chemical molecule via a divide-and-conquer docking approach, in combination with our recently developed scoring functions based on robust regression analysis and quantum chemical charge models. Affinity profiles of the protein targets are used to provide the confidence levels of prediction. The divide-and-conquer docking approach uses adaptively constructed small overlapping grids to constrain the searching space, thereby achieving better docking efficiency. Unlike previous approaches that screen against a specific class of targets or a limited number of targets, idTarget screens against nearly all protein structures deposited in the Protein Data Bank (PDB). We show that idTarget is able to reproduce known off-targets of drugs or drug-like compounds, and the suggested new targets could be prioritized for further investigation. idTarget is freely available as a web-based server at http://idtarget.rcas.sinica.edu.tw. PMID:22649057

  17. RNA-ID, a highly sensitive and robust method to identify cis-regulatory sequences using superfolder GFP and a fluorescence-based assay

    PubMed Central

    Dean, Kimberly M.; Grayhack, Elizabeth J.

    2012-01-01

    We have developed a robust and sensitive method, called RNA-ID, to screen for cis-regulatory sequences in RNA using fluorescence-activated cell sorting (FACS) of yeast cells bearing a reporter in which expression of both superfolder green fluorescent protein (GFP) and yeast codon-optimized mCherry red fluorescent protein (RFP) is driven by the bidirectional GAL1,10 promoter. This method recapitulates previously reported progressive inhibition of translation mediated by increasing numbers of CGA codon pairs, and restoration of expression by introduction of a tRNA with an anticodon that base pairs exactly with the CGA codon. This method also reproduces effects of paromomycin and context on stop codon read-through. Five key features of this method contribute to its effectiveness as a selection for regulatory sequences: the system exhibits greater than a 250-fold dynamic range, a quantitative and dose-dependent response to known inhibitory sequences, exquisite resolution that allows nearly complete physical separation of distinct populations, and a reproducible signal between different cells transformed with the identical reporter, all of which are coupled with simple ligation-independent cloning methods to create large libraries. Moreover, we provide evidence that there are sequences within a 9-nt library that cause reduced GFP fluorescence, suggesting that there are novel cis-regulatory sequences to be found even in this short sequence space. This method is widely applicable to the study of both RNA-mediated and codon-mediated effects on expression. PMID:23097427

  18. A hybrid approach identifies metabolic signatures of high-producers for Chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic signatures characteristic of high-producer clones.
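A minimal sketch of the multivariate-analysis step described above, assuming a tiny invented clone-by-metabolite matrix: standardize the measurements and project clones onto the first principal component so that clones with similar metabolic traits cluster together.

```python
import numpy as np

# Hypothetical clone-by-metabolite rate matrix (rows: clones, cols: metabolite
# fluxes); values and column choices are illustrative, not from the study.
X = np.array([
    [1.8, 0.2, -0.5],   # high-producer-like profile
    [1.6, 0.3, -0.4],
    [0.4, 1.1,  0.9],   # low-producer-like profile
    [0.3, 1.2,  1.0],
])

# Standardize each metabolite, then project onto the first principal component
# (SVD-based PCA), a common multivariate step for separating clone phenotypes.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc1_scores = Z @ Vt[0]

# Clones with similar metabolism end up adjacent in the PC1 ordering.
ranking = np.argsort(pc1_scores)
print(ranking)
```

The study's workflow goes further, combining such scores with flux-balance simulations to turn the ordering into mechanistically interpretable selection criteria.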

  19. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping.
Real-world results from over 100 airborne flights, evaluated against a DGNSS network reference, are presented; they show that the post-processed Centerpoint RTX solution agrees with the reference at the centimeter level.

  20. A Novel Mini-DNA Barcoding Assay to Identify Processed Fins from Internationally Protected Shark Species

    PubMed Central

    Fields, Andrew T.; Abercrombie, Debra L.; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D.

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran) in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA (“processed fins”). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES listed shark species. We also demonstrate that the assay can even frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789
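The matching logic behind such a barcoding assay can be sketched as a highest-identity search of a short query fragment against reference sequences. The sequences below are invented placeholders, not real COI barcodes, and real assays rely on validated reference databases and proper alignment.

```python
# Toy illustration of mini-barcode matching: assign a short query fragment to
# the reference with the highest sequence identity. Sequences are invented for
# the demo and are NOT real COI barcodes.
references = {
    "Sphyrna lewini":  "ATGGCACTATTCTACCTACACCT",
    "Lamna nasus":     "ATGGCTCTGTTCTACTTGCATCT",
    "Rhincodon typus": "ATGGCGTTATACTATCTTCACTT",
}

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

query = "ATGGCACTATTCTACCTACACCT"  # fragment recovered from a processed fin (toy data)
best = max(references, key=lambda sp: identity(query, references[sp]))
print(best)  # Sphyrna lewini
```

The mini-barcode design matters because processed fins yield only short, degraded DNA fragments, so the diagnostic region must be both short and species-specific.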

  1. A novel mini-DNA barcoding assay to identify processed fins from internationally protected shark species.

    PubMed

    Fields, Andrew T; Abercrombie, Debra L; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran) in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA ("processed fins"). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES listed shark species. We also demonstrate that the assay can even frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789

  2. Determination of all feasible robust PID controllers for open-loop unstable plus time delay processes with gain margin and phase margin specifications.

    PubMed

    Wang, Yuan-Jay

    2014-03-01

    This paper proposes a novel alternative method to graphically compute all feasible gain and phase margin specifications-oriented robust PID controllers for open-loop unstable plus time delay (OLUPTD) processes. This method is applicable to general OLUPTD processes without constraint on system order. To retain robustness for OLUPTD processes subject to positive or negative gain variations, the downward gain margin (GM(down)), upward gain margin (GM(up)), and phase margin (PM) are considered. A virtual gain-phase margin tester compensator is incorporated to guarantee the concerned system satisfies certain robust safety margins. In addition, the stability equation method and the parameter plane method are exploited to portray the stability boundary and the constant gain margin (GM) boundary as well as the constant PM boundary. The overlapping region of these boundaries is graphically determined and denotes the GM and PM specifications-oriented region (GPMSOR). Alternatively, the GPMSOR characterizes all feasible robust PID controllers which achieve the pre-specified safety margins. In particular, to achieve optimal gain tuning, the controller gains are searched within the GPMSOR to minimize the integral of the absolute error (IAE) or the integral of the squared error (ISE) performance criterion. Thus, an optimal PID controller gain set is successfully found within the GPMSOR and guarantees the OLUPTD processes with a pre-specified GM and PM as well as a minimum IAE or ISE. Consequently, both robustness and performance can be simultaneously assured. Further, the design procedures are summarized as an algorithm to help rapidly locate the GPMSOR and search an optimal PID gain set. Finally, three highly cited examples are provided to illustrate the design process and to demonstrate the effectiveness of the proposed method. PMID:24462232
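The gain and phase margins at the core of the GPMSOR construction can be computed numerically from the loop frequency response. The sketch below uses a stable first-order-plus-dead-time plant and assumed PID gains purely for illustration; the paper treats the harder open-loop unstable case, and its graphical parameter-plane machinery is not reproduced here.

```python
import numpy as np

# Frequency-domain gain/phase margin check for a loop transfer function
# L(s) = C(s) * P(s). The plant and PID gains below are illustrative
# (a stable first-order-plus-dead-time example, NOT the paper's unstable case).
K, T, Ldelay = 1.0, 2.0, 0.3          # plant gain, time constant, dead time
kp, ki, kd = 2.0, 0.5, 0.1            # PID gains (assumed for the demo)

w = np.logspace(-3, 2, 200000)        # frequency grid (rad/s)
s = 1j * w
Ljw = (kp + ki / s + kd * s) * K * np.exp(-Ldelay * s) / (T * s + 1)

mag = np.abs(Ljw)
phase = np.unwrap(np.angle(Ljw))

# Phase margin: 180 deg plus the phase at the gain-crossover frequency (|L| = 1).
i_gc = np.argmin(np.abs(mag - 1.0))
pm_deg = 180.0 + np.degrees(phase[i_gc])

# Gain margin: 1/|L| at the phase-crossover frequency (phase = -180 deg).
i_pc = np.argmin(np.abs(phase + np.pi))
gm = 1.0 / mag[i_pc]

print(pm_deg > 0 and gm > 1.0)  # True: the loop has positive safety margins
```

In the paper's method, a virtual gain-phase margin tester plays the role of these crossover checks, and the PID gains are then searched inside the feasible region to minimize IAE or ISE.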

  3. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.
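The detection idea, identifying a characteristic micromotion frequency in the probe signal, can be sketched with a synthetic cantilever deflection trace and an FFT; the 137 Hz tone, noise level, and sampling rate are invented for the demo and do not model the patent's detection electronics.

```python
import numpy as np

# Toy sketch: find a characteristic micromotion frequency in a noisy
# cantilever deflection signal via its amplitude spectrum (synthetic data).
fs = 10_000                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
# A 137 Hz "micromotion" buried in measurement noise
signal = 0.2 * np.sin(2 * np.pi * 137 * t) + rng.normal(0, 0.1, t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak)  # 137.0
```

A real instrument would track such spectral signatures over time to follow dynamic processes, e.g. polymerase activity, as described in the abstract.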

  4. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.

  5. A stable isotope approach and its application for identifying nitrate source and transformation process in water.

    PubMed

    Xu, Shiguo; Kang, Pingping; Sun, Ya

    2016-01-01

    Nitrate contamination of water is a worldwide environmental problem. Recent studies have demonstrated that the nitrogen (N) and oxygen (O) isotopes of nitrate (NO3(-)) can be used to trace nitrogen dynamics, including identifying nitrate sources and nitrogen transformation processes. This paper analyzes the current state of identifying nitrate sources and nitrogen transformation processes using N and O isotopes of nitrate. With regard to nitrate sources, δ(15)N-NO3(-) and δ(18)O-NO3(-) values typically vary between sources, allowing the sources to be isotopically fingerprinted. δ(15)N-NO3(-) is often effective at tracing NO3(-) sources from areas with different land use. δ(18)O-NO3(-) is more useful for identifying NO3(-) from atmospheric sources. Isotopic data can be combined with statistical mixing models to quantify the relative contributions of NO3(-) from multiple delineated sources. With regard to N transformation processes, N and O isotopes of nitrate can be used to decipher the degree of nitrogen transformation by such processes as nitrification, assimilation, and denitrification. In some cases, however, isotopic fractionation may alter the isotopic fingerprint associated with the delineated NO3(-) source(s). This problem may be addressed by combining the N and O isotopic data with other types of data, including the concentration of selected conservative elements, e.g., chloride (Cl(-)), and complementary isotopes, e.g., boron (δ(11)B) and sulfur (δ(34)S). Future studies should focus on improving stable isotope mixing models and furthering our understanding of isotopic fractionation by conducting laboratory and field experiments in different environments. PMID:26541149
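The statistical mixing models mentioned above reduce, in the two-source case, to a simple isotope mass balance. A minimal sketch with illustrative end-member values (typical literature ranges, not measurements from this paper):

```python
# Two-end-member isotopic mixing: estimate the fraction of nitrate from source 1
# given delta-15N values of the mixture and of both sources. End-member values
# are illustrative (typical literature ranges), not data from the paper.
d15n_source1 = 10.0   # e.g. manure/sewage-like signature (permil)
d15n_source2 = -2.0   # e.g. synthetic-fertilizer-like signature (permil)
d15n_mixture = 4.0    # measured water sample (permil)

# Mass balance: d_mix = f * d_1 + (1 - f) * d_2  =>  f = (d_mix - d_2) / (d_1 - d_2)
f1 = (d15n_mixture - d15n_source2) / (d15n_source1 - d15n_source2)
print(f1)  # 0.5
```

With more than two sources, Bayesian mixing models extend the same mass-balance idea using both δ(15)N and δ(18)O, which is why fractionation that shifts these values must be accounted for.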

  6. A computation using mutually exclusive processing is sufficient to identify specific Hedgehog signaling components

    PubMed Central

    Spratt, Spencer J.

    2013-01-01

    A system of more than one part can be deciphered by observing differences between the parts. A simple way to do this is by recording something absolute displaying a trait in one part and not in another: in other words, mutually exclusive computation. Conditional directed expression in vivo offers processing in more than one part of the system giving increased computation power for biological systems analysis. Here, I report the consideration of these aspects in the development of an in vivo screening assay that appears sufficient to identify components specific to a system. PMID:24391661

  7. A Low Processing Cost Adaptive Algorithm Identifying Nonlinear Unknown System with Piecewise Linear Curve

    NASA Astrophysics Data System (ADS)

    Fujii, Kensaku; Aoki, Ryo; Muneyasu, Mitsuji

    This paper proposes an adaptive algorithm for identifying unknown systems containing nonlinear amplitude characteristics. Usually, the nonlinearity is small enough to be negligible. However, in low-cost systems, such as an acoustic echo canceller using a small loudspeaker, the nonlinearity deteriorates the performance of the identification. Several methods for preventing this deterioration, based on polynomial or Volterra series approximations, have hence been proposed and studied. However, these conventional methods require a high processing cost. In this paper, we propose a method that approximates the nonlinear characteristics with a piecewise linear curve and show, using computer simulations, that the performance can be greatly improved. The proposed method also reduces the processing cost to only about twice that of a linear adaptive filter system.
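The linear core that such an identifier extends can be sketched with a plain LMS adaptive filter; the piecewise-linear fitting of the amplitude nonlinearity is the paper's contribution and is omitted here (this is an assumed illustrative setup, not the authors' code):

```python
import random

def lms_identify(unknown, n_taps=4, mu=0.05, n_samples=5000, seed=0):
    """Identify an unknown FIR system with the plain LMS algorithm."""
    rng = random.Random(seed)
    w = [0.0] * n_taps          # adaptive filter taps
    x_buf = [0.0] * n_taps      # input delay line, newest sample first
    for _ in range(n_samples):
        x_buf = [rng.uniform(-1, 1)] + x_buf[:-1]
        d = sum(h * x for h, x in zip(unknown, x_buf))    # unknown-system output
        y = sum(wi * x for wi, x in zip(w, x_buf))        # adaptive-filter output
        e = d - y
        w = [wi + mu * e * x for wi, x in zip(w, x_buf)]  # LMS tap update
    return w

h_true = [0.5, -0.3, 0.2, 0.1]
h_est = lms_identify(h_true)    # converges toward h_true
```

In the paper's setting, the same error signal would additionally drive the breakpoints of a piecewise-linear curve modeling the loudspeaker nonlinearity, at roughly twice this cost.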

  8. Identifying potential misfit items in cognitive process of learning engineering mathematics based on Rasch model

    NASA Astrophysics Data System (ADS)

    Ataei, Sh; Mahmud, Z.; Khalid, M. N.

    2014-04-01

    Student learning outcomes clarify what students should know and be able to demonstrate after completing their course, so a central issue in the process of teaching and learning is how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model in measuring the cognitive process of engineering students' learning of mathematics. The study provides insights into the cognitive ability of 54 engineering students learning Calculus III, measured on 31 items based on Bloom's Taxonomy. The results indicate that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields fit statistics that identify whether the data depart from the Rasch theoretical model. The study identified some potentially misfitting items based on ZSTD measures, with misfit items removed when their outfit MNSQ was above 1.3 or below 0.7. It is therefore recommended that these items be reviewed or revised to better match the range of students' ability in the respective course.
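The outfit mean-square used as the removal criterion is the average squared standardized residual under the dichotomous Rasch model. A small illustration with made-up abilities and responses (the 0.7-1.3 band matches the study's cutoffs; the data are hypothetical):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit_mnsq(responses, thetas, b):
    """Outfit mean-square for one item: mean squared standardized residual."""
    zs = []
    for x, theta in zip(responses, thetas):
        p = rasch_p(theta, b)
        zs.append((x - p) ** 2 / (p * (1 - p)))
    return sum(zs) / len(zs)

# Hypothetical: 5 students, abilities in logits, item difficulty b = 0.0.
thetas = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses = [0, 0, 1, 1, 1]   # deterministic Guttman-style pattern
mnsq = outfit_mnsq(responses, thetas, 0.0)
# Flag the item if it falls outside the 0.7-1.3 band used in the study;
# a perfectly predictable pattern overfits (MNSQ < 0.7).
misfit = mnsq > 1.3 or mnsq < 0.7
```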

  9. Approaches to robustness

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Heaney, Kevin D.

    2003-04-01

    The term robustness in signal processing applications usually refers to approaches that are not degraded significantly when the assumptions invoked in defining the processing algorithm are no longer valid. Highly tuned algorithms that fall apart in real-world conditions are useless. The classic example is super-directive arrays of closely spaced elements: very narrow beams and high directivity could be predicted under ideal conditions but could not be achieved under realistic conditions of amplitude, phase, and position errors. A robust design tries to take the real environment into account as part of the optimization problem. This problem led to the introduction of the white noise gain constraint and diagonal loading in adaptive beamforming. Multiple linear constraints have been introduced in pursuit of robustness. Sonar systems such as towed arrays operate in less than ideal conditions, making robustness a concern. A special problem in sonar systems is failed array elements, which leads to severe degradation in beam patterns and bearing response patterns. Another robustness issue arises in matched field processing, which uses an acoustic propagation model in the beamforming, since knowledge of the environmental parameters is usually limited. This paper reviews the various approaches to achieving robustness in sonar systems.
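Diagonal loading, mentioned above as the classic robustness fix, amounts to adding a small multiple of the identity to the sample covariance before inverting it. A toy two-element MVDR example with illustrative numbers (not from the paper):

```python
def mvdr_weights_2el(R, a, loading=0.0):
    """MVDR weights for a 2-element array with optional diagonal loading.

    w = (R + loading*I)^{-1} a / (a^H (R + loading*I)^{-1} a)
    Loading bounds the white noise gain when R is ill-conditioned.
    """
    r11 = R[0][0] + loading          # load the diagonal
    r22 = R[1][1] + loading
    r12, r21 = R[0][1], R[1][0]
    det = r11 * r22 - r12 * r21      # closed-form 2x2 inverse
    Ri = [[r22 / det, -r12 / det], [-r21 / det, r11 / det]]
    Ra = [Ri[0][0] * a[0] + Ri[0][1] * a[1],
          Ri[1][0] * a[0] + Ri[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

# Near-singular sample covariance plus a slightly mismatched steering
# vector (modeling amplitude/phase calibration error).
R = [[1.0, 0.999], [0.999, 1.0]]
a = [1.0 + 0j, 0.9 + 0j]
w_plain = mvdr_weights_2el(R, a)
w_loaded = mvdr_weights_2el(R, a, loading=0.1)
```

Without loading, the mismatch drives the weight norm (and hence the white noise gain) up by roughly two orders of magnitude; loading trades a little adapted performance for bounded sensitivity, while the distortionless constraint a^H w = 1 is preserved in both cases.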

  10. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks.

    PubMed

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  11. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks

    PubMed Central

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  12. Transposon mutagenesis identifies genes and cellular processes driving epithelial-mesenchymal transition in hepatocellular carcinoma.

    PubMed

    Kodama, Takahiro; Newberg, Justin Y; Kodama, Michiko; Rangel, Roberto; Yoshihara, Kosuke; Tien, Jean C; Parsons, Pamela H; Wu, Hao; Finegold, Milton J; Copeland, Neal G; Jenkins, Nancy A

    2016-06-14

    Epithelial-mesenchymal transition (EMT) is thought to contribute to metastasis and chemoresistance in patients with hepatocellular carcinoma (HCC), leading to their poor prognosis. The genes driving EMT in HCC are not yet fully understood, however. Here, we show that mobilization of Sleeping Beauty (SB) transposons in immortalized mouse hepatoblasts induces mesenchymal liver tumors on transplantation to nude mice. These tumors show significant down-regulation of epithelial markers, along with up-regulation of mesenchymal markers and EMT-related transcription factors (EMT-TFs). Sequencing of transposon insertion sites from tumors identified 233 candidate cancer genes (CCGs) that were enriched for genes and cellular processes driving EMT. Subsequent trunk driver analysis identified 23 CCGs that are predicted to function early in tumorigenesis and whose mutation or alteration in patients with HCC is correlated with poor patient survival. Validation of the top trunk drivers identified in the screen, including MET (MET proto-oncogene, receptor tyrosine kinase), GRB2-associated binding protein 1 (GAB1), HECT, UBA, and WWE domain containing 1 (HUWE1), lysine-specific demethylase 6A (KDM6A), and protein-tyrosine phosphatase, nonreceptor-type 12 (PTPN12), showed that deregulation of these genes activates an EMT program in human HCC cells that enhances tumor cell migration. Finally, deregulation of these genes in human HCC was found to confer sorafenib resistance through apoptotic tolerance and reduced proliferation, consistent with recent studies showing that EMT contributes to the chemoresistance of tumor cells. Our unique cell-based transposon mutagenesis screen appears to be an excellent resource for discovering genes involved in EMT in human HCC and potentially for identifying new drug targets. PMID:27247392

  13. Scan-pattern and signal processing for microvasculature visualization with complex SD-OCT: tissue-motion artifacts robustness and decorrelation time - blood vessel characteristics

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Zaitsev, Vladimir Y.; Gelikonov, Grigory V.; Matveyev, Alexandr L.; Moiseev, Alexander A.; Ksenofontov, Sergey Y.; Gelikonov, Valentin M.; Demidov, Valentin; Vitkin, Alex

    2015-03-01

    We propose a modification of the OCT scanning pattern and corresponding signal processing for 3D visualization of blood microcirculation from complex-signal B-scans. We describe scanning pattern modifications that increase the method's robustness to bulk tissue motion artifacts at speeds up to several cm/s. With these modifications, OCT-based angiography becomes more practical under realistic measurement conditions. For these scan patterns, we apply novel signal processing to separate blood vessels with different decorrelation times by varying the effective temporal diversity of the processed signals.

  14. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing discriminative approaches assume the input variables X to be noise free; thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
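The paper's RR formulation rests on convex rank minimization; as a simpler stand-in for intuition, iteratively reweighted least squares with Huber weights shows the same outlier down-weighting idea in one dimension (an illustrative substitute, not the authors' algorithm):

```python
def huber_irls_slope(x, y, delta=1.0, iters=50):
    """Robust 1-D slope (no intercept) via IRLS with Huber weights.

    Residuals larger than delta get weight delta/|r|, so gross
    outliers barely influence the fit.
    """
    # ordinary least-squares starting point
    b = sum(a * c for a, c in zip(x, y)) / sum(a * a for a in x)
    for _ in range(iters):
        r = [yi - b * xi for xi, yi in zip(x, y)]
        w = [1.0 if abs(e) <= delta else delta / abs(e) for e in r]
        b = (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
             / sum(wi * xi * xi for wi, xi in zip(w, x)))
    return b

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 6, 8, 10, 40]   # last response is a gross outlier; true slope is 2
b_robust = huber_irls_slope(x, y)
```

Plain least squares on these data gives a slope near 3.85; the reweighted fit stays close to the true slope of 2, which is the behaviour RR aims for in the high-dimensional, structured settings above.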

  15. Pharmaceutical screen identifies novel target processes for activation of autophagy with a broad translational potential.

    PubMed

    Chauhan, Santosh; Ahmed, Zahra; Bradfute, Steven B; Arko-Mensah, John; Mandell, Michael A; Won Choi, Seong; Kimura, Tomonori; Blanchet, Fabien; Waller, Anna; Mudd, Michal H; Jiang, Shanya; Sklar, Larry; Timmins, Graham S; Maphis, Nicole; Bhaskar, Kiran; Piguet, Vincent; Deretic, Vojo

    2015-01-01

    Autophagy is a conserved homeostatic process active in all human cells and affecting a spectrum of diseases. Here we use a pharmaceutical screen to discover new mechanisms for activation of autophagy. We identify a subset of pharmaceuticals inducing autophagic flux with effects in diverse cellular systems modelling specific stages of several human diseases such as HIV transmission and hyperphosphorylated tau accumulation in Alzheimer's disease. One drug, flubendazole, is a potent inducer of autophagy initiation and flux by affecting acetylated and dynamic microtubules in a reciprocal way. Disruption of dynamic microtubules by flubendazole results in mTOR deactivation and dissociation from lysosomes leading to TFEB (transcription factor EB) nuclear translocation and activation of autophagy. By inducing microtubule acetylation, flubendazole activates JNK1 leading to Bcl-2 phosphorylation, causing release of Beclin1 from Bcl-2-Beclin1 complexes for autophagy induction, thus uncovering a new approach to inducing autophagic flux that may be applicable in disease treatment. PMID:26503418

  16. Systems and processes for identifying features and determining feature associations in groups of documents

    DOEpatents

    Rose, Stuart J.; Cowley, Wendy E.; Crow, Vernon L.

    2016-01-12

    Systems and computer-implemented processes for identification of features and determination of feature associations in a group of documents can involve providing a plurality of keywords identified among the terms of at least some of the documents. A value measure can be calculated for each keyword. High-value keywords are defined as those keywords having value measures that exceed a threshold. For each high-value keyword, term-document associations (TDA) are accessed. The TDA characterize measures of association between each term and at least some documents in the group. A processor quantifies similarities between unique pairs of high-value keywords based on the TDA for each respective high-value keyword and generates a similarity matrix that indicates one or more sets that each comprise highly associated high-value keywords.
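The similarity step can be sketched as cosine similarity between the term-document association rows of each high-value keyword (the toy vectors below are made up, and the patent does not specify cosine as the measure; it is an assumed choice for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two association vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical term-document associations: one row per high-value
# keyword, one column per document in the group.
tda = {
    "robust":     [3, 0, 2, 1],
    "robustness": [2, 0, 3, 1],
    "nitrate":    [0, 4, 0, 0],
}
keys = sorted(tda)
# upper triangle of the similarity matrix over unique keyword pairs
similarity = {(a, b): cosine(tda[a], tda[b])
              for i, a in enumerate(keys) for b in keys[i + 1:]}
```

Thresholding this matrix then yields the sets of highly associated high-value keywords that the patent describes.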

  17. Pharmaceutical screen identifies novel target processes for activation of autophagy with a broad translational potential

    PubMed Central

    Chauhan, Santosh; Ahmed, Zahra; Bradfute, Steven B.; Arko-Mensah, John; Mandell, Michael A.; Won Choi, Seong; Kimura, Tomonori; Blanchet, Fabien; Waller, Anna; Mudd, Michal H.; Jiang, Shanya; Sklar, Larry; Timmins, Graham S.; Maphis, Nicole; Bhaskar, Kiran; Piguet, Vincent; Deretic, Vojo

    2015-01-01

    Autophagy is a conserved homeostatic process active in all human cells and affecting a spectrum of diseases. Here we use a pharmaceutical screen to discover new mechanisms for activation of autophagy. We identify a subset of pharmaceuticals inducing autophagic flux with effects in diverse cellular systems modelling specific stages of several human diseases such as HIV transmission and hyperphosphorylated tau accumulation in Alzheimer's disease. One drug, flubendazole, is a potent inducer of autophagy initiation and flux by affecting acetylated and dynamic microtubules in a reciprocal way. Disruption of dynamic microtubules by flubendazole results in mTOR deactivation and dissociation from lysosomes leading to TFEB (transcription factor EB) nuclear translocation and activation of autophagy. By inducing microtubule acetylation, flubendazole activates JNK1 leading to Bcl-2 phosphorylation, causing release of Beclin1 from Bcl-2-Beclin1 complexes for autophagy induction, thus uncovering a new approach to inducing autophagic flux that may be applicable in disease treatment. PMID:26503418

  18. Towards a typology of business process management professionals: identifying patterns of competences through latent semantic analysis

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Schmiedel, Theresa; Gorbacheva, Elena; vom Brocke, Jan

    2016-01-01

    While researchers have analysed the organisational competences that are required for successful Business Process Management (BPM) initiatives, individual BPM competences have not yet been studied in detail. In this study, latent semantic analysis is used to examine a collection of 1507 BPM-related job advertisements in order to develop a typology of BPM professionals. This empirical analysis reveals distinct ideal types and profiles of BPM professionals on several levels of abstraction. A closer look at these ideal types and profiles confirms that BPM is a boundary-spanning field that requires interdisciplinary sets of competence that range from technical competences to business and systems competences. Based on the study's findings, it is posited that individual and organisational alignment with the identified ideal types and profiles is likely to result in high employability and organisational BPM success.

  19. Finding Jumps in Otherwise Smooth Curves: Identifying Critical Events in Political Processes

    PubMed Central

    Ratkovic, Marc T.

    2010-01-01

    Many social processes are stable and smooth in general, with discrete jumps. We develop a sequential segmentation spline method that can identify both the location and the number of discontinuities in a series of observations with a time component, while fitting a smooth spline between jumps, using a modified Bayesian Information Criterion statistic as a stopping rule. We explore the method in a large-n, unbalanced panel setting with George W. Bush’s approval data, a small-n time series with median DW-NOMINATE scores for each Congress over time, and a series of simulations. We compare the method to several extant smoothers, and the method performs favorably in terms of visual inspection, residual properties, and event detection. Finally, we discuss extensions of the method. PMID:20721311
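A stripped-down analogue of the stopping rule: compare BIC with and without a single breakpoint, with constant segments standing in for the splines (a simplification; the paper uses a modified BIC, smooth splines between jumps, and sequential segmentation rather than one split):

```python
import math

def sse(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs)

def best_jump(xs):
    """Index of a single mean shift if BIC prefers it over no shift, else None."""
    n = len(xs)
    # one mean parameter, no breakpoint (simplified parameter count)
    bic_none = n * math.log(max(sse(xs), 1e-12) / n) + 1 * math.log(n)
    best, best_bic = None, bic_none
    for k in range(2, n - 1):          # keep at least 2 points per segment
        rss = sse(xs[:k]) + sse(xs[k:])
        bic = n * math.log(max(rss, 1e-12) / n) + 2 * math.log(n)
        if bic < best_bic:
            best, best_bic = k, bic
    return best

series = [0.1, -0.2, 0.0, 0.15, 5.1, 4.9, 5.2, 5.0]
print(best_jump(series))  # 4: the jump lies between index 3 and index 4
```

Applying the same test recursively within each segment, until BIC stops improving, gives the sequential flavour of the paper's method.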

  20. Identifying Areas for Improvement in the HIV Screening Process of a High-Prevalence Emergency Department.

    PubMed

    Zucker, Jason; Cennimo, David; Sugalski, Gregory; Swaminthan, Shobha

    2016-06-01

    Since 1993, the Centers for Disease Control recommendations for HIV testing have been extended to include persons obtaining care in the emergency department (ED). Situated in Newark, New Jersey, the University Hospital (UH) ED serves a community with a greater than 2% HIV prevalence, and a recent study showed a UH ED HIV seroprevalence of 6.5%, of which 33% were unknown diagnoses. Electronic records for patients seen in the UH ED from October 1st, 2014, to February 28th, 2015, were obtained. Information was collected on demographics, ED diagnosis, triage time, and HIV testing. Random sampling of 500 patients was performed to identify those eligible for screening. Univariate and multivariate analysis was done to assess screening characteristics. Only 9% (8.8-9.3%) of patients eligible for screening were screened in the ED. Sixteen percent (15.7-16.6%) of those in the age group 18-25 and 12% (11.6-12.3%) of those in the age group 26-35 were screened, whereas 8% (7.8-8.2%) of those in the age group 35-45 were screened. 19.6% (19-20.1%) of eligible patients in fast track were screened versus 1.7% (1.6-1.8%) in the main ED. Eighty-five percent of patients screened were triaged between 6 a.m. and 8 p.m., with 90% of all screening tests done by the HIV counseling, testing, and referral services. Due to the high prevalence of HIV, urban EDs play an integral public health role in the early identification and linkage to care of patients with HIV. By evaluating our current screening process, we identified opportunities to improve our screening process and reduce missed opportunities for diagnosis. PMID:27286295

  1. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    Atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly reported in previous research. The present study examined whether face scanning patterns could be potentially useful to identify children with ASD by adopting a machine learning algorithm for the classification purpose. Specifically, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should focus on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. PMID:27037971

  2. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes. PMID:26671796
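The "no prior number of clusters" property comes from the Dirichlet process prior, whose partition structure is the Chinese restaurant process (CRP). A self-contained sample from that prior, as a conceptual sketch only; DPNuc's actual MCMC also involves likelihood terms for the sequencing reads:

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process.

    The CRP is the prior over partitions underlying Dirichlet process
    mixture models: the number of clusters is random, not fixed upfront.
    """
    rng = random.Random(seed)
    counts = []                      # cluster sizes ("customers per table")
    labels = []
    for i in range(n):
        r = rng.random() * (i + alpha)
        if r < alpha:                # new table with probability alpha/(i+alpha)
            labels.append(len(counts))
            counts.append(1)
        else:                        # join table t with probability counts[t]/(i+alpha)
            acc = alpha
            for t, c in enumerate(counts):
                acc += c
                if r < acc:
                    labels.append(t)
                    counts[t] += 1
                    break
    return labels, counts

labels, counts = crp_assignments(50, alpha=1.0)
```

Larger alpha favours more clusters; in a mixture model each "table" would carry its own component parameters (here, a nucleosome's position and spread), resampled by MCMC.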

  3. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael E.

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  4. Beyond Element-Wise Interactions: Identifying Complex Interactions in Biological Processes

    PubMed Central

    Kendrick, Keith; Feng, Jianfeng

    2009-01-01

    Background Biological processes typically involve the interactions of a number of elements (genes, cells) acting on each other. Such processes are often modelled as networks whose nodes are the elements in question and whose edges are pairwise relations between them (transcription, inhibition). But more often than not, elements actually work cooperatively or competitively to achieve a task, or an element can act on the interaction between two others, as in the case of an enzyme controlling a reaction rate. We call these types of interaction “complex” and propose ways to identify them from time-series observations. Methodology We use Granger causality, a measure of the interaction between two signals, to characterize the influence of an enzyme on a reaction rate. We extend its traditional formulation to the case of multi-dimensional signals in order to capture group interactions, not just element interactions. Our method is extensively tested on simulated data and applied to three biological datasets: microarray data of the yeast Saccharomyces cerevisiae, local field potential recordings of two brain areas and a metabolic reaction. Conclusions Our results demonstrate that complex Granger causality can reveal new types of relation between signals and is particularly suited to biological data. Our approach raises some fundamental issues for the systems biology approach, since finding all complex causalities (interactions) is an NP-hard problem. PMID:19774090
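Pairwise (element-wise) Granger causality, the baseline the paper generalizes, compares prediction error with and without the other signal's past. A one-lag sketch on simulated toy data (not the paper's datasets; the "complex" extension replaces the scalar regressors with multi-dimensional ones):

```python
import math
import random

def resid_var(y, preds):
    """Least-squares residual variance of y on one or two zero-mean predictors."""
    n = len(y)
    if len(preds) == 1:
        x = preds[0]
        b = sum(a * c for a, c in zip(x, y)) / sum(a * a for a in x)
        r = [yi - b * xi for yi, xi in zip(y, x)]
    else:
        x1, x2 = preds
        s11 = sum(a * a for a in x1)
        s22 = sum(a * a for a in x2)
        s12 = sum(a * b for a, b in zip(x1, x2))
        sy1 = sum(a * b for a, b in zip(x1, y))
        sy2 = sum(a * b for a, b in zip(x2, y))
        det = s11 * s22 - s12 * s12          # normal equations via Cramer's rule
        b1 = (sy1 * s22 - sy2 * s12) / det
        b2 = (sy2 * s11 - sy1 * s12) / det
        r = [yi - b1 * u - b2 * v for yi, u, v in zip(y, x1, x2)]
    return sum(e * e for e in r) / n

def granger_1lag(x, y):
    """Log variance ratio; > 0 means x's past helps predict y (x -> y)."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    v_restricted = resid_var(y_t, [y_lag])          # y's own past only
    v_full = resid_var(y_t, [y_lag, x_lag])         # plus x's past
    return math.log(v_restricted / v_full)

# Toy system: x is white noise and drives y with a one-step delay.
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(0.8 * x[t - 1] + 0.1 * rng.gauss(0, 1))

g_xy = granger_1lag(x, y)   # large: x Granger-causes y
g_yx = granger_1lag(y, x)   # near zero: y does not cause x
```

The asymmetry between g_xy and g_yx is what identifies the direction of influence; grouping several elements into the predictor block turns this into the complex Granger causality of the paper.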

  5. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record

    PubMed Central

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I. M.

    2012-01-01

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records. PMID:22734056

  6. Use of Sulphur and Boron Isotopes to Identify Natural Gas Processing Emissions Sources

    NASA Astrophysics Data System (ADS)

    Bradley, C. E.; Norman, A.; Wieser, M. E.

    2003-12-01

    Natural gas processing results in the emission of large amounts of gaseous pollutants as a result of planned and/or emergency flaring, sulphur incineration, and normal operation. Since many gas plants contribute to the same air shed, it is not possible to conclusively determine the sources, amounts, and characteristics of pollution from a particular processing facility using traditional methods. However, sulphur isotopes have proven useful in the apportionment of sources of atmospheric sulphate (Norman et al., 1999), and boron isotopes have been shown to be of use in tracing coal contamination through groundwater (Davidson and Bassett, 1993). In this study, both sulphur and boron isotopes have been measured at source, receptor, and control sites, and, if emissions prove to be sufficiently distinct isotopically, they will be used to identify and apportion emissions downwind. Sulphur is present in natural gas as hydrogen sulphide (H2S), which is combusted to sulphur dioxide (SO2) prior to its release to the atmosphere, while boron is present both in hydrocarbon deposits and in any water used in the process. Little is known about the isotopic abundance variations of boron in hydrocarbon reservoirs, but Krouse (1991) has shown that the sulphur isotope composition of H2S in reservoirs varies according to both the concentration and the method of formation of H2S. As a result, gas plants processing gas from different reservoirs are expected to produce emissions with unique isotopic compositions. Samples were collected using a high-volume air sampler placed directly downwind of several gas plants, as well as at a receptor site and a control site. Aerosol sulphate and boron were collected on quartz fibre filters, while SO2 was collected on potassium hydroxide-impregnated cellulose filters. Solid sulphur samples were taken from those plants that process sulphur in order to compare the isotopic composition with atmospheric measurements. A

  7. Hillslopes to Hollows to Channels: Identifying Process Transitions and Domains using Characteristic Scaling Relations

    NASA Astrophysics Data System (ADS)

    Williams, K.; Locke, W. W.

    2011-12-01

    Headwater catchments are partitioned into hillslopes, unchanneled valleys (hollows), and channels. Low order (less than or equal to two) channels comprise most of the stream length in the drainage network so defining where hillslopes end and hollows begin, and where hollows end and channels begin, is important for calibration and verification of hydrologic runoff and sediment production modeling. We test the use of landscape scaling relations to detect flow regimes characteristic of diffusive, concentrated, and incisive runoff, and use these flow regimes as proxies for hillslope, hollow, and channeled landforms. We use LiDAR-derived digital elevation models (DEMs) of two pairs of headwater catchments in southwest and north-central Montana to develop scaling relations of flowpath length, total stream power, and contributing area. The catchment pairs contrast low versus high drainage density and north versus south aspect. Inflections in scaling relations of contributing area and flowpath length in a single basin (modified Hack's law) and contributing area and total stream power were used to identify hillslope and fluvial process domain transitions. In the modified Hack's law, inflections in the slope of the log-log power law are hypothesized to correspond to changes in flow regime used as proxies for hillslope, hollow, and channeled landforms. Similarly, rate of change of total stream power with contributing area is hypothesized to become constant and then decrease at the hillslope to fluvial domain transition. Power law scaling of frequency-magnitude plots of curvature and an aspect-related parameter were also tested as an indicator of the transition from scale-dependent hillslope length to the scale invariant fluvial domain. Curvature and aspect were calculated at each cell in spectrally filtered DEMs. Spectral filtering by fast Fourier and wavelet transforms enhances detection of fine-scale fluvial features by removing long wavelength topography. Using the
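The scaling-relation test described in this record can be sketched numerically: given contributing-area and flowpath-length pairs extracted from a DEM, an inflection in the log-log slope of the modified Hack's law relation marks a candidate process-domain transition. Below is a minimal brute-force two-segment fit on synthetic data; the break location, exponents, and noise level are hypothetical illustrations, not values from the study.

```python
import numpy as np

def loglog_breakpoint(area, length):
    """Brute-force two-segment least-squares fit in log-log space;
    returns the contributing area at which the fitted slope changes."""
    x, y = np.log10(area), np.log10(length)
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_sse, best_break = np.inf, x[len(x) // 2]
    for i in range(3, len(x) - 3):  # require at least 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(((np.polyval(coef, xs) - ys) ** 2).sum())
        if sse < best_sse:
            best_sse, best_break = sse, x[i]
    return 10.0 ** best_break

# Synthetic length-area data: slope 0.8 (diffusive hillslope regime) below
# A = 1e3 m^2, slope 0.5 (fluvial regime) above, with mild lognormal noise.
rng = np.random.default_rng(0)
A = np.logspace(1, 6, 200)                       # contributing area, m^2
L = np.where(A < 1e3, A**0.8, 1e3**0.3 * A**0.5) # continuous at the break
L = L * 10 ** rng.normal(0, 0.02, A.size)
print(f"estimated transition area: {loglog_breakpoint(A, L):.0f} m^2")
```

The recovered break should fall near the imposed 1e3 m^2 transition; on real DEM data the inflection is less sharp, and the spectral filtering the authors describe is what makes fine-scale fluvial features detectable at all.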

  8. Identifying and processing the gap between perceived and actual agreement in breast pathology interpretation.

    PubMed

    Carney, Patricia A; Allison, Kimberly H; Oster, Natalia V; Frederick, Paul D; Morgan, Thomas R; Geller, Berta M; Weaver, Donald L; Elmore, Joann G

    2016-07-01

    We examined how pathologists process their perceptions of how their interpretations of diagnoses for breast pathology cases agree with a reference standard. To accomplish this, we created an individualized self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations on a test set compared with a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared with the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts on learning about the differences between their perceived and actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement); overestimation was greatest for atypia and least for invasive breast cancer. Non-academic-affiliated pathologists were more likely to closely estimate their performance than academic-affiliated pathologists (77.6 vs 48%; P=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses than non-academic-affiliated pathologists (40 vs 6%). Before the continuing medical education program, nearly 55% (54.9%) of participants could not estimate whether they would overinterpret or underinterpret the cases relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would use to change their practice. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement with a

  9. Identifying vegetation's influence on multi-scale fluvial processes based on plant trait adaptations

    NASA Astrophysics Data System (ADS)

    Manners, R.; Merritt, D. M.; Wilcox, A. C.; Scott, M.

    2015-12-01

    Riparian vegetation-geomorphic interactions are critical to the physical and biological function of riparian ecosystems, yet we lack a mechanistic understanding of these interactions and predictive ability at the reach to watershed scale. Plant functional groups, or groupings of species that have similar traits, either in terms of a plant's life history strategy (e.g., drought tolerance) or morphology (e.g., growth form), may provide an expression of vegetation-geomorphic interactions. We are developing an approach that 1) identifies where along a river corridor plant functional groups exist and 2) links the traits that define functional groups to their impact on fluvial processes. The Green and Yampa Rivers in Dinosaur National Monument have wide variations in hydrology, hydraulics, and channel morphology, as well as a large dataset of species presence. For these rivers, we build a predictive model of the probable presence of plant functional groups based on site-specific aspects of the flow regime (e.g., inundation probability and duration), hydraulic characteristics (e.g., velocity), and substrate size. Functional group traits are collected from the literature and measured in the field. We found that life-history traits more strongly predicted functional group presence than did morphological traits. However, some life-history traits, important for determining the likelihood of a plant existing along an environmental gradient, are directly related to the morphological properties of the plant, important for the plant's impact on fluvial processes. For example, stem density (i.e., dry mass divided by volume of stem) is positively correlated with drought tolerance and is also related to the modulus of elasticity. Growth form, which is related to the plant's susceptibility to biomass-removing fluvial disturbances, is also related to frontal area. Using this approach, we can identify how plant community composition and distribution shifts with a change to the flow

  10. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques in detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques.
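The Kappa coefficient used for the accuracy assessment in this record can be computed directly from the error matrix. A minimal sketch follows; the 2×2 damage/no-damage pixel counts are hypothetical, chosen only to illustrate the calculation.

```python
import numpy as np

def cohens_kappa(confusion):
    """Kappa coefficient from a square error (confusion) matrix.
    Rows = reference classes, columns = mapped classes; the diagonal
    holds correctly identified cells, and off-diagonal cells are the
    omission/commission errors."""
    m = np.asarray(confusion, dtype=float)
    total = m.sum()
    observed = np.trace(m) / total                       # overall accuracy
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / total**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical damage / no-damage pixel counts from a change-detection map
cm = [[120, 30],
      [20, 230]]
print(round(cohens_kappa(cm), 3))  # → 0.73
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw overall accuracy when classes (damaged vs. undamaged pixels) are imbalanced.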

  11. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques in detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.

  12. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process

    PubMed Central

    Erzurumluoglu, A. Mesut; Shihab, Hashem A.; Baird, Denis; Richardson, Tom G.; Day, Ian N. M.; Gaunt, Tom R.

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). This wide range of methods and the diversity of file formats used in sequence analysis are a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that, although many possess “just enough” knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data were created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review will also compare the methods used to identify highly penetrant variants when data are obtained from consanguineous individuals as opposed to nonconsanguineous ones, and when Mendelian disorders are analysed as opposed to common-complex disorders. PMID:26106619

  13. Identifying weathering processes by Si isotopes in two small catchments in the Black Forest (Germany)

    NASA Astrophysics Data System (ADS)

    Steinhoefel, G.; Breuer, J.; von Blanckenburg, F.; Horn, I.; Kaczorek, D.; Sommer, M.

    2013-12-01

    isotopically light Si with Fe-oxides, which shifts surface water to δ30Si values up to 1.1‰. The Si isotope signature of the main stream depends on the variable proportions of inflowing surface water and groundwater. The results from these small catchments demonstrate that Si isotopes are a powerful tool for identifying weathering processes and the sources of dissolved Si, which can now be used to constrain the isotope signature of large river systems.

  14. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    SciTech Connect

    Hurlbert, Allen H.; Stegen, James C.

    2014-12-02

    Many processes have been put forward to explain the latitudinal gradient in species richness. Here, we use a simulation model to examine four of the most common hypotheses and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The productivity, or energetic constraints, hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The tropical stability hypothesis argues that major climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. (4) Finally, the speciation rates hypothesis suggests that the latitudinal richness gradient arises from a parallel gradient in rates of speciation. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all exhibited phylogenies which were more imbalanced than expected, showed a negative relationship between mean root distance and richness, and diversity-dependence of speciation rate estimates through time. Using Bayesian Analysis of Macroevolutionary Mixtures on the simulated phylogenies, we found that the relationship between speciation rates and latitude could distinguish among these three scenarios. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with more than one hypothesis.

  15. Robust conversion of marrow cells to skeletal muscle with formation of marrow-derived muscle cell colonies: A multifactorial process

    SciTech Connect

    Abedi, Mehrdad; Greer, Deborah A.; Colvin, Gerald A.; Demers, Delia A.; Dooner, Mark S.; Harpel, Jasha A.; Weier, Heinz-Ulrich G.; Lambert, Jean-Francois; Quesenberry, Peter J.

    2004-01-10

    Murine marrow cells are capable of repopulating skeletal muscle fibers. A point of concern has been the robustness of such conversions. We have investigated the impact of the type of cell delivery, muscle injury, the nature of the delivered cell, and stem cell mobilizations on marrow-to-muscle conversion. We transplanted GFP transgenic marrow into irradiated C57BL/6 mice and then injured the anterior tibialis muscle with cardiotoxin. One month after injury, sections were analyzed by standard and deconvolutional microscopy for expression of muscle and hematopoietic markers. Irradiation was essential to conversion, although whether by injury or induction of chimerism is not clear. Cardiotoxin-injected and, to a lesser extent, PBS-injected muscles showed significant numbers of GFP+ muscle fibers, while uninjected muscles showed only rare GFP+ cells. Marrow conversion to muscle was increased by two cycles of G-CSF mobilization and, to a lesser extent, with G-CSF and steel or GM-CSF. Transplantation of female GFP to male C57BL/6 and GFP to Rosa26 mice showed fusion of donor cells to recipient muscle. High numbers of donor-derived muscle colonies and up to 12 percent GFP-positive muscle cells were seen after mobilization or direct injection. These levels of donor muscle chimerism approach levels which could be clinically significant in developing strategies for the treatment of muscular dystrophies. In summary, the conversion of marrow to skeletal muscle cells is based on cell fusion and is critically dependent on injury. This conversion is also numerically significant and increases with mobilization.

  16. Energy Landscape Reveals That the Budding Yeast Cell Cycle Is a Robust and Adaptive Multi-stage Process

    PubMed Central

    Lv, Cheng; Li, Xiaoguang; Li, Fangting; Li, Tiejun

    2015-01-01

    Quantitatively understanding the robustness, adaptivity and efficiency of cell cycle dynamics under the influence of noise is a fundamental but difficult question to answer for most eukaryotic organisms. Using a simplified budding yeast cell cycle model perturbed by intrinsic noise, we systematically explore these issues from an energy landscape point of view by constructing an energy landscape for the considered system based on large deviation theory. Analysis shows that the cell cycle trajectory is sharply confined by the ambient energy barrier, and the landscape along this trajectory exhibits a generally flat shape. We explain the evolution of the system on this flat path by incorporating its non-gradient nature. Furthermore, we illustrate how this global landscape changes in response to external signals, observing a nice transformation of the landscapes as the excitable system approaches a limit cycle system when nutrients are sufficient, as well as the formation of additional energy wells when the DNA replication checkpoint is activated. By taking into account the finite volume effect, we find additional pits along the flat cycle path in the landscape associated with the checkpoint mechanism of the cell cycle. The difference between the landscapes induced by intrinsic and extrinsic noise is also discussed. In our opinion, this meticulous structure of the energy landscape for our simplified model is of general interest to other cell cycle dynamics, and the proposed methods can be applied to study similar biological systems. PMID:25794282

  17. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  18. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and will, therefore, not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  19. Robust control of accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, W. Joel D.; Abdallah, Chaouki T.

    1991-07-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and will, therefore, not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  20. Reliable and robust entanglement witness

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Mei, Quanxin; Zhou, Shan; Ma, Xiongfeng

    2016-04-01

    Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, witnessing entanglement is by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first one is reliability. Due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second problem relates to robustness. A witness may not be optimal for a target state and fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness scheme, in which the correctness of the conclusion is independent of the implemented measurement devices. In order to overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only data postprocessing needs to be modified compared to the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to maximally draw reliable conclusions.
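As a concrete illustration of the sign convention described in this record, the textbook witness W = I/2 − |Φ+⟩⟨Φ+| has a non-negative expectation on every separable state and a negative one on the Bell state it targets. This is a minimal numerical sketch of that property only; it is not the measurement-device-independent scheme the paper builds on, and it also illustrates the robustness problem, since an entangled state far from |Φ+⟩ can slip past this particular W.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) and its standard witness.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
W = 0.5 * np.eye(4) - np.outer(phi, phi)

def expectation(W, rho):
    """Expected witness outcome Tr(W rho)."""
    return float(np.trace(W @ rho).real)

rho_bell = np.outer(phi, phi)             # maximally entangled state
rho_sep = np.diag([1.0, 0.0, 0.0, 0.0])   # separable product state |00><00|

print(expectation(W, rho_bell))  # negative: entanglement witnessed
print(expectation(W, rho_sep))   # non-negative: test is inconclusive
```

The separable bound comes from the fact that no product state can have overlap greater than 1/2 with |Φ+⟩, so Tr(Wρ) ≥ 0 for all separable ρ, while Tr(W|Φ+⟩⟨Φ+|) = −1/2.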

  1. On the robustness of the r-process in neutron-star mergers against variations of nuclear masses

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Wu, M. R.; Martínez-Pinedo, G.; Langanke, K.; Bauswein, A.; Janka, H.-T.; Frank, A.

    2016-07-01

    r-process calculations have been performed for matter ejected dynamically in neutron star mergers (NSM); the calculations are based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamics (SPH) simulation. Our calculations consider an extended nuclear reaction network, including spontaneous, β- and neutron-induced fission, and adopt fission yield distributions from the ABLA code. In this contribution we have studied the sensitivity of the r-process abundances to nuclear masses by using different mass models for the calculation of neutron capture cross sections via the statistical model. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion allowing for all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundances (the second and third peaks, the rare-earth peak and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130.

  2. Identifying Leadership Potential: The Process of Principals within a Charter School Network

    ERIC Educational Resources Information Center

    Waidelich, Lynn A.

    2012-01-01

    The importance of strong educational leadership for American K-12 schools cannot be overstated. As such, school districts need to actively recruit and develop leaders. One way to do so is for school officials to become more strategic in leadership identification and development. If contemporary leaders are strategic about whom they identify and…

  3. Students' Conceptual Knowledge and Process Skills in Civic Education: Identifying Cognitive Profiles and Classroom Correlates

    ERIC Educational Resources Information Center

    Zhang, Ting; Torney-Purta, Judith; Barber, Carolyn

    2012-01-01

    In 2 related studies framed by social constructivism theory, the authors explored a fine-grained analysis of adolescents' civic conceptual knowledge and skills and investigated them in relation to factors such as teachers' qualifications and students' classroom experiences. In Study 1 (with about 2,800 U.S. students), the authors identified 4…

  4. Identifying the hazard characteristics of powder byproducts generated from semiconductor fabrication processes.

    PubMed

    Choi, Kwang-Min; An, Hee-Chul; Kim, Kwan-Sick

    2015-01-01

    Semiconductor manufacturing processes generate powder particles as byproducts which could potentially affect workers' health. The chemical composition, size, shape, and crystal structure of these powder particles were investigated by scanning electron microscopy equipped with an energy dispersive spectrometer, Fourier transform infrared spectrometry, and X-ray diffractometry. The powders generated in the diffusion and chemical mechanical polishing processes were amorphous silica. The particles from the chemical vapor deposition (CVD) and etch processes were TiO2 and Al2O3, and Al2O3, respectively. As for metallization, WO3, TiO2, and Al2O3 particles were generated from equipment used for tungsten and barrier metal (TiN) operations. In photolithography, the powder particles were 1-10 μm in size and spherical in shape. In addition, the powders generated from the high-current and medium-current ion implantation processes included arsenic (As), whereas the high-energy process did not. For all samples collected using a personal air sampler during preventive maintenance of process equipment, the total mass of airborne particles was < 1 μg, the detection limit of the microbalance. In addition, the mean mass concentrations of airborne PM10 (particles less than 10 μm in diameter), measured by area sampling with a direct-reading aerosol monitor, were between 0.00 and 0.02 μg/m3. Although the exposure concentration of airborne particles during preventive maintenance is extremely low, it is necessary to make continuous improvements to the process and work environment, because the influence of chronic low-level exposure cannot be excluded. PMID:25192369

  5. Identifying temporal and causal contributions of neural processes underlying the Implicit Association Test (IAT)

    PubMed Central

    Forbes, Chad E.; Cameron, Katherine A.; Grafman, Jordan; Barbey, Aron; Solomon, Jeffrey; Ritter, Walter; Ruchkin, Daniel S.

    2012-01-01

    The Implicit Association Test (IAT) is a popular behavioral measure that assesses the associative strength between outgroup members and stereotypical and counterstereotypical traits. Less is known, however, about the degree to which the IAT reflects automatic processing. Two studies examined automatic processing contributions to a gender-IAT using a data-driven, social neuroscience approach. Performance on congruent (e.g., categorizing male names with synonyms of strength) and incongruent (e.g., categorizing female names with synonyms of strength) IAT blocks was separately analyzed using EEG (event-related potentials, or ERPs, and coherence; Study 1) and lesion (Study 2) methodologies. Compared to incongruent blocks, performance on congruent IAT blocks was associated with more positive ERPs that manifested in frontal and occipital regions at automatic processing speeds, and in occipital regions at more controlled processing speeds, and was compromised by volume loss in the anterior temporal lobe (ATL), insula and medial PFC. Performance on incongruent blocks was associated with volume loss in supplementary motor areas, cingulate gyrus and a region in medial PFC similar to that found for congruent blocks. Greater coherence was found between frontal and occipital regions to the extent that individuals exhibited more bias. This suggests that there are separable neural contributions to congruent and incongruent blocks of the IAT, but there is also a surprising amount of overlap. Given the temporal and regional neural distinctions, these results provide converging evidence that stereotypic associative strength assessed by the IAT indexes automatic processing to a degree. PMID:23226123

  6. Accessing spoilage features of osmotolerant yeasts identified from kiwifruit plantation and processing environment in Shaanxi, China.

    PubMed

    Niu, Chen; Yuan, Yahong; Hu, Zhongqiu; Wang, Zhouli; Liu, Bin; Wang, Huxuan; Yue, Tianli

    2016-09-01

    Osmotolerant yeasts originating from the kiwifruit industrial chain can cause spoilage incidents, yet little information is available about their species and spoilage features. This work identified possible spoilage osmotolerant yeasts from orchards and a manufacturer (a quick-freeze kiwifruit manufacturer) in the main producing areas of Shaanxi, China, and further characterized their spoilage features. A total of 86 osmotolerant isolates spanning 29 species were identified through 26S rDNA sequencing at the D1/D2 domain, among which Hanseniaspora uvarum occurred most frequently and had an intimate relationship with kiwifruit. RAPD analysis indicated a high variability of this species across sampling regions. The correlation of genotypes with origins was established except for isolates from Zhouzhi orchards, and the movement of H. uvarum from orchard to manufacturer can be inferred and contributed to spoilage sourcing. The manufacturing environment favored the inhabitance of osmotolerant yeasts more than the orchard, giving a higher positive sample ratio and osmotolerant yeast ratio. The growth curves under various glucose levels were fitted with the grofit R package, and the obtained growth parameters indicated phenotypic diversity in H. uvarum and the remaining species. Wickerhamomyces anomalus (OM14) and Candida glabrata (OZ17) were the most glucose-tolerant species, and the availability of high glucose would assist them in producing more gas. The tested osmotolerant species were odor-altering in kiwifruit concentrate juice. 3-Methyl-1-butanol, phenylethyl alcohol, phenylethyl acetate, 5-hydroxymethylfurfural (5-HMF) and ethyl acetate were the most altered compounds identified by GC/MS in the juice. In particular, W. anomalus produced 4-vinylguaiacol and M. guilliermondii produced 4-ethylguaiacol, which would imperil product acceptance. The study determines the target spoilers and offers detailed spoilage features, which will be instructive in implementing preventative
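The growth-parameter extraction mentioned in this record (grofit fits a modified logistic with asymptote A, maximum growth rate μ, and lag time λ) can be sketched in Python with scipy rather than R. The readings below are synthetic, not the study's data, and the parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_logistic(t, A, mu, lam):
    """Zwietering modified logistic growth model (the form grofit uses):
    A = asymptotic growth, mu = maximum growth rate, lam = lag phase."""
    return A / (1.0 + np.exp(4.0 * mu / A * (lam - t) + 2.0))

# Synthetic optical-density-style readings for one isolate at one glucose level
t = np.linspace(0, 48, 25)
rng = np.random.default_rng(1)
obs = modified_logistic(t, 1.8, 0.25, 6.0) + rng.normal(0, 0.02, t.size)

# Recover the growth parameters by nonlinear least squares
(A_fit, mu_fit, lam_fit), _ = curve_fit(modified_logistic, t, obs,
                                        p0=[1.5, 0.2, 5.0])
print(f"A={A_fit:.2f}  mu={mu_fit:.2f}  lam={lam_fit:.1f}")
```

Comparing fitted (A, μ, λ) triples across isolates and glucose levels is what makes the phenotypic-diversity comparison in the abstract quantitative.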

  7. Identifying the Neural Correlates Underlying Social Pain: Implications for Developmental Processes

    ERIC Educational Resources Information Center

    Eisenberger, Naomi I.

    2006-01-01

    Although the need for social connection is critical for early social development as well as for psychological well-being throughout the lifespan, relatively little is known about the neural processes involved in maintaining social connections. The following review summarizes what is known regarding the neural correlates underlying feeling of…

  8. Identifying Process Variables for a Low Atmospheric Pressure Stunning/Killing System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Current systems for pre-slaughter gas stunning/killing of broilers use process gases such as carbon dioxide, argon, or a mixture of these gases with air or oxygen. Both carbon dioxide and argon work by displacing oxygen to induce hypoxia in the bird, leading to unconsciousness and ultimately death....

  9. Stress test: identifying crowding stress-tolerant hybrids in processing sweet corn

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improvement in tolerance to intense competition at high plant populations (i.e. crowding stress) is a major genetic driver of corn yield gain over the last half-century. Recent research found differences in crowding stress tolerance among a few modern processing sweet corn hybrids; however, a larger asse...

  10. Sociometric Effects in Small Classroom Groups Using Curricula Identified as Process-Oriented.

    ERIC Educational Resources Information Center

    Nickse, Ruth S.; Ripple, Richard E.

    This study was an attempt to document aspects of small group work in classrooms engaged in the process education curriculum called "Materials and Activities for Teachers and Children" (MATCH). Data on student-student interaction related to small group work were gathered by paper-and-pencil sociometric questionnaires and measures of group…

  11. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... service delivery programs or Web sites in order to provide covered persons with timely and useful... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority of Service § 1010.300 What processes are to be implemented...

  12. A national effort to identify fry processing clones with low acrylamide-forming potential

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Acrylamide is a suspected human carcinogen. Processed potato products, such as chips and fries, contribute to dietary intake of acrylamide. One of the most promising approaches to reducing acrylamide consumption is to develop and commercialize new potato varieties with low acrylamide-forming potenti...

  13. Identify in a Canadian Urban Community. A Process Report of the Brunskill Subproject. Project Canada West.

    ERIC Educational Resources Information Center

    Burke, M.; And Others

    The purpose of this subproject is to guide students to meet and interact with individuals from the many subcultures in a community (see ED 055 011). This progress report of the second year's activities includes information on the process of curriculum development, the materials developed, evaluation, roles of supporting agencies, behavioral…

  14. ROBUSTNESS OF THE CSSX PROCESS TO FEED VARIATION: EFFICIENT CESIUM REMOVAL FROM THE HIGH POTASSIUM WASTES AT HANFORD

    SciTech Connect

    Delmau, Laetitia Helene; Birdwell Jr, Joseph F; McFarlane, Joanna; Moyer, Bruce A

    2010-01-01

    This contribution finds the Caustic-Side Solvent Extraction (CSSX) process to be effective for the removal of cesium from the Hanford tank-waste supernatant solutions. The Hanford waste types are more challenging than those at the Savannah River Site (SRS) in that they contain significantly higher levels of potassium, the chief competing ion in the extraction of cesium. By use of a computerized CSSX thermodynamic model, it was calculated that the higher levels of potassium depress the cesium distribution ratio (D{sub Cs}), as validated to within ±11% by the measurement of D{sub Cs} values on various Hanford waste-simulant compositions. A simple analog model equation that can be readily applied in a spreadsheet for estimating the D{sub Cs} values for the varying waste compositions was developed and shown to yield estimates nearly identical to those of the computerized CSSX model. It is concluded from the batch distribution experiments, the physical-property measurements, the equilibrium modeling, the flowsheet calculations, and the contactor sizing that the CSSX process as currently formulated for cesium removal from alkaline salt waste at the SRS is capable of treating similar Hanford tank feeds, albeit with more stages. For the most challenging Hanford waste composition tested, 31 stages would be required to provide a cesium decontamination factor (DF) of 5000 and a concentration factor (CF) of 2. Commercial contacting equipment with rotor diameters of 10 in. for extraction and 5 in. for stripping should have the capacity to meet throughput requirements, but testing will be required to confirm that the needed efficiency and hydraulic performance are actually obtainable. Markedly improved flowsheet performance was calculated based on experimental distribution ratios determined for an improved solvent formulation employing the more soluble cesium extractant BEHBCalixC6 used with alternative scrub and strip solutions, respectively 0.1 M NaOH and 0.010 M boric acid. The
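
The stage counts quoted above come from detailed flowsheet calculations. As a rough illustration of why a depressed cesium distribution ratio drives up the stage count, a first-order countercurrent-extraction estimate can be sketched with the textbook Kremser relation; the extraction-factor value below is illustrative only, and a real design must also account for scrub/strip sections and stage efficiency.

```python
import math

def kremser_stages(DF, E):
    """Minimum countercurrent equilibrium stages to reach decontamination
    factor DF at extraction factor E = D_Cs * (organic/aqueous flow ratio).
    Kremser relation: DF = (E**(N + 1) - 1) / (E - 1), solved for N."""
    if E <= 1.0:
        raise ValueError("extraction factor must exceed 1 for a finite cascade")
    n = math.log(DF * (E - 1.0) + 1.0) / math.log(E) - 1.0
    return math.ceil(n)

# A modest extraction factor (depressed D_Cs from high potassium)
# forces a long cascade for DF = 5000:
stages = kremser_stages(5000.0, 1.4)
```

Raising E, for example with a more soluble extractant as in the improved solvent formulation above, shortens the cascade sharply.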

  15. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  16. Deep Proteome Analysis Identifies Age-Related Processes in C. elegans.

    PubMed

    Narayan, Vikram; Ly, Tony; Pourkarimi, Ehsan; Murillo, Alejandro Brenes; Gartner, Anton; Lamond, Angus I; Kenyon, Cynthia

    2016-08-01

    Effective network analysis of protein data requires high-quality proteomic datasets. Here, we report a near doubling in coverage of the C. elegans adult proteome, identifying >11,000 proteins in total with ∼9,400 proteins reproducibly detected in three biological replicates. Using quantitative mass spectrometry, we identify proteins whose abundances vary with age, revealing a concerted downregulation of proteins involved in specific metabolic pathways and upregulation of cellular stress responses with advancing age. Among these are ∼30 peroxisomal proteins, including the PRX-5/PEX5 import protein. Functional experiments confirm that protein import into the peroxisome is compromised in vivo in old animals. We also studied the behavior of the set of age-variant proteins in chronologically age-matched, long-lived daf-2 insulin/IGF-1-pathway mutants. Unexpectedly, the levels of many of these age-variant proteins did not scale with extended lifespan. This indicates that, despite their youthful appearance and extended lifespans, not all aspects of aging are reset in these long-lived mutants. PMID:27453442

  17. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics: Preprint

    SciTech Connect

    Suh, C.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.; Biagioni, D.

    2011-07-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.
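
The diffusion-map embedding named above can be sketched in a few lines of NumPy: build a Gaussian affinity kernel over the samples, row-normalize it into a Markov matrix, and use the leading non-trivial eigenvectors as low-dimensional coordinates. The data and kernel bandwidth below are synthetic placeholders for the AZO process-history features, not the paper's dataset.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Minimal diffusion-map embedding of the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / eps)                                # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]                      # skip trivial eigenvalue 1
    return vecs.real[:, idx] * vals.real[idx]            # diffusion coordinates

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))   # stand-in for per-sample process/characterization data
Y = diffusion_map(X, eps=5.0)
```

Hierarchical agglomerative clustering, as in the abstract, would then operate on these diffusion coordinates rather than on the raw features.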

  18. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics

    SciTech Connect

    Suh, C.; Biagioni, D.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.

    2011-01-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuIn{sub x}Ga{sub 1-x}Se{sub 2} (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.

  19. Octopaminergic Modulation of Temporal Frequency Coding in an Identified Optic Flow-Processing Interneuron

    PubMed Central

    Longden, Kit D.; Krapp, Holger G.

    2010-01-01

    Flying generates predictably different patterns of optic flow compared with other locomotor states. A sensorimotor system tuned to rapid responses and a high bandwidth of optic flow would help the animal to avoid wasting energy through imprecise motor action. However, neural processing that covers a higher input bandwidth itself comes at higher energetic costs, which would be a poor investment when the animal is not flying. How does the blowfly adjust the dynamic range of its optic flow-processing neurons to the locomotor state? Octopamine (OA) is a biogenic amine central to the initiation and maintenance of flight in insects. We used an OA agonist, chlordimeform (CDM), to simulate the widespread OA release during flight and recorded the effects on the temporal frequency coding of the H2 cell. This cell is a visual interneuron known to be involved in flight stabilization reflexes. The application of CDM resulted in (i) an increase in the cell's spontaneous activity, expanding the inhibitory signaling range; (ii) an initial response gain to moving gratings (20–60 ms post-stimulus) that depended on the temporal frequency of the grating; and (iii) a reduction in the rate and magnitude of motion adaptation that was also temporal frequency-dependent. To our knowledge, this is the first demonstration that the application of a neuromodulator can induce velocity-dependent alterations in the gain of a wide-field optic flow-processing neuron. The observed changes in the cell's response properties resulted in a 33% increase in the cell's information rate when encoding random changes in temporal frequency of the stimulus. The increased signaling range and more rapid, longer-lasting responses employed more spikes to encode each bit, and so consumed a greater amount of energy. It appears that for the fly, investing more energy in sensory processing during flight is more efficient than wasting energy on under-performing motor control. PMID:21152339

  20. Identifying scale-emergent, nonlinear, asynchronous processes of wetland methane exchange

    NASA Astrophysics Data System (ADS)

    Sturtevant, Cove; Ruddell, Benjamin L.; Knox, Sara Helen; Verfaillie, Joseph; Matthes, Jaclyn Hatala; Oikawa, Patricia Y.; Baldocchi, Dennis

    2016-01-01

    Methane (CH4) exchange in wetlands is complex, involving nonlinear asynchronous processes across diverse time scales. These processes and time scales are poorly characterized at the whole-ecosystem level, yet are crucial for accurate representation of CH4 exchange in process models. We used a combination of wavelet analysis and information theory to analyze interactions between whole-ecosystem CH4 flux and biophysical drivers in two restored wetlands of Northern California from hourly to seasonal time scales, explicitly questioning assumptions of linear, synchronous, single-scale analysis. Although seasonal variability in CH4 exchange was dominantly and synchronously controlled by soil temperature, water table fluctuations, and plant activity were important synchronous and asynchronous controls at shorter time scales that propagated to the seasonal scale. Intermittent, subsurface water table decline promoted short-term pulses of methane emission but ultimately decreased seasonal CH4 emission through subsequent inhibition after rewetting. Methane efflux also shared information with evapotranspiration from hourly to multiday scales and the strength and timing of hourly and diel interactions suggested the strong importance of internal gas transport in regulating short-term emission. Traditional linear correlation analysis was generally capable of capturing the major diel and seasonal relationships, but mesoscale, asynchronous interactions and nonlinear, cross-scale effects were unresolved yet important for a deeper understanding of methane flux dynamics. We encourage wider use of these methods to aid interpretation and modeling of long-term continuous measurements of trace gas and energy exchange.
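
One information-theoretic ingredient of the analysis described above, detecting asynchronous (lagged) coupling between a biophysical driver and CH4 flux, can be sketched with a binned mutual-information estimate. This is only a sketch of the idea: the study's actual method combined wavelet decomposition with such measures across time scales, and the series below are synthetic.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information I(X; Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def lagged_mi(driver, flux, max_lag):
    """I(driver(t - lag); flux(t)): a peak at lag > 0 flags a delayed,
    asynchronous interaction rather than a synchronous one."""
    return [mutual_info(driver, flux) if lag == 0
            else mutual_info(driver[:-lag], flux[lag:])
            for lag in range(max_lag + 1)]

rng = np.random.default_rng(2)
driver = rng.normal(size=5000)                            # e.g. water-table series
flux = np.roll(driver, 3) + 0.3 * rng.normal(size=5000)   # flux responds 3 steps late
mi = lagged_mi(driver, flux, max_lag=6)
best_lag = int(np.argmax(mi))
```

Unlike linear cross-correlation, this estimator also captures nonlinear dependence, which is why such measures can resolve interactions that the traditional correlation analysis mentioned above misses.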

  1. Cholinesterase-Targeting microRNAs Identified in silico Affect Specific Biological Processes

    PubMed Central

    Hanin, Geula; Soreq, Hermona

    2011-01-01

    MicroRNAs (miRs) have emerged as important gene silencers affecting many target mRNAs. Here, we report the identification of 244 miRs that target the 3′-untranslated regions of different cholinesterase transcripts: 116 for butyrylcholinesterase (BChE), 47 for the synaptic acetylcholinesterase (AChE-S) splice variant, and 81 for the normally rare splice variant AChE-R. Of these, 11 and 6 miRs target both AChE-S and AChE-R, and AChE-R and BChE transcripts, respectively. BChE and AChE-S showed no overlapping miRs, attesting to their distinct modes of miR regulation. Generally, miRs can suppress a number of targets, thereby controlling an entire battery of functions. To evaluate the importance of the cholinesterase-targeted miRs in other specific biological processes, we searched for their other experimentally validated target transcripts and analyzed the gene-ontology-enriched biological processes these transcripts are involved in. Interestingly, a number of the resulting categories are also related to cholinesterases. They include, for BChE, response to glucocorticoid stimulus, and for AChE, response to wounding and two child terms of neuron development: regulation of axonogenesis and regulation of dendrite morphogenesis. Importantly, all of the AChE-targeting miRs found to be related to these selected processes were directed against the normally rare AChE-R splice variant, with three of them, including the neurogenesis regulator miR-132, also directed against AChE-S. Our findings point at the AChE-R splice variant as particularly susceptible to miR regulation, highlight those biological functions of cholinesterases that are likely to be subject to miR post-transcriptional control, demonstrate the selectivity of miRs in regulating specific biological processes, and open new avenues for targeted interference with these specific processes. PMID:22007158
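
The gene-ontology enrichment step described above is conventionally a one-sided hypergeometric (Fisher) test: how surprising is the overlap between a target list and a GO category? A minimal sketch follows; all counts are illustrative, not taken from the study.

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_universe, n_category, n_targets, n_overlap):
    """P(overlap >= n_overlap) when n_targets genes are drawn at random
    from a universe of n_universe genes, n_category of which carry the
    GO term: the survival function of the hypergeometric distribution."""
    return float(hypergeom.sf(n_overlap - 1, n_universe, n_category, n_targets))

# Illustrative: 12 of 300 validated miR targets carry a GO term held by
# 150 of 20,000 genes -- far more than the ~2.25 expected by chance.
p = go_enrichment_p(n_universe=20000, n_category=150, n_targets=300, n_overlap=12)
```

In practice such p-values are corrected for testing many GO terms at once (e.g. Benjamini-Hochberg).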

  2. Comparative assessment of genomic DNA extraction processes for Plasmodium: Identifying the appropriate method.

    PubMed

    Mann, Riti; Sharma, Supriya; Mishra, Neelima; Valecha, Neena; Anvikar, Anupkumar R

    2015-12-01

    Plasmodium DNA, in addition to being used for the molecular diagnosis of malaria, finds utility in monitoring patient responses to antimalarial drugs, drug-resistance studies, genotyping, and sequencing. Over the years, numerous protocols have been proposed for extracting Plasmodium DNA from a variety of sources. Given that DNA isolation is fundamental to successful molecular studies, here we review the most commonly used methods for Plasmodium genomic DNA isolation, emphasizing their pros and cons. A comparison of these existing methods has been made to evaluate their appropriateness for use in different applications and to identify the method suitable for a particular laboratory-based study. Selection of a suitable and accessible DNA extraction method for Plasmodium requires consideration of many factors, the most important being sensitivity, cost-effectiveness, and the purity and stability of the isolated DNA. The need of the hour is to focus on the development of a method that performs well on all of these parameters. PMID:26714505

  3. Identifying Human Disease Genes through Cross-Species Gene Mapping of Evolutionary Conserved Processes

    PubMed Central

    Poot, Martin; Badea, Alexandra; Williams, Robert W.; Kas, Martien J.

    2011-01-01

    Background Understanding complex networks that modulate development in humans is hampered by genetic and phenotypic heterogeneity within and between populations. Here we present a method that exploits natural variation in highly diverse mouse genetic reference panels in which genetic and environmental factors can be tightly controlled. The aim of our study is to test a cross-species genetic mapping strategy, which compares data of gene mapping in human patients with functional data obtained by QTL mapping in recombinant inbred mouse strains in order to prioritize human disease candidate genes. Methodology We exploit evolutionary conservation of developmental phenotypes to discover gene variants that influence brain development in humans. We studied corpus callosum volume in a recombinant inbred mouse panel (C57BL/6J×DBA/2J, BXD strains) using high-field strength MRI technology. We aligned mouse mapping results for this neuro-anatomical phenotype with genetic data from patients with abnormal corpus callosum (ACC) development. Principal Findings From the 61 syndromes which involve an ACC, 51 human candidate genes have been identified. Through interval mapping, we identified a single significant QTL on mouse chromosome 7 for corpus callosum volume with a QTL peak located between 25.5 and 26.7 Mb. Comparing the genes in this mouse QTL region with those associated with human syndromes (involving ACC) and those covered by copy number variations (CNV) yielded a single overlap, namely HNRPU in humans and Hnrpul1 in mice. Further analysis of corpus callosum volume in BXD strains revealed that the corpus callosum was significantly larger in BXD mice with a B genotype at the Hnrpul1 locus than in BXD mice with a D genotype at Hnrpul1 (F = 22.48, p < 9.87 × 10^−5). Conclusion This approach that exploits highly diverse mouse strains provides an efficient and effective translational bridge to study the etiology of human developmental disorders, such as autism and schizophrenia.

  4. The June 2014 eruption at Piton de la Fournaise: Robust methods developed for monitoring challenging eruptive processes

    NASA Astrophysics Data System (ADS)

    Villeneuve, N.; Ferrazzini, V.; Di Muro, A.; Peltier, A.; Beauducel, F.; Roult, G. C.; Lecocq, T.; Brenguier, F.; Vlastelic, I.; Gurioli, L.; Guyard, S.; Catry, T.; Froger, J. L.; Coppola, D.; Harris, A. J. L.; Favalli, M.; Aiuppa, A.; Liuzzo, M.; Giudice, G.; Boissier, P.; Brunet, C.; Catherine, P.; Fontaine, F. J.; Henriette, L.; Lauret, F.; Riviere, A.; Kowalski, P.

    2014-12-01

    After almost 3.5 years of quiescence, Piton de la Fournaise (PdF) produced a small summit eruption on 20 June 2014 at 21:35 (GMT). The eruption lasted 20 hours and was preceded by: i) onset of deep eccentric seismicity (15-20 km bsl; 9 km NW of the volcano summit) in March and April 2014; ii) enhanced CO2 soil flux along the NW rift zone; iii) an increase in the number and energy of shallow (<1.5 km asl) VT events. The increase in VT events occurred on 9 June. Their signature and shallow location were not characteristic of an eruptive crisis. However, at 20:06 on 20/06 their character changed, 74 minutes before the onset of tremor. Deformation then began at 20:20. Since 2007, PdF has emitted small magma volumes (<3 Mm3) in events preceded by weak and short precursory phases. To respond to this challenging activity style, new monitoring methods were deployed at OVPF. While the JERK and MSNoise methods were developed for the processing of seismic data, borehole tiltmeters and permanent monitoring of summit gas emissions, plus CO2 soil flux, were used to track precursory activity. JERK, based on an analysis of the acceleration slope of broad-band seismometer data, gave advance notice of the new eruption by 50 minutes. MSNoise, based on seismic velocity determination, showed a significant decrease 7 days before the eruption. These signals were coupled with changes in summit fumarole composition. Remote sensing allowed the following syn-eruptive observations: - INSAR confirmed measurements made by the OVPF geodetic network, showing that deformation was localized around the eruptive fissures; - A SPOT5 image acquired at 05:41 on 21/06 allowed definition of the flow field area (194 500 m2); - A MODIS image acquired at 06:35 on 21/06 gave a lava discharge rate of 6.9±2.8 m3 s-1, implying an erupted volume of between 0.3 and 0.4 Mm3. - This rate was used with the DOWNFLOW and FLOWGO models, calibrated with the textural data from Piton's 2010 lava, to run lava flow

  5. Establishment of a Cost-Effective and Robust Planning Basis for the Processing of M-91 Waste at the Hanford Site

    SciTech Connect

    Johnson, Wayne L.; Parker, Brian M.

    2004-07-30

    This report identifies and evaluates viable alternatives for the accelerated processing of Hanford Site transuranic (TRU) and mixed low-level wastes (MLLW) that cannot be processed using existing site capabilities. Accelerated processing of these waste streams will lead to earlier reduction of risk and considerable life-cycle cost savings. The processing need is to handle both oversized MLLW and TRU containers as well as containers with surface contact dose rates greater than 200 mrem/hr. This capability is known as the ''M-91'' processing capability required by the Tri-Party Agreement milestone M-91-01. The new, phased approach proposed in this evaluation would use a combination of existing and planned processing capabilities to treat and more easily manage contact-handled waste streams first and would provide for earlier processing of these wastes.

  6. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  7. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGESBeta

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying -Ping

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.
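
Evaluating a model "during seasonal-scale rainfall deficits", as above, amounts to conditioning the error metric on a drought mask. A minimal sketch under illustrative assumptions (a trailing 90-day window, lowest-decile threshold, and synthetic series rather than CABLE or flux-tower data): flag deficit days, then compute the latent-heat bias only inside them.

```python
import numpy as np

def deficit_mask(precip, window=90, quantile=0.1):
    """Flag days whose trailing `window`-day rainfall total falls below
    the given quantile of all such totals (a seasonal-deficit proxy)."""
    c = np.cumsum(np.insert(precip, 0, 0.0))
    totals = c[window:] - c[:-window]                # trailing-window rainfall sums
    mask = np.zeros(precip.shape, dtype=bool)
    mask[window - 1:] = totals < np.quantile(totals, quantile)
    return mask

def deficit_bias(le_model, le_obs, mask):
    """Mean latent-heat bias restricted to deficit periods."""
    return float((le_model[mask] - le_obs[mask]).mean())

t = np.arange(4 * 365)
precip = 1.0 + np.sin(2.0 * np.pi * t / 365.0)      # synthetic seasonal rainfall
le_obs = 50.0 * precip                              # observed LE declines when dry
le_mod = np.full_like(le_obs, 50.0)                 # a model that ignores the deficit
bias = deficit_bias(le_mod, le_obs, deficit_mask(precip))
```

A model can score well on annual-mean metrics yet carry a large conditional bias here, which is exactly the failure mode the study reports for the standard model.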

  8. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be reluctant to this idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  9. Unsupervised image processing scheme for transistor photon emission analysis in order to identify defect location

    NASA Astrophysics Data System (ADS)

    Chef, Samuel; Jacquir, Sabir; Sanchez, Kevin; Perdu, Philippe; Binczak, Stéphane

    2015-01-01

    The study of the light emitted by transistors in a highly scaled complementary metal oxide semiconductor (CMOS) integrated circuit (IC) has become a key method with which to analyze faulty devices, track the failure root cause, and obtain candidate locations at which to start the physical analysis. The localization of defective areas in an IC serves as a reliability check and gives the designer information with which to improve the IC design. The scaling of CMOS leads to an increase in the number of active nodes inside the acquisition area and to greater differences between spot intensities. In order to improve the identification of all of the photon emission spots, we introduce an unsupervised processing scheme. It is based on iterative thresholding decomposition (ITD) and mathematical morphology operations. It unveils all of the emission spots and removes most of the noise from the database through a succession of image-processing operations. The ITD approach, based on five thresholding methods, is tested on 15 photon emission databases (10 real cases and 5 simulated cases). The localization of the photon emission areas is compared to expert identification, and the estimation quality is quantified using the object consistency error.
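
The ITD-plus-morphology idea can be sketched as follows: decompose the image with a ladder of thresholds, clean each binary slice with a morphological opening and a minimum-size filter, and merge the surviving pixels into spot candidates. This is a simplified stand-in (a single linear threshold ladder rather than the paper's five thresholding methods), run here on a synthetic emission image.

```python
import numpy as np
from scipy import ndimage

def detect_spots(img, n_levels=8, min_size=4):
    """Iterative-thresholding spot-detection sketch."""
    spots = np.zeros(img.shape, dtype=bool)
    levels = np.linspace(img.min(), img.max(), n_levels + 2)[1:-1]
    for th in levels:                                  # threshold ladder
        cleaned = ndimage.binary_opening(img > th)     # opening removes speckle noise
        lab, n = ndimage.label(cleaned)
        sizes = ndimage.sum(cleaned, lab, np.arange(1, n + 1))
        keep = np.isin(lab, 1 + np.flatnonzero(sizes >= min_size))
        spots |= keep                                  # pixels surviving any level
    return spots, ndimage.label(spots)[1]

# Synthetic image: dim background noise plus three bright emission spots
rng = np.random.default_rng(3)
img = rng.uniform(0.0, 0.1, (64, 64))
for r, c in [(10, 10), (30, 40), (50, 20)]:
    img[r:r + 5, c:c + 5] = 1.0
spots, n_spots = detect_spots(img)
```

Sweeping several thresholds, rather than picking one, is what lets dim and bright spots of very different intensity survive into the same candidate map.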

  10. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A.; Pitman, A.; Decker, M. R.; De Kauwe, M. G.; Abramowitz, G.; Wang, Y.; Kala, J.

    2015-12-01

    Surface fluxes from land surface models (LSM) have traditionally been evaluated against monthly, seasonal or annual mean states. Previous studies have noted the limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions, but very few studies have systematically evaluated LSMs during rainfall deficits. We investigate the performance of the Community Atmosphere Biosphere Land Exchange (CABLE) LSM in simulating latent heat fluxes in offline mode. CABLE is evaluated against eddy covariance measurements of latent heat flux across 20 flux tower sites at sub-annual to inter-annual time scales, with a focus on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, soil properties, leaf area index and stomatal conductance. We demonstrate the critical role of hydrological processes for capturing observed declines in latent heat. The effects of soil, LAI and stomatal conductance are shown to be highly site-specific. The default CABLE performs reasonably well at annual scales despite grossly underestimating latent heat during rainfall deficits, highlighting the importance of evaluating models explicitly under water-stressed conditions across multiple vegetation and climate regimes. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining deficiencies point to future research needs.

  11. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A. M.; Pitman, A. J.; Decker, M.; De Kauwe, M. G.; Abramowitz, G.; Kala, J.; Wang, Y.-P.

    2015-10-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat flux simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual time scales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  12. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  13. Joint-specific DNA methylation and transcriptome signatures in rheumatoid arthritis identify distinct pathogenic processes.

    PubMed

    Ai, Rizi; Hammaker, Deepa; Boyle, David L; Morgan, Rachel; Walsh, Alice M; Fan, Shicai; Firestein, Gary S; Wang, Wei

    2016-01-01

    Stratifying patients on the basis of molecular signatures could facilitate development of therapeutics that target pathways specific to a particular disease or tissue location. Previous studies suggest that pathogenesis of rheumatoid arthritis (RA) is similar in all affected joints. Here we show that distinct DNA methylation and transcriptome signatures not only discriminate RA fibroblast-like synoviocytes (FLS) from osteoarthritis FLS, but also distinguish RA FLS isolated from knees and hips. Using genome-wide methods, we show differences between RA knee and hip FLS in the methylation of genes encoding biological pathways, such as IL-6 signalling via JAK-STAT pathway. Furthermore, differentially expressed genes are identified between knee and hip FLS using RNA-sequencing. Double-evidenced genes that are both differentially methylated and expressed include multiple HOX genes. Joint-specific DNA signatures suggest that RA disease mechanisms might vary from joint to joint, thus potentially explaining some of the diversity of drug responses in RA patients. PMID:27282753

  14. Joint-specific DNA methylation and transcriptome signatures in rheumatoid arthritis identify distinct pathogenic processes

    PubMed Central

    Ai, Rizi; Hammaker, Deepa; Boyle, David L.; Morgan, Rachel; Walsh, Alice M.; Fan, Shicai; Firestein, Gary S.; Wang, Wei

    2016-01-01

    Stratifying patients on the basis of molecular signatures could facilitate development of therapeutics that target pathways specific to a particular disease or tissue location. Previous studies suggest that pathogenesis of rheumatoid arthritis (RA) is similar in all affected joints. Here we show that distinct DNA methylation and transcriptome signatures not only discriminate RA fibroblast-like synoviocytes (FLS) from osteoarthritis FLS, but also distinguish RA FLS isolated from knees and hips. Using genome-wide methods, we show differences between RA knee and hip FLS in the methylation of genes encoding biological pathways, such as IL-6 signalling via JAK-STAT pathway. Furthermore, differentially expressed genes are identified between knee and hip FLS using RNA-sequencing. Double-evidenced genes that are both differentially methylated and expressed include multiple HOX genes. Joint-specific DNA signatures suggest that RA disease mechanisms might vary from joint to joint, thus potentially explaining some of the diversity of drug responses in RA patients. PMID:27282753

  15. ESL Teachers' Perceptions of the Process for Identifying Adolescent Latino English Language Learners with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Ferlis, Emily C.

    2012-01-01

    This dissertation examines the question "how do ESL teachers perceive the prereferral process for identifying adolescent Latino English language learners with specific learning disabilities?" The study fits within the Latino Critical Race Theory framework and employs an interpretive phenomenological qualitative research approach.…

  16. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…
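    The mechanics of such an exercise can be sketched with an individuals (X) control chart, using the standard moving-range limits of mean ± 2.66·MRbar. The home-run rates below are hypothetical placeholders for illustration, not actual MLB statistics:

```python
def control_limits(values):
    """Center line and control limits for an individuals (X) chart,
    estimated from the average moving range: mean +/- 2.66 * MRbar."""
    mean = sum(values) / len(values)
    mrs = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    mr_bar = sum(mrs) / len(mrs)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def out_of_control(values):
    """Indices of observations falling outside the control limits."""
    _, lcl, ucl = control_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical league-wide home runs per game by season; the last
# seasons shift upward, mimicking a change in the underlying process:
rates = [1.44, 1.47, 1.51, 1.49, 1.46, 1.78, 1.82, 1.85]
flagged = out_of_control(rates)
```

    Seasons flagged above the upper control limit would correspond to years whose offensive output is inconsistent with the earlier, in-control process. (A full analysis would estimate the limits from a baseline period only; this sketch pools all seasons.)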

  17. Identifying sources and processes influencing nitrogen export to a small stream using dual isotopes of nitrate

    NASA Astrophysics Data System (ADS)

    Lohse, K. A.; Sanderman, J.; Amundson, R.

    2009-12-01

    Interactions between plant and microbial reactions exert strong controls on sources and export of nitrate to headwater streams. Yet quantifying this interaction is challenging due to spatial and temporal changes in these processes. Topography has been hypothesized to play a large role in these processes, yet few studies have coupled measurements of soil nitrogen cycling to hydrologic losses of N. In water-limited environments such as Mediterranean grasslands, we hypothesized that seasonal shifts in runoff mechanisms and flow paths would change stream water sources of nitrate from deep subsoil sources to near-surface sources. In theory, these changes can be quantified using mixing models and dual isotopes of nitrate. We examined the temporal patterns of N stream export using hydrometric methods and dual isotopes of nitrate in a small headwater catchment on the coast of Northern California. A plot of stream water 15N-nitrate and 18O-nitrate against the known isotopic values of nitrate in rainwater, fertilizer, and soil N confirmed that the nitrate was primarily microbial nitrate. Plots of 15N-nitrate against the inverse of nitrate concentration, as well as the log of nitrate concentration, indicated both mixing and fractionation via denitrification. Further analysis of soil water 15N-nitrate and 18O-nitrate revealed two denitrification vectors for surface and subsurface soil waters (slopes of 0.50 ±0.1) that constrained the stream water 15N- and 18O-nitrate values, indicating mixing of two soil water sources. Analysis of mixing models showed shifts in surface and subsurface soil water nitrate sources to stream water, along with progressive denitrification over the course of the season.

  18. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    PubMed Central

    Hurlbert, Allen H.; Stegen, James C.

    2014-01-01

    We use a simulation model to examine four of the most common hypotheses for the latitudinal richness gradient and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The ecological limits hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The speciation rates hypothesis suggests that the latitudinal gradient arises from a gradient in speciation rates. (4) Finally, the tropical stability hypothesis argues that climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance (MRD) and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all produced phylogenies which were more imbalanced than expected, showed a negative relationship between MRD and richness, and diversity-dependence of speciation rate estimates through time. We found that the relationship between speciation rates and latitude could distinguish among these three scenarios, with no relation expected under the ecological limits hypothesis, a negative relationship expected under the speciation rates hypothesis, and a positive relationship expected under the tropical stability hypothesis. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with multiple hypotheses.
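    One of the diagnostics above, mean root distance (MRD), can be computed from a simple parent map of a phylogeny. A minimal sketch on a hypothetical five-node tree (not the authors' simulation model):

```python
def mean_root_distance(parent):
    """Mean root distance (MRD): the average number of branching events
    between each tip and the root. `parent` maps node -> parent name,
    with the root mapping to None; tips are nodes that never appear as
    a parent."""
    tips = [n for n in parent if n not in parent.values()]

    def depth(n):
        d = 0
        while parent[n] is not None:
            n = parent[n]
            d += 1
        return d

    return sum(depth(t) for t in tips) / len(tips)

# Hypothetical tree: root r with children a (internal) and b (tip);
# a has tips t1 and t2.  Depths: t1 = 2, t2 = 2, b = 1 -> MRD = 5/3.
tree = {"t1": "a", "t2": "a", "a": "r", "b": "r", "r": None}
mrd = mean_root_distance(tree)
```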

  19. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models and performed the corresponding experiments to validate our predictions. We address cellular robustness from the ``metabolite'' framework by using the novel concept of ``flux-sum,'' which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the metabolites essential to cell survival, and then discovered ``acclimator'' metabolites that can control cell growth. Furthermore, this concept of ``metabolite essentiality'' should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on dynamic properties of cellular metabolism.
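    The flux-sum described above can be computed directly from a metabolite's row of the stoichiometric matrix and a flux vector; under the pseudo-steady-state assumption the incoming and outgoing halves are equal. A minimal sketch with toy numbers, not a real metabolic model:

```python
def flux_sum(s_row, v):
    """Flux-sum of one metabolite: the sum of incoming (equivalently,
    outgoing) fluxes, i.e. half the total absolute turnover
    0.5 * sum_j |S_ij * v_j|, with S_ij the stoichiometric coefficient
    of the metabolite in reaction j and v_j the flux through j."""
    return 0.5 * sum(abs(s * vj) for s, vj in zip(s_row, v))

# Toy metabolite: produced by reaction 0 (coefficient +1, flux 2.0) and
# consumed by reactions 1 and 2; production balances consumption, so
# the pseudo-steady-state assumption holds:
s_row = [1.0, -1.0, -0.5]
v = [2.0, 1.5, 1.0]
phi = flux_sum(s_row, v)  # incoming 2.0 == outgoing 1.5 + 0.5
```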

  20. Identifying influential nodes in a wound healing-related network of biological processes using mean first-passage time

    NASA Astrophysics Data System (ADS)

    Arodz, Tomasz; Bonchev, Danail

    2015-02-01

    In this study we offer an approach to network physiology, which proceeds from transcriptomic data and uses gene ontology analysis to identify the biological processes most enriched at several critical time points of the wound healing process (days 0, 3 and 7). The top-ranking differentially expressed genes for each process were used to build two networks: one with all proteins regulating the transcription of the selected genes, and a second one involving the proteins from the signaling pathways that activate the transcription factors. The information from these networks is used to build a network of the most enriched processes, with undirected links weighted proportionally to the count of shared genes between the pair of processes, and directed links weighted by the count of relationships connecting genes from one process to genes from the other. In analyzing the network thus built we used an approach based on random walks that accounts for the temporal aspects of the spread of a signal in the network (mean first-passage time, MFPT). The MFPT scores allowed us to identify the most influential, as well as the most essential, biological processes, which vary with the progress of healing. Thus, the most essential process for day 0 was found to be the Wnt-receptor signaling pathway, well known for its crucial role in wound healing, while at day 3 it was the regulation of the NF-kB cascade, essential for matrix remodeling in the wound healing process. The MFPT-based scores correctly reflected the dynamics of the healing process, which is highly concentrated around several processes between day 0 and day 3 and becomes more diffuse by day 7.
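    The mean first-passage time used here can be computed for any random walk by solving a small linear system: m_t = 0 for the target node and m_i = 1 + Σ_j P_ij·m_j otherwise. A stdlib-only sketch on a generic transition matrix (the wound-healing process network itself is not reproduced here):

```python
def mfpt_to_target(P, t):
    """Mean first-passage times to node t for a Markov chain with row-
    stochastic transition matrix P, via (I - Q) m = 1 on non-target
    states, solved by Gaussian elimination with partial pivoting."""
    n = len(P)
    idx = [i for i in range(n) if i != t]
    A = [[(1.0 if i == j else 0.0) - P[idx[i]][idx[j]]
          for j in range(n - 1)] for i in range(n - 1)]
    b = [1.0] * (n - 1)
    for col in range(n - 1):                      # forward elimination
        piv = max(range(col, n - 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n - 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n - 1):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    m = [0.0] * (n - 1)
    for r in range(n - 2, -1, -1):                # back substitution
        s = sum(A[r][c] * m[c] for c in range(r + 1, n - 1))
        m[r] = (b[r] - s) / A[r][r]
    out = [0.0] * n
    for k, i in enumerate(idx):
        out[i] = m[k]
    return out

# Random walk on the path 0-1-2 with node 2 as target:
# m_1 = 1 + 0.5*m_0, m_0 = 1 + m_1  ->  m_0 = 4, m_1 = 3.
mfpt = mfpt_to_target([[0.0, 1.0, 0.0],
                       [0.5, 0.0, 0.5],
                       [0.0, 0.0, 1.0]], 2)
```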

  1. Identifying the processes underpinning anticipation and decision-making in a dynamic time-constrained task.

    PubMed

    Roca, André; Ford, Paul R; McRobert, Allistair P; Mark Williams, A

    2011-08-01

    A novel, representative task was used to examine skill-based differences in the perceptual and cognitive processes underlying performance on a dynamic, externally paced task. Skilled and less skilled soccer players were required to move and interact with life-size action sequences involving 11 versus 11 soccer situations filmed from the perspective of a central defender in soccer. The ability of participants to anticipate the intentions of their opponents and to make decisions about how they should respond was measured across two separate experiments. In Experiment 1, visual search behaviors were examined using an eye-movement registration system. In Experiment 2, retrospective verbal reports of thinking were gathered from a new sample of skilled and less skilled participants. Skilled participants were more accurate than less skilled participants at anticipating the intentions of opponents and in deciding on an appropriate course of action. The skilled players employed a search strategy involving more fixations of shorter duration, in a different sequential order, and toward more disparate and informative locations in the display compared with their less skilled counterparts. The skilled players generated a greater number of verbal report statements with a higher proportion of evaluation, prediction, and planning statements than the less skilled players, suggesting they employed more complex domain-specific memory representations to solve the task. Theoretical, methodological, and practical implications are discussed. PMID:21305386

  2. Semantic Processing to Identify Adverse Drug Event Information from Black Box Warnings

    PubMed Central

    Culbertson, Adam; Fiszman, Marcelo; Shin, Dongwook; Rindflesch, Thomas C.

    2014-01-01

    Adverse drug events account for two million combined injuries, hospitalizations, or deaths each year. Furthermore, there are few comprehensive, up-to-date, and free sources of drug information. Clinical decision support systems may significantly mitigate the number of adverse drug events. However, these systems depend on up-to-date, comprehensive, and codified data to serve as input. The DailyMed website, a resource managed by the FDA and NLM, contains all currently approved drugs. We used a semantic natural language processing approach that successfully extracted information for adverse drug events, at-risk conditions, and susceptible populations from black box warning labels on this site. The precision, recall, and F-score were 94%, 52%, and 0.67 for adverse drug events; 80%, 53%, and 0.64 for conditions; and 95%, 44%, and 0.61 for populations. Overall performance was 90% precision, 51% recall, and 0.65 F-score. Information extracted can be stored in a structured format and may support clinical decision support systems. PMID:25954348
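    The reported F-scores are simply the harmonic mean of precision and recall, which is easy to verify from the figures quoted above:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Adverse-drug-event figures from the abstract: P = 94%, R = 52%
print(round(f_score(0.94, 0.52), 2))  # prints 0.67
```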

  3. Isotopic investigations of dissolved organic N in soils identifies N mineralization as a major sink process

    NASA Astrophysics Data System (ADS)

    Wanek, Wolfgang; Prommer, Judith; Hofhansl, Florian

    2016-04-01

    Dissolved organic nitrogen (DON) is a major component of transfer processes in the global nitrogen (N) cycle, contributing to atmospheric N deposition, terrestrial N losses and aquatic N inputs. In terrestrial ecosystems several sources and sinks contribute to belowground DON pools, yet they are hard to quantify. In soils, DON is released by desorption of soil organic N and by microbial lysis. Major losses from the DON pool occur via sorption, hydrological losses and soil N mineralization. Sorption/desorption, lysis and hydrological losses are expected to exhibit no 15N fractionation, therefore allowing different DON sources to be traced. Soil N mineralization of DON has commonly been assumed to have no or only a small isotope effect of 0-4‰; however, isotope fractionation by N mineralization has rarely been measured and might be larger than anticipated. Depending on the degree of 15N fractionation by soil N mineralization, we would expect DON to become 15N-enriched relative to bulk soil N, and dissolved inorganic N (DIN; ammonium and nitrate) to become 15N-depleted relative to both bulk soil N and DON. Isotopic analyses of soil organic N, DON and DIN might therefore provide insights into the relative contributions of different source and sink processes. This study therefore aimed at a better understanding of the isotopic signatures of DON and its controls in soils. We investigated the concentration and isotopic composition of bulk soil N, DON and DIN in a wide range of sites, covering arable, grassland and forest ecosystems in Austria across an altitudinal transect. The isotopic composition of ammonium, nitrate and DON was measured in soil extracts after chemical conversion to N2O by purge-and-trap isotope ratio mass spectrometry. We found that delta15N values of DON ranged between -0.4 and 7.6‰, closely tracking the delta15N values of bulk soils. However, DON was 15N-enriched relative to bulk soil N by 1.5±1.3‰ (1 SD), and inorganic N was 15N

  4. Identifying sources and processes influencing nitrogen export to a small stream using dual isotopes of nitrate

    NASA Astrophysics Data System (ADS)

    Lohse, Kathleen A.; Sanderman, Jonathan; Amundson, Ronald

    2013-09-01

    Topography plays a critical role in controlling rates of nitrogen (N) transformation and loss to streams through its effects on reaction and transport, yet few studies have coupled measurements of soil N cycling within a catchment to hydrologic N losses and sources of those losses. We examined the processes controlling temporal patterns of stream N export using hydrometric methods and dual isotopes of nitrate (NO3-) in a small headwater catchment on the coast of Northern California. Soil nitrate pools accumulated in the hollow during the dry summer due to sustained rates of net nitrification and elevated soil moisture, and then contributed to the first flush of NO3- in macropore soil-water and stream water in the winter. Macropore soil-waters had higher concentrations of all forms of N than matrix soil-waters, especially in the hollow. A plot of stream water δ15N versus δ18O values in NO3- indicated that NO3- was primarily derived from nitrification or microbial NO3-. Further analysis revealed a mixing of two microbial NO3- sources combined with seasonal progressive denitrification. Mass balance estimates suggested microbial NO3- was consumed by denitrification when conditions of high NO3-, dissolved organic matter, and soil-water contents converged. Our study is the first to show a mixing of two sources of microbial NO3- and seasonal progressive denitrification using dual isotopes. Our observations suggest that the physical conditions in the convergent hollow are important constraints on stream N chemistry, and that shifts in runoff mechanisms and flow paths control the source and mixing of NO3- from various watershed sources.
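    Two calculations underlying this kind of dual-isotope analysis, a two-source mixing model and a Rayleigh model of progressive denitrification, can be sketched as follows. The δ15N values used here are hypothetical, chosen only for illustration:

```python
import math

def source_fraction(delta_mix, delta_a, delta_b):
    """Two-source mixing: the fraction f of source A satisfying
    delta_mix = f * delta_a + (1 - f) * delta_b."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

def rayleigh_delta(delta0, epsilon, f_remaining):
    """Rayleigh fractionation: delta15N of the residual nitrate pool
    after progressive denitrification, delta = delta0 + eps * ln(f).
    With an enrichment factor eps < 0 the residual pool becomes
    15N-enriched as f shrinks."""
    return delta0 + epsilon * math.log(f_remaining)

# Stream nitrate at delta15N = 5.0 permil, between two hypothetical
# microbial sources at 8.0 and 2.0 permil -> an equal mixture:
f = source_fraction(5.0, 8.0, 2.0)  # 0.5

# Residual nitrate after half the pool is denitrified (eps = -15 permil):
d = rayleigh_delta(3.0, -15.0, 0.5)
```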

  5. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and the order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
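    A simplified sketch of the central computation, estimating an observed order of convergence from a grid sequence and summarizing with the median rather than the mean, is shown below. This is an illustration under stated assumptions (constant refinement ratio, synthetic second-order data), not the authors' constrained-optimization implementation:

```python
import math

def observed_orders(h, f):
    """Observed convergence order from each consecutive triple of grid
    solutions, with h ordered coarse -> fine at a constant refinement
    ratio r: for f = F + C*h^p,
    p = log(|f_coarse - f_mid| / |f_mid - f_fine|) / log(r)."""
    orders = []
    for i in range(len(f) - 2):
        r = h[i] / h[i + 1]
        ratio = abs(f[i] - f[i + 1]) / abs(f[i + 1] - f[i + 2])
        orders.append(math.log(ratio) / math.log(r))
    return orders

def median(xs):
    """Median of the per-triple estimates -- the robust statistic used
    in place of the mean."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Synthetic solutions f = 1 + 3*h**2 on a halving grid sequence
# (known exact order p = 2):
h = [0.4, 0.2, 0.1, 0.05]
f = [1.0 + 3.0 * hi ** 2 for hi in h]
p_hat = median(observed_orders(h, f))
```

    In the full methodology these per-triple estimates would additionally be constrained by expert judgment (e.g. clipped to an admissible range of orders) before the robust summary is taken.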

  6. A Thermodynamic-Based Interpretation of Protein Expression Heterogeneity in Different Glioblastoma Multiforme Tumors Identifies Tumor-Specific Unbalanced Processes.

    PubMed

    Kravchenko-Balasha, Nataly; Johnson, Hannah; White, Forest M; Heath, James R; Levine, R D

    2016-07-01

    We describe a thermodynamic-motivated, information theoretic analysis of proteomic data collected from a series of 8 glioblastoma multiforme (GBM) tumors. GBMs are considered here as prototypes of heterogeneous cancers. That heterogeneity is viewed here as manifesting in different unbalanced biological processes that are associated with thermodynamic-like constraints. The analysis yields a molecular description of a stable steady state that is common across all tumors. It also resolves molecular descriptions of unbalanced processes that are shared by several tumors, such as hyperactivated phosphoprotein signaling networks. Further, it resolves unbalanced processes that provide unique classifiers of tumor subgroups. The results of the theoretical interpretation are compared against those of statistical multivariate methods and are shown to provide a superior level of resolution for identifying unbalanced processes in GBM tumors. The identification of specific constraints for each GBM tumor suggests tumor-specific combination therapies that may reverse this imbalance. PMID:27035264

  7. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
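    The paper's exact bounds come from a flow-based mathematical programming model; a much simpler connectivity-based proxy for robustness under arc deletion can nevertheless be sketched with a stdlib breadth-first search (the graph below is a toy example, not the Internet backbone data):

```python
from collections import deque

def reachable_pairs(n, edges):
    """Count ordered node pairs (i, j), i != j, joined by a path --
    a simple connectivity proxy for network robustness."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for s in range(n):
        seen = {s}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        total += len(seen) - 1
    return total

def robustness_after_deletion(n, edges, removed):
    """Fraction of connected pairs surviving deletion of an arc set."""
    kept = [e for e in edges if e not in removed]
    return reachable_pairs(n, kept) / reachable_pairs(n, edges)

# Path 0-1-2-3: deleting the middle arc splits the network in half,
# leaving 4 of the original 12 ordered pairs connected.
r = robustness_after_deletion(4, [(0, 1), (1, 2), (2, 3)], {(1, 2)})
```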

  8. An Evaluation of a Natural Language Processing Tool for Identifying and Encoding Allergy Information in Emergency Department Clinical Notes

    PubMed Central

    Goss, Foster R.; Plasek, Joseph M.; Lau, Jason J.; Seger, Diane L.; Chang, Frank Y.; Zhou, Li

    2014-01-01

    Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no known allergies, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility using NLP to extract and encode allergy information from clinical notes. PMID:25954363

  9. Hétérochronies dans l'évolution des hominidés. Le développement dentaire des australopithécines «robustes» / Heterochronic process in hominid evolution. The dental development in 'robust' australopithecines.

    NASA Astrophysics Data System (ADS)

    Ramirez Rozzi, Fernando V.

    2000-10-01

    Heterochrony is defined as an evolutionary modification in time and in the relative rate of development [6]. Growth (size), development (shape), and age (adult) are the three fundamental factors of ontogeny and must be known to carry out a study of heterochronies. These three factors have been analysed in 24 Plio-Pleistocene hominid molars from Omo, Ethiopia, attributed to A. afarensis and robust australopithecines (A. aethiopicus and A. aff. aethiopicus). Molars were grouped into three chronological periods. The analysis suggests that morphological modifications through time are due to heterochronic processes: a neoteny (the A. afarensis - robust australopithecine clade) and a time hypermorphosis (A. aethiopicus - A. aff. aethiopicus).

  10. Robust automatic target recognition in FLIR imagery

    NASA Astrophysics Data System (ADS)

    Soyman, Yusuf

    2012-05-01

    In this paper, a robust automatic target recognition algorithm for FLIR imagery is proposed. The target is first segmented from the background using a parametric Gabor wavelet transformation. Invariant features of the segmented target are then extracted via moments. Higher-order moments, while providing better quality for identifying the image, are more sensitive to noise; a trade-off study is therefore performed to select a few moments that provide effective performance. A Bayes method is used for classification, with the Mahalanobis distance as the classifier. Results are assessed based on false alarm rates. The proposed method is shown to be robust against rotations, translations and scale effects. Moreover, it performs effectively on low-contrast objects in FLIR images. Performance comparisons are also performed on both GPU and CPU; results indicate that the GPU has superior performance over the CPU.
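    The final classification step, assigning a moment-feature vector to the class at minimum Mahalanobis distance, can be sketched as below. The class names, means and covariances are hypothetical placeholders, not values from the paper:

```python
def inv2(m):
    """Inverse of a 2x2 covariance matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance (x - mean)^T Sigma^-1 (x - mean)
    for a 2-D feature vector."""
    d = [x[0] - mean[0], x[1] - mean[1]]
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1]) +
            d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

def classify(x, classes):
    """Minimum-Mahalanobis-distance classifier (equal priors)."""
    return min(classes,
               key=lambda c: mahalanobis_sq(x, c["mean"], c["cov_inv"]))["name"]

# Hypothetical two-class problem in a 2-D moment-feature space:
classes = [
    {"name": "tank",  "mean": (2.0, 0.5),
     "cov_inv": inv2([[1.0, 0.0], [0.0, 1.0]])},
    {"name": "truck", "mean": (5.0, 1.5),
     "cov_inv": inv2([[1.0, 0.0], [0.0, 1.0]])},
]
label = classify((2.2, 0.6), classes)
```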

  11. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers.

    PubMed

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m³/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans. PMID:25910870
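    The trapping-efficiency element of the sediment budget reduces to a simple mass-balance ratio per analysis segment. A minimal sketch with hypothetical loads and an assumed tolerance (the threshold is a placeholder, not a value from the paper):

```python
def trapping_efficiency(qs_in, qs_out):
    """Time-integrated trapping efficiency of a river analysis segment:
    the fraction of the incoming sediment load retained."""
    return (qs_in - qs_out) / qs_in

def classify_segment(qs_in, qs_out, tol=0.1):
    """Label a segment as aggrading, degrading, or in balance from its
    sediment budget; `tol` is an assumed placeholder tolerance."""
    te = trapping_efficiency(qs_in, qs_out)
    if te > tol:
        return "aggrading"
    if te < -tol:
        return "degrading"
    return "in balance"

# Hypothetical loads (same units in and out, e.g. tonnes per year):
state = classify_segment(100.0, 80.0)  # retains 20% -> aggrading
```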

  12. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers

    NASA Astrophysics Data System (ADS)

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m3/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans.

  13. Reasoning about anomalies: a study of the analytical process of detecting and identifying anomalous behavior in maritime traffic data

    NASA Astrophysics Data System (ADS)

    Riveiro, Maria; Falkman, Göran; Ziemke, Tom; Kronhamn, Thomas

    2009-05-01

    The goal of visual analytics tools is to support the analytical reasoning process, maximizing human perceptual, understanding, and reasoning capabilities in complex and dynamic situations. Visual analytics software must be built upon an understanding of the reasoning process, since it must provide appropriate interactions that allow a true discourse with the information. In order to deepen our understanding of the human analytical process and guide developers in the creation of more efficient anomaly detection systems, this paper investigates how humans detect and identify anomalous behavior in maritime traffic data. The main focus of this work is to capture the entire analysis process that an analyst goes through, from the raw data to the detection and identification of anomalous behavior. Three different sources are used in this study: a literature survey of the science of analytical reasoning, requirements specified by experts from organizations with an interest in port security, and user field studies conducted in different marine surveillance control centers. Furthermore, this study elaborates on how to support the human analytical process using data mining, visualization, and interaction methods. The contribution of this paper is twofold: (1) within visual analytics, it contributes to the science of analytical reasoning with a practical understanding of users' tasks, in order to develop a taxonomy of interactions that support the analytical reasoning process, and (2) within anomaly detection, it facilitates the design of future anomaly detection systems for cases where fully automatic approaches are not viable and human participation is needed.

  14. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
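    The two-stage matching strategy, cheap bin-based polling to shortlist candidates followed by a slower exact comparison, can be sketched generically. The sign-quantization hash and all names below are illustrative stand-ins for the paper's maximally-stable-volume hashes, not its actual scheme:

```python
from collections import Counter

import numpy as np

def coarse_hashes(fp, n_bands=8):
    # Split the fingerprint vector into bands and quantize each band by sign
    # (a hypothetical coarse hash standing in for the paper's hash functions).
    bands = np.array_split(fp, n_bands)
    return [(i, tuple(np.sign(b).astype(int))) for i, b in enumerate(bands)]

class TwoStageIndex:
    def __init__(self):
        self.buckets = {}   # coarse hash -> set of clip ids
        self.store = {}     # clip id -> full fingerprint

    def add(self, clip_id, fp):
        self.store[clip_id] = fp
        for h in coarse_hashes(fp):
            self.buckets.setdefault(h, set()).add(clip_id)

    def query(self, fp, top_k=5):
        # Stage 1: bin-based polling — each matching bucket votes for its clips.
        votes = Counter()
        for h in coarse_hashes(fp):
            for cid in self.buckets.get(h, ()):
                votes[cid] += 1
        candidates = [cid for cid, _ in votes.most_common(top_k)]
        if not candidates:
            return None
        # Stage 2: slower exact comparison restricted to the short candidate list.
        return min(candidates, key=lambda cid: np.linalg.norm(self.store[cid] - fp))
```

    The polling stage touches only hash buckets, so the expensive distance computation runs on a handful of candidates rather than the whole database.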

  15. Formosa Plastics Corporation: Plant-Wide Assessment of Texas Plant Identifies Opportunities for Improving Process Efficiency and Reducing Energy Costs

    SciTech Connect

    2005-01-01

    At Formosa Plastics Corporation's plant in Point Comfort, Texas, a plant-wide assessment team analyzed process energy requirements, reviewed new technologies for applicability, and found ways to improve the plant's energy efficiency. The assessment team identified the energy requirements of each process and compared actual energy consumption with theoretical process requirements. The team estimated that total annual energy savings would be about 115,000 MBtu for natural gas and nearly 14 million kWh for electricity if the plant makes several improvements, which include upgrading the gas compressor impeller, improving the vent blower system, and recovering steam condensate for reuse. Total annual cost savings could be $1.5 million. The U.S. Department of Energy's Industrial Technologies Program cosponsored this assessment.

  16. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. First, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not used directly as inputs in the proposed SNEWHIOC-FLNN model; instead, the original inputs are attached to small-norm expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced: the larger the correlation coefficient, the more relevant the expanded variable tends to be. The expanded variables with larger correlation coefficients are then selected as inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets, Housing, Concrete Compressive Strength (CCS), and Yacht Hydrodynamics (YHD), are selected. A hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) is then built; in the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS is developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that IFLNN-PLS can significantly improve the prediction performance. PMID:26685746
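    The core idea, a functional-link expansion whose output weights are solved by partial least squares rather than back-propagation, can be sketched as follows. The trigonometric expansion and the NIPALS PLS1 routine below are standard textbook choices standing in for the paper's SNEWHIOC scheme, which additionally weights and screens the expanded terms:

```python
import numpy as np

def expand(X):
    # Trigonometric functional-link expansion (a common FLNN choice; the
    # paper's correlation-based screening of expanded terms is omitted here).
    return np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X)])

def pls1_coefficients(X, y, n_components):
    # NIPALS PLS1: returns b such that y ~= X @ b (no intercept; data assumed
    # centered or generated without one).
    Xk, yk = X.copy(), y.astype(float).copy()
    Ws, Ps, qs = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        nw = np.linalg.norm(w)
        if nw < 1e-12:          # residual exhausted; stop early
            break
        w /= nw
        t = Xk @ w
        tt = t @ t
        p = Xk.T @ t / tt
        q = yk @ t / tt
        Xk = Xk - np.outer(t, p)  # deflate X
        yk = yk - q * t           # deflate y
        Ws.append(w); Ps.append(p); qs.append(q)
    W, P, q = np.array(Ws).T, np.array(Ps).T, np.array(qs)
    return W @ np.linalg.solve(P.T @ W, q)

def iflnn_pls_fit_predict(X_train, y_train, X_test, n_components):
    # Connection weights come from PLS on the expanded inputs, not backprop.
    b = pls1_coefficients(expand(X_train), y_train, n_components)
    return expand(X_test) @ b
```

    With the number of components equal to the number of expanded features, PLS reduces to ordinary least squares; fewer components act as regularization against collinear expanded terms.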

  17. Testing the realism of model structures to identify karst system processes using water quality and quantity signatures

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Wagener, T.; Rimmer, A.; Lange, J.; Brielmann, H.; Weiler, M.

    2012-12-01

    Many hydrological systems exhibit complex subsurface flow and storage behavior. Runoff observations often provide insufficient information for unique process identification of complex hydrologic systems. Quantitative modeling of water and solute fluxes presents a potentially more powerful avenue to explore whether hypotheses about system functioning can be rejected or conditionally accepted. In this study we developed and tested four hydrological model structures, based on different hypotheses about subsurface flow and storage behavior, to identify the functioning of a large Mediterranean karst system. Using eight different system signatures, i.e. indicators of particular hydrodynamic and hydrochemical characteristics of the karst system, we applied a novel model evaluation strategy to identify the best conceptual model representation of the karst system. Our approach consists of three stages: (1) evaluation of model performance with respect to system signatures using automatic calibration, (2) evaluation of parameter identifiability using Sobol's sensitivity analysis, and (3) evaluation of model plausibility by combining the results of stages (1) and (2). These evaluation stages eliminated three model structures and led to a unique hypothesis about the functioning of the studied karst system. We used the estimated parameter values to further quantify subsurface processes. The remaining model is able to simultaneously provide high performances for all eight system signatures. Our approach demonstrates the benefits of interpreting different tracers in a hydrologically meaningful way during model evaluation and identification.
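    Stage (2) of the evaluation strategy relies on variance-based sensitivity analysis. A didactic Monte Carlo estimate of first-order Sobol indices is sketched below; it is a generic Saltelli-style estimator on the unit hypercube, not the authors' implementation:

```python
import numpy as np

def first_order_sobol(f, n_params, n_samples=10000, rng=None):
    """Monte Carlo first-order Sobol indices for f defined on [0, 1]^n_params.

    Uses two independent sample matrices A and B plus the 'radial' matrices
    AB_i (A with column i taken from B), a standard estimator layout.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        # First-order effect of parameter i as a fraction of output variance.
        S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return S
```

    Parameters whose first-order index stays near zero across all signatures are poorly identifiable from those signatures, which is exactly what stage (2) screens for.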

  18. Testing the realism of model structures to identify karst system processes using water quality and quantity signatures

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Wagener, T.; Rimmer, A.; Lange, J.; Brielmann, H.; Weiler, M.

    2013-06-01

    Many hydrological systems exhibit complex subsurface flow and storage behavior. Runoff observations often provide insufficient information for unique process identification. Quantitative modeling of water and solute fluxes presents a potentially more powerful avenue to explore whether hypotheses about system functioning can be rejected or conditionally accepted. In this study we developed and tested four hydrological model structures, based on different hypotheses about subsurface flow and storage behavior, to identify the functioning of a large Mediterranean karst system. Using eight different system signatures, i.e., indicators of particular hydrodynamic and hydrochemical characteristics of the karst system, we applied a novel model evaluation strategy to identify the best conceptual model representation of the karst system within our set of possible system representations. Our approach to test model realism consists of three stages: (1) evaluation of model performance with respect to system signatures using automatic calibration, (2) evaluation of parameter identifiability using Sobol's sensitivity analysis, and (3) evaluation of model plausibility by combining the results of stages (1) and (2). These evaluation stages eliminated three out of four model structures and led to a unique hypothesis about the functioning of the studied karst system. We used the estimated parameter values to further quantify subsurface processes. The chosen model is able to simultaneously provide high performances for eight system signatures with realistic parameter values. Our approach demonstrates the benefits of interpreting different tracers in a hydrologically meaningful way during model evaluation and identification.

  19. Robust indexing for automatic data collection

    SciTech Connect

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  20. Using Analytic Hierarchy Process to Identify the Nurses with High Stress-Coping Capability: Model and Application

    PubMed Central

    F. C. PAN, Frank

    2014-01-01

    Abstract Background Nurses have long been relied upon as the major labor force in hospitals. Given complicated and highly labor-intensive job requirements, multiple pressures from different sources are inevitable. Success in identifying stresses and coping with them accordingly is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capability. Methods A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as experts in the AHP to produce the weights of each criterion. Six nurses from two regional hospitals were then selected to test the model. Results Four factors were identified at the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of the model. Conclusion The study suggests including these key determinants in human-resource management practice, restructuring the hospital's organization, and creating an employee-support system as well as a family-friendly working climate. The research provides evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team. PMID:25988086
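    The AHP weighting step, deriving priority weights from a pairwise comparison matrix via its principal eigenvector and checking judgment consistency, can be sketched as follows. Saaty's standard random consistency indices are used; the criteria and expert judgments themselves come from the study and are not reproduced here:

```python
import numpy as np

# Saaty's random consistency indices for matrices of size n (standard values).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(M):
    """Priority weights and consistency ratio for a pairwise comparison matrix M.

    M[i, j] expresses how much more important criterion i is than criterion j
    (reciprocal matrix: M[j, i] == 1 / M[i, j]).
    """
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                       # normalized principal eigenvector
    n = M.shape[0]
    lam = vals[k].real                 # principal eigenvalue; == n if consistent
    ci = (lam - n) / (n - 1)           # consistency index
    ri = RI.get(n, 1.45)
    cr = ci / ri if ri else 0.0        # consistency ratio; < 0.1 is acceptable
    return w, cr
```

    A consistency ratio above about 0.1 conventionally signals that the expert's pairwise judgments should be revisited before the weights are used.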

  1. Genes Involved in the Osteoarthritis Process Identified through Genome Wide Expression Analysis in Articular Cartilage; the RAAK Study

    PubMed Central

    Bovée, Judith V. M. G.; Bomer, Nils; van der Breggen, Ruud; Lakenberg, Nico; Keurentjes, J. Christiaan; Goeman, Jelle J.; Slagboom, P. Eline; Nelissen, Rob G. H. H.; Bos, Steffan D.; Meulenbelt, Ingrid

    2014-01-01

    Objective Identify gene expression profiles associated with OA processes in articular cartilage and determine pathways changing during the disease process. Methods Genome wide gene expression was determined in paired samples of OA affected and preserved cartilage of the same joint using microarray analysis for 33 patients of the RAAK study. Results were replicated in independent samples by RT-qPCR and immunohistochemistry. Profiles were analyzed with the online analysis tools DAVID and STRING to identify enrichment for specific pathways and protein-protein interactions. Results Among the 1717 genes that were significantly differentially expressed between OA affected and preserved cartilage, we found significant enrichment for genes involved in skeletal development (e.g. TNFRSF11B and FRZB). Also, several inflammatory genes such as CD55, PTGES and TNFAIP6, previously identified in within-joint analyses as well as in analyses comparing preserved cartilage from OA affected joints versus healthy cartilage, were among the top genes. Of note was the high up-regulation of NGF in OA cartilage. RT-qPCR confirmed differential expression for 18 out of 19 genes with expression changes of 2-fold or higher, and immunohistochemistry of selected genes showed a concordant change in protein expression. Most of these changes were associated with OA severity (Mankin score) but were independent of joint site or sex. Conclusion We provide further insights into the ongoing OA pathophysiological processes in cartilage, in particular into differences in macroscopically intact cartilage compared to OA affected cartilage, which seem relatively consistent and independent of sex or joint. We advocate that development of treatment could benefit from focusing on these similarities in gene expression changes and/or pathways. PMID:25054223

  2. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed, as well as an effective compensation approach. In this paper a methodology is introduced for effective tool compensation. The first step is the full process simulation: not only the drawing operation is simulated, but also all secondary operations such as trimming and flanging. The second step is verifying that the process is robust, i.e., that it yields repeatable results. In order to compensate effectively, a minimum clamping concept is defined. Once these preconditions are fulfilled, the tools can be compensated effectively.

  3. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering, on average, 20 residential product plans in total. As of January 2012, there are now 115 energy providers in Texas offering over 300 residential product plans for customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meets their needs. To gather customer data, a survey tool has been developed for customers to complete the pairwise comparison process. Results are compared for the Incomplete AHP and AHP method to determine if the Incomplete AHP method is just as accurate, but more efficient, than the traditional AHP method.

  4. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  5. Robustness in multicellular systems

    NASA Astrophysics Data System (ADS)

    Xavier, Joao

    2011-03-01

    Cells and organisms cope with the task of maintaining their phenotypes in the face of numerous challenges. Much attention has recently been paid to questions of how cells control molecular processes to ensure robustness. However, many biological functions are multicellular and depend on interactions, both physical and chemical, between cells. We use a combination of mathematical modeling and molecular biology experiments to investigate the features that convey robustness to multicellular systems. Cell populations must react to external perturbations by sensing environmental cues and acting coordinately in response. At the same time, they face a major challenge: the emergence of conflict from within. Multicellular traits are prone to cells with exploitative phenotypes that do not contribute to shared resources yet benefit from them. This is true in populations of single-cell organisms that have social lifestyles, where conflict can lead to the emergence of social "cheaters," as well as in multicellular organisms, where conflict can lead to the evolution of cancer. I will describe features that diverse multicellular systems can have to eliminate potential conflicts as well as external perturbations.

  6. A cross-sectional study to identify organisational processes associated with nurse-reported quality and patient safety

    PubMed Central

    Tvedt, Christine; Sjetne, Ingeborg Strømseng; Helgeland, Jon; Bukholm, Geir

    2012-01-01

    Objectives The purpose of this study was to identify organisational processes and structures that are associated with nurse-reported patient safety and quality of nursing. Design This is an observational cross-sectional study using survey methods. Setting Respondents from 31 Norwegian hospitals with more than 85 beds were included in the survey. Participants All registered nurses working in direct patient care in a position of 20% or more were invited to answer the survey. In this study, 3618 nurses from surgical and medical wards responded (response rate 58.9%). Nurses' practice environment was defined as organisational processes and measured by the Nursing Work Index Revised and items from Hospital Survey on Patient Safety Culture. Outcome measures Nurses' assessments of patient safety, quality of nursing, confidence in how their patients manage after discharge and frequency of adverse events were used as outcome measures. Results Quality system, nurse-physician relation, patient safety management and staff adequacy were process measures associated with nurse-reported work-related and patient-related outcomes, but we found no associations with nurse participation, education and career and ward leadership. Most organisational structures were non-significant in the multilevel model except for nurses' affiliations to medical department and hospital type. Conclusions Organisational structures may have minor impact on how nurses perceive work-related and patient-related outcomes, but the findings in this study indicate that there is a considerable potential to address organisational design in improvement of patient safety and quality of care. PMID:23263021

  7. Exploiting Cloud Radar Doppler Spectra of Mixed-Phase Clouds during ACCEPT Field Experiment to Identify Microphysical Processes

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Myagkov, A.; Seifert, P.; Buehl, J.

    2015-12-01

    Cloud radar Doppler spectra offer much information about cloud processes. By analyzing millimeter radar Doppler spectra from cloud-top to -base in mixed-phase clouds in which super-cooled liquid-layers are present we try to tell the microphysical evolution story of particles that are present by disentangling the contributions of the solid and liquid particles to the total radar returns. Instead of considering vertical profiles, dynamical effects are taken into account by following the particle population evolution along slanted paths which are caused by horizontal advection of the cloud. The goal is to identify regions in which different microphysical processes such as new particle formation (nucleation), water vapor deposition, aggregation, riming, or sublimation occur. Cloud radar measurements are supplemented by Doppler lidar and Raman lidar observations as well as observations with MWR, wind profiler, and radio sondes. The presence of super-cooled liquid layers is identified by positive liquid water paths in MWR measurements, the vertical location of liquid layers (in non-raining systems and below lidar extinction) is derived from regions of high-backscatter and low depolarization in Raman lidar observations. In collocated cloud radar measurements, we try to identify cloud phase in the cloud radar Doppler spectrum via location of the Doppler peak(s), the existence of multi-modalities or the spectral skewness. Additionally, within the super-cooled liquid layers, the radar-identified liquid droplets are used as air motion tracer to correct the radar Doppler spectrum for vertical air motion w. These radar-derived estimates of w are validated by independent estimates of w from collocated Doppler lidar measurements. A 35 GHz vertically pointing cloud Doppler radar (METEK MIRA-35) in linear depolarization (LDR) mode is used. Data is from the deployment of the Leipzig Aerosol and Cloud Remote Observations System (LACROS) during the Analysis of the Composition of

  8. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  9. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    ERIC Educational Resources Information Center

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  10. The role of various amino acids in enzymatic browning process in potato tubers, and identifying the browning products.

    PubMed

    Ali, Hussein M; El-Gizawy, Ahmed M; El-Bassiouny, Rawia E I; Saleh, Mahmoud A

    2016-02-01

    The effects of five structurally variant amino acids, glycine, valine, methionine, phenylalanine and cysteine, were examined as inhibitors and/or stimulators of fresh-cut potato browning. The first four amino acids showed conflicting effects: high concentrations (⩾100 mM for glycine and ⩾1.0 M for the other three amino acids) induced potato browning, while lower concentrations reduced the browning process. Alternatively, increasing cysteine concentration consistently reduced the browning process, owing to reaction with quinone to give a colorless adduct. In the PPO assay, high concentrations (⩾1.11 mM) of the four amino acids developed more color than the control samples. Visible spectra indicated a continuous condensation of quinone and glycine to give colored adducts absorbing at 610-630 nm, which were separated and identified by LC-ESI-MS as a catechol-diglycine adduct that undergoes polymerization with further glycine molecules to form peptide side chains. At lower concentrations, the lower the concentration, the less color developed. PMID:26304424

  11. A multivariate statistical approach to identify the spatio-temporal variation of geochemical process in a hard rock aquifer.

    PubMed

    Thivya, C; Chidambaram, S; Thilagavathi, R; Prasanna, M V; Singaraja, C; Adithya, V S; Nepolian, M

    2015-09-01

    A study has been carried out in crystalline hard rock aquifers of Madurai district, Tamil Nadu, to identify the spatial and temporal variations and to understand sources responsible for hydrogeochemical processes in the region. In total, 216 samples were collected over four seasons [premonsoon (PRM), southwest monsoon (SWM), northeast monsoon (NWM), and postmonsoon (POM)]. The Na and K ions are attributed to weathering of feldspars in charnockite and fissile hornblende gneiss. The results also indicate that the monsoon leaches U ions into the groundwater, which is later reflected in the ²²²Rn levels as well. The statistical relationship on the temporal data reflects the fact that Ca, Mg, Na, Cl, HCO3, and SO4 form the spinal species, which are the chief ions playing the significant role in the geochemistry of the region. The factor loadings of the temporal data reveal that the predominant factor is anthropogenic, followed by natural weathering and U dissolution. The spatial analysis of the temporal data reveals that weathering is prominent in the NW part, while the distribution of U and ²²²Rn is concentrated along the NE part of the study area. This is also reflected in the cluster analysis, and it is understood that lithology, land use pattern, lineaments, and groundwater flow direction determine the spatial variation of these ions with respect to season. PMID:26239570

  12. Acetylome study in mouse adipocytes identifies targets of SIRT1 deacetylation in chromatin organization and RNA processing.

    PubMed

    Kim, Sun-Yee; Sim, Choon Kiat; Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2016-05-15

    SIRT1 is a key protein deacetylase that regulates cellular metabolism through lysine deacetylation on both histones and non-histone proteins. Lysine acetylation is a widespread post-translational modification found on many regulatory proteins and it plays an essential role in cell signaling, transcription and metabolism. In mice, SIRT1 has known protective functions during high-fat diet but the acetylome regulated by SIRT1 in adipocytes is not completely understood. Here we conducted acetylome analyses in murine adipocytes treated with small-molecule modulators that inhibit or activate the deacetylase activity of SIRT1. We identified a total of 302 acetylated peptides from 78 proteins in this study. From the list of potential SIRT1 targets, we selected seven candidates and further verified that six of them can be deacetylated by SIRT1 in vitro. Among them, half of the SIRT1 targets are involved in regulating chromatin structure and the other half is involved in RNA processing. Our results provide a resource for further SIRT1 target validation in fat cells and suggest a potential role of SIRT1 in the regulation of chromatin structure and RNA processing, which may possibly extend to other cell types as well. PMID:27021582

  13. Identifying Armed Respondents to Domestic Violence Restraining Orders and Recovering Their Firearms: Process Evaluation of an Initiative in California

    PubMed Central

    Frattaroli, Shannon; Claire, Barbara E.; Vittes, Katherine A.; Webster, Daniel W.

    2014-01-01

    Objectives. We evaluated a law enforcement initiative to screen respondents to domestic violence restraining orders for firearm ownership or possession and recover their firearms. Methods. The initiative was implemented in San Mateo and Butte counties in California from 2007 through 2010. We used descriptive methods to evaluate the screening process and recovery effort in each county, relying on records for individual cases. Results. Screening relied on an archive of firearm transactions, court records, and petitioner interviews; no single source was adequate. Screening linked 525 respondents (17.7%) in San Mateo County to firearms; 405 firearms were recovered from 119 (22.7%) of them. In Butte County, 88 (31.1%) respondents were linked to firearms; 260 firearms were recovered from 45 (51.1%) of them. Nonrecovery occurred most often when orders were never served or respondents denied having firearms. There were no reports of serious violence or injury. Conclusions. Recovering firearms from persons subject to domestic violence restraining orders is possible. We have identified design and implementation changes that may improve the screening process and the yield from recovery efforts. Larger implementation trials are needed. PMID:24328660

  14. Identifying key processes in the hydrochemistry of a basin through the combined use of factor and regression models

    NASA Astrophysics Data System (ADS)

    Yidana, Sandow Mark; Banoeng-Yakubo, Bruce; Sakyi, Patrick Asamoah

    2012-04-01

    An innovative technique for measuring the intensities of major sources of variation in the hydrochemistry of (ground) water in a basin has been developed. This technique, which is based on the combination of R-mode factor and multiple regression analyses, can be used to measure the degrees of influence of the major sources of variation in the hydrochemistry without measuring the concentrations of the entire set of physico-chemical parameters that are often used to characterize water systems. R-mode factor analysis was applied to the data of 13 physico-chemical parameters and 50 samples in order to determine the major sources of variation in the hydrochemistry of some aquifers in the western region of Ghana. In this study, three sources of variation in the hydrochemistry were distinguished: the dissolution of chlorides and sulfates of the major cations, carbonate mineral dissolution, and silicate mineral weathering. Two key parameters were identified with each of the processes, and multiple regression models were developed for each process. These models were tested and found to predict the processes quite accurately, and can be applied anywhere within the terrain. This technique can be reliably applied in areas where logistical constraints limit water sampling for whole-basin hydrochemical characterization. Q-mode hierarchical cluster analysis (HCA) applied to the data revealed three major groundwater associations distinguished on the basis of the major causes of variation in the hydrochemistry. The three groundwater types represent Na-HCO3, Ca-HCO3, and Na-Cl groundwater types. Silicate stability diagrams indicate that all these groundwater types are mainly stable in the kaolinite and montmorillonite fields, suggesting moderately restricted flow conditions.
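The second stage described above, regressing a process intensity on the two key parameters identified for it, can be sketched as follows. This is a hedged illustration with synthetic data: the chloride/sulfate pairing, the coefficients, and the "factor score" are invented stand-ins, not values from the study.

```python
import numpy as np

# Hypothetical sketch: once factor analysis has flagged two "key parameters"
# for a process (here chloride and sulfate, standing in for a dissolution
# factor), fit a multiple regression predicting the process intensity from
# those two parameters alone.
rng = np.random.default_rng(42)
n = 50                                    # sample count matching the study
cl = rng.uniform(10.0, 200.0, n)          # chloride, mg/L (synthetic)
so4 = rng.uniform(5.0, 100.0, n)          # sulfate, mg/L (synthetic)
# Synthetic "factor score" for the process (coefficients invented)
score = 0.8 * cl + 0.5 * so4 + rng.normal(0.0, 5.0, n)

# Ordinary least squares: score ~ b0 + b1*Cl + b2*SO4
X = np.column_stack([np.ones(n), cl, so4])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((score - pred) ** 2) / np.sum((score - score.mean()) ** 2)
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.3f}")
```

Once fitted on a calibration set, such a model lets the process intensity be estimated anywhere in the terrain from just the two key parameters, which is the logistical advantage the abstract claims.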

  15. Robust adiabatic sum frequency conversion.

    PubMed

    Suchowski, Haim; Prabhudesai, Vaibhav; Oron, Dan; Arie, Ady; Silberberg, Yaron

    2009-07-20

    We discuss theoretically and demonstrate experimentally the robustness of the adiabatic sum frequency conversion method. This technique, borrowed from an analogous scheme of robust population transfer in atomic physics and nuclear magnetic resonance, enables the achievement of nearly full frequency conversion in a sum frequency generation process for a bandwidth up to two orders of magnitude wider than in conventional conversion schemes. We show that this scheme is robust to variations in the parameters of both the nonlinear crystal and the incoming light. These include the crystal temperature, the frequency of the incoming field, the pump intensity, the crystal length, and the angle of incidence. Also, we show that this extremely broad bandwidth can be tuned to higher or lower central wavelengths by changing either the pump frequency or the crystal temperature. The detailed study of the properties of this converter is carried out using Landau-Zener theory for adiabatic transitions in two-level systems. PMID:19654679
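The Landau-Zener picture invoked above can be summarized by the standard two-level result; in the frequency-conversion analogy the symbol choices below are assumptions, not notation taken from the paper:

```latex
% Probability of remaining unconverted after an adiabatic sweep through
% phase matching; \kappa is the nonlinear coupling strength and
% d(\Delta k)/dz the rate at which the phase mismatch is swept along the
% crystal. Slow sweeps and strong coupling give P_{LZ} -> 0, i.e. nearly
% full conversion over a broad bandwidth.
P_{\mathrm{LZ}} = \exp\!\left(-\frac{2\pi\,\kappa^{2}}{\left|\,d(\Delta k)/dz\,\right|}\right),
\qquad
\eta_{\mathrm{conversion}} = 1 - P_{\mathrm{LZ}}
```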

  16. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges within the GeoNet framework developed by Passalacqua et al. [2010] and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic…
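The core measurement in the entry above, the topoindex log(area/slope) and its pdf tracked across resolutions, can be sketched on synthetic grids. The block-averaging coarsening below is a simple stand-in for the MRA of the paper, and all grid values are invented:

```python
import numpy as np

# Minimal sketch (synthetic data): compute the topoindex ln(area/slope) of
# Beven and Kirkby on a toy grid, then coarsen by block-averaging to mimic
# inspecting its distribution across resolutions.
rng = np.random.default_rng(7)
area = rng.uniform(1e2, 1e6, size=(64, 64))    # contributing area, m^2 (synthetic)
slope = rng.uniform(1e-3, 1.0, size=(64, 64))  # local slope, m/m (synthetic)
topoindex = np.log(area / slope)

def coarsen(grid, factor):
    """Block-average a square grid by an integer factor."""
    n = grid.shape[0] // factor
    return grid[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

# Modal value of the topoindex pdf at each resolution (histogram peak);
# breaks in how this mode shifts with resolution are the "characteristic
# scale breaks" the abstract describes.
for factor in (1, 2, 4, 8):
    ti = coarsen(topoindex, factor)
    counts, edges = np.histogram(ti.ravel(), bins=30)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    print(f"factor {factor}: modal topoindex = {mode:.2f}")
```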

  17. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed. The presentation is organized in three parts. The first part surveys existing algorithms; different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research are suggested that combine different approaches currently known.

  18. Integrated analysis of DNA methylation, immunohistochemistry and mRNA expression, data identifies a Methylation Expression Index (MEI) robustly associated with survival of ER-positive breast cancer patients

    PubMed Central

    Garcia-Closas, Montserrat; Davis, Sean; Meltzer, Paul; Lissowska, Jolanta; Horne, Hisani N.; Sherman, Mark E.; Lee, Maxwell

    2015-01-01

    Identification of prognostic gene expression signatures may enable improved decisions about management of breast cancer. To identify a prognostic signature for breast cancer, we performed DNA methylation profiling and identified methylation markers that were associated with expression of ER, PR, HER2, CK5/6 and EGFR proteins. Methylation markers that were correlated with corresponding mRNA expression levels were identified using 208 invasive tumors from a population-based case-control study conducted in Poland. Using this approach, we defined the Methylation Expression Index (MEI) signature, based on a weighted sum of mRNA levels of 57 genes. Classification of cases as having low or high MEI scores was related to survival using Cox regression models. In the Polish study, women with ER-positive, low-MEI cancers had reduced survival at a median of 5.20 years of follow-up (HR = 2.85, 95% CI = 1.25-6.47). Low MEI was also related to decreased survival in four independent datasets totaling over 2500 ER-positive breast cancers. These results suggest that integrated analysis of tumor expression markers, DNA methylation, and mRNA data can be an important approach for identifying breast cancer prognostic signatures. Prospective assessment of MEI along with other prognostic signatures should be evaluated in future studies. PMID:25773928
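The MEI construction described above, a weighted sum of 57 gene-expression values per tumor followed by a low/high split, can be sketched as below. The expression matrix and per-gene weights here are synthetic placeholders; the real weights come from the methylation/expression integration, and the split threshold used in the study is not stated in the abstract, so a median split is assumed:

```python
import numpy as np

# Hedged sketch of an MEI-style score: weighted sum of mRNA levels of
# 57 genes per tumor, then a median split into low/high groups.
rng = np.random.default_rng(0)
n_tumors, n_genes = 208, 57                   # dimensions from the abstract
expr = rng.normal(size=(n_tumors, n_genes))   # standardized mRNA levels (synthetic)
weights = rng.normal(size=n_genes)            # hypothetical per-gene weights

mei = expr @ weights                          # one score per tumor
high_mei = mei >= np.median(mei)              # assumed median split
print(f"{high_mei.sum()} high-MEI and {(~high_mei).sum()} low-MEI tumors")
```

In the study, the resulting low/high labels would then enter a Cox proportional-hazards model against survival time; that step needs a survival library and is omitted here.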

  19. Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO sub 2 and Climate Change --- The MINK Project

    SciTech Connect

    Easterling, W.E. III; McKenney, M.S.; Rosenberg, N.J.; Lemon, K.M.

    1991-08-01

    The second report of a series Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO{sub 2} and Climate Change -- The MINK Project is composed of two parts. This Report (IIB) deals with agriculture at the level of farms and Major Land Resource Areas (MLRAs). The Erosion Productivity Impact Calculator (EPIC), a crop growth simulation model developed by scientists at the US Department of Agriculture, is used to study the impacts of the analog climate on yields of main crops in both the 1984/87 and the 2030 baselines. The results of this work with EPIC are the basis for the analysis of the climate change impacts on agriculture at the region-wide level undertaken in this report. Report IIA treats agriculture in MINK in terms of state and region-wide production and resource use for the main crops and animals in the baseline periods of 1984/87 and 2030. The effects of the analog climate on the industry at this level of aggregation are considered in both baseline periods. 41 refs., 40 figs., 46 tabs.

  20. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes such as clean streams in a river network that may become contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting in a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness is only accounting for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building-up the IN. This approach is motivated by concrete applied problems, since, for example, if we study the dynamics of contamination in river systems, it is necessary to know both the connectivity of the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
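The dual Active/Idle Network bookkeeping described above can be illustrated on a toy graph. The sketch below is an assumption-laden miniature (a 6-node ring under a two-node attack), not the authors' metric; it only shows how the largest connected component of each side is tracked:

```python
from collections import deque

# Toy illustration of the dual-perspective view: under a node-removal attack,
# measure the largest connected component of both the unaffected (Active
# Network, AN) and affected (Idle Network, IN) subgraphs.

def largest_cc(nodes, edges):
    """Size of the largest connected component of the subgraph induced by `nodes`."""
    nodes = set(nodes)
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:             # breadth-first search of one component
            v = queue.popleft()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, comp)
    return best

# 6-node ring; the attack removes nodes 0 and 3
all_nodes = range(6)
edges = [(i, (i + 1) % 6) for i in range(6)]
attacked = {0, 3}
active = [v for v in all_nodes if v not in attacked]   # AN
idle = list(attacked)                                  # IN
print(largest_cc(active, edges), largest_cc(idle, edges))
```

Repeating this at every step of an attack sequence gives the two efficiency curves (destroying the AN, building up the IN) whose trade-off the abstract argues can re-rank attack strategies.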

  1. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  2. Robust Fringe Projection Profilometry via Sparse Representation.

    PubMed

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail to perform if the captured fringe images have a complex scene, such as having multiple and occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. It is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images have a simple or complex scene, or are affected by the ambient lighting of the working environment. PMID:26890867
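The role of the period order in phase unwrapping can be shown in a few lines. This is a generic sketch of the relationship phi = phi_wrapped + 2*pi*k, with a synthetic phase ramp; in the paper the integer order k is decoded from the embedded texture patterns rather than derived from ground truth as here:

```python
import numpy as np

# Why the period order matters: the measured phase is only known modulo
# 2*pi, and the period order k restores the absolute phase.
true_phase = np.linspace(0.0, 6.0 * np.pi, 100)       # absolute phase ramp (synthetic)
wrapped = np.angle(np.exp(1j * true_phase))           # wrapped into (-pi, pi]
k = np.round((true_phase - wrapped) / (2.0 * np.pi))  # period order (from truth here;
                                                      # decoded from embedded patterns
                                                      # in the paper)
unwrapped = wrapped + 2.0 * np.pi * k                 # phi = phi_wrapped + 2*pi*k
print(np.allclose(unwrapped, true_phase))
```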

  3. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  4. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

    Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014].
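The two-frequency screening step described above can be sketched as a dB-difference classifier. The threshold window and the minimum-Sv value below are illustrative placeholders, not the visual-acuity-derived thresholds of the paper:

```python
import numpy as np

# Hedged sketch of multi-frequency classification: euphausiids scatter more
# strongly at 120 kHz than at 43 kHz, so the dB difference Sv(120) - Sv(43)
# helps separate them from other scatterers. All numbers are illustrative.
sv43 = np.array([-82.0, -75.0, -90.0, -66.0])   # mean volume backscatter, dB re 1/m
sv120 = np.array([-68.0, -74.0, -89.0, -60.0])

dsv = sv120 - sv43                               # dB difference
min_sv = -70.0                                   # illustrative aggregation threshold
is_euphausiid = (dsv > 2.0) & (dsv < 16.0) & (sv120 > min_sv)
print(is_euphausiid)
```

Cells passing both tests would then be fed to the four-frequency inversion for mean length and density; that step requires a scattering model and is beyond this sketch.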

  5. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  6. Ruggedness and robustness testing.

    PubMed

    Dejaegher, Bieke; Heyden, Yvan Vander

    2007-07-27

    Due to strict regulatory requirements, especially in pharmaceutical analysis, analysis results of acceptable quality must be reported. Thus, a proper validation of the measurement method is required. In this context, ruggedness and robustness testing is becoming increasingly important. In this review, the definitions of ruggedness and robustness are given, followed by a short explanation of the different approaches applied to examine the ruggedness or the robustness of an analytical method. Then, case studies describing ruggedness or robustness tests of high-performance liquid chromatographic (HPLC), capillary electrophoretic (CE), gas chromatographic (GC), supercritical fluid chromatographic (SFC), and ultra-performance liquid chromatographic (UPLC) assay methods are critically reviewed and discussed. Mainly publications of the last 10 years are considered. PMID:17379230
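Robustness tests of the kind reviewed above typically vary several method factors simultaneously in a two-level orthogonal screening design. As an illustrative sketch (the factor names are hypothetical, and building the design from a Sylvester Hadamard matrix is one common construction, not necessarily the one used in any cited study):

```python
import numpy as np

# Build a 7-factor, 8-run two-level orthogonal screening design from a
# Sylvester Hadamard matrix; each row is one experimental run with every
# factor set to its low (-1) or high (+1) level.
h2 = np.array([[1, 1], [1, -1]])
h8 = np.kron(np.kron(h2, h2), h2)      # 8x8 Hadamard matrix
design = h8[:, 1:]                     # drop the constant column: 8 runs x 7 factors

# Orthogonal columns (design.T @ design == 8*I) mean each factor's main
# effect on the assay result can be estimated independently.
factors = ["pH", "temperature", "flow_rate", "wavelength",
           "buffer_conc", "column_batch", "gradient_slope"]
for run in design:
    print(dict(zip(factors, run)))
```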

  7. Feel No Guilt! Your Statistics Are Probably Robust.

    ERIC Educational Resources Information Center

    Micceri, Theodore

    This paper reports an attempt to identify appropriate and robust location estimators for situations that tend to occur among various types of empirical data. Emphasizing robustness across broad unidentifiable ranges of contamination, an attempt was made to replicate, on a somewhat smaller scale, the definitive Princeton Robustness Study of 1972 to…

  8. Identifying Complex Cultural Interactions in the Instructional Design Process: A Case Study of a Cross-Border, Cross-Sector Training for Innovation Program

    ERIC Educational Resources Information Center

    Russell, L. Roxanne; Kinuthia, Wanjira L.; Lokey-Vega, Anissa; Tsang-Kosma, Winnie; Madathany, Reeny

    2013-01-01

    The purpose of this research is to identify complex cultural dynamics in the instructional design process of a cross-sector, cross-border training environment by applying Young's (2009) Culture-Based Model (CBM) as a theoretical framework and taxonomy for description of the instructional design process under the conditions of one case. This…

  9. Examining the Cognitive Processes Used by Adolescent Girls and Women Scientists in Identifying Science Role Models: A Feminist Approach

    ERIC Educational Resources Information Center

    Buck, Gayle A.; Plano Clark, Vicki L.; Leslie-Pelecky, Diandra; Lu, Yun; Cerda-Lizarraga, Particia

    2008-01-01

    Women remain underrepresented in science professions. Studies have shown that students are more likely to select careers when they can identify a role model in that career path. Further research has shown that the success of this strategy is enhanced by the use of gender-matched role models. While prior work provides insights into the value of…

  10. A MULTI-SCALE SCREENING PROCESS TO IDENTIFY LEAST-DISTURBED STREAM SITES FOR USE IN WATER QUALITY MONITORING

    EPA Science Inventory

    We developed a four-step screening procedure to identify least-disturbed stream sites for an EPA Environmental Monitoring and Assessment Program (EMAP) pilot project being conducted in twelve western states. In this project, biological attributes at least-disturbed sites are use...

  11. Evaluation of repetitive extragenic palindromic-PCR and denatured gradient gel electrophoresis in identifying Salmonella serotypes isolated from processed turkeys

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Salmonella has been reported as the leading foodborne pathogen in the US. A study was conducted to compare the use of automated repetitive extragenic palindromic (REP-PCR) and denaturing gradient gel electrophoresis (DGGE) as diagnostic tools for identifying Salmonella serotypes. The interspersed ...

  12. Identifying an Educational Need: Survival Skills in Arithmetic. A Real Situation and an Example of the Process.

    ERIC Educational Resources Information Center

    Schweiker, Robert F.

    One model of educational needs assessment stresses: (1) deciding what students should learn in school, (2) measuring whether they learn it, and (3) looking at gaps between desired learnings and actual learnings to identify educational needs and priorities. In this study, it was assumed that practically all students should master…

  13. Seventeen Projects Carried out by Students Designing for and with Disabled Children: Identifying Designers' Difficulties during the Whole Design Process

    ERIC Educational Resources Information Center

    Magnier, Cecile; Thomann, Guillaume; Villeneuve, Francois

    2012-01-01

    This article aims to identify the difficulties that may arise when designing assistive devices for disabled children. Seventeen design projects involving disabled children, engineering students, and special schools were analysed. A content analysis of the design reports was performed. For this purpose, a coding scheme was built based on a review…

  14. Use of a marker organism in poultry processing to identify sites of cross-contamination and evaluate possible control measures.

    PubMed

    Mead, G C; Hudson, W R; Hinton, M H

    1994-07-01

    1. Nine different sites at a poultry processing plant were selected in the course of a hazard analysis to investigate the degree of microbial cross-contamination that could occur during processing and the effectiveness of possible control measures. 2. At each site, carcases, equipment or working surfaces were inoculated with a non-pathogenic strain of nalidixic acid-resistant Escherichia coli K12; transmission of the organism among carcases being processed was followed qualitatively and, where appropriate, quantitatively. 3. The degree of cross-contamination and the extent to which it could be controlled by the proposed measures varied from one site to another. PMID:7953779

  15. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions designed for robust operation in worst-case situations, such as day/night cameras or solutions for rain and snow. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures, and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  16. THE ORIGINS OF LIGHT AND HEAVY R-PROCESS ELEMENTS IDENTIFIED BY CHEMICAL TAGGING OF METAL-POOR STARS

    SciTech Connect

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

    Growing interest in neutron star (NS) mergers as the origin of r-process elements has arisen since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers successful in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers from peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we successfully interpret an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation exhibited by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, stars in the outlier, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that is inclined to be massive and consist of both light and heavy r-process nuclides.

  17. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    Polymers of bacterial origin, either secreted by cells or the degraded product of cell lysis, form isolated mucoidal strands as well as well-developed biofilms on interfaces. Biofilms are structurally and compositionally complex and are readily distinguishable from abiogenic films. These structures range in size from micrometers to decimeters, the latter occurring as the well-known mineralised biofilms called stromatolites. Compositionally, bacterial polymers are greater than 90% water, with the majority of the macromolecules forming the framework of the polymers consisting of polysaccharides, along with some nucleic acids and proteins. These macromolecules contain a vast number of functional groups, such as carboxyls, hydroxyls, and phosphoryls, which are implicated in cation binding. It is this elevated metal-binding capacity that provides the bacterial polymer with structural support and also helps to preserve it for up to 3.5 b.y. in the terrestrial rock record. The macromolecules can thus become rapidly mineralised and trapped in a mineral matrix. Through early and late diagenesis (bacterial degradation, burial, heat, pressure, and time) they break down, losing their functional groups and, gradually, their hydrogen atoms. The degraded product is known as "kerogen". With further diagenesis and metamorphism, the remaining hydrogen atoms are lost and much of the carbonaceous matter becomes graphite, though not all of the macromolecules follow this path. We have traced fossilised polymers and biofilms in rocks from throughout Earth's history, including rocks as old as 3.5 b.y. Furthermore, Time of Flight Secondary Ion Mass Spectrometry has been able to identify individual macromolecules of bacterial origin, the identities of which are still being investigated, in all the samples

  18. Identifying low-dimensional dynamics in type-I edge-localised-mode processes in JET plasmas

    SciTech Connect

    Calderon, F. A.; Chapman, S. C.; Nicol, R. M.; Dendy, R. O.; Webster, A. J.; Alper, B. [EURATOM Collaboration: JET EFDA Contributors

    2013-04-15

    Edge localised mode (ELM) measurements from reproducibly similar plasmas in the Joint European Torus (JET) tokamak, which differ only in their gas puffing rate, are analysed in terms of the pattern in the sequence of inter-ELM time intervals. It is found that the category of ELM defined empirically as type I-typically more regular, less frequent, and having larger amplitude than other ELM types-embraces substantially different ELMing processes. By quantifying the structure in the sequence of inter-ELM time intervals using delay time plots, we reveal transitions between distinct phase space dynamics, implying transitions between distinct underlying physical processes. The control parameter for these transitions between these different ELMing processes is the gas puffing rate.
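The delay-time construction used in the entry above can be sketched directly: from a sequence of ELM occurrence times, form the inter-ELM intervals and pair each interval with the next. The times below are synthetic, not JET data:

```python
import numpy as np

# Delay-time plot coordinates for a sequence of ELM events: structure in
# the (dt_n, dt_{n+1}) point cloud distinguishes different ELMing dynamics.
elm_times = np.array([0.000, 0.012, 0.021, 0.035, 0.044, 0.058, 0.067])  # s (synthetic)
dt = np.diff(elm_times)                       # inter-ELM time intervals
pairs = np.column_stack([dt[:-1], dt[1:]])    # (dt_n, dt_{n+1}) pairs
print(pairs.shape)
```

A strictly periodic ELMing process collapses the cloud to a single point; transitions between distinct underlying processes show up as qualitative changes in the cloud's shape.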

  19. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2005-09-06

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer occurs by low-pass filtering the windowed reflections from the target porous or fractured layers, leaving frequencies below the low-most corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, viscosity of fluid, and the fluid saturation of the porous or fractured layers.
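The low-pass filtering step described in the patent abstract can be sketched with a simple FFT filter. The corner frequency and the synthetic two-tone "trace" below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hedged sketch: keep only spectral content below a corner frequency in a
# windowed reflection trace, here a synthetic 10 Hz + 120 Hz signal.
fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), 1.0 / fs)
corner = 40.0                                # illustrative "low-most corner", Hz
spectrum[freqs > corner] = 0.0               # zero everything above the corner
low_passed = np.fft.irfft(spectrum, n=len(trace))

# Residual vs. the pure 10 Hz component; the 120 Hz tone is removed
print(np.max(np.abs(low_passed - np.sin(2 * np.pi * 10 * t))))
```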

  20. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2006-11-14

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer is performed by low-pass filtering the windowed reflections from the target layers, retaining frequencies below the lowest corner (or full width at half maximum) of the recorded frequency spectrum. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.

  1. Identifying thresholds in pattern-process relationships: a new cross-scale interactions experiment at the Jornada Basin LTER

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interactions among ecological patterns and processes at multiple scales play a significant role in threshold behaviors in arid systems. Black grama grasslands and mesquite shrublands are hypothesized to operate under unique sets of feedbacks: grasslands are maintained by fine-scale biotic feedbacks ...

  2. The AP Chemistry Course Audit: A Fertile Ground for Identifying and Addressing Misconceptions about the Course and Process

    ERIC Educational Resources Information Center

    Schwenz, Richard W.; Miller, Sheldon

    2014-01-01

    The advanced placement course audit was implemented to standardize the college-level curricular and resource requirements for AP courses. While the process has had this effect, it has brought with it misconceptions about how much the College Board intends to control what happens within the classroom, what information is required to be included in…

  3. Identifying the Associated Factors of Mediation and Due Process in Families of Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Burke, Meghan M.; Goldman, Samantha E.

    2015-01-01

    Compared to families of students with other types of disabilities, families of students with autism spectrum disorder (ASD) are significantly more likely to enact their procedural safeguards such as mediation and due process. However, we do not know which school, child, and parent characteristics are associated with the enactment of safeguards.…

  4. Identifying consumer preferences for specific beef flavor characteristics in relation to cattle production and postmortem processing parameters.

    PubMed

    O'Quinn, T G; Woerner, D R; Engle, T E; Chapman, P L; Legako, J F; Brooks, J C; Belk, K E; Tatum, J D

    2016-02-01

    Sensory analysis of ground LL samples representing 12 beef product categories was conducted in 3 different regions of the U.S. to identify flavor preferences of beef consumers. Treatments characterized production-related flavor differences associated with USDA grade, cattle type, finishing diet, growth enhancement, and postmortem aging method. Consumers (N=307) rated cooked samples for 12 flavors and overall flavor desirability. Samples were analyzed to determine fatty acid content. Volatile compounds produced by cooking were extracted and quantified. Overall, consumers preferred beef that rated high for beefy/brothy, buttery/beef fat, and sweet flavors and disliked beef with fishy, livery, gamey, and sour flavors. Flavor attributes of samples higher in intramuscular fat with greater amounts of monounsaturated fatty acids and lesser proportions of saturated, odd-chain, omega-3, and trans fatty acids were preferred by consumers. Of the volatiles identified, diacetyl and acetoin were most closely correlated with desirable ratings for overall flavor and dimethyl sulfide was associated with an undesirable sour flavor. PMID:26560806

  5. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
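
    The percolation model described above, in which each link fails with probability proportional to its spatial length, can be sketched on a random geometric graph. The node count, connection radius, and failure scale below are illustrative choices, not the paper's parameters:

```python
import random
from itertools import combinations

def giant_component_fraction(n_nodes, edges, failed):
    """Fraction of nodes in the largest component after removing failed edges (union-find)."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, (u, v) in enumerate(edges):
        if i not in failed:
            parent[find(u)] = find(v)
    sizes = {}
    for node in range(n_nodes):
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n_nodes

def length_dependent_failure(lengths, scale, rng):
    """Fail each link with probability proportional to its spatial length."""
    max_len = max(lengths)
    return {i for i, L in enumerate(lengths) if rng.random() < scale * L / max_len}

rng = random.Random(1)
# Random geometric graph: points in the unit square, links shorter than 0.3.
pts = [(rng.random(), rng.random()) for _ in range(60)]
edges, lengths = [], []
for u, v in combinations(range(60), 2):
    L = ((pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2) ** 0.5
    if L < 0.3:
        edges.append((u, v))
        lengths.append(L)

intact = giant_component_fraction(60, edges, set())
damaged = giant_component_fraction(
    60, edges, length_dependent_failure(lengths, 0.5, rng))
print(intact, damaged)
```

    Comparing `damaged` against a uniform-failure baseline with the same expected number of removed links is one way to see the extra fragility that length-dependent failure introduces.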

  6. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure. PMID:25974553

  7. Digital monochrome CCD camera for robust pixel correspondant, data compression, and preprocessing in an integrated PC-based image-processing environment

    NASA Astrophysics Data System (ADS)

    Arshad, Norhashim M.; Harvey, David M.; Hobson, Clifford A.

    1996-12-01

    This paper describes the development of a compact digital CCD camera, containing image digitization and processing, which interfaces to a personal computer (PC) via a standard enhanced parallel port. Digitization of precise pixel samples, coupled with a single-chip FPGA for data processing, forms the main digital components of the camera prior to sending the data to the PC. A compression scheme is applied so that the digital images may be transferred within the existing parallel port bandwidth. The data is decompressed in the PC environment for real-time display of the video images using purely native processor resources. Frame capture is built into the camera so that a full uncompressed digital image can be sent for special processing.

  8. Identifying Component-Processes of Executive Functioning that Serve as Risk Factors for the Alcohol-Aggression Relation

    PubMed Central

    Giancola, Peter R.; Godlaski, Aaron J.; Roth, Robert M.

    2011-01-01

    The present investigation determined how different component-processes of executive functioning (EF) served as risk factors for intoxicated aggression. Participants were 512 (246 men and 266 women) healthy social drinkers between 21 and 35 years of age. EF was measured using the Behavior Rating Inventory of Executive Functioning – Adult Version (BRIEF-A; Roth, Isquith, & Gioia, 2005) that assesses nine EF components. Following the consumption of either an alcohol or a placebo beverage, participants were tested on a modified version of the Taylor Aggression Paradigm (Taylor, 1967) in which mild electric shocks were received from, and administered to, a fictitious opponent. Aggressive behavior was operationalized as the shock intensities and durations administered to the opponent. Although a general BRIEF-A EF construct consisting of all nine components predicted intoxicated aggression, the best predictor involved one termed the Behavioral Regulation Index which comprises component processes such as inhibition, emotional control, flexible thinking, and self-monitoring. PMID:21875167

  9. SU-C-304-02: Robust and Efficient Process for Acceptance Testing of Varian TrueBeam Linacs Using An Electronic Portal Imaging Device (EPID)

    SciTech Connect

    Yaddanapudi, S; Cai, B; Sun, B; Li, H; Noel, C; Goddu, S; Mutic, S; Harry, T; Pawlicki, T

    2015-06-15

    Purpose: The purpose of this project was to develop a process that utilizes the onboard kV and MV electronic portal imaging devices (EPIDs) to perform rapid acceptance testing (AT) of linacs in order to improve efficiency and standardize AT equipment and processes. Methods: In this study a Varian TrueBeam linac equipped with an amorphous silicon based EPID (aSi1000) was used. The conventional set of AT tests and tolerances was used as a baseline guide, and a novel methodology was developed to perform as many tests as possible using EPID exclusively. The developer mode on Varian TrueBeam linac was used to automate the process. In the current AT process there are about 45 tests that call for customer demos. Many of the geometric tests such as jaw alignment and MLC positioning are performed with highly manual methods, such as using graph paper. The goal of the new methodology was to achieve quantitative testing while reducing variability in data acquisition, analysis and interpretation of the results. The developed process was validated on two machines at two different institutions. Results: At least 25 of the 45 (56%) tests which required customer demo can be streamlined and performed using EPIDs. More than half of the AT tests can be fully automated using the developer mode, while others still require some user interaction. Overall, the preliminary data shows that EPID-based linac AT can be performed in less than a day, compared to 2–3 days using conventional methods. Conclusions: Our preliminary results show that performance of onboard imagers is quite suitable for both geometric and dosimetric testing of TrueBeam systems. A standardized AT process can tremendously improve efficiency, and minimize the variability related to third party quality assurance (QA) equipment and the available onsite expertise. Research funding provided by Varian Medical Systems. Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical

  10. Identifying the sources and processes of mercury in subtropical estuarine and ocean sediments using Hg isotopic composition.

    PubMed

    Yin, Runsheng; Feng, Xinbin; Chen, Baowei; Zhang, Junjun; Wang, Wenxiong; Li, Xiangdong

    2015-02-01

    The concentrations and isotopic compositions of mercury (Hg) in surface sediments of the Pearl River Estuary (PRE) and the South China Sea (SCS) were analyzed. The data revealed significant differences between the total Hg (THg) in fine-grained sediments collected from the PRE (8-251 μg kg(-1)) and those collected from the SCS (12-83 μg kg(-1)). Large spatial variations in Hg isotopic compositions were observed in the SCS (δ(202)Hg, from -2.82 to -2.10‰; Δ(199)Hg, from +0.21 to +0.45‰) and PRE (δ(202)Hg, from -2.80 to -0.68‰; Δ(199)Hg, from -0.15 to +0.16‰). The large positive Δ(199)Hg in the SCS indicated that a fraction of Hg has undergone Hg(2+) photoreduction processes prior to incorporation into the sediments. The relatively negative Δ(199)Hg values in the PRE indicated that photoreduction of Hg is not the primary route for the removal of Hg from the water column. The riverine input of fine particles played an important role in transporting Hg to the PRE sediments. In the deep ocean bed of the SCS, source-related signatures of Hg isotopes may have been altered by natural geochemical processes (e.g., Hg(2+) photoreduction and preferential adsorption processes). Using Hg isotope compositions, we estimate that river deliveries of Hg from industrial and urban sources and natural soils could be the main inputs of Hg to the PRE. However, the use of Hg isotopes as tracers in source attribution could be limited because of the isotope fractionation by natural processes in the SCS. PMID:25565343

  11. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process

    PubMed Central

    Springelkamp, Henriët.; Höhn, René; Mishra, Aniket; Hysi, Pirro G.; Khor, Chiea-Chuen; Loomis, Stephanie J.; Bailey, Jessica N. Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F.; Luo, Xiaoyan; Ramdas, Wishal D.; Vithana, Eranga; Nongpiur, Monisha E.; Montgomery, Grant W.; Xu, Liang; Mountain, Jenny E.; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C.; Sim, Kar-Seng; van Leeuwen, Elisabeth M.; Iglesias, Adriana I.; Verhoeven, Virginie J. M.; Hauser, Michael A.; Loon, Seng-Chee; Despriet, Dominiek D. G.; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G.; Schillert, Arne; Kang, Jae H.; Landers, John; Jonasson, Fridbert; Cree, Angela J.; van Koolwijk, Leonieke M. E.; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Mitchell, Paul; Wang, Jie Jin; Rochtchina, Elena; Attia, John; Scott, Rodney; Holliday, Elizabeth G.; Wong, Tien-Yin; Baird, Paul N.; Xie, Jing; Inouye, Michael; Viswanathan, Ananth; Sim, Xueling; Weinreb, Robert N.; de Jong, Paulus T. V. M.; Oostra, Ben A.; Uitterlinden, André G.; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P.; Allingham, R. Rand; Brilliant, Murray H.; Budenz, Donald L.; Cooke Bailey, Jessica N.; Christen, William G.; Fingert, John; Friedman, David S.; Gaasterland, Douglas; Gaasterland, Terry; Haines, Jonathan L.; Hauser, Michael A.; Kang, Jae Hee; Kraft, Peter; Lee, Richard K.; Lichter, Paul R.; Liu, Yutao; Loomis, Stephanie J.; Moroi, Sayoko E.; Pasquale, Louis R.; Pericak-Vance, Margaret A.; Realini, Anthony; Richards, Julia E.; Schuman, Joel S.; Scott, William K.; Singh, Kuldev; Sit, Arthur J.; Vollrath, Douglas; Weinreb, Robert N.; Wiggs, Janey L.; Wollstein, Gadi; Zack, Donald J.; Zhang, Kang; Donnelly (Chair), Peter; Barroso (Deputy Chair), Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N. 
A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C. A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew (Chair), Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C. A.; Spector, Timothy D.; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R.; Teo, Yik-Ying; Haines, Jonathan L.; Wolfs, Roger C. W.; Lemij, Hans G.; Tai, E-Shyong; Jansonius, Nomdo M.; Jonas, Jost B.; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C.; Klaver, Caroline C. W.; Craig, Jamie E.; Macgregor, Stuart; Mackey, David A.; Lotery, Andrew J.; Stefansson, Kari; Bergen, Arthur A. B.; Young, Terri L.; Wiggs, Janey L.; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R.; Hewitt, Alex W.; van Duijn, Cornelia M.; Hammond, Christopher J.

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763

  12. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process.

    PubMed

    Springelkamp, Henriët; Höhn, René; Mishra, Aniket; Hysi, Pirro G; Khor, Chiea-Chuen; Loomis, Stephanie J; Bailey, Jessica N Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F; Luo, Xiaoyan; Ramdas, Wishal D; Vithana, Eranga; Nongpiur, Monisha E; Montgomery, Grant W; Xu, Liang; Mountain, Jenny E; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C; Sim, Kar-Seng; van Leeuwen, Elisabeth M; Iglesias, Adriana I; Verhoeven, Virginie J M; Hauser, Michael A; Loon, Seng-Chee; Despriet, Dominiek D G; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G; Schillert, Arne; Kang, Jae H; Landers, John; Jonasson, Fridbert; Cree, Angela J; van Koolwijk, Leonieke M E; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Weinreb, Robert N; de Jong, Paulus T V M; Oostra, Ben A; Uitterlinden, André G; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P; Spector, Timothy D; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R; Teo, Yik-Ying; Haines, Jonathan L; Wolfs, Roger C W; Lemij, Hans G; Tai, E-Shyong; Jansonius, Nomdo M; Jonas, Jost B; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C; Klaver, Caroline C W; Craig, Jamie E; Macgregor, Stuart; Mackey, David A; Lotery, Andrew J; Stefansson, Kari; Bergen, Arthur A B; Young, Terri L; Wiggs, Janey L; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R; Hewitt, Alex W; van Duijn, Cornelia M; Hammond, Christopher J

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763

  13. Robust calibration of a global aerosol model

    NASA Astrophysics Data System (ADS)

    Lee, L.; Carslaw, K. S.; Pringle, K. J.; Reddington, C.

    2013-12-01

    Comparison of models and observations is vital for evaluating how well computer models can simulate real world processes. However, many current methods are lacking in their assessment of the model uncertainty, which introduces questions regarding the robustness of the observationally constrained model. In most cases, models are evaluated against observations using a single baseline simulation considered to represent the models' best estimate. The model is then improved in some way so that its comparison to observations is improved. Continuous adjustments in such a way may result in a model that compares better to observations but there may be many compensating features which make prediction with the newly calibrated model difficult to justify. There may also be some model outputs whose comparison to observations becomes worse in some regions/seasons as others improve. In such cases calibration cannot be considered robust. We present details of the calibration of a global aerosol model, GLOMAP, in which we consider not just a single model setup but a perturbed physics ensemble with 28 uncertain parameters. We first quantify the uncertainty in various model outputs (CCN, CN) for the year 2008 and use statistical emulation to identify which of the 28 parameters contribute most to this uncertainty. We then compare the emulated model simulations in the entire parametric uncertainty space to observations. Regions where the entire ensemble lies outside the error of the observations indicate structural model error or gaps in current knowledge which allows us to target future research areas. Where there is some agreement with the observations we use the information on the sources of the model uncertainty to identify geographical regions in which the important parameters are similar. 
Identification of regional calibration clusters helps us to use information from observation-rich regions to calibrate regions with sparse observations and allows us to make recommendations for
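
    The perturbed-parameter-ensemble-plus-emulator workflow can be caricatured with a linear surrogate standing in for the statistical emulators used in such studies; `toy_model` and its coefficients are invented stand-ins for expensive GLOMAP runs, not real model physics:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(x):
    """Stand-in for one expensive model run; x holds 5 parameters in [0, 1]."""
    return 3.0 * x[0] + 0.5 * x[1] + 0.1 * x[2] + 0.0 * x[3] + 0.05 * x[4] \
        + rng.normal(0, 0.01)

# Perturbed-parameter ensemble: a random design over the 5-D parameter space.
X = rng.random((200, 5))
y = np.array([toy_model(x) for x in X])

# Linear emulator fitted by least squares; |coefficients| rank the parameters
# by their contribution to output variability.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sensitivity = np.abs(coef[1:])
ranking = np.argsort(sensitivity)[::-1]
print(ranking)  # parameter 0 should dominate
```

    Once fitted, the cheap surrogate can be evaluated across the whole parametric uncertainty space and compared against observations, which is impractical with the full model.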

  14. Using natural salmon colonization as a guide to identify functional links between physical and biological processes (Invited)

    NASA Astrophysics Data System (ADS)

    Pess, G. R.

    2009-12-01

    Establishing clear functional links between physical and biological processes in aquatic systems has been difficult to accomplish because many of the aquatic ecosystems we attempt to quantify have been significantly degraded from both perspectives. However there are freshwater ecosystems along the Western Pacific Rim where these functional relationships between physical and biological processes remain intact. I present three examples from Alaska, British Columbia, and Washington State where natural salmon colonization in functioning aquatic ecosystems has allowed for the quantification of functional links between habitat characteristics and the occurrence, persistence, and abundance of salmon populations in newly opened habitats. Habitat metrics such as habitat area, residual depth, and substrate size, combined with biological factors such as individual salmon condition (e.g., length, weight), population dynamics (e.g., population growth rate), and exogenous variables (e.g. ocean conditions) determine many of the functional links that are important to aquatic ecosystems. The relationships between these physical and biological variables can help better define what is needed in aquatic ecosystems that have been simplified and where aquatic biota have significantly declined.

  15. Robust Three-Metallization Back End of Line Process for 0.18 μm Embedded Ferroelectric Random Access Memory

    NASA Astrophysics Data System (ADS)

    Kang, Seung-Kuk; Rhie, Hyoung-Seub; Kim, Hyun-Ho; Koo, Bon-Jae; Joo, Heung-Jin; Park, Jung-Hun; Kang, Young-Min; Choi, Do-Hyun; Lee, Sung-Young; Jeong, Hong-Sik; Kim, Kinam

    2005-04-01

    We developed ferroelectric random access memory (FRAM)-embedded smartcards in which FRAM replaces electrically erasable PROM (EEPROM) and static random access memory (SRAM) to improve the read/write cycle time and endurance of the data memories during operation; the main time delay observed in EEPROM-embedded smartcards occurs because of the slow data update time. EEPROM-embedded smartcards contain EEPROM, ROM, and SRAM. To realize FRAM-embedded smartcards, submicron ferroelectric capacitors must be integrated into embedded-logic complementary metal oxide semiconductor (CMOS) processes without degradation of the ferroelectric properties. We resolved this process issue from the viewpoint of the back end of line (BEOL) process. As a result, we achieved a highly reliable sensing window for FRAM-embedded smartcards through novel integration schemes such as tungsten and barrier metal (BM) technology, a multilevel encapsulating layer (EBL) scheme, and optimized intermetallic dielectric (IMD) technology.

  16. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools

    PubMed Central

    2012-01-01

    Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054
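
    Evaluations of the kind described above typically score a system's sentence splits, token boundaries, or entity mentions against gold-standard spans with precision/recall/F1. A minimal sketch; the example spans are invented, not drawn from CRAFT:

```python
def span_prf(gold, pred):
    """Precision, recall, and F1 over exact character spans (start, end)."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Gold tokenization of "IL-2 gene" vs. a system that splits on the hyphen.
gold = [(0, 4), (5, 9)]           # "IL-2", "gene"
pred = [(0, 2), (3, 4), (5, 9)]   # "IL", "2", "gene"
print(span_prf(gold, pred))       # -> (0.333..., 0.5, 0.4)
```

    Exact-span matching is the strictest convention; published evaluations sometimes also report relaxed (overlap-based) variants.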

  17. Gafchromic EBT2 dosimetry via robust optimization

    NASA Astrophysics Data System (ADS)

    Alves, Victor G. L.; Cardoso, Simone C.; da Silva, Ademir X.

    2013-07-01

    In-house software was developed in MATLAB to perform optimized dose calculations in radiotherapy. The aim of this work is to demonstrate an improvement on the multichannel method using robust optimization, which addresses optimization problems where robustness is sought against uncertainty. An optimization framework was developed to compare the remaining error from the optimization process of robust methods against the conventional triple-channel method. The proposed robust method minimizes the dose difference over all channels compared with the original triple-channel method, mainly over the clinical dose range. Even when a Gafchromic EBT2 film irradiated by composite IMRT fields is analyzed, more consistent values than those obtained by the triple-channel method are found, and Newton's rings patterns are minimized. When robust methods are applied, the difference between blue- and red-channel doses was found to be very small, about 10^4 times less than that obtained by triple-channel optimization. It is well known that a single outlier may cause a large error in a least-squares estimator. The blue-channel correction of non-uniformities may perform best when robust optimization is used. A variety of anomalies (artifacts, Newton's rings, and other disturbances) behave differently from natural Gaussian random noise such as variations of the film thickness. Robust optimization methods may be more realistic since this approach uses fatter-tailed distributions such as the Laplace, and could mitigate the Newton's rings pattern.
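
    A Laplace (fat-tailed) noise model corresponds to least-absolute-deviations (L1) fitting rather than least squares. A minimal illustration of why that is robust to a single outlier, using an iteratively reweighted least-squares L1 solver on invented numbers (not film data):

```python
import numpy as np

def l1_fit(x, y, iters=50, eps=1e-8):
    """Least-absolute-deviations line fit via iteratively reweighted least squares."""
    A = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(iters):
        # Weighted least squares with weights 1/sqrt(|residual|) approximates L1.
        W = A * w[:, None]
        coef, *_ = np.linalg.lstsq(W, y * w, rcond=None)
        resid = y - A @ coef
        w = 1.0 / np.sqrt(np.abs(resid) + eps)
    return coef  # (intercept, slope)

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[5] += 50.0                      # one outlier, e.g. an artifact on the film

ls = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
l1 = l1_fit(x, y)
print(ls[1], l1[1])   # the L1 slope stays near the true value 2.0
```

    The least-squares slope is pulled well away from 2.0 by the single outlier, while the L1 fit essentially ignores it, which is the behaviour the abstract exploits for anomalies that are not Gaussian.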

  18. Biological robustness: paradigms, mechanisms, and systems principles.

    PubMed

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness, however each paradigm has a limited scope of applicability and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits, however common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  19. Evolution, robustness, and the cost of complexity

    NASA Astrophysics Data System (ADS)

    Leclerc, Robert D.

    Evolutionary systems biology is the study of how regulatory networks evolve under the influence of natural selection, mutation, and the environment. It attempts to explain the dynamics, architecture, and variational properties of regulatory networks and how this relates to the origins, evolution and maintenance of complex and diverse functions. Key questions in the field of evolutionary systems biology ask how does robustness evolve, what are the factors that drive its evolution, and what are the underlying mechanisms that discharge robustness? In this dissertation, I investigate the evolution of robustness in artificial gene regulatory networks. I show how different conceptions of robustness fit together as pieces of a general notion of robustness, and I show how this relationship implies potential tradeoffs in how robustness can be implemented. I present results which suggest that inherent logistical problems with genetic recombination may help drive the evolution of modularity in the genotype-phenotype map. Finally, I show that robustness implies a parsimonious network structure, one which is sparsely connected and not unnecessarily complex. These results challenge conclusions drawn from many high-profile studies, and may offer a broad new perspective on biological systems. Because life must orchestrate its existence on random nonlinear thermodynamic processes, it will be designed and implemented in the most probable way. Life turns the law of entropy back onto itself to root out every inefficiency, waste, and every surprise.

  20. Biological Robustness: Paradigms, Mechanisms, and Systems Principles

    PubMed Central

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness, however each paradigm has a limited scope of applicability and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits, however common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  1. Process development of a New Haemophilus influenzae type b conjugate vaccine and the use of mathematical modeling to identify process optimization possibilities.

    PubMed

    Hamidi, Ahd; Kreeftenberg, Hans; V D Pol, Leo; Ghimire, Saroj; V D Wielen, Luuk A M; Ottens, Marcel

    2016-05-01

    Vaccination is one of the most successful public health interventions and a cost-effective tool for preventing deaths among young children. The earliest vaccines were developed following empirical methods, creating vaccines by trial and error. New process development tools, for example mathematical modeling, as well as new regulatory initiatives requiring better understanding of both the product and the process, are being applied to well-characterized biopharmaceuticals (for example recombinant proteins). The vaccine industry still lags behind these industries. A production process for a new Haemophilus influenzae type b (Hib) conjugate vaccine, including related quality control (QC) tests, was developed and transferred to a number of emerging vaccine manufacturers. This contributed to a sustainable global supply of affordable Hib conjugate vaccines, as illustrated by the market launch of the first Hib vaccine based on this technology in 2007 and the concomitant price reduction of Hib vaccines. This paper describes the development approach followed for this Hib conjugate vaccine as well as the mathematical modeling tool applied recently in order to indicate options for further improvements of the initial Hib process. The strategy followed during the process development of this Hib conjugate vaccine was a targeted and integrated approach based on prior knowledge and experience with similar products, using multi-disciplinary expertise. Mathematical modeling was used to develop a predictive model for the initial Hib process (the 'baseline' model) as well as an 'optimized' model, by proposing a number of process changes which could lead to further reduction in price. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:568-580, 2016. PMID:26821825

  2. Genomic analysis of snub-nosed monkeys (Rhinopithecus) identifies genes and processes related to high-altitude adaptation.

    PubMed

    Yu, Li; Wang, Guo-Dong; Ruan, Jue; Chen, Yong-Bin; Yang, Cui-Ping; Cao, Xue; Wu, Hong; Liu, Yan-Hu; Du, Zheng-Lin; Wang, Xiao-Ping; Yang, Jing; Cheng, Shao-Chen; Zhong, Li; Wang, Lu; Wang, Xuan; Hu, Jing-Yang; Fang, Lu; Bai, Bing; Wang, Kai-Le; Yuan, Na; Wu, Shi-Fang; Li, Bao-Guo; Zhang, Jin-Guo; Yang, Ye-Qin; Zhang, Cheng-Lin; Long, Yong-Cheng; Li, Hai-Shu; Yang, Jing-Yuan; Irwin, David M; Ryder, Oliver A; Li, Ying; Wu, Chung-I; Zhang, Ya-Ping

    2016-08-01

    The snub-nosed monkey genus Rhinopithecus includes five closely related species distributed across altitudinal gradients from 800 to 4,500 m. Rhinopithecus bieti, Rhinopithecus roxellana, and Rhinopithecus strykeri inhabit high-altitude habitats, whereas Rhinopithecus brelichi and Rhinopithecus avunculus inhabit lowland regions. We report the de novo whole-genome sequence of R. bieti and genomic sequences for the four other species. Eight shared substitutions were found in six genes related to lung function, DNA repair, and angiogenesis in the high-altitude snub-nosed monkeys. Functional assays showed that the high-altitude variant of CDT1 (Ala537Val) renders cells more resistant to UV irradiation, and the high-altitude variants of RNASE4 (Asn89Lys and Thr128Ile) confer enhanced ability to induce endothelial tube formation in vitro. Genomic scans in the R. bieti and R. roxellana populations identified signatures of selection between and within populations at genes involved in functions relevant to high-altitude adaptation. These results provide valuable insights into the adaptation to high altitude in the snub-nosed monkeys. PMID:27399969

  3. Niche Divergence versus Neutral Processes: Combined Environmental and Genetic Analyses Identify Contrasting Patterns of Differentiation in Recently Diverged Pine Species

    PubMed Central

    Moreno-Letelier, Alejandra; Ortíz-Medrano, Alejandra; Piñero, Daniel

    2013-01-01

    Background and Aims Resolving relationships among recently diverged taxa poses a challenge due to shared polymorphism and weak reproductive barriers. Multiple lines of evidence are needed to identify independently evolving lineages. This is especially true of long-lived species with large effective population sizes and slow rates of lineage sorting. North American pines are an interesting group in which to test this multiple-evidence approach. Our aim is to combine cytoplasmic genetic markers with environmental information to clarify species boundaries and relationships in the species complex of Pinus flexilis, Pinus ayacahuite, and Pinus strobiformis. Methods Mitochondrial and chloroplast sequences were combined with previously obtained microsatellite data and contrasted with environmental information to reconstruct phylogenetic relationships of the species complex. Ecological niche models were compared to test whether ecological divergence is significant among species. Key Results and Conclusion Separately, both genetic and ecological evidence support a clear differentiation of all three species, albeit with different topologies, and also reveal an ancestral contact zone between P. strobiformis and P. ayacahuite. The marked ecological differentiation of P. flexilis suggests that ecological speciation has occurred in this lineage, but this is not reflected in neutral markers. The inclusion of environmental traits in phylogenetic reconstruction improved the resolution of internal branches. We suggest that combining environmental and genetic information would be useful for species delimitation and phylogenetic studies in other recently diverged species complexes. PMID:24205167

  4. Calculation of the decision thresholds for radionuclides identified in gamma-ray spectra by post-processing peak analysis results

    NASA Astrophysics Data System (ADS)

    Korun, Matjaž; Vodenik, Branko; Zorko, Benjamin

    2016-03-01

    A method for calculating the decision thresholds for gamma-ray emitters, identified in gamma-ray spectrometric analyses, is described. The method is suitable for application in computerized spectra-analyzing procedures. In the calculation, the number of counts and the uncertainty in the number of counts for the peaks associated with the emitter are used. The method makes it possible to calculate decision thresholds from peaks on a curved background and from overlapping peaks. The uncertainty in the number of counts used in the calculation was computed using Canberra's Standard Peak Search Program (Canberra, 1986, Peak Search Algorithm Manual 07-0064). For isolated peaks, the decision threshold exceeds the value calculated from the channel contents in an energy region that is 2.5 FWHM wide, covering the background in the immediate vicinity of the peak. The decision thresholds vary by approximately 20% over a dynamic range of peak areas of about 1000. In the case of overlapping peaks, the decision threshold increases considerably. For multi-gamma-ray emitters, a common decision threshold is calculated from the decision thresholds obtained from individual gamma-ray emissions, and is smaller than the smallest of the individual decision thresholds.
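    The idea of a decision threshold derived from background counts can be sketched with a simple Currie-style calculation. This is an illustrative stand-in, not the paper's post-processing method (which works from the peak-analysis uncertainties); the coverage factor and the background estimated from a ~2.5 FWHM region are assumptions:

    ```python
    import math

    def decision_threshold(background_counts, k=1.645):
        """Currie-style decision threshold for a peak sitting on a
        background of `background_counts` total counts, estimated from a
        region about 2.5 FWHM wide around the peak position. Under the
        null hypothesis (no peak), the net-count variance is roughly
        twice the background variance, since net = gross - background."""
        sigma0 = math.sqrt(2.0 * background_counts)
        return k * sigma0  # k = 1.645 gives ~5% false-positive rate
    ```

    A net peak area exceeding this value would be reported as a detected emission; areas below it are consistent with background fluctuations.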

  5. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence

    PubMed Central

    McLaughlin, Katie A.; Garrad, Megan C.; Somerville, Leah H.

    2015-01-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds. PMID:26869841

  6. Using empirical models of species colonization under multiple threatening processes to identify complementary threat-mitigation strategies.

    PubMed

    Tulloch, Ayesha I T; Mortelliti, Alessio; Kay, Geoffrey M; Florance, Daniel; Lindenmayer, David

    2016-08-01

    Approaches to prioritize conservation actions are gaining popularity. However, limited empirical evidence exists on which species might benefit most from threat mitigation and on what combination of threats, if mitigated simultaneously, would result in the best outcomes for biodiversity. We devised a way to prioritize threat mitigation at a regional scale with empirical evidence based on predicted changes to population dynamics, information that is lacking in most threat-management prioritization frameworks that rely on expert elicitation. We used dynamic occupancy models to investigate the effects of multiple threats (tree cover, grazing, and the presence of a hyperaggressive competitor, the Noisy Miner (Manorina melanocephala)) on bird-population dynamics in an endangered woodland community in southeastern Australia. The 3 threatening processes had different effects on different species. We used predicted patch-colonization probabilities to estimate the benefit to each species of removing one or more threats. We then determined the complementary set of threat-mitigation strategies that maximized colonization of all species while ensuring that redundant actions with little benefit were avoided. The single action that resulted in the highest colonization was increasing tree cover, which increased patch colonization by 5% and 11% on average across all species and for declining species, respectively. Combining Noisy Miner control with increasing tree cover increased species colonization by 10% and 19% on average for all species and for declining species, respectively, and was a higher priority than changing grazing regimes. Guidance for prioritizing threat mitigation is critical in the face of cumulative threatening processes. By incorporating population dynamics in prioritization of threat management, our approach helps ensure funding is not wasted on ineffective management programs that target the wrong threats or species. PMID:26711716
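    The complementarity idea, selecting actions that add benefit beyond what already-chosen actions provide, can be sketched as a greedy selection over per-action, per-species colonization gains. The data structure and threshold are illustrative assumptions; the paper derives the gains from fitted dynamic occupancy models:

    ```python
    def complementary_strategies(benefit, threshold=0.0):
        """Greedy complementary-set selection: repeatedly add the action
        with the largest marginal gain in summed species colonization,
        stopping once no remaining action adds more than `threshold`.
        `benefit[action][species]` is that action's colonization gain."""
        chosen, achieved = [], {}
        actions = set(benefit)
        while actions:
            def gain(a):
                # Only benefit beyond what chosen actions already deliver
                return sum(max(0.0, v - achieved.get(sp, 0.0))
                           for sp, v in benefit[a].items())
            best = max(actions, key=gain)
            if gain(best) <= threshold:
                break  # remaining actions are redundant
            chosen.append(best)
            for sp, v in benefit[best].items():
                achieved[sp] = max(achieved.get(sp, 0.0), v)
            actions.remove(best)
        return chosen
    ```

    With benefits shaped like the abstract's example, tree cover would be chosen first and a low-gain action such as changing grazing would be dropped as redundant.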

  7. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609
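    The inverse-probability-of-censoring weighting (IPCW) that the doubly robust estimators build on can be sketched briefly: uncensored observations are upweighted by the inverse of the censoring survival function, censored ones get weight zero. The function names and the `censor_surv` callable are illustrative assumptions; the paper's doubly robust estimators additionally augment this with an outcome model:

    ```python
    import numpy as np

    def ipcw_weights(time, event, censor_surv):
        """IPCW weights: 1/G(T_i) for observed events (event == 1),
        0 for censored observations, where G is the censoring survival
        function evaluated at the observed time."""
        w = np.zeros(len(time))
        for i, (t, d) in enumerate(zip(time, event)):
            if d == 1:
                w[i] = 1.0 / censor_surv(t)
        return w

    def ipcw_loss(y_pred, time, event, censor_surv):
        """Weighted squared-error loss over uncensored observations,
        the kind of loss estimator a censored regression tree can split on."""
        w = ipcw_weights(time, event, censor_surv)
        return np.sum(w * (np.asarray(time) - np.asarray(y_pred)) ** 2) / np.sum(w)
    ```

    In a survival tree, a loss of this form is evaluated for each candidate split; the doubly robust versions remain consistent if either the censoring model or the outcome model is correctly specified.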

  8. Robust Collaborative Recommendation

    NASA Astrophysics Data System (ADS)

    Burke, Robin; O'Mahony, Michael P.; Hurley, Neil J.

    Collaborative recommender systems are vulnerable to malicious users who seek to bias their output, causing them to recommend (or not recommend) particular items. This problem has been an active research topic since 2002. Researchers have found that the most widely-studied memory-based algorithms have significant vulnerabilities to attacks that can be fairly easily mounted. This chapter discusses these findings and the responses that have been investigated, especially detection of attack profiles and the implementation of robust recommendation algorithms.

  9. Identifying weathering sources and processes in an outlet glacier of the Greenland Ice Sheet using Ca and Sr isotope ratios

    NASA Astrophysics Data System (ADS)

    Hindshaw, Ruth S.; Rickli, Jörg; Leuthold, Julien; Wadham, Jemma; Bourdon, Bernard

    2014-11-01

    Chemical and isotope data (ε40Ca, δ44/42Ca, 87Sr/86Sr, δ18O) of river water samples were collected twice daily for 28 days in 2009 from the outlet river of Leverett Glacier, West Greenland. The water chemistry data was combined with detailed geochemical analysis and petrography of bulk rock, mineral separates and sediment samples in order to constrain the mineral weathering sources to the river. The average isotopic compositions measured in the river, with 2SD of all the values measured, were ε40Ca = +4.0 ± 1.4, δ44/42Ca = +0.60 ± 0.10‰ and 87Sr/86Sr = 0.74243 ± 0.00327. Based on changes in bulk meltwater discharge, the hydrochemical data was divided into three hydrological periods. The first period was marked by the tail-end of an outburst event and was characterised by water with decreasing suspended sediment concentrations (SSC), ion concentrations and pH. During the second hydrological period, discharge increased whilst 87Sr/86Sr decreased from 0.74550 to 0.74164. Based on binary mixing diagrams using 87Sr/86Sr with Na/Sr, Ca/Sr and ε40Ca, this is interpreted to reflect an increase in reactive mineral weathering, in particular epidote, as the water residence time decreases. The decrease in water residence time is a result of the evolution from a distributed (long water residence time) to a channelised (short water residence time) subglacial drainage network. The third hydrological period was defined as the period when overall discharge was decreasing. This hydrological period was marked by prominent diurnal cycles in discharge. During this period, significant correlations between δ44/42Ca and SSC and δ18O were observed which are suggestive of fractionation during adsorption. This study demonstrates the potential of radiogenic Ca to both identify temporally changing mineral sources in conjunction with 87Sr/86Sr values and to separate source and fractionation effects in δ44/42Ca values.
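    The binary mixing diagrams mentioned above rest on a concentration-weighted two-endmember mixing relation for 87Sr/86Sr, which can be inverted for the mixing fraction. The endmember values in the example below are illustrative assumptions, not the paper's measured mineral compositions:

    ```python
    def sr_mixing_fraction(r_sample, r_a, r_b, sr_a, sr_b):
        """Fraction f of endmember A in a two-endmember mixture, from the
        87Sr/86Sr ratio. Mixing is weighted by Sr concentration:
            r_mix = (f*sr_a*r_a + (1-f)*sr_b*r_b) / (f*sr_a + (1-f)*sr_b)
        Solving for f gives the closed form below."""
        num = sr_b * (r_sample - r_b)
        den = sr_a * (r_a - r_sample) + sr_b * (r_sample - r_b)
        return num / den
    ```

    A river sample plotting midway between two equal-concentration endmembers yields f = 0.5; shifts in f through a melt season are what signal changing mineral weathering sources.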

  10. A High-Resolution Dynamic Approach to Identifying and Characterizing Slow Slip and Subduction Locking Processes in Cascadia

    NASA Astrophysics Data System (ADS)

    Dimitrova, L. L.; Haines, A. J.; Wallace, L. M.; Bartlow, N. M.

    2014-12-01

    Slow slip events (SSEs) in Cascadia occur at ~30-50 km depth, every 10-19 months, and typically involve slip of a few cm, producing surface displacements on the order of a few mm up to ~1 cm. There is a well-known association between tremor and SSEs; however, there are more frequent tremor episodes that are not clearly associated with geodetically detected SSEs (Wech and Creager 2011). This motivates the question: Are there smaller SSE signals that we are currently not recognizing geodetically? Most existing methods to investigate transient deformation with continuous GPS (cGPS) data employ kinematic, smoothed approaches to fit the cGPS data, limiting SSE identification and characterization. Recently, Haines et al. (submitted) showed that Vertical Derivatives of Horizontal Stress (VDoHS) rates, calculated from GPS data by solving the force balance equations at the Earth's surface, represent the most inclusive and spatially compact surface expressions of subsurface deformation sources: VDoHS rate vectors are tightly localized above the sources and point in the direction of push or pull. We adapt this approach, previously applied to campaign GPS data in New Zealand (e.g., Dimitrova et al. 2013), to daily cGPS time series from Cascadia and compare our results with those from the Network Inversion Filter (NIF) for 2009 (Bartlow et al. 2011). In both NIF and VDoHS rate inversions, the main 2009 SSE pulse reaches a peak slip value and splits into northern and southern sections. However, our inversion shows that the SSE started prior to July 27-28, compared to August 6-7 from the NIF results. Furthermore, we detect a smaller (~1 mm surface displacement) event from June 29-July 7 in southern Cascadia, which had not been identified previously. VDoHS rates also reveal the boundaries between the locked and unlocked portions of the megathrust, and we can track how this varies throughout the SSE cycle. Above the locked interface, the pull of the subducted plate generates shear

  11. An audit of the processes involved in identifying and assessing bilingual learners suspected of being dyslexic: a Scottish study.

    PubMed

    Deponio, P; Landon, J; Mullin, K; Reid, G

    2000-01-01

    The Commission for Racial Equality (Special Educational Needs Assessment in Strathclyde: Report of a Formal Investigation, CRE, London, 1996) highlighted the significant under-representation of bilingual children among pupils assessed as having specific learning difficulties/dyslexia. In the present study, an audit was undertaken in order to explore issues arising from the Commission's report, initially using 53 schools from one education authority. This revealed an extremely low incidence of suspected dyslexia among bilingual pupils. A second study was carried out in a further nine education authorities, surveying 91 schools with bilingual pupils. The incidence of suspected dyslexia in bilingual pupils was again found to be extremely low. Twenty-seven cases were examined, most concerning pupils aged 7:0-9:0. Difficulties associated with conventional indicators of dyslexia are discussed. A wide variety of assessment approaches were reported, and the use of first language (L1) assessment varied. The process of assessment tended to be lengthy and inconclusive. However, this report suggests that caution is necessary when considering dyslexia in the early stages of second language (L2) development. PMID:10840505

  12. Pattern recognition and data mining techniques to identify factors in wafer processing and control determining overlay error

    NASA Astrophysics Data System (ADS)

    Lam, Auguste; Ypma, Alexander; Gatefait, Maxime; Deckers, David; Koopman, Arne; van Haren, Richard; Beltman, Jan

    2015-03-01

    On-product overlay can be improved through the use of context data from the fab and the scanner. Continuous improvements in lithography and processing performance over the past years have resulted in consequent overlay performance improvement for critical layers. Identification of the remaining factors causing systematic disturbances and inefficiencies will further reduce overlay. By building a context database, mappings between context, fingerprints and alignment & overlay metrology can be learned through techniques from pattern recognition and data mining. We relate structure ('patterns') in the metrology data to relevant contextual factors. Once understood, these factors could be moved to the known effects (e.g. the presence of systematic fingerprints from reticle writing error or lens and reticle heating). Hence, we build up a knowledge base of known effects based on data. Outcomes from such an integral ('holistic') approach to lithography data analysis may be exploited in a model-based predictive overlay controller that combines feedback and feedforward control [1]. Hence, the available measurements from scanner, fab and metrology equipment are combined to reveal opportunities for further overlay improvement which would otherwise go unnoticed.

  13. Robust impedance shaping telemanipulation

    SciTech Connect

    Colgate, J.E.

    1993-08-01

    When a human operator performs a task via a bilateral manipulator, the feel of the task is embodied in the mechanical impedance of the manipulator. Traditionally, a bilateral manipulator is designed for transparency; i.e., so that the impedance reflected through the manipulator closely approximates that of the task. Impedance shaping bilateral control, introduced here, differs in that it treats the bilateral manipulator as a means of constructively altering the impedance of a task. This concept is particularly valuable if the characteristic dimensions (e.g., force, length, time) of the task impedance are very different from those of the human limb. It is shown that a general form of impedance shaping control consists of a conventional power-scaling bilateral controller augmented with a real-time interactive task simulation (i.e., a virtual environment). An approach to impedance shaping based on kinematic similarity between tasks of different scale is introduced and illustrated with an example. It is shown that an important consideration in impedance shaping controller design is robustness; i.e., guaranteeing the stability of the operator/manipulator/task system. A general condition for the robustness of a bilateral manipulator is derived. This condition is based on the structured singular value ({mu}). An example of robust impedance shaping bilateral control is presented and discussed.

  14. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.
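    The cascade mechanism described above, mutual percolation on two coupled networks, can be illustrated with a toy simulation: a node survives only if it lies in the giant component of its own network and its one-to-one dependency partner also survives, iterated to a fixed point. This is a sketch of the process, not the paper's analytic framework; the Erdos-Renyi construction and parameters are assumptions:

    ```python
    import random

    def cascade(n, p_intra, frac_remove, seed=0):
        """Mutual percolation on two interdependent Erdos-Renyi graphs A
        and B of n nodes, with node i in A depending on node i in B and
        vice versa. Returns the surviving mutual-giant-component fraction
        after an initial random removal of frac_remove nodes."""
        rng = random.Random(seed)
        def er_edges():
            return [(i, j) for i in range(n) for j in range(i + 1, n)
                    if rng.random() < p_intra]
        edges_a, edges_b = er_edges(), er_edges()
        alive = set(range(n)) - set(rng.sample(range(n), int(frac_remove * n)))

        def giant(edges, alive):
            # Largest connected component among surviving nodes (union-find)
            parent = {v: v for v in alive}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, v in edges:
                if u in alive and v in alive:
                    parent[find(u)] = find(v)
            comps = {}
            for v in alive:
                comps.setdefault(find(v), set()).add(v)
            return max(comps.values(), key=len, default=set())

        while True:
            # Failures propagate back and forth until a fixed point
            alive_next = giant(edges_a, alive) & giant(edges_b, alive)
            if alive_next == alive:
                return len(alive) / n
            alive = alive_next
    ```

    Sweeping `frac_remove` over sparse graphs reproduces the abrupt, first-order-like collapse the abstract contrasts with the continuous transition of a single network.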

  15. Enhancing the Performance of a robust sol-gel-processed p-type delafossite CuFeO2 photocathode for solar water reduction.

    PubMed

    Prévot, Mathieu S; Guijarro, Néstor; Sivula, Kevin

    2015-04-24

    Delafossite CuFeO2 is a promising material for solar hydrogen production, but is limited by poor photocurrent. Strategies are demonstrated herein to improve the performance of CuFeO2 electrodes prepared directly on transparent conductive substrates by using a simple sol-gel technique. Optimizing the delafossite layer thickness and increasing the majority carrier concentration (through the thermal intercalation of oxygen) give insights into the limitations of photogenerated charge extraction and enable performance improvements. In oxygen-saturated electrolyte, (sacrificial) photocurrents (1 sun illumination) up to 1.51 mA cm(-2) at +0.35 V versus a reversible hydrogen electrode (RHE) are observed. Water photoreduction with bare delafossite is limited by poor hydrogen evolution catalysis, but employing methyl viologen as an electron acceptor verifies that photogenerated electrons can be extracted from the conduction band before recombination into mid-gap trap states identified by electrochemical impedance spectroscopy. Through the use of suitable oxide overlayers and a platinum catalyst, sustained solar hydrogen production photocurrents of 0.4 mA cm(-2) at 0 V versus RHE (0.8 mA cm(-2) at -0.2 V) are demonstrated. Importantly, bare CuFeO2 is highly stable at potentials at which photocurrent is generated. No degradation is observed after 40 h under operating conditions in oxygen-saturated electrolyte. PMID:25572288

  16. Robust facial expression recognition via compressive sensing.

    PubMed

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks. PMID:22737035
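    The core of an SRC can be sketched in two steps: recover a sparse code for the test sample over the training dictionary via L1 minimization, then assign the class whose columns give the smallest reconstruction residual. The solver below is plain iterative soft-thresholding (ISTA), a generic stand-in for whatever L1 solver the paper used, and the parameter values are assumptions:

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, iters=500):
        """Iterative soft-thresholding for
        min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - A.T @ (A @ x - y) / L       # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        return x

    def src_classify(A, labels, y, lam=0.01):
        """Sparse representation classification: keep only the sparse
        coefficients belonging to each class and pick the class with the
        minimum reconstruction residual ||A x_c - y||."""
        x = ista(A, y, lam)
        best, best_res = None, np.inf
        for c in set(labels):
            mask = np.array([l == c for l in labels])
            res = np.linalg.norm(A @ np.where(mask, x, 0.0) - y)
            if res < best_res:
                best, best_res = c, res
        return best
    ```

    The robustness to occlusion reported in the abstract comes from the sparsity prior: corrupted pixels behave like a sparse error that the L1 recovery can absorb rather than spreading it across all coefficients.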

  17. Processable and robust MoS2 paper chemically cross-linked with polymeric ligands by the coordination of divalent metal ions.

    PubMed

    Liu, Yi-Tao; Tan, Zhen; Xie, Xu-Ming; Wang, Zhi-Feng; Ye, Xiong-Ying

    2013-04-01

    Inorganic graphene analogues (IGAs) are a huge and fascinating family of compounds that have extraordinary electronic, mechanical, and thermal properties. However, one of the largest problems that face the industrial application of IGAs is their poor processability, which has led to a "bottlenecking" in the development of freestanding, large-area, IGA-based thin-film devices. Herein, we report a facile and cost-efficient method to chemically modify IGAs by using their abundant coordination atoms (S, O, and N). Taking MoS2 as an example, we have prepared homogeneous "solution" systems, in which MoS2 nanosheets are chemically cross-linked through a carboxylate-containing polymeric ligand, poly(methyl methacrylate) (PMMA), by copper-ion coordination. Bonding interactions between C=O bonds and sulfur atoms through copper ions were confirmed by various characterization techniques, such as UV/Vis, FTIR, and Raman spectroscopy and XPS. By using our method, freestanding MoS2 paper with significantly improved mechanical properties was obtained, thus laying the basis for the mass production of large-area MoS2-based thin-film devices. Furthermore, copper-ion coordination was also applied to MoS2/PMMA nanocomposites. Direct and strong nanofiller/matrix bonding interactions facilitate efficient load transfer and endow the polymeric nanocomposites with an excellent reinforcement effect. This method may pave a new way to high-strength polymeric nanocomposites with superior frictional properties, flame retardance, and oxidation resistance. PMID:23378295

  18. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to parameters, but its results were worse than those from the majority of parameter sets needed by the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
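    The seed step shared by both algorithm families, finding candidate tree tops in the rasterized canopy height model (CHM) before growing crowns outward (or, equivalently, flooding the inverted surface), can be sketched as a local-maximum search. This is a pure-numpy illustration; the height threshold and window size are assumptions, not the study's parameters:

    ```python
    import numpy as np

    def detect_crowns(chm, min_height=2.0, window=1):
        """Detect candidate tree tops as strict local maxima of a canopy
        height model raster, the seeds for inverted-watershed crown
        delineation. Border cells are skipped for simplicity."""
        rows, cols = chm.shape
        tops = []
        for r in range(window, rows - window):
            for c in range(window, cols - window):
                patch = chm[r - window:r + window + 1,
                            c - window:c + window + 1]
                centre = chm[r, c]
                if centre >= min_height and centre == patch.max():
                    if (patch == centre).sum() == 1:  # strict maximum only
                        tops.append((r, c))
        return tops
    ```

    The watershed variant then floods the negated CHM from these seeds; its sensitivity to `window` and `min_height` is exactly the parameter robustness the abstract compares across the two algorithm types.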

  19. The Maternal-to-Zygotic Transition Targets Actin to Promote Robustness during Morphogenesis

    PubMed Central

    Zheng, Liuliu; Sepúlveda, Leonardo A.; Lua, Rhonald C.; Lichtarge, Olivier; Golding, Ido; Sokac, Anna Marie

    2013-01-01

    Robustness is a property built into biological systems to ensure stereotypical outcomes despite fluctuating inputs from gene dosage, biochemical noise, and the environment. During development, robustness safeguards embryos against structural and functional defects. Yet, our understanding of how robustness is achieved in embryos is limited. While much attention has been paid to the role of gene and signaling networks in promoting robust cell fate determination, little has been done to rigorously assay how mechanical processes like morphogenesis are designed to buffer against variable conditions. Here we show that the cell shape changes that drive morphogenesis can be made robust by mechanisms targeting the actin cytoskeleton. We identified two novel members of the Vinculin/α-Catenin Superfamily that work together to promote robustness during Drosophila cellularization, the dramatic tissue-building event that generates the primary epithelium of the embryo. We find that zygotically-expressed Serendipity-α (Sry-α) and maternally-loaded Spitting Image (Spt) share a redundant, actin-regulating activity during cellularization. Spt alone is sufficient for cellularization at an optimal temperature, but both Spt plus Sry-α are required at high temperature and when actin assembly is compromised by genetic perturbation. Our results offer a clear example of how the maternal and zygotic genomes interact to promote the robustness of early developmental events. Specifically, the Spt and Sry-α collaboration is informative when it comes to genes that show both a maternal and zygotic requirement during a given morphogenetic process. For the cellularization of Drosophilids, Sry-α and its expression profile may represent a genetic adaptive trait with the sole purpose of making this extreme event more reliable. Since all morphogenesis depends on cytoskeletal remodeling, both in embryos and adults, we suggest that robustness-promoting mechanisms aimed at actin could be effective at

  20. The maternal-to-zygotic transition targets actin to promote robustness during morphogenesis.

    PubMed

    Zheng, Liuliu; Sepúlveda, Leonardo A; Lua, Rhonald C; Lichtarge, Olivier; Golding, Ido; Sokac, Anna Marie

    2013-11-01

    Robustness is a property built into biological systems to ensure stereotypical outcomes despite fluctuating inputs from gene dosage, biochemical noise, and the environment. During development, robustness safeguards embryos against structural and functional defects. Yet, our understanding of how robustness is achieved in embryos is limited. While much attention has been paid to the role of gene and signaling networks in promoting robust cell fate determination, little has been done to rigorously assay how mechanical processes like morphogenesis are designed to buffer against variable conditions. Here we show that the cell shape changes that drive morphogenesis can be made robust by mechanisms targeting the actin cytoskeleton. We identified two novel members of the Vinculin/α-Catenin Superfamily that work together to promote robustness during Drosophila cellularization, the dramatic tissue-building event that generates the primary epithelium of the embryo. We find that zygotically-expressed Serendipity-α (Sry-α) and maternally-loaded Spitting Image (Spt) share a redundant, actin-regulating activity during cellularization. Spt alone is sufficient for cellularization at an optimal temperature, but both Spt plus Sry-α are required at high temperature and when actin assembly is compromised by genetic perturbation. Our results offer a clear example of how the maternal and zygotic genomes interact to promote the robustness of early developmental events. Specifically, the Spt and Sry-α collaboration is informative when it comes to genes that show both a maternal and zygotic requirement during a given morphogenetic process. For the cellularization of Drosophilids, Sry-α and its expression profile may represent a genetic adaptive trait with the sole purpose of making this extreme event more reliable. Since all morphogenesis depends on cytoskeletal remodeling, both in embryos and adults, we suggest that robustness-promoting mechanisms aimed at actin could be effective at

  1. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  2. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims these are the most important features of complexity: not accidents of evolution or artifices of engineering design, but inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  3. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.
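
    The abstract does not specify RSTF's database schema, so the following is only a hypothetical minimal sketch of the store-then-analyze workflow it describes, using SQLite; the table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical minimal schema: the real RSTF schema is not described in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_runs (
        test_name   TEXT,
        platform    TEXT,
        wall_time_s REAL,
        outcome     TEXT   -- e.g. 'pass', 'fail', 'timeout'
    )""")
runs = [
    ("matmul_bench", "cluster-a", 12.4, "pass"),
    ("matmul_bench", "cluster-b", 15.1, "pass"),
    ("io_stress",    "cluster-a", 98.0, "timeout"),
]
conn.executemany("INSERT INTO test_runs VALUES (?, ?, ?, ?)", runs)
conn.commit()

# Later data analysis: mean wall time per test over all passing runs.
for name, avg in conn.execute(
        "SELECT test_name, AVG(wall_time_s) FROM test_runs "
        "WHERE outcome = 'pass' GROUP BY test_name"):
    print(name, round(avg, 2))
```

    Storing timing and outcome classifications in SQL, as the framework does, makes cross-platform comparisons a single aggregate query rather than ad hoc log scraping.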

  4. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.
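
    The scaling claim above can be made concrete with a quick numerical comparison (asymptotic constants are suppressed, so only the trend is meaningful, not the absolute values):

```python
import math

def recursive_tolerance(n):
    # Tolerable systematic error for the recursive algorithm: O(1/sqrt(ln N)).
    return 1.0 / math.sqrt(math.log(n))

def walk_tolerance(n):
    # Tolerable systematic error for quantum walk algorithms: O(ln N / sqrt(N)).
    return math.log(n) / math.sqrt(n)

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>10}  recursive={recursive_tolerance(n):.4f}  walk={walk_tolerance(n):.6f}")
```

    The gap widens rapidly: the walk tolerance vanishes polynomially in N, while the recursive tolerance shrinks only as the inverse square root of ln N.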

  5. Robust Systems Test Framework

    Energy Science and Technology Software Center (ESTSC)

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  6. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-04-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  7. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  8. Experimental Robust Control Studies on an Unstable Magnetic Suspension System

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.; Cox, David E.

    1993-01-01

    This study is an experimental investigation of the robustness of various controllers designed for the Large Angle Magnetic Suspension Test Fixture (LAMSTF). Both analytical and identified nominal models are used for designing controllers along with two different types of uncertainty models. Robustness refers to maintaining tracking performance under analytical model errors and dynamically induced eddy currents, while external disturbances are not considered. Results show that incorporating robustness into analytical models gives significantly better results. However, incorporating incorrect uncertainty models may lead to poorer performance than not designing for robustness at all. Designing controllers based on accurate identified models gave the best performance. In fact, incorporating a significant level of robustness into an accurate nominal model resulted in reduced performance. This paper discusses an assortment of experimental results in a consistent manner using robust control theory.

  9. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al., (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  10. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    The reliability is a value of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process. It is composed of three sensors, each of which has its own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type of sensor delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to the mass being shared improperly among elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR and MLRE reveal very good performance in all experiments (more than an 80% correct classification rate). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators show good robustness, but MLR proves to be uniformly dominant across all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
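
    The abstract does not spell out the exact form of the reliability weighting, but a common reading of "maximum likelihood weighted by reliability" is log-linear pooling, where each sensor's likelihood enters with its reliability as an exponent. A minimal sketch under that assumption:

```python
def fuse_mlr(sensor_probs, reliabilities):
    """Reliability-weighted likelihood fusion (log-linear pooling):
    fused(h) proportional to prod_i p_i(h) ** r_i.  A sensor with
    reliability 0 contributes nothing; reliability 1 counts as a
    full likelihood term."""
    n_hyp = len(sensor_probs[0])
    fused = [1.0] * n_hyp
    for probs, r in zip(sensor_probs, reliabilities):
        for h in range(n_hyp):
            fused[h] *= probs[h] ** r
    total = sum(fused)
    return [f / total for f in fused]

# Two reliable sensors favor hypothesis 0; a third, nearly
# unreliable sensor disagrees but is strongly down-weighted.
posterior = fuse_mlr(
    [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]],
    [1.0, 1.0, 0.1])
print(posterior)
```

    With equal reliabilities this reduces to the plain ML product rule; driving a reliability toward zero removes that sensor's influence entirely, which is the mechanism behind the robustness reported above.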

  11. Identifying Anomality in Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Zhang, Y.

    2014-12-01

    Safety, risk, and economic analyses of engineering constructions such as storm sewers, street and urban drainage, and channel design are sensitive to precipitation storm properties. Whether precipitation storm properties exhibit normal or anomalous characteristics remains obscure. In this study, we will decompose a precipitation time series into sequences of average storm intensity, storm duration, and interstorm period, to examine whether these sequences can be treated as a realization of a continuous time random walk with both "waiting times" (interstorm periods) and "jump sizes" (average storm intensity and storm duration). Starting from this viewpoint, we will analyze the statistics of storm duration, interstorm period, and average storm intensity in four regions of the southwestern United States. We will examine whether the probability distributions are temporally and spatially dependent. Finally, we will use a fractional engine to capture the randomness in precipitation storms.
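
    The decomposition step described above (splitting a series into storm durations, mean intensities, and interstorm waiting times) can be sketched as follows; wet/dry is decided here by a simple zero threshold, an assumption on our part, and leading and trailing dry spells are discarded as censored:

```python
def decompose_storms(rain, dt=1.0):
    """Split a precipitation series into storms (consecutive wet steps)
    and interstorm periods (consecutive dry steps between storms).
    Returns storm durations, mean storm intensities, and interstorm
    durations; leading/trailing dry spells are censored and dropped."""
    durations, intensities, dry_spells = [], [], []
    run, dry = [], 0
    for x in rain + [0.0]:          # sentinel flushes a trailing storm
        if x > 0:
            if dry > 0 and durations:   # a completed dry spell between storms
                dry_spells.append(dry * dt)
            dry = 0
            run.append(x)
        else:
            if run:                     # storm just ended: record it
                durations.append(len(run) * dt)
                intensities.append(sum(run) / len(run))
                run = []
            dry += 1
    return durations, intensities, dry_spells

# Toy hourly series: a 2-step storm, a 3-step dry spell, a 2-step storm.
durs, means, gaps = decompose_storms([0, 2, 4, 0, 0, 0, 1, 1, 0])
print(durs, means, gaps)
```

    The three output sequences are exactly the jump sizes (duration, intensity) and waiting times (interstorm period) whose distributions the study proposes to test for anomalous, space-time-dependent behavior.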

  12. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces—randomly deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  13. Fooled by local robustness.

    PubMed

    Sniedovich, Moshe

    2012-10-01

    One would have expected the considerable public debate created by Nassim Taleb's two best selling books on uncertainty, Fooled by Randomness and The Black Swan, to inspire greater caution to the fundamental difficulties posed by severe uncertainty. Yet, methodologies exhibiting an incautious approach to uncertainty have been proposed recently in a range of publications. So, the objective of this short note is to call attention to a prime example of an incautious approach to severe uncertainty that is manifested in the proposition to use the concept radius of stability as a measure of robustness against severe uncertainty. The central proposition of this approach, which is exemplified in info-gap decision theory, is this: use a simple radius of stability model to analyze and manage a severe uncertainty that is characterized by a vast uncertainty space, a poor point estimate, and a likelihood-free quantification of uncertainty. This short discussion serves then as a reminder that the generic radius of stability model is a model of local robustness. It is, therefore, utterly unsuitable for the treatment of severe uncertainty when the latter is characterized by a poor estimate of the parameter of interest, a vast uncertainty space, and a likelihood-free quantification of uncertainty. PMID:22384828
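
    The radius-of-stability model criticized above has a simple generic form: the largest perturbation radius around the point estimate within which a performance requirement still holds. A one-dimensional sketch (bisection on the radius; the constraint and estimate are illustrative) makes the note's point visible in code, since nothing outside the neighborhood of the estimate is ever examined:

```python
def radius_of_stability(perf_ok, q_est, rho_max=10.0, tol=1e-6):
    """Largest rho such that perf_ok(q) holds for every q within rho of
    the point estimate q_est.  In this 1-D sketch it suffices to check
    the interval endpoints, assuming the safe set is an interval."""
    def safe(rho):
        return perf_ok(q_est - rho) and perf_ok(q_est + rho)
    if not safe(0.0):
        return 0.0
    lo, hi = 0.0, rho_max          # bisection on the radius
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if safe(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Requirement: q**2 <= 4, i.e. the safe set is [-2, 2].  Around the
# estimate q_est = 1.0 the radius is 1.0, however vast the uncertainty
# space beyond that neighborhood may be.
rho = radius_of_stability(lambda q: q**2 <= 4.0, q_est=1.0)
print(round(rho, 3))
```

    The computed radius depends only on local behavior around q_est, which is precisely why the note argues it cannot measure robustness against severe uncertainty with a poor point estimate.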

  14. Quality by Design Approaches to Formulation Robustness-An Antibody Case Study.

    PubMed

    Wurth, Christine; Demeule, Barthelemy; Mahler, Hanns-Christian; Adler, Michael

    2016-05-01

    The International Conference on Harmonization Q8 (R2) includes a requirement that "Critical formulation attributes and process parameters are generally identified through an assessment of the extent to which their variation can impact the quality of the drug product," that is, the need to assess the robustness of a formulation. In this article, a quality-by-design-based definition of a "robust formulation" for a biopharmaceutical product is proposed and illustrated with a case study. A multivariate formulation robustness study was performed for a selected formulation of a monoclonal antibody to demonstrate acceptable quality at the target composition as well as at the edges of the allowable composition ranges and fulfillment of the end-of-shelf-life stability requirements of 36 months at the intended storage temperature (2°C-8°C). Extrapolation of 24 months' formulation robustness data to end of shelf life showed that the MAb formulation was robust within the claimed formulation composition ranges. Based on this case study, we propose that a formulation can be claimed as "robust" if all drug substance and drug product critical quality attributes remain within their respective end-of-shelf-life critical quality attribute-acceptance criteria throughout the entire claimed formulation composition range. PMID:27001536

  15. Identifiability of the unrooted species tree topology under the coalescent model with time-reversible substitution processes, site-specific rate variation, and invariable sites.

    PubMed

    Chifman, Julia; Kubatko, Laura

    2015-06-01

    The inference of the evolutionary history of a collection of organisms is a problem of fundamental importance in evolutionary biology. The abundance of DNA sequence data arising from genome sequencing projects has led to significant challenges in the inference of these phylogenetic relationships. Among these challenges is the inference of the evolutionary history of a collection of species based on sequence information from several distinct genes sampled throughout the genome. It is widely accepted that each individual gene has its own phylogeny, which may not agree with the species tree. Many possible causes of this gene tree incongruence are known. The best studied is incomplete lineage sorting, which is commonly modeled by the coalescent process. Numerous methods based on the coalescent process have been proposed for the estimation of the phylogenetic species tree given DNA sequence data. However, use of these methods assumes that the phylogenetic species tree can be identified from DNA sequence data at the leaves of the tree, although this has not been formally established. We prove that the unrooted topology of the n-leaf phylogenetic species tree is generically identifiable given observed data at the leaves of the tree that are assumed to have arisen from the coalescent process under a time-reversible substitution process with the possibility of site-specific rate variation modeled by the discrete gamma distribution and a proportion of invariable sites. PMID:25791286

  16. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.
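
    The robustification idea underlying the method (bounding the influence of outlying observations in the estimating equations) can be illustrated in its simplest setting, a Huber M-estimate of location. This is only a toy analogue of the approach, not the paper's REML variogram estimator:

```python
def huber_location(x, c=1.345, iters=50):
    """Huber M-estimate of location via iteratively reweighted averaging.
    Observations farther than c scale units from the current estimate are
    down-weighted, so outliers cannot dominate the way they do for the
    plain mean.  Scale is fixed once from the MAD of the data."""
    mu = sorted(x)[len(x) // 2]                                  # start at the median
    s = sorted(abs(v - mu) for v in x)[len(x) // 2] / 0.6745 or 1.0   # MAD scale
    for _ in range(iters):
        w = [1.0 if abs(v - mu) <= c * s else c * s / abs(v - mu) for v in x]
        mu = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]   # one gross outlier
print(round(huber_location(data), 2))
```

    The plain mean of this sample is pulled to about 17.5 by the single outlier, while the bounded-influence estimate stays near 10: the same principle the method applies inside the REML estimating equations.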

  17. A New Robust Watermarking Scheme to Increase Image Security

    NASA Astrophysics Data System (ADS)

    Rahmani, Hossein; Mortezaei, Reza; Ebrahimi Moghaddam, Mohsen

    2010-12-01

    In digital image watermarking, a watermark image is embedded into a host picture for a variety of purposes such as captioning and copyright protection. In this paper, a robust private watermarking scheme for embedding a gray-scale watermark is proposed. In the proposed method, the watermark and original image are processed by applying blockwise DCT. Also, a Dynamic Fuzzy Inference System (DFIS) is used to identify the best place for watermark insertion by approximating the relationship established between the properties of the HVS (human visual system) model. In the insertion phase, the DC coefficients of the original image are modified according to the DC values of the watermark and the output of the fuzzy system. In the experimental phase, the CheckMark (StirMark MATLAB) software was used to verify the method's robustness by applying several conventional attacks on the watermarked image. The results showed that the proposed scheme provided high image quality while it was robust against various attacks, such as Compression, Filtering, additive Noise, Cropping, Scaling, Changing aspect ratio, Copy attack, and Composite attack in comparison with related methods.
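
    The scheme above modifies the DC coefficients of a blockwise DCT, guided by a fuzzy inference system. Since the DC term of a block DCT is proportional to the block mean, the DC-embedding step alone can be sketched on block means directly; the DFIS and HVS modeling are omitted here, and the block size and strength alpha are illustrative choices, not the paper's parameters:

```python
def embed_dc_watermark(image, watermark, block=4, alpha=0.05):
    """Embed a watermark by shifting each block's mean luminance.
    Adding alpha * w to every pixel of a block shifts that block's
    DCT DC coefficient proportionally without touching AC terms."""
    h, w_ = len(image), len(image[0])
    out = [row[:] for row in image]
    for bi in range(0, h, block):
        for bj in range(0, w_, block):
            wv = watermark[bi // block][bj // block]
            for i in range(bi, bi + block):
                for j in range(bj, bj + block):
                    out[i][j] += alpha * wv
    return out

def extract_dc_watermark(marked, original, block=4, alpha=0.05):
    """Non-blind extraction: recover each embedded value from the
    difference of block sums between marked and original images."""
    rows = []
    for bi in range(0, len(marked), block):
        row = []
        for bj in range(0, len(marked[0]), block):
            d = sum(marked[i][j] - original[i][j]
                    for i in range(bi, bi + block)
                    for j in range(bj, bj + block))
            row.append(d / (alpha * block * block))
        rows.append(row)
    return rows

# Round trip on an 8x8 zero image with a 2x2 watermark:
image = [[0.0] * 8 for _ in range(8)]
wm = [[1.0, -1.0], [2.0, 0.0]]
marked = embed_dc_watermark(image, wm)
print(extract_dc_watermark(marked, image))
```

    DC-coefficient embedding is what gives the scheme its resilience to filtering and compression, since block means survive those attacks far better than high-frequency AC coefficients do.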

  18. A model to assess the Mars Telecommunications Network relay robustness

    NASA Technical Reports Server (NTRS)

    Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.

    2005-01-01

    The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.

  19. On the robustness of Herlihy's hierarchy

    NASA Technical Reports Server (NTRS)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in ℤ⁺ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  20. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
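
    A full randomized benchmarking protocol averages over random gate sequences; the decay-curve arithmetic underneath it, though, can be illustrated with the standard single-exponential model p(m) = p_inf + B·lam^m for subspace survival. The rate can be read off from three equally spaced points without knowing p_inf or B, a simplification of, not a substitute for, the paper's protocol:

```python
def leakage_rate(p0, p1, p2):
    """Estimate the per-step leakage rate from subspace survival
    probabilities at three equally spaced sequence lengths m, m+k, m+2k.
    Under p(m) = p_inf + B * lam**m the successive differences satisfy
    lam**k = (p2 - p1) / (p1 - p0), independent of p_inf and B."""
    lam_k = (p2 - p1) / (p1 - p0)
    return 1.0 - lam_k          # leakage per k steps

# Simulated survival curve with true per-step retention lam = 0.98:
lam, p_inf, B = 0.98, 0.5, 0.5
p = [p_inf + B * lam**m for m in (0, 1, 2)]
print(round(leakage_rate(*p), 6))
```

    In practice the survival probabilities are noisy averages over many random sequences, so one fits the full decay curve rather than using three points; the ratio trick above just shows why the leakage rate is identifiable without knowing the asymptote.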

  1. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  2. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  3. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  4. Robust snapshot interferometric spectropolarimetry.

    PubMed

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object by which an accurate Stokes vector can be calculated in the spectral domain. It is inherently strongly robust to the object 3D pose variation, since it is designed distinctly so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of msec with high accuracy. PMID:27176992

  5. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  6. Robust Vertex Classification.

    PubMed

    Chen, Li; Shen, Cencheng; Vogelstein, Joshua T; Priebe, Carey E

    2016-03-01

    For random graphs distributed according to stochastic blockmodels, a special case of latent position graphs, adjacency spectral embedding followed by appropriate vertex classification is asymptotically Bayes optimal; but this approach requires knowledge of and critically depends on the model dimension. In this paper, we propose a sparse representation vertex classifier which does not require information about the model dimension. This classifier represents a test vertex as a sparse combination of the vertices in the training set and uses the recovered coefficients to classify the test vertex. We prove consistency of our proposed classifier for stochastic blockmodels, and demonstrate that the sparse representation classifier can predict vertex labels with higher accuracy than adjacency spectral embedding approaches via both simulation studies and real data experiments. Our results demonstrate the robustness and effectiveness of our proposed vertex classifier when the model dimension is unknown. PMID:26340770

  7. Robust Crossfeed Design for Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

    Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  8. Nodes and biological processes identified on the basis of network analysis in the brain of the senescence accelerated mice as an Alzheimer's disease animal model

    PubMed Central

    Cheng, Xiao-rui; Cui, Xiu-liang; Zheng, Yue; Zhang, Gui-rong; Li, Peng; Huang, Huang; Zhao, Yue-ying; Bo, Xiao-chen; Wang, Sheng-qi; Zhou, Wen-xia; Zhang, Yong-xiang

    2013-01-01

    Harboring the behavioral and histopathological signatures of Alzheimer's disease (AD), senescence accelerated mouse-prone 8 (SAMP8) mice are currently considered a robust model for studying AD. However, the underlying mechanisms, prioritized pathways and genes in SAMP8 mice linked to AD remain unclear. In this study, we provide a biological interpretation of the molecular underpinnings of SAMP8 mice. Our results were derived from differentially expressed genes in the hippocampus and cerebral cortex of SAMP8 mice compared to age-matched SAMR1 mice at 2, 6, and 12 months of age using cDNA microarray analysis. On the basis of PPI, MetaCore and the co-expression network, we constructed a distinct genetic sub-network in the brains of SAMP8 mice. Next, we determined that the regulation of synaptic transmission and apoptosis were disrupted in the brains of SAMP8 mice. We found abnormal gene expression of RAF1, MAPT, PTGS2, CDKN2A, CAMK2A, NTRK2, AGER, ADRBK1, MCM3AP, and STUB1, which may have initiated the dysfunction of biological processes in the brains of SAMP8 mice. Specifically, we found microRNAs, including miR-20a, miR-17, miR-34a, miR-155, miR-18a, miR-22, miR-26a, miR-101, miR-106b, and miR-125b, that might regulate the expression of nodes in the sub-network. Taken together, these results provide new insights into the biological and genetic mechanisms of SAMP8 mice and add an important dimension to our understanding of the neuro-pathogenesis in SAMP8 mice from a systems perspective. PMID:24194717

  9. The Robust Beauty of Ordinary Information

    ERIC Educational Resources Information Center

    Katsikopoulos, Konstantinos V.; Schooler, Lael J.; Hertwig, Ralph

    2010-01-01

    Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues…

  10. A T-flask based screening platform for evaluating and identifying plant hydrolysates for a fed-batch cell culture process.

    PubMed

    Lu, Canghai; Gonzalez, Carlos; Gleason, Joseph; Gangi, Jennifer; Yang, Jeng-Dar

    2007-09-01

    This paper presents a T-flask based screening platform for evaluating and identifying plant hydrolysates for cell culture processes. The development of this platform was driven by an urgent need to replace a soy hydrolysate that was no longer available for the fed-batch process of recombinant Sp2/0 cell culture expressing a humanized antibody. A series of small-scale experiments in T-flasks and 3-l bioreactors were designed to gain insight into how this soy hydrolysate benefits the culture. A comprehensive, function-oriented screening platform was then developed, consisting of three T-flask tests, namely the protection test, the growth promotion test, and the growth inhibition test. The cell growth in these three T-flask tests enabled a good prediction of the cell growth in the fed-batch bioreactor process. Fourteen plant hydrolysate candidates were quickly evaluated by this platform for their ability to exert strong protection, high cell growth promotion, and low cell growth inhibition in the culture. One soy hydrolysate was successfully identified that supported cell growth comparable to that of the discontinued soy hydrolysate. Because of the advantage of using small-scale batch culture to guide bioreactor fed-batch culture, this proposed platform approach has potential for other applications, such as medium and feeding optimization and the mechanistic study of plant hydrolysates, in a high-throughput format. PMID:19002991

  11. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
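
    For the simplest of these uncertainty sets, the induced counterparts have well-known closed forms. The sketch below checks robust feasibility of a single constraint a·x ≤ b under interval (box) and ellipsoidal left-hand-side uncertainty; the formulas follow the standard worst-case derivations rather than any specific model in the paper, and the numbers in the usage note are illustrative.

```python
import numpy as np

def box_robust_feasible(a_bar, a_hat, b, x):
    # Interval (box) set: each a_j lies in [a_bar_j - a_hat_j, a_bar_j + a_hat_j].
    # Maximising a @ x over the box yields the counterpart
    #   a_bar @ x + a_hat @ |x| <= b.
    return bool(a_bar @ x + a_hat @ np.abs(x) <= b)

def ellipsoid_robust_feasible(a_bar, P, omega, b, x):
    # Ellipsoidal set: a = a_bar + P @ u with ||u||_2 <= omega.
    # Maximising a @ x over the ellipsoid yields the counterpart
    #   a_bar @ x + omega * ||P.T @ x||_2 <= b.
    return bool(a_bar @ x + omega * np.linalg.norm(P.T @ x) <= b)
```

    A point that satisfies the nominal constraint can fail the robust counterpart: with a_bar = (1, 1), a_hat = (0.5, 0.5), b = 3, the point x = (1.5, 1.5) is nominally feasible (3 ≤ 3) but robustly infeasible (4.5 > 3).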

  12. Robust Control Design for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Eastman, W. L.; Bossi, J. A.

    1985-01-01

    The control design problem for the class of future spacecraft referred to as large space structures (LSS) is by now well known. The issue is the reduced-order control of a very high order, lightly damped system with uncertain system parameters, particularly in the high-frequency modes. A design methodology which incorporates robustness considerations as part of the design process is presented. Combining pertinent results from multivariable systems theory and optimal control and estimation, LQG eigenstructure assignment and LQG frequency shaping were used to improve singular value robustness measures in the presence of control and observation spillover.

  13. Information theory perspective on network robustness

    NASA Astrophysics Data System (ADS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network when facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on Information Theory that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process, given as a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology.
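
    The abstract does not spell out the dissimilarity measure, so the sketch below picks one concrete option as an assumption: the Jensen-Shannon divergence between the degree distribution of the intact network and the distribution after each failure in the sequence. Removing a hub perturbs the distribution more than removing a leaf, which is the kind of dynamical damage signal the abstract describes.

```python
import numpy as np

def degree_dist(adj):
    # Empirical degree distribution of an undirected adjacency matrix.
    deg = adj.sum(axis=1).astype(int)
    counts = np.bincount(deg, minlength=adj.shape[0]).astype(float)
    return counts / counts.sum()

def js_divergence(p, q):
    # Jensen-Shannon divergence (base 2), bounded in [0, 1].
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def robustness_trace(adj, removal_order):
    # Dissimilarity to the intact topology after each failure in the sequence.
    a = adj.copy()
    p0 = degree_dist(adj)
    trace = []
    for v in removal_order:
        a[v, :] = 0
        a[:, v] = 0
        trace.append(js_divergence(p0, degree_dist(a)))
    return trace
```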

  14. FPGA implementation of robust Capon beamformer

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Zmuda, Henry; Li, Jian; Du, Lin; Sheplak, Mark

    2012-03-01

    The Capon beamforming algorithm is an optimal spatial filtering algorithm used in various signal processing applications where excellent interference rejection performance is required, such as radar and sonar systems and smart antenna systems for wireless communications. Its lack of robustness, however, means that it is vulnerable to array calibration errors and other model errors. To overcome this problem, numerous robust Capon beamforming algorithms have been proposed, which are much more promising for practical applications. In this paper, an FPGA implementation of a robust Capon beamforming algorithm is investigated and presented. This realization takes an array output with 4 channels, computes the complex-valued adaptive weight vectors for beamforming with an 18-bit fixed-point representation, and runs at a 100 MHz clock on a Xilinx V4 FPGA. This work will be applied in our medical imaging project for breast cancer detection.
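
    The FPGA design itself is not reproduced here. As a floating-point reference point, one widely used robustification of the Capon beamformer is diagonal loading of the sample covariance, sketched below; the array size and loading level are arbitrary, and the abstract does not say which robust variant the authors implemented.

```python
import numpy as np

def capon_weights(R, a, loading=0.0):
    # Diagonally loaded Capon beamformer: adding loading*I to the sample
    # covariance R desensitises the weights to covariance estimation and
    # steering-vector errors, while preserving the distortionless response
    # w^H a = 1 toward the look direction a.
    Rl = R + loading * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)
```

    Whatever the loading, the unit-gain constraint toward the look direction holds by construction, which is what makes the filter "distortionless".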

  15. Horse metabolism and the photocatalytic process as a tool to identify metabolic products formed from dopant substances: the case of sildenafil.

    PubMed

    Medana, Claudio; Calza, Paola; Giancotti, Valeria; Dal Bello, Federica; Pasello, Emanuela; Montana, Marco; Baiocchi, Claudio

    2011-10-01

    Two horses were treated with sildenafil, and its metabolic products were sought in both urine and plasma samples. Prior to this, a simulative laboratory study had been done using a photocatalytic process to identify all possible main and secondary transformation products in a clean matrix; these were then sought in the biological samples. The transformation of sildenafil and the formation of intermediate products were evaluated adopting titanium dioxide as photocatalyst. Several products were formed and characterized using the HPLC/HRMS(n) technique. The main intermediates identified under these experimental conditions were the same as the major sildenafil metabolites found in in vivo studies on rats and horses. Concerning horse metabolism, sildenafil and the demethylated product (UK 103,320) were quantified in blood samples. Sildenafil propyloxide and de-ethyl and demethyl sildenafil were the main metabolites quantified in urine. Some more-oxidized species, already formed in the photocatalytic process, were also found in the urine and plasma samples of treated animals. Their formation involved hydroxylation on the aromatic ring, combined oxidation and dihydroxylation, N-demethylation on the pyrazole ring, and hydroxylation. These new findings could be of interest in further metabolism studies. PMID:21964727

  16. Footprint Reduction Process: Using Remote Sensing and GIS Technologies to Identify Non-Contaminated Land Parcels on the Oak Ridge Reservation National Priorities List Site

    SciTech Connect

    Halsey, P.A.; Kendall, D.T.; King, A.L.; Storms, R.A.

    1998-12-09

    In 1989, the Agency for Toxic Substances and Disease Registry evaluated the entire 35,000-acre U.S. Department of Energy (DOE) Oak Ridge Reservation (ORR, located in Oak Ridge, TN) and placed it on the National Priorities List (NPL), making the ORR subject to Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations. Although much of the ORR has not been impacted by previous federal activities, without investigation it is difficult to discern which parcels of land are free of surface contamination. In 1996, the DOE Oak Ridge Environmental Management Program (EM) funded the Footprint Reduction Project to: 1) develop a process to study the large areas of the ORR that are believed to be free of surface contamination and 2) initiate the delisting of the "clean" areas from the NPL. Although this project's goals do not include the transfer of federal property to non-federal entities, the process development team aimed to provide a final product with multiple uses. Therefore, the process was developed to meet the requirements of NPL delisting and the transfer of non-contaminated federal lands to future land users. Section 120(h) of the CERCLA law identifies the requirements for the transfer of federal property that is currently part of an NPL site. Reviews of historical information (including aerial photography), field inspections, and the recorded chain-of-title documents for the property are required for the delisting of property prior to transfer from the federal government. Despite the widespread availability of remote sensing and other digital geographic data and geographic information systems (GIS) for the analysis of such data, historical aerial photography is the only geographic data source required for review under the CERCLA 120(h) process. However, since the ORR Environmental Management Program had an established Remote Sensing Program, the Footprint Reduction Project included the development and application of a methodology

  17. Learning robust plans for mobile robots from a single trial

    SciTech Connect

    Engelson, S.P.

    1996-12-31

    We address the problem of learning robust plans for robot navigation by observing particular robot behaviors. In this paper we present a method which can learn a robust reactive plan from a single example of a desired behavior, by translating a sequence of events arising from the robot's behavior system into a plan which represents the relationships among such events. This method allows us to rely on the underlying stability properties of low-level behavior processes in order to produce robust plans. Since the resultant plan reproduces the original behavior of the robot at a high level, it generalizes over small environmental changes and is robust to sensor and effector noise.

  18. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. PMID:21401565
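
    The paper's estimator couples penalized smoothing with two nested levels of optimization; the far simpler sketch below conveys only the robust-loss ingredient. It grid-searches the rate θ of the toy model dx/dt = -θx (so x(t) = x0·e^(-θt)) under a Huber loss, so that a single outlier cannot dominate the fit. The model, grid, and Huber width are assumptions for the demo, not the authors' setup.

```python
import numpy as np

def huber(r, delta=0.5):
    # Quadratic near zero, linear in the tails: outliers get bounded influence.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def fit_decay_rate(t, y, x0=1.0, delta=0.5):
    # Grid search for theta in dx/dt = -theta*x, scoring each candidate with
    # the robust Huber loss instead of ordinary least squares.
    thetas = np.linspace(0.1, 3.0, 291)
    losses = [huber(y - x0 * np.exp(-th * t), delta).sum() for th in thetas]
    return thetas[int(np.argmin(losses))]
```

    With clean data plus one gross outlier, the Huber fit stays close to the true rate because the outlier's contribution to the loss grows only linearly.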

  19. Robust detection-isolation-accommodation for sensor failures

    NASA Astrophysics Data System (ADS)

    Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.

    1985-09-01

    The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.

  20. Robust detection-isolation-accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.

    1985-01-01

    The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.

  1. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  2. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  3. Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)

    2001-01-01

    This research considers the application of robust Z(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z(sub 1) estimators based on multiplier theory and then develops a fixed-threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust Z(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
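
    A minimal sketch of graded residual evaluation in the spirit described above (not the authors' membership functions): a trapezoidal membership maps residual magnitude to a degree of fault between 0 and 1, instead of a hard yes/no threshold. The two thresholds are illustrative assumptions.

```python
def fault_degree(residual, no_fault=0.5, fault=2.0):
    # Trapezoidal membership: degree 0 below the no-fault threshold, 1 above
    # the fault threshold, and a linear ramp in between -- a graded verdict
    # rather than a single fixed-threshold decision.
    r = abs(residual)
    if r <= no_fault:
        return 0.0
    if r >= fault:
        return 1.0
    return (r - no_fault) / (fault - no_fault)
```

    Small residuals caused by uncertainty and disturbance thus map to a low fault degree, while the ramp preserves sensitivity to incipient faults.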

  4. Analytical redundancy and the design of robust failure detection systems

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways, and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653.
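
    For a concrete picture of parity-based residual generation, consider measurements y = C·x. Rows V spanning the left null space of C give parity relations V·y = 0 for any fault-free measurement, and the direction of the parity vector points at the faulty sensor. The three-redundant-sensor example below is a textbook illustration under these assumptions, not the paper's design.

```python
import numpy as np

def parity_matrix(C):
    # Rows of V span the left null space of C, so V @ C = 0 and the parity
    # vector p = V @ y vanishes for any fault-free measurement y = C @ x.
    U, s, _ = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:].T

def isolate_fault(V, y):
    # A fault f on sensor i gives p = f * V[:, i]; correlate p with each
    # column direction of V and blame the best-aligned sensor.
    p = V @ y
    cols = V / np.linalg.norm(V, axis=0)
    scores = np.abs(cols.T @ p) / (np.linalg.norm(p) + 1e-12)
    return int(np.argmax(scores))
```

    With three sensors measuring the same scalar (C a column of ones), a bias on one sensor drives the parity vector along that sensor's signature direction.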

  5. Identifying outcome-based indicators and developing a curriculum for a continuing medical education programme on rational prescribing using a modified Delphi process

    PubMed Central

    Esmaily, Hamideh M; Savage, Carl; Vahidi, Rezagoli; Amini, Abolghasem; Zarrintan, Mohammad Hossein; Wahlstrom, Rolf

    2008-01-01

    Background Continuing medical education (CME) is compulsory for physicians in Iran. Recent studies in Iran show that modifications of CME elements are necessary to improve the effectiveness of the educational programmes. Other studies point to an inappropriate, even irrational drug prescribing. Based on a needs assessment study regarding CME for general physicians in the East Azerbaijan province in Iran, rational prescribing practice was recognized as a high priority issue. Considering different educational methods, outcome-based education has been proposed as a suitable approach for CME. The purpose of the study was to obtain experts' consensus about appropriate educational outcomes of rational prescribing for general physicians in CME and developing curricular contents for this education. Methods The study consisted of two phases: The first phase was conducted using a two-round Delphi consensus process to identify the outcome-based educational indicators regarding rational prescribing for general physicians in primary care (GPs). In the second phase the agreed indicators were submitted to panels of experts for assessment and determination of content for a CME program in the field. Results Twenty one learning outcomes were identified through a modified Delphi process. The indicators were used by the panels of experts and six educational topics were determined for the CME programme and the curricular content of each was defined. The topics were 1) Principles of prescription writing, 2) Adverse drug reactions, 3) Drug interactions, 4) Injections, 5) Antibiotic therapy, and 6) Anti-inflammatory agents therapy. One of the topics was not directly related to any outcome, raising a question about the need for a discussion on constructive alignment. Conclusions Consensus on learning outcomes was achieved and an educational guideline was designed. Before suggesting widespread use in the country the educational package should be tested in the CME context. PMID:18510774

  6. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615

  7. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  8. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all of its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
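
    A common way to quantify the robustness contrast described above is to track the size of the largest connected component while removing airports in decreasing-degree order (a hub-targeted attack). The sketch below does this for toy hub-and-spoke versus ring networks; the data structures and example networks are illustrative assumptions, not the study's airline data.

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    # Size of the largest connected component, ignoring removed airports.
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def attack_trace(adj):
    # Remove airports in decreasing-degree order (hub-targeted attack) and
    # record how the largest component shrinks after each removal.
    order = sorted(adj, key=lambda u: -len(adj[u]))
    removed, trace = set(), []
    for u in order:
        removed.add(u)
        trace.append(largest_component(adj, removed))
    return trace
```

    Removing the hub of a 5-airport star shatters the network into singletons, whereas removing any node of a 5-airport ring leaves a connected path, which is the hub-dependence effect behind the FSC/LCC finding.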

  9. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
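
    The reliability-based ingredient can be illustrated with a scalar example that is an assumption of this sketch, not the paper's plant: a first-order pole a ~ N(-2, 0.5²) under proportional feedback u = -k·x must satisfy the closed-loop requirement a - k ≤ -1, and the probability of violating that inequality constraint is estimated by Monte Carlo.

```python
import numpy as np

def violation_probability(k, n=100_000, seed=0):
    # Toy plant dx/dt = a*x + u with uncertain pole a ~ N(-2, 0.5^2) and
    # proportional feedback u = -k*x: the closed-loop pole is a - k.
    # Requirement (inequality constraint): a - k <= -1, i.e. a minimum
    # decay rate. Estimate the probability of violating it by Monte Carlo.
    rng = np.random.default_rng(seed)
    a = rng.normal(-2.0, 0.5, n)
    return float(np.mean(a - k > -1.0))
```

    Increasing the gain k shrinks the violation probability, which is exactly the quantity a reliability-based synthesis would minimize, possibly traded against control effort.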

  10. Robust simplifications of multiscale biochemical networks

    PubMed Central

    Radulescu, Ovidiu; Gorban, Alexander N; Zinovyev, Andrei; Lilienbaum, Alain

    2008-01-01

    Background Cellular processes such as metabolism, decision making in development and differentiation, signalling, etc., can be modeled as large networks of biochemical reactions. In order to understand the functioning of these systems, there is a strong need for general model reduction techniques that allow one to simplify models without losing their main properties. In systems biology we also need to compare models or to couple them as parts of larger models. In these situations reduction to a common level of complexity is needed. Results We propose a systematic treatment of model reduction of multiscale biochemical networks. First, we consider linear kinetic models, which appear as "pseudo-monomolecular" subsystems of multiscale nonlinear reaction networks. For such linear models, we propose a reduction algorithm which is based on a generalized theory of the limiting step that we have developed in [1]. Second, for non-linear systems we develop an algorithm based on dominant solutions of quasi-stationarity equations. For oscillating systems, quasi-stationarity and averaging are combined to eliminate time scales much faster and much slower than the period of the oscillations. In all cases, we obtain robust simplifications and also identify the critical parameters of the model. The methods are demonstrated for simple examples and for a more complex model of the NF-κB pathway. Conclusion Our approach allows critical parameter identification and produces hierarchies of models. Hierarchical modeling is important in "middle-out" approaches when there is a need to zoom in and out over several levels of complexity. Critical parameter identification is an important issue in systems biology with potential applications to biological control and therapeutics. Our approach also deals naturally with the presence of multiple time scales, which is a general property of systems biology models. PMID:18854041

  11. Robust multi-objective calibration strategies - chances for improving flood forecasting

    NASA Astrophysics Data System (ADS)

    Krauße, T.; Cullmann, J.; Saile, P.; Schmitz, G. H.

    2011-04-01

    Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This holds regardless of whether the objective aggregates several criteria that measure different (possibly opposing) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. One possible approach to estimate the Pareto set effectively and efficiently is particle swarm optimisation (PSO). It has already been successfully applied in various other fields and has been reported to show effective and efficient performance. Krauße and Cullmann (2011b) presented a method entitled ROPEPSO which merges the strengths of PSO and data depth measures in order to identify robust parameter vectors for hydrological models. In this paper we present a multi-objective parameter estimation algorithm, entitled the Multi-Objective Robust Particle Swarm Parameter Estimation (MO-ROPE). The algorithm is a further development of the previously mentioned single-objective ROPEPSO approach. It applies a newly developed multi-objective particle swarm optimisation algorithm in order to identify non-dominated robust model parameter vectors. Subsequently it samples robust parameter vectors by the

  12. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
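    The risk-averse criterion can be made concrete with a toy cost model. The two "protocols", their costs, and the failure rate below are invented for illustration; the paper's actual PCR protocol optimization is more involved.

```python
import random

def cvar(costs, alpha=0.9):
    """Conditional value-at-risk: the mean of the worst (1 - alpha) fraction of costs."""
    s = sorted(costs)
    tail = s[int(len(s) * alpha):]
    return sum(tail) / len(tail)

rng = random.Random(1)
# Hypothetical protocols: B is cheaper on average but fails (cost +50) 3% of the time.
cost_a = [10 + rng.gauss(0, 1) for _ in range(10000)]
cost_b = [8 + (50 if rng.random() < 0.03 else 0) + rng.gauss(0, 1) for _ in range(10000)]

mean_a, mean_b = sum(cost_a) / len(cost_a), sum(cost_b) / len(cost_b)
risk_a, risk_b = cvar(cost_a), cvar(cost_b)
# A mean-cost criterion prefers B; the risk-averse CVaR criterion prefers A.
```

    Minimizing CVaR rather than expected cost is exactly the hedge against experimental variation the abstract describes: rare but expensive failures dominate the tail.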

  13. Dosimetry robustness with stochastic optimization

    NASA Astrophysics Data System (ADS)

    Nohadani, Omid; Seco, Joao; Martin, Benjamin C.; Bortfeld, Thomas

    2009-06-01

    All radiation therapy treatment planning relies on accurate dose calculation. Uncertainties in dosimetric prediction can significantly degrade an otherwise optimal plan. In this work, we introduce a robust optimization method which handles dosimetric errors and warrants high-quality IMRT plans. Unlike other dose error estimations, we do not rely on detailed knowledge about the sources of the uncertainty and use a generic error model based on random perturbations. This generality is sought in order to cope with a large variety of error sources. We demonstrate the method on a clinical case of lung cancer and show that our method provides plans that are more robust against dosimetric errors and are clinically acceptable. In fact, the robust plan exhibits a two-fold improved equivalent uniform dose compared to the non-robust but optimized plan. The achieved speedup will allow computationally extensive multi-criteria or beam-angle optimization approaches to warrant dosimetrically relevant plans.
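    The underlying idea — optimize against sampled perturbations rather than the nominal model — fits in a few lines. Everything below (the one-parameter penalty, the skewed error range, the candidate set) is a hypothetical stand-in for the paper's IMRT dose model, kept only to show why the robust optimum shifts away from the nominal one.

```python
import random

rng = random.Random(0)

# Hypothetical one-parameter "plan": delivered dose = intensity * scale, target dose = 1.
def penalty(intensity, scale):
    return abs(intensity * scale - 1.0)

# Generic error model: random multiplicative dose errors, here skewed toward underdosing.
perturbations = [1 + rng.uniform(-0.1, 0.02) for _ in range(500)]

candidates = [i / 100 for i in range(90, 111)]  # intensities 0.90 .. 1.10
nominal_best = min(candidates, key=lambda x: penalty(x, 1.0))
robust_best = min(candidates, key=lambda x: max(penalty(x, s) for s in perturbations))
# The robust choice overshoots slightly at nominal to hedge against the skewed errors.
```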

  14. Robust Online Hamiltonian Learning

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Wiebe, Nathan; Cory, David

    2013-05-01

    In this talk, we introduce a machine-learning algorithm for the problem of inferring the dynamical parameters of a quantum system, and discuss this algorithm in the example of estimating the precession frequency of a single qubit in a static field. Our algorithm is designed with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online, during experimental data collection, or can be used as a tool for post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment to experiment, and also when additional noise processes are present and unknown. Finally, we discuss the performance of our algorithm by appeal to the Cramér-Rao bound. This work was financially supported by the Canadian government through NSERC and CERC and by the United States government through DARPA. NW would like to acknowledge funding from USARO-DTO.
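    A grid-based Bayesian filter captures the flavour of such online Hamiltonian learning. The cos² likelihood, the fixed measurement schedule, and the frequency grid below are textbook assumptions for a single precessing qubit — not the authors' actual sequential Monte Carlo implementation.

```python
import math, random

# Grid-based Bayesian inference of a qubit precession frequency omega, assuming the
# standard likelihood P(outcome = 0 | omega, t) = cos^2(omega * t / 2).
rng = random.Random(0)
true_omega = 0.7
grid = [i * 0.01 for i in range(1, 201)]   # candidate frequencies 0.01 .. 2.00
post = [1.0 / len(grid)] * len(grid)       # uniform prior

for k in range(1, 51):                     # a simple fixed schedule of evolution times
    t = 0.5 * k
    p0 = math.cos(true_omega * t / 2) ** 2
    outcome = 0 if rng.random() < p0 else 1   # simulated measurement
    like = [math.cos(w * t / 2) ** 2 if outcome == 0 else math.sin(w * t / 2) ** 2
            for w in grid]
    post = [p * l for p, l in zip(post, like)]   # Bayes update, then renormalize
    z = sum(post)
    post = [p / z for p in post]

map_estimate = grid[post.index(max(post))]
```

    Because each update happens as the data arrive, the same loop can run during data collection (online) or over a stored record (post-processing), as the abstract notes.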

  15. Inferring robust gene networks from expression data by a sensitivity-based incremental evolution method

    PubMed Central

    2012-01-01

    Background Reconstructing gene regulatory networks (GRNs) from expression data is one of the most important challenges in systems biology research. Many computational models and methods have been proposed to automate the process of network reconstruction. Inferring robust networks with desired behaviours remains challenging, however. This problem is related to network dynamics but has yet to be investigated using network modeling. Results We propose an incremental evolution approach for inferring GRNs that takes network robustness into consideration and can deal with a large number of network parameters. Our approach includes a sensitivity analysis procedure to iteratively select the most influential network parameters, and it uses a swarm intelligence procedure to perform parameter optimization. We have conducted a series of experiments to evaluate the external behaviors and internal robustness of the networks inferred by the proposed approach. The results and analyses have verified the effectiveness of our approach. Conclusions Sensitivity analysis is crucial to identifying the most sensitive parameters that govern the network dynamics. It can further be used to derive constraints for network parameters in the network reconstruction process. The experimental results show that the proposed approach can successfully infer robust GRNs with desired system behaviors. PMID:22595005
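    The sensitivity-analysis step — rank parameters by how strongly a small perturbation moves the model output — can be sketched generically. The one-at-a-time scheme and the toy two-parameter model below are illustrative assumptions, not the paper's exact procedure.

```python
def sensitivity_ranking(model, params, delta=0.05):
    """One-at-a-time sensitivity: rank parameters by the output change caused by
    perturbing each one by a small relative amount while holding the others fixed."""
    base = model(params)
    scores = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1 + delta)
        scores[name] = abs(model(perturbed) - base)
    return sorted(scores, key=scores.get, reverse=True)

# Toy model: the output depends strongly on 'k1' and only weakly on 'k2',
# so the ranking should place 'k1' first for the optimizer to focus on.
model = lambda p: 10 * p["k1"] + 0.1 * p["k2"]
ranking = sensitivity_ranking(model, {"k1": 1.0, "k2": 1.0})
```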

  16. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; together with its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications, it now constitutes an extensive and coherent theory of robust parametric stability, summarized in the results contained here.

  17. Invariants reveal multiple forms of robustness in bifunctional enzyme systems.

    PubMed

    Dexter, Joseph P; Dasgupta, Tathagata; Gunawardena, Jeremy

    2015-08-01

    Experimental and theoretical studies have suggested that bifunctional enzymes catalyzing opposing modification and demodification reactions can confer steady-state concentration robustness to their substrates. However, the types of robustness and the biochemical basis for them have remained elusive. Here we report a systematic study of the most general biochemical reaction network for a bifunctional enzyme acting on a substrate with one modification site, along with eleven sub-networks with more specialized biochemical assumptions. We exploit ideas from computational algebraic geometry, introduced in previous work, to find a polynomial expression (an invariant) between the steady state concentrations of the modified and unmodified substrate for each network. We use these invariants to identify five classes of robust behavior: robust upper bounds on concentration, robust two-sided bounds on concentration ratio, hybrid robustness, absolute concentration robustness (ACR), and robust concentration ratio. This analysis demonstrates that robustness can take a variety of forms and that the type of robustness is sensitive to many biochemical details, with small changes in biochemistry leading to very different steady-state behaviors. In particular, we find that the widely-studied ACR requires highly specialized assumptions in addition to bifunctionality. An unexpected result is that the robust bounds derived from invariants are strictly tighter than those derived by ad hoc manipulation of the underlying differential equations, confirming the value of invariants as a tool to gain insight into biochemical reaction networks. Furthermore, invariants yield multiple experimentally testable predictions and illuminate new strategies for inferring enzymatic mechanisms from steady-state measurements. PMID:26021467

  18. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby extracting the bearing bias as a function of frequency.

  19. Profound hyperglycemia in knockout mutant mice identifies novel function for POU4F2/Brn-3b in regulating metabolic processes.

    PubMed

    Bitsi, Stavroula; Ali, Houda; Maskell, Lauren; Ounzain, Samir; Mohamed-Ali, Vidya; Budhram-Mahadeo, Vishwanie S

    2016-03-01

    The POU4F2/Brn-3b transcription factor has been identified as a potentially novel regulator of key metabolic processes. Loss of this protein in Brn-3b knockout (KO) mice causes profound hyperglycemia and insulin resistance (IR), normally associated with type 2 diabetes (T2D), whereas Brn-3b is reduced in tissues taken from obese mice fed on high-fat diets (HFD), which also develop hyperglycemia and IR. Furthermore, studies in C2C12 myocytes show that Brn-3b mRNA and proteins are induced by glucose but inhibited by insulin, suggesting that this protein is itself highly regulated in responsive cells. Analysis of differential gene expression in skeletal muscle from Brn-3b KO mice showed changes in genes that are implicated in T2D such as increased glycogen synthase kinase-3β and reduced GLUT4 glucose transporter. The GLUT4 gene promoter contains multiple Brn-3b binding sites and is directly transactivated by this transcription factor in cotransfection assays, whereas chromatin immunoprecipitation assays confirm that Brn-3b binds to this promoter in vivo. In addition, correlation between GLUT4 and Brn-3b in KO tissues or in C2C12 cells strongly supports a close association between Brn-3b levels and GLUT4 expression. Since Brn-3b is regulated by metabolites and insulin, this may provide a mechanism for controlling key genes that are required for normal metabolic processes in insulin-responsive tissues and its loss may contribute to abnormal glucose uptake. PMID:26670484

  20. A penalized likelihood approach for robust estimation of isoform expression

    PubMed Central

    2016-01-01

    Ultra high-throughput sequencing of transcriptomes (RNA-Seq) has enabled the accurate estimation of gene expression at individual isoform level. However, systematic biases introduced during the sequencing and mapping processes as well as incompleteness of the transcript annotation databases may cause the estimates of isoform abundances to be unreliable, and in some cases, highly inaccurate. This paper introduces a penalized likelihood approach to detect and correct for such biases in a robust manner. Our model extends those previously proposed by introducing bias parameters for reads. An L1 penalty is used for the selection of non-zero bias parameters. We introduce an efficient algorithm for model fitting and analyze the statistical properties of the proposed model. Our experimental studies on both simulated and real datasets suggest that the model has the potential to improve isoform-specific gene expression estimates and identify incompletely annotated gene models.
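    The L1 selection mechanism behind such penalized likelihoods is the familiar soft-thresholding operator, which shrinks estimates toward zero and sets small ones exactly to zero. The bias values and penalty weight below are invented for illustration.

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrink toward zero,
    zeroing out values smaller in magnitude than lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Toy per-read bias estimates: the penalty keeps only the clearly non-zero parameters,
# which is how an L1 term selects a sparse set of bias corrections.
raw_bias = [0.02, -0.01, 0.8, 0.03, -1.1]
selected = [soft_threshold(b, lam=0.1) for b in raw_bias]
```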

  1. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the
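    The contrast between satisficing and regret-based metrics is easy to make concrete. The plans, futures, and scores below are invented; the point is that the two metric families can select different plans from the same performance table.

```python
# Hypothetical performance scores (higher is better) of two plans across four futures.
performance = {
    "room_for_river": [5, 5, 5, 5],   # mediocre but consistent
    "dike_raising":   [9, 9, 9, 4],   # excellent except in one future
}
futures = range(4)

def satisficing(plan, threshold=5):
    """Fraction of futures in which the plan performs acceptably."""
    scores = performance[plan]
    return sum(v >= threshold for v in scores) / len(scores)

def max_regret(plan):
    """Worst-case shortfall from the best achievable score in each future."""
    best = [max(performance[p][f] for p in performance) for f in futures]
    return max(b - v for b, v in zip(best, performance[plan]))

chosen_by_satisficing = max(performance, key=satisficing)
chosen_by_regret = min(performance, key=max_regret)
# The two metrics disagree: satisficing rewards consistency, regret rewards
# never falling far behind the best alternative.
```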

  2. Meristem size contributes to the robustness of phyllotaxis in Arabidopsis

    PubMed Central

    Landrein, Benoit; Refahi, Yassin; Besnard, Fabrice; Hervieux, Nathan; Mirabet, Vincent; Boudaoud, Arezki; Vernoux, Teva; Hamant, Olivier

    2015-01-01

    Using the plant model Arabidopsis, the relationships between day length, the size of the shoot apical meristem, and the robustness of phyllotactic patterns were analysed. First, it was found that reducing day length leads to an increased meristem size and an increased number of alterations in the final positions of organs along the stem. Most of the phyllotactic defects could be related to an altered tempo of organ emergence, while not affecting the spatial positions of organ initiations at the meristem. A correlation was also found between meristem size and the robustness of phyllotaxis in two accessions (Col-0 and WS-4) and a mutant (clasp-1), independent of growth conditions. A reduced meristem size in clasp-1 was even associated with an increased robustness of the phyllotactic pattern, beyond what is observed in the wild type. Interestingly, it was also possible to modulate the robustness of phyllotaxis in these different genotypes by changing day length. To conclude, it is shown first that robustness of the phyllotactic pattern is not maximal in the wild type, suggesting that, beyond its apparent stereotypical order, the robustness of phyllotaxis is regulated. Secondly, a role for day length in the robustness of phyllotaxis was also identified, thus providing a new example of a link between patterning and environment in plants. Thirdly, the experimental results validate previous model predictions suggesting a contribution of meristem size to the robustness of phyllotaxis via the coupling between the temporal sequence and spatial pattern of organ initiations. PMID:25504644

  3. Use of chemical modification and mass spectrometry to identify substrate-contacting sites in proteinaceous RNase P, a tRNA processing enzyme

    PubMed Central

    Chen, Tien-Hao; Tanimoto, Akiko; Shkriabai, Nikoloz; Kvaratskhelia, Mamuka; Wysocki, Vicki; Gopalan, Venkat

    2016-01-01

    Among all enzymes in nature, RNase P is unique in that it can use either an RNA- or a protein-based active site for its function: catalyzing cleavage of the 5′-leader from precursor tRNAs (pre-tRNAs). The well-studied catalytic RNase P RNA uses a specificity module to recognize the pre-tRNA and a catalytic module to perform cleavage. Similarly, the recently discovered proteinaceous RNase P (PRORP) possesses two domains – pentatricopeptide repeat (PPR) and metallonuclease (NYN) – that are present in some other RNA processing factors. Here, we combined chemical modification of lysines and multiple-reaction monitoring mass spectrometry to identify putative substrate-contacting residues in Arabidopsis thaliana PRORP1 (AtPRORP1), and subsequently validated these candidate sites by site-directed mutagenesis. Using biochemical studies to characterize the wild-type (WT) and mutant derivatives, we found that AtPRORP1 exploits specific lysines strategically positioned at the tips of its V-shaped arms, in the first PPR motif and in the NYN domain proximal to the catalytic center, to bind and cleave pre-tRNA. Our results confirm that the protein- and RNA-based forms of RNase P have distinct modules for substrate recognition and cleavage, an unanticipated parallel in their mode of action. PMID:27166372

  4. Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies

    PubMed Central

    Petre, Benjamin; Saunders, Diane G. O.; Sklenar, Jan; Lorrain, Cécile; Krasileva, Ksenia V.; Win, Joe; Duplessis, Sébastien; Kamoun, Sophien

    2016-01-01

    Rust fungal pathogens of wheat (Triticum spp.) affect crop yields worldwide. The molecular mechanisms underlying the virulence of these pathogens remain elusive, due to the limited availability of suitable molecular genetic research tools. Notably, the inability to perform high-throughput analyses of candidate virulence proteins (also known as effectors) impairs progress. We previously established a pipeline for the fast-forward screens of rust fungal candidate effectors in the model plant Nicotiana benthamiana. This pipeline involves selecting candidate effectors in silico and performing cell biology and protein-protein interaction assays in planta to gain insight into the putative functions of candidate effectors. In this study, we used this pipeline to identify and characterize sixteen candidate effectors from the wheat yellow rust fungal pathogen Puccinia striiformis f. sp. tritici. Nine candidate effectors targeted a specific plant subcellular compartment or protein complex, providing valuable information on their putative functions in plant cells. One candidate effector, PST02549, accumulated in processing bodies (P-bodies), protein complexes involved in mRNA decapping, degradation, and storage. PST02549 also associates with the P-body-resident ENHANCER OF mRNA DECAPPING PROTEIN 4 (EDC4) from N. benthamiana and wheat. We propose that P-bodies are a novel plant cell compartment targeted by pathogen effectors. PMID:26863009

  5. Use of chemical modification and mass spectrometry to identify substrate-contacting sites in proteinaceous RNase P, a tRNA processing enzyme.

    PubMed

    Chen, Tien-Hao; Tanimoto, Akiko; Shkriabai, Nikoloz; Kvaratskhelia, Mamuka; Wysocki, Vicki; Gopalan, Venkat

    2016-06-20

    Among all enzymes in nature, RNase P is unique in that it can use either an RNA- or a protein-based active site for its function: catalyzing cleavage of the 5'-leader from precursor tRNAs (pre-tRNAs). The well-studied catalytic RNase P RNA uses a specificity module to recognize the pre-tRNA and a catalytic module to perform cleavage. Similarly, the recently discovered proteinaceous RNase P (PRORP) possesses two domains - pentatricopeptide repeat (PPR) and metallonuclease (NYN) - that are present in some other RNA processing factors. Here, we combined chemical modification of lysines and multiple-reaction monitoring mass spectrometry to identify putative substrate-contacting residues in Arabidopsis thaliana PRORP1 (AtPRORP1), and subsequently validated these candidate sites by site-directed mutagenesis. Using biochemical studies to characterize the wild-type (WT) and mutant derivatives, we found that AtPRORP1 exploits specific lysines strategically positioned at the tips of its V-shaped arms, in the first PPR motif and in the NYN domain proximal to the catalytic center, to bind and cleave pre-tRNA. Our results confirm that the protein- and RNA-based forms of RNase P have distinct modules for substrate recognition and cleavage, an unanticipated parallel in their mode of action. PMID:27166372

  6. Robust Design of Motor PWM Control using Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Zhan, Wei

    A robust design method is developed for Pulse Width Modulation (PWM) motor speed control. A first-principles model for a DC permanent-magnet motor is used to build a Simulink model for simulation and analysis. Based on the simulation results, the main factors that contribute to the average speed variation are identified using Design of Experiments (DOE). A robust solution is derived to reduce the average speed variation using the Response Surface Method (RSM). The robustness of the new design is verified using the simulation model.
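    A first-principles steady-state model shows how PWM duty cycle maps to speed and how a parameter tolerance propagates to speed variation — the kind of factor effect a DOE would quantify. The motor constants below are arbitrary illustrative values, not those of the paper's Simulink model.

```python
def steady_speed(duty, v_supply=12.0, kt=0.05, ke=0.05, r=1.0, b=1e-4):
    """Steady-state speed (rad/s) of a permanent-magnet DC motor driven at the
    average PWM voltage V = duty * v_supply, from the torque balance
    kt * (V - ke * w) / r = b * w  =>  w = kt * V / (r * b + kt * ke)."""
    v = duty * v_supply
    return kt * v / (r * b + kt * ke)

full = steady_speed(1.0)
half = steady_speed(0.5)
# Effect of a +/-10% winding-resistance tolerance at 50% duty cycle:
spread = steady_speed(0.5, r=0.9) - steady_speed(0.5, r=1.1)
```

    In this idealized model speed is exactly proportional to duty cycle; a robust design would pick operating points and component values that shrink `spread` under such tolerances.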

  7. Genome sequencing identifies two nearly unchanged strains of persistent Listeria monocytogenes isolated at two different fish processing plants sampled 6 years apart.

    PubMed

    Holch, Anne; Webb, Kristen; Lukjancenko, Oksana; Ussery, David; Rosenthal, Benjamin M; Gram, Lone

    2013-05-01

    Listeria monocytogenes is a food-borne human-pathogenic bacterium that can cause infections with a high mortality rate. It has a remarkable ability to persist in food processing facilities. Here we report the genome sequences for two L. monocytogenes strains (N53-1 and La111) that were isolated 6 years apart from two different Danish fish processers. Both strains are of serotype 1/2a and belong to a highly persistent DNA subtype (random amplified polymorphic DNA [RAPD] type 9). We demonstrate using in silico analyses that both strains belong to the multilocus sequence typing (MLST) type ST121 that has been isolated as a persistent subtype in several European countries. The purpose of this study was to use genome analyses to identify genes or proteins that could contribute to persistence. In a genome comparison, the two persistent strains were extremely similar and collectively differed from the reference lineage II strain, EGD-e. Also, they differed markedly from a lineage I strain (F2365). On the proteome level, the two strains were almost identical, with a predicted protein homology of 99.94%, differing at only 2 proteins. No single-nucleotide polymorphism (SNP) differences were seen between the two strains; in contrast, N53-1 and La111 differed from the EGD-e reference strain by 3,942 and 3,471 SNPs, respectively. We included a persistent L. monocytogenes strain from the United States (F6854) in our comparisons. Compared to nonpersistent strains, all three persistent strains were distinguished by two genome deletions: one, of 2,472 bp, typically contains the gene for inlF, and the other, of 3,017 bp, includes three genes potentially related to bacteriocin production and transport (lmo2774, lmo2775, and the 3'-terminal part of lmo2776). Further studies of highly persistent strains are required to determine if the absence of these genes promotes persistence. While the genome comparison did not point to a clear physiological explanation of the persistent phenotype

  8. Genome Sequencing Identifies Two Nearly Unchanged Strains of Persistent Listeria monocytogenes Isolated at Two Different Fish Processing Plants Sampled 6 Years Apart

    PubMed Central

    Holch, Anne; Webb, Kristen; Lukjancenko, Oksana; Ussery, David; Rosenthal, Benjamin M.

    2013-01-01

    Listeria monocytogenes is a food-borne human-pathogenic bacterium that can cause infections with a high mortality rate. It has a remarkable ability to persist in food processing facilities. Here we report the genome sequences for two L. monocytogenes strains (N53-1 and La111) that were isolated 6 years apart from two different Danish fish processers. Both strains are of serotype 1/2a and belong to a highly persistent DNA subtype (random amplified polymorphic DNA [RAPD] type 9). We demonstrate using in silico analyses that both strains belong to the multilocus sequence typing (MLST) type ST121 that has been isolated as a persistent subtype in several European countries. The purpose of this study was to use genome analyses to identify genes or proteins that could contribute to persistence. In a genome comparison, the two persistent strains were extremely similar and collectively differed from the reference lineage II strain, EGD-e. Also, they differed markedly from a lineage I strain (F2365). On the proteome level, the two strains were almost identical, with a predicted protein homology of 99.94%, differing at only 2 proteins. No single-nucleotide polymorphism (SNP) differences were seen between the two strains; in contrast, N53-1 and La111 differed from the EGD-e reference strain by 3,942 and 3,471 SNPs, respectively. We included a persistent L. monocytogenes strain from the United States (F6854) in our comparisons. Compared to nonpersistent strains, all three persistent strains were distinguished by two genome deletions: one, of 2,472 bp, typically contains the gene for inlF, and the other, of 3,017 bp, includes three genes potentially related to bacteriocin production and transport (lmo2774, lmo2775, and the 3′-terminal part of lmo2776). Further studies of highly persistent strains are required to determine if the absence of these genes promotes persistence. While the genome comparison did not point to a clear physiological explanation of the persistent

  9. Robust disturbance rejection for flexible mechanical structures

    NASA Astrophysics Data System (ADS)

    Enzmann, Marc R.; Doeschner, Christian

    2000-06-01

    The topic of this presentation is a procedure for determining controller parameters using principles from Internal Model Control (IMC) in combination with Quantitative Feedback Theory (QFT) for robust vibration control of flexible mechanical structures. IMC design is based on a parameterization of all controllers that stabilize a given nominal plant, called the Q-parameter or Youla parameter. It will be shown that it is possible to choose the controller structure and the Q-parameter in a very straightforward manner, so that a low-order controller results which stabilizes the given nominal model. Additional constraints can be implemented, so that the method allows for a direct and transparent trade-off between control performance and controller complexity and facilitates the inclusion of low-pass filters. In order to test (and if necessary augment) the inherent robust performance of the resulting controllers, boundaries based on the work of Kidron and Yaniv are calculated in the Nichols charts of the open loop and the complementary sensitivity function. The application of these boundaries is presented. Very simple uncertainty models for resonant modes are used to assess the robustness of the design. Using a simply structured plant as an illustrative example, we will demonstrate the design process. This will illuminate several important features of the design process, e.g. the trade-off between conflicting objectives and the trade-off between controller complexity and achievable performance.

  10. Robust speech coding using microphone arrays

    NASA Astrophysics Data System (ADS)

    Li, Zhao

    1998-09-01

    To achieve robustness and efficiency for voice communication in noise, the noise suppression and bandwidth compression processes are combined to form a joint process using input from an array of microphones. An adaptive beamforming technique with a set of robust linear constraints and a single quadratic inequality constraint is used to preserve desired signal and to cancel directional plus ambient noise in a small room environment. This robustly constrained array processor is found to be effective in limiting signal cancelation over a wide range of input SNRs (-10 dB to +10 dB). The resulting intelligibility gains (8-10 dB) provide significant improvement to subsequent CELP coding. In addition, the desired speech activity is detected by estimating Target-to-Jammer Ratios (TJR) using subband correlations between different microphone inputs or using signals within the Generalized Sidelobe Canceler directly. These two novel techniques of speech activity detection for coding are studied thoroughly in this dissertation. Each is subsequently incorporated with the adaptive array and a 4.8 kbps CELP coder to form a Variable Bit Rate (VBR) coder with noise canceling and Spatial Voice Activity Detection (SVAD) capabilities. This joint noise suppression and bandwidth compression system demonstrates large improvements in desired speech quality after coding, accurate desired speech activity detection in various types of interference, and a reduction in the information bits required to code the speech.
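    The array-alignment idea underlying such beamformers can be sketched with the plain delay-and-sum variant (integer-sample delays, known geometry) — a simplification of the constrained adaptive beamformer the dissertation actually uses; the signals and delays below are invented.

```python
import math

def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer: advance each channel by its known delay
    (in samples) so the desired source aligns, then average the channels."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc, count = 0.0, 0
        for sig, d in zip(channels, delays):
            if 0 <= t + d < n:
                acc += sig[t + d]
                count += 1
        out.append(acc / max(count, 1))
    return out

# Two microphones: the second hears the source 3 samples late.
source = [math.sin(0.1 * t) for t in range(100)]
mic0 = source[:]
mic1 = [0.0] * 3 + source[:-3]
aligned = delay_and_sum([mic0, mic1], delays=[0, 3])  # recovers the source
```

    With uncorrelated noise on each channel, averaging M aligned channels cuts the noise power by a factor of M while the aligned desired signal adds coherently, which is the source of the intelligibility gain described above.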

  11. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details. PMID:18290007
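    The least-trimmed-squares criterion behind these filters can be illustrated with a location estimate over a filter window; the 50% coverage fraction and the sample window are illustrative assumptions.

```python
def lts_location(window, coverage=0.5):
    """Least-trimmed-squares location estimate: the mean of the h-point subset
    (h = coverage fraction of the window) with the smallest sum of squared
    residuals about its own mean; the remaining points are trimmed as outliers."""
    data = sorted(window)
    h = max(int(len(data) * coverage), 1)
    best_val, best_cost = data[0], float("inf")
    # For a location estimate, the optimal h-subset is contiguous in the sorted sample.
    for i in range(len(data) - h + 1):
        subset = data[i:i + h]
        mean = sum(subset) / h
        cost = sum((x - mean) ** 2 for x in subset)
        if cost < best_cost:
            best_val, best_cost = mean, cost
    return best_val

# A flat image region contaminated by impulsive outliers (0 and 255):
window = [10, 10, 11, 9, 10, 255, 10, 0, 10]
estimate = lts_location(window)
```

    Unlike a plain mean, the trimmed criterion ignores both impulses without any tuned threshold, which is the "no scale parameters required" property the abstract highlights.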

  12. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. 
    o How to adapt boosting to regression problems?
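
    The margin analysis described in this thesis can be made concrete with a toy AdaBoost run. The sketch below (illustrative 1-D data and threshold stumps as base hypotheses; not Rätsch's regularized variants) trains a weighted combination and reports the minimum normalized margin on the training set:

```python
import math

# Toy 1-D training set: points with labels in {-1, +1}.
X = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [-1, -1, -1, 1, 1, 1]

# Base hypotheses: threshold stumps h(x) = s * sign(x - t).
def stump(t, s):
    return lambda x: s * (1 if x - t >= 0 else -1)

hypotheses = [stump(t, s) for t in (-1.5, -0.75, 0.0, 0.75, 1.5) for s in (1, -1)]

def adaboost(X, y, hypotheses, rounds=20):
    n = len(X)
    w = [1.0 / n] * n                      # example weights
    ensemble = []                          # (alpha, h) pairs
    for _ in range(rounds):
        # Pick the base hypothesis with the smallest weighted error.
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                for h in hypotheses]
        i = min(range(len(hypotheses)), key=lambda j: errs[j])
        err = max(errs[i], 1e-12)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        h = hypotheses[i]
        ensemble.append((alpha, h))
        # Reweight: misclassified examples gain weight.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def margin(ensemble, x, yi):
    # Normalized margin y*f(x)/sum(alpha), in [-1, 1]; positive means correct.
    s = sum(a for a, _ in ensemble)
    return yi * sum(a * h(x) for a, h in ensemble) / s

ensemble = adaboost(X, y, hypotheses)
margins = [margin(ensemble, xi, yi) for xi, yi in zip(X, y)]
print(min(margins))
```

    A soft-margin variant in the thesis's spirit would additionally cap the example weights so that noisy points cannot dominate the reweighting step.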

  13. Robust control technique for nuclear power plants

    SciTech Connect

    Murphy, G.V.; Bailey, J.M.

    1989-03-01

    This report summarizes the linear quadratic Gaussian (LQG) design technique with loop transfer recovery (LQG/LTR) for the design of control systems. The concepts of return ratio, return difference, inverse return difference, and singular values are summarized. The LQG/LTR design technique allows the synthesis of a robust control system. To illustrate the LQG/LTR technique, a linearized model of a simple process has been chosen. The process has three state variables, one input, and one output. Three control system design methods are compared: LQG, LQG/LTR, and a proportional plus integral controller (PI). 7 refs., 20 figs., 6 tabs.
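
    The LQG machinery the report summarizes rests on solving a Riccati equation for the optimal gain. A minimal scalar sketch (illustrative numbers, not the report's three-state process model):

```python
# Scalar discrete-time LQR: x[k+1] = a*x[k] + b*u[k], cost sum(q*x^2 + r*u^2).
# The LQG/LTR designs in the report are multivariable; this scalar sketch only
# illustrates the Riccati-equation step at their core.
a, b, q, r = 1.2, 1.0, 1.0, 0.1    # an unstable plant (|a| > 1)

p = q
for _ in range(200):               # fixed-point iteration of the Riccati equation
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

k = a * b * p / (r + b * b * p)    # optimal state-feedback gain, u = -k*x
closed_loop = a - b * k
print(closed_loop)                 # stable: |a - b*k| < 1
```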

  14. Mutational robustness emerges in a microscopic model of protein evolution

    NASA Astrophysics Data System (ADS)

    Zeldovich, Konstantin; Shakhnovich, Eugene

    2009-03-01

    The ability to absorb mutations while retaining structure and function, or mutational robustness, is a remarkable property of natural proteins. We use a computational model of organismic evolution [Zeldovich et al, PLOS Comp Biol 3(7):e139 (2007)], which explicitly couples protein physics and population dynamics, to study mutational robustness of evolved model proteins. We compare evolved sequences with the ones designed to fold into the same native structures and having the same thermodynamic stability, and find that evolved sequences are more robust against point mutations, being less likely to be destabilized, and more likely to increase stability upon a point mutation. These results point to sequence evolution as an important method of protein engineering if mutational robustness of the artificially developed proteins is desired. On the biological side, mutational robustness of proteins appears to be a natural consequence of the divergence-mutation-selection evolutionary process.

  15. Robust scanner identification based on noise features

    NASA Astrophysics Data System (ADS)

    Gou, Hongmei; Swaminathan, Ashwin; Wu, Min

    2007-02-01

    A large portion of digital image data available today is acquired using digital cameras or scanners. While cameras allow digital reproduction of natural scenes, scanners are often used to capture hardcopy art in more controlled scenarios. This paper proposes a new technique for non-intrusive scanner model identification, which can be further extended to perform tampering detection on scanned images. Using only scanned image samples that contain arbitrary content, we construct a robust scanner identifier to determine the brand/model of the scanner used to capture each scanned image. The proposed scanner identifier is based on statistical features of scanning noise. We first analyze scanning noise from several angles, including through image de-noising, wavelet analysis, and neighborhood prediction, and then obtain statistical features from each characterization. Experimental results demonstrate that the proposed method can effectively identify the correct scanner brands/models with high accuracy.
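
    A minimal sketch of the noise-feature idea: denoise a scan line, take the residual as an estimate of the scanning noise, and summarize it with statistical features. The synthetic data and the moving-average denoiser below are stand-ins for the paper's image de-noising and wavelet analysis:

```python
import random
import statistics

# A scanned "row" of pixel values with additive sensor noise (synthetic).
random.seed(0)
signal = [100 + 20 * (i % 50 < 25) for i in range(500)]      # step pattern
noisy = [s + random.gauss(0, 3) for s in signal]

# Simple denoising by a moving average; the residual approximates the noise.
def moving_average(x, w=5):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

residual = [n - d for n, d in zip(noisy, moving_average(noisy))]

# Statistical features of the residual, usable as a per-device signature.
features = {
    "mean": statistics.fmean(residual),
    "std": statistics.stdev(residual),
}
print(features)
```

    A real identifier would collect many such features per color channel and feed them to a classifier trained on known scanner models.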

  16. Vehicle active steering control research based on two-DOF robust internal model control

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun

    2016-03-01

    Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee robustness; therefore, an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw-rate tracking performance with guaranteed robustness. The proposed algorithm inherits the good model tracking ability of IMC control and guarantees robustness to model uncertainties. In order to separate the model tracking design from the robustness design, the improved 2-degree-of-freedom (DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of a double lane change maneuver and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness in active steering control is thus resolved, and high control performance with robustness to model uncertainties is obtained.

  17. Reactive Transport Modeling of Chemical and Isotope Data to Identify Degradation Processes of Chlorinated Ethenes in a Diffusion-Dominated Media

    NASA Astrophysics Data System (ADS)

    Chambon, J. C.; Damgaard, I.; Jeannottat, S.; Hunkeler, D.; Broholm, M. M.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Chlorinated ethenes are among the most widespread contaminants in the subsurface and a major threat to groundwater quality at numerous contaminated sites. Many of these contaminated sites are found in low-permeability media, such as clay tills, where contaminant transport is controlled by diffusion. Degradation and transport processes of chlorinated ethenes are not well understood in such geological settings, therefore risk assessment and remediation at these sites are particularly challenging. In this work, a combined approach of chemical and isotope analysis on core samples, and reactive transport modeling has been used to identify the degradation processes occurring at the core scale. The field data was from a site located at Vadsby, Denmark, where chlorinated solvents were spilled during the 1960-70's, resulting in contamination of the clay till and the underlying sandy layer (15 meters below surface). The clay till is heavily contaminated between 4 and 15 mbs, both with the mother compounds PCE/TCE and TCA and the daughter products (DCE, VC, ethene, DCA), indicating the occurrence of natural dechlorination of both PCE/TCE and TCA. Intact core samples of length 0.5m were collected from the source zone (between 6 and 12 mbs). Concentrations and stable isotope ratios of the mother compounds and their daughter products, as well as redox parameters, fatty acids and microbial data, were analyzed with discrete sub-sampling along the cores. More samples (each 5 mm) were collected around the observed higher permeability zones such as sand lenses, sand stringers and fractures, where a higher degradation activity was expected. This study made use of a reactive transport model to investigate the appropriateness of several conceptual models. The conceptual models considered the location of dechlorination and degradation pathways (biotic reductive dechlorination or abiotic β-elimination with iron minerals) in three core profiles. 
    The model includes diffusion in the matrix
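
    The core transport mechanism in such a model can be sketched with an explicit finite-difference scheme for one-dimensional matrix diffusion plus first-order degradation (parameter values below are illustrative, not the Vadsby site's):

```python
# Explicit finite-difference sketch of 1-D matrix diffusion with first-order
# degradation, dC/dt = D * d2C/dx2 - k*C.
D, k = 1e-9, 1e-8          # diffusion coeff (m^2/s), degradation rate (1/s)
dx, dt, n = 0.01, 2e4, 50  # grid spacing (m), time step (s), number of cells
assert D * dt / dx**2 < 0.5          # explicit-scheme stability criterion

C = [1.0] + [0.0] * (n - 1)          # initial slug of contaminant at one end
for _ in range(1000):
    lap = [0.0] * n
    for i in range(1, n - 1):
        lap[i] = (C[i - 1] - 2 * C[i] + C[i + 1]) / dx**2
    # zero-flux boundaries
    lap[0] = (C[1] - C[0]) / dx**2
    lap[-1] = (C[-2] - C[-1]) / dx**2
    C = [c + dt * (D * l - k * c) for c, l in zip(C, lap)]

print(sum(C))   # total mass decays only through degradation
```

    Fitting k and D in such a scheme against measured concentration and isotope profiles is, in spirit, how competing conceptual models of where dechlorination occurs can be compared.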

  18. Coupling groundwater and land surface processes: Idealized simulations to identify effects of terrain and subsurface heterogeneity on land surface energy fluxes

    NASA Astrophysics Data System (ADS)

    Rihani, Jehan F.; Maxwell, Reed M.; Chow, Fotini K.

    2010-12-01

    This work investigates the role of terrain and subsurface heterogeneity on the interactions between groundwater dynamics and land surface energy fluxes using idealized simulations. A three-dimensional variably saturated groundwater code (ParFlow) coupled to a land surface model (Common Land Model) is used to account for both vertical and lateral water and pressure movement. This creates a fully integrated approach, coupling overland and subsurface flow while having an explicit representation of the water table and all land surface processes forced by atmospheric data. Because the water table is explicitly represented in these simulations, regions with stronger interaction between water table depth and the land surface energy balance (known as critical zones) can be identified. This study uses simple terrain and geologic configurations to demonstrate the importance of lateral surface and subsurface flows in determining land surface heat and moisture fluxes. Strong correlations are found between the land surface fluxes and water table depth across all cases, including terrain shape, subsurface heterogeneity, vegetation type, and climatological region. Results show that different land forms and subsurface heterogeneities produce very different water table dynamics and land surface flux responses to atmospheric forcing. Subsurface formation and properties have the greatest effect on the coupling between the water table and surface heat and moisture fluxes. Changes in landform and land surface slope also have an effect on these interactions by influencing the fraction of rainfall contributing to overland flow versus infiltration. This directly affects the extent of the critical zone with highest coupling strength along the hillside. 
    Vegetative land cover, as seen in these simulations, has a large effect on the energy balance at the land surface but a small effect on streamflow and water table dynamics, and thus a limited impact on the land surface-subsurface interactions.

  19. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often polluted by noise, which makes them difficult to segment with traditional level set based segmentation models. In this paper, we propose a robust level set image segmentation model combining local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) information of an input image, which can efficiently reduce the effects of noise, and the global fitting energy utilizes the correntropy-based K-means (CK) method, which can adaptively emphasize the samples that are close to their corresponding cluster centers. By integrating the advantages of global information and local robust statistics characteristics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. Then, a level set regularization term is used to avoid re-initialization procedures in the process of curve evolution. In addition, a Gaussian filter is utilized to keep the level set smooth during curve evolution. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results show the advantages of our model in terms of accuracy and robustness to noise. In particular, our method has been applied to some synthetic and real images with desirable results.
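
    The correntropy-based K-means idea can be illustrated in one dimension: each sample's pull on its cluster center is weighted by a Gaussian kernel of its distance, so gross outliers (like noise pixels) are adaptively suppressed. A hedged sketch with made-up data:

```python
import math
import random

# Correntropy-weighted K-means on 1-D samples; sigma and the data are
# illustrative, not the paper's image-intensity formulation.
random.seed(1)
data = [random.gauss(0, 0.3) for _ in range(50)] + \
       [random.gauss(5, 0.3) for _ in range(50)] + [40.0]   # one gross outlier

def ck_means(data, centres, sigma=1.0, iters=20):
    for _ in range(iters):
        # Assign each point to its nearest centre.
        groups = {i: [] for i in range(len(centres))}
        for x in data:
            i = min(range(len(centres)), key=lambda j: abs(x - centres[j]))
            groups[i].append(x)
        # Update each centre with Gaussian-kernel (correntropy) weights.
        for i, pts in groups.items():
            if not pts:
                continue
            w = [math.exp(-(x - centres[i]) ** 2 / (2 * sigma ** 2)) for x in pts]
            centres[i] = sum(wi * x for wi, x in zip(w, pts)) / sum(w)
    return centres

centres = ck_means(data, [0.0, 10.0])
print(sorted(centres))   # close to the true cluster centres 0 and 5
```

    The outlier at 40 receives an essentially zero kernel weight, so unlike plain K-means it cannot drag a centre toward itself.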

  20. Uneven Genetic Robustness of HIV-1 Integrase

    PubMed Central

    Rihn, Suzannah J.; Hughes, Joseph; Wilson, Sam J.

    2014-01-01

    ABSTRACT Genetic robustness (tolerance of mutation) may be a naturally selected property in some viruses, because it should enhance adaptability. Robustness should be especially beneficial to viruses like HIV-1 that exhibit high mutation rates and exist in immunologically hostile environments. Surprisingly, however, the HIV-1 capsid protein (CA) exhibits extreme fragility. To determine whether fragility is a general property of HIV-1 proteins, we created a large library of random, single-amino-acid mutants in HIV-1 integrase (IN), covering >40% of amino acid positions. Despite similar degrees of sequence variation in naturally occurring IN and CA sequences, we found that HIV-1 IN was significantly more robust than CA, with random nonsilent IN mutations only half as likely to cause lethal defects. Interestingly, IN and CA were similar in that a subset of mutations with high in vitro fitness were rare in natural populations. IN mutations of this type were more likely to occur in the buried interior of the modeled HIV-1 intasome, suggesting that even very subtle fitness effects suppress variation in natural HIV-1 populations. Lethal mutations, in particular those that perturbed particle production, proteolytic processing, and particle-associated IN levels, were strikingly localized at specific IN subunit interfaces. This observation strongly suggests that binding interactions between particular IN subunits regulate proteolysis during HIV-1 virion morphogenesis. Overall, use of the IN mutant library in conjunction with structural models demonstrates the overall robustness of IN and highlights particular regions of vulnerability that may be targeted in therapeutic interventions. IMPORTANCE The HIV-1 integrase (IN) protein is responsible for the integration of the viral genome into the host cell chromosome. To measure the capacity of IN to maintain function in the face of mutation, and to probe structure/function relationships, we created a library of random single

  1. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application with an example, and suggest its use as an integral part of the space system design process.
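
    The orthogonal-array and signal-to-noise bookkeeping can be sketched as follows (a made-up L4(2^3) experiment; the larger-is-better S/N form used here is one of Taguchi's standard choices):

```python
import math

# An L4 orthogonal array over three two-level factors: each row is one run,
# each entry the level of a factor.  Response replicates are made up.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
responses = [[20.1, 19.8], [24.5, 25.2], [22.0, 21.7], [27.9, 28.3]]

def sn_larger_is_better(ys):
    # Taguchi larger-is-better S/N ratio: SN = -10*log10(mean(1/y^2)).
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in responses]

# Main effect of factor 1: average S/N at level 2 minus average S/N at level 1.
effect_f1 = (sn[2] + sn[3]) / 2 - (sn[0] + sn[1]) / 2
print([round(s, 2) for s in sn], round(effect_f1, 2))
```

    With only 4 of the 8 possible runs, the orthogonal array still lets each factor's main effect on the S/N ratio be estimated, which is the method's economy-of-experiments argument.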

  2. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
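
    The sensitivity the authors describe can be reproduced with the Gordon–Schaefer model itself: an effort level that is optimal for the nominal growth rate loses yield, and can even drive the stock to collapse, when the true growth rate is lower. A hedged sketch (not the paper's robust-control computation):

```python
# Gordon-Schaefer fishery: dx/dt = r*x*(1 - x/K) - q*E*x, harvest h = q*E*x.
def steady_state_stock(r, K, q, E):
    # Setting dx/dt = 0 gives x* = K*(1 - q*E/r), floored at extinction.
    return max(0.0, K * (1 - q * E / r))

r_nom, K, q = 1.0, 1.0, 1.0          # illustrative nominal parameters
E_opt = r_nom / (2 * q)              # maximizes sustainable yield at r_nom

for r_true in (1.0, 0.6, 0.4):
    x = steady_state_stock(r_true, K, q, E_opt)
    print(r_true, round(x, 3), round(q * E_opt * x, 3))  # stock and yield
```

    At r_true = 0.4 the "optimal" effort exceeds what the stock can sustain and the steady-state stock is zero, which is exactly the kind of parametric fragility that motivates trading some nominal yield for robustness.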

  3. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., >63 pixels²) and with a larger offset length (i.e., >7 pixels), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment. PMID:26158105
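
    The Hodges–Lehmann statistic at the heart of these equivalence tests is the median of all pairwise differences between two samples. A toy sketch (feature values are made up):

```python
import statistics

# Hodges-Lehmann estimator of the location shift between two samples:
# the median of all pairwise differences x - y.
def hodges_lehmann_shift(a, b):
    return statistics.median(x - y for x in a for y in b)

system_a = [1.2, 1.4, 1.1, 1.3, 1.5]   # a texture feature on system A
system_b = [1.0, 1.1, 0.9, 1.2, 1.0]   # the same feature on system B

print(hodges_lehmann_shift(system_a, system_b))
```

    Because it is a median of differences rather than a difference of means, the estimate is insensitive to a few aberrant phantom readings, which suits the robustness question the paper asks.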

  4. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices

    PubMed Central

    Keller, Brad M.; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J.; Zheng, Yuanjie; Ray, Shonket; Gee, James C.; Maidment, Andrew D. A.; Kontos, Despina

    2015-01-01

    Abstract. An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges–Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., >63 pixels²) and with a larger offset length (i.e., >7 pixels), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment. PMID:26158105

  5. Effective acoustic modeling for robust speaker recognition

    NASA Astrophysics Data System (ADS)

    Hasan Al Banna, Taufiq

    Robustness due to mismatched train/test conditions is the biggest challenge facing the speaker recognition community today, with transmission channel and environmental noise degradation being the prominent factors. State-of-the-art speaker recognition methods aim to mitigate these factors by effectively modeling speech in multiple recording conditions, so that the system can learn to distinguish between inter-speaker and intra-speaker variability. The increasing demand and availability of large development corpora introduces difficulties in effective data utilization and computationally efficient modeling. Traditional compensation strategies operate on higher dimensional utterance features, known as supervectors, which are obtained from the acoustic modeling of short-time features. Feature compensation is performed during front-end processing. Motivated by the covariance structure of conventional acoustic features, we envision that feature normalization and compensation can be integrated into the acoustic modeling. In this dissertation, we investigate the following fundamental research challenges: (i) analysis of data requirements for effective and efficient background model training, (ii) introducing latent factor analysis modeling of acoustic features, (iii) integration of channel compensation strategies in mixture models, and (iv) development of noise robust background models using factor analysis. The effectiveness of the proposed solutions is demonstrated in various noisy and channel degraded conditions using the recent evaluation datasets released by the National Institute of Standards and Technology (NIST). These research accomplishments make an important step towards improving speaker recognition robustness in diverse acoustic conditions.

  6. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  7. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images, in copyright transactions. Many digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results show that this new method has high accuracy and is highly robust against geometric attacks and signal processing operations. Furthermore, anyone without knowledge of the key cannot locate the embedded watermark. As a result, the watermark is difficult to modify, so the scheme is secure and robust.
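
    A toy sketch of the chaos-plus-DCT idea on a 1-D block (the paper operates on 2-D image blocks; the logistic map, coefficient choice, and embedding strength below are illustrative, not the paper's scheme):

```python
import math

# Unnormalized DCT-II and its inverse (DCT-III scaled), for 1-D blocks.
def dct(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def chaotic_index(key, N):
    # Iterate the logistic map from the secret key to pick a coefficient.
    x = key
    for _ in range(20):
        x = 3.99 * x * (1 - x)
    return 1 + int(x * (N - 1))         # a non-DC coefficient index

def embed(block, bit, key, strength=10.0):
    X = dct(block)
    X[chaotic_index(key, len(X))] = strength if bit else -strength
    return idct(X)

def extract(block, key):
    X = dct(block)
    return X[chaotic_index(key, len(X))] > 0

block = [10.0, 12, 11, 13, 12, 14, 13, 15]
marked = embed(block, True, key=0.37)
print(extract(marked, key=0.37))
```

    Without the key, an attacker cannot reproduce the chaotic index and so cannot tell which coefficient carries the bit, which is the security argument the abstract gestures at.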

  8. Enhancing robustness of coupled networks under targeted recoveries

    PubMed Central

    Gong, Maoguo; Ma, Lijia; Cai, Qing; Jiao, Licheng

    2015-01-01

    Coupled networks are extremely fragile because a node failure in one network can trigger a cascade of failures across the entire system. Existing studies mainly focused on the cascading failures and the robustness of coupled networks when the networks suffer from attacks. In reality, it is necessary to recover the damaged networks, and cascading failures also occur during recovery processes. In this study, firstly, we analyze the cascading failures of coupled networks during recoveries. Then, a recovery robustness index is presented for evaluating the resilience of coupled networks to cascading failures in the recovery processes. Finally, we propose a technique that protects several influential nodes to enhance the robustness of coupled networks during recoveries, and adopt six strategies based on network centrality to find the influential nodes. Experiments on three coupled networks demonstrate that with a small number of influential nodes protected, the robustness of coupled networks during recoveries can be greatly enhanced. PMID:25675980
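
    The cascade mechanism can be illustrated with a minimal interdependence model: a node functions only while it has a functional neighbour in its own network and its one-to-one counterpart in the other network is functional. This is a simplification of the models the paper builds on (topologies below are made up):

```python
# Two one-to-one interdependent networks as adjacency lists.
A = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}     # network A
B = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}           # network B

def cascade(A, B, initially_failed):
    alive_a = set(A) - initially_failed
    alive_b = set(B)
    changed = True
    while changed:
        changed = False
        for net, alive, other in ((A, alive_a, alive_b), (B, alive_b, alive_a)):
            for n in list(alive):
                # A node needs a live neighbour AND a live counterpart.
                has_neighbour = any(m in alive for m in net[n])
                if not has_neighbour or n not in other:
                    alive.discard(n)
                    changed = True
    return alive_a, alive_b

# Failing node 3 in A strands its counterpart, node 3 in B, and the removal
# propagates back and forth until a fixed point is reached.
alive_a, alive_b = cascade(A, B, {3})
print(sorted(alive_a), sorted(alive_b))
```

    Protecting an influential node amounts to exempting it from the removal rule; the paper's recovery robustness index measures how much of the system survives such cascades during staged recoveries.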

  9. Two novel PRPF31 premessenger ribonucleic acid processing factor 31 homolog mutations including a complex insertion-deletion identified in Chinese families with retinitis pigmentosa

    PubMed Central

    Dong, Bing; Chen, Jieqiong; Zhang, Xiaohui; Pan, Zhe; Bai, Fengge

    2013-01-01

    Objective To identify the causative mutations in two Chinese families with retinitis pigmentosa (RP), and to describe the associated phenotype. Methods Individuals from two unrelated families underwent full ophthalmic examinations. After informed consent was obtained, genomic DNA was extracted from the venous blood of all participants. Linkage analysis was performed on the known genetic loci for autosomal dominant retinitis pigmentosa with a panel of polymorphic markers in the two families, and then all coding exons of the PRPF31 premessenger ribonucleic acid processing factor 31 homolog (PRPF31) gene were screened for mutations with direct sequencing of PCR-amplified DNA fragments. Allele-specific PCR was used to validate a substitution in all available family members and 100 normal controls. A large deletion was detected with real-time quantitative PCR (RQ-PCR) using a panel of primers from regions around the PRPF31 gene. Long-range PCR, followed by DNA sequencing, was used to define the breakpoints. Results Clinical examination and pedigree analysis revealed two four-generation families (RP24 and RP106) with autosomal dominant retinitis pigmentosa. A significant two-point lod score was generated at marker D19S926 (Zmax=3.55, θ=0) for family RP24 and D19S571 (Zmax=3.21, θ=0) for family RP106, and further linkage and haplotype studies confined the disease locus to chromosome 19q13.42 where the PRPF31 gene is located. Mutation screening of the PRPF31 gene revealed a novel deletion c.1215delG (p.G405fs+7X) in family RP106. The deletion cosegregated with the family’s disease phenotype, but was not found in 100 normal controls. No disease-causing mutation was detected in family RP24 with PCR-based sequencing analysis. RQ-PCR and long-range PCR analysis revealed a complex insertion-deletion (indel) in the patients of family RP24. The deletion is more than 19 kb and encompasses part of the PRPF31 gene (exons 1–3), together with three adjacent

  10. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Âz + K̂(y − Ĉz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K̂ if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
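
    The closed-loop realization's robustness can be seen in a scalar simulation: even with a perturbed observer gain, the estimation error still decays. All numbers below are illustrative:

```python
# Scalar closed-loop (Luenberger) observer, zdot = A*z + K*(y - C*z).
# The error e = x - z obeys edot = (A - K*C)*e, so it decays for any gain
# keeping A - K*C stable -- including a mildly perturbed implementation gain.
A, C = -1.0, 1.0           # stable scalar plant xdot = A*x, output y = C*x
K_design = 4.0
K_actual = K_design * 1.1  # 10% implementation error in the observer gain

dt = 0.001
x, z = 1.0, 0.0            # true state and observer estimate
for _ in range(5000):      # 5 seconds of Euler integration
    y = C * x
    x += dt * (A * x)
    z += dt * (A * z + K_actual * (y - C * z))

print(abs(x - z))          # estimation error has decayed essentially to zero
```

    The open-loop realization ż = Mz + Ky has no such correction term, so a parameter error integrates into an unbounded (or at best non-vanishing) estimation error, which is the paper's point.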

  11. Robust Sliding Window Synchronizer Developed

    NASA Technical Reports Server (NTRS)

    Chun, Kue S.; Xiong, Fuqin; Pinchak, Stanley

    2004-01-01

    The development of an advanced robust timing synchronization scheme is crucial for the support of two NASA programs--Advanced Air Transportation Technologies and Aviation Safety. A mobile aeronautical channel is a dynamic channel where various adverse effects--such as Doppler shift, multipath fading, and shadowing due to precipitation, landscape, foliage, and buildings--cause the loss of symbol timing synchronization.

  12. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  13. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with other portfolios known in the literature. PMID:26468948
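
    A sketch in the spirit of minimum-pseudodistance estimation (not the paper's exact estimator): an iteratively reweighted mean whose Gaussian-type weights bound the influence of outlying returns, contrasted with the classical sample mean:

```python
import math
import statistics

def robust_mean(xs, alpha=0.5, iters=50):
    # Start from the median, scale by the median absolute deviation,
    # then iterate a mean with exponentially down-weighted residuals.
    mu = statistics.median(xs)
    scale = statistics.median(abs(x - mu) for x in xs) + 1e-12
    for _ in range(iters):
        w = [math.exp(-alpha * ((x - mu) / scale) ** 2) for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return mu

returns = [0.01, 0.02, 0.015, 0.012, 0.018, 0.011, -0.9]   # one crash outlier
print(statistics.fmean(returns), robust_mean(returns))
```

    The classical mean is dragged strongly negative by the single crash observation, while the reweighted estimate stays near the bulk of the returns; the paper's estimators achieve this kind of bounded influence (B-robustness) with proven asymptotic properties.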

  14. Robust Integrated Neurocontroller for Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Zein-Sabatto, S.; Marpaka, D.; Hwang, W.

    1996-01-01

    The goal of this research effort is to develop an integrated control software environment for the purpose of creating an intelligent neurocontrol system. The system will be capable of estimating states, identifying parameters, diagnosing conditions, planning control strategies, and producing intelligent control actions. The distinct features of such a control system are: adaptability and on-line learning capability. The proposed system will be flexible to allow structure adaptability to account for changes in the dynamic system such as: sensory failures and/or component degradations. The developed system should learn system uncertainties and changes, as they occur, while maintaining minimal control level on the dynamic system. The research activities set to achieve the research objective are summarized by the following general items: (1) Development of a system identifier or diagnostic system, (2) Development of a robust neurocontroller system, and (3) Integration of the above systems to create a Robust Integrated Control system (RIC-system). Two contrary approaches are investigated in this research: the classical (traditional) design approach, and the simultaneous design approach. However, in both approaches a neural network is the base for the development of the different functions of the system. The two resulting designs will be tested and simulation results will be compared to select the better implementation.

  15. Robust integrated neurocontroller for complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Zein-Sabbato, S.; Marpaka, D.; Hwang, W.

    1995-01-01

    The goal of this research effort is to develop an integrated control software environment for the purpose of creating an intelligent neurocontrol system. The system will be capable of estimating states, identifying parameters, diagnosing conditions, planning control strategies, and producing intelligent control actions. The distinct features of such a control system are adaptability and on-line learning capability. The proposed system will be flexible, allowing structural adaptation to account for changes in the dynamic system such as sensory failures and/or component degradations. The developed system should learn system uncertainties and changes, as they occur, while maintaining a minimal control level on the dynamic system. The research activities set to achieve this objective are summarized by the following general items: (1) development of a system identifier or diagnostic system; (2) development of a robust neurocontroller system; and (3) integration of the above systems to create a Robust Integrated Control (RIC) system. Two contrasting approaches are investigated in this research: the classical (traditional) design approach and the simultaneous design approach. In both approaches, however, a neural network is the basis for the development of the different functions of the system. The two resulting designs will be tested and simulation results compared to determine the better implementation.

  16. Performance analysis of robust road sign identification

    NASA Astrophysics Data System (ADS)

    Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.

    2013-12-01

    This study describes a performance analysis of a robust road sign identification system that incorporates two stages with different algorithms: HSV color filtering in the detection stage and PCA in the recognition stage. The proposed algorithms are able to detect the three standard sign colors, namely red, yellow and blue. The hypothesis of the study is that road sign images can be used to detect and identify signs even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensionality; sign images can be readily recognized and identified by the PCA method, as it has been used in many application areas. Experimental results show that HSV-based detection is robust, with minimum success rates of 88% for non-occluded and 77% for partially occluded images. Successful recognition rates using PCA are in the range of 94-98%, with all classes recognized successfully at occlusion levels between 5% and 10%.
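    The paper's exact thresholds are not given; as a sketch of the HSV color-filtering stage, here is a minimal hue/saturation/value mask in pure Python (the hue ranges and threshold values are illustrative, not the authors'):

    ```python
    import colorsys
    import numpy as np

    def hsv_mask(rgb_img, h_lo, h_hi, s_min=0.5, v_min=0.3):
        """Boolean mask of pixels whose hue falls in [h_lo, h_hi] (hue in [0, 1])."""
        flat = rgb_img.reshape(-1, 3) / 255.0
        hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
        h, s, v = hsv[:, 0], hsv[:, 1], hsv[:, 2]
        keep = (h >= h_lo) & (h <= h_hi) & (s >= s_min) & (v >= v_min)
        return keep.reshape(rgb_img.shape[:2])

    # tiny synthetic "image": one pure-red pixel and one pure-blue pixel
    img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
    red_mask = hsv_mask(img, 0.00, 0.05)   # red sits near hue 0
    blue_mask = hsv_mask(img, 0.55, 0.72)  # blue sits near hue 2/3
    ```

    Filtering on hue rather than raw RGB is what makes this kind of detector tolerant of lighting changes; a production detector would vectorize the conversion (e.g. with OpenCV) rather than loop per pixel.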

  17. Robust influence angle for clustering mixed data sets

    NASA Astrophysics Data System (ADS)

    Aziz, Nazrina

    2014-07-01

    Central to identifying clusters of observations that may be present in a data set is how close observations are to each other: two observations are close when their dissimilarity is small. Some traditional distance functions cannot capture the pattern of dissimilarity among observations, and a further demand is that the dissimilarity measure should be able to deal with a variety of data types. This article proposes a new dissimilarity measure, the Robust Influence Angle (RIA), based on the eigenstructure of the covariance matrix and robust principal component scores. The proposed measure is able to identify clusters of observations and can handle data sets with mixed variables.
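    The abstract does not spell out the RIA construction; one plausible ingredient, an angle-based dissimilarity computed on principal component scores derived from the covariance eigenstructure, can be sketched as follows (classical rather than robust PCA is used here for brevity, and the data are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(100, 4))

    # eigenstructure of the covariance matrix -> principal component scores
    cov = np.cov(data, rowvar=False)
    _, vecs = np.linalg.eigh(cov)                       # eigenvectors, ascending order
    scores = (data - data.mean(axis=0)) @ vecs[:, -2:]  # top-2 PC scores

    def angle(a, b):
        """Angle between two score vectors: 0 = same direction, pi = opposite."""
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    d01 = angle(scores[0], scores[1])  # dissimilarity between observations 0 and 1
    ```

    An angle-based measure depends on the direction of observations in score space rather than their magnitude, which is one way a dissimilarity can capture pattern rather than size.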

  18. System identification for robust control design

    SciTech Connect

    Dohner, J.L.

    1995-04-01

    System identification for the purpose of robust control design involves estimating a nominal model of a physical system and the uncertainty bounds of that nominal model via the use of experimentally measured input/output data. Although many algorithms have been developed to identify nominal models, little effort has been directed towards identifying uncertainty bounds. Therefore, in this document, a discussion of both nominal model identification and bounded output multiplicative uncertainty identification will be presented. This document is divided into several sections. Background information relevant to system identification and control design will be presented. A derivation of eigensystem realization type algorithms will be presented. An algorithm will be developed for calculating the maximum singular value of output multiplicative uncertainty from measured data. An application will be given involving the identification of a complex system with aliased dynamics, feedback control, and exogenous noise disturbances. And, finally, a short discussion of results will be presented.
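    The document's algorithm for the maximum singular value of the output multiplicative uncertainty is not reproduced here; for a single-input single-output system the idea reduces to bounding |Δ(jω)| where G_meas = (1 + Δ)G_nom at each frequency. A toy numeric sketch (the transfer function and perturbations are invented for illustration):

    ```python
    import numpy as np

    w = np.linspace(0.1, 10, 50)                 # frequency grid (rad/s)
    G_nom = 1.0 / (1j * w + 1.0)                 # nominal model: first-order lag
    # three "measured" responses with invented multiplicative perturbations
    measured = [G_nom * (1 + 0.1 * np.sin(k * w)) for k in (1, 2, 3)]

    # output multiplicative uncertainty: G_meas = (1 + Delta) * G_nom
    deltas = np.array([np.abs(G / G_nom - 1.0) for G in measured])
    bound = deltas.max(axis=0)                   # per-frequency bound on |Delta|
    ```

    A robust controller is then designed against the worst case, i.e. any plant inside the envelope defined by this frequency-dependent bound; in the multivariable case the magnitude is replaced by the maximum singular value.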

  19. Robustness measure of hybrid intra-particle entanglement, discord, and classical correlation with initial Werner state

    NASA Astrophysics Data System (ADS)

    Saha, P.; Sarkar, D.

    2016-02-01

    Quantum information processing is largely dependent on the robustness of non-classical correlations, such as entanglement and quantum discord. However, all the realistic quantum systems are thermodynamically open and lose their coherence with time through environmental interaction. The time evolution of quantum entanglement, discord, and the respective classical correlation for a single, spin-1/2 particle under spin and energy degrees of freedom, with an initial Werner state, has been investigated in the present study. The present intra-particle system is considered to be easier to produce than its inter-particle counterpart. Experimentally, this type of system may be realized in the well-known Penning trap. The most stable correlation was identified through maximization of a system-specific global objective function. Quantum discord was found to be the most stable, followed by the classical correlation. Moreover, all the correlations were observed to attain highest robustness under initial Bell state, with minimum possible dephasing and decoherence parameters.

  20. Intracortical remodeling parameters are associated with measures of bone robustness

    PubMed Central

    Goldman, Haviva M.; Hampson, Naomi A.; Guth, J. Jared; Lin, David; Jepsen, Karl J.

    2014-01-01

    Prior work identified a novel association between bone robustness and porosity, which may be part of a broader interaction whereby the skeletal system compensates for the natural variation in robustness (bone width relative to length) by modulating tissue-level mechanical properties to increase stiffness of slender bones and to reduce mass of robust bones. To further understand this association, we tested the hypothesis that the relationship between robustness and porosity is mediated through intracortical, BMU-based (basic multicellular unit) remodeling. We quantified cortical porosity, mineralization, and histomorphometry at two sites (38 and 66% of the length) in human cadaveric tibiae. We found significant correlations between robustness and several histomorphometric variables (e.g., % secondary tissue [R2 = 0.68, p < 0.004], total osteon area [R2 = 0.42, p < 0.04]) at the 66% site. Although these associations were weaker at the 38% site, significant correlations between histological variables were identified between the two sites, indicating that both respond to the same global effects and demonstrate a similar character at the whole bone level. Thus, robust bones tended to have larger and more numerous osteons with less infilling, resulting in bigger pores and more secondary bone area. These results suggest that local regulation of BMU-based remodeling may be further modulated by a global signal associated with robustness, such that remodeling is suppressed in slender bones but not in robust bones. Elucidating this mechanism further is crucial for better understanding the complex adaptive nature of the skeleton, and how inter-individual variation in remodeling differentially impacts skeletal aging and an individual's potential response to prophylactic treatments. PMID:24962664

  1. Plasmonic nanogels with robustly tunable optical properties

    NASA Astrophysics Data System (ADS)

    Cong, Tao; Wani, Satvik N.; Zhou, Georo; Baszczuk, Elia; Sureshkumar, Radhakrishna

    2011-10-01

    Low-viscosity fluids with tunable optical properties can be processed to manufacture thin films and interfaces for molecular detection, light trapping in photovoltaics, and reconfigurable optofluidic devices. In this work, self-assembly in wormlike micelle solutions is used to uniformly distribute various metallic nanoparticles to produce stable suspensions with localized, multiple-wavelength or broad-band optical properties. Their spectral response can be robustly modified by varying the species, concentration, size and/or shape of the nanoparticles. Structure, rheology and optical properties of these plasmonic nanogels, as well as their potential applications to efficient photovoltaics design, are discussed.

  2. Robust technique allowing manufacturing superoleophobic surfaces

    NASA Astrophysics Data System (ADS)

    Bormashenko, Edward; Grynyov, Roman; Chaniel, Gilad; Taitelbaum, Haim; Bormashenko, Yelena

    2013-04-01

    We report a robust technique for manufacturing superhydrophobic and oleophobic (omniphobic) surfaces from industrial-grade low-density polyethylene. The reported process includes two stages: (1) hot embossing of polyethylene with micro-scaled steel gauzes; (2) treatment of the embossed surfaces with cold radiofrequency plasma of tetrafluoromethane. The resulting surfaces demonstrate not only pronounced superhydrophobicity but also superoleophobicity, which results from the hierarchical nano-scaled topography of the fluorinated polyethylene surface. The observed superoleophobicity is strengthened by hydrophobic recovery. The stability of the Cassie wetting regime was also studied.

  3. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  4. Identifying Training Needs to Improve Indigenous Community Representatives Input into Environmental Resource Management Consultative Processes: A Case Study of the Bundjalung Nation

    ERIC Educational Resources Information Center

    Lloyd, David; Norrie, Fiona

    2004-01-01

    Despite increased engagement of Indigenous representatives as participants on consultative panels charged with processes of natural resource management, concerns have been raised by both Indigenous representatives and management agencies regarding the ability of Indigenous people to have quality input into the decisions these processes produce. In…

  5. Robust multi-objective calibration strategies - possibilities for improving flood forecasting

    NASA Astrophysics Data System (ADS)

    Krauße, T.; Cullmann, J.; Saile, P.; Schmitz, G. H.

    2012-10-01

    Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods became popular in order to identify parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated from several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that understand the problem of model calibration just as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to a very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify robust model

  6. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
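    Fiedler's algebraic connectivity is simply the second-smallest eigenvalue of the graph Laplacian L = D - A. A short numeric check of the lower-bound relationship the abstract discusses, on two small graphs chosen for illustration:

    ```python
    import numpy as np

    def algebraic_connectivity(adj):
        """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
        lap = np.diag(adj.sum(axis=1)) - adj
        return np.sort(np.linalg.eigvalsh(lap))[1]

    # 4-node cycle: node-connectivity 2, Fiedler value 2
    cycle = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
    # 4-node path: node-connectivity 1, Fiedler value 2 - sqrt(2) ~ 0.586
    path = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    lam_cycle = algebraic_connectivity(cycle)
    lam_path = algebraic_connectivity(path)
    # Fiedler's bound for non-complete graphs: algebraic connectivity <= node-connectivity
    ```

    The path graph shows how conservative the bound can be (0.586 versus a node-connectivity of 1), which is the kind of gap the paper examines on random lattice and mesh graphs.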

  7. Robust dynamic mitigation of instabilities

    SciTech Connect

    Kawata, S.; Karino, T.

    2015-04-15

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of the dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. Generally, instability would emerge from the perturbation of the physical quantity. Normally, the perturbation phase is unknown so that the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: If the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.

  8. Key molecular processes of the diapause to post-diapause quiescence transition in the alfalfa leafcutting bee Megachile rotundata identified by comparative transcriptome analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Insect diapause (dormancy) synchronizes an insect’s life cycle to seasonal changes in the abiotic and biotic resources required for development and reproduction. Transcription analysis of Megachile rotundata diapause termination identified 399 post-diapause upregulated and 144 post-diapause down-reg...

  9. Application of a Three-Dimensional Microstructure Evolution Model to Identify Key Process Settings for the Production of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Bos, C.; Mecozzi, M. G.; Hanlon, D. N.; Aarnts, M. P.; Sietsma, J.

    2011-12-01

    During the production of dual-phase (DP) steels, many transformation phenomena occur, each of which may significantly influence the final properties of the product. In the continuous annealing line, recovery, recrystallization, carbide dissolution, austenite formation, ferrite formation, and martensite formation may all occur. These processes can strongly influence each other. Furthermore, these processes may overlap. This metallurgical complexity makes both establishing the root cause of property variations and the design of new chemistries experimentally expensive and time consuming. With the recent introduction of a computationally efficient three-dimensional (3-D) microstructure evolution model that describes all transformation processes that occur during the processing of DP steels, a tool has become available to study in detail the effect of individual process parameters on the final microstructure. The model has been applied to study the transformation processes on the run-out table of the hot strip mill and in the continuous annealing line. In this study, emphasis has been placed on the role the hot-rolled microstructure plays in the final DP microstructure. Therefore, the model was extended to include the influence of manganese segregation on band formation. Details of the model and findings on the relation between the final microstructure and process settings are presented.

  10. Robust computational reconstitution – a new method for the comparative analysis of gene expression in tissues and isolated cell fractions

    PubMed Central

    Hoffmann, Martin; Pohlers, Dirk; Koczan, Dirk; Thiesen, Hans-Jürgen; Wölfl, Stefan; Kinne, Raimund W

    2006-01-01

    Background Biological tissues consist of various cell types that differentially contribute to physiological and pathophysiological processes. Determining and analyzing cell type-specific gene expression under diverse conditions is therefore a central aim of biomedical research. The present study compares gene expression profiles in whole tissues and isolated cell fractions purified from these tissues in patients with rheumatoid arthritis and osteoarthritis. Results The expression profiles of the whole tissues were compared to computationally reconstituted expression profiles that combine the expression profiles of the isolated cell fractions (macrophages, fibroblasts, and non-adherent cells) according to their relative mRNA proportions in the tissue. The mRNA proportions were determined by trimmed robust regression using only the most robustly-expressed genes (1/3 to 1/2 of all measured genes), i.e. those showing the most similar expression in tissue and isolated cell fractions. The relative mRNA proportions were determined using several different chip evaluation methods, among which the MAS 5.0 signal algorithm appeared to be most robust. The computed mRNA proportions agreed well with the cell proportions determined by immunohistochemistry except for a minor number of outliers. Genes that were either regulated (i.e. differentially-expressed in tissue and isolated cell fractions) or robustly-expressed in all patients were identified using different test statistics. Conclusion Robust Computational Reconstitution uses an intermediate number of robustly-expressed genes to estimate the relative mRNA proportions. This avoids both the exclusive dependence on the robust expression of individual, highly cell type-specific marker genes and the bias towards an equal distribution upon inclusion of all genes for computation. PMID:16889662
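    The study's MAS 5.0-based pipeline cannot be reproduced here, but the core idea of trimmed robust regression for estimating relative mRNA proportions can be sketched on synthetic data (the fraction profiles, true proportions, and trimming fraction are all invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes = 500
    # hypothetical expression profiles of two isolated cell fractions
    fractions = rng.lognormal(size=(n_genes, 2))
    true_props = np.array([0.7, 0.3])
    tissue = fractions @ true_props + rng.normal(0, 0.05, n_genes)

    # trimmed regression: fit, drop the 20% of genes with largest residuals, refit
    keep = np.arange(n_genes)
    for _ in range(3):
        props, *_ = np.linalg.lstsq(fractions[keep], tissue[keep], rcond=None)
        resid = np.abs(fractions[keep] @ props - tissue[keep])
        keep = keep[np.argsort(resid)[: int(0.8 * len(keep))]]
    props = props / props.sum()  # relative mRNA proportions
    ```

    Iteratively discarding the worst-fitting genes mimics restricting the fit to the most robustly-expressed genes, so that regulated (differentially-expressed) genes do not distort the estimated mixing proportions.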

  11. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61 and the military's Black Hawk UH-60A, also produced by Sikorsky. The S-61 was chosen because its model details are readily available and because it can be limited to pitch and roll motion, reducing the system to two degrees of freedom, the minimum required to demonstrate the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk was included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  12. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques. PMID:22752130

  13. Identifying Executable Plans

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Jonsson, Ari K.; Frank, Jeremy D.; McGann, Conor

    2003-01-01

    Generating plans for execution imposes a different set of requirements on the planning process than those imposed by planning alone. In highly unpredictable execution environments, a fully-grounded plan may frequently become inconsistent when the world fails to behave as expected. Intelligent execution permits making decisions when the most up-to-date information is available, ensuring fewer failures. Planning should acknowledge the capabilities of the execution system, both to ensure robust execution in the face of uncertainty and to relieve the planner of the burden of making premature commitments. We present Plan Identification Functions (PIFs), which formalize what it means for a plan to be executable, and are used in conjunction with a complete model of system behavior to halt the planning process when an executable plan is found. We describe the implementation of plan identification functions for a temporal, constraint-based planner. This particular implementation allows the description of many different plan identification functions. Depending on the characteristics of the execution environment, the best plan to hand to the execution system will contain more or less commitment and information.

  14. Three-dimensional human facial morphologies as robust aging markers.

    PubMed

    Chen, Weiyang; Qian, Wei; Wu, Gang; Chen, Weizhong; Xian, Bo; Chen, Xingwei; Cao, Yaqiang; Green, Christopher D; Zhao, Fanghong; Tang, Kun; Han, Jing-Dong J

    2015-05-01

    Aging is associated with many complex diseases. Reliable prediction of the aging process is important for assessing the risks of aging-associated diseases. However, despite intense research, so far there is no reliable aging marker. Here we addressed this problem by examining whether human 3D facial imaging features could be used as reliable aging markers. We collected > 300 3D human facial images and blood profiles well-distributed across ages of 17 to 77 years. By analyzing the morphological profiles, we generated the first comprehensive map of the aging human facial phenome. We identified quantitative facial features, such as eye slopes, highly associated with age. We constructed a robust age predictor and found that on average people of the same chronological age differ by ± 6 years in facial age, with the deviations increasing after age 40. Using this predictor, we identified slow and fast agers that are significantly supported by levels of health indicators. Despite a close relationship between facial morphological features and health indicators in the blood, facial features are more reliable aging biomarkers than blood profiles and can better reflect the general health status than chronological age. PMID:25828530

  15. A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.

    PubMed

    Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E

    2016-06-21

    We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements. PMID:27219285
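    As a minimal illustration of the probabilistic comparison step (not the authors' full scenario-aware method; the impact distributions and resolvability threshold below are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    # impacts (kg CO2e per functional unit) of two alternatives, with uncertainty
    impact_a = rng.lognormal(mean=np.log(10.0), sigma=0.10, size=n)
    impact_b = rng.lognormal(mean=np.log(11.0), sigma=0.10, size=n)

    p_a_lower = (impact_a < impact_b).mean()           # P(A beats B) under uncertainty
    resolvable = p_a_lower > 0.90 or p_a_lower < 0.10  # decision threshold (illustrative)
    ```

    Here alternative A wins in roughly three-quarters of the draws, so the comparison is not resolvable at the chosen threshold; in the paper's methodology such cases trigger a sensitivity analysis to find the parameters whose refinement would resolve the comparison.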

  16. Three-dimensional human facial morphologies as robust aging markers

    PubMed Central

    Chen, Weiyang; Qian, Wei; Wu, Gang; Chen, Weizhong; Xian, Bo; Chen, Xingwei; Cao, Yaqiang; Green, Christopher D; Zhao, Fanghong; Tang, Kun; Han, Jing-Dong J

    2015-01-01

    Aging is associated with many complex diseases. Reliable prediction of the aging process is important for assessing the risks of aging-associated diseases. However, despite intense research, so far there is no reliable aging marker. Here we addressed this problem by examining whether human 3D facial imaging features could be used as reliable aging markers. We collected > 300 3D human facial images and blood profiles well-distributed across ages of 17 to 77 years. By analyzing the morphological profiles, we generated the first comprehensive map of the aging human facial phenome. We identified quantitative facial features, such as eye slopes, highly associated with age. We constructed a robust age predictor and found that on average people of the same chronological age differ by ± 6 years in facial age, with the deviations increasing after age 40. Using this predictor, we identified slow and fast agers that are significantly supported by levels of health indicators. Despite a close relationship between facial morphological features and health indicators in the blood, facial features are more reliable aging biomarkers than blood profiles and can better reflect the general health status than chronological age. PMID:25828530

  17. Robust Modeling of Greenhouse Gas (GHG) Fluxes from Coastal Wetland Ecosystems

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, O. I.; Ishtiaq, K. S.

    2014-12-01

    Many critical wetland biogeochemical processes are still largely unknown or at best poorly understood. Yet available models for predicting wetland greenhouse gas (GHG) fluxes (e.g., CO2, CH4, and N2O) are generally mechanistic in nature. This knowledge gap leads to inappropriate process descriptions or over-parameterizations in existing mechanistic models, which often fail to provide accurate and robust predictions across time and space. We developed a systematic data-analytics and informatics method to identify the dominant controls and quantify the relative linkages of wetland GHG fluxes in relation to various hydro-climatic, sea level, biogeochemical and ecological drivers. The method was applied to data collected from 2012-14 through an extensive field campaign at different blue carbon sites of Waquoit Bay, MA. Multivariate pattern recognition techniques of principal component and factor analyses were employed to identify the dominant controls of wetland GHG fluxes, classifying and grouping process variables based on their similarity and interrelation patterns. Power-law based partial least squares regression models were developed to quantify the relative linkages of major GHGs with different process drivers and stressors, as well as to achieve site-specific predictions of GHG fluxes. Wetland biogeochemical similitude and scaling laws were also investigated to unravel emergent patterns and organizing principles of wetland GHG fluxes. The research findings will guide the development of models, from parsimonious empirical formulations to appropriate mechanistic descriptions, for spatio-temporally robust predictions of GHG fluxes and carbon sequestration from coastal wetland ecosystems. The research is part of two current projects funded by the National Oceanic and Atmospheric Administration and the National Science Foundation, focusing on wetland data collection, knowledge formation, formulation of robust GHG prediction models, and development of ecological engineering tools.

  18. Robust SAR ATR by hedging against uncertainty

    NASA Astrophysics Data System (ADS)

    Hoffman, John R.; Mahler, Ronald P. S.; Ravichandran, Ravi B.; Huff, Melvyn; Musick, Stanton

    2002-07-01

    For the past two years in this conference, we have described techniques for robust identification of motionless ground targets using single-frame Synthetic Aperture Radar (SAR) data. By robust identification, we mean the problem of determining target ID despite the existence of confounding, statistically uncharacterizable signature variations. Such variations can be caused by effects such as mud, dents, attachment of nonstandard equipment, nonstandard attachment of standard equipment, turret articulations, etc. When faced with such variations, optimal approaches can often behave badly, e.g., by misidentifying a target type with high confidence. A basic element of our approach has been to hedge against unknowable uncertainties in the sensor likelihood function by specifying a random error bar (random interval) for each value of the likelihood function corresponding to any given value of the input data. In this paper, we will summarize our recent results. This will include a description of the fuzzy maximum a posteriori (MAP) estimator. The fuzzy MAP estimate is essentially the set of conventional MAP estimates that are plausible, given the assumed uncertainty in the problem. Despite its name, the fuzzy MAP is derived rigorously from first probabilistic principles based on random interval theory.
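A stripped-down sketch of the idea (not the authors' derivation, which rests on random interval theory): if each likelihood value is only known to lie in an interval, the set-valued estimate can be taken as every target whose optimistic (upper-bound) posterior at least matches the best guaranteed (lower-bound) posterior. Target names, priors, and intervals below are invented.

```python
def interval_map(prior, lik_lo, lik_hi):
    """Set of plausible MAP target IDs when each likelihood value is
    only known to lie in the interval [lik_lo[t], lik_hi[t]]."""
    post_lo = {t: prior[t] * lik_lo[t] for t in prior}
    post_hi = {t: prior[t] * lik_hi[t] for t in prior}
    best_guaranteed = max(post_lo.values())
    # keep any target whose optimistic posterior still matches the
    # best pessimistic one; such targets cannot be ruled out
    return {t for t in prior if post_hi[t] >= best_guaranteed}

prior = {"T72": 1 / 3, "BMP": 1 / 3, "truck": 1 / 3}
lik_lo = {"T72": 0.5, "BMP": 0.4, "truck": 0.05}
lik_hi = {"T72": 0.9, "BMP": 0.8, "truck": 0.10}
plausible = interval_map(prior, lik_lo, lik_hi)
```

With wide intervals the estimate degrades gracefully into a larger candidate set instead of committing to a single confident, and possibly wrong, ID.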

  19. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their use grows, so does their number, and with it the challenge of identifying them correctly. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization, or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, completely changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); a probability calculation for hash collisions is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
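As a toy illustration of one robust bit-extraction idea (not the paper's methods, which are not specified here): hash the shape of the vertex-to-centroid distance distribution, which is unchanged by vertex reordering, translation, rotation, and uniform scaling, unlike a cryptographic hash of the raw file.

```python
def mesh_hash(vertices, bins=8):
    """Toy robust 3D-mesh hash: one bit per histogram bin of the
    normalized vertex-to-centroid distances."""
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    dists = [((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
             for x, y, z in vertices]
    dmax = max(dists) or 1.0
    counts = [0] * bins
    for d in dists:
        counts[min(bins - 1, int(d / dmax * bins))] += 1
    # bit is 1 where a bin holds more than a uniform share of vertices
    return "".join("1" if c > n / bins else "0" for c in counts)

cube = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
        (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
moved = [(x + 5, y - 2, z + 3) for x, y, z in cube]      # translated copy
line = [(i, 0, 0) for i in range(8)]                      # different shape
```

The translated and reordered copies hash identically, while a genuinely different geometry yields a different bit string; a real scheme would compare hashes by Hamming distance against FAR/FRR thresholds.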

  20. Designing Flood Management Systems for Joint Economic and Ecological Robustness

    NASA Astrophysics Data System (ADS)

    Spence, C. M.; Grantham, T.; Brown, C. M.; Poff, N. L.

    2015-12-01

    Freshwater ecosystems across the United States are threatened by hydrologic change caused by water management operations and non-stationary climate trends. Nonstationary hydrology also threatens flood management systems' performance. Ecosystem managers and flood risk managers need tools to design systems that achieve flood risk reduction objectives while sustaining ecosystem functions and services in an uncertain hydrologic future. Robust optimization is used in water resources engineering to guide system design under climate change uncertainty. Using principles introduced by Eco-Engineering Decision Scaling (EEDS), we extend robust optimization techniques to design flood management systems that meet both economic and ecological goals simultaneously across a broad range of future climate conditions. We use three alternative robustness indices to identify flood risk management solutions that preserve critical ecosystem functions in a case study from the Iowa River, where recent severe flooding has tested the limits of the existing flood management system. We seek design modifications to the system that both reduce expected cost of flood damage while increasing ecologically beneficial inundation of riparian floodplains across a wide range of plausible climate futures. The first robustness index measures robustness as the fraction of potential climate scenarios in which both engineering and ecological performance goals are met, implicitly weighting each climate scenario equally. The second index builds on the first by using climate projections to weight each climate scenario, prioritizing acceptable performance in climate scenarios most consistent with climate projections. The last index measures robustness as mean performance across all climate scenarios, but penalizes scenarios with worse performance than average, rewarding consistency. Results stemming from alternate robustness indices reflect implicit assumptions about attitudes toward risk and reveal the
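The three robustness indices can be sketched as follows (hypothetical scenario data and goal thresholds; a sketch of the index definitions as described, not the study's code):

```python
def fraction_acceptable(perf, econ_goal, eco_goal):
    """Index 1: fraction of climate scenarios meeting both the economic
    and the ecological goal, every scenario weighted equally."""
    hits = sum(1 for e, c in perf if e >= econ_goal and c >= eco_goal)
    return hits / len(perf)

def weighted_acceptable(perf, weights, econ_goal, eco_goal):
    """Index 2: same criterion, but scenarios weighted by their
    consistency with climate projections."""
    hit = sum(w for (e, c), w in zip(perf, weights)
              if e >= econ_goal and c >= eco_goal)
    return hit / sum(weights)

def mean_minus_downside(scores, k=1.0):
    """Index 3: mean performance, penalized for scenarios falling
    below the mean (rewards consistency)."""
    m = sum(scores) / len(scores)
    penalty = sum(m - s for s in scores if s < m) / len(scores)
    return m - k * penalty

# four hypothetical climate scenarios: (economic, ecological) scores
perf = [(1.0, 1.0), (0.5, 1.0), (1.0, 0.2), (0.9, 0.9)]
weights = [0.1, 0.4, 0.4, 0.1]  # projection-based plausibility weights
```

Here index 1 rates the design at 0.5 (two of four scenarios acceptable), while index 2 downgrades it because the acceptable scenarios carry little projection weight, which is exactly the kind of divergence that exposes the risk attitudes implicit in each index.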

  1. Robust bearing estimation for 3-component stations

    SciTech Connect

    CLAASSEN,JOHN P.

    2000-02-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  2. Robust Bearing Estimation for 3-Component Stations

    SciTech Connect

    Claassen, John P.

    1999-06-03

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular, the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The method was applied to a small but challenging set of events in a seismically active region. The method demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  3. From acoustic observatories to robust passive sonar

    NASA Astrophysics Data System (ADS)

    Baggeroer, Arthur B.

    2003-04-01

    The evolution of the DARPA Robust Passive Sonar (RPS) and the ONR Shallow Water Acoustic Testbed (SWAT) programs can be traced from the concept of an acoustic observatory posed by Munk in 1980, through several assessment and feasibility studies, to their current implementations. During this period, thinking on several key hypotheses matured. (i) Are noise fields directional enough to sustain high array gains? (ii) What are the tradeoffs among nonstationarity caused by ship motion, array configuration (geometry and the number of sensors), and ``snapshots'' needed for stable adaptive processing? (iii) What is the interaction between gains from vertical and horizontal apertures? (iv) How much signal gain degradation is acceptable? (v) What methods of post-processing can be done for normalization, tracking, and 3-D localization? This presentation will give a brief summary of the history of RPS and SWAT and pose the question of how well we can answer some of the hypotheses which motivated them.

  4. Robust and efficient overset grid assembly for partitioned unstructured meshes

    SciTech Connect

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning. Another challenge arises because of the large variation in the type of mesh-block overlap and the resulting large load imbalance on multiple processors. Desirable traits for the grid assembly method are efficiency (requiring only a small fraction of the solver time), robustness (correct identification of all point types), and full automation (no user input required other than the mesh system). Additionally, the method should be scalable, which is an important challenge due to the inherent load imbalance. This paper describes a fully-automated grid assembly method, which can use two different donor search algorithms. One is based on the use of auxiliary grids and Exact Inverse Maps (EIM), and the other is based on the use of Alternating Digital Trees (ADT). The EIM method is demonstrated to be more efficient than the ADT method, while retaining robustness. An adaptive load re-balance algorithm is also designed and implemented, which considerably improves the scalability of the method.

  5. Genome sequencing identifies two nearly unchanged strains of persistent Listeria monocytogenes isolated in two different fish processing plants sampled six years apart

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Listeria monocytogenes is a food-borne human pathogenic bacterium that can cause infections with a high mortality rate. It has a remarkable ability to persist in food processing facilities and here we report the complete genome sequences for two L. monocytogenes strains (N53-1 and La111) that were i...

  6. Understanding and Identifying the Child at Risk for Auditory Processing Disorders: A Case Method Approach in Examining the Interdisciplinary Role of the School Nurse

    ERIC Educational Resources Information Center

    Neville, Kathleen; Foley, Marie; Gertner, Alan

    2011-01-01

    Despite receiving increased professional and public awareness since the initial American Speech Language Hearing Association (ASHA) statement defining Auditory Processing Disorders (APDs) in 1993 and the subsequent ASHA statement (2005), many misconceptions remain regarding APDs in school-age children among health and academic professionals. While…

  7. Identify Skills and Proficiency Levels Necessary for Entry-Level Employment for All Vocational Programs Using Computers to Process Data. Final Report.

    ERIC Educational Resources Information Center

    Crowe, Jacquelyn

    This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study was comprised of seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…

  8. Human-mouse cystic fibrosis transmembrane conductance regulator (CFTR) chimeras identify regions that partially rescue CFTR-ΔF508 processing and alter its gating defect.

    PubMed

    Dong, Qian; Ostedgaard, Lynda S; Rogers, Christopher; Vermeer, Daniel W; Zhang, Yuping; Welsh, Michael J

    2012-01-17

    The ΔF508 mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene is the most common cause of cystic fibrosis. The mutation disrupts biosynthetic processing, reduces channel opening rate, and decreases protein lifetime. In contrast to human CFTR (hCFTR)-ΔF508, mouse CFTR-ΔF508 is partially processed to the cell surface, although it exhibits a functional defect similar to hCFTR-ΔF508. To explore ΔF508 abnormalities, we generated human-mouse chimeric channels. Substituting mouse nucleotide-binding domain-1 (mNBD1) into hCFTR partially rescued the ΔF508-induced maturation defect, and substituting mouse membrane-spanning domain-2 or its intracellular loops (ICLs) into hCFTR prevented further ΔF508-induced gating defects. The protective effect of the mouse ICLs was reverted by inserting mouse NBDs. Our results indicate that the ΔF508 mutation affects maturation and gating via distinct regions of the protein; maturation of CFTR-ΔF508 depends on NBD1, and the ΔF508-induced gating defect depends on the interaction between the membrane-spanning domain-2 ICLs and the NBDs. These appear to be distinct processes, because none of the chimeras repaired both defects. This distinction was exemplified by the I539T mutation, which improved CFTR-ΔF508 processing but worsened the gating defect. Our results, together with previous studies, suggest that many different NBD1 modifications improve CFTR-ΔF508 maturation and that the effect of modifications can be additive. Thus, it might be possible to enhance processing by targeting several different regions of the domain or by targeting a network of CFTR-associated proteins. Because no one modification corrected both maturation and gating, perhaps more than a single agent will be required to correct all CFTR-ΔF508 defects. PMID:22210114

  9. Human–mouse cystic fibrosis transmembrane conductance regulator (CFTR) chimeras identify regions that partially rescue CFTR-ΔF508 processing and alter its gating defect

    PubMed Central

    Dong, Qian; Ostedgaard, Lynda S.; Rogers, Christopher; Vermeer, Daniel W.; Zhang, Yuping; Welsh, Michael J.

    2012-01-01

    The ΔF508 mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene is the most common cause of cystic fibrosis. The mutation disrupts biosynthetic processing, reduces channel opening rate, and decreases protein lifetime. In contrast to human CFTR (hCFTR)-ΔF508, mouse CFTR-ΔF508 is partially processed to the cell surface, although it exhibits a functional defect similar to hCFTR-ΔF508. To explore ΔF508 abnormalities, we generated human–mouse chimeric channels. Substituting mouse nucleotide-binding domain-1 (mNBD1) into hCFTR partially rescued the ΔF508-induced maturation defect, and substituting mouse membrane-spanning domain-2 or its intracellular loops (ICLs) into hCFTR prevented further ΔF508-induced gating defects. The protective effect of the mouse ICLs was reverted by inserting mouse NBDs. Our results indicate that the ΔF508 mutation affects maturation and gating via distinct regions of the protein; maturation of CFTR-ΔF508 depends on NBD1, and the ΔF508-induced gating defect depends on the interaction between the membrane-spanning domain-2 ICLs and the NBDs. These appear to be distinct processes, because none of the chimeras repaired both defects. This distinction was exemplified by the I539T mutation, which improved CFTR-ΔF508 processing but worsened the gating defect. Our results, together with previous studies, suggest that many different NBD1 modifications improve CFTR-ΔF508 maturation and that the effect of modifications can be additive. Thus, it might be possible to enhance processing by targeting several different regions of the domain or by targeting a network of CFTR-associated proteins. Because no one modification corrected both maturation and gating, perhaps more than a single agent will be required to correct all CFTR-ΔF508 defects. PMID:22210114

  10. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and the software that embodies it. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  11. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  12. Determining the rp-process flow through 56Ni: resonances in 57Cu(p,γ)58Zn identified with GRETINA.

    PubMed

    Langer, C; Montes, F; Aprahamian, A; Bardayan, D W; Bazin, D; Brown, B A; Browne, J; Crawford, H; Cyburt, R H; Domingo-Pardo, C; Gade, A; George, S; Hosmer, P; Keek, L; Kontos, A; Lee, I-Y; Lemasson, A; Lunderberg, E; Maeda, Y; Matos, M; Meisel, Z; Noji, S; Nunes, F M; Nystrom, A; Perdikakis, G; Pereira, J; Quinn, S J; Recchia, F; Schatz, H; Scott, M; Siegl, K; Simon, A; Smith, M; Spyrou, A; Stevens, J; Stroberg, S R; Weisshaar, D; Wheeler, J; Wimmer, K; Zegers, R G T

    2014-07-18

    An approach is presented to experimentally constrain previously unreachable (p,γ) reaction rates on nuclei far from stability in the astrophysical rp process. Energies of all critical resonances in the 57Cu(p,γ)58Zn reaction are deduced by populating states in 58Zn with a (d,n) reaction in inverse kinematics at 75 MeV/u, and detecting γ-ray-recoil coincidences with the state-of-the-art γ-ray tracking array GRETINA and the S800 spectrograph at the National Superconducting Cyclotron Laboratory. The results reduce the uncertainty in the 57Cu(p,γ) reaction rate by several orders of magnitude. The effective lifetime of 56Ni, an important waiting point in the rp process in x-ray bursts, can now be determined entirely from experimentally constrained reaction rates. PMID:25083636

  13. Linking sediment connectivity to remotely sensed, reach-scale morphology identifies correlations between network-scale sediment regimes and local river forms and processes

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Castelletti, A.

    2015-12-01

    Connectivity describes the transport between sediment sources and sinks in fluvial networks, defining source-sink relations in the domains of sediment flux, delivery times, and supplied grain sizes. Connectivity encompasses both sediment deliveries to individual reaches and sediment transport regimes on the network scale, and is a central driver behind fluvial biotic and abiotic processes and related ecosystem services. Yet river basin management is missing quantitative tools for studying connectivity in larger fluvial networks. With the CASCADE (Catchment Sediment Connectivity and Delivery) model we recently introduced a framework that quantifies sediment deliveries from each sediment source to all the connected sinks as individual cascading processes. This allows quantifying all domains of sediment connectivity at the reach scale as well as analyzing the resulting network-scale sediment regimes. CASCADE is applicable even to very large and poorly monitored river networks. We implement CASCADE for a large river network (7500 km) in SE Asia and quantify all domains of connectivity for all reaches in the network. We derive relevant river morphological features for a subset of reaches in the network from high-resolution satellite imagery and find significant links between observed forms and the sediment connectivity information derived from CASCADE. CASCADE opens up novel opportunities to clarify the link between network-scale sediment regimes and local morphologic processes and forms. This is of concrete interest for river basin management because CASCADE makes it possible to assess the impacts of anthropogenic disturbance on river sediment regimes and to anticipate resulting changes in local fluvial processes and related ecosystem services.

  14. Robust Hidden Markov Models for Geophysical Data Analysis

    NASA Astrophysics Data System (ADS)

    Granat, R. A.

    2002-12-01

    We employed robust hidden Markov models (HMMs) to perform statistical analysis of seismic events and crustal deformation. These models allowed us to classify different kinds of events or modes of deformation, and furthermore gave us a statistical basis for understanding relationships between different classes. A hidden Markov model is a statistical model for ordered data (typically in time). The observed data is assumed to have been generated by an unobservable statistical process of a particular form. This process is such that each observation is coincident with the system being in a particular discrete state. Furthermore, the next state is dependent on the current state; in other words, it is a first order Markov process. The model is completely described by a set of model parameters: the initial state probabilities, the first order Markov chain state-to-state transition probabilities, and the probabilities of observable outputs associated with each state. Application of the model to data involves optimizing these model parameters with respect to some function of the observations, typically the likelihood of the observations given the model. Our work focused on the fact that this objective function typically has a number of local maxima that is exponential in the model size (the number of states). This means that not only is it very difficult to discover the global maximum, but also that results can vary widely between applications of the model. For some domains, such as speech processing, sufficient a priori information about the system is available such that this problem can be avoided. However, for general scientific analysis, such a priori information is often not available, especially in cases where the HMM is being used as an exploratory tool for scientific understanding. Such was the case for the geophysical data sets used in this work. 
Our approach involves analytical location of sub-optimal local maxima; once the locations of these maxima have been found
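To make the local-maxima issue concrete, the sketch below evaluates the HMM objective, the likelihood of an observation sequence given the model, with the standard forward algorithm; training maximizes this quantity over the parameters, and because the surface has exponentially many local maxima in the number of states, different initializations can converge to different values. This is a generic textbook routine, not the authors' code.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) by the forward algorithm.
    pi: initial state probs, A[r][s]: transitions, B[s][o]: emissions."""
    S = range(len(pi))
    alpha = [pi[s] * B[s][obs[0]] for s in S]
    for o in obs[1:]:
        # propagate one step through the Markov chain, then emit o
        alpha = [sum(alpha[r] * A[r][s] for r in S) * B[s][o] for s in S]
    return sum(alpha)

# two-state toy model: each state persists and emits its own symbol
pi = [0.5, 0.5]
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.9, 0.1], [0.2, 0.8]]
lik = forward_likelihood([0, 0], pi, A, B)
```

Model selection among candidate local maxima then amounts to comparing this likelihood (or a penalized variant) across the fitted parameter sets.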

  15. On identified predictive control

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1993-01-01

    Self-tuning control algorithms are potential successors to manually tuned PID controllers traditionally used in process control applications. A very attractive design method for self-tuning controllers, which has been developed over recent years, is the long-range predictive control (LRPC). The success of LRPC is due to its effectiveness with plants of unknown order and dead-time which may be simultaneously nonminimum phase and unstable or have multiple lightly damped poles (as in the case of flexible structures or flexible robot arms). LRPC is a receding horizon strategy and can be, in general terms, summarized as follows. Using assumed long-range (or multi-step) cost function the optimal control law is found in terms of unknown parameters of the predictor model of the process, current input-output sequence, and future reference signal sequence. The common approach is to assume that the input-output process model is known or separately identified and then to find the parameters of the predictor model. Once these are known, the optimal control law determines control signal at the current time t which is applied at the process input and the whole procedure is repeated at the next time instant. Most of the recent research in this field is apparently centered around the LRPC formulation developed by Clarke et al., known as generalized predictive control (GPC). GPC uses ARIMAX/CARIMA model of the process in its input-output formulation. In this paper, the GPC formulation is used but the process predictor model is derived from the state space formulation of the ARIMAX model and is directly identified over the receding horizon, i.e., using current input-output sequence. The underlying technique in the design of identified predictive control (IPC) algorithm is the identification algorithm of observer/Kalman filter Markov parameters developed by Juang et al. at NASA Langley Research Center and successfully applied to identification of flexible structures.

  16. Single-sweep spectral analysis of contact heat evoked potentials: a novel approach to identify altered cortical processing after morphine treatment

    PubMed Central

    Hansen, Tine M; Graversen, Carina; Frøkjær, Jens B; Olesen, Anne E; Valeriani, Massimiliano; Drewes, Asbjørn M

    2015-01-01

    Aims The cortical response to nociceptive thermal stimuli recorded as contact heat evoked potentials (CHEPs) may be altered by morphine. However, previous studies have averaged CHEPs over multiple stimuli, which are confounded by jitter between sweeps. Thus, the aim was to assess single-sweep characteristics to identify alterations induced by morphine. Methods In a crossover study 15 single-sweep CHEPs were analyzed from 62 electroencephalography electrodes in 26 healthy volunteers before and after administration of morphine or placebo. Each sweep was decomposed by a continuous wavelet transform to obtain normalized spectral indices in the delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–32 Hz) and gamma (32–80 Hz) bands. The average distribution over all sweeps and channels was calculated for the four recordings for each volunteer, and the two recordings before treatments were assessed for reproducibility. Baseline corrected spectral indices after morphine and placebo treatments were compared to identify alterations induced by morphine. Results Reproducibility between baseline CHEPs was demonstrated. As compared with placebo, morphine decreased the spectral indices in the delta and theta bands by 13% (P = 0.04) and 9% (P = 0.007), while the beta and gamma bands were increased by 10% (P = 0.006) and 24% (P = 0.04). Conclusion The decreases in the delta and theta band are suggested to represent a decrease in the pain specific morphology of the CHEPs, which indicates a diminished pain response after morphine administration. Hence, assessment of spectral indices in single-sweep CHEPs can be used to study cortical mechanisms induced by morphine treatment. PMID:25556985
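A minimal sketch of the band-index computation follows; for simplicity it uses a plain DFT rather than the continuous wavelet transform of the study, and the band edges are those quoted in the abstract. The synthetic sweep is invented for illustration.

```python
import cmath
import math

def band_indices(sweep, fs, bands):
    """Normalized spectral power per frequency band for one sweep."""
    n = len(sweep)
    # one-sided DFT power spectrum (O(n^2); fine for short sweeps)
    spec = [abs(sum(sweep[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2 + 1)]
    freqs = [k * fs / n for k in range(n // 2 + 1)]
    power = {name: sum(p for f, p in zip(freqs, spec) if lo <= f < hi)
             for name, (lo, hi) in bands.items()}
    total = sum(power.values()) or 1.0
    return {name: p / total for name, p in power.items()}

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 32), "gamma": (32, 80)}
# synthetic 1-second sweep: a pure 6 Hz component sampled at 64 Hz
sweep = [math.sin(2 * math.pi * 6 * t / 64) for t in range(64)]
idx = band_indices(sweep, 64, bands)
```

For this synthetic sweep essentially all normalized power falls in the theta band; morphine-induced shifts between bands would appear as redistributions of these indices across sweeps.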

  17. Robust excitons inhabit soft supramolecular nanotubes

    PubMed Central

    Eisele, Dörthe M.; Arias, Dylan H.; Fu, Xiaofeng; Bloemsma, Erik A.; Steiner, Colby P.; Jensen, Russell A.; Rebentrost, Patrick; Eisele, Holger; Tokmakoff, Andrei; Lloyd, Seth; Nelson, Keith A.; Nicastro, Daniela; Knoester, Jasper; Bawendi, Moungi G.

    2014-01-01

    Nature's highly efficient light-harvesting antennae, such as those found in green sulfur bacteria, consist of supramolecular building blocks that self-assemble into a hierarchy of close-packed structures. In an effort to mimic the fundamental processes that govern nature’s efficient systems, it is important to elucidate the role of each level of hierarchy: from molecule, to supramolecular building block, to close-packed building blocks. Here, we study the impact of hierarchical structure. We present a model system that mirrors nature’s complexity: cylinders self-assembled from cyanine-dye molecules. Our work reveals that even though close-packing may alter the cylinders’ soft mesoscopic structure, robust delocalized excitons are retained: Internal order and strong excitation-transfer interactions—prerequisites for efficient energy transport—are both maintained. Our results suggest that the cylindrical geometry strongly favors robust excitons; it presents a rational design that is potentially key to nature’s high efficiency, allowing construction of efficient light-harvesting devices even from soft, supramolecular materials. PMID:25092336

  18. Robust excitons inhabit soft supramolecular nanotubes.

    PubMed

    Eisele, Dörthe M; Arias, Dylan H; Fu, Xiaofeng; Bloemsma, Erik A; Steiner, Colby P; Jensen, Russell A; Rebentrost, Patrick; Eisele, Holger; Tokmakoff, Andrei; Lloyd, Seth; Nelson, Keith A; Nicastro, Daniela; Knoester, Jasper; Bawendi, Moungi G

    2014-08-19

    Nature's highly efficient light-harvesting antennae, such as those found in green sulfur bacteria, consist of supramolecular building blocks that self-assemble into a hierarchy of close-packed structures. In an effort to mimic the fundamental processes that govern nature's efficient systems, it is important to elucidate the role of each level of hierarchy: from molecule, to supramolecular building block, to close-packed building blocks. Here, we study the impact of hierarchical structure. We present a model system that mirrors nature's complexity: cylinders self-assembled from cyanine-dye molecules. Our work reveals that even though close-packing may alter the cylinders' soft mesoscopic structure, robust delocalized excitons are retained: Internal order and strong excitation-transfer interactions, prerequisites for efficient energy transport, are both maintained. Our results suggest that the cylindrical geometry strongly favors robust excitons; it presents a rational design that is potentially key to nature's high efficiency, allowing construction of efficient light-harvesting devices even from soft, supramolecular materials. PMID:25092336

  19. Robust excitons inhabit soft supramolecular nanotubes

    NASA Astrophysics Data System (ADS)

    Eisele, Dörthe M.; Arias, Dylan H.; Fu, Xiaofeng; Bloemsma, Erik A.; Steiner, Colby P.; Jensen, Russell A.; Rebentrost, Patrick; Eisele, Holger; Tokmakoff, Andrei; Lloyd, Seth; Nelson, Keith A.; Nicastro, Daniela; Knoester, Jasper; Bawendi, Moungi G.

    2014-08-01

    Nature's highly efficient light-harvesting antennae, such as those found in green sulfur bacteria, consist of supramolecular building blocks that self-assemble into a hierarchy of close-packed structures. In an effort to mimic the fundamental processes that govern nature's efficient systems, it is important to elucidate the role of each level of hierarchy: from molecule, to supramolecular building block, to close-packed building blocks. Here, we study the impact of hierarchical structure. We present a model system that mirrors nature's complexity: cylinders self-assembled from cyanine-dye molecules. Our work reveals that even though close-packing may alter the cylinders' soft mesoscopic structure, robust delocalized excitons are retained: Internal order and strong excitation-transfer interactions-prerequisites for efficient energy transport-are both maintained. Our results suggest that the cylindrical geometry strongly favors robust excitons; it presents a rational design that is potentially key to nature's high efficiency, allowing construction of efficient light-harvesting devices even from soft, supramolecular materials.

  20. Robust Inflation from fibrous strings

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cicoli, M.; de Alwis, S.; Quevedo, F.

    2016-05-01

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models, including the key (though often ignored) issue of modulus stabilisation, to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index ns of the form r ∝ (ns − 1)^2, where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.
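A minimal numerical sketch of the relation quoted above, r ∝ (ns − 1)^2. The proportionality constant C is model-dependent (the abstract ties it to the origin of the potential and the topology of the internal space), so it is left as a free, purely illustrative parameter.

```python
# Sketch of the fibre-inflation consistency relation r = C * (n_s - 1)^2.
# The constant C below is illustrative, not taken from the paper.

def tensor_to_scalar(ns, C):
    """Tensor-to-scalar ratio implied by r = C * (ns - 1)^2."""
    return C * (ns - 1.0) ** 2

# A Planck-like spectral index ns ~ 0.965 with an illustrative C = 6
# gives r below the r <~ 0.01 bound quoted in the abstract:
print(tensor_to_scalar(0.965, 6.0) < 0.01)   # True
```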

  1. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q denotes the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  2. TARGET Researchers Identify Mutations in SIX1/2 and microRNA Processing Genes in Favorable Histology Wilms Tumor | Office of Cancer Genomics

    Cancer.gov

    TARGET researchers molecularly characterized favorable histology Wilms tumor (FHWT), a pediatric renal cancer. Comprehensive genome and transcript analyses revealed single-nucleotide substitution/deletion mutations in microRNA processing genes (15% of FHWT patients) and Sine Oculis Homeobox Homolog 1/2 (SIX1/2) genes (7% of FHWT patients). SIX1/2 genes play a critical role in renal development and were not previously associated with FHWT, thus presenting a novel role for SIX1/2 pathway aberrations in this disease.

  3. Robust depth filter sizing for centrate clarification.

    PubMed

    Lutz, Herb; Chefer, Kate; Felo, Michael; Cacace, Benjamin; Hove, Sarah; Wang, Bin; Blanchard, Mark; Oulundsen, George; Piper, Rob; Zhao, Xiaoyang

    2015-01-01

    Cellulosic depth filters embedded with diatomaceous earth are widely used to remove colloidal cell debris from centrate as a secondary clarification step during the harvest of mammalian cell culture fluid. The high cost associated with process failure in a GMP (Good Manufacturing Practice) environment highlights the need for robust process-scale depth filter sizing that allows for (1) stochastic batch-to-batch variations from filter media, bioreactor feed and operation, and (2) systematic scaling differences in average performance between filter sizes and formats. Matched-lot depth filter media tested at the same conditions with consecutive batches of the same molecule were used to assess the sources and magnitudes of process variability. Depth filter sizing safety factors of 1.2-1.6 allow a filtration process to compensate for random batch-to-batch process variations. Matched-lot depth filter media in four different devices were tested simultaneously at the same conditions with a common feed to assess scaling effects. All filter devices showed <11% capacity difference, and the Pod format devices showed no statistically significant capacity differences. PMID:26518411
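A hedged sketch of the sizing logic described above: apply a safety factor (the 1.2-1.6 range quoted) to the small-scale measured capacity so that the installed filter area absorbs random batch-to-batch variation. The function name and units are illustrative assumptions, not taken from the paper.

```python
# Illustrative depth-filter sizing with a batch-variability safety factor.

def required_filter_area(batch_volume_l, capacity_l_per_m2, safety_factor=1.4):
    """Filter area (m^2) such that the batch still fits even if the
    effective capacity drops to capacity / safety_factor."""
    if safety_factor < 1.0:
        raise ValueError("safety factor must be >= 1")
    return batch_volume_l / (capacity_l_per_m2 / safety_factor)

# 2000 L centrate batch against a measured capacity of 100 L/m^2:
print(f"{required_filter_area(2000, 100, 1.4):.1f} m^2")  # 28.0 m^2 vs 20.0 unfactored
```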

  4. An Approach to Identify SNPs in the Gene Encoding Acetyl-CoA Acetyltransferase-2 (ACAT-2) and Their Proposed Role in Metabolic Processes in Pig

    PubMed Central

    Song, Ki Duk; Sharma, Neelesh; Kim, Jeong Hyun; Kim, Nam Eun; Lee, Sung Jin; Kang, Chul Woong; Oh, Sung Jong; Jeong, Dong Kee

    2014-01-01

    The novel liver protein acetyl-CoA acetyltransferase-2 (ACAT2) is involved in the beta-oxidation and lipid metabolism. Its comprehensive relative expression, in silico non-synonymous single nucleotide polymorphism (nsSNP) analysis, as well as its annotation in terms of metabolic process with another protein from the same family, namely, acetyl-CoA acyltransferase-2 (ACAA2) was performed in Sus scrofa. This investigation was conducted to understand the most important nsSNPs of ACAT2 in terms of their effects on metabolic activities and protein conformation. The two most deleterious mutations at residues 122 (I to V) and 281 (R to H) were found in ACAT2. Validation of expression of genes in the laboratory also supported the idea of differential expression of ACAT2 and ACAA2 conceived through the in silico analysis. Analysis of the relative expression of ACAT2 and ACAA2 in the liver tissue of Jeju native pig showed that the former expressed significantly higher (P<0.05). Overall, the computational prediction supported by wet laboratory analysis suggests that ACAT2 might contribute more to metabolic processes than ACAA2 in swine. Further associations of SNPs in ACAT2 with production traits might guide efforts to improve growth performance in Jeju native pigs. PMID:25050817

  5. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014

    PubMed Central

    2016-01-01

    Purpose: The objective of this research is to identify weaknesses of undergraduate programs in terms of personnel and finance, organizational management, and facilities in the view of faculty and library staff, and to determine factors that may facilitate program quality improvement. Methods: This is a descriptive analytical survey and, in purpose, an applied evaluation study in which the undergraduate groups of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context input process product model in 2014. The statistical population consisted of three subgroups: department heads (n=10), faculty members (n=61), and library staff (n=10), for a total of 81 people. Data were collected through three researcher-made questionnaires based on a Likert scale and analyzed using descriptive and inferential statistics. Results: Results showed a desirable or relatively desirable situation for the factors in the context, input, process, and product fields, except for administration and finance and for research and educational spaces and equipment, which were in an undesirable situation. Conclusion: Based on these results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, educational curriculum, and administration and finance, and recommended steps concerning finance, organizational management, and communication with graduates in order to improve the quality of this system. PMID:27240892

  6. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    PubMed

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures. PMID:25998815

  7. Complexity and robustness in hypernetwork models of metabolism.

    PubMed

    Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J

    2016-10-01

    Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
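As a concrete illustration of extending site percolation to a hypernetwork, here is a minimal pure-Python sketch. The rule that a hyperedge (reaction) survives only if all of its member nodes (species) survive is a simplifying modelling choice for this sketch, not necessarily the paper's exact formulation; robustness is read off as the mean relative size of the largest surviving component.

```python
import random

def largest_component(nodes, hyperedges):
    """Largest component size, where nodes are connected when they share
    a surviving hyperedge (union-find over hyperedge members)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for edge in hyperedges:
        members = list(edge)
        for other in members[1:]:
            ra, rb = find(members[0]), find(other)
            if ra != rb:
                parent[ra] = rb
    sizes = {}
    for n in nodes:
        root = find(n)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) if sizes else 0

def site_percolation(nodes, hyperedges, removal_fraction, trials=100, seed=0):
    """Mean relative largest-component size after removing a random
    fraction of nodes; a hyperedge survives only if all its nodes do."""
    rng = random.Random(seed)
    nodes = list(nodes)
    keep = int(round(len(nodes) * (1.0 - removal_fraction)))
    total = 0.0
    for _ in range(trials):
        surviving = set(rng.sample(nodes, keep))
        edges = [e for e in hyperedges if set(e) <= surviving]
        total += largest_component(surviving, edges) / max(len(nodes), 1)
    return total / trials

# Toy metabolic hypernetwork: one 3-species reaction, one 2-species reaction.
species = ["A", "B", "C", "D"]
reactions = [("A", "B", "C"), ("C", "D")]
print(site_percolation(species, reactions, 0.25, trials=200))
```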

  8. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.

  9. Processes for identifying regional influences of and responses to increasing atmospheric CO2 and climate change - the MINK project: An overview

    SciTech Connect

    Rosenberg, N.J.; Crosson, P.R.

    1991-08-01

    Scientists believe that a serious change in the climate of the earth could occur in the course of the next two to five decades as a result of warming caused by the rapid accumulation of radiatively active trace gases in the atmosphere. There is concern that not only the amount of warming but the rate at which it occurs could be unprecedented, at least since the current interglacial period began. Scientific uncertainties remain in our understanding of the climatic changes that may follow from greenhouse warming. Nevertheless, large and rapid changes in regional climate are conceivable. General circulation models (GCMs) predict changes for the central U.S. as large as an 8 °C increase in mean summertime temperature accompanied by a 1 mm/day decrease in mean precipitation. Most predictions are less extreme but, so long as the direction of change is credible, efforts are warranted to identify just what kinds of impacts to expect if society chooses to allow climate to change or cannot stop it from changing, and just what might be done to adjust to those impacts.

  10. Implementing a Real-time Complex Event Stream Processing System to Help Identify Potential Participants in Clinical and Translational Research Studies.

    PubMed

    Weber, Susan; Lowe, Henry J; Malunjkar, Sanjay; Quinn, James

    2010-01-01

    Event Stream Processing is a computational approach to the problem of how to infer the occurrence of an event from a data stream in real time without reference to a database. This paper describes how we implemented this technology on the STRIDE platform to address the challenge of real time notification of patients presenting in the Emergency Department (ED) who potentially meet eligibility criteria for a clinical study. The system was evaluated against a standalone legacy alerting system and found to perform adequately. While our initial use of this technology was focused on relatively simple alerts, the system is extensible and has the potential to provide enterprise-level research alerting services supporting more complex scenarios. PMID:21347023

  11. Identifying phonological processing deficits in Northern Sotho-speaking children: The use of non-word repetition as a language assessment tool in the South African context.

    PubMed

    Wilsenach, Carien

    2016-01-01

    Diagnostic testing of speech/language skills in the African languages spoken in South Africa is a challenging task, as standardised language tests in the official languages of South Africa barely exist. Commercially available language tests are in English, and have been standardised in other parts of the world. Such tests are often translated into African languages, a practice that speech language therapists deem linguistically and culturally inappropriate. In response to the need for developing clinical language assessment instruments that could be used in South Africa, this article reports on data collected with a Northern Sotho non-word repetition task (NRT). Non-word repetition measures various aspects of phonological processing, including phonological working memory (PWM), and is used widely by speech language therapists, linguists, and educational psychologists in the Western world. The design of a novel Northern Sotho NRT is described, and it is argued that the task could be used successfully in the South African context to discriminate between children with weak and strong Northern Sotho phonological processing ability, regardless of the language of learning and teaching. The NRT was piloted with 120 third graders, and showed moderate to strong correlations with other measures of PWM, such as digit span and English non-word repetition. Furthermore, the task was positively associated with both word and fluent reading in Northern Sotho, and it reliably predicted reading outcomes in the tested population. Suggestions are made for improving the current version of the Northern Sotho NRT, whereafter it should be suitable to test learners from various age groups. PMID:27245134

  12. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working with images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest classifier for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and gives more robust results in complex 3D cases.

  13. Dynamic reliability-based robust design optimization with time-variant probabilistic constraints

    NASA Astrophysics Data System (ADS)

    Wang, Pingfeng; Wang, Zequn; Almaktoom, Abdulaziz T.

    2014-06-01

    With the increasing complexity of engineering systems, ensuring high system reliability and system performance robustness throughout a product life cycle is of vital importance in practical engineering design. Dynamic reliability analysis, which is generally encountered due to time-variant system random inputs, becomes a primary challenge in reliability-based robust design optimization (RBRDO). This article presents a new approach to efficiently carry out dynamic reliability analysis for RBRDO. The key idea of the proposed approach is to convert time-variant probabilistic constraints to time-invariant ones by efficiently constructing a nested extreme response surface (NERS) and then carry out dynamic reliability analysis using NERS in an iterative RBRDO process. The NERS employs an efficient global optimization technique to identify the extreme time responses that correspond to the worst case scenario of system time-variant limit state functions. With these extreme time samples, a kriging-based time prediction model is built and used to estimate extreme responses for any given arbitrary design in the design space. An adaptive response prediction and model maturation mechanism is developed to guarantee the accuracy and efficiency of the proposed NERS approach. The NERS is integrated with RBRDO with time-variant probabilistic constraints to achieve optimum designs of engineered systems with desired reliability and performance robustness. Two case studies are used to demonstrate the efficacy of the proposed approach.

  14. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  15. Mutant α-galactosidase A enzymes identified in Fabry disease patients with residual enzyme activity: biochemical characterization and restoration of normal intracellular processing by 1-deoxygalactonojirimycin

    PubMed Central

    Ishii, Satoshi; Chang, Hui-Hwa; Kawasaki, Kunito; Yasuda, Kayo; Wu, Hui-Li; Garman, Scott C.; Fan, Jian-Qiang

    2007-01-01

    Fabry disease is a lysosomal storage disorder caused by the deficiency of α-Gal A (α-galactosidase A) activity. In order to understand the molecular mechanism underlying α-Gal A deficiency in Fabry disease patients with residual enzyme activity, enzymes with different missense mutations were purified from transfected COS-7 cells and the biochemical properties were characterized. The mutant enzymes detected in variant patients (A20P, E66Q, M72V, I91T, R112H, F113L, N215S, Q279E, M296I, M296V and R301Q), and those found mostly in mild classic patients (A97V, A156V, L166V and R356W) appeared to have normal Km and Vmax values. The degradation of all mutants (except E59K) was partially inhibited by treatment with kifunensine, a selective inhibitor of ER (endoplasmic reticulum) α-mannosidase I. Metabolic labelling and subcellular fractionation studies in COS-7 cells expressing the L166V and R301Q α-Gal A mutants indicated that the mutant protein was retained in the ER and degraded without processing. Addition of DGJ (1-deoxygalactonojirimycin) to the culture medium of COS-7 cells transfected with a large set of missense mutant α-Gal A cDNAs effectively increased both enzyme activity and protein yield. DGJ was capable of normalizing intracellular processing of mutant α-Gal A found in both classic (L166V) and variant (R301Q) Fabry disease patients. In addition, the residual enzyme activity in fibroblasts or lymphoblasts from both classic and variant hemizygous Fabry disease patients carrying a variety of missense mutations could be substantially increased by cultivation of the cells with DGJ. These results indicate that a large proportion of mutant enzymes in patients with residual enzyme activity are kinetically active. Excessive degradation in the ER could be responsible for the deficiency of enzyme activity in vivo, and the DGJ approach may be broadly applicable to Fabry disease patients with missense mutations. PMID:17555407

  16. Robust registration of longitudinal spine CT.

    PubMed

    Glocker, Ben; Zikic, Darko; Haynor, David R

    2014-01-01

    Accurate and reliable registration of longitudinal spine images is essential for assessment of disease progression and surgical outcome. Implementing a fully automatic and robust registration for clinical use, however, is challenging since standard registration techniques often fail due to poor initial alignment. The main causes of registration failure are the small overlap between scans which focus on different parts of the spine and/or substantial change in shape (e.g. after correction of abnormal curvature) and appearance (e.g. due to surgical implants). To overcome these issues we propose a registration approach which incorporates estimates of vertebrae locations obtained from a learning-based classification method. These location priors are used to initialize the registration and to provide semantic information within the optimization process. Quantitative evaluation on a database of 93 patients with a total of 276 registrations on longitudinal spine CT demonstrate that our registration method significantly reduces the number of failure cases. PMID:25333125

  17. Towards designing robust coupled networks

    PubMed Central

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantee a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy. PMID:23752705
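A sketch of the betweenness-based selection idea above: compute betweenness centrality (Brandes' algorithm, pure Python) and nominate the top-k nodes as autonomy candidates. The coupling between networks and the cascade dynamics themselves are beyond this sketch, which only illustrates the ranking step.

```python
from collections import deque

def betweenness(adj):
    """Unweighted betweenness centrality via Brandes' algorithm.
    adj: dict node -> list of neighbours (symmetric for undirected graphs;
    pair counts are then doubled, which does not affect the ranking)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def autonomous_nodes(adj, k):
    """Nominate the k highest-betweenness nodes as autonomy candidates."""
    bc = betweenness(adj)
    return sorted(adj, key=lambda v: bc[v], reverse=True)[:k]

# Five-node path a-b-c-d-e: the middle node carries the most shortest paths.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
print(autonomous_nodes(path, 1))   # ['c']
```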

  18. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, i.e. the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for designing robust observers possessing the above structural features is established and discussed.
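A hedged sketch of structural feature (1): a discrete-time Luenberger observer, i.e. a closed-loop system driven by the output error y − C·x̂. The 2-state plant and gain below are illustrative choices, not taken from the paper.

```python
# Closed-loop observer driven by the output error (Luenberger form).

def simulate_observer(A, C_row, gain, x0, xhat0, steps):
    """Plant x+ = A x (no input, 2 states, scalar output y = C x);
    observer xhat+ = A xhat + gain * (y - C xhat)."""
    x, xhat = list(x0), list(xhat0)
    for _ in range(steps):
        y = C_row[0] * x[0] + C_row[1] * x[1]
        err = y - (C_row[0] * xhat[0] + C_row[1] * xhat[1])
        x = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
        xhat = [A[0][0] * xhat[0] + A[0][1] * xhat[1] + gain[0] * err,
                A[1][0] * xhat[0] + A[1][1] * xhat[1] + gain[1] * err]
    return x, xhat

A = [[0.9, 0.1], [0.0, 0.8]]     # stable plant
C_row = [1.0, 0.0]               # measure the first state only
gain = [0.5, 0.1]                # keeps error eigenvalues inside unit circle
x, xhat = simulate_observer(A, C_row, gain, [1.0, 1.0], [0.0, 0.0], 50)
print(abs(x[0] - xhat[0]) < 1e-3, abs(x[1] - xhat[1]) < 1e-3)   # True True
```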

  19. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, all of which are exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can readily provide the demanded large storage capacity. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading all contents of the holographic memory: the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes results of an experimental demonstration. PMID:22109441

  20. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. For the successful use of distributed systems in situations where humans cannot provide the predictable real-time responsiveness of a computer, it is important that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  1. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common and novel approximation methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
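A hedged sketch of the parametric-distribution idea described above: model each activity duration as an independent Gaussian, so the plan's total duration is Gaussian with summed means and variances, and the risk of overrunning a deadline has a closed form rather than requiring sampling. The activity numbers are illustrative, not taken from the paper's domains.

```python
import math

def overrun_probability(activities, deadline):
    """activities: list of (mean, stddev) duration pairs, assumed
    independent Gaussians. Returns P(total duration > deadline)."""
    mean = sum(m for m, _ in activities)
    std = math.sqrt(sum(s * s for _, s in activities))
    z = (deadline - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # Gaussian tail probability

plan = [(10.0, 2.0), (5.0, 1.0), (8.0, 2.0)]    # three sequential activities
print(round(overrun_probability(plan, 30.0), 3))   # ~0.01 risk of overrun
```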

  2. Identifying residence times and streamflow generation processes using δ18O and δ2H in meso-scale catchments in the Abay/Upper Blue Nile, Ethiopia

    NASA Astrophysics Data System (ADS)

    Tekleab, S.; Wenninger, J.; Uhlenbrook, S.

    2013-08-01

    Measurements of the stable isotopes oxygen-18 (18O) and deuterium (2H) were carried out in two meso-scale catchments, Chemoga (358 km2) and Jedeb (296 km2), south of Lake Tana, Abay/Upper Blue Nile basin, Ethiopia. The region is of paramount importance for the water resources in the Nile basin. Stable isotope compositions in precipitation, spring water and streamflow were analyzed (i) to characterize the spatial and temporal variations of water fluxes; (ii) to estimate the mean residence time of water using a sine wave regression approach; and (iii) to identify runoff components using classical two-component hydrograph separations at a seasonal time scale. The results show that the isotopic composition of precipitation exhibits marked seasonal variations, which suggests different sources of moisture generation for the rainfall in the study area. The Atlantic-Indian Ocean, the Congo basin, and the Sudd swamps are the likely moisture source areas during the main rainy (summer) season, while the Indian-Arabian and Mediterranean Seas are the likely source areas during the little-rain (spring) and dry (winter) seasons. The spatial variation of the isotopic composition is affected by the amount effect and, to a lesser extent, by altitude and temperature effects. A mean altitude effect of -0.12‰ per 100 m for 18O and -0.58‰ per 100 m for 2H was discernible in the precipitation isotope composition. The seasonal variations of the isotopic signature of the spring water exhibit a damped response compared to the river waters, which shows that the spring water has longer residence times than the river water. Results from the hydrograph separation at a seasonal time scale indicate the dominance of event water, with an average of 71% and 64% of the total runoff during the wet season in the Chemoga and Jedeb catchments, respectively. The stable isotope compositions of streamflow samples were damped compared to the input function of precipitation for both catchments and this damping was
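Hedged sketches of the two tracer methods named above: classical two-component hydrograph separation from a conservative tracer, and the sine-wave (amplitude damping) estimate of mean residence time. The numbers are illustrative, not the paper's data.

```python
import math

def event_water_fraction(d_stream, d_pre, d_event):
    """Two-component hydrograph separation with a conservative tracer
    (e.g. delta 18O): stream = f*event + (1 - f)*pre-event water."""
    return (d_stream - d_pre) / (d_event - d_pre)

def mean_residence_time_days(amp_in, amp_out, period_days=365.25):
    """Sine-wave regression estimate: the seasonal tracer amplitude is
    damped from amp_in (precipitation) to amp_out (streamflow), giving
    T = (1/omega) * sqrt((amp_in/amp_out)^2 - 1), omega = 2*pi/period."""
    omega = 2.0 * math.pi / period_days
    return math.sqrt((amp_in / amp_out) ** 2 - 1.0) / omega

# Stream at -1.5 permil, pre-event water at -3.0, event rain at -0.5 permil:
print(event_water_fraction(-1.5, -3.0, -0.5))     # 0.6 -> 60% event water
# Annual 18O amplitude damped from 3.0 permil in rain to 1.0 in the stream:
print(round(mean_residence_time_days(3.0, 1.0)))  # ~164 days
```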

  3. Problems Identifying Independent and Dependent Variables

    ERIC Educational Resources Information Center

    Leatham, Keith R.

    2012-01-01

    This paper discusses one step from the scientific method--that of identifying independent and dependent variables--from both scientific and mathematical perspectives. It begins by analyzing an episode from a middle school mathematics classroom that illustrates the need for students and teachers alike to develop a robust understanding of…

  4. Robust statistical fusion of image labels.

    PubMed

    Landman, Bennett A; Asman, Andrew J; Scoggins, Andrew G; Bogovic, John A; Xing, Fangxu; Prince, Jerry L

    2012-02-01

    Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone, as images are corrupted by noise and artifacts. Even expert interpretations are subject to the subjectivity and precision of individual raters. Hence, all labels must be considered imperfect, with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. Moreover, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability. PMID:22010145
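The fusion idea can be illustrated with a simplified, STAPLE-style iteration that alternates between a reliability-weighted consensus and per-rater agreement scores, tolerating missing votes. This is an illustrative stand-in, not the authors' exact algorithm; the function name and missing-data handling are assumptions:

```python
import numpy as np

def fuse_labels(votes, n_iter=10):
    """Iteratively re-weighted voting over binary label maps.

    votes: (n_raters, n_voxels) array of 0/1 labels; missing votes as np.nan.
    Alternates between estimating a weighted consensus and scoring each
    rater's reliability by agreement with that consensus.
    """
    votes = np.asarray(votes, dtype=float)
    w = np.ones(votes.shape[0])              # start with equal reliabilities
    for _ in range(n_iter):
        mask = ~np.isnan(votes)              # tolerate missing data
        num = np.nansum(w[:, None] * votes, axis=0)
        den = (w[:, None] * mask).sum(axis=0)
        consensus = (num / np.maximum(den, 1e-12)) > 0.5
        agree = np.where(mask, votes == consensus, np.nan)
        w = np.nanmean(agree, axis=1) + 1e-6  # per-rater reliability
    return consensus.astype(int)

# Two reliable raters and one noisy rater; the noisy rater is down-weighted.
print(fuse_labels([[1, 1, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0]]))
```

The full method in the paper additionally models repeated label sets and training/catch-trial data; this sketch only captures the reliability-weighting core.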

  5. Contextualizing the Genes Altered in Bladder Neoplasms in Pediatric and Teen Patients Allows Identifying Two Main Classes of Biological Processes Involved and New Potential Therapeutic Targets

    PubMed Central

    Porrello, A.; Piergentili, R.

    2016-01-01

    Research on bladder neoplasms in pediatric and teen patients (BNPTP) has described 21 genes, which are variously involved in this disease and are mostly responsible for deregulated cell proliferation. However, due to the limited number of publications on this subject, it is still unclear what type of relationships there are among these genes and which are the chances that, while having different molecular functions, they i) act as downstream effector genes of well-known pro- or anti- proliferative stimuli and/or interplay with biochemical pathways having oncological relevance or ii) are specific and, possibly, early biomarkers of these pathologies. A Gene Ontology (GO)-based analysis showed that these 21 genes are involved in biological processes, which can be split into two main classes: cell regulation-based and differentiation/development-based. In order to understand the involvement/overlapping with main cancer-related pathways, we performed a meta-analysis dependent on the 189 oncogenic signatures of the Molecular Signatures Database (OSMSD) curated by the Broad Institute. We generated a binary matrix with 53 gene signatures having at least one hit; this analysis i) suggests that some genes of the original list show inconsistencies and might need to be experimentally re-assessed or evaluated as biomarkers (in particular, ACTA2) and ii) allows hypothesizing that important (proto)oncogenes (E2F3, ERBB2/HER2, CCND1, WNT1, and YAP1) and (putative) tumor suppressors (BRCA1, RBBP8/CTIP, and RB1-RBL2/p130) may participate in the onset of this disease or worsen the observed phenotype, thus expanding the list of possible molecular targets for the treatment of BNPTP. PMID:27013923

  6. Object oriented simulation implementation in support of robust system design

    SciTech Connect

    Not Available

    1993-04-01

    A very brief description of two "classes" developed for use in design optimization and sensitivity analyses is given. These classes are used in simulations of systems in early design phases as well as in system response assessments. The instantiated classes were coupled to system models to demonstrate the practicality and efficiency of using these objects in complex robust design processes.

  7. The GODDESS ionization chamber: developing robust windows

    NASA Astrophysics Data System (ADS)

    Blanchard, Rose; Baugher, Travis; Cizewski, Jolie; Pain, Steven; Ratkiewicz, Andrew; Goddess Collaboration

    2015-10-01

    Reaction studies of nuclei far from stability require high-efficiency arrays of detectors and the ability to identify beam-like particles, especially when the beam is a cocktail beam. The Gammasphere ORRUBA Dual Detectors for Experimental Structure Studies (GODDESS) is made up of the Oak Ridge-Rutgers University Barrel Array (ORRUBA) of silicon detectors for charged particles inside of the gamma-ray detector array Gammasphere. A high-rate ionization chamber is being developed to identify beam-like particles. Consisting of twenty-one alternating anode and cathode grids, the ionization chamber sits downstream of the target chamber and is used to measure the energy loss of recoiling ions. A critical component of the system is a thin but robust Mylar window that separates the gas-filled ionization chamber from the vacuum of the target chamber with minimal energy loss. After construction, windows were tested to verify that they would not break below the required pressure, which would damage the wire grids. This presentation will summarize the status of the ionization chamber and the results of the first tests with beams. This work is supported in part by the U.S. Department of Energy and National Science Foundation.

  8. Robust fluidic connections to freestanding microfluidic hydrogels

    PubMed Central

    Baer, Bradly B.; Larsen, Taylor S. H.

    2015-01-01

    Biomimetic scaffolds approaching physiological scale, whose size and large cellular load far exceed the limits of diffusion, require incorporation of a fluidic means to achieve adequate nutrient/metabolite exchange. This need has driven the extension of microfluidic technologies into the area of biomaterials. While construction of perfusable scaffolds is essentially a problem of microfluidic device fabrication, functional implementation of free-standing, thick-tissue constructs depends upon successful integration of external pumping mechanisms through optimized connective assemblies. However, a critical analysis to identify optimal materials/assembly components for hydrogel substrates has received little focus to date. This investigation addresses this issue directly by evaluating the efficacy of a range of adhesive and mechanical fluidic connection methods to gelatin hydrogel constructs based upon both mechanical property analysis and cell compatibility. Results identify a novel bioadhesive, comprised of two enzymatically modified gelatin compounds, for connecting tubing to hydrogel constructs that is both structurally robust and non-cytotoxic. Furthermore, outcomes from this study provide clear evidence that fluidic interconnect success varies with substrate composition (specifically hydrogel versus polydimethylsiloxane), highlighting not only the importance of selecting the appropriately tailored components for fluidic hydrogel systems but also that of encouraging ongoing, targeted exploration of this issue. The optimization of such interconnect systems will ultimately promote exciting scientific and therapeutic developments provided by microfluidic, cell-laden scaffolds. PMID:26045731

  9. Sensitive Periods for Developing a Robust Trait of Appetitive Aggression

    PubMed Central

    Köbach, Anke; Elbert, Thomas

    2015-01-01

    Violent behavior can be intrinsically rewarding; in particular, combatants fighting in current civil wars present with elevated traits of appetitive aggression. The majority of these fighters were recruited as children or adolescents. In the present study, we test whether there is a developmental period during which combatants are sensitive to developing a robust trait of appetitive aggression. We investigated 95 combatants in the demobilization process who had been recruited at different ages in the Kivu regions of the eastern Democratic Republic of Congo. Using random forest with conditional inference trees, we identified recruitment at the ages of 16 and 17 years as being predictive of the level of appetitive aggression; the number of lifetime perpetrated acts was the most important predictor. We conclude that high levels of appetitive aggression develop in ex-combatants, especially in those recruited during their middle to late teenage years, a developmental period marked by a natural inclination to exercise physical force. Consequently, ex-combatants may remain vulnerable to aggressive behavior patterns and re-recruitment unless they are provided with alternative strategies for dealing with their aggression. PMID:26528191

  10. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  11. Robust reflective pupil slicing technology

    NASA Astrophysics Data System (ADS)

    Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.

    2014-07-01

    Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes, but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications, performing the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and requires only a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly

  12. S-system-based analysis of the robust properties common to many biochemical network models.

    PubMed

    Matsuoka, Yu; Jahan, Nusrat; Kurata, Hiroyuki

    2016-05-01

    Robustness is a key feature characterizing the adaptation of organisms to changes in their internal and external environments. A broad range of kinetic or dynamic models of biochemical systems have been developed. Robustness analyses are attractive for exploring properties common to many biochemical models. To reveal such features, we transform different types of mathematical equations into a standard or intelligible formula and use the multiple parameter sensitivity (MPS) to identify factors critically responsible for the total robustness to many perturbations. The MPS is determined by the top quarter of the most sensitive parameters rather than by the single parameter with the maximum sensitivity. The MPS did not show any correlation with network size. The MPS is closely related to the standard deviation of the sensitivity profile. A decrease in the standard deviation enhanced the total robustness, which is a hallmark of distributed robustness, in which many factors (pathways) contribute to the total robustness. PMID:26861555
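The MPS as described in the abstract can be sketched directly: sort the absolute parameter sensitivities and average the top quarter. A minimal illustration (function name and sample values are hypothetical, not from the paper):

```python
import numpy as np

def multiple_parameter_sensitivity(sensitivities):
    """Mean of the top quarter of absolute parameter sensitivities.

    Follows the abstract's description: total robustness is gauged by the
    most sensitive ~25% of parameters rather than the single maximum.
    """
    s = np.sort(np.abs(np.asarray(sensitivities, dtype=float)))[::-1]
    k = max(1, int(np.ceil(len(s) / 4)))   # at least one parameter
    return s[:k].mean()

# Eight hypothetical sensitivities -> average of the two largest:
print(multiple_parameter_sensitivity([0.1, 0.2, 0.9, 0.05, 0.4, 0.3, 0.15, 0.6]))
```

Averaging over the top quartile instead of taking the single maximum makes the measure less brittle to one outlier parameter, which is the point of the MPS.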

  13. Robust template matching for affine resistant image watermarks.

    PubMed

    Pereira, S; Pun, T

    2000-01-01

    Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. This paper describes a method for the secure and robust copyright protection of digital images. We present an approach for embedding a digital watermark into an image using the Fourier transform. To this watermark is added a template in the Fourier transform domain to render the method robust against general linear transformations. We detail a new algorithm based on polar maps for the accurate and efficient recovery of the template in an image which has undergone a general affine transformation. We also present results which demonstrate the robustness of the method against some common image processing operations such as compression, rotation, scaling, and aspect ratio changes. PMID:18255481
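The template idea can be sketched as adding conjugate-symmetric peaks to an image's Fourier spectrum, so that a detector can later locate the peaks (e.g. via a polar map) and invert an unknown affine transform. This is an illustrative toy, not the authors' exact embedding; peak locations and strength are arbitrary:

```python
import numpy as np

def embed_template(image, peaks, strength=50.0):
    """Add template peaks to an image's Fourier spectrum.

    Each peak is mirrored at the conjugate-symmetric frequency so the
    watermarked image remains real-valued after the inverse transform.
    """
    F = np.fft.fft2(np.asarray(image, dtype=float))
    n, m = F.shape
    for u, v in peaks:
        F[u % n, v % m] += strength
        F[(-u) % n, (-v) % m] += strength  # mirror peak keeps ifft2 real
    return np.real(np.fft.ifft2(F))
```

Detection would recompute the spectrum of a suspect image and search for the known peak constellation; the geometric distortion can be estimated from how the constellation has moved.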

  14. Robust image modeling techniques with an image restoration application

    NASA Astrophysics Data System (ADS)

    Kashyap, Rangasami L.; Eom, Kie-Bum

    1988-08-01

    A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
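The robust estimation step can be illustrated with a Huber-weighted IRLS fit of a one-dimensional AR(1) coefficient, a much-simplified stand-in for the paper's NSHP estimator. The tuning constant c = 1.345 is the conventional Huber default, not a value from the paper:

```python
import numpy as np

def robust_ar1(x, c=1.345, n_iter=20):
    """Huber-weighted IRLS estimate of an AR(1) coefficient.

    Residuals beyond c robust-scale units are down-weighted so that
    impulse-noise outliers do not dominate the fit.
    """
    x = np.asarray(x, dtype=float)
    y, z = x[1:], x[:-1]
    a = np.dot(z, y) / np.dot(z, z)                  # least-squares start
    for _ in range(n_iter):
        r = y - a * z
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
        w = np.minimum(c * s / np.maximum(np.abs(r), 1e-12), 1.0)
        a = np.dot(w * z, y) / np.dot(w * z, z)      # re-weighted fit
    return a
```

The paper's estimator works on a 2-D nonsymmetric half-plane neighborhood rather than a single lag, but the down-weighting of large residuals is the same mechanism.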

  15. Origin of Robustness in Generating Drug-Resistant Malaria Parasites

    PubMed Central

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H.; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W.; Kamchonwongpaisan, Sumalee; Fidock, David A.; Kirkman, Laura A.; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-01-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs. PMID:24739308

  16. Selection for Robustness in Mutagenized RNA Viruses

    PubMed Central

    Furió, Victoria; Holmes, Edward C; Moya, Andrés

    2007-01-01

    Mutational robustness is defined as the constancy of a phenotype in the face of deleterious mutations. Whether robustness can be directly favored by natural selection remains controversial. Theory and in silico experiments predict that, at high mutation rates, slow-replicating genotypes can potentially outcompete faster counterparts if they benefit from a higher robustness. Here, we experimentally validate this hypothesis, dubbed the “survival of the flattest,” using two populations of the vesicular stomatitis RNA virus. Characterization of fitness distributions and genetic variability indicated that one population showed a higher replication rate, whereas the other was more robust to mutation. The faster replicator outgrew its robust counterpart in standard competition assays, but the outcome was reversed in the presence of chemical mutagens. These results show that selection can directly favor mutational robustness and reveal a novel viral resistance mechanism against treatment by lethal mutagenesis. PMID:17571922

  17. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst-case scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
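The tail mean criterion is easy to state concretely: average the worst fraction beta of scenario outcomes. beta = 1/n recovers the pure worst case, and beta = 1 recovers the plain mean, which is why it interpolates between conservative and neutral attitudes. A minimal sketch, assuming larger cost is worse:

```python
import math

def tail_mean(costs, beta):
    """Average of the worst (largest-cost) fraction `beta` of scenarios.

    beta close to 0 approaches the worst case; beta = 1 gives the plain
    mean, matching the abstract's view of the tail mean as a softer
    robust criterion.
    """
    if not 0.0 < beta <= 1.0:
        raise ValueError("beta must lie in (0, 1]")
    k = max(1, math.ceil(beta * len(costs)))
    worst = sorted(costs, reverse=True)[:k]
    return sum(worst) / k

print(tail_mean([3, 7, 5, 9, 1], beta=0.4))  # mean of {9, 7} = 8.0
```

In the stochastic-programming literature this quantity is also known as the conditional value-at-risk (CVaR) of the cost distribution, which is what makes it LP-implementable.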

  18. Fast Robust PCA on Graphs

    NASA Astrophysics Data System (ADS)

    Shahid, Nauman; Perraudin, Nathanael; Kalofolias, Vassilis; Puy, Gilles; Vandergheynst, Pierre

    2016-06-01

    Mining useful clusters from high dimensional data has received significant attention from the computer vision and pattern recognition community in recent years. Linear and non-linear dimensionality reduction has played an important role in overcoming the curse of dimensionality. However, such methods are often accompanied by three different problems: high computational complexity (usually associated with the nuclear norm minimization), non-convexity (for matrix factorization methods) and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient and scalable for huge datasets with O(nlog(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data.
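A heavily simplified variant of graph-regularized low-rank recovery admits a closed form: with a Frobenius data term and a single sample-graph smoothness penalty, the minimizer of ||X - Y||_F^2 + gamma * tr(Y L Y^T) is Y = X (I + gamma*L)^(-1). This toy version (the paper uses a robust l1 data term and two graphs, and a fast solver rather than a matrix inverse) can be sketched as:

```python
import numpy as np

def graph_smooth_approx(X, L, gamma):
    """Closed-form sketch of graph-regularized recovery.

    X: (features, samples) data matrix; L: (samples, samples) graph
    Laplacian; gamma: smoothness weight. Minimizing
        ||X - Y||_F^2 + gamma * tr(Y L Y^T)
    over Y gives Y = X (I + gamma*L)^(-1): columns connected in the
    graph are pulled toward each other.
    """
    n = L.shape[0]
    return X @ np.linalg.inv(np.eye(n) + gamma * L)
```

As gamma grows, the columns within each connected component of the graph converge to their common mean, which is the smoothing effect the full model exploits for clustering.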

  19. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs. PMID:26080050
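The optimal (Nesterov-style) gradient scheme the abstract describes, combining the current gradient with historical gradients and a 1/L step size, can be sketched for a generic smooth convex objective. This is illustrative only, not the authors' exact OGM for RCRF training:

```python
def accelerated_gradient(grad, x0, lipschitz, n_iter=50):
    """Nesterov-style accelerated gradient descent sketch.

    Uses the Lipschitz constant to set the step size 1/L and a momentum
    sequence t_k; this class of methods achieves the O(1/k^2) rate the
    abstract refers to, versus O(1/k) for plain gradient descent.
    """
    x = y = x0
    t = 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / lipschitz            # gradient step at y
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum
        x, t = x_next, t_next
    return x

# Minimize f(x) = (x - 3)^2, with grad f(x) = 2(x - 3) and L = 2:
print(accelerated_gradient(lambda v: 2.0 * (v - 3.0), 0.0, 2.0))  # 3.0
```

The full RCRF trainer applies this scheme to the l1-regularized CRF objective (via a smoothed surrogate or proximal step); the momentum recursion itself is unchanged.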

  20. Robust Face Sketch Style Synthesis.

    PubMed

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li

    2016-01-01

    Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to make artistic effects for photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs. This requirement limits the generalization ability of these methods to produce arbitrarily stylistic sketches. To handle such a drawback, we propose a robust face sketch style synthesis method, which can convert photos to arbitrarily stylistic sketches based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. In order to further refine the obtained candidat