Science.gov

Sample records for identifying robust process

  1. Robust Signal Processing in Living Cells

    PubMed Central

    Steuer, Ralf; Waldherr, Steffen; Sourjik, Victor; Kollmann, Markus

    2011-01-01

    Cellular signaling networks have evolved an astonishing ability to function reliably and with high fidelity in uncertain environments. A crucial prerequisite for the high precision exhibited by many signaling circuits is their ability to keep the concentrations of active signaling compounds within tightly defined bounds, despite strong stochastic fluctuations in copy numbers and other detrimental influences. Based on a simple mathematical formalism, we identify topological organizing principles that facilitate such robust control of intracellular concentrations in the face of multifarious perturbations. Our framework allows us to judge whether a multiple-input-multiple-output reaction network is robust against large perturbations of network parameters and enables the predictive design of perfectly robust synthetic network architectures. Utilizing the Escherichia coli chemotaxis pathway as a hallmark example, we provide experimental evidence that our framework indeed allows us to unravel the topological organization of robust signaling. We demonstrate that the specific organization of the pathway allows the system to maintain global concentration robustness of the diffusible response regulator CheY with respect to several dominant perturbations. Our framework provides a counterpoint to the hypothesis that cellular function relies on an extensive machinery to fine-tune or control intracellular parameters. Rather, we suggest that for a large class of perturbations, there exists an appropriate topology that renders the network output invariant to the respective perturbations. PMID:22215991
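
A minimal numerical illustration of this idea (a toy integral-feedback motif, not the authors' formalism; all rate constants below are hypothetical): the steady-state output of the loop is invariant to perturbations of the gain, because the network's structure, not a tuned parameter, forces the error to zero.

```python
def steady_state_output(k, d=1.0, y_set=3.0, dt=0.01, steps=20_000):
    """Euler-integrate a toy integral-feedback motif:
        dz/dt = y_set - y   (the integrator accumulates the error)
        dy/dt = k*z - d*y   (the output is driven by the integrator)
    At steady state dz/dt = 0 forces y = y_set regardless of k and d:
    a simple example of structural (topological) robustness.
    """
    y, z = 0.0, 0.0
    for _ in range(steps):
        # Both updates use the pre-step values of y and z.
        y, z = y + dt * (k * z - d * y), z + dt * (y_set - y)
    return y
```

Doubling the gain k leaves the output unchanged, whereas the same motif without the integrator would shift its steady state.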

  2. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. Results on simulated data show that our method obtains higher identification accuracies than competing methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the alternatives. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
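
As a rough sketch of the two ingredients (not the authors' algorithm): a low-rank background can be obtained by singular-value soft-thresholding, the p = 1 special case of Schatten p-norm shrinkage, and genes can then be ranked by how strongly they deviate from that background. Function names and parameters below are illustrative only.

```python
import numpy as np

def low_rank_denoise(X, lam):
    # Singular-value soft-thresholding: the proximal operator of the
    # nuclear norm (Schatten p-norm with p = 1); smaller p would shrink
    # large singular values less aggressively.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - lam, 0.0)[:, None] * Vt)

def rank_genes_by_deviation(X, L):
    # Score each gene (row) by the L1 norm of its residual from the
    # low-rank background; large residuals flag candidate DE genes.
    return np.argsort(-np.abs(X - L).sum(axis=1))
```

In the paper's method, robustness to outliers comes from replacing a squared-error fit with an Lp-norm (p < 2) error term; the sketch above uses a plain residual for brevity.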

  3. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environmental fluctuations), and operational approaches. These must be traded, along with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as the need for, and requirements of, robustness.

  4. Identifying Robust and Sensitive Frequency Bands for Interrogating Neural Oscillations

    PubMed Central

    Shackman, Alexander J.; McMenamin, Brenton W.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2010-01-01

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic (“resting” or “spontaneous”) electroencephalogram (EEG) into five bands: delta (1–5 Hz), alpha-low (6–9 Hz), alpha-high (10–11 Hz), beta (12–19 Hz), and gamma (>21 Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broad band to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains. PMID
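
Mean power density in predefined bands, the index discussed above, reduces to a few lines given a periodogram. The band edges below follow the SFA decomposition reported in the abstract; the upper cutoff for gamma is a hypothetical choice.

```python
import numpy as np

BANDS = {  # Hz ranges from the SFA decomposition; gamma cutoff assumed
    "delta": (1, 5), "alpha_low": (6, 9), "alpha_high": (10, 11),
    "beta": (12, 19), "gamma": (21, 64),
}

def band_power(signal, fs, band):
    # Mean power density over the band, from a raw periodogram.
    lo, hi = band
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()
```

For real EEG one would use an averaged (e.g. Welch) estimate rather than a single periodogram, but the band-indexing step is the same.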

  5. A penalized robust method for identifying gene-environment interactions.

    PubMed

    Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge

    2014-04-01

    In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model misspecification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt rank-based estimation, which is much less sensitive to model misspecification than some of the existing methods and includes several commonly encountered data types and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computational feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example, with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those found using the alternatives. Some of the identified genes have important implications.
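
The smoothing idea can be sketched as follows: in a Gehan/Jaeckel-type rank objective, the hard pairwise indicator is replaced by a sigmoid so the loss becomes differentiable and amenable to gradient-based, penalized optimization. This is a minimal uncensored sketch under an AFT-style linear model, not the authors' estimator; the bandwidth `h` is a hypothetical parameter.

```python
import numpy as np

def smoothed_rank_loss(beta, X, y, h=0.1):
    # Residuals of an AFT-style linear model.
    e = y - X @ beta
    diff = e[:, None] - e[None, :]   # all pairwise e_i - e_j
    # Smooth surrogate for sum_{i,j} (e_i - e_j) * I(e_i > e_j):
    # the hard indicator is replaced by a sigmoid of bandwidth h,
    # so the objective has well-defined gradients in beta.
    return float(np.sum(diff / (1.0 + np.exp(-diff / h))))
```

The loss is zero when all residuals coincide and grows as residuals spread, which is what makes it a (smoothed) rank dispersion measure.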

  6. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability

  7. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, termed adaptive robustness, is proposed and analyzed in this paper. It combines the conventional Bayesian, adaptive, and robust approaches, which are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for the realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the faster and more intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  8. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.

  9. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation, the robustness of the method planning, and the springback itself are considered the main factors that influence the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first. These principles consist of an accurate full-cycle simulation with final validation settings, and robust process design and optimization, which are discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy is put forward based on the analysis of springback, and simulation-based springback compensation is introduced in the process design phase. Finally, the verification and comparison in tryout and production are given, which confirm that the robust springback compensation methodology is effective during die development.

  10. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    NASA Astrophysics Data System (ADS)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    The Pearl Harbor aquifer is the most important freshwater resource on Oahu (Hawaii, U.S.A.), providing water to nearly half a million people. Recent studies show that current water use is reaching or exceeding the aquifer's sustainable yield. Climate change and growing resident and tourist populations are predicted to further stress the aquifer. The island has lost huge tracts of freshwater and estuarine wetlands since human settlement; the dependence of many endemic, endangered species on these wetlands, as well as the ecosystem benefits wetlands provide, links humans and wildlife through water management. After the collapse of the sugar industry on Oahu (mid-1990s), the Waiahole ditch, a massive stream diversion bringing water from the island's windward side to its leeward side, became a hotly disputed resource. Commercial interests and traditional farmers have clashed over the water, which could also serve to support the Pearl Harbor aquifer. Considering competing interests, impending scarcity, and uncertain future conditions, how can groundwater be managed most effectively? Complex water networks like this one are characterized by conflicts between stakeholders, coupled human-natural systems, and future uncertainty. The Water Diplomacy Framework offers a model for analyzing such complex issues by integrating multiple disciplinary perspectives, identifying intervention points, and proposing sustainable solutions. It is a theory and practice of implementing adaptive water management for complex problems by shifting the discussion from 'allocation of water' to 'benefit from water resources'. This is accomplished through an interactive process that includes stakeholder input, joint fact finding, collaborative scenario development, and a negotiated approach to value creation. Presented here are the results of the initial steps in a long-term project to resolve water limitations on Oahu.
We developed a conceptual model of the Pearl Harbor Aquifer system and identified

  11. Processing Robustness for A Phenylethynyl Terminated Polyimide Composite

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2004-01-01

    The processability of a phenylethynyl terminated imide resin matrix (designated as PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated as PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one is made by NASA in-house, and the other is from an industrial source. The composite processing robustness is investigated with respect to the prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that the PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 deg C to 300 deg C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of the response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. A good consolidation is achievable at 371 deg C (700 deg F)/100 Psi, which yields an RT OHC strength of 62 Ksi. However, processability declines dramatically at temperatures below 350 deg C (662 deg F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.

  12. Robust design of binary countercurrent adsorption separation processes

    SciTech Connect

    Storti, G.; Mazzotti, M.; Morbidelli, M.; Carra, S.

    1993-03-01

    The separation of a binary mixture, using a third component of intermediate adsorptivity as desorbent, in a four-section countercurrent adsorption separation unit is considered. A procedure for the optimal and robust design of the unit is developed in the framework of Equilibrium Theory, using a model in which the adsorption equilibria are described by the constant-selectivity stoichiometric model, while mass-transfer resistances and axial mixing are neglected. By requiring that the unit achieve complete separation, it is possible to identify a set of implicit constraints on the operating parameters, that is, the flow rate ratios in the four sections of the unit. From these constraints, explicit bounds on the operating parameters are obtained, yielding a region in the operating parameter space that can be drawn a priori in terms of the adsorption equilibrium constants and the feed composition. This result provides a very convenient tool for determining both optimal and robust operating conditions. The latter issue is addressed by first analyzing the various possible sources of disturbances, as well as their effect on the separation performance. Next, the criteria for the robust design of the unit are discussed. Finally, these theoretical findings are compared with a set of experimental results obtained in a six-port simulated moving bed adsorption separation unit operated in the vapor phase.
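
For intuition, the analogous explicit bounds for the simpler case of linear adsorption isotherms ("triangle theory") can be checked in a few lines. This is a deliberate simplification of the constant-selectivity stoichiometric model treated in the paper, with hypothetical Henry constants H_A > H_B for the more and less retained components.

```python
def complete_separation_linear(m, H_A, H_B):
    # m = (m1, m2, m3, m4): flow rate ratios in the four sections.
    # Classic bounds for LINEAR isotherms: section 1 must regenerate
    # the solid (m1 > H_A), section 4 the desorbent (m4 < H_B), and
    # sections 2-3 must separate the two components.
    m1, m2, m3, m4 = m
    return (m1 > H_A) and (H_B < m2 < m3 < H_A) and (m4 < H_B)
```

A robust operating point sits well inside this region, so that flow-rate disturbances do not push the unit across a boundary where purity is lost.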

  13. Exploring critical pathways for urban water management to identify robust strategies under deep uncertainties.

    PubMed

    Urich, Christian; Rauch, Wolfgang

    2014-12-01

    Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections, which, if a projection goes unfulfilled, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions does not yet exist for urban water management. This paper presents a first step towards a new generation of such strategic planning tools, by combining innovative modelling tools, which co-evolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is spatially explicitly evolved 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper show that: (1) such an approach can successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, an important indicator of the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy.
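
The robustness notion used in such exploratory studies can be captured by a simple domain criterion: the fraction of scenarios in which a strategy meets its performance threshold, together with the scenarios in which it fails. A minimal sketch (function and variable names are hypothetical):

```python
def robustness(performances, threshold):
    # Fraction of future scenarios in which the strategy meets the
    # performance criterion; 1.0 means the strategy never fails.
    return sum(p >= threshold for p in performances) / len(performances)

def failure_scenarios(scenarios, performances, threshold):
    # The scenarios where the criterion is NOT met: these reveal the
    # parameter ranges of key drivers that make the strategy vulnerable.
    return [s for s, p in zip(scenarios, performances) if p < threshold]
```

Running this over the 1000 co-evolved futures would yield both a robustness score per strategy and the driver ranges behind each failure.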

  14. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
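
The buffer-fullness enhancement can be sketched as a simple controller: a full output buffer means decoding is running ahead of the playback deadline and the processor can slow down, while a draining buffer forces a speed-up. A hypothetical mapping (thresholds and frequencies are illustrative, not from the paper):

```python
def next_frequency(buffer_fill, f_min, f_max, low=0.25, high=0.75):
    # Map decoder output-buffer fullness (0..1) to a processor
    # frequency: full buffer -> minimum frequency (save power),
    # draining buffer -> maximum frequency (avoid underrun).
    if buffer_fill >= high:
        return f_min
    if buffer_fill <= low:
        return f_max
    # Linear interpolation between the two thresholds.
    t = (high - buffer_fill) / (high - low)
    return f_min + t * (f_max - f_min)
```

A real implementation would also quantize to the discrete voltage/frequency operating points the platform supports and add hysteresis to avoid oscillation.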

  15. Process Architecture for Managing Digital Object Identifiers

    NASA Astrophysics Data System (ADS)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first three years, ESDIS evolved the process with the data provider community, developing procedures for creating and assigning DOIs and guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014, and another recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually driven system to one that largely automates the DOI process. The new automated features include: a) reviewing the DOI metadata, b) assigning an opaque DOI name if the data provider chooses, and c) reserving, registering, and updating the DOIs. The flexibility of reserving a DOI allows data providers to embed and test the DOI in the data product metadata before formally registering it with EZID. The DOI update process allows any DOI metadata to be changed except the DOI name, which can only be changed if it has not yet been registered. Currently, ESDIS has processed a total of 557 DOIs, of which 379 are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page, which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page.
This poster will describe in detail the unique automated process and

  16. Robust Microarray Meta-Analysis Identifies Differentially Expressed Genes for Clinical Prediction

    PubMed Central

    Phan, John H.; Young, Andrew N.; Wang, May D.

    2012-01-01

    Combining multiple microarray datasets increases sample size and leads to improved reproducibility in identification of informative genes and subsequent clinical prediction. Although microarrays have increased the rate of genomic data collection, sample size is still a major issue when identifying informative genetic biomarkers. Because of this, feature selection methods often suffer from false discoveries, resulting in poorly performing predictive models. We develop a simple meta-analysis-based feature selection method that captures the knowledge in each individual dataset and combines the results using a simple rank average. In a comprehensive study that measures robustness in terms of clinical application (i.e., breast, renal, and pancreatic cancer), microarray platform heterogeneity, and classifier (i.e., logistic regression, diagonal LDA, and linear SVM), we compare the rank average meta-analysis method to five other meta-analysis methods. Results indicate that rank average meta-analysis consistently performs well compared to five other meta-analysis methods. PMID:23365541
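
The rank average itself is straightforward: compute each gene's rank within every dataset (rank 1 = strongest evidence) and average the ranks across datasets. A minimal sketch with hypothetical inputs, not the paper's exact pipeline:

```python
import numpy as np

def rank_average(score_lists):
    # score_lists: one 1-D array per dataset, where a higher score
    # means stronger evidence of differential expression.
    # Within each dataset, rank 1 = highest score; then average the
    # per-dataset ranks so every dataset contributes equally.
    ranks = [len(s) - np.argsort(np.argsort(s)) for s in score_lists]
    return np.mean(ranks, axis=0)
```

Genes with the smallest average rank are selected as features; because only ranks cross dataset boundaries, platform-specific score scales never need to be reconciled.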

  17. A robust sinusoidal signal processing method for interferometers

    NASA Astrophysics Data System (ADS)

    Wu, Xiang-long; Zhang, Hui; Tseng, Yang-Yu; Fan, Kuang-Chao

    2013-10-01

    Laser interferometers are widely used as a reference for length measurement. Reliable bidirectional optical fringe counting is normally obtained by using two orthogonal sinusoidal signals derived from the two outputs of an interferometer with a path difference. These signals are subject to disturbance by the geometrical errors of the moving target, which cause the separation and shift of the two interfering light spots on the detector. This results in typical Heydemann errors, including DC drift, amplitude variation, and out-of-orthogonality of the two sinusoidal signals, which seriously reduce the accuracy of fringe counting. This paper presents a robust sinusoidal signal processing method that corrects the distorted waveforms in hardware. A corresponding circuit board has been designed. A linear stage equipped with a laser displacement interferometer and a height gauge equipped with a linear grating interferometer are used as the test beds. Experimental results show that, even with a seriously disturbed input waveform, the output Lissajous circle can always be stabilized after signal correction. This robust method increases the stability and reliability of the sinusoidal signals for the data acquisition device to handle pulse counting and phase subdivision.
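
The software analogue of this hardware correction is the classical Heydemann transformation: given estimated DC offsets, amplitude mismatch, and the non-orthogonality angle, the distorted Lissajous ellipse is mapped back onto a circle. A sketch with hypothetical parameter names (the parameter estimation step, typically an ellipse fit, is omitted):

```python
import numpy as np

def heydemann_correct(u, v, p, q, r, G, alpha):
    # Correct DC offsets (p, q), amplitude-mismatch ratio G, and
    # non-orthogonality angle alpha (rad) of two quadrature fringe
    # signals, so that (x, y) lies on the unit circle and
    # atan2(y, x) gives the interferometric phase.
    x = (u - p) / r
    y = (G * (v - q) / r + x * np.sin(alpha)) / np.cos(alpha)
    return x, y
```

With the corrected pair on a circle, phase subdivision reduces to an arctangent, which is why stabilizing the Lissajous figure directly improves counting accuracy.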

  18. Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models

    NASA Astrophysics Data System (ADS)

    Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.

    2016-06-01

    Regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on combining the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability provided by different RCMs, the aims of this work are to evaluate the performance of two novel approaches, based on the reliability ensemble averaging (REA) method, for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) and consider the variability of different levels of information: on the one hand, annual and seasonal rainfall, and on the other hand, monthly rainfall. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated. The plausible future rainfall scenario for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved, thus decreasing the uncertainties involved in driving hydrological impact models and therefore reducing the cumulative errors in the modeling chain.
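
In REA-style ensemble building, each RCM is weighted by two reliability criteria: performance (small bias against observations) and convergence (small distance from the ensemble consensus). A minimal sketch of that weighting, not the authors' PDF-based variants; names and the epsilon guard are illustrative:

```python
import numpy as np

def rea_weights(bias, spread, eps=1e-9):
    # Reliability factor per model: inverse of |bias vs observations|
    # (performance criterion) times |distance from ensemble consensus|
    # (convergence criterion); normalized so weights sum to 1.
    r = 1.0 / (np.abs(bias) * np.abs(spread) + eps)
    return r / r.sum()

def ensemble_mean(projections, weights):
    # Reliability-weighted ensemble projection.
    return float(np.sum(weights * projections))
```

A model that is both accurate against observations and close to the consensus dominates the weighted mean, which is what pulls the ensemble toward the more reliable projections.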

  19. Application of NMR Methods to Identify Detection Reagents for Use in the Development of Robust Nanosensors

    SciTech Connect

    Cosman, M; Krishnan, V V; Balhorn, R

    2004-04-29

    Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins, and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low-affinity binders are identified, they can be linked together to produce multidentate synthetic high-affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site, or a different site, on the surface of TeNT fragment C (TetC) than a known "marker" ligand, doxorubicin. Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

  20. Product and Process Improvement Using Mixture-Process Variable Designs and Robust Optimization Techniques

    SciTech Connect

    Sahni, Narinder S.; Piepel, Gregory F.; Naes, Tormod

    2009-04-01

    The quality of an industrial product depends on the raw material proportions and the process variable levels, both of which need to be taken into account in designing a product. This article presents a case study from the food industry in which both kinds of variables were studied by combining a constrained mixture experiment design and a central composite process variable design. Based on the natural structure of the situation, a split-plot experiment was designed and models involving the raw material proportions and process variable levels (separately and combined) were fitted. Combined models were used to study: (i) the robustness of the process to variations in raw material proportions, and (ii) the robustness of the raw material recipes with respect to fluctuations in the process variable levels. Further, the expected variability in the robust settings was studied using the bootstrap.

  1. Stretching the limits of forming processes by robust optimization: A demonstrator

    SciTech Connect

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    2013-12-16

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, the effects of both material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing-quality forming steel were collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel-based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating increased process robustness and a decreased number of product rejects by application of the robust optimization approach.

  2. Robust syntaxin-4 immunoreactivity in mammalian horizontal cell processes

    PubMed Central

    HIRANO, ARLENE A.; BRANDSTÄTTER, JOHANN HELMUT; VILA, ALEJANDRO; BRECHA, NICHOLAS C.

    2009-01-01

    Horizontal cells mediate inhibitory feed-forward and feedback communication in the outer retina; however, mechanisms that underlie transmitter release from mammalian horizontal cells are poorly understood. Toward determining whether the molecular machinery for exocytosis is present in horizontal cells, we investigated the localization of syntaxin-4, a SNARE protein involved in targeting vesicles to the plasma membrane, in mouse, rat, and rabbit retinae using immunocytochemistry. We report robust expression of syntaxin-4 in the outer plexiform layer of all three species. Syntaxin-4 occurred in processes and tips of horizontal cells, with regularly spaced, thicker sandwich-like structures along the processes. Double labeling with syntaxin-4 and calbindin antibodies, a horizontal cell marker, demonstrated syntaxin-4 localization to horizontal cell processes; whereas, double labeling with PKC antibodies, a rod bipolar cell (RBC) marker, showed a lack of co-localization, with syntaxin-4 immunolabeling occurring just distal to RBC dendritic tips. Syntaxin-4 immunolabeling occurred within VGLUT-1-immunoreactive photoreceptor terminals and underneath synaptic ribbons, labeled by CtBP2/RIBEYE antibodies, consistent with localization in invaginating horizontal cell tips at photoreceptor triad synapses. Vertical sections of retina immunostained for syntaxin-4 and peanut agglutinin (PNA) established that the prominent patches of syntaxin-4 immunoreactivity were adjacent to the base of cone pedicles. Horizontal sections through the OPL indicate a one-to-one co-localization of syntaxin-4 densities at likely all cone pedicles, with syntaxin-4 immunoreactivity interdigitating with PNA labeling. Pre-embedding immuno-electron microscopy confirmed the subcellular localization of syntaxin-4 labeling to lateral elements at both rod and cone triad synapses. 
Finally, co-localization with SNAP-25, a possible binding partner of syntaxin-4, indicated co-expression of these SNARE proteins in

  3. Decisional tool to assess current and future process robustness in an antibody purification facility.

    PubMed

    Stonier, Adam; Simaria, Ana Sofia; Smith, Martin; Farid, Suzanne S

    2012-07-01

    Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database-driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete-event simulation engine so as to model the dynamic features of biopharmaceutical manufacture and impact of resource constraints. For a given titer, the tool performed brute force optimization so as to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch-to-batch titer variability and extended this to assess the long-term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie-lines connecting the position of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations and the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load. PMID:22641562
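The brute-force sizing search described above can be caricatured in a few lines. Everything below is a hypothetical sketch: the cost and capacity models are invented and far simpler than the tool's database-driven process, business, and risk models.

```python
from itertools import product

# Hypothetical sketch of brute-force purification sizing: enumerate column
# diameter / cycle-count combinations, keep those with enough binding capacity
# for the harvest and few enough cycles to hold the schedule, take the cheapest.

def batch_cost(diameter_cm, cycles):
    return 0.05 * diameter_cm ** 2 * cycles + 2.0 * cycles  # resin + labour

def optimal_sizing(harvest_g, diameters=(30, 45, 60), max_cycles=4,
                   capacity_g_per_cm2=1.0):
    feasible = [
        (batch_cost(d, c), d, c)
        for d, c in product(diameters, range(1, max_cycles + 1))
        if d ** 2 * capacity_g_per_cm2 * c >= harvest_g  # capacity constraint
    ]
    return min(feasible)  # (cost, diameter_cm, cycles)

# Titer rising from 2 to 10 g/L at a 1000 L harvest volume:
low = optimal_sizing(harvest_g=2.0 * 1000)
high = optimal_sizing(harvest_g=10.0 * 1000)
```

Raising the harvest load pushes the optimum toward a larger column run over more cycles, mirroring the titer scenarios whose optima the article traces along Pareto frontiers.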

  4. Decisional tool to assess current and future process robustness in an antibody purification facility.

    PubMed

    Stonier, Adam; Simaria, Ana Sofia; Smith, Martin; Farid, Suzanne S

    2012-07-01

    Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database-driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete-event simulation engine so as to model the dynamic features of biopharmaceutical manufacture and impact of resource constraints. For a given titer, the tool performed brute force optimization so as to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch-to-batch titer variability and extended this to assess the long-term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie-lines connecting the position of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations and the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load.

  5. Adaptive Position/Attitude Tracking Control of Aerial Robot With Unknown Inertial Matrix Based on a New Robust Neural Identifier.

    PubMed

    Lai, Guanyu; Liu, Zhi; Zhang, Yun; Chen, C L Philip

    2016-01-01

    This paper presents a novel adaptive controller for controlling an autonomous helicopter with an unknown inertial matrix to asymptotically track the desired trajectory. To identify the unknown inertial matrix included in the attitude dynamic model, this paper proposes a new structural identifier that differs from those previously proposed in that it additionally contains a neural network (NN) mechanism and a robust adaptive mechanism. Using the NNs to compensate for the unknown aerodynamic forces online and the robust adaptive mechanism to cancel the combination of the overlarge NN compensation error and the external disturbances, the new robust neural identifier exhibits better identification performance in a complex flight environment. Moreover, an optimized algorithm is included in the NN mechanism to alleviate the burdensome online computation. By a strict Lyapunov argument, the asymptotic convergence of the inertial matrix identification error, position tracking error, and attitude tracking error to an arbitrarily small neighborhood of the origin is proved. Simulation and implementation results are provided to evaluate the performance of the proposed controller. PMID:25794402

  6. Magnetoencephalographic Signals Identify Stages in Real-Life Decision Processes

    PubMed Central

    Braeutigam, Sven; Stins, John F.; Rose, Steven P. R.; Swithenby, Stephen J.; Ambler, Tim

    2001-01-01

    We used magnetoencephalography (MEG) to study the dynamics of neural responses in eight subjects engaged in shopping for day-to-day items from supermarket shelves. This behavior not only has personal and economic importance but also provides an example of an experience that is both personal and shared between individuals. The shopping experience enables the exploration of neural mechanisms underlying choice based on complex memories. Choosing among different brands of closely related products activated a robust sequence of signals within the first second after the presentation of the choice images. This sequence engaged first the visual cortex (80-100 ms), then as the images were analyzed, predominantly the left temporal regions (310-340 ms). At longer latency, characteristic neural activation was found in motor speech areas (500-520 ms) for images requiring low salience choices with respect to previous (brand) memory, and in right parietal cortex for high salience choices (850-920 ms). We argue that the neural processes associated with the particular brand-choice stimulus can be separated into identifiable stages through observation of MEG responses and knowledge of functional anatomy. PMID:12018772

  7. Commonsense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    2005-01-01

    This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in…

  8. Advanced process monitoring and feedback control to enhance cell culture process production and robustness.

    PubMed

    Zhang, An; Tsang, Valerie Liu; Moore, Brandon; Shen, Vivian; Huang, Yao-Ming; Kshirsagar, Rashmi; Ryll, Thomas

    2015-12-01

    It is a common practice in biotherapeutic manufacturing to define a fixed-volume feed strategy for nutrient feeds, based on historical cell demand. However, once the feed volumes are defined, they are inflexible to batch-to-batch variations in cell growth and physiology and can lead to inconsistent productivity and product quality. In an effort to control critical quality attributes and to apply process analytical technology (PAT), a fully automated cell culture feedback control system has been explored in three different applications. The first study illustrates that frequent monitoring and automatically controlling the complex feed based on a surrogate (glutamate) level improved protein production. More importantly, the resulting feed strategy was translated into a manufacturing-friendly manual feed strategy without impact on product quality. The second study demonstrates the improved process robustness of an automated feed strategy based on online bio-capacitance measurements for cell growth. In the third study, glucose and lactate concentrations were measured online and were used to automatically control the glucose feed, which in turn changed lactate metabolism. These studies suggest that the auto-feedback control system has the potential to significantly increase productivity and improve robustness in manufacturing, with the goal of ensuring process performance and product quality consistency.
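A toy sketch of the feedback idea in the third study: an online measurement drives each feed bolus so the culture returns to a setpoint, instead of delivering a fixed volume. The consumption model, gain, and pump limit below are invented for illustration.

```python
# Toy glucose feedback sketch: feed proportional to the measured deficit,
# clamped to a pump limit. The process model and constants are invented.

def feed_volume(measured_g_l, setpoint_g_l, gain_l_per_g=0.5, max_feed_l=2.0):
    """Feed proportional to the glucose deficit, clamped to the pump limit."""
    deficit = max(0.0, setpoint_g_l - measured_g_l)
    return min(max_feed_l, gain_l_per_g * deficit)

def simulate(days, consumption_g_l=1.5, setpoint_g_l=4.0, feed_strength=2.0):
    glucose, history = setpoint_g_l, []
    for _ in range(days):
        glucose -= consumption_g_l                        # cells consume glucose
        glucose += feed_strength * feed_volume(glucose, setpoint_g_l)
        history.append(glucose)
    return history
```

Because the feed responds to the measurement rather than a historical schedule, batch-to-batch variation in consumption changes the feed delivered, not the glucose level the cells see, which is the robustness argument the abstract makes.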

  9. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
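The core of the robust formulation can be caricatured without any metamodel at all: score each candidate design by mean plus three standard deviations of the response over sampled noise, and compare the robust optimum with the deterministic one. The response function and noise level below are invented; in the paper the response comes from FE simulations through a sequentially refined metamodel.

```python
import random
import statistics

# Caricature of robust optimization: a toy scalar response stands in for the
# FE model, and each design x is scored by mean + 3*sigma over noise samples.

def response(x, noise):
    return (x - 2.0) ** 2 + noise * x        # invented forming response

def robust_objective(x, noise_samples):
    vals = [response(x, n) for n in noise_samples]
    return statistics.mean(vals) + 3.0 * statistics.pstdev(vals)

grid = [i / 10 for i in range(41)]           # candidate designs on [0, 4]
random.seed(0)
noise = [random.gauss(0.0, 0.1) for _ in range(200)]

x_det = min(grid, key=lambda x: response(x, 0.0))            # deterministic
x_rob = min(grid, key=lambda x: robust_objective(x, noise))  # robust
```

The robust optimum backs away from the deterministic one because noise sensitivity grows with x; the sequential metamodel strategy in the paper exists to reach the same kind of answer with far fewer expensive FE evaluations.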

  10. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  11. Probability fold change: a robust computational approach for identifying differentially expressed gene lists.

    PubMed

    Deng, Xutao; Xu, Jun; Hui, James; Wang, Charles

    2009-02-01

    Identifying genes that are differentially expressed under different experimental conditions is a fundamental task in microarray studies. However, different ranking methods generate very different gene lists, and this could profoundly impact follow-up analyses and biological interpretation. Therefore, developing improved ranking methods is critical in microarray data analysis. We developed a new algorithm, the probabilistic fold change (PFC), which ranks genes based on a confidence interval estimate of fold change. We performed extensive testing using multiple benchmark data sources including the MicroArray Quality Control (MAQC) data sets. We corroborated our observations with MAQC data sets using qRT-PCR data sets and Latin square spike-in data sets. Along with PFC, we tested six other popular ranking algorithms including Mean Fold Change (FC), SAM, t-statistic (T), Bayesian-t (BAYT), Intensity-Conditional Fold Change (CFC), and Rank Product (RP). PFC achieved reproducibility and accuracy that are consistently among the best of the seven ranking algorithms, whereas the other ranking algorithms showed weaknesses in some cases. Contrary to common belief, our results demonstrated that statistical accuracy does not necessarily translate to biological reproducibility, and therefore both quality aspects need to be evaluated.
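The idea of ranking by a confidence interval on fold change, rather than the raw mean fold change, can be sketched as follows. This is only illustrative: the z-interval, data, and gene names are invented, and the published PFC algorithm is more elaborate.

```python
import math
import statistics

# Rank genes by the fold-change interval bound nearest zero: the effect size
# we are confident about in either direction (zero if the interval straddles 0).
# Sketch only; data and the 95% z-interval are invented.

def fc_confidence_score(log_ratios, z=1.96):
    m = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(len(log_ratios))
    lo, hi = m - z * se, m + z * se
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

genes = {
    "noisy_big_fc": [3.0, -1.0, 2.5, -0.5],   # large mean, huge variance
    "stable_small": [0.8, 0.9, 1.0, 0.85],    # modest but consistent
}
ranked = sorted(genes, key=lambda g: fc_confidence_score(genes[g]), reverse=True)
```

A plain mean-fold-change ranking would put the noisy gene first; the confidence-bound ranking demotes it because its interval straddles zero, which is the reproducibility behavior the abstract attributes to PFC.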

  12. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set.
It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use
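The map step of such tiling can be mimicked in pure Python. This is a stand-in sketch only: a small 2-D list plays the role of gigabyte-scale imagery, and the real LMMP pipeline runs this as Hadoop MapReduce jobs.

```python
# Pure-Python mimic of the tiling map step: cut a raster into fixed-size
# tiles keyed by their grid position, the natural (key, value) map output.

def tile(image, tile_size):
    """Yield ((tile_row, tile_col), tile) pairs."""
    for r in range(0, len(image), tile_size):
        for c in range(0, len(image[0]), tile_size):
            yield (r // tile_size, c // tile_size), [
                row[c:c + tile_size] for row in image[r:r + tile_size]
            ]

image = [[r * 8 + c for c in range(8)] for r in range(8)]   # toy 8x8 raster
tiles = dict(tile(image, 4))
```

In the real pipeline each (key, tile) pair would be emitted to a reducer that encodes and stores the tile; here the dict plays that role. The speedup the paper reports comes from distributing exactly this embarrassingly parallel loop across the cluster.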

  13. Identifying robust communities and multi-community nodes by combining top-down and bottom-up approaches to clustering

    PubMed Central

    Gaiteri, Chris; Chen, Mingming; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Chaibub Neto, Elias; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-01-01

    Biological functions are carried out by groups of interacting molecules, cells or tissues, known as communities. Membership in these communities may overlap when biological components are involved in multiple functions. However, traditional clustering methods detect non-overlapping communities. These detected communities may also be unstable and difficult to replicate, because traditional methods are sensitive to noise and parameter settings. These aspects of traditional clustering methods limit our ability to detect biological communities, and therefore our ability to understand biological functions. To address these limitations and detect robust overlapping biological communities, we propose an unorthodox clustering method called SpeakEasy which identifies communities using top-down and bottom-up approaches simultaneously. Specifically, nodes join communities based on their local connections, as well as global information about the network structure. This method can quantify the stability of each community, automatically identify the number of communities, and quickly cluster networks with hundreds of thousands of nodes. SpeakEasy shows top performance on synthetic clustering benchmarks and accurately identifies meaningful biological communities in a range of datasets, including: gene microarrays, protein interactions, sorted cell populations, electrophysiology and fMRI brain imaging. PMID:26549511
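The bottom-up half of such clustering can be sketched as asynchronous label propagation on a toy graph of two triangles. SpeakEasy additionally corrects each neighbour-label count by its global frequency (the top-down part) and aggregates many replicate runs to score community stability and overlap; all of that is omitted from this bare-bones sketch.

```python
from collections import Counter

# Asynchronous majority label propagation: each node repeatedly adopts the
# most common label among its neighbours. Toy graph: two disjoint triangles.

def propagate(adj, labels, sweeps=3):
    labels = dict(labels)
    for _ in range(sweeps):
        for node, nbrs in adj.items():
            seen = Counter(labels[v] for v in nbrs)
            labels[node] = max(seen, key=seen.get)   # adopt majority label
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = propagate(adj, {v: v for v in adj})         # unique initial labels
```

Starting from unique labels, each triangle collapses onto a single shared label and the two triangles end up with different labels, i.e. the number of communities emerges from the dynamics rather than being specified in advance, as the abstract describes.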

  14. Identifying robust communities and multi-community nodes by combining top-down and bottom-up approaches to clustering.

    PubMed

    Gaiteri, Chris; Chen, Mingming; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Chaibub Neto, Elias; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-11-09

    Biological functions are carried out by groups of interacting molecules, cells or tissues, known as communities. Membership in these communities may overlap when biological components are involved in multiple functions. However, traditional clustering methods detect non-overlapping communities. These detected communities may also be unstable and difficult to replicate, because traditional methods are sensitive to noise and parameter settings. These aspects of traditional clustering methods limit our ability to detect biological communities, and therefore our ability to understand biological functions. To address these limitations and detect robust overlapping biological communities, we propose an unorthodox clustering method called SpeakEasy which identifies communities using top-down and bottom-up approaches simultaneously. Specifically, nodes join communities based on their local connections, as well as global information about the network structure. This method can quantify the stability of each community, automatically identify the number of communities, and quickly cluster networks with hundreds of thousands of nodes. SpeakEasy shows top performance on synthetic clustering benchmarks and accurately identifies meaningful biological communities in a range of datasets, including: gene microarrays, protein interactions, sorted cell populations, electrophysiology and fMRI brain imaging.

  15. Analytical Design of Robust Multi-loop PI Controller for Multi-time Delay Processes

    NASA Astrophysics Data System (ADS)

    Vu, Truong Nguyen Luan; Lee, Moonyong

    In this chapter, a robust design of multi-loop PI controllers for multivariable processes in the presence of multiplicative input uncertainty is presented. The method consists of two major steps: first, analytical tuning rules for the multi-loop PI controller are derived based on the direct synthesis and IMC-PID approach. In the second step, robust stability analysis is used to enhance the robustness of the proposed PI control systems. The most important feature of the proposed method is that the tradeoff between robust stability and performance can be established by adjusting only one design parameter (i.e., the closed-loop time constant) via structured singular value synthesis. To verify the superiority of the proposed method, simulation studies were conducted on a variety of nominal processes and their plant-model mismatch cases. The results demonstrate that the proposed design method guarantees robustness under simultaneous perturbations of the process parameters.
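The single-knob tradeoff can be illustrated with the standard IMC-based PI rules for first-order-plus-dead-time (FOPDT) loop models. This sketch ignores the loop interactions and structured-singular-value step that the chapter's multi-loop rules handle, and the 2x2 process parameters are invented.

```python
# Standard IMC-style PI tuning for a FOPDT model K*exp(-theta*s)/(tau*s + 1):
# one closed-loop time constant lam trades performance against robustness.

def imc_pi(K, tau, theta, lam):
    """Return (Kc, tau_I) for the FOPDT model."""
    Kc = tau / (K * (lam + theta))   # proportional gain
    tau_I = tau                      # integral time
    return Kc, tau_I

loops = {"y1-u1": (2.0, 10.0, 1.0), "y2-u2": (1.5, 8.0, 2.0)}  # (K, tau, theta)
fast = {name: imc_pi(*p, lam=2.0) for name, p in loops.items()}
robust = {name: imc_pi(*p, lam=6.0) for name, p in loops.items()}
```

Raising lam detunes every loop together (smaller gains, same integral times), which is the one-parameter robustness/performance adjustment the abstract describes.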

  16. Identifying Individual Excellence: The Dependable Strengths Articulation Process.

    ERIC Educational Resources Information Center

    Boivin-Brown, Allen; Haldane, Jean; Forster, Jerald

    This paper was written to describe the essential tasks of a process known as Dependable Strengths Articulation (DSA) and how career development practitioners can acquire the skills to use the process. DSA, when combined with practices known as Job Magnet, helps participants identify their individual excellence and then use this knowledge to…

  17. Phosphoproteomic profiling of tumor tissues identifies HSP27 Ser82 phosphorylation as a robust marker of early ischemia

    PubMed Central

    Zahari, Muhammad Saddiq; Wu, Xinyan; Pinto, Sneha M.; Nirujogi, Raja Sekhar; Kim, Min-Sik; Fetics, Barry; Philip, Mathew; Barnes, Sheri R.; Godfrey, Beverly; Gabrielson, Edward; Nevo, Erez; Pandey, Akhilesh

    2015-01-01

    Delays between tissue collection and tissue fixation result in ischemia and ischemia-associated changes in protein phosphorylation levels, which can misguide the examination of signaling pathway status. To identify a biomarker that serves as a reliable indicator of ischemic changes that tumor tissues undergo, we subjected harvested xenograft tumors to room temperature for 0, 2, 10 and 30 minutes before freezing in liquid nitrogen. Multiplex TMT-labeling was conducted to achieve precise quantitation, followed by TiO2 phosphopeptide enrichment and high resolution mass spectrometry profiling. LC-MS/MS analyses revealed phosphorylation level changes of a number of phosphosites in the ischemic samples. The phosphorylation of one of these sites, S82 of the heat shock protein 27 kDa (HSP27), was especially abundant and consistently upregulated in tissues with delays in freezing as short as 2 minutes. In order to eliminate effects of ischemia, we employed a novel cryogenic biopsy device which begins freezing tissues in situ before they are excised. Using this device, we showed that the upregulation of phosphorylation of S82 on HSP27 was abrogated. We thus demonstrate that our cryogenic biopsy device can eliminate ischemia-induced phosphoproteome alterations, and measurements of S82 on HSP27 can be used as a robust marker of ischemia in tissues. PMID:26329039

  18. The Robustness of Pathway Analysis in Identifying Potential Drug Targets in Non-Small Cell Lung Carcinoma

    PubMed Central

    Dalby, Andrew; Bailey, Ian

    2014-01-01

    The identification of genes responsible for causing cancers from gene expression data has had varied success. Often the genes identified depend on the methods used for detecting expression patterns, or on the ways that the data had been normalized and filtered. The use of gene set enrichment analysis is one way to introduce biological information in order to improve the detection of differentially expressed genes and pathways. In this paper we show that the use of network models, while still subject to the problems of normalization, is a more robust method for detecting pathways that are differentially overrepresented in lung cancer data. Such differences may provide opportunities for novel therapeutics. In addition, we present evidence that non-small cell lung carcinoma is not a series of homogeneous diseases; rather, there is heterogeneity within the genotype that defies phenotype classification. This diversity helps to explain the lack of progress in developing therapies against non-small cell carcinoma and suggests that drug development may need to consider multiple pathways as treatment targets.

  19. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), so proper design methods to reduce time and costs have to be developed, mostly based on computer-aided procedures. Optimization methods have also been widely applied in sheet metal forming. At the same time, variations during manufacturing may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software (LS_DYNA) is used to simulate the complex sheet metal stamping processes. The Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of the proposed method for multi-response robust design.
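The metamodel role can be illustrated with a minimal 1-D Kriging (Gaussian-process) interpolator. The kernel, length scale, nugget, and training function below are invented stand-ins for LS-DYNA responses, and the paper's model additionally carries prediction uncertainty used by the adaptive importance sampling, which this sketch omits.

```python
import math

# Minimal 1-D Kriging interpolator: fit weights alpha = K^-1 y on training
# designs, predict by kernel-weighted sums. All constants are invented.

def rbf(a, b, length=0.5):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kriging_fit(xs, ys, nugget=1e-6):
    K = [[rbf(a, b) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return lambda q: sum(rbf(q, a) * w for a, w in zip(xs, alpha))

f = lambda x: math.sin(3 * x)            # stand-in for an FE simulation
xs = [i * 0.25 for i in range(9)]        # 9 training designs on [0, 2]
model = kriging_fit(xs, [f(x) for x in xs])
```

Once fitted, the metamodel is cheap to evaluate thousands of times, which is what makes sampling-based robustness measures affordable compared with calling the FE solver directly.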

  20. A robust method for processing scanning probe microscopy images and determining nanoobject position and dimensions.

    PubMed

    Silly, F

    2009-12-01

    Processing of scanning probe microscopy (SPM) images is essential to explore nanoscale phenomena. Image processing and pattern recognition techniques are developed to improve the accuracy and consistency of nanoobject and surface characterization. We present a robust and versatile method to process SPM images and reproducibly estimate nanoobject position and dimensions. The method uses dedicated fits based on the least-squares method and matrix operations. The corresponding algorithms have been implemented in the FabViewer portable application. We illustrate how these algorithms make it possible not only to correct SPM images but also to precisely determine the position and dimensions of nanocrystals and adatoms on a surface. A robustness test is successfully performed using distorted SPM images. PMID:19941561

  1. Optical wafer metrology sensors for process-robust CD and overlay control in semiconductor device manufacturing

    NASA Astrophysics Data System (ADS)

    den Boef, Arie J.

    2016-06-01

    This paper presents three optical wafer metrology sensors that are used in lithography for robustly measuring the shape and position of wafers and device patterns on these wafers. The first two sensors are a level sensor and an alignment sensor that measure, respectively, a wafer height map and a wafer position before a new pattern is printed on the wafer. The third sensor is an optical scatterometer that measures critical-dimension variations and overlay after the resist has been exposed and developed. These sensors have different optical concepts, but they share the same challenge: sub-nm precision is required at high throughput on a large variety of processed wafers and in the presence of unknown wafer processing variations. It is the purpose of this paper to explain these challenges in more detail and give an overview of the various solutions that have been introduced over the years to achieve process-robust optical wafer metrology.

  2. Robust control chart for change point detection of process variance in the presence of disturbances

    NASA Astrophysics Data System (ADS)

    Huat, Ng Kooi; Midi, Habshah

    2015-02-01

    A conventional control chart for detecting shifts in process variance is typically developed assuming that the nominal value of the variance is unknown and that the underlying distribution of the quality characteristic is normal. However, this is not always the case, and the statistical estimates used for these charts are very sensitive to occasional outliers. Robust control charts are therefore put forward when the underlying normality assumption is not met, serving as a remedial measure to the problem of contamination in process data. The existing approach, the Biweight A pooled-residuals method, is resistant to localized disturbances but lacks efficiency when disturbances are diffuse. To be concrete, diffuse disturbances are equally likely to perturb any observation, whereas a localized disturbance affects every member of a certain subsample or subsamples. The efficiency of estimators in the presence of disturbances can therefore depend heavily on whether the disturbances are distributed throughout the observations or concentrated in a few subsamples. Hence, in this paper we propose a new robust MBAS control chart based on a subsample-based robust Modified Biweight A scale estimator of the process standard deviation. It has strong resistance to both localized and diffuse disturbances, as well as high efficiency when no disturbances are present. The performance of the proposed robust chart was evaluated against several decision criteria through a Monte Carlo simulation study.
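The family of estimators such charts build on can be sketched with a biweight scale estimate. The constants below follow the common biweight midvariance definition; the paper's modified, subsample-based estimator differs in detail, and the sample values are invented.

```python
import statistics

# Biweight scale sketch: observations far from the median (|u| >= 1) get zero
# weight, so a gross outlier cannot inflate the scale estimate.

def biweight_scale(x, c=9.0):
    m = statistics.median(x)
    mad = statistics.median(abs(v - m) for v in x)
    if mad == 0:
        return 0.0
    u = [(v - m) / (c * mad) for v in x]
    num = sum((v - m) ** 2 * (1 - ui ** 2) ** 4
              for v, ui in zip(x, u) if abs(ui) < 1)
    den = sum((1 - ui ** 2) * (1 - 5 * ui ** 2) for ui in u if abs(ui) < 1)
    return (len(x) * num) ** 0.5 / abs(den)

clean = [10.1, 9.8, 10.0, 10.2, 9.9, 10.05, 9.95, 10.1]
dirty = clean[:-1] + [15.0]          # one gross outlier in the subsample
```

Under the single outlier the classical standard deviation inflates by an order of magnitude while the biweight estimate barely moves, which is why robust scale estimates give usable control limits from contaminated subsamples.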

  3. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    PubMed

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the identification of robust multi-omic molecular markers is critical for clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors while taking into account robust differential expression, cutoffs, p-values, and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that accounts for any non-normality in the data, and by integration into multi-omic functional networks. We analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates how existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
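
The cutoff-plus-resampling idea can be sketched in a few lines. Everything below is a hypothetical simplification: `best_cutoff`, the mean-survival "gap" statistic, and the toy data are invented for illustration, whereas RSA itself works with proper survival statistics on censored patient data. The sketch scans candidate expression cutoffs, picks the one maximizing group separation, and then assesses that optimized statistic by permutation, so no normality assumption is needed.

```python
import random

def best_cutoff(expr, surv, n_perm=500, seed=0):
    """Pick the expression cutoff with the largest between-group gap in
    mean survival, then compute a permutation p-value for it. The
    permutation re-maximizes over all cutoffs, so the p-value accounts
    for the cutoff optimization."""
    def gap(e, s, cut):
        hi = [t for x, t in zip(e, s) if x > cut]
        lo = [t for x, t in zip(e, s) if x <= cut]
        if not hi or not lo:
            return 0.0
        return abs(sum(hi) / len(hi) - sum(lo) / len(lo))
    cuts = sorted(set(expr))[:-1]          # drop max so 'high' is never empty
    cut = max(cuts, key=lambda c: gap(expr, surv, c))
    obs = gap(expr, surv, cut)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        p = surv[:]
        rng.shuffle(p)                     # break any expr/survival link
        if max(gap(expr, p, c) for c in cuts) >= obs:
            hits += 1
    return cut, obs, (hits + 1) / (n_perm + 1)
```

With expression values that cleanly split patients into short- and long-survival groups, the recovered cutoff sits at the boundary and the permutation p-value is small.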

  4. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivates the investigation of information processing paradigms. One such paradigm is concept based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  5. Inference of Longevity-Related Genes from a Robust Coexpression Network of Seed Maturation Identifies Regulators Linking Seed Storability to Biotic Defense-Related Pathways.

    PubMed

    Righetti, Karima; Vu, Joseph Ly; Pelletier, Sandra; Vu, Benoit Ly; Glaab, Enrico; Lalanne, David; Pasha, Asher; Patel, Rohan V; Provart, Nicholas J; Verdier, Jerome; Leprince, Olivier; Buitink, Julia

    2015-10-01

    Seed longevity, the maintenance of viability during storage, is a crucial factor for preservation of genetic resources and ensuring proper seedling establishment and high crop yield. We used a systems biology approach to identify key genes regulating the acquisition of longevity during seed maturation of Medicago truncatula. Using 104 transcriptomes from seed developmental time courses obtained in five growth environments, we generated a robust, stable coexpression network (MatNet), thereby capturing the conserved backbone of maturation. Using a trait-based gene significance measure, a coexpression module related to the acquisition of longevity was inferred from MatNet. Comparative analysis of the maturation processes in M. truncatula and Arabidopsis thaliana seeds and mining Arabidopsis interaction databases revealed conserved connectivity for 87% of longevity module nodes between both species. Arabidopsis mutant screening for longevity and maturation phenotypes demonstrated high predictive power of the longevity cross-species network. Overrepresentation analysis of the network nodes indicated biological functions related to defense, light, and auxin. Characterization of defense-related wrky3 and nf-x1-like1 (nfxl1) transcription factor mutants demonstrated that these genes regulate some of the network nodes and exhibit impaired acquisition of longevity during maturation. These data suggest that seed longevity evolved by co-opting existing genetic pathways regulating the activation of defense against pathogens. PMID:26410298

  7. A novel predictive control algorithm and robust stability criteria for integrating processes.

    PubMed

    Zhang, Bin; Yang, Weimin; Zong, Hongyuan; Wu, Zhiyong; Zhang, Weidong

    2011-07-01

    This paper introduces a novel predictive controller for single-input/single-output (SISO) integrating systems that can be applied directly, without pre-stabilizing the process. The control algorithm is designed on the basis of the tested step response model. To produce a bounded system response along the finite prediction horizon, the effect of the integrating mode must be zeroed while unmeasured disturbances exist. Here, a novel predictive feedback error compensation method is proposed to eliminate the permanent offset between the setpoint and the process output when the integrating system is affected by a load disturbance. A rotator factor is also introduced into the performance index, which contributes to improving the robustness of the closed-loop system. Then, on the basis of Jury's dominant coefficient criterion, a robust stability condition for the resulting closed-loop system is given. Only two controller parameters need to be tuned, and each has a clear physical meaning, which is convenient for implementation of the control algorithm. Lastly, simulations illustrate that the proposed algorithm provides excellent closed-loop performance compared with some reported methods. PMID:21353217
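
For readers unfamiliar with Jury-type tests, a minimal sketch of the classical Jury/Schur-Cohn coefficient reduction is shown below: it decides whether all roots of a discrete-time characteristic polynomial lie strictly inside the unit circle. Note this is the general stability test, not the specific dominant-coefficient condition derived in the paper.

```python
def jury_stable(coeffs):
    """Jury/Schur-Cohn reduction. coeffs = [a0, a1, ..., an], highest
    degree first. Returns True iff all roots of
    a0*z^n + a1*z^(n-1) + ... + an lie strictly inside the unit circle."""
    a = [c / coeffs[0] for c in coeffs]    # normalize so a0 = 1
    while len(a) > 1:
        # necessary condition at each stage: |an| < |a0|
        if abs(a[-1]) >= abs(a[0]):
            return False
        # reduce degree by one: b_i = a0*a_i - an*a_(n-i)
        a = [a[0] * a[i] - a[-1] * a[-(i + 1)] for i in range(len(a) - 1)]
    return True
```

For example, z^2 - 1.3z + 0.4 (roots 0.5 and 0.8) passes, while z^2 - 1.6z + 0.55 (roots 0.5 and 1.1) fails, matching the unit-circle criterion.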

  8. CORROSION PROCESS IN REINFORCED CONCRETE IDENTIFIED BY ACOUSTIC EMISSION

    NASA Astrophysics Data System (ADS)

    Kawasaki, Yuma; Kitaura, Misuzu; Tomoda, Yuichi; Ohtsu, Masayasu

    Deterioration of Reinforced Concrete (RC) due to salt attack is known as one of the most serious problems. Thus, the development of non-destructive evaluation (NDE) techniques is important to assess the corrosion process. Reinforcement in concrete normally does not corrode because of a passive film on the surface of the reinforcement. When the chloride concentration at the reinforcement exceeds the threshold level, the passive film is destroyed; maintenance is therefore desirable at an early stage. In this study, continuous acoustic emission (AE) monitoring is applied to identify the onset of corrosion and the nucleation of corrosion-induced cracking in concrete due to expansion of corrosion products. Accelerated corrosion and cyclic wet-and-dry tests are performed in a laboratory. The SiGMA (Simplified Green's functions for Moment tensor Analysis) procedure is applied to AE waveforms to clarify the source kinematics of micro-cracks: their locations, types, and orientations. Results show that the onset of corrosion and the nucleation of corrosion-induced cracking in concrete are successfully identified. Additionally, cross-sections of the reinforcement are observed by a scanning electron microscope (SEM). From these results, great promise for AE techniques to monitor salt damage at an early stage in RC structures is demonstrated.

  9. Robustness Tests in Determining the Earthquake Rupture Process: The June 23, 2001 Mw 8.4 Peru Earthquake

    NASA Astrophysics Data System (ADS)

    Das, S.; Robinson, D. P.

    2006-12-01

    The non-uniqueness of the problem of determining the rupture process details from analysis of body-wave seismograms was first discussed by Kostrov in 1974. We discuss how to use robustness tests together with inversion of synthetic data to identify the reliable properties of the rupture process obtained from inversion of broadband body wave data. We apply it to the great 2001 Peru earthquake. Twice in the last 200 years, a great earthquake in this region has been followed by a great earthquake in the immediately adjacent plate boundary to the south within about 10 years, indicating the potential for a major earthquake in this area in the near future. By inverting 19 pure SH-seismograms evenly distributed in azimuth around the fault, we find that the rupture was held up by a barrier and then overcame it, thereby producing the world's third largest earthquake since 1965, and we show that the stalling of the rupture in this earthquake is a robust feature. The rupture propagated for ~70 km, then skirted around a ~6000 km2 area of the fault and continued propagating for another ~200 km, returning to rupture this barrier after a ~30 second delay. The barrier has relatively low rupture speed, slip and aftershock density compared to its surroundings, and the time of the main energy release in the earthquake coincides with its rupture. We identify this barrier as a fracture zone on the subducting oceanic plate. Robinson, D. P., S. Das, A. B. Watts (2006), Earthquake rupture stalled by subducting fracture zone, Science, 312(5777), 1203-1205.

  10. High Fidelity Springback Simulation and Compensation with Robust Forming Process Design

    NASA Astrophysics Data System (ADS)

    Lee, Intaek; Carleer, B. D.; Haage, S.

    2011-08-01

    For an efficient virtual try-out loop, geometric changes and bending angles have been compensated for during the last 20 years. This approach was based on restrictions such as pure bending, a plane strain state, and isotropic behavior, yet it has been applied to more complex forming processes without reviewing these limitations. Analytical force consideration to reduce the amount of springback is another idea for efficiently compensating geometrical displacement. Springback prediction accuracy is also a major topic, with various material model developments. All of these topics are of high importance for increasing springback accuracy and effective compensation, and thereby for reducing tryout effort, but the focus should not be on these advanced issues alone, since the basics must be right as well. Based on our long-term experience in simulation and the outcomes of several projects covering 20 different parts during the last 2 years, we present our experience and investigations in geometrically compensating the forming process. We emphasize that we did not just simulate the springback: the compensated surfaces were brought into the real tools, so the experience is based not only on numerical analysis but also on a physical tryout performed for each of these parts. The experiences are consolidated into a set of principles of robust springback compensation. These principles are illustrated and explained with an example part, a B-pillar upper reinforcement. Compensation can indeed be a straightforward activity; however, our experience has shown that it is straightforward only if certain boundary conditions are fulfilled, and we discuss some of these boundary conditions using the B-pillar upper example. Through this study, basic requirements for successful springback compensation, the full scope of the simulation range, set-up of the nominal springback simulation and robustness of forming

  11. Formulation of an integrated robust design and tactics optimization process for undersea weapon systems

    NASA Astrophysics Data System (ADS)

    Frits, Andrew P.

    In the current Navy environment of undersea weapons development, the engineering aspect of design is decoupled from the development of the tactics with which the weapon is employed. Tactics are developed by intelligence experts, warfighters, and wargamers, while torpedo design is handled by engineers and contractors. This dissertation examines methods by which the conceptual design process of undersea weapon systems, including both torpedo systems and mine counter-measure systems, can be improved. It is shown that by simultaneously designing the torpedo and the tactics with which undersea weapons are used, a more effective overall weapon system can be created. In addition to integrating torpedo tactics with design, the thesis also looks at design methods to account for uncertainty. The uncertainty is attributable to multiple sources, including: lack of detailed analysis tools early in the design process, incomplete knowledge of the operational environments, and uncertainty in the performance of potential technologies. A robust design process is introduced to account for this uncertainty in the analysis and optimization of torpedo systems through the combination of Monte Carlo simulation with response surface methodology and metamodeling techniques. Additionally, various other methods that are appropriate to uncertainty analysis are discussed and analyzed. The thesis also advances a new approach towards examining robustness and risk: the treatment of probability of success (POS) as an independent variable. Examining the cost and performance tradeoffs between high and low probability of success designs, the decision-maker can make better informed decisions as to what designs are most promising and determine the optimal balance of risk, cost, and performance. Finally, the thesis examines the use of non-dimensionalization of parameters for torpedo design. The thesis shows that the use of non-dimensional torpedo parameters leads to increased knowledge about the
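
Treating probability of success (POS) as a quantity estimated per candidate design can be sketched with a plain Monte Carlo loop. The engagement model, parameter names, and distributions below are invented purely for illustration and bear no relation to real torpedo data; the dissertation's response surface and metamodeling layers are also omitted.

```python
import random

def prob_success(detect_range, speed, n=20000, seed=1):
    """Monte Carlo estimate of a design's probability of success (POS)
    under an uncertain engagement. Hypothetical model: success requires
    detecting the target (uncertain range) and closing on it with a
    5-knot margin (uncertain target speed)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        target_range = rng.uniform(2.0, 12.0)   # km, uncertain environment
        target_speed = rng.gauss(15.0, 4.0)     # kn, uncertain behavior
        detected = target_range <= detect_range
        catches = speed >= target_speed + 5.0   # closure margin
        hits += detected and catches
    return hits / n

# POS as a tradeoff variable: a slower but longer-ranged design can
# beat a faster, shorter-ranged one once uncertainty is accounted for.
pos_a = prob_success(detect_range=10.0, speed=25.0)
pos_b = prob_success(detect_range=6.0, speed=30.0)
```

Comparing the two candidate designs this way makes the risk/performance tradeoff in the abstract concrete: the decision-maker sees an estimated POS per design rather than a single nominal performance number.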

  12. Quantifying Community Assembly Processes and Identifying Features that Impose Them

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Chen, Xingyuan; Kennedy, David W.; Murray, Christopher J.; Rockhold, Mark L.; Konopka, Allan

    2013-06-06

    Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  13. Ultra-low-power and robust digital-signal-processing hardware for implantable neural interface microsystems.

    PubMed

    Narasimhan, S; Chiel, H J; Bhunia, S

    2011-04-01

    Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches. PMID:23851205

  15. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. An automatic capability for designing GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although the framework generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also sheds light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
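
A toy Boolean analogue of such robustness quantification: score a network topology by the fraction of single-gene perturbations from which the system returns to its original attractor. The three-gene repressor ring below is a standard oscillator example, and because the state space is tiny the score is computed exhaustively; this stands in for the paper's Monte Carlo estimate, which samples perturbations instead.

```python
def step(state):
    """Synchronous update of a 3-gene repressor ring A -| B -| C -| A:
    each gene is ON next step iff its repressor is OFF now."""
    a, b, c = state
    return (1 - c, 1 - a, 1 - b)

def attractor(state):
    """Iterate until a state repeats; return the attractor cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

def robustness(cycle):
    """Fraction of single-gene flips from cycle states that relax back
    to the same attractor (exhaustive for this 3-gene network)."""
    back = total = 0
    for s in cycle:
        for i in range(3):
            p = list(s)
            p[i] ^= 1                     # perturb one gene
            back += attractor(tuple(p)) == cycle
            total += 1
    return back / total
```

This ring has a 6-state oscillatory attractor plus a spurious 2-cycle; two thirds of single-gene perturbations return to the oscillation, giving a robustness score of 2/3, and the same scoring idea extends to comparing evolved topologies.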

  16. Low Power and Robust Domino Circuit with Process Variations Tolerance for High Speed Digital Signal Processing

    NASA Astrophysics Data System (ADS)

    Wang, Jinhui; Peng, Xiaohong; Li, Xinxin; Hou, Ligang; Wu, Wuchen

    Utilizing the sleep switch transistor technique and the dual threshold voltage technique, a source following evaluation gate (SEFG) based domino circuit is presented in this paper for simultaneously suppressing the leakage current and enhancing noise immunity. Simulation results show that the leakage current of the proposed design can be reduced by 43%, 62%, and 67%, while the noise margin is improved by 19.7%, 3.4%, and 12.5%, compared with the standard low threshold voltage circuit, the standard dual threshold voltage circuit, and the SEFG structure, respectively. Also, the dependence of the static-state leakage current on the combination of input and clock signals is analyzed, and the minimum-leakage states of different domino AND gates are obtained. Finally, the leakage power characteristic under process variations is discussed.

  17. Directed International Technological Change and Climate Policy: New Methods for Identifying Robust Policies Under Conditions of Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Molina-Perez, Edmundo

    It is widely recognized that international environmental technological change is key to reducing the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement reached at COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation, as it is now necessary to align large-scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options, and ultimately the presence of deep uncertainty about climate and technological change, climate financing institutions such as the GCF require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that applying new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003), to the study of international technological change and climate policy provides useful insights that can be used to design a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM), which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios. These scenarios combine five sources of uncertainty

  18. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

  20. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA, a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  1. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff among several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system. PMID:25330468
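
    The ITAE-type objective driving this tuning can be sketched with a forward-Euler simulation. The plant constants, gains, and horizon below are invented placeholders rather than the paper's servo model or ESO conditions:

```python
def itae_for_pi(kp, ki, t_end=10.0, dt=0.001):
    """Integral of Time multiplied by Absolute Error (ITAE) for a unit step,
    simulated by forward Euler. Plant: an integrator behind a first-order
    lag, a rough stand-in for a servo model (constants are made up)."""
    K, T = 1.0, 0.5                  # hypothetical plant gain and lag
    y = v = e_int = itae = 0.0
    t, r = 0.0, 1.0                  # time, step reference
    while t < t_end:
        e = r - y
        e_int += e * dt
        u = kp * e + ki * e_int      # PI control law
        v += (K * u - v) / T * dt    # first-order lag dynamics
        y += v * dt                  # integrator (position from velocity)
        itae += t * abs(e) * dt
        t += dt
    return itae
```

    A crude sweep over (kp, ki) minimizing this value then plays the role the adaptive GSA plays in the paper, without its constraint handling.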

  2. Processing and Properties of Fiber Reinforced Polymeric Matrix Composites. Part 2; Processing Robustness of IM7/PETI Polyimide Composites

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    1996-01-01

    The processability of a phenylethynyl terminated imide (PETI) resin matrix composite was investigated. Unidirectional prepregs were made by coating an N-methylpyrrolidone solution of the amide acid oligomer onto unsized IM7 carbon fibers. Two batches of prepregs were used: one was made by NASA in-house, and the other was from an industrial source. The composite processing robustness was investigated with respect to the effect of B-staging conditions, the prepreg shelf life, and the optimal processing window. Rheological measurements indicated that PETI's processability was only slightly affected over a wide range of B-staging temperatures (from 250 C to 300 C). The open hole compression (OHC) strength values were statistically indistinguishable among specimens consolidated using various B-staging conditions. Prepreg rheology and OHC strengths were also found not to be affected by prolonged (i.e., up to 60 days) ambient storage. An optimal processing window was established using response surface methodology. It was found that the IM7/PETI composite is more sensitive to the consolidation temperature than to the consolidation pressure. Good consolidation was achieved at 371 C/100 psi, which yielded an OHC strength of 62 ksi at room temperature. However, processability declined dramatically at temperatures below 350 C.

  3. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
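
    The first two steps of that sequence, linear relationships via correlation coefficients and monotonic relationships via rank correlation, can be sketched in a few lines (tie handling omitted for brevity):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient: detects linear relationships."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def spearman(x, y):
    """Rank correlation coefficient: detects monotonic relationships.
    No tie handling, so values within each sample must be distinct."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    return pearson(ranks(x), ranks(y))
```

    A cubic relation y = x^3 is monotonic but not linear, so its rank correlation is 1 while its Pearson coefficient falls short of it; disagreement between the two is exactly what step (ii) of the sequence is meant to catch.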

  4. Chatter Stability in Turning and Milling with in Process Identified Process Damping

    NASA Astrophysics Data System (ADS)

    Kurata, Yusuke; Merdol, S. Doruk; Altintas, Yusuf; Suzuki, Norikazu; Shamoto, Eiji

    Process damping in metal cutting is caused by the contact between the flank face of the cutting tool and the wavy surface finish, which is known to damp chatter vibrations. An analytical model with process damping has already been developed and verified in earlier research, in which the damping coefficient is considered to be proportional to the ratio of vibration and cutting velocities. This paper presents the in-process identification of the process damping force coefficient derived from cutting tests. Plunge turning is used to create a continuous reduction in cutting speed as the tool reduces the diameter of a cylindrical workpiece. When chatter stops at a critical cutting speed, the process damping coefficient is estimated by inverse solution of the stability law. It is shown that the stability lobes constructed with the identified process damping coefficient agree with experiments conducted in both turning and milling.
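
    The damping relation the abstract refers to can be written as a force proportional to the ratio of vibration velocity to cutting velocity; the one-line sketch below is only that relation, with C standing for the identified coefficient:

```python
def process_damping_force(C, vibration_velocity, cutting_velocity):
    """Process damping force opposing the vibration, taken proportional to
    the ratio of vibration velocity to cutting velocity. Lower cutting
    speed therefore means stronger damping, which is why the plunge test
    sweeps the speed downward until chatter stops."""
    return -C * vibration_velocity / cutting_velocity
```
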

  5. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes

    SciTech Connect

    Stockel, Jana; Welsh, Eric A.; Liberton, Michelle L.; Kunnavakkam, Rangesh V.; Aurora, Rajeev; Pakrasi, Himadri B.

    2008-04-22

    Cyanobacteria are oxygenic photosynthetic organisms, and the only prokaryotes known to have a circadian cycle. Unicellular diazotrophic cyanobacteria such as Cyanothece 51142 can fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. Thus, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of two processes: photosynthesis during the day, and nitrogen fixation at night. While previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5000 genes in Cyanothece 51142 during two consecutive diurnal periods. We found that ~30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes in numerous individual pathways, such as glycolysis, pentose phosphate pathway and glycogen metabolism, were co-regulated and maximally expressed at distinct phases during the diurnal cycle. Our analyses suggest that the demands of nitrogen fixation greatly influence major metabolic activities inside Cyanothece cells and thus drive various cellular activities. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism in a photosynthetic bacterium.
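
    One simple way to score whether a transcript oscillates with the diurnal period is to project the time series onto a 24 h sinusoid, a crude harmonic-regression stand-in rather than the authors' actual analysis pipeline (sampling interval and period below are illustrative):

```python
import math

def diurnal_score(expr, period=24.0, dt=4.0):
    """Amplitude of the 24 h harmonic in an expression time series sampled
    every dt hours. A large amplitude suggests robust diurnal oscillation."""
    n = len(expr)
    t = [i * dt for i in range(n)]
    mean = sum(expr) / n
    c = sum((e - mean) * math.cos(2 * math.pi * ti / period)
            for e, ti in zip(expr, t))
    s = sum((e - mean) * math.sin(2 * math.pi * ti / period)
            for e, ti in zip(expr, t))
    return 2.0 * math.sqrt(c * c + s * s) / n   # recovered amplitude
```

    Ranking all genes by this score and thresholding would reproduce, in caricature, the selection of the ~30% of genes with oscillating profiles.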

  6. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then a multiobjective gradient descent algorithm is used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool, DataExplorer, supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimizing vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method.
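
    The core idea of descending a preference-weighted combination of objectives can be sketched as follows; the scalarization, step size, and the quadratic objectives in the usage note are invented for illustration and are not the paper's algorithm:

```python
def weighted_sum_descent(grad_fns, weights, x0, lr=0.1, steps=300):
    """Minimize a preference-weighted sum of objectives by gradient descent.
    grad_fns: one gradient function per objective; weights: the user's
    preference information. A crude stand-in for the paper's method."""
    x = list(x0)
    for _ in range(steps):
        # Combined gradient: weighted sum of each objective's gradient
        g = [sum(w * gf(x)[i] for w, gf in zip(weights, grad_fns))
             for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

    With two conflicting objectives f1 = (x - 1)^2 and f2 = (x - 3)^2 weighted equally, the descent settles on the compromise x = 2; changing the weights moves the solution along the trade-off, which is the "in line with user's preference" behavior described above.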

  7. Method for processing seismic data to identify anomalous absorption zones

    DOEpatents

    Taner, M. Turhan

    2006-01-03

    A method is disclosed for identifying zones anomalously absorptive of seismic energy. The method includes jointly time-frequency decomposing seismic traces, low frequency bandpass filtering the decomposed traces to determine a general trend of mean frequency and bandwidth of the seismic traces, and high frequency bandpass filtering the decomposed traces to determine local variations in the mean frequency and bandwidth of the seismic traces. Anomalous zones are determined where there is a difference between the general trend and the local variations.
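
    The trend-versus-local-variation comparison can be imitated with a moving average standing in for the low-frequency bandpass filter; the window length and threshold below are arbitrary placeholders, not values from the patent:

```python
def flag_anomalies(mean_freq, window=5, thresh=2.0):
    """Flag samples whose mean frequency deviates from a smoothed trend.
    The moving average plays the role of the low-frequency filter; the
    residual against it plays the role of the high-frequency variation."""
    half = window // 2
    flags = []
    for i in range(len(mean_freq)):
        lo = max(0, i - half)
        hi = min(len(mean_freq), i + half + 1)
        trend = sum(mean_freq[lo:hi]) / (hi - lo)   # local trend estimate
        flags.append(abs(mean_freq[i] - trend) > thresh)
    return flags
```

    A localized drop in mean frequency, the signature of anomalous absorption, is flagged while the surrounding trend is not.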

  8. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology by a simple calculation with a new color matrix using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated through a 0.18 μm CMOS technology and has a 45 degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has a high sensitivity through the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of RGB signals. This EGY signal mainly includes emerald green and yellow lights. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths are in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
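
    The EGY arithmetic described above reduces to a subtraction and a linear matrix multiply. The function names and the identity-like matrix in the test are invented; the paper's actual matrix coefficients are not reproduced here:

```python
def egy_signal(w, r, g, b):
    """EGY = W - (R + G + B). The W pixel integrates the whole visible
    band, so subtracting the RGB responses leaves the light (emerald
    green, yellow) falling between the RGB passbands."""
    return w - (r + g + b)

def reproduce_color(w, r, g, b, matrix):
    """Apply a 3x4 linear color matrix to the (R, G, B, EGY) vector to
    obtain the reproduced (R', G', B') output."""
    vec = (r, g, b, egy_signal(w, r, g, b))
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]
```
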

  9. Fabrication of robust micro-patterned polymeric films via static breath-figure process and vulcanization.

    PubMed

    Li, Lei; Zhong, Yawen; Gong, Jianliang; Li, Jian; Huang, Jin; Ma, Zhi

    2011-02-15

    Here, we present the preparation of thermally stable and solvent resistant micro-patterned polymeric films via a static breath-figure process and subsequent vulcanization, with a commercially available triblock polymer, polystyrene-b-polyisoprene-b-polystyrene (SIS). The vulcanized honeycomb structured SIS films became self-supported and resistant to a wide range of organic solvents, and remained thermally stable up to 350°C for 2 h, an increase of more than 300 K as compared to the uncross-linked films. This superior robustness could be attributed to the high degree of polyisoprene cross-linking. The versatility of the methodology was demonstrated by applying it to another commercially available triblock polymer, polystyrene-b-polybutadiene-b-polystyrene (SBS). Particularly, hydroxyl groups were introduced into SBS by hydroboration. The functionalized two-dimensional micro-patterns feasible for site-directed grafting were created by the hydroxyl-containing polymers. In addition, the fixed microporous structures could be replicated to fabricate textured positive PDMS stamps. This simple technique offers new prospects in the field of micro-patterns, soft lithography and templates. PMID:21168143

  11. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2012-12-01

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
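
    The surrogate idea, fit a Gaussian process to a handful of expensive evaluations and then query the cheap posterior instead, can be sketched in one dimension. The squared-exponential kernel, length scale, and toy data are assumptions; the real surrogate maps geometric-uncertainty scenarios to dosimetric quality metrics:

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-0.5 * (a - b) ** 2 / ls ** 2)

def solve(A, y):
    """Naive Gauss-Jordan solve; fine for a handful of training points."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivot
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at query point xq, trained on (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)                       # K^{-1} y
    ks = [rbf(x, xq) for x in xs]
    mean = sum(a * k for a, k in zip(alpha, ks))
    v = solve(K, ks)                           # K^{-1} k*
    var = rbf(xq, xq) - sum(k * vi for k, vi in zip(ks, v))
    return mean, var
```

    Near the training samples the posterior variance collapses, while far from them it reverts to the prior, which is what makes the surrogate's uncertainty usable when deciding whether more simulations are needed.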

  12. Quantitative Morphometry of Electrophysiologically Identified CA3b Interneurons Reveals Robust Local Geometry and Distinct Cell Classes

    PubMed Central

    Ascoli, Giorgio A.; Brown, Kerry M.; Calixto, Eduardo; Card, J. Patrick; Galvan, E. J.; Perez-Rosello, T.; Barrionuevo, Germán

    2010-01-01

    The morphological and electrophysiological diversity of inhibitory cells in hippocampal area CA3 may underlie specific computational roles and is not yet fully elucidated. In particular, interneurons with somata in strata radiatum (R) and lacunosum-moleculare (L-M) receive converging stimulation from the dentate gyrus and entorhinal cortex as well as within CA3. Although these cells express different forms of synaptic plasticity, their axonal trees and connectivity are still largely unknown. We investigated the branching and spatial patterns, plus the membrane and synaptic properties, of rat CA3b R and L-M interneurons digitally reconstructed after intracellular labeling. We found considerable variability within but no difference between the two layers, and no correlation between morphological and biophysical properties. Nevertheless, two cell types were identified based on the number of dendritic bifurcations, with significantly different anatomical and electrophysiological features. Axons generally branched an order of magnitude more than dendrites. However, interneurons on both sides of the R/L-M boundary revealed surprisingly modular axo-dendritic arborizations with consistently uniform local branch geometry. Both axons and dendrites followed a lamellar organization, and axons displayed a spatial preference towards the fissure. Moreover, only a small fraction of the axonal arbor extended to the outer portion of the invaded volume, and tended to return towards the proximal region. In contrast, dendritic trees demonstrated more limited but isotropic volume occupancy. These results suggest a role of predominantly local feedforward and lateral inhibitory control for both R and L-M interneurons. Such role may be essential to balance the extensive recurrent excitation of area CA3 underlying hippocampal autoassociative memory function. PMID:19496174

  13. Robust Low Cost Liquid Rocket Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Elam, Sandra; Ellis, David L.; McKechnie, Timothy; Hickman, Robert; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. Fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of shrinking budgets. Three technologies have been combined to produce an advanced liquid rocket engine combustion chamber at NASA-Marshall Space Flight Center (MSFC) using relatively low-cost, vacuum-plasma-spray (VPS) techniques. Copper alloy NARloy-Z was replaced with a new high performance Cu-8Cr-4Nb alloy developed by NASA-Glenn Research Center (GRC), which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. Functional gradient technology, developed in building composite cartridges for space furnaces, was incorporated to add oxidation resistant and thermal barrier coatings as an integral part of the hot wall of the liner during the VPS process. NiCrAlY, utilized to produce a durable protective coating for the space shuttle high pressure fuel turbopump (HPFTP) turbine blades, was used as the functional gradient material coating (FGM). The FGM not only serves as protection from oxidation or blanching, the main cause of engine failure, but also serves as a thermal barrier because of its lower thermal conductivity, reducing the temperature of the combustion liner by 200 F, from 1000 F to 800 F, producing longer life. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost VPS process. VPS-formed combustion chamber test articles have been formed with the FGM hot wall built in and hot fire tested, demonstrating for the first time a coating that will remain intact through the hot firing test, and with

  14. A Robust Power Remote Manipulator for Use in Waste Sorting, Processing, and Packaging - 12158

    SciTech Connect

    Cole, Matt; Martin, Scott

    2012-07-01

    Disposition of radioactive waste is one of the Department of Energy's (DOE's) highest priorities. A critical component of the waste disposition strategy is shipment of Transuranic (TRU) waste from DOE's Oak Ridge Reservation to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This is the mission of the DOE TRU Waste Processing Center (TWPC). The remote-handled TRU waste at the Oak Ridge Reservation is currently in a mixed waste form that must be repackaged to meet the WIPP Waste Acceptance Criteria (WAC). Because this remote-handled legacy waste is very diverse, sorting, size reducing, and packaging will require equipment flexibility and strength that is not possible with standard master-slave manipulators. To perform the wide range of tasks necessary with such diverse, highly contaminated material, TWPC worked with S.A. Technology (SAT) to modify SAT's Power Remote Manipulator (PRM) technology to provide the processing center with an added degree of dexterity and high load handling capability inside its shielded cells. TWPC and SAT incorporated innovative technologies into the PRM design to better suit the operations required at TWPC, and to increase the overall capability of the PRM system. Improving on an already proven PRM system will ensure that TWPC gains the capabilities necessary to efficiently complete its TRU waste disposition mission. The collaborative effort between TWPC and S.A. Technology has yielded an extremely capable and robust solution to perform the wide range of tasks necessary to repackage TRU waste containers at TWPC. Incorporating innovative technologies into a proven manipulator system, these PRMs are expected to be an important addition to the capabilities available to shielded cell operators. The PRMs provide operators with the ability to reach anywhere in the cell, lift heavy objects, and perform the size reduction associated with the disposition of noncompliant waste. Factory acceptance testing of the TWPC Powered Remote

  15. An Excel Workbook for Identifying Redox Processes in Ground Water

    USGS Publications Warehouse

    Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.

    2009-01-01

    The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. The number of samples that can be analyzed
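
    The assignment logic amounts to a cascade of threshold tests on the six measured species. The thresholds and category labels below are illustrative placeholders, not the published McMahon and Chapelle (2008) criteria, which the workbook itself implements:

```python
def assign_redox(o2, no3, mn, fe, so4, sulfide):
    """Assign a predominant redox process from water-quality data (mg/L).
    All cutoff values here are hypothetical stand-ins for the published
    framework; only the cascade structure is the point."""
    if o2 >= 0.5:
        return "O2 reduction (oxic)"
    if no3 >= 0.5:
        return "NO3 reduction"
    if mn >= 0.05:
        return "Mn reduction"
    if fe < 0.1:
        return "suboxic"
    # Fe2+ present: sulfide distinguishes Fe(III)- from SO4-reducing water
    if so4 >= 0.5:
        return "Fe(III)/SO4 reduction" if sulfide > 0 else "Fe(III) reduction"
    return "methanogenesis"
```
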

  16. Identifying User Needs and the Participative Design Process

    NASA Astrophysics Data System (ADS)

    Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan; Bergvall-Kåreborn, Birgitta; Andersson, Anna-Lena

    As the number of persons with dementia increases and also the demands on care and support at home, additional solutions to support persons with dementia are needed. The COGKNOW project aims to develop an integrated, user-driven cognitive prosthetic device to help persons with dementia. The project focuses on support in the areas of memory, social contact, daily living activities and feelings of safety. The design process is user-participatory and consists of iterative cycles at three test sites across Europe. In the first cycle persons with dementia and their carers (n = 17) actively participated in the developmental process. Based on their priorities of needs and solutions, on their disabilities, and after discussion within the team, a top four list of Information and Communication Technology (ICT) solutions was made and now serves as the basis for development: in the area of remembering - day and time orientation support, find mobile service and reminding service, in the area of social contact - telephone support by picture dialling, in the area of daily activities - media control support through a music playback and radio function, and finally, in the area of safety - a warning service to indicate when the front door is open and an emergency contact service to enhance feelings of safety. The results of this first project phase show that, in general, the people with mild dementia as well as their carers were able to express and prioritize their (unmet) needs, and the kind of technological assistance they preferred in the selected areas. In the next phases it will be tested whether the user-participatory design and multidisciplinary approach employed in the COGKNOW project result in a user-friendly, useful device that positively impacts the autonomy and quality of life of persons with dementia and their carers.

  17. Design of robust flow processing networks with time-programmed responses

    NASA Astrophysics Data System (ADS)

    Kaluza, P.; Mikhailov, A. S.

    2012-04-01

    Can artificially designed networks reach the levels of robustness against local damage which are comparable with those of the biochemical networks of a living cell? We consider a simple model where the flow applied to an input node propagates through the network and arrives at different times to the output nodes, thus generating a pattern of coordinated responses. By using evolutionary optimization algorithms, functional networks - with required time-programmed responses - were constructed. Then, continuing the evolution, such networks were additionally optimized for robustness against deletion of individual nodes or links. In this manner, large ensembles of functional networks with different kinds of robustness were obtained, making statistical investigations and comparison of their structural properties possible. We have found that, generally, different architectures are needed for various kinds of robustness. The differences are statistically revealed, for example, in the Laplacian spectra of the respective graphs. On the other hand, motif distributions of robust networks do not differ from those of the merely functional networks; they are found to belong to the first Alon superfamily, the same as that of the gene transcription networks of single-cell organisms.
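
    The evolutionary-optimization-for-robustness loop can be caricatured as hill climbing over link rewirings, scoring robustness as the fraction of single-node deletions that leave the network functional. The toy functionality criterion in the usage note is invented; the paper's networks are flow-propagation systems with timed outputs:

```python
import random

def robustness(net, is_functional):
    """Fraction of single-node deletions after which the network (a dict
    mapping node -> set of successor nodes) still passes the given test."""
    ok = 0
    for n in list(net):
        damaged = {k: {v for v in vs if v != n}
                   for k, vs in net.items() if k != n}
        ok += bool(is_functional(damaged))
    return ok / len(net)

def evolve_for_robustness(net, is_functional, steps=200, seed=0):
    """Hill-climbing caricature of evolutionary optimization: toggle one
    directed link at a time and keep changes that do not lower robustness."""
    rng = random.Random(seed)
    best = robustness(net, is_functional)
    for _ in range(steps):
        a, b = rng.choice(list(net)), rng.choice(list(net))
        trial = {k: set(vs) for k, vs in net.items()}
        trial[a].symmetric_difference_update({b})   # toggle link a -> b
        score = robustness(trial, is_functional)
        if score >= best:
            net, best = trial, score
    return net, best
```

    With a deliberately fragile functionality test, "the output node still receives at least one link", the climber discovers redundant wiring, the same structural change that makes real networks tolerate node deletion.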

  18. Identifying Sources of Configurality in Three Face Processing Tasks

    PubMed Central

    Mestry, Natalie; Menneer, Tamaryn; Wenger, Michael J.; Donnelly, Nick

    2012-01-01

    Participants performed three feature-complete face processing tasks involving detection of changes in: (1) feature size and (2) feature identity in successive matching tasks, and (3) feature orientation. In each experiment, information in the top (eyes) and bottom (mouths) parts of faces was manipulated. All tasks were performed with upright and inverted faces. Data were analyzed first using group-based analysis of signal detection measures (sensitivity and bias), and second using analysis of multidimensional measures of sensitivity and bias along with probit regression models in order to draw inferences about independence and separability as defined within general recognition theory (Ashby and Townsend, 1986). The results highlighted different patterns of perceptual and decisional influences across tasks and orientations. There was evidence of orientation-specific configural effects (violations of perceptual independence, perceptual separability and decisional separability) in the Feature Orientation Task. For the Feature Identity Task there were orientation-specific performance effects and there was evidence of configural effects (violations of decisional separability) in both orientations. Decisional effects are consistent with previous research (Wenger and Ingvalson, 2002, 2003; Richler et al., 2008; Cornes et al., 2011). Crucially, the probit analysis revealed violations of perceptual independence that remain undetected by marginal analysis. PMID:23162505
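
    The group-level signal detection measures mentioned above (sensitivity and bias) are conventionally computed as d' and the criterion c under the equal-variance Gaussian model; a standard sketch, where the 0.5 count correction is one common convention rather than necessarily the authors' choice:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, fas, crs):
    """Sensitivity d' = z(H) - z(F) and bias c = -(z(H) + z(F)) / 2 from a
    2x2 count table. A log-linear 0.5 correction keeps the proportions
    away from exactly 0 or 1, where the z-transform diverges."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected false alarm rate
    return z(h) - z(f), -0.5 * (z(h) + z(f))
```
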

  19. Identifying predictors of time-inhomogeneous viral evolutionary processes

    PubMed Central

    Bielejec, Filip; Baele, Guy; Rodrigo, Allen G.; Suchard, Marc A.; Lemey, Philippe

    2016-01-01

    Various factors determine the rate at which mutations are generated and fixed in viral genomes. Viral evolutionary rates may vary over the course of a single persistent infection and can reflect changes in replication rates and selective dynamics. Dedicated statistical inference approaches are required to understand how the complex interplay of these processes shapes the genetic diversity and divergence in viral populations. Although evolutionary models accommodating a high degree of complexity can now be formalized, adequately informing these models by potentially sparse data, and assessing the association of the resulting estimates with external predictors, remains a major challenge. In this article, we present a novel Bayesian evolutionary inference method, which integrates multiple potential predictors and tests their association with variation in the absolute rates of synonymous and non-synonymous substitutions along the evolutionary history. We consider clinical and virological measures as predictors, but also changes in population size trajectories that are simultaneously inferred using coalescent modelling. We demonstrate the potential of our method in an application to within-host HIV-1 sequence data sampled throughout the infection of multiple patients. While analyses of individual patient populations lack statistical power, we detect significant evidence for an abrupt drop in non-synonymous rates in late stage infection and a more gradual increase in synonymous rates over the course of infection in a joint analysis across all patients. The former is predicted by the immune relaxation hypothesis while the latter may be in line with increasing replicative fitness during the asymptomatic stage. PMID:27774306

  20. Integrated Process Monitoring based on Systems of Sensors for Enhanced Nuclear Safeguards Sensitivity and Robustness

    SciTech Connect

    Humberto E. Garcia

    2014-07-01

    This paper illustrates safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). In order to infer the possible existence of proliferation-driven activities, NMA-based methods often aim to statistically evaluate materials unaccounted for (MUF), computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP); a particular objective for a PM-based approach, by contrast, may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within a MBP. Although possibly being indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or less reliably observed than others. The proposed similarity between NMA- and PM-based approaches is important because performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected value for the alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered to be problematic for detection using NMA-based methods alone. Results demonstrate benefits of conducting PM under a system
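
    The alarm decision the abstract describes reduces to a threshold test on an SQ-equivalent mass estimate; a minimal sketch with hypothetical numbers (the per-anomaly SQ conversion factor and the alarm level are illustrative assumptions, not values from the paper):

```python
def diversion_alarm(auf_counts, sq_per_anomaly, alarm_level):
    """Translate anomaly-unaccounted-for (AUF) counts per sensor into a
    diverted-mass estimate in significant-quantity (SQ) equivalents and
    report an alarm if it reaches the alarm level (AL)."""
    mass_estimate = sum(auf_counts) * sq_per_anomaly  # hypothetical conversion
    return mass_estimate, mass_estimate >= alarm_level
```

    In practice the AL would be tuned to trade off detection probability (DP) against false alarm probability (FAP) for the specific MBA and sensor suite.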

  1. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm relative to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but can also easily be replaced to emphasize other aspects.

  2. Processing of Perceptual Information Is More Robust than Processing of Conceptual Information in Preschool-Age Children: Evidence from Costs of Switching

    ERIC Educational Resources Information Center

    Fisher, Anna V.

    2011-01-01

    Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…

  3. Deep transcriptome-sequencing and proteome analysis of the hydrothermal vent annelid Alvinella pompejana identifies the CvP-bias as a robust measure of eukaryotic thermostability

    PubMed Central

    2013-01-01

    Background Alvinella pompejana is an annelid worm that inhabits deep-sea hydrothermal vent sites in the Pacific Ocean. Living at a depth of approximately 2500 meters, these worms experience extreme environmental conditions, including high temperature and pressure as well as high levels of sulfide and heavy metals. A. pompejana is one of the most thermotolerant metazoans, making this animal a subject of great interest for studies of eukaryotic thermoadaptation. Results In order to complement existing EST resources we performed deep sequencing of the A. pompejana transcriptome. We identified several thousand novel protein-coding transcripts, nearly doubling the sequence data for this annelid. We then performed an extensive survey of previously established prokaryotic thermoadaptation measures to search for global signals of thermoadaptation in A. pompejana in comparison with mesophilic eukaryotes. In an orthologous set of 457 proteins, we found that the best indicator of thermoadaptation was the difference in frequency of charged versus polar residues (CvP-bias), which was highest in A. pompejana. CvP-bias robustly distinguished prokaryotic thermophiles from prokaryotic mesophiles, as well as the thermophilic fungus Chaetomium thermophilum from mesophilic eukaryotes. Experimental values for thermophilic proteins supported higher CvP-bias as a measure of thermal stability when compared to their mesophilic orthologs. Proteome-wide mean CvP-bias also correlated with the body temperatures of homeothermic birds and mammals. Conclusions Our work extends the transcriptome resources for A. pompejana and identifies the CvP-bias as a robust and widely applicable measure of eukaryotic thermoadaptation. Reviewers This article was reviewed by Sándor Pongor, L. Aravind and Anthony M. Poole. PMID:23324115

  4. Robust, automated processing of IR thermography for quantitative boundary-layer transition measurements

    NASA Astrophysics Data System (ADS)

    Crawford, Brian K.; Duncan, Glen T.; West, David E.; Saric, William S.

    2015-07-01

    A technique for automated, quantitative, global boundary-layer transition detection using IR thermography is developed. Transition data are rigorously mapped onto model coordinates in an automated fashion on moving targets. Statistical analysis of transition data that is robust to environmental contamination is presented.

  5. Individualized relapse prediction: personality measures and striatal and insular activity during reward-processing robustly predict relapse*

    PubMed Central

    Gowin, Joshua L.; Ball, Tali M.; Wittmann, Marc; Tapert, Susan F.; Paulus, Martin P.

    2015-01-01

    Background Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Methods 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, a combined model) to generate predictions for each participant regarding their relapse likelihood. Results 18 individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models showed good test characteristics: a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test yielded a likelihood ratio of 0.48. Conclusions These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. PMID:25977206
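
    The reported test characteristics (positive and negative likelihood ratios) derive from sensitivity and specificity in the standard way; a sketch with hypothetical confusion-table counts, not the study's data:

```python
def likelihood_ratios(tp, fn, fp, tn):
    """Positive and negative likelihood ratios from a 2x2 confusion table
    (tp: relapsers correctly flagged, tn: abstainers correctly cleared)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)  # LR+ > 1 raises relapse odds
    lr_neg = (1 - sensitivity) / specificity  # LR- < 1 lowers relapse odds
    return lr_pos, lr_neg
```

    A positive test multiplies the pre-test odds of relapse by LR+, a negative test by LR-.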

  6. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Section 170.501, Title 25 (Indians), BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, LAND... When the review process identifies areas for improvement: (a) The regional...

  7. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Section 170.501, Title 25 (Indians), BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, LAND... When the review process identifies areas for improvement: (a) The regional...

  8. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Section 170.501, Title 25 (Indians), BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, LAND... When the review process identifies areas for improvement: (a) The regional...

  9. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Section 170.501, Title 25 (Indians), BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, LAND... When the review process identifies areas for improvement: (a) The regional...

  10. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
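
    The leveling idea described above (fitting a trend while automatically downweighting features and outliers) can be illustrated with a simple iteratively reweighted plane fit; this is a generic sketch of robust leveling, not the paper's local regression implementation or Gwyddion's routines:

```python
import numpy as np

def robust_level(image, n_iter=5):
    """Remove a planar tilt trend from an image by iteratively reweighted
    least squares; pixels with large residuals (features, outliers) are
    progressively downweighted so they do not bias the trend estimate."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    z = image.ravel().astype(float)
    wts = np.ones_like(z)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A * wts[:, None], z * wts, rcond=None)
        resid = z - A @ coef
        scale = np.median(np.abs(resid)) + 1e-12
        wts = 1.0 / (1.0 + (resid / (3.0 * scale)) ** 2)  # Cauchy-type weights
    return (z - A @ coef).reshape(h, w)
```

    On an image that is a pure tilt plane plus a single spike, the fit recovers the plane and the leveled output preserves the spike instead of smearing it into the trend.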

  11. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited by small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck in northern Germany. Results show that even the exact Newton-Raphson approach can be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.

  12. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
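
    For a fixed prediction of the underlying process, an l1-regularized outlier estimate of the kind described above has a closed-form soft-thresholding solution; a minimal sketch on hypothetical residuals (the threshold λ is a tuning parameter, not a value from the paper):

```python
import numpy as np

def identify_outliers(residuals, lam):
    """l1-regularized outlier estimate via soft thresholding:
    o_i = sign(r_i) * max(|r_i| - lam, 0).
    Exploiting sparsity, only entries whose residual exceeds lam are
    declared outliers (nonzero o_i)."""
    r = np.asarray(residuals, dtype=float)
    o = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return o, np.nonzero(o)[0]
```

    Small residuals are shrunk exactly to zero, so the support of the estimate directly identifies the anomalous measurements.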

  13. Robustness and sensitivity analysis of a virtual process chain using the S-rail specimen applying random fields

    NASA Astrophysics Data System (ADS)

    Konrad, T.; Wolff, S.; Wiegand, K.; Merklein, M.

    2016-08-01

    An important part in robustness evaluation of production processes is the identification of shape deviations. A systematic approach is typically based on the numerical evaluation of a DoE and the application of metamodels. They provide knowledge on solver noise and sensitivities of individual model parameters. This article presents the sensitivity analysis workflow of a linked deep drawing and joining process chain. LS-DYNA®, optiSLang and SoS are used. The challenge is to separate simulative from process and material parameters of AA 6014. Spatial quantities like variations in geometry, thinning and strain have to be considered in the next process steps. At the same time the number of required virtual CAE model evaluations must be limited. The solution is based on nonlinear metamodels and random fields.

  14. Molecular mechanisms of robustness in plants

    PubMed Central

    Lempe, Janne; Lachowiec, Jennifer; Sullivan, Alessandra. M.; Queitsch, Christine

    2012-01-01

    Robustness, the ability of organisms to buffer phenotypes against perturbations, has drawn renewed interest among developmental biologists and geneticists. A growing body of research supports an important role of robustness in the genotype-to-phenotype translation, with far-reaching implications for evolutionary processes and disease susceptibility. As in animals and fungi, plant robustness is a function of genetic network architecture. Most perturbations are buffered; however, perturbation of network hubs destabilizes many traits. Here, we review recent advances in identifying molecular robustness mechanisms in plants that have been enabled by a combination of classical genetics and population genetics with genome-scale data. PMID:23279801

  15. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  16. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process

    PubMed Central

    Mohamed, Amr E.; Dorrah, Hassen T.

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller. PMID:27807444

  17. The Robustness of Proofreading to Crowding-Induced Pseudo-Processivity in the MAPK Pathway

    PubMed Central

    Ouldridge, Thomas E.; Rein ten Wolde, Pieter

    2014-01-01

    Double phosphorylation of protein kinases is a common feature of signaling cascades. This motif may reduce cross-talk between signaling pathways because the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be pseudo-processive in the crowded cellular environment, since rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e., when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signaling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif. PMID:25418311

  18. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  19. Robust carrier formation process in low-band gap organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Yonezawa, Kouhei; Kamioka, Hayato; Yasuda, Takeshi; Han, Liyuan; Moritomo, Yutaka

    2013-10-01

    By means of femto-second time-resolved spectroscopy, we investigated the carrier formation process against film morphology and temperature (T) in highly-efficient organic photovoltaic, poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b '] dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b] thiophenediyl

  20. Self assembly of nanoislands on YSZ-(001) surface: a mechanistic approach toward a robust process.

    PubMed

    Ansari, Haris M; Dixit, Vikas; Zimmerman, Lawrence B; Rauscher, Michael D; Dregia, Suliman A; Akbar, Sheikh A

    2013-05-01

    We experimentally investigate the mechanism of formation of self-assembled arrays of nanoislands surrounding dopant sources on the (001) surface of yttria-stabilized zirconia. Initially, we used lithographically defined thin-film patches of gadolinia-doped ceria (GDC) as dopant sources. During annealing at approximately one-half the melting temperature of zirconia, surface diffusion of dopants leads to the breakup of the surface around the source, creating arrays of epitaxial nanoislands with a characteristic size (~100 nm) and alignment along elastically compliant directions, <110>. The breakup relieves elastic strain energy at the expense of increasing surface energy. On the basis of understanding the mechanism of island formation, we introduce a simple and versatile powder-based doping process for spontaneous surface patterning. The new process bypasses lithography and conventional vapor-source doping, opening the door to spontaneous surface patterning of functional ceramics and other refractory materials. In addition to using GDC solid-solution powders, we demonstrate the effectiveness of the process in another system based on Eu2O3.

  1. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of the geographical location on the Net Present Value calculated over a 20-year lifespan (NPV20) of each technology and its robustness towards typical process fluctuations and operational upsets were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When, in an economic evaluation, robustness is as relevant as the overall costs (NPV20), the hybrid technology would move up next to biotrickling filtration as the most preferred technology.

  2. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    The throat back-up liner is used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liner is made with E-glass phenolic prepregs by the tape winding process. The objective of this work is to demonstrate the optimization of the process parameters of the tape winding process to achieve better insulative resistance using Taguchi's robust design methodology. In this method four control factors (machine speed, roller pressure, tape tension, tape temperature) were investigated for the tape winding process. The presented work was to study the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of four different control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back-up liners was verified by a ground firing test.
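
    The smaller-the-better signal-to-noise criterion used in the analysis is standard Taguchi methodology; a sketch with hypothetical back wall temperature readings, not the study's measurements:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y_i ** 2)); higher S/N is better,
    so low responses (here, back wall temperatures) score higher."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

    For the L9 (3^4) design, each factor level would be ranked by its mean S/N across the runs in which it appears, and the level with the highest mean S/N selected.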

  3. Filling the gaps: A robust description of adhesive birth-death-movement processes

    NASA Astrophysics Data System (ADS)

    Johnston, Stuart T.; Baker, Ruth E.; Simpson, Matthew J.

    2016-04-01

    Existing continuum descriptions of discrete adhesive birth-death-movement processes provide accurate predictions of the average discrete behavior for limited parameter regimes. Here we present an alternative continuum description in terms of the dynamics of groups of contiguous occupied and vacant lattice sites. Our method provides more accurate predictions, is valid in parameter regimes that could not be described by previous continuum descriptions, and provides information about the spatial clustering of occupied sites. Furthermore, we present a simple analytic approximation of the spatial clustering of occupied sites at late time, when the system reaches its steady-state configuration.

  4. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with 1-2 km or less horizontal resolutions. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through the cloud coverage (vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (Weather Research and Forecasting, WRF). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated by using observational data from TRMM and other major field campaigns. In this talk, we will present the high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E, 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  5. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols that aid plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement proved preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  6. Uncertainties and robustness of the ignition process in type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Iapichino, L.; Lesaffre, P.

    2010-03-01

    Context. It is widely accepted that the onset of the explosive carbon burning in the core of a carbon-oxygen white dwarf (CO WD) triggers the ignition of a type Ia supernova (SN Ia). The features of the ignition are among the few free parameters of the SN Ia explosion theory. Aims: We explore the role for the ignition process of two different issues: firstly, the ignition is studied in WD models coming from different accretion histories. Secondly, we estimate how a different reaction rate for C-burning can affect the ignition. Methods: Two-dimensional hydrodynamical simulations of temperature perturbations in the WD core (“bubbles”) are performed with the FLASH code. In order to evaluate the impact of the C-burning reaction rate on the WD model, the evolution code FLASH_THE_TORTOISE from Lesaffre et al. (2006, MNRAS, 368, 187) is used. Results: In different WD models a key role is played by the different gravitational acceleration in the progenitor's core. As a consequence, the ignition is disfavored at a large distance from the WD center in models with a larger central density, resulting from the evolution of initially more massive progenitors. Changes in the C reaction rate at T ⪉ 5 × 10^8~K slightly influence the ignition density in the WD core, while the ignition temperature is almost unaffected. Recent measurements of new resonances in the C-burning reaction rate (Spillane et al. 2007, Phys. Rev. Lett., 98, 122501) do not affect the core conditions of the WD significantly. Conclusions: This simple analysis, performed on the features of the temperature perturbations in the WD core, should be extended in the framework of the state-of-the-art numerical tools for studying the turbulent convection and ignition in the WD core. Future measurements of the C-burning reactions cross section at low energy, though certainly useful, are not expected to affect our current understanding of the ignition process dramatically.

  7. Robust Suppression of HIV Replication by Intracellularly Expressed Reverse Transcriptase Aptamers Is Independent of Ribozyme Processing

    PubMed Central

    Lange, Margaret J; Sharma, Tarun K; Whatley, Angela S; Landon, Linda A; Tempesta, Michael A; Johnson, Marc C; Burke, Donald H

    2012-01-01

    RNA aptamers that bind human immunodeficiency virus 1 (HIV-1) reverse transcriptase (RT) also inhibit viral replication, making them attractive as therapeutic candidates and potential tools for dissecting viral pathogenesis. However, it is not well understood how aptamer-expression context and cellular RNA pathways govern aptamer accumulation and net antiviral bioactivity. Using a previously-described expression cassette in which aptamers were flanked by two “minimal core” hammerhead ribozymes, we observed only weak suppression of pseudotyped HIV. To evaluate the importance of the minimal ribozymes, we replaced them with extended, tertiary-stabilized hammerhead ribozymes with enhanced self-cleavage activity, in addition to noncleaving ribozymes with active site mutations. Both the active and inactive versions of the extended hammerhead ribozymes increased inhibition of pseudotyped virus, indicating that processing is not necessary for bioactivity. Clonal stable cell lines expressing aptamers from these modified constructs strongly suppressed infectious virus, and were more effective than minimal ribozymes at high viral multiplicity of infection (MOI). Tertiary stabilization greatly increased aptamer accumulation in viral and subcellular compartments, again regardless of self-cleavage capability. We therefore propose that the increased accumulation is responsible for increased suppression, that the bioactive form of the aptamer is one of the uncleaved or partially cleaved transcripts, and that tertiary stabilization increases transcript stability by reducing exonuclease degradation. PMID:22948672

  8. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    DOEpatents

    Riza, Nabeel Agha; Perez, Frank

    2012-01-17

    Silicon Carbide (SiC) probe designs for extreme temperature and pressure sensing uses a single crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high temperature only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme temperature sensing. Wavelength peak-to-peak (or null-to-null) collective spectrum spread measurement to detect wavelength peak/null shift measurement forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable emissivity Black-body radiator and monitoring the shift in radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free-spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  9. Robust fetal QRS detection from noninvasive abdominal electrocardiogram based on channel selection and simultaneous multichannel processing.

    PubMed

    Ghaffari, Ali; Mollakazemi, Mohammad Javad; Atyabi, Seyyed Abbas; Niknazar, Mohammad

    2015-12-01

    The purpose of this study is to provide a new method for detecting fetal QRS complexes from non-invasive fetal electrocardiogram (fECG) signal. Despite most of the current fECG processing methods which are based on separation of fECG from maternal ECG (mECG), in this study, fetal heart rate (FHR) can be extracted with high accuracy without separation of fECG from mECG. Furthermore, in this new approach thoracic channels are not necessary. These two aspects have reduced the required computational operations. Consequently, the proposed approach can be efficiently applied to different real-time healthcare and medical devices. In this work, a new method is presented for selecting the best channel which carries strongest fECG. Each channel is scored based on two criteria of noise distribution and good fetal heartbeat visibility. Another important aspect of this study is the simultaneous and combinatorial use of available fECG channels via the priority given by their scores. A combination of geometric features and wavelet-based techniques was adopted to extract FHR. Based on fetal geometric features, fECG signals were divided into three categories, and different strategies were employed to analyze each category. The method was validated using three datasets including Noninvasive fetal ECG database, DaISy and PhysioNet/Computing in Cardiology Challenge 2013. Finally, the obtained results were compared with other studies. The adopted strategies such as multi-resolution analysis, not separating fECG and mECG, intelligent channels scoring and using them simultaneously are the factors that caused the promising performance of the method. PMID:26462679
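
    The paper's two channel-scoring criteria (noise distribution and fetal-beat visibility) are not specified in the abstract; here is a toy stand-in that captures the idea of ranking abdominal channels by how clearly beat-like deflections stand out from the noise floor (the specific score is an assumption, not the authors' method):

```python
import numpy as np

def channel_score(signal):
    """Toy score: peakiness of QRS-like deflections relative to noise.

    Robust noise scale via the median absolute deviation; visibility via
    the 99th-percentile absolute amplitude.
    """
    x = np.asarray(signal, dtype=float)
    x = x - np.median(x)
    noise = np.median(np.abs(x)) + 1e-12
    visibility = np.percentile(np.abs(x), 99)
    return visibility / noise

def rank_channels(channels):
    """Return channel indices ordered best-first by score."""
    scores = [channel_score(c) for c in channels]
    return sorted(range(len(channels)), key=lambda i: -scores[i])
```

    The ranking can then drive the priority order in which channels are combined, as the abstract describes.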

  10. A robust and representative lower bound on object processing speed in humans.

    PubMed

    Bieniek, Magdalena M; Bennett, Patrick J; Sekuler, Allison B; Rousselet, Guillaume A

    2016-07-01

    How early does the brain decode object categories? Addressing this question is critical to constrain the type of neuronal architecture supporting object categorization. In this context, much effort has been devoted to estimating face processing speed. With onsets estimated from 50 to 150 ms, the timing of the first face-sensitive responses in humans remains controversial. This controversy is due partially to the susceptibility of dynamic brain measurements to filtering distortions and analysis issues. Here, using distributions of single-trial event-related potentials (ERPs), causal filtering, statistical analyses at all electrodes and time points, and effective correction for multiple comparisons, we present evidence that the earliest categorical differences start around 90 ms following stimulus presentation. These results were obtained from a representative group of 120 participants, aged 18-81, who categorized images of faces and noise textures. The results were reliable across testing days, as determined by test-retest assessment in 74 of the participants. Furthermore, a control experiment showed similar ERP onsets for contrasts involving images of houses or white noise. Face onsets did not change with age, suggesting that face sensitivity occurs within 100 ms across the adult lifespan. Finally, the simplicity of the face-texture contrast, and the dominant midline distribution of the effects, suggest the face responses were evoked by relatively simple image properties and are not face specific. Our results provide a new lower benchmark for the earliest neuronal responses to complex objects in the human visual system. PMID:26469359
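
    A common guard when estimating onsets like the ~90 ms figure above is to require the condition-difference statistic to stay significant for several consecutive samples, so isolated false positives do not masquerade as an onset. A schematic of that logic (the paper's actual pipeline uses single-trial ERPs, causal filtering, and correction for multiple comparisons; this sketch assumes a precomputed statistic time series):

```python
import numpy as np

def onset_latency(stat, threshold, min_run, times):
    """Earliest time at which `stat` exceeds `threshold` for at least
    `min_run` consecutive samples; None if no such run exists."""
    above = np.asarray(stat) > threshold
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        if run >= min_run:
            return times[i - min_run + 1]
    return None
```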

  11. Pyrolucite fluidized-bed reactor (PFBR): a robust and compact process for removing manganese from groundwater.

    PubMed

    Dashtban Kenari, Seyedeh Laleh; Barbeau, Benoit

    2014-02-01

    The purpose of this paper is to introduce a pyrolucite fluidized-bed reactor (PFBR) as a potential drinking water process to treat groundwater containing high levels of dissolved manganese (Mn(II)) (0.5-3 mg/L) and reduce its concentration to <0.02 mg/L in treated water. A pilot-scale study was conducted under dynamic conditions using synthetic groundwater (SGW) to elucidate the effect of operational conditions and groundwater composition on manganese (Mn) removal achieved by the PFBR. Results demonstrated almost complete Mn removal (close to 100%) in less than 1 min under all tested operational conditions (influent Mn concentration of 0.5-3 mg/L, calcium (Ca(2+)) hardness of 0-200 mg CaCO3/L, pH of 6.2-7.8, temperature of 9 and 23 °C, and high hydraulic loading rate (HLR) of 24-63 m/h (i.e., bed expansion of 0-30%)). An improved Mn removal profile was achieved at higher water temperature. The results also showed that adsorption of Mn(II) onto pyrolucite and subsequent slower surface oxidation of the sorbed Mn(II) was the only mechanism responsible for Mn removal; direct oxidation of Mn(II) by free chlorine did not occur even at high concentrations of Mn(II) and free chlorine and elevated temperatures. A higher average mass transfer coefficient, and consequently a higher adsorption rate, was achieved at elevated HLR. Increasing effluent free chlorine residuals from 1.0 to 2.0-2.6 mg Cl2/L extended the operation time needed between media regenerations from 6 days to >12 days. Turbidity was maintained around 0.2 NTU during the entire test periods, indicating good capture of MnOx colloids within the PFBR. PMID:24183400
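
    The near-complete removal in under a minute can be related to an apparent first-order mass-transfer picture, C_out/C_in = exp(-k·tau). A sketch under that assumption (this lumped model and its function names are illustrative, not the paper's fitted kinetics):

```python
import numpy as np

def removal_efficiency(k, tau):
    """Fractional removal for first-order uptake over contact time tau:
    1 - C_out/C_in = 1 - exp(-k*tau)."""
    return 1.0 - np.exp(-k * tau)

def mass_transfer_coefficient(c_in, c_out, tau):
    """Back out the apparent first-order coefficient from measured
    influent/effluent Mn concentrations and contact time tau."""
    return -np.log(c_out / c_in) / tau
```

    For example, reducing 1 mg/L influent Mn to the 0.02 mg/L target within tau = 1 min implies an apparent coefficient of about 3.9 min^-1.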

  12. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
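
    The likelihood-ratio machinery for comparing nested point-process models can be illustrated in discrete time, where a binned spike train is Bernoulli with per-bin spike probability given by the conditional intensity. This sketch compares two given intensity vectors; the paper additionally fits history-dependent models and tracks them with adaptive state-space filters, which is not shown here:

```python
import numpy as np
from scipy.stats import chi2

def bernoulli_loglik(spikes, p):
    """Log-likelihood of a binned (0/1) spike train under per-bin spike
    probabilities p (a discrete-time point-process intensity)."""
    spikes = np.asarray(spikes, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return float(np.sum(spikes * np.log(p) + (1 - spikes) * np.log(1 - p)))

def likelihood_ratio_test(spikes, p_null, p_alt, df):
    """LR statistic and chi-square p-value for nested intensity models,
    with df the difference in free parameters."""
    stat = 2 * (bernoulli_loglik(spikes, p_alt) - bernoulli_loglik(spikes, p_null))
    return stat, float(chi2.sf(stat, df))
```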

  13. Discriminating sediment archives and sedimentary processes in the arid endorheic Ejina Basin, NW China using a robust geochemical approach

    NASA Astrophysics Data System (ADS)

    Yu, Kaifeng; Hartmann, Kai; Nottebaum, Veit; Stauch, Georg; Lu, Huayu; Zeeden, Christian; Yi, Shuangwen; Wünnemann, Bernd; Lehmkuhl, Frank

    2016-04-01

    Geochemical characteristics have been intensively used to assign sediment properties to paleoclimate and provenance. Nonetheless, in particular concerning the arid context, bulk geochemistry of different sediment archives and corresponding process interpretations are hitherto elusive. The Ejina Basin, with its suite of different sediment archives, is known as one of the main sources for the loess accumulation on the Chinese Loess Plateau. In order to understand mechanisms along this supra-regional sediment cascade, it is crucial to decipher the archive characteristics and formation processes. To address these issues, five profiles in different geomorphological contexts were selected. Analyses of X-ray fluorescence and diffraction, grain size, optically stimulated luminescence and radiocarbon dating were performed. Robust factor analysis was applied to reduce the attribute space to the process space of sedimentation history. Five sediment archives from three lithologic units exhibit geochemical characteristics as follows: (i) aeolian sands have high contents of Zr and Hf, whereas only Hf can be regarded as a valuable indicator to discriminate the coarse sand proportion; (ii) sandy loess has high Ca and Sr contents which both exhibit broad correlations with the medium to coarse silt proportions; (iii) lacustrine clays have high contents of felsic, ferromagnesian and mica source elements e.g., K, Fe, Ti, V, and Ni; (iv) fluvial sands have high contents of Mg, Cl and Na which may be enriched in evaporite minerals; (v) alluvial gravels have high contents of Cr which may originate from nearby Cr-rich bedrock. Temporal variations can be illustrated by four robust factors: weathering intensity, silicate-bearing mineral abundance, saline/alkaline magnitude and quasi-constant aeolian input. In summary, the bulk-composition of the late Quaternary sediments in this arid context is governed by the nature of the source terrain, weak chemical weathering, authigenic minerals
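
    The factor-analytic reduction from element concentrations to interpretable factors (weathering intensity, aeolian input, etc.) can be sketched with a plain PCA-style decomposition; the paper uses robust factor analysis, which downweights outliers, so this standardized-SVD version is only a simplified stand-in:

```python
import numpy as np

def factor_loadings(X, n_factors):
    """PCA-style loadings: standardize each element's concentrations
    across samples, then scale the leading right singular vectors so
    each loading is the correlation of a variable with a factor."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (Vt[:n_factors].T * s[:n_factors]) / np.sqrt(X.shape[0])
```

    Rows of the result correspond to elements; large-magnitude entries flag the elements (e.g., Zr/Hf for aeolian sands) that load on a given factor.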

  14. A robust post-processing method to determine skin friction in turbulent boundary layers from the velocity profile

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2015-04-01

    The present paper describes a method to extrapolate the mean wall shear stress and the accurate relative position of a velocity probe with respect to the wall from an experimentally measured mean velocity profile in a turbulent boundary layer. Validation is made between experimental and direct numerical simulation data of turbulent boundary layer flows with independent measurement of the shear stress. The set of parameters which minimize the residual error with respect to the canonical description of the boundary layer profile is taken as the solution. Several methods are compared, testing different descriptions of the canonical mean velocity profile (with and without overshoot over the logarithmic law) and different definitions of the residual function of the optimization. The von Kármán constant is used as a parameter of the fitting process in order to avoid any hypothesis regarding its value that may be affected by different initial or boundary conditions of the flow. Results show that the best method provides accurate estimates of both the friction velocity and the position of the wall. The robustness of the method is tested including unconverged near-wall measurements, pressure gradient, and reduced number of points; the importance of the location of the first point is also tested, and it is shown that the method presents a high robustness even in highly distorted flows, keeping the aforementioned accuracy provided at least one data point is acquired close to the wall. The wake component and the thickness of the boundary layer are also simultaneously extrapolated from the mean velocity profile. This results in the first study, to the knowledge of the authors, where a five-parameter fitting is carried out without any assumption on the von Kármán constant or on the limits of the logarithmic layer beyond its existence.
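
    The core of the method is a least-squares fit of the canonical profile to the measured one, with the friction velocity and the probe-wall offset as free parameters. A reduced sketch using only the logarithmic law, with the von Kármán constant and log-law intercept held fixed (the paper fits five parameters, including the wake component; all numeric values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

NU = 1.5e-5           # kinematic viscosity, m^2/s (assumed air value)
KAPPA, B = 0.41, 5.0  # held fixed here; the paper also fits these

def log_law(y_measured, u_tau, dy):
    """Mean velocity from the log law, with dy the unknown probe-wall
    offset so that the true wall distance is y_measured + dy."""
    y = y_measured + dy
    return u_tau * (np.log(y * u_tau / NU) / KAPPA + B)

def fit_wall_parameters(y_measured, u_mean, u_tau_guess=0.3, dy_bound=5e-4):
    """Least-squares extrapolation of (u_tau, dy) from a measured profile.
    The dy bound keeps y + dy positive during the optimization."""
    popt, _ = curve_fit(log_law, y_measured, u_mean,
                        p0=(u_tau_guess, 0.0),
                        bounds=([1e-3, -dy_bound], [10.0, dy_bound]))
    return popt  # u_tau, dy
```

    On a synthetic profile with a deliberately mis-positioned probe, the fit recovers both the friction velocity and the offset.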

  15. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    PubMed Central

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency (70 cd A−1 under 70% strain) is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  17. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  18. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What happens when the review process identifies areas for improvement? 170.501 Section 170.501 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation...

  19. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging.

    PubMed

    Schiller, Bastian; Gianotti, Lorena R R; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-03-01

    Why do people take longer to associate the word "love" with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition.

  2. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  3. Pilot-scale investigation of the robustness and efficiency of a copper-based treated wood wastes recycling process.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Gastonguay, Louis; Morris, Paul; Janin, Amélie; Reynier, Nicolas

    2013-10-15

    The disposal of metal-bearing treated wood wastes is becoming an environmental challenge. An efficient recycling process based on sulfuric acid leaching has been developed to remove metals from copper-based treated wood chips. This study evaluated the robustness of this technology in removing metals from copper-based treated wood wastes at a pilot plant scale (130-L reactor tank). After 3 × 2 h leaching steps followed by 3 × 7 min rinsing steps, up to 97.5% of As, 87.9% of Cr, and 96.1% of Cu were removed from CCA-treated wood wastes with different initial metal loadings (>7.3 kg m(-3)), and more than 94.5% of Cu was removed from ACQ-, CA- and MCQ-treated wood. The treatment of effluents by precipitation-coagulation was highly efficient, allowing removal of more than 93% of the As, Cr, and Cu contained in the effluent. The economic analysis included operating costs, indirect costs and revenues related to remediated wood sales. It concluded that CCA-treated wood wastes remediation can lead to a benefit of 53.7 US$ t(-1) or a cost of 35.5 US$ t(-1), and that ACQ-, CA- and MCQ-treated wood wastes recycling led to benefits ranging from 9.3 to 21.2 US$ t(-1). PMID:23954815

  4. On the estimation of robustness and filtering ability of dynamic biochemical networks under process delays, internal parametric perturbations and external disturbances.

    PubMed

    Chen, Bor-Sen; Chen, Po-Wei

    2009-12-01

    Inherently, biochemical regulatory networks suffer from process delays, internal parametric perturbations, and external disturbances. Robustness is the property to maintain the functions of intracellular biochemical regulatory networks despite these perturbations. In this study, system and signal processing theories are employed for measurement of robust stability and filtering ability of linear and nonlinear time-delay biochemical regulatory networks. First, based on Lyapunov stability theory, the robust stability of a biochemical network is measured for the tolerance of additional process delays and additive internal parameter fluctuations. Then the filtering ability of attenuating additive external disturbances is estimated for time-delay biochemical regulatory networks. In order to overcome the difficulty of solving the Hamilton-Jacobi inequality (HJI), the global linearization technique is employed to simplify the measurement procedure by a simple linear matrix inequality (LMI) method. Finally, an example is given in silico to illustrate how to measure the robust stability and filtering ability of a nonlinear time-delay perturbative biochemical network. This robust stability and filtering ability measurement for biochemical networks has potential application to synthetic biology, gene therapy and drug design. PMID:19788895
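
    The Lyapunov-based stability measurement can be illustrated numerically on a linearized network: the system matrix A is Hurwitz exactly when the Lyapunov equation A^T P + P A = -I has a positive-definite solution, and robustness can be probed by re-checking under sampled parameter perturbations. This is a crude stand-in for the paper's LMI/HJI machinery, and the perturbation set below is illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stable(A):
    """A is Hurwitz iff A^T P + P A = -I has positive-definite P."""
    A = np.asarray(A, dtype=float)
    P = solve_continuous_lyapunov(A.T, -np.eye(A.shape[0]))
    sym = (P + P.T) / 2.0  # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(sym) > 0))

def is_robustly_stable(A, perturbations):
    """Check stability of the linearized network under each sampled
    parameter perturbation dA."""
    A = np.asarray(A, dtype=float)
    return all(is_stable(A + np.asarray(dA, dtype=float))
               for dA in perturbations)
```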

  5. Model of areas for identifying risks influencing the compliance of technological processes and products

    NASA Astrophysics Data System (ADS)

    Misztal, A.; Belu, N.

    2016-08-01

    The operation of every company is associated with the risk of interference with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as the environment in which it operates. From the point of view of ensuring that specific technological processes run in compliance and, consequently, that products conform to requirements, it is important to identify these threats and eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of areas for identifying risks affecting the compliance of processes and products, which is based on multiregional targeted monitoring of typical places of interference and on risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.

  6. A hybrid approach identifies metabolic signatures of high-producers for Chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic
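
    The multivariate data analysis step can be illustrated with a minimal principal-component sketch: projecting a clone-by-metabolite matrix onto its principal axes to separate clone phenotypes. The matrix below is synthetic and purely illustrative; this is a generic stand-in, not the authors' pipeline:

```python
import numpy as np

# Toy metabolite-profile matrix: rows = clones, columns = metabolite levels.
# Values are fabricated for illustration only.
X = np.array([
    [5.0, 1.0, 0.2],
    [5.2, 1.1, 0.1],
    [2.0, 3.0, 1.5],
    [2.1, 2.9, 1.4],
])

Xc = X - X.mean(axis=0)                  # center each metabolite
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                       # clone coordinates on principal axes
explained = s**2 / np.sum(s**2)          # fraction of variance per component
```

    In this toy case the first component separates the two clone "phenotypes" almost completely; in practice, clusters in the score space would be inspected against titer and growth data.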

  8. A novel mini-DNA barcoding assay to identify processed fins from internationally protected shark species.

    PubMed

    Fields, Andrew T; Abercrombie, Debra L; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus, oceanic whitetip, Carcharhinus longimanus, scalloped hammerhead, Sphyrna lewini, smooth hammerhead, S. zygaena, and great hammerhead, S. mokarran) in addition to three species listed in the early part of this century (whale, Rhincodon typus, basking, Cetorhinus maximus, and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded within the region, the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA ("processed fins"). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can even frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789
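
    The species-assignment step of a barcoding assay can be sketched as a nearest-reference search by percent identity, with a threshold below which no call is made. The sequences below are fabricated placeholders, not real COI barcodes; a real assay would align trimmed mini-barcode reads against validated reference sequences:

```python
# Toy mini-barcode matcher. Reference "sequences" are invented stand-ins.
REFS = {
    "Sphyrna lewini":  "ACGTTGCAGTACGATCGTAA",
    "Lamna nasus":     "ACGTAGCTGTAAGGTCGTTC",
    "Rhincodon typus": "TGCATGCAGTACCATGGTAA",
}

def identity(a, b):
    # fraction of matching positions (assumes equal-length, pre-trimmed fragments)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign(query, refs=REFS, threshold=0.9):
    # best-scoring reference; return None if below the identity threshold
    species, ref = max(refs.items(), key=lambda kv: identity(query, kv[1]))
    score = identity(query, ref)
    return (species, score) if score >= threshold else (None, score)
```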

  11. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades, mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier-phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity-resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with

  12. Determination of all feasible robust PID controllers for open-loop unstable plus time delay processes with gain margin and phase margin specifications.

    PubMed

    Wang, Yuan-Jay

    2014-03-01

    This paper proposes a novel alternative method to graphically compute all feasible gain and phase margin specifications-oriented robust PID controllers for open-loop unstable plus time delay (OLUPTD) processes. This method is applicable to general OLUPTD processes without constraint on system order. To retain robustness for OLUPTD processes subject to positive or negative gain variations, the downward gain margin (GM(down)), upward gain margin (GM(up)), and phase margin (PM) are considered. A virtual gain-phase margin tester compensator is incorporated to guarantee the concerned system satisfies certain robust safety margins. In addition, the stability equation method and the parameter plane method are exploited to portray the stability boundary and the constant gain margin (GM) boundary as well as the constant PM boundary. The overlapping region of these boundaries is graphically determined and denotes the GM and PM specifications-oriented region (GPMSOR). Alternatively, the GPMSOR characterizes all feasible robust PID controllers which achieve the pre-specified safety margins. In particular, to achieve optimal gain tuning, the controller gains are searched within the GPMSOR to minimize the integral of the absolute error (IAE) or the integral of the squared error (ISE) performance criterion. Thus, an optimal PID controller gain set is successfully found within the GPMSOR and guarantees the OLUPTD processes with a pre-specified GM and PM as well as a minimum IAE or ISE. Consequently, both robustness and performance can be simultaneously assured. Further, the design procedures are summarized as an algorithm to help rapidly locate the GPMSOR and search an optimal PID gain set. Finally, three highly cited examples are provided to illustrate the design process and to demonstrate the effectiveness of the proposed method.
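
    The gain/phase-margin computation underlying such a design can be sketched numerically for a toy OLUPTD plant G(s) = K e^(-Ls)/(Ts - 1) under PID control. The gains and plant constants below are illustrative, not taken from the paper, and a complete margin analysis for unstable plants additionally requires Nyquist encirclement counting, which this sketch omits:

```python
import numpy as np

# Toy open-loop unstable plus time delay plant with a PID controller.
# All numeric values are illustrative assumptions.
def loop_gain(w, Kp=3.0, Ki=1.0, Kd=0.5, K=1.0, T=1.0, L=0.2):
    s = 1j * w
    C = Kp + Ki / s + Kd * s                 # PID controller C(s)
    G = K * np.exp(-L * s) / (T * s - 1.0)   # OLUPTD plant G(s)
    return C * G

w = np.logspace(-2, 2, 200_000)              # frequency grid, rad/s
Lw = loop_gain(w)
mag, phase = np.abs(Lw), np.unwrap(np.angle(Lw))

i_gc = np.argmin(np.abs(mag - 1.0))          # gain crossover: |L(jw)| = 1
pm = 180.0 + np.degrees(phase[i_gc])         # phase margin candidate, degrees
```

    Sweeping (Kp, Ki, Kd) and keeping the gain sets whose margins meet the pre-specified bounds is the brute-force analogue of the paper's graphical GPMSOR construction.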

  13. Identifying Genes Involved in Cyclic Processes by Combining Gene Expression Analysis and Prior Knowledge

    PubMed Central

    2009-01-01

    Based on time series gene expressions, cyclic genes can be recognized via spectral analysis and statistical periodicity detection tests. These cyclic genes are usually associated with cyclic biological processes, for example, the cell cycle and circadian rhythm. The power of a scheme is practically measured by comparing the detected periodically expressed genes with experimentally verified genes participating in a cyclic process. However, in the above-mentioned procedure the valuable prior knowledge only serves as an evaluation benchmark, and it is not fully exploited in the implementation of the algorithm. In addition, partial data sets are disregarded due to their nonstationarity. This paper proposes a novel algorithm to identify cyclic-process-involved genes by integrating the prior knowledge with the gene expression analysis. The proposed algorithm is applied on data sets corresponding to Saccharomyces cerevisiae and Drosophila melanogaster, respectively. Biological evidence is found to validate the roles of the discovered genes in the cell cycle and circadian rhythm. Dendrograms are presented to cluster the identified genes and to reveal expression patterns. It is corroborated that the proposed identification scheme provides a valuable technique for unveiling pathways related to cyclic processes. PMID:19390635
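
    The spectral periodicity-detection step can be sketched with a simple periodogram: the dominant frequency bin of an expression time series gives its period estimate. This is a generic illustration on a noiseless synthetic series, not the paper's statistical test:

```python
import numpy as np

def dominant_period(x):
    # periodogram over positive frequencies (bin 0 = DC offset, excluded)
    spec = np.abs(np.fft.rfft(x))**2
    k = np.argmax(spec[1:]) + 1          # index of the strongest component
    return len(x) / k                    # period in samples

# Synthetic "expression" series with period 8 samples
t = np.arange(64)
x = np.sin(2 * np.pi * t / 8.0)
period = dominant_period(x)              # 8.0
```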

  14. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.

  15. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.

  16. A stable isotope approach and its application for identifying nitrate source and transformation process in water.

    PubMed

    Xu, Shiguo; Kang, Pingping; Sun, Ya

    2016-01-01

    Nitrate contamination of water is a worldwide environmental problem. Recent studies have demonstrated that the nitrogen (N) and oxygen (O) isotopes of nitrate (NO3(-)) can be used to trace nitrogen dynamics, including identifying nitrate sources and nitrogen transformation processes. This paper analyzes the current state of identifying nitrate sources and nitrogen transformation processes using N and O isotopes of nitrate. With regard to nitrate sources, δ(15)N-NO3(-) and δ(18)O-NO3(-) values typically vary between sources, allowing the sources to be isotopically fingerprinted. δ(15)N-NO3(-) is often effective at tracing NO3(-) sources from areas with different land use. δ(18)O-NO3(-) is more useful to identify NO3(-) from atmospheric sources. Isotopic data can be combined with statistical mixing models to quantify the relative contributions of NO3(-) from multiple delineated sources. With regard to N transformation processes, N and O isotopes of nitrate can be used to decipher the degree of nitrogen transformation by such processes as nitrification, assimilation, and denitrification. In some cases, however, isotopic fractionation may alter the isotopic fingerprint associated with the delineated NO3(-) source(s). This problem may be addressed by combining the N and O isotopic data with other types of data, including the concentration of selected conservative elements, e.g., chloride (Cl(-)), and boron (δ(11)B) and sulfur (δ(35)S) isotope data. Future studies should focus on improving stable isotope mixing models and furthering our understanding of isotopic fractionation by conducting laboratory and field experiments in different environments. PMID:26541149
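
    In the simplest two-source case, the statistical mixing models mentioned above reduce to a linear mass balance on the isotope ratios. A minimal sketch with illustrative δ(15)N values; the source labels and numbers are hypothetical, not measurements:

```python
# Two-endmember isotope mixing: estimate each source's fractional contribution
# from the isotope ratio of the mixture. All delta values are illustrative.
def two_source_mixing(delta_mix, delta_a, delta_b):
    # f_a * delta_a + (1 - f_a) * delta_b = delta_mix  =>  solve for f_a
    f_a = (delta_mix - delta_b) / (delta_a - delta_b)
    return f_a, 1.0 - f_a

# Hypothetical endmembers: "manure" at +10 permil, "fertilizer" at 0 permil
f_manure, f_fertilizer = two_source_mixing(delta_mix=4.0, delta_a=10.0, delta_b=0.0)
```

    Multi-source problems with measurement uncertainty require Bayesian mixing models rather than this closed-form two-endmember balance.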

  18. Global protein profiling studies of chikungunya virus infection identify different proteins but common biological processes.

    PubMed

    Smith, Duncan R

    2015-01-01

    Chikungunya fever (CHIKF), caused by the mosquito-transmitted chikungunya virus (CHIKV), swept into international prominence from late 2005 as an epidemic of CHIKF spread around countries surrounding the Indian Ocean. Although significant advances have been made in understanding the pathobiology of CHIKF, numerous questions remain. In the absence of commercially available specific drugs to treat the disease, or a vaccine to prevent it, these questions have particular significance. A number of studies have used global proteome analysis to increase our understanding of the process of CHIKV infection, using a number of different experimental techniques and systems. In all, over 700 proteins have been identified in nine different analyses by five different groups as being differentially regulated. Remarkably, only a single protein, eukaryotic elongation factor 2, has been identified by more than two different groups as being differentially regulated during CHIKV infection. This review provides a critical overview of the studies that have used global protein profiling to understand CHIKV infection and shows that while a broad consensus is emerging on which biological processes are altered during CHIKV infection, this consensus is poorly supported in terms of consistent identification of any key proteins mediating those biological processes.
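
    The cross-study comparison described here amounts to counting how often each protein recurs across independently reported hit lists. A toy sketch with hypothetical identifiers (EEF2 standing in for eukaryotic elongation factor 2; the other lists are invented, not the review's actual data):

```python
from collections import Counter

# Hypothetical differentially regulated protein lists from independent studies
studies = [
    {"EEF2", "HSPA5", "VIM"},
    {"EEF2", "ANXA2"},
    {"EEF2", "HSPA5"},
    {"TUBB", "ACTB"},
]

counts = Counter(p for s in studies for p in s)
recurrent = {p for p, c in counts.items() if c >= 3}   # reported by 3+ studies
```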

  19. A computation using mutually exclusive processing is sufficient to identify specific Hedgehog signaling components

    PubMed Central

    Spratt, Spencer J.

    2013-01-01

    A system of more than one part can be deciphered by observing differences between the parts. A simple way to do this is by recording something absolute displaying a trait in one part and not in another: in other words, mutually exclusive computation. Conditional directed expression in vivo offers processing in more than one part of the system giving increased computation power for biological systems analysis. Here, I report the consideration of these aspects in the development of an in vivo screening assay that appears sufficient to identify components specific to a system. PMID:24391661

  20. Objectively identifying landmark use and predicting flight trajectories of the homing pigeon using Gaussian processes.

    PubMed

    Mann, Richard; Freeman, Robin; Osborne, Michael; Garnett, Roman; Armstrong, Chris; Meade, Jessica; Biro, Dora; Guilford, Tim; Roberts, Stephen

    2011-02-01

    Pigeons home along idiosyncratic habitual routes from familiar locations. It has been suggested that memorized visual landmarks underpin this route learning. However, the inability to experimentally alter the landscape on large scales has hindered the discovery of the particular features to which birds attend. Here, we present a method for objectively classifying the most informative regions of animal paths. We apply this method to flight trajectories from homing pigeons to identify probable locations of salient visual landmarks. We construct and apply a Gaussian process model of flight trajectory generation for pigeons trained to home from specific release sites. The model shows increasing predictive power as the birds become familiar with the sites, mirroring the animal's learning process. We subsequently find that the most informative elements of the flight trajectories coincide with landscape features that have previously been suggested as important components of the homing task.
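
    The trajectory model can be illustrated with a minimal Gaussian process regression sketch (RBF kernel, posterior mean only) on a 1-D stand-in for a flight path. Hyperparameters here are fixed illustrative values rather than fitted, unlike the paper's model:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # squared-exponential (RBF) kernel between 1-D input vectors
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_mean(x_train, y_train, x_test, noise_var=1e-2):
    # GP posterior mean: k(x*, X) (K + sigma^2 I)^-1 y
    K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 6.0, 20)
y = np.sin(x)                     # smooth stand-in for one trajectory coordinate
pred = gp_mean(x, y, x)
```

    In the paper's setting, growing predictive power of such a model over successive releases mirrors the bird's route learning.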

  1. Identifying potential misfit items in cognitive process of learning engineering mathematics based on Rasch model

    NASA Astrophysics Data System (ADS)

    Ataei, Sh; Mahmud, Z.; Khalid, M. N.

    2014-04-01

    Student learning outcomes clarify what students should know and be able to demonstrate after completing their course. Thus, one of the issues in the process of teaching and learning is how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model in measuring the cognitive process of engineering students' learning of mathematics. The study provides insights into the cognitive ability of 54 engineering students learning Calculus III, based on Bloom's Taxonomy, across 31 items. The results denote that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields FIT statistics which are able to identify whether the data depart from the Rasch theoretical model. The study has identified some potential misfit items based on ZSTD measurements, where misfit items were removed when the outfit MNSQ was above 1.3 or below 0.7 logits. Therefore, it is recommended that these items be reviewed or revised to better match the range of students' ability in the respective course.
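
    The dichotomous Rasch model underlying this analysis gives the probability of a correct response as a logistic function of the gap between person ability and item difficulty (both in logits):

```python
import math

# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
# where theta is person ability and b is item difficulty, both in logits.
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

rasch_p(0.0, 0.0)   # 0.5: item difficulty exactly matches ability
```

    Items whose difficulty sits far outside the ability range of the cohort yield probabilities near 0 or 1 for everyone, which is what the "too difficult or too easy" finding above reflects.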

  2. Identifying Repetitive Institutional Review Board Stipulations by Natural Language Processing and Network Analysis.

    PubMed

    Kury, Fabrício S P; Cimino, James J

    2015-01-01

    The corrections ("stipulations") to a proposed research study protocol produced by an institutional review board (IRB) can often be repetitive across many studies; however, there is no standard set of stipulations that could be used, for example, by researchers wishing to anticipate and correct problems in their research proposals prior to submitting to an IRB. The objective of this research was to computationally identify the most repetitive types of stipulations generated in the course of IRB deliberations. The text of each stipulation was normalized using natural language processing techniques. An undirected weighted network was constructed in which each stipulation was represented by a node, and each link, if present, had a weight corresponding to the TF-IDF cosine similarity of the stipulations. Network analysis software was then used to identify clusters in the network representing similar stipulations. The final results were correlated with additional data to produce further insights about the IRB workflow. From a corpus of 18,582 stipulations we identified 31 types of repetitive stipulations. Those types accounted for 3,870 stipulations (20.8% of the corpus) produced for 697 (88.7%) of all protocols in 392 (also 88.7%) of all the CNS IRB meetings with stipulations entered in our data source. A notable proportion of the corrections produced by the IRB can be considered highly repetitive. Our shareable method relied on minimal manual analysis and provides an intuitive exploration with theoretically unbounded granularity, without requiring identification of IRB panel expertise or extensive human supervision.
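
    The TF-IDF cosine-similarity weighting of links can be sketched as follows. The tokenization is deliberately naive (lowercased whitespace split), the IDF smoothing is one common variant among several, and the stipulation texts are invented examples, not corpus data:

```python
import math
from collections import Counter

def tfidf(docs):
    # term frequency weighted by smoothed inverse document frequency
    tokens = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(w for t in tokens for w in set(t))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: c * idf[w] for w, c in Counter(t).items()} for t in tokens]

def cosine(u, v):
    # cosine similarity between two sparse (dict) vectors
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

stipulations = [
    "please revise the consent form",
    "please revise the consent form",
    "update study team contact information",
]
vecs = tfidf(stipulations)
```

    In the paper's pipeline these pairwise similarities become edge weights, and clusters of high-similarity nodes surface the repetitive stipulation types.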

  3. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks

    PubMed Central

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff, and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  5. Scan-pattern and signal processing for microvasculature visualization with complex SD-OCT: tissue-motion artifacts robustness and decorrelation time - blood vessel characteristics

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Zaitsev, Vladimir Y.; Gelikonov, Grigory V.; Matveyev, Alexandr L.; Moiseev, Alexander A.; Ksenofontov, Sergey Y.; Gelikonov, Valentin M.; Demidov, Valentin; Vitkin, Alex

    2015-03-01

    We propose a modification of OCT scanning pattern and corresponding signal processing for 3D visualizing blood microcirculation from complex-signal B-scans. We describe the scanning pattern modifications that increase the methods' robustness to bulk tissue motion artifacts, with speed up to several cm/s. Based on these modifications, OCT-based angiography becomes more realistic under practical measurement conditions. For these scan patterns, we apply novel signal processing to separate the blood vessels with different decorrelation times, by varying of effective temporal diversity of processed signals.

  6. Pharmaceutical screen identifies novel target processes for activation of autophagy with a broad translational potential

    PubMed Central

    Chauhan, Santosh; Ahmed, Zahra; Bradfute, Steven B.; Arko-Mensah, John; Mandell, Michael A.; Won Choi, Seong; Kimura, Tomonori; Blanchet, Fabien; Waller, Anna; Mudd, Michal H.; Jiang, Shanya; Sklar, Larry; Timmins, Graham S.; Maphis, Nicole; Bhaskar, Kiran; Piguet, Vincent; Deretic, Vojo

    2015-01-01

    Autophagy is a conserved homeostatic process active in all human cells and affecting a spectrum of diseases. Here we use a pharmaceutical screen to discover new mechanisms for activation of autophagy. We identify a subset of pharmaceuticals inducing autophagic flux with effects in diverse cellular systems modelling specific stages of several human diseases such as HIV transmission and hyperphosphorylated tau accumulation in Alzheimer's disease. One drug, flubendazole, is a potent inducer of autophagy initiation and flux by affecting acetylated and dynamic microtubules in a reciprocal way. Disruption of dynamic microtubules by flubendazole results in mTOR deactivation and dissociation from lysosomes leading to TFEB (transcription factor EB) nuclear translocation and activation of autophagy. By inducing microtubule acetylation, flubendazole activates JNK1 leading to Bcl-2 phosphorylation, causing release of Beclin1 from Bcl-2-Beclin1 complexes for autophagy induction, thus uncovering a new approach to inducing autophagic flux that may be applicable in disease treatment. PMID:26503418

  7. Towards a typology of business process management professionals: identifying patterns of competences through latent semantic analysis

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Schmiedel, Theresa; Gorbacheva, Elena; vom Brocke, Jan

    2016-01-01

    While researchers have analysed the organisational competences that are required for successful Business Process Management (BPM) initiatives, individual BPM competences have not yet been studied in detail. In this study, latent semantic analysis is used to examine a collection of 1507 BPM-related job advertisements in order to develop a typology of BPM professionals. This empirical analysis reveals distinct ideal types and profiles of BPM professionals on several levels of abstraction. A closer look at these ideal types and profiles confirms that BPM is a boundary-spanning field that requires interdisciplinary sets of competence that range from technical competences to business and systems competences. Based on the study's findings, it is posited that individual and organisational alignment with the identified ideal types and profiles is likely to result in high employability and organisational BPM success.

  8. Identifying Areas for Improvement in the HIV Screening Process of a High-Prevalence Emergency Department.

    PubMed

    Zucker, Jason; Cennimo, David; Sugalski, Gregory; Swaminthan, Shobha

    2016-06-01

    Since 1993, the Centers for Disease Control recommendations for HIV testing were extended to include persons obtaining care in the emergency department (ED). Situated in Newark, New Jersey, the University Hospital (UH) ED serves a community with a greater than 2% HIV prevalence, and a recent study showed a UH ED HIV seroprevalence of 6.5%, of which 33% were unknown diagnoses. Electronic records for patients seen in the UH ED from October 1st, 2014, to February 28th, 2015, were obtained. Information was collected on demographics, ED diagnosis, triage time, and HIV testing. Random sampling of 500 patients was performed to identify those eligible for screening. Univariate and multivariate analysis was done to assess screening characteristics. Only 9% (8.8-9.3%) of patients eligible for screening were screened in the ED. Sixteen percent (15.7-16.6%) of those in the age group18-25 and 12% (11.6-12.3%) of those in the age group 26-35 were screened, whereas 8% (7.8-8.2%) of those in the age group 35-45 were screened. 19.6% (19-20.1%) of eligible patients in fast track were screened versus 1.7% (1.6-1.8%) in the main ED. Eighty-five percent of patients screened were triaged between 6 a.m. and 8 p.m. with 90% of all screening tests done by the HIV counseling, testing, and referral services. Due to the high prevalence of HIV, urban EDs play an integral public health role in the early identification and linkage to care of patients with HIV. By evaluating our current screening process, we identified opportunities to improve our screening process and reduce missed opportunities for diagnosis.

  9. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    The atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) has been repeatedly discovered by previous research. The present study examined whether their face scanning patterns could be potentially useful to identify children with ASD by adopting the machine learning algorithm for the classification purpose. Particularly, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity of classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary with some constraints that may apply in the clinical practice. Future research should shed light on further valuation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  10. An event-specific DNA microarray to identify genetically modified organisms in processed foods.

    PubMed

    Kim, Jae-Hwan; Kim, Su-Youn; Lee, Hyungjae; Kim, Young-Rok; Kim, Hae-Yeong

    2010-05-26

    We developed an event-specific DNA microarray system to identify 19 genetically modified organisms (GMOs), including two GM soybeans (GTS-40-3-2 and A2704-12), thirteen GM maizes (Bt176, Bt11, MON810, MON863, NK603, GA21, T25, TC1507, Bt10, DAS59122-7, TC6275, MIR604, and LY038), three GM canolas (GT73, MS8xRF3, and T45), and one GM cotton (LLcotton25). The microarray included 27 oligonucleotide probes optimized to identify endogenous reference targets, event-specific targets, screening targets (35S promoter and nos terminator), and an internal target (18S rRNA gene). Thirty-seven maize-containing food products purchased from South Korean and US markets were tested for the presence of GM maize using this microarray system. Thirteen GM maize events were simultaneously detected using multiplex PCR coupled with microarray on a single chip, at a limit of detection of approximately 0.5%. Using the system described here, we detected GM maize in 11 of the 37 food samples tested. These results suggest that an event-specific DNA microarray system can reliably detect GMOs in processed foods.

  11. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes.

  12. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    The atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) has been repeatedly discovered by previous research. The present study examined whether their face scanning patterns could be potentially useful to identify children with ASD by adopting the machine learning algorithm for the classification purpose. Particularly, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity of classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary with some constraints that may apply in the clinical practice. Future research should shed light on further valuation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. PMID:27037971

  13. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes. PMID:26671796

  14. A Comparative Study on Retirement Process in Korea, Germany, and the United States: Identifying Determinants of Retirement Process.

    PubMed

    Cho, Joonmo; Lee, Ayoung; Woo, Kwangho

    2016-10-01

    This study classifies the retirement process and empirically identifies the individual and institutional characteristics determining the retirement process of the aged in South Korea, Germany, and the United States. Using data from the Cross-National Equivalent File, we use a multinomial logistic regression with individual factors, public pension, and an interaction term between an occupation and an education level. We found that in Germany, the elderly with a higher education level were more likely to continue work after retirement with a relatively well-developed social support system, while in Korea, the elderly, with a lower education level in almost all occupation sectors, tended to work off and on after retirement. In the United States, the public pension and the interaction terms have no statistically significant impact on work after retirement. In both Germany and Korea, receiving a higher pension decreased the probability of working after retirement, but the influence of a pension in Korea was much greater than that of Germany. In South Korea, the elderly workers, with lower education levels, tended to work off and on repeatedly because there is no proper security in both the labor market and pension system.

  15. A Comparative Study on Retirement Process in Korea, Germany, and the United States: Identifying Determinants of Retirement Process.

    PubMed

    Cho, Joonmo; Lee, Ayoung; Woo, Kwangho

    2016-10-01

    This study classifies the retirement process and empirically identifies the individual and institutional characteristics determining the retirement process of the aged in South Korea, Germany, and the United States. Using data from the Cross-National Equivalent File, we use a multinomial logistic regression with individual factors, public pension, and an interaction term between an occupation and an education level. We found that in Germany, the elderly with a higher education level were more likely to continue work after retirement with a relatively well-developed social support system, while in Korea, the elderly, with a lower education level in almost all occupation sectors, tended to work off and on after retirement. In the United States, the public pension and the interaction terms have no statistically significant impact on work after retirement. In both Germany and Korea, receiving a higher pension decreased the probability of working after retirement, but the influence of a pension in Korea was much greater than that of Germany. In South Korea, the elderly workers, with lower education levels, tended to work off and on repeatedly because there is no proper security in both the labor market and pension system. PMID:27388889

  16. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael E.

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  17. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features ( X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740

  18. Robust Methods in Qsar

    NASA Astrophysics Data System (ADS)

    Walczak, Beata; Daszykowski, Michał; Stanimirova, Ivana

    A large progress in the development of robust methods as an efficient tool for processing of data contaminated with outlying objects has been made over the last years. Outliers in the QSAR studies are usually the result of an improper calculation of some molecular descriptors and/or experimental error in determining the property to be modelled. They influence greatly any least square model, and therefore the conclusions about the biological activity of a potential component based on such a model are misleading. With the use of robust approaches, one can solve this problem building a robust model describing the data majority well. On the other hand, the proper identification of outliers may pinpoint a new direction of a drug development. The outliers' assessment can exclusively be done with robust methods and these methods are to be described in this chapter

  19. Identifying Abdominal Aortic Aneurysm Cases and Controls using Natural Language Processing of Radiology Reports.

    PubMed

    Sohn, Sunghwan; Ye, Zi; Liu, Hongfang; Chute, Christopher G; Kullo, Iftikhar J

    2013-01-01

    Prevalence of abdominal aortic aneurysm (AAA) is increasing due to longer life expectancy and implementation of screening programs. Patient-specific longitudinal measurements of AAA are important to understand pathophysiology of disease development and modifiers of abdominal aortic size. In this paper, we applied natural language processing (NLP) techniques to process radiology reports and developed a rule-based algorithm to identify AAA patients and also extract the corresponding aneurysm size with the examination date. AAA patient cohorts were determined by a hierarchical approach that: 1) selected potential AAA reports using keywords; 2) classified reports into AAA-case vs. non-case using rules; and 3) determined the AAA patient cohort based on a report-level classification. Our system was built in an Unstructured Information Management Architecture framework that allows efficient use of existing NLP components. Our system produced an F-score of 0.961 for AAA-case report classification with an accuracy of 0.984 for aneurysm size extraction. PMID:24303276

  20. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record

    PubMed Central

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I. M.

    2012-01-01

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records. PMID:22734056

  1. Identifying and Processing the Gap Between Perceived and Actual Agreement in Breast Pathology Interpretation

    PubMed Central

    Carney, Patricia A.; Allison, Kimberly H.; Oster, Natalia V.; Frederick, Paul D.; Morgan, Thomas R.; Geller, Berta M.; Weaver, Donald L.; Elmore, Joann G.

    2016-01-01

    We examined how pathologists’ process their perceptions of how their interpretations on diagnoses for breast pathology cases agree with a reference standard. To accomplish this, we created an individualized self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations on a test set compared to a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared to the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts about learning about differences in their perceptions compared to actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement), especially for atypia and were least likely to overestimate it for invasive breast cancer. Non-academic affiliated pathologists were more likely to more closely estimate their performance relative to academic affiliated pathologists (77.6% versus 48%; p=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses compared to non-academic affiliated pathologists (40% versus 6%). Prior to the continuing medical education program, nearly 55% (54.9%) of participants could not estimate whether they would over-interpret the cases or under-interpret them relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would change their practice to improve. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement

  2. Identifying and processing the gap between perceived and actual agreement in breast pathology interpretation.

    PubMed

    Carney, Patricia A; Allison, Kimberly H; Oster, Natalia V; Frederick, Paul D; Morgan, Thomas R; Geller, Berta M; Weaver, Donald L; Elmore, Joann G

    2016-07-01

    We examined how pathologists' process their perceptions of how their interpretations on diagnoses for breast pathology cases agree with a reference standard. To accomplish this, we created an individualized self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations on a test set compared with a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared with the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts about learning about differences in their perceptions compared with actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement), especially for atypia and were least likely to overestimate it for invasive breast cancer. Non-academic affiliated pathologists were more likely to more closely estimate their performance relative to academic affiliated pathologists (77.6 vs 48%; P=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses compared with non-academic affiliated pathologists (40 vs 6%). Before the continuing medical education program, nearly 55% (54.9%) of participants could not estimate whether they would overinterpret the cases or underinterpret them relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would change their practice to improve. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement with a

  3. Identifying vegetation's influence on multi-scale fluvial processes based on plant trait adaptations

    NASA Astrophysics Data System (ADS)

    Manners, R.; Merritt, D. M.; Wilcox, A. C.; Scott, M.

    2015-12-01

    Riparian vegetation-geomorphic interactions are critical to the physical and biological function of riparian ecosystems, yet we lack a mechanistic understanding of these interactions and predictive ability at the reach to watershed scale. Plant functional groups, or groupings of species that have similar traits, either in terms of a plant's life history strategy (e.g., drought tolerance) or morphology (e.g., growth form), may provide an expression of vegetation-geomorphic interactions. We are developing an approach that 1) identifies where along a river corridor plant functional groups exist and 2) links the traits that define functional groups and their impact on fluvial processes. The Green and Yampa Rivers in Dinosaur National Monument have wide variations in hydrology, hydraulics, and channel morphology, as well as a large dataset of species presence. For these rivers, we build a predictive model of the probable presence of plant functional groups based on site-specific aspects of the flow regime (e.g., inundation probability and duration), hydraulic characteristics (e.g., velocity), and substrate size. Functional group traits are collected from the literature and measured in the field. We found that life-history traits more strongly predicted functional group presence than did morphological traits. However, some life-history traits, important for determining the likelihood of a plant existing along an environmental gradient, are directly related to the morphological properties of the plant, important for the plant's impact on fluvial processes. For example, stem density (i.e., dry mass divided by volume of stem) is positively correlated to drought tolerance and is also related to the modulus of elasticity. Growth form, which is related to the plant's susceptibility to biomass-removing fluvial disturbances, is also related to frontal area. Using this approach, we can identify how plant community composition and distribution shifts with a change to the flow

  4. Identifying sources and processes controlling the sulphur cycle in the Canyon Creek watershed, Alberta, Canada.

    PubMed

    Nightingale, Michael; Mayer, Bernhard

    2012-01-01

    Sources and processes affecting the sulphur cycle in the Canyon Creek watershed in Alberta (Canada) were investigated. The catchment is important for water supply and recreational activities and is also a source of oil and natural gas. Water was collected from 10 locations along an 8 km stretch of Canyon Creek including three so-called sulphur pools, followed by the chemical and isotopic analyses on water and its major dissolved species. The δ(2)H and δ(18)O values of the water plotted near the regional meteoric water line, indicating a meteoric origin of the water and no contribution from deeper formation waters. Calcium, magnesium and bicarbonate were the dominant ions in the upstream portion of the watershed, whereas sulphate was the dominant anion in the water from the three sulphur pools. The isotopic composition of sulphate (δ(34)S and δ(18)O) revealed three major sulphate sources with distinct isotopic compositions throughout the catchment: (1) a combination of sulphate from soils and sulphide oxidation in the bedrock in the upper reaches of Canyon Creek; (2) sulphide oxidation in pyrite-rich shales in the lower reaches of Canyon Creek and (3) dissolution of Devonian anhydrite constituting the major sulphate source for the three sulphur pools in the central portion of the watershed. The presence of H(2)S in the sulphur pools with δ(34)S values ∼30 ‰ lower than those of sulphate further indicated the occurrence of bacterial (dissimilatory) sulphate reduction. This case study reveals that δ(34)S values of surface water systems can vary by more than 20 ‰ over short geographic distances and that isotope analyses are an effective tool to identify sources and processes that govern the sulphur cycle in watersheds.

  6. A more robust model of the biodiesel reaction, allowing identification of process conditions for significantly enhanced rate and water tolerance.

    PubMed

    Eze, Valentine C; Phan, Anh N; Harvey, Adam P

    2014-03-01

    A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally and validated numerically in a model implemented using MATLAB. It was found that including the saponification of RSO and FAME as side reactions, together with hydroxide-methoxide equilibrium data, explained various effects that are not captured by simpler conventional models. Both the experiments and the modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and that the reaction mixture must be quenched rapidly as it leaves the reactor. PMID:24508659
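    The competition between fast transesterification and slower saponification that drives these conclusions can be caricatured with a toy ODE model. The lumped one-step kinetics and all rate constants below are invented for illustration; the paper's MATLAB model resolves the full stepwise reaction set and the hydroxide-methoxide equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped rate constants (not fitted to any data).
K_TRANS = 2.0   # catalysed transesterification
K_SAP = 0.02    # FAME saponification, which also consumes catalyst

def rhs(t, y):
    tg, meoh, fame, cat = y  # triglyceride, methanol, FAME, catalyst (mol/L)
    r_t = K_TRANS * tg * meoh * cat
    r_s = K_SAP * fame * cat
    return [-r_t, -3.0 * r_t, 3.0 * r_t - r_s, -r_s]

y0 = [1.0, 6.0, 0.0, 0.05]  # 6:1 methanol:oil, 5 mol% catalyst (illustrative)
sol = solve_ivp(rhs, (0.0, 10.0), y0, dense_output=True, max_step=0.1)
conv = 1.0 - sol.y[0] / y0[0]
print(f"conversion after 2 min: {1.0 - sol.sol(2.0)[0] / y0[0]:.3f}")
```

    Even in this cartoon, conversion runs far ahead of the slow saponification loss at short times, which is the window a well-mixed, residence-time-controlled reactor is meant to exploit.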

  7. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper compares the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed a direct change detection approach using two sets of images, acquired before and after the tornado event, to produce a principal component composite image and a set of image difference bands. The techniques compared include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image against commission and omission errors. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection, while the PCA and image differencing methods show comparable outcomes. Selected PCs can improve detection accuracy by 5 to 10%, but the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
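    The Kappa coefficient used for the accuracy assessment is straightforward to compute from an error matrix. A minimal sketch, using a hypothetical two-class (damage / no-damage) matrix rather than the paper's data:

```python
import numpy as np

def kappa(cm):
    """Cohen's Kappa from a square error (confusion) matrix.

    Rows are reference classes, columns are mapped classes; Kappa measures
    diagonal agreement corrected for agreement expected by chance.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_obs = np.trace(cm) / n                                 # observed agreement
    p_exp = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical damage / no-damage matrix: the off-diagonal entries are the
# omission and commission errors the abstract refers to.
cm = [[86, 14],
      [10, 90]]
print(round(kappa(cm), 3))  # → 0.76
```

    Unlike raw percent agreement (0.88 here), Kappa discounts the agreement a random classifier would achieve, which is why it is preferred for comparing classifiers.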

  8. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process

    PubMed Central

    Erzurumluoglu, A. Mesut; Shihab, Hashem A.; Baird, Denis; Richardson, Tom G.; Day, Ian N. M.; Gaunt, Tom R.

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). The wide range of methods and diversity of file formats used in sequence analysis are a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many possess “just enough” knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data was created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review also compares the methods used to identify highly penetrant variants when data are obtained from consanguineous individuals as opposed to nonconsanguineous ones, and when Mendelian disorders are analysed as opposed to common-complex disorders. PMID:26106619

  9. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process.

    PubMed

    Erzurumluoglu, A Mesut; Rodriguez, Santiago; Shihab, Hashem A; Baird, Denis; Richardson, Tom G; Day, Ian N M; Gaunt, Tom R

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). The wide range of methods and diversity of file formats used in sequence analysis are a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many possess "just enough" knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data was created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review also compares the methods used to identify highly penetrant variants when data are obtained from consanguineous individuals as opposed to nonconsanguineous ones, and when Mendelian disorders are analysed as opposed to common-complex disorders.

  10. Identifying weathering processes by Si isotopes in two small catchments in the Black Forest (Germany)

    NASA Astrophysics Data System (ADS)

    Steinhoefel, G.; Breuer, J.; von Blanckenburg, F.; Horn, I.; Kaczorek, D.; Sommer, M.

    2013-12-01

    isotopically light Si with Fe-oxides, which shifts surface water to δ30Si values up to 1.1‰. The Si isotope signature of the main stream depends on variable proportion of inflowing surface water and groundwater. The results on these small catchments demonstrate that Si isotopes are a powerful tool to identify weathering processes and the sources of dissolved Si, which can now be used to constrain the isotope signature of large river systems.

  11. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    SciTech Connect

    Hurlbert, Allen H.; Stegen, James C.

    2014-12-02

    Many processes have been put forward to explain the latitudinal gradient in species richness. Here, we use a simulation model to examine four of the most common hypotheses and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The productivity, or energetic constraints, hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The tropical stability hypothesis argues that major climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. (4) Finally, the speciation rates hypothesis suggests that the latitudinal richness gradient arises from a parallel gradient in rates of speciation. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all exhibited phylogenies which were more imbalanced than expected, showed a negative relationship between mean root distance and richness, and diversity-dependence of speciation rate estimates through time. Using Bayesian Analysis of Macroevolutionary Mixtures on the simulated phylogenies, we found that the relationship between speciation rates and latitude could distinguish among these three scenarios. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with more than one hypothesis.

  12. Robust conversion of marrow cells to skeletal muscle with formation of marrow-derived muscle cell colonies: A multifactorial process

    SciTech Connect

    Abedi, Mehrdad; Greer, Deborah A.; Colvin, Gerald A.; Demers, Delia A.; Dooner, Mark S.; Harpel, Jasha A.; Weier, Heinz-Ulrich G.; Lambert, Jean-Francois; Quesenberry, Peter J.

    2004-01-10

    Murine marrow cells are capable of repopulating skeletal muscle fibers. A point of concern has been the robustness of such conversions. We have investigated the impact of the type of cell delivery, muscle injury, the nature of the delivered cells, and stem cell mobilization on marrow-to-muscle conversion. We transplanted GFP transgenic marrow into irradiated C57BL/6 mice and then injured the anterior tibialis muscle with cardiotoxin. One month after injury, sections were analyzed by standard and deconvolutional microscopy for expression of muscle and hematopoietic markers. Irradiation was essential to conversion, although whether by injury or induction of chimerism is not clear. Cardiotoxin-injected, and to a lesser extent PBS-injected, muscles showed a significant number of GFP+ muscle fibers, while uninjected muscles showed only rare GFP+ cells. Marrow conversion to muscle was increased by two cycles of G-CSF mobilization and, to a lesser extent, with G-CSF and steel factor or GM-CSF. Transplantation of female GFP marrow to male C57BL/6 mice and GFP marrow to Rosa26 mice showed fusion of donor cells to recipient muscle. High numbers of donor-derived muscle colonies and up to 12 percent GFP-positive muscle cells were seen after mobilization or direct injection. These levels of donor muscle chimerism approach levels which could be clinically significant in developing strategies for the treatment of muscular dystrophies. In summary, the conversion of marrow to skeletal muscle cells is based on cell fusion and is critically dependent on injury. This conversion is also numerically significant and increases with mobilization.

  13. Energy Landscape Reveals That the Budding Yeast Cell Cycle Is a Robust and Adaptive Multi-stage Process

    PubMed Central

    Lv, Cheng; Li, Xiaoguang; Li, Fangting; Li, Tiejun

    2015-01-01

    Quantitatively understanding the robustness, adaptivity and efficiency of cell cycle dynamics under the influence of noise is a fundamental but difficult question to answer for most eukaryotic organisms. Using a simplified budding yeast cell cycle model perturbed by intrinsic noise, we systematically explore these issues from an energy landscape point of view by constructing an energy landscape for the considered system based on large deviation theory. Analysis shows that the cell cycle trajectory is sharply confined by the ambient energy barrier, and the landscape along this trajectory exhibits a generally flat shape. We explain the evolution of the system on this flat path by incorporating its non-gradient nature. Furthermore, we illustrate how this global landscape changes in response to external signals, observing a nice transformation of the landscapes as the excitable system approaches a limit cycle system when nutrients are sufficient, as well as the formation of additional energy wells when the DNA replication checkpoint is activated. By taking into account the finite volume effect, we find additional pits along the flat cycle path in the landscape associated with the checkpoint mechanism of the cell cycle. The difference between the landscapes induced by intrinsic and extrinsic noise is also discussed. In our opinion, this meticulous structure of the energy landscape for our simplified model is of general interest to other cell cycle dynamics, and the proposed methods can be applied to study similar biological systems. PMID:25794282

  14. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  15. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and therefore is not pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, we present the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research. In section two, we present the results of our proof-of-principle experiments. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.
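    As a cartoon of the fixed-controller philosophy (not the actual LANL rf hardware), the sketch below shows a simple integral feedback law holding its output at the setpoint while the uncertain plant gain varies over a factor of four, which is the kind of invariance a robust design is asked to deliver:

```python
def run(plant_gain, steps=200, ki=0.2, setpoint=1.0):
    """Integral feedback on a static plant with uncertain gain.

    The loop converges to the setpoint whenever 0 < plant_gain * ki < 2,
    so one fixed controller tolerates a wide range of plant gains.
    """
    integ = 0.0
    u = 0.0
    for _ in range(steps):
        y = plant_gain * u          # uncertain plant
        integ += setpoint - y       # accumulate tracking error
        u = ki * integ              # fixed controller, no gain adaptation
    return plant_gain * u

for g in (0.5, 1.0, 2.0):           # 4x spread in plant gain
    print(round(run(g), 6))         # → 1.0 each time
```

    The price of this robustness, as the abstract notes, is that the stability margin (here, the bound on plant_gain * ki) must be established from a reasonably accurate plant model up front.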

  16. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    NASA Astrophysics Data System (ADS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-09-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to ‑1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determine the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic-, transport- and supply chain management.
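    The noise-free sample space reducing process is easy to simulate directly. The sketch below (state-space size and run count arbitrary) recovers the Zipf scaling, with the visiting frequency of state i proportional to 1/i:

```python
import random
from collections import Counter

def ssr_visits(n_states, runs, rng):
    """Noise-free SSRP: from state i, jump uniformly to a state in 1..i-1;
    restart from the top once state 1 is reached."""
    visits = Counter()
    for _ in range(runs):
        i = n_states
        while i > 1:
            i = rng.randint(1, i - 1)  # the sample space shrinks every step
            visits[i] += 1
    return visits

rng = random.Random(42)
v = ssr_visits(1000, 20_000, rng)
# Zipf's law: state i is visited with probability 1/i per run,
# so v[1] / v[10] should be close to 10.
print(v[1] / v[10])
```

    Every run necessarily ends in state 1, so v[1] equals the number of runs exactly; lower-ranked states are visited in inverse proportion to their rank, the exponent -1 result quoted above.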

  18. On the robustness of the r-process in neutron-star mergers against variations of nuclear masses

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Wu, M. R.; Martínez-Pinedo, G.; Langanke, K.; Bauswein, A.; Janka, H.-T.; Frank, A.

    2016-07-01

    r-process calculations have been performed for matter ejected dynamically in neutron star mergers (NSM). These calculations are based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamics (SPH) simulation. Our calculations consider an extended nuclear reaction network, including spontaneous, β- and neutron-induced fission, and adopt fission yield distributions from the ABLA code. In this contribution we have studied the sensitivity of the r-process abundances to nuclear masses by using different mass models for the calculation of neutron capture cross sections via the statistical model. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion allowing for all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundance pattern (the second and third peaks, the rare-earth peak and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130.

  19. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... identify covered persons? (a) Recipients of funds for qualified job training programs must implement...-Stop Career Center established pursuant to the Workforce Investment Act of 1998, as part of an... covered person status; and (ii) Permit those qualified job training programs specified in §...

  20. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... identify covered persons? (a) Recipients of funds for qualified job training programs must implement...-Stop Career Center established pursuant to the Workforce Investment Act of 1998, as part of an... covered person status; and (ii) Permit those qualified job training programs specified in §...

  1. Identifying Leadership Potential: The Process of Principals within a Charter School Network

    ERIC Educational Resources Information Center

    Waidelich, Lynn A.

    2012-01-01

    The importance of strong educational leadership for American K-12 schools cannot be overstated. As such, school districts need to actively recruit and develop leaders. One way to do so is for school officials to become more strategic in leadership identification and development. If contemporary leaders are strategic about whom they identify and…

  2. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... persons at the point of entry must be designed to: (i) Permit the individual to make known his or her covered person status; and (ii) Permit those qualified job training programs specified in §...

  3. Students' Conceptual Knowledge and Process Skills in Civic Education: Identifying Cognitive Profiles and Classroom Correlates

    ERIC Educational Resources Information Center

    Zhang, Ting; Torney-Purta, Judith; Barber, Carolyn

    2012-01-01

    In 2 related studies framed by social constructivism theory, the authors explored a fine-grained analysis of adolescents' civic conceptual knowledge and skills and investigated them in relation to factors such as teachers' qualifications and students' classroom experiences. In Study 1 (with about 2,800 U.S. students), the authors identified 4…

  4. Identifying the hazard characteristics of powder byproducts generated from semiconductor fabrication processes.

    PubMed

    Choi, Kwang-Min; An, Hee-Chul; Kim, Kwan-Sick

    2015-01-01

    Semiconductor manufacturing processes generate powder particles as byproducts which could potentially affect workers' health. The chemical composition, size, shape, and crystal structure of these powder particles were investigated by scanning electron microscopy equipped with an energy dispersive spectrometer, Fourier transform infrared spectrometry, and X-ray diffractometry. The powders generated in the diffusion and chemical mechanical polishing processes were amorphous silica. The particles from the chemical vapor deposition (CVD) and etch processes were TiO(2) and Al(2)O(3) particles, and Al(2)O(3) particles, respectively. As for metallization, WO(3), TiO(2), and Al(2)O(3) particles were generated from equipment used for tungsten and barrier metal (TiN) operations. In photolithography, the powder particles were 1-10 μm in size and spherical in shape. In addition, the powders generated from the high-current and medium-current ion implantation processes included arsenic (As), whereas those from the high-energy process did not. For all samples collected using a personal air sampler during preventive maintenance of process equipment, the mass concentrations of total airborne particles were < 1 μg, the detection limit of the microbalance. In addition, the mean mass concentrations of airborne PM10 (particles less than 10 μm in diameter) measured with a direct-reading aerosol monitor by area sampling were between 0.00 and 0.02 μg/m(3). Although the exposure concentration of airborne particles during preventive maintenance is extremely low, it is necessary to make continuous improvements to the process and work environment, because the influence of chronic low-level exposure cannot be excluded.

  5. Identifying temporal and causal contributions of neural processes underlying the Implicit Association Test (IAT)

    PubMed Central

    Forbes, Chad E.; Cameron, Katherine A.; Grafman, Jordan; Barbey, Aron; Solomon, Jeffrey; Ritter, Walter; Ruchkin, Daniel S.

    2012-01-01

    The Implicit Association Test (IAT) is a popular behavioral measure that assesses the associative strength between outgroup members and stereotypical and counterstereotypical traits. Less is known, however, about the degree to which the IAT reflects automatic processing. Two studies examined automatic processing contributions to a gender-IAT using a data driven, social neuroscience approach. Performance on congruent (e.g., categorizing male names with synonyms of strength) and incongruent (e.g., categorizing female names with synonyms of strength) IAT blocks were separately analyzed using EEG (event-related potentials, or ERPs, and coherence; Study 1) and lesion (Study 2) methodologies. Compared to incongruent blocks, performance on congruent IAT blocks was associated with more positive ERPs that manifested in frontal and occipital regions at automatic processing speeds, occipital regions at more controlled processing speeds and was compromised by volume loss in the anterior temporal lobe (ATL), insula and medial PFC. Performance on incongruent blocks was associated with volume loss in supplementary motor areas, cingulate gyrus and a region in medial PFC similar to that found for congruent blocks. Greater coherence was found between frontal and occipital regions to the extent individuals exhibited more bias. This suggests there are separable neural contributions to congruent and incongruent blocks of the IAT but there is also a surprising amount of overlap. Given the temporal and regional neural distinctions, these results provide converging evidence that stereotypic associative strength assessed by the IAT indexes automatic processing to a degree. PMID:23226123

  6. Accessing spoilage features of osmotolerant yeasts identified from kiwifruit plantation and processing environment in Shaanxi, China.

    PubMed

    Niu, Chen; Yuan, Yahong; Hu, Zhongqiu; Wang, Zhouli; Liu, Bin; Wang, Huxuan; Yue, Tianli

    2016-09-01

    Osmotolerant yeasts originating in the kiwifruit industrial chain can cause spoilage incidences, yet little information is available about their species and spoilage features. This work identified possible spoilage osmotolerant yeasts from orchards and a manufacturer (a quick-freeze kiwifruit manufacturer) in the main producing areas of Shaanxi, China, and further characterized their spoilage features. A total of 86 osmotolerant isolates spanning 29 species were identified through 26S rDNA sequencing at the D1/D2 domain, among which Hanseniaspora uvarum occurred most frequently and is closely associated with kiwifruit. RAPD analysis indicated high variability of this species across sampling regions. A correlation of genotypes with origins was established except for isolates from Zhouzhi orchards, and the movement of H. uvarum from orchard to manufacturer could be inferred, helping to trace the source of spoilage. The manufacturing environment favored the inhabitance of osmotolerant yeasts more than the orchards, showing a higher positive-sample ratio and osmotolerant-yeast ratio. Growth curves under various glucose levels were fitted with the Grofit R package, and the obtained growth parameters indicated phenotypic diversity in H. uvarum and the remaining species. Wickerhamomyces anomalus (OM14) and Candida glabrata (OZ17) were the most glucose-tolerant species, and the availability of high glucose would assist them in producing more gas. The tested osmotolerant species altered the odor of kiwifruit concentrate juice. 3-Methyl-1-butanol, phenylethyl alcohol, phenylethyl acetate, 5-hydroxymethylfurfural (5-HMF) and ethyl acetate were the most altered compounds identified by GC/MS in the juice. In particular, W. anomalus produced 4-vinylguaiacol and M. guilliermondii produced 4-ethylguaiacol, which would imperil product acceptance. The study determines the target spoilers as well as offering detailed spoilage features, which will be instructive in implementing preventative
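    Growth-parameter extraction of the kind the abstract describes (grofit itself is an R package) amounts to fitting a parametric growth law to optical-density curves. A sketch using SciPy on synthetic data, not the study's measurements, with a Zwietering-type logistic model:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, mu, lam):
    """Zwietering-style logistic growth curve.

    A = maximum growth (plateau), mu = maximum growth rate, lam = lag time.
    """
    return A / (1.0 + np.exp(4.0 * mu / A * (lam - t) + 2.0))

# Synthetic growth curve with a little measurement noise (values illustrative).
t = np.linspace(0.0, 48.0, 25)
rng = np.random.default_rng(0)
y = logistic(t, 1.2, 0.15, 6.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(logistic, t, y, p0=[1.0, 0.1, 5.0])
A_fit, mu_fit, lam_fit = popt
print(np.round(popt, 3))  # recovered (A, mu, lam)
```

    Comparing fitted (A, mu, lam) across glucose levels and isolates is what exposes the phenotypic diversity reported above.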

  8. Identity in a Canadian Urban Community. A Process Report of the Brunskill Subproject. Project Canada West.

    ERIC Educational Resources Information Center

    Burke, M.; And Others

    The purpose of this subproject is to guide students to meet and interact with individuals from the many subcultures in a community (see ED 055 011). This progress report of the second year's activities includes information on the process of curriculum development, the materials developed, evaluation, roles of supporting agencies, behavioral…

  9. Identifying the Neural Correlates Underlying Social Pain: Implications for Developmental Processes

    ERIC Educational Resources Information Center

    Eisenberger, Naomi I.

    2006-01-01

    Although the need for social connection is critical for early social development as well as for psychological well-being throughout the lifespan, relatively little is known about the neural processes involved in maintaining social connections. The following review summarizes what is known regarding the neural correlates underlying feeling of…

  10. Learning Disabled Children's Problem Solving: Identifying Mental Processes Underlying Intelligent Performance.

    ERIC Educational Resources Information Center

    Swanson, H. Lee

    1988-01-01

    The differences between learning disabled (LD) and non-LD children's problem-solving protocols were analyzed during a picture arrangement task. Although the groups of 29 LD and 27 non-LD children were comparable in global mental processing and task performance, LD children had difficulty with representing problems and deleting irrelevant…

  11. Identifying Process Variables for a Low Atmospheric Pressure Stunning/Killing System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Current systems for pre-slaughter gas stunning/killing of broilers use process gases such as carbon dioxide, argon, or a mixture of these gases with air or oxygen. Both carbon dioxide and argon work by displacing oxygen to induce hypoxia in the bird, leading to unconsciousness and ultimately death....

  12. Stress test: identifying crowding stress-tolerant hybrids in processing sweet corn

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improvement in tolerance to intense competition at high plant populations (i.e. crowding stress) is a major genetic driver of corn yield gain over the last half-century. Recent research found differences in crowding stress tolerance among a few modern processing sweet corn hybrids; however, a larger asse...

  13. A national effort to identify fry processing clones with low acrylamide-forming potential

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Acrylamide is a suspected human carcinogen. Processed potato products, such as chips and fries, contribute to dietary intake of acrylamide. One of the most promising approaches to reducing acrylamide consumption is to develop and commercialize new potato varieties with low acrylamide-forming potenti...

  14. Image processing techniques for identifying Mycobacterium tuberculosis in Ziehl-Neelsen stains.

    PubMed

    Sadaphal, P; Rao, J; Comstock, G W; Beg, M F

    2008-05-01

    Worldwide, laboratory technicians tediously read sputum smears for tuberculosis (TB) diagnosis. We demonstrate proof of principle of an innovative computational algorithm that successfully recognizes Ziehl-Neelsen (ZN) stained acid-fast bacilli (AFB) in digital images. Automated, multi-stage, color-based Bayesian segmentation identified possible 'TB objects', removed artifacts by shape comparison and color-labeled objects as 'definite', 'possible' or 'non-TB', bypassing photomicrographic calibration. Superimposed AFB clusters, extreme stain variation and low depth of field were challenges. Our novel method facilitates electronic diagnosis of TB, permitting wider application in developing countries where fluorescent microscopy is currently inaccessible and unaffordable. We plan refinement and validation in the future.
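
The first stage of the pipeline described above (color-based segmentation of candidate 'TB objects', followed by size/shape filtering) can be illustrated with a generic sketch. The red-pixel test, thresholds, and size bounds below are hypothetical stand-ins, not the paper's Bayesian model:

```python
import numpy as np
from scipy import ndimage

def find_candidate_objects(rgb, min_size=5, max_size=500):
    """Toy color-threshold segmentation: flag reddish pixels (the
    carbol-fuchsin stain colors AFB red-pink), then keep connected
    components whose pixel count is plausible for a bacillus.
    All thresholds here are illustrative, not the published ones."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > 120) & (r > 1.4 * g) & (r > 1.4 * b)   # crude "red" test
    labels, n = ndimage.label(mask)                     # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return np.isin(labels, keep)

# synthetic 20x20 image: one red blob on a white background
img = np.full((20, 20, 3), 255, dtype=np.uint8)
img[5:9, 5:12] = (200, 40, 40)          # a 4x7 "bacillus"
mask = find_candidate_objects(img)
print(int(mask.sum()))                   # 28 pixels retained
```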

  15. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  16. Identified auditory neurons in the cricket Gryllus rubens: temporal processing in calling song sensitive units.

    PubMed

    Farris, Hamilton E; Mason, Andrew C; Hoy, Ronald R

    2004-07-01

    This study characterizes aspects of the anatomy and physiology of auditory receptors and certain interneurons in the cricket Gryllus rubens. We identified an 'L'-shaped ascending interneuron tuned to frequencies > 15 kHz (57 dB SPL threshold at 20 kHz). Also identified were two intrasegmental 'omega'-shaped interneurons that were broadly tuned to 3-65 kHz, with best sensitivity to frequencies of the male calling song (5 kHz, 52 dB SPL). The temporal sensitivity of units excited by calling song frequencies was measured using sinusoidally amplitude modulated stimuli that varied in both modulation rate and depth, parameters that vary with song propagation distance and the number of singing males. Omega cells responded like low-pass filters with a time constant of 42 ms. In contrast, receptors significantly coded modulation rates up to the maximum rate presented (85 Hz). Whereas omegas required approximately 65% modulation depth at 45 Hz (calling song AM) to elicit significant synchrony coding, receptors tolerated an approximately 50% reduction in modulation depth up to 85 Hz. These results suggest that omega cells in G. rubens might not play a role in detecting song modulation per se at increased distances from a singing male.
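
Sinusoidally amplitude-modulated (SAM) stimuli of the kind used here are simple to generate: a carrier multiplied by a raised sinusoidal envelope. A minimal sketch, with the carrier frequency, modulation rate, and depth chosen to echo values quoted in the abstract (everything else, including the normalization, is an assumption):

```python
import numpy as np

def sam_stimulus(fc, fm, depth, dur, fs=44100):
    """SAM tone: carrier fc (Hz) with envelope 1 + depth*sin(2*pi*fm*t),
    modulation depth in [0, 1], normalized to peak amplitude <= 1."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t) / (1.0 + depth)

# 5 kHz carrier (calling-song band) modulated at 45 Hz, 65% depth
y = sam_stimulus(5000, 45, 0.65, dur=0.2)
print(len(y), float(np.abs(y).max()))   # 8820 samples, peak <= 1.0
```

Sweeping `fm` and `depth` over a grid reproduces the stimulus space in which modulation-rate and modulation-depth sensitivity were mapped.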

  17. ROBUSTNESS OF THE CSSX PROCESS TO FEED VARIATION: EFFICIENT CESIUM REMOVAL FROM THE HIGH POTASSIUM WASTES AT HANFORD

    SciTech Connect

    Delmau, Laetitia Helene; Birdwell Jr, Joseph F; McFarlane, Joanna; Moyer, Bruce A

    2010-01-01

    This contribution finds the Caustic-Side Solvent Extraction (CSSX) process to be effective for the removal of cesium from the Hanford tank-waste supernatant solutions. The Hanford waste types are more challenging than those at the Savannah River Site (SRS) in that they contain significantly higher levels of potassium, the chief competing ion in the extraction of cesium. By use of a computerized CSSX thermodynamic model, it was calculated that the higher levels of potassium depress the cesium distribution ratio (D_Cs), as validated to within ±11% by the measurement of D_Cs values on various Hanford waste-simulant compositions. A simple analog model equation that can be readily applied in a spreadsheet for estimating the D_Cs values for the varying waste compositions was developed and shown to yield nearly identical estimates as the computerized CSSX model. It is concluded from the batch distribution experiments, the physical-property measurements, the equilibrium modeling, the flowsheet calculations, and the contactor sizing that the CSSX process as currently formulated for cesium removal from alkaline salt waste at the SRS is capable of treating similar Hanford tank feeds, albeit with more stages. For the most challenging Hanford waste composition tested, 31 stages would be required to provide a cesium decontamination factor (DF) of 5000 and a concentration factor (CF) of 2. Commercial contacting equipment with rotor diameters of 10 in. for extraction and 5 in. for stripping should have the capacity to meet throughput requirements, but testing will be required to confirm that the needed efficiency and hydraulic performance are actually obtainable. Markedly improved flowsheet performance was calculated based on experimental distribution ratios determined for an improved solvent formulation employing the more soluble cesium extractant BEHBCalixC6 used with alternative scrub and strip solutions, respectively 0.1 M NaOH and 0.010 M boric acid. The

  18. Highly robust electron beam lithography lift-off process using chemically amplified positive tone resist and PEDOT:PSS as a protective coating

    NASA Astrophysics Data System (ADS)

    Kofler, Johannes; Schmoltner, Kerstin; Klug, Andreas; List-Kratochvil, Emil J. W.

    2014-09-01

    Highly sensitive chemically amplified resists are well suited for large-area, high-resolution rapid prototyping by electron beam lithography. The major drawback of these resists is their susceptibility to T-topping effects, sensitivity losses, and linewidth variations caused by delay times between individual process steps. Hence, they require a very tight process control, which hinders their potentially wide application in R&D. We demonstrate a highly robust electron beam lithography lift-off process using a chemically amplified positive tone 40XT photoresist in combination with an acidic conducting polymer (PEDOT:PSS) as a protective top-coating. Even extended delay times of 24 h did not lead to any sensitivity losses or linewidth variations. Moreover, an overall high performance with a resolution of 80 nm (after lift-off) and a high sensitivity (<10 µC/cm2) comparable to other standard chemically amplified resists was achieved. The development characteristics of this resist-layer system revealed new insights into the immanent trade-off between resolution and process stability.

  19. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics: Preprint

    SciTech Connect

    Suh, C.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.; Biagioni, D.

    2011-07-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.
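
Of the data-mining methods listed, hierarchical agglomerative clustering is the easiest to sketch. The synthetic "process descriptors" below are invented purely for illustration and have nothing to do with the actual AZO dataset; the point is that standardizing features before clustering keeps one large-magnitude variable (here, temperature) from dominating the distance metric:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# hypothetical process descriptors: [deposition T, O2 flow, doping]
low_T  = rng.normal([150, 1.0, 2.0], 0.1, size=(10, 3))
high_T = rng.normal([300, 3.0, 2.5], 0.1, size=(10, 3))
X = np.vstack([low_T, high_T])

# standardize so temperature does not dominate the distance metric
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

Z = linkage(Xz, method="ward")                    # agglomerative tree
labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters
print(labels)   # the two process histories separate cleanly
```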

  20. Octopaminergic Modulation of Temporal Frequency Coding in an Identified Optic Flow-Processing Interneuron

    PubMed Central

    Longden, Kit D.; Krapp, Holger G.

    2010-01-01

    Flying generates predictably different patterns of optic flow compared with other locomotor states. A sensorimotor system tuned to rapid responses and a high bandwidth of optic flow would help the animal to avoid wasting energy through imprecise motor action. However, neural processing that covers a higher input bandwidth itself comes at higher energetic costs, which would be a poor investment when the animal is not flying. How does the blowfly adjust the dynamic range of its optic flow-processing neurons to the locomotor state? Octopamine (OA) is a biogenic amine central to the initiation and maintenance of flight in insects. We used an OA agonist chlordimeform (CDM) to simulate the widespread OA release during flight and recorded the effects on the temporal frequency coding of the H2 cell. This cell is a visual interneuron known to be involved in flight stabilization reflexes. The application of CDM resulted in (i) an increase in the cell's spontaneous activity, expanding the inhibitory signaling range; (ii) an initial response gain to moving gratings (20–60 ms post-stimulus) that depended on the temporal frequency of the grating; and (iii) a reduction in the rate and magnitude of motion adaptation that was also temporal frequency-dependent. To our knowledge, this is the first demonstration that the application of a neuromodulator can induce velocity-dependent alterations in the gain of a wide-field optic flow-processing neuron. The observed changes in the cell's response properties resulted in a 33% increase of the cell's information rate when encoding random changes in temporal frequency of the stimulus. The increased signaling range and more rapid, longer lasting responses employed more spikes to encode each bit, and so consumed a greater amount of energy. It appears that for the fly, investing more energy in sensory processing during flight is more efficient than wasting energy on under-performing motor control. PMID:21152339

  1. Comparative assessment of genomic DNA extraction processes for Plasmodium: Identifying the appropriate method.

    PubMed

    Mann, Riti; Sharma, Supriya; Mishra, Neelima; Valecha, Neena; Anvikar, Anupkumar R

    2015-12-01

    Plasmodium DNA, in addition to being used for molecular diagnosis of malaria, finds utility in monitoring patient responses to antimalarial drugs, drug-resistance studies, genotyping and sequencing. Over the years, numerous protocols have been proposed for extracting Plasmodium DNA from a variety of sources. Given that DNA isolation is fundamental to successful molecular studies, here we review the most commonly used methods for Plasmodium genomic DNA isolation, emphasizing their pros and cons. A comparison of these existing methods has been made to evaluate their appropriateness for use in different applications and to identify the method suitable for a particular laboratory-based study. Selection of a suitable and accessible DNA extraction method for Plasmodium requires consideration of many factors, the most important being sensitivity, cost-effectiveness, and the purity and stability of the isolated DNA. The need of the hour is to focus on developing a method that performs well on all of these parameters.

  2. The June 2014 eruption at Piton de la Fournaise: Robust methods developed for monitoring challenging eruptive processes

    NASA Astrophysics Data System (ADS)

    Villeneuve, N.; Ferrazzini, V.; Di Muro, A.; Peltier, A.; Beauducel, F.; Roult, G. C.; Lecocq, T.; Brenguier, F.; Vlastelic, I.; Gurioli, L.; Guyard, S.; Catry, T.; Froger, J. L.; Coppola, D.; Harris, A. J. L.; Favalli, M.; Aiuppa, A.; Liuzzo, M.; Giudice, G.; Boissier, P.; Brunet, C.; Catherine, P.; Fontaine, F. J.; Henriette, L.; Lauret, F.; Riviere, A.; Kowalski, P.

    2014-12-01

    After almost 3.5 years of quiescence, Piton de la Fournaise (PdF) produced a small summit eruption on 20 June 2014 at 21:35 (GMT). The eruption lasted 20 hours and was preceded by: i) onset of deep eccentric seismicity (15-20 km bsl; 9 km NW of the volcano summit) in March and April 2014; ii) enhanced CO2 soil flux along the NW rift zone; iii) increase in the number and energy of shallow (<1.5 km asl) VT events. The increase in VT events occurred on 9 June. Their signature and shallow location were not characteristic of an eruptive crisis. However, at 20:06 on 20/06 their character changed. This was 74 minutes before the onset of tremor. Deformation then began at 20:20. Since 2007, PdF has emitted small magma volumes (<3 Mm3) in events preceded by weak and short precursory phases. To respond to this challenging activity style, new monitoring methods were deployed at OVPF. While the JERK and MSNoise methods were developed for processing of seismic data, borehole tiltmeters and permanent monitoring of summit gas emissions, plus CO2 soil flux, were used to track precursory activity. JERK, based on an analysis of the acceleration slope of broad-band seismometer data, allowed advance notice of the new eruption by 50 minutes. MSNoise, based on seismic velocity determination, showed a significant decrease 7 days before the eruption. These signals were coupled with changes in summit fumarole composition. Remote sensing allowed the following syn-eruptive observations: - INSAR confirmed measurements made by the OVPF geodetic network, showing that deformation was localized around the eruptive fissures; - A SPOT5 image acquired at 05:41 on 21/06 allowed definition of the flow field area (194 500 m2); - A MODIS image acquired at 06:35 on 21/06 gave a lava discharge rate of 6.9±2.8 m3 s-1, giving an erupted volume of between 0.3 and 0.4 Mm3. - This rate was used with the DOWNFLOW and FLOWGO models, calibrated with the textural data from Piton's 2010 lava, to run lava flow
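
The JERK method is described only as an analysis of the acceleration slope of broad-band seismometer data; in signal terms, jerk is the time derivative of acceleration. A toy onset detector under that reading (the synthetic trace, threshold, and trigger logic are assumptions for illustration, not OVPF's implementation):

```python
import numpy as np

def jerk_trigger(accel, fs, thresh):
    """Differentiate an acceleration trace to obtain jerk and return the
    index of the first sample whose |jerk| exceeds thresh (-1 if none)."""
    jerk = np.gradient(accel, 1.0 / fs)          # d(accel)/dt
    hits = np.flatnonzero(np.abs(jerk) > thresh)
    return int(hits[0]) if hits.size else -1

fs = 100.0
t = np.arange(0, 10, 1 / fs)
accel = 1e-3 * np.sin(2 * np.pi * 0.5 * t)       # quiet background
accel[600:] += 0.05 * (t[600:] - t[600])         # ramp starting at t = 6 s
onset = jerk_trigger(accel, fs, thresh=0.02)
print(onset)   # 600, i.e. the ramp onset sample
```

Background jerk here peaks near 0.003, well under the 0.02 threshold, so only the accelerating transient triggers.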

  3. Unsupervised image processing scheme for transistor photon emission analysis in order to identify defect location

    NASA Astrophysics Data System (ADS)

    Chef, Samuel; Jacquir, Sabir; Sanchez, Kevin; Perdu, Philippe; Binczak, Stéphane

    2015-01-01

    The study of the light emitted by transistors in a highly scaled complementary metal oxide semiconductor (CMOS) integrated circuit (IC) has become a key method with which to analyze faulty devices, track the failure root cause, and identify candidate locations at which to start the physical analysis. The localization of defective areas in an IC corresponds to a reliability check and gives information to the designer to improve the IC design. The scaling of CMOS leads to an increase in the number of active nodes inside the acquisition area. There are also more differences between the spots' intensities. In order to improve the identification of all of the photon emission spots, we introduce an unsupervised processing scheme. It is based on iterative thresholding decomposition (ITD) and mathematical morphology operations. It unveils all of the emission spots and removes most of the noise from the database thanks to a succession of image-processing steps. The ITD approach, based on five thresholding methods, is tested on 15 photon emission databases (10 real cases and 5 simulated cases). The localization of the photon emission areas is compared to an expert identification and the estimation quality is quantified using the object consistency error.
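
The ITD idea (successive thresholds combined with mathematical morphology to separate spots of very different intensity) can be caricatured in a few lines. The threshold levels, structuring element, and size filter below are guesses for illustration, not the published parameters:

```python
import numpy as np
from scipy import ndimage

def itd_spots(img, levels=4, min_size=3):
    """Sketch of iterative thresholding decomposition: threshold the
    image at several decreasing levels, clean each binary slice with a
    morphological opening, and accumulate the surviving components."""
    spots = np.zeros(img.shape, dtype=bool)
    for frac in np.linspace(0.9, 0.3, levels):
        mask = img > frac * img.max()
        mask = ndimage.binary_opening(mask, structure=np.ones((2, 2)))
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        for i, s in enumerate(sizes, start=1):
            if s >= min_size:
                spots |= labels == i
    return spots

img = np.zeros((30, 30))
img[5:9, 5:9] = 1.0       # bright emission spot
img[20:23, 20:23] = 0.4   # faint emission spot
img += 0.01               # noise floor
spots = itd_spots(img)
labels, n = ndimage.label(spots)
print(n)   # both spots recovered despite their intensity difference
```

A single fixed threshold would miss either the faint spot (if set high) or merge noise (if set low); iterating over levels is what lets both survive.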

  4. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.
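
The evaluation strategy above, scoring the model during rainfall deficits rather than with annual-mean metrics, can be sketched with a simple "driest consecutive window" score. The helper, data, and flux values below are invented for illustration and are not part of CABLE:

```python
import numpy as np

def deficit_bias(precip, le_obs, le_mod, window=90):
    """Mean model-minus-observation latent heat bias over the driest
    consecutive `window` days: a crude stand-in for evaluating a model
    under a seasonal-scale rainfall deficit."""
    rolling = np.convolve(precip, np.ones(window), mode="valid")
    start = int(np.argmin(rolling))              # driest window start
    sl = slice(start, start + window)
    return start, float(np.mean(le_mod[sl] - le_obs[sl]))

days = np.arange(365)
precip = 3.0 + 2.0 * np.sin(2 * np.pi * days / 365)   # seasonal cycle
precip[150:240] = 0.1                                  # imposed deficit
le_obs = np.where(precip < 0.5, 40.0, 100.0)           # obs dry-down
le_mod = np.full(365, 100.0)                           # model misses it
start, bias = deficit_bias(precip, le_obs, le_mod)
print(start, bias)   # 150 60.0: a large bias hidden by annual means
```

Annual-mean comparisons average this 60 W m-2 (say) deficit-period bias into a much smaller number, which is exactly the problem the study highlights.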

  5. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be resistant to the idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that, in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  6. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGES

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying -Ping

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  7. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A.; Pitman, A.; Decker, M. R.; De Kauwe, M. G.; Abramowitz, G.; Wang, Y.; Kala, J.

    2015-12-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. Previous studies have noted the limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions, but very few studies have systematically evaluated LSMs during rainfall deficits. We investigate the performance of the Community Atmosphere Biosphere Land Exchange (CABLE) LSM in simulating latent heat fluxes in offline mode. CABLE is evaluated against eddy covariance measurements of latent heat flux across 20 flux tower sites at sub-annual to inter-annual time scales, with a focus on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, soil properties, leaf area index and stomatal conductance. We demonstrate the critical role of hydrological processes for capturing observed declines in latent heat. The effects of soil, LAI and stomatal conductance are shown to be highly site-specific. The default CABLE performs reasonably well at annual scales despite grossly underestimating latent heat during rainfall deficits, highlighting the importance of evaluating models explicitly under water-stressed conditions across multiple vegetation and climate regimes. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining deficiencies point to future research needs.

  8. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A. M.; Pitman, A. J.; Decker, M.; De Kauwe, M. G.; Abramowitz, G.; Kala, J.; Wang, Y.-P.

    2015-10-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual time scales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  9. Establishment of a Cost-Effective and Robust Planning Basis for the Processing of M-91 Waste at the Hanford Site

    SciTech Connect

    Johnson, Wayne L.; Parker, Brian M.

    2004-07-30

    This report identifies and evaluates viable alternatives for the accelerated processing of Hanford Site transuranic (TRU) and mixed low-level wastes (MLLW) that cannot be processed using existing site capabilities. Accelerated processing of these waste streams will lead to earlier reduction of risk and considerable life-cycle cost savings. The processing need is to handle both oversized MLLW and TRU containers as well as containers with surface contact dose rates greater than 200 mrem/hr. This capability is known as the "M-91" processing capability required by the Tri-Party Agreement milestone M-91-01. The new, phased approach proposed in this evaluation would use a combination of existing and planned processing capabilities to treat and more easily manage contact-handled waste streams first and would provide for earlier processing of these wastes.

  10. ForBild: efficient robust image hashing

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Liu, Huajian; Yannikos, York

    2012-03-01

    Forensic analysis of image sets today is most often done with the help of cryptographic hashes due to their efficiency, their integration in forensic tools and their excellent reliability in the domain of false detection alarms. A drawback of these hash methods is their fragility to any image processing operation. Even a simple re-compression with JPEG results in an image that can no longer be detected. A different approach is to apply image identification methods, which identify illegal images using, e.g., semantic models or face detection algorithms. Their common drawback is a high computational complexity and significant false alarm rates. Robust hashing is a well-known approach sharing characteristics of both cryptographic hashes and image identification methods. It is fast, robust to common image processing and features low false alarm rates. To verify its usability in forensic evaluation, in this work we discuss and evaluate the behavior of an optimized block-based hash.
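
ForBild's exact construction is not given in the abstract, but the generic idea of a block-based robust hash (derive bits from coarse block statistics, so small pixel-level changes such as re-compression noise leave the hash nearly intact) can be sketched as follows; the grid size and bit rule are assumptions:

```python
import numpy as np

def block_hash(img, grid=4):
    """Toy block-mean perceptual hash: split the image into grid x grid
    blocks and emit one bit per block (block mean above or below the
    global median of block means). Not ForBild itself."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    means = np.array([
        img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
        for i in range(grid) for j in range(grid)
    ])
    return (means > np.median(means)).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
# mild pixel-level distortion standing in for JPEG re-compression
noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)
d = hamming(block_hash(img), block_hash(noisy))
print(d)   # small Hamming distance: the hash survives the distortion
```

A cryptographic hash of the two images would differ completely; the block hash differs in at most a few bits, which is exactly the robustness/fragility trade-off the paper discusses.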

  11. Joint-specific DNA methylation and transcriptome signatures in rheumatoid arthritis identify distinct pathogenic processes

    PubMed Central

    Ai, Rizi; Hammaker, Deepa; Boyle, David L.; Morgan, Rachel; Walsh, Alice M.; Fan, Shicai; Firestein, Gary S.; Wang, Wei

    2016-01-01

    Stratifying patients on the basis of molecular signatures could facilitate development of therapeutics that target pathways specific to a particular disease or tissue location. Previous studies suggest that pathogenesis of rheumatoid arthritis (RA) is similar in all affected joints. Here we show that distinct DNA methylation and transcriptome signatures not only discriminate RA fibroblast-like synoviocytes (FLS) from osteoarthritis FLS, but also distinguish RA FLS isolated from knees and hips. Using genome-wide methods, we show differences between RA knee and hip FLS in the methylation of genes encoding biological pathways, such as IL-6 signalling via JAK-STAT pathway. Furthermore, differentially expressed genes are identified between knee and hip FLS using RNA-sequencing. Double-evidenced genes that are both differentially methylated and expressed include multiple HOX genes. Joint-specific DNA signatures suggest that RA disease mechanisms might vary from joint to joint, thus potentially explaining some of the diversity of drug responses in RA patients. PMID:27282753

  12. Efficacy of identifying neural components in the face and emotion processing system in schizophrenia using a dynamic functional localizer.

    PubMed

    Arnold, Aiden E G F; Iaria, Giuseppe; Goghari, Vina M

    2016-02-28

    Schizophrenia is associated with deficits in face perception and emotion recognition. Despite consistent behavioural results, the neural mechanisms underlying these cognitive abilities have been difficult to isolate, in part due to differences in neuroimaging methods used between studies for identifying regions in the face processing system. Given this problem, we aimed to validate a recently developed fMRI-based dynamic functional localizer task for use in studies of psychiatric populations and specifically schizophrenia. Previously, this functional localizer successfully identified each of the core face processing regions (i.e. fusiform face area, occipital face area, superior temporal sulcus), and regions within an extended system (e.g. amygdala) in healthy individuals. In this study, we tested the functional localizer success rate in 27 schizophrenia patients and in 24 community controls. Overall, the core face processing regions were localized equally between both the schizophrenia and control group. Additionally, the amygdala, a candidate brain region from the extended system, was identified in nearly half the participants from both groups. These results indicate the effectiveness of a dynamic functional localizer at identifying regions of interest associated with face perception and emotion recognition in schizophrenia. The use of dynamic functional localizers may help standardize the investigation of the facial and emotion processing system in this and other clinical populations. PMID:26792586

  13. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…
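
    The exercise's method can be sketched as an individuals control chart: seasons whose statistic falls outside three-sigma limits computed from a baseline period are flagged as out of control. The numbers below are made up for illustration, not the article's data.

```python
import statistics

def control_limits(baseline):
    """Centre line and 3-sigma control limits for an individuals chart."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - 3 * sigma, mu, mu + 3 * sigma

# Hypothetical home-runs-per-game rates; earlier seasons set the baseline
pre_era = [1.44, 1.47, 1.51, 1.49, 1.46, 1.50, 1.48]
later = [1.52, 1.71, 1.88, 2.05]

lcl, centre, ucl = control_limits(pre_era)
signals = [x for x in later if x > ucl or x < lcl]  # out-of-control points
```

    A sustained run of points above the upper control limit is the chart's signal that the process mean has shifted, which is how the Steroids Era becomes visible in the exercise.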

  15. Progression after AKI: Understanding Maladaptive Repair Processes to Predict and Identify Therapeutic Treatments.

    PubMed

    Basile, David P; Bonventre, Joseph V; Mehta, Ravindra; Nangaku, Masaomi; Unwin, Robert; Rosner, Mitchell H; Kellum, John A; Ronco, Claudio

    2016-03-01

    Recent clinical studies indicate a strong link between AKI and progression of CKD. The increasing prevalence of AKI must compel the nephrology community to consider the long-term ramifications of this syndrome. Considerable gaps in knowledge exist regarding the connection between AKI and CKD. The 13th Acute Dialysis Quality Initiative meeting entitled "Therapeutic Targets of Human Acute Kidney Injury: Harmonizing Human and Experimental Animal Acute Kidney Injury" convened in April of 2014 and assigned a working group to focus on issues related to progression after AKI. This article provides a summary of the key conclusions and recommendations of the group, including an emphasis on terminology related to injury and repair processes for both clinical and preclinical studies, elucidation of pathophysiologic alterations of AKI, identification of potential treatment strategies, identification of patients predisposed to progression, and potential management strategies. PMID:26519085

  17. Identifying sources and processes influencing nitrogen export to a small stream using dual isotopes of nitrate

    NASA Astrophysics Data System (ADS)

    Lohse, K. A.; Sanderman, J.; Amundson, R.

    2009-12-01

    Interactions between plant and microbial reactions exert strong controls on sources and export of nitrate to headwater streams. Yet quantifying this interaction is challenging due to spatial and temporal changes in these processes. Topography has been hypothesized to play a large role in these processes, yet few studies have coupled measurements of soil nitrogen cycling to hydrologic losses of N. In water-limited environments such as Mediterranean grasslands, we hypothesized that seasonal shifts in runoff mechanisms and flow paths would change stream water sources of nitrate from deep subsoil sources to near-surface sources. In theory, these changes can be quantified using mixing models and dual isotopes of nitrate. We examined the temporal patterns of N stream export using hydrometric methods and dual isotopes of nitrate in a small headwater catchment on the coast of Northern California. A plot of stream water 15N-nitrate and 18O-nitrate against the known isotopic values of nitrate in rainwater, fertilizer, and soil N confirmed that the nitrate was primarily microbial nitrate. Plots of 15N-nitrate against the inverse of nitrate concentration, as well as the log of nitrate concentration, indicated both mixing and fractionation via denitrification. Further analysis of soil water 15N-nitrate and 18O-nitrate revealed two denitrification vectors for surface and subsurface soil waters (slopes of 0.50 ± 0.1) that constrained the stream water 15N- and 18O-nitrate values, indicating mixing of two soil water sources. Analysis of mixing models showed shifts in surface and subsurface soil water nitrate sources to stream water, along with progressive denitrification over the course of the season.
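
    The two-endmember mixing step can be sketched as follows. The delta values are hypothetical placeholders, not the study's measurements:

```python
def mixing_fraction(d_mix, d_end1, d_end2):
    """Fraction of endmember 1 in a two-endmember isotope mixing model:
    f = (delta_mix - delta_2) / (delta_1 - delta_2)."""
    return (d_mix - d_end2) / (d_end1 - d_end2)

# Hypothetical delta15N-NO3 values (permil): surface vs. subsurface soil water
f_surface = mixing_fraction(d_mix=4.0, d_end1=2.0, d_end2=7.0)  # -> 0.6
```

    In practice the endmember deltas must lie on either side of the mixture and be corrected for any fractionating process (here, denitrification) before the fraction is interpretable.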

  18. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  19. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    PubMed Central

    Hurlbert, Allen H.; Stegen, James C.

    2014-01-01

    We use a simulation model to examine four of the most common hypotheses for the latitudinal richness gradient and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The ecological limits hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The speciation rates hypothesis suggests that the latitudinal gradient arises from a gradient in speciation rates. (4) Finally, the tropical stability hypothesis argues that climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance (MRD) and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all produced phylogenies which were more imbalanced than expected, showed a negative relationship between MRD and richness, and diversity-dependence of speciation rate estimates through time. We found that the relationship between speciation rates and latitude could distinguish among these three scenarios, with no relation expected under the ecological limits hypothesis, a negative relationship expected under the speciation rates hypothesis, and a positive relationship expected under the tropical stability hypothesis. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with multiple

  20. Identifying influential nodes in a wound healing-related network of biological processes using mean first-passage time

    NASA Astrophysics Data System (ADS)

    Arodz, Tomasz; Bonchev, Danail

    2015-02-01

    In this study we offer an approach to network physiology, which proceeds from transcriptomic data and uses gene ontology analysis to identify the biological processes most enriched at several critical time points of the wound healing process (days 0, 3 and 7). The top-ranking differentially expressed genes for each process were used to build two networks: one with all proteins regulating the transcription of the selected genes, and a second one involving the proteins from the signaling pathways that activate the transcription factors. The information from these networks is used to build a network of the most enriched processes, with undirected links weighted proportionally to the count of shared genes between a pair of processes, and directed links weighted by the count of relationships connecting genes from one process to genes from the other. In analyzing the network thus built, we used an approach based on random walks that accounts for the temporal aspects of the spread of a signal in the network (mean first-passage time, MFPT). The MFPT scores allowed us to identify the top influential, as well as the top essential, biological processes, which vary with the progress of the healing process. Thus, the most essential process for day 0 was found to be the Wnt-receptor signaling pathway, well known for its crucial role in wound healing, while at day 3 this was the regulation of the NF-kB cascade, essential for matrix remodeling in the wound healing process. The MFPT-based scores correctly reflected the pattern of the healing process dynamics: highly concentrated around several processes between day 0 and day 3, and becoming more diffuse at day 7.
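
    A minimal sketch of the MFPT computation (not the authors' code; the toy transition matrix is an assumption): for a random walk with row-stochastic matrix P, the mean first-passage times to a target node solve a linear system obtained by deleting the target's row and column.

```python
import numpy as np

def mfpt_to_target(P, target):
    """Mean first-passage times m_i to `target` for a Markov chain with
    transition matrix P: solve (I - Q) m = 1, where Q is P with the
    target's row and column removed."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = m
    return out

# Toy 3-node process network: a symmetric walk, so each non-target node
# needs 2 steps on average to first reach the target
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
m = mfpt_to_target(P, target=2)
```

    Nodes with small MFPT from many sources are quickly reached by a spreading signal; ranking processes by such scores is the kind of influence measure the abstract describes.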

  1. Identifying the processes underpinning anticipation and decision-making in a dynamic time-constrained task.

    PubMed

    Roca, André; Ford, Paul R; McRobert, Allistair P; Mark Williams, A

    2011-08-01

    A novel, representative task was used to examine skill-based differences in the perceptual and cognitive processes underlying performance on a dynamic, externally paced task. Skilled and less skilled soccer players were required to move and interact with life-size, action sequences involving 11 versus 11 soccer situations filmed from the perspective of a central defender in soccer. The ability of participants to anticipate the intentions of their opponents and to make decisions about how they should respond was measured across two separate experiments. In Experiment 1, visual search behaviors were examined using an eye-movement registration system. In Experiment 2, retrospective verbal reports of thinking were gathered from a new sample of skilled and less skilled participants. Skilled participants were more accurate than less skilled participants at anticipating the intentions of opponents and in deciding on an appropriate course of action. The skilled players employed a search strategy involving more fixations of shorter duration in a different sequential order and toward more disparate and informative locations in the display when compared with the less skilled counterparts. The skilled players generated a greater number of verbal report statements with a higher proportion of evaluation, prediction, and planning statements than the less skilled players, suggesting they employed more complex domain-specific memory representations to solve the task. Theoretical, methodological, and practical implications are discussed.

  2. Isotopic investigations of dissolved organic N in soils identifies N mineralization as a major sink process

    NASA Astrophysics Data System (ADS)

    Wanek, Wolfgang; Prommer, Judith; Hofhansl, Florian

    2016-04-01

    Dissolved organic nitrogen (DON) is a major component of transfer processes in the global nitrogen (N) cycle, contributing to atmospheric N deposition, terrestrial N losses and aquatic N inputs. In terrestrial ecosystems several sources and sinks contribute to belowground DON pools, yet they are hard to quantify. In soils, DON is released by desorption of soil organic N and by microbial lysis. Major losses from the DON pool occur via sorption, hydrological losses and soil N mineralization. Sorption/desorption, lysis and hydrological losses are expected to exhibit no 15N fractionation, therefore allowing different DON sources to be traced. Soil N mineralization of DON has commonly been assumed to have no or only a small isotope effect of 0-4‰; however, isotope fractionation by N mineralization has rarely been measured and might be larger than anticipated. Depending on the degree of 15N fractionation by soil N mineralization, we would expect DON to become 15N-enriched relative to bulk soil N, and dissolved inorganic N (DIN; ammonium and nitrate) to become 15N-depleted relative to both bulk soil N and DON. Isotopic analyses of soil organic N, DON and DIN might therefore provide insights into the relative contributions of different source and sink processes. This study therefore aimed at a better understanding of the isotopic signatures of DON and its controls in soils. We investigated the concentration and isotopic composition of bulk soil N, DON and DIN in a wide range of sites, covering arable, grassland and forest ecosystems in Austria across an altitudinal transect. The isotopic composition of ammonium, nitrate and DON was measured in soil extracts after chemical conversion to N2O by purge-and-trap isotope ratio mass spectrometry. We found that delta15N values of DON ranged between -0.4 and 7.6‰, closely tracking the delta15N values of bulk soils. However, DON was 15N-enriched relative to bulk soil N by 1.5±1.3‰ (1 SD), and inorganic N was 15N

  3. Identifying In-Trans Process Associated Genes in Breast Cancer by Integrated Analysis of Copy Number and Expression Data

    PubMed Central

    Liestøl, Knut; Lipson, Doron; Nyberg, Sandra; Naume, Bjørn; Sahlberg, Kristine Kleivi; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lingjærde, Ole Christian; Yakhini, Zohar

    2013-01-01

    Genomic copy number alterations are common in cancer. Finding the genes causally implicated in oncogenesis is challenging because the gain or loss of a chromosomal region may affect a few key driver genes and many passengers. Integrative analyses have opened new vistas for addressing this issue. One approach is to identify genes with frequent copy number alterations and corresponding changes in expression. Several methods also analyse effects of transcriptional changes on known pathways. Here, we propose a method that analyses in-cis correlated genes for evidence of in-trans association to biological processes, with no bias towards processes of a particular type or function. The method aims to identify cis-regulated genes for which the expression correlation to other genes provides further evidence of a network-perturbing role in cancer. The proposed unsupervised approach involves a sequence of statistical tests to systematically narrow down the list of relevant genes, based on integrative analysis of copy number and gene expression data. A novel adjustment method handles confounding effects of co-occurring copy number aberrations, potentially a large source of false positives in such studies. Applying the method to whole-genome copy number and expression data from 100 primary breast carcinomas, 6373 genes were identified as commonly aberrant, 578 were highly in-cis correlated, and 56 were in addition associated in-trans to biological processes. Among these in-trans process associated and cis-correlated (iPAC) genes, 28% have previously been reported as breast cancer associated, and 64% as cancer associated. By combining statistical evidence from three separate subanalyses that focus respectively on copy number, gene expression and the combination of the two, the proposed method identifies several known and novel cancer driver candidates. Validation in an independent data set supports the conclusion that the method identifies genes implicated in cancer. PMID

  4. Regulatory design in a simple system integrating membrane potential generation and metabolic ATP consumption. Robustness and the role of energy dissipating processes.

    PubMed

    Acerenza, Luis; Cristina, Ernesto; Hernández, Julio A

    2011-12-01

    Bacterial physiological responses integrate energy-coupling processes at the membrane level with metabolic energy demand. The regulatory design behind these responses remains largely unexplored. Propionigenium modestum is an adequate organism to study these responses because it presents the simplest scheme known integrating membrane potential generation and metabolic ATP consumption. A hypothetical sodium leak is added to the scheme as the sole regulatory site. Allosteric regulation is assumed to be absent. Information of the rate equations is not available. However, relevant features of the patterns of responses may be obtained using Metabolic Control Analysis (MCA) and Metabolic Control Design (MCD). With these tools, we show that membrane potential disturbances can be compensated by adjusting the leak flux, without significant perturbations of ATP consumption. Perturbations of membrane potential by ATP demand are inevitable and also require compensatory changes in the leak. Numerical simulations were performed with a kinetic model exhibiting the responses for small changes obtained with MCA and MCD. A modest leak (10% of input) was assumed for the reference state. We found that disturbances in membrane potential and ATP consumption, produced by environmental perturbations of the cation concentration, may be reverted to the reference state adjusting the leak. Leak changes can also compensate for undesirable effects on membrane potential produced by changes in nutrient availability or ATP demand, in a wide range of values. The system is highly robust to parameter fluctuations. The regulatory role of energy dissipating processes and the trade-off between energetic efficiency and regulatory capacity are discussed.

  5. An Evaluation of a Natural Language Processing Tool for Identifying and Encoding Allergy Information in Emergency Department Clinical Notes

    PubMed Central

    Goss, Foster R.; Plasek, Joseph M.; Lau, Jason J.; Seger, Diane L.; Chang, Frank Y.; Zhou, Li

    2014-01-01

    Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no-known-allergies statements, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility of using NLP to extract and encode allergy information from clinical notes. PMID:25954363
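
    The reported scores are standard F-measures against the gold standard. A one-function sketch, with hypothetical confusion counts rather than the study's:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts from comparing system annotations to gold-standard ones
score = f_measure(tp=90, fp=10, fn=15)
```

    The harmonic mean penalizes an imbalance between precision and recall, which is why F-measure is preferred over accuracy for sparse extraction tasks like allergen detection.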

  6. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers.

    PubMed

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m(3)/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans. PMID:25910870

  8. Heterochronic process in hominid evolution: the dental development in 'robust' australopithecines [Hétérochronies dans l'évolution des hominidés : le développement dentaire des australopithécines « robustes »].

    NASA Astrophysics Data System (ADS)

    Ramirez Rozzi, Fernando V.

    2000-10-01

    Heterochrony is defined as an evolutionary modification in the timing and relative rate of development [6]. Growth (size), development (shape), and age (adult) are the three fundamental factors of ontogeny and must be known to carry out a study of heterochronies. These three factors were analysed in 24 Plio-Pleistocene hominid molars from Omo, Ethiopia, attributed to A. afarensis and robust australopithecines (A. aethiopicus and A. aff. aethiopicus). Molars were grouped into three chronological periods. The analysis suggests that morphological modifications through time are due to heterochronic processes: a neoteny (A. afarensis - robust australopithecine clade) and a time hypermorphosis (A. aethiopicus - A. aff. aethiopicus).

  9. A fast-initiating ionically tagged ruthenium complex: a robust supported pre-catalyst for batch-process and continuous-flow olefin metathesis.

    PubMed

    Borré, Etienne; Rouen, Mathieu; Laurent, Isabelle; Magrez, Magaly; Caijo, Fréderic; Crévisy, Christophe; Solodenko, Wladimir; Toupet, Loic; Frankfurter, René; Vogt, Carla; Kirschning, Andreas; Mauduit, Marc

    2012-12-14

    In this study, a new pyridinium-tagged Ru complex was designed and anchored onto sulfonated silica, thereby forming a robust and highly active supported olefin-metathesis pre-catalyst for applications under batch and continuous-flow conditions. The involvement of an oxazine-benzylidene ligand allowed the reactivity of the formed Ru pre-catalyst to be efficiently controlled through both steric and electronic activation. The oxazine scaffold facilitated the introduction of the pyridinium tag, thereby affording the corresponding cationic pre-catalyst in good yield. Excellent activities in ring-closing (RCM), cross (CM), and enyne metathesis were observed with only 0.5 mol % loading of the pre-catalyst. When this powerful pre-catalyst was immobilized onto a silica-based cationic-exchange resin, a versatile catalytically active material for batch reactions was generated that also served as fixed-bed material for flow reactors. This system could be reused at 1 mol % loading to afford metathesis products in high purity with very low ruthenium contamination under batch conditions (below 5 ppm). Scavenging procedures for both batch and flow processes were conducted, which led to a lowering of the ruthenium content to as little as one tenth of the original values.

  10. Formosa Plastics Corporation: Plant-Wide Assessment of Texas Plant Identifies Opportunities for Improving Process Efficiency and Reducing Energy Costs

    SciTech Connect

    2005-01-01

    At Formosa Plastics Corporation's plant in Point Comfort, Texas, a plant-wide assessment team analyzed process energy requirements, reviewed new technologies for applicability, and found ways to improve the plant's energy efficiency. The assessment team identified the energy requirements of each process and compared actual energy consumption with theoretical process requirements. The team estimated that total annual energy savings would be about 115,000 MBtu for natural gas and nearly 14 million kWh for electricity if the plant makes several improvements, which include upgrading the gas compressor impeller, improving the vent blower system, and recovering steam condensate for reuse. Total annual cost savings could be $1.5 million. The U.S. Department of Energy's Industrial Technologies Program cosponsored this assessment.

  11. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. First, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not directly used as inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small-norm expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced; the larger the correlation coefficient, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that IFLNN-PLS could significantly improve the prediction performance. PMID:26685746
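
    The core functional-link idea, stripped of the paper's SNEWHIOC weighting and PLS coupling, is to expand the inputs with fixed nonlinear basis functions so that only linear weights need fitting. This sketch uses ordinary least squares and invented data, not the paper's method or datasets:

```python
import numpy as np

def functional_link_expand(X):
    """Functional link expansion: augment inputs with fixed nonlinear terms
    so a purely linear fit can model a nonlinear map."""
    return np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X), X ** 2])

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (50, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2   # lies exactly in the expanded basis

Phi = functional_link_expand(X)                     # 50 x 8 design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear weights on expanded features
mse = float(np.mean((Phi @ w - y) ** 2))
```

    The paper's contribution replaces the blind use of all expanded variables with a correlation-based selection, and computes the final weights by PLS instead of least squares or back-propagation; the expansion step itself is as above.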

  12. Legal and ethical considerations in processing patient-identifiable data without patient consent: lessons learnt from developing a disease register.

    PubMed

    Haynes, Charlotte L; Cook, Gary A; Jones, Michael A

    2007-05-01

    The legal requirements and justifications for collecting patient-identifiable data without patient consent were examined. The impetus for this arose from legal and ethical issues raised during the development of a population-based disease register. Numerous commentaries and case studies have discussed the impact of the Data Protection Act 1998 (DPA1998) and the Caldicott principles of good practice on the uses of personal data, but uncertainty remains about the legal requirements for processing patient-identifiable data without patient consent for research purposes. This is largely owing to ignorance of, or misunderstandings about, the implications of the common law duty of confidentiality and section 60 of the Health and Social Care Act 2001. The common law duty of confidentiality states that patient-identifiable data should not be provided to third parties, regardless of compliance with the DPA1998; it is an obligation derived from case law and is open to interpretation. Compliance with section 60 ensures that collection of patient-identifiable data without patient consent is lawful despite the duty of confidentiality. Fears regarding the duty of confidentiality have resulted in a common misconception that section 60 must be complied with. Although this is not the case, section 60 support does provide the most secure basis in law for collecting such data. Using our own experience in developing a disease register as a backdrop, this article clarifies the procedures, risks and potential costs of applying for section 60 support.

  13. Genome-Wide Functional Profiling Identifies Genes and Processes Important for Zinc-Limited Growth of Saccharomyces cerevisiae

    PubMed Central

    Loguinov, Alex V.; Zimmerman, Ginelle R.; Vulpe, Chris D.; Eide, David J.

    2012-01-01

    Zinc is an essential nutrient because it is a required cofactor for many enzymes and transcription factors. To discover genes and processes in yeast that are required for growth when zinc is limiting, we used genome-wide functional profiling. Mixed pools of ∼4,600 deletion mutants were inoculated into zinc-replete and zinc-limiting media. These cells were grown for several generations, and the prevalence of each mutant in the pool was then determined by microarray analysis. As a result, we identified more than 400 different genes required for optimal growth under zinc-limiting conditions. Among these were several targets of the Zap1 zinc-responsive transcription factor. Their importance is consistent with their up-regulation by Zap1 in low zinc. We also identified genes that implicate Zap1-independent processes as important. These include endoplasmic reticulum function, oxidative stress resistance, vesicular trafficking, peroxisome biogenesis, and chromatin modification. Our studies also indicated the critical role of macroautophagy in low zinc growth. Finally, as a result of our analysis, we discovered a previously unknown role for the ICE2 gene in maintaining ER zinc homeostasis. Thus, functional profiling has provided many new insights into genes and processes that are needed for cells to thrive under the stress of zinc deficiency. PMID:22685415

  14. Identifying process and outcome indicators of successful transitions from child to adult mental health services: protocol for a scoping review

    PubMed Central

    Cleverley, Kristin; Bennett, Kathryn; Jeffs, Lianne

    2016-01-01

    Introduction A significant proportion of youth need to transition from child and adolescent mental health services (CAMHS) to adult mental health services (AMHS); however, the transition process is not well understood and is often experienced poorly by youth. In the effort to design and evaluate standards of practice for transitions, there is a need to identify key elements of a successful transition. The objectives of this scoping review are to: (1) identify definitions of successful transitions from CAMHS to AMHS; and (2) identify indicators that have been used to measure CAMHS–AMHS transition care processes, quality, and outcomes. Methods We will search 8 electronic bibliographic databases from 1980 to 2016 (eg, Medline, EMBASE, PsycINFO), professional associations, policy documents, and other grey literature to identify relevant material. We will include experimental, quasi-experimental, observational studies, and non-research studies (guidelines, narrative reviews, policy documents) examining the transition from CAMHS to AMHS. Two raters will independently screen each retrieved title and abstract for eligibility using the study inclusion criteria (level 1), and then will independently assess full-text articles to determine if these meet the inclusion criteria (level 2). Data extraction will be completed and results will be synthesised both quantitatively and qualitatively. Ethics and dissemination The results of the scoping review will be used to develop a set of indicators that will be prioritised and evaluated in a Delphi consensus study. This will serve as a foundation for the development of the first instrument to assess the quality and success of CAMHS–AMHS transitions. Ethics approval is not required for this scoping study. PMID:27381213

  15. Robust indexing for automatic data collection

    SciTech Connect

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  16. Fast robust correlation.

    PubMed

    Fitch, Alistair J; Kadyrov, Alexander; Christmas, William J; Kittler, Josef

    2005-08-01

    A new, fast, statistically robust, exhaustive, translational image-matching technique is presented: fast robust correlation. Existing methods are either slow or non-robust, or rely on optimization. Fast robust correlation works by expressing a robust matching surface as a series of correlations. Speed is obtained by computing correlations in the frequency domain. Computational cost is analyzed and the method is shown to be fast. Speed is comparable to conventional correlation and, for large images, thousands of times faster than direct robust matching. Three experiments demonstrate the advantage of the technique over standard correlation.
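
    The speed source the abstract points to, correlation evaluated in the frequency domain, can be sketched as below. This shows only the plain FFT cross-correlation building block, not the paper's robust-kernel series expansion, and the function names are illustrative:

```python
import numpy as np

def fft_correlate(image, template):
    """Exhaustive (circular) translational cross-correlation computed in
    the frequency domain: O(N log N) work rather than the O(N^2) cost of
    direct sliding-window matching."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)  # zero-pad to image size
    return np.fft.ifft2(F_img * np.conj(F_tpl)).real

def best_shift(image, template):
    """(row, col) translation at which the correlation surface peaks."""
    corr = fft_correlate(image, template)
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    return int(r), int(c)

rng = np.random.default_rng(0)
tpl = rng.random((8, 8))
img = np.zeros((64, 64))
img[10:18, 20:28] = tpl       # embed the template at a known offset
shift = best_shift(img, tpl)  # -> (10, 20)
```

    A robust variant replaces the single correlation with a short series of such correlations, which is why its cost stays comparable to conventional correlation.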

  17. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering, on average, 20 residential product plans in total. As of January 2012, there are now 115 energy providers in Texas offering over 300 residential product plans for customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision-making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision-making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meets their needs. To gather customer data, a survey tool was developed for customers to complete the pairwise comparison process. Results for the Incomplete AHP and AHP methods are compared to determine whether the Incomplete AHP method is as accurate as, but more efficient than, the traditional AHP method.
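
    Saaty's eigenvector step, the core of the AHP prioritization the praxis applies, can be sketched as follows. The judgment matrix here is hypothetical, merely illustrating the four criteria named above (price, rate type, customer service, green energy):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix via
    its principal right eigenvector (Saaty's method), normalised to sum
    to one."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Hypothetical judgments on Saaty's 1-9 scale: A[i][j] says how much more
# important criterion i is than criterion j.
# Order: price, rate type, customer service, green energy.
A = [[1,     3,     5,     7],
     [1 / 3, 1,     3,     5],
     [1 / 5, 1 / 3, 1,     3],
     [1 / 7, 1 / 5, 1 / 3, 1]]
weights = ahp_priorities(A)  # descending: price carries the most weight
```

    The incomplete variant asks the respondent for only a subset of these pairwise judgments and infers the rest, which is the efficiency gain the praxis evaluates.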

  18. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. To compensate a tool successfully, accurate springback results are needed, as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second is verification that the process is robust, meaning that it yields repeatable results. To enable effective compensation, a minimum clamping concept is defined. Once these preconditions are fulfilled, the tools can be compensated effectively.

  19. Processing of needle rinse material from fine-needle aspirations rarely detects malignancy not identified in smears.

    PubMed

    Henry-Stanley, M J; Stanley, M W

    1992-01-01

    When preparing FNA smears, we recover material left in the needle hub by forcefully striking the open hub against a slide. Material in the syringe tip is expressed by repeated forceful blasts of air (needle unattached). We investigated the utility of recovering additional material by rinsing the needle and syringe. Saline was used to flush the needle and syringe tip repeatedly. All material was processed by cytocentrifugation. We studied 159 needle rinse (NR) specimens from 152 patients (breast = 70, lymph node = 30, lung = 15, soft tissue = 14, salivary gland = 12, thyroid = 12, liver = 5, branchial cleft cyst = 1). Malignancy was identified in 21 FNAs (13%) from 21 patients (14%). All were diagnosed in smears (9 lung, 5 liver, 4 lymph node, 2 breast, 1 soft tissue). NR material identified 16 of these (76%). No case with benign smears (n = 138) showed malignancy in NR material. We conclude that if good technique is applied to preparation of smears and recovery of material from the needle hub and syringe tip, NR material will rarely identify additional malignancies. It thus represents an inefficient allocation of technical and human resources within the laboratory. However, NR may provide additional slides for special stains and may be useful for clinicians who do not always prepare high quality smears. Furthermore, the ease with which FNA of palpable masses can be repeated suggests that in the small number of cases requiring special stains, additional material can be readily obtained.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Systematic processes of land use/land cover change to identify relevant driving forces: implications on water quality.

    PubMed

    Teixeira, Zara; Teixeira, Heliana; Marques, João C

    2014-02-01

    Land use and land cover (LULC) are driving forces that potentially exert pressures on water bodies, which are most commonly quantified by simply obtained aggregated data. However, this is insufficient to detect the drivers that arise from the landscape change itself. To achieve this objective one must distinguish between random and systematic transitions and identify the transitions that show strong signals of change, since these will make it possible to identify the transitions that have evolved due to population growth, industrial expansion and/or changes in land management policies. Our goal is to describe a method to characterize driving forces both from LULC and dominant LULC changes, recognizing that the presence of certain LULC classes as well as the processes of transition to other uses are both sources of stress with potential effects on the condition of water bodies. This paper first quantifies the driving forces from LULC and also from processes of LULC change for three nested regions within the Mondego river basin in 1990, 2000 and 2006. It then discusses the implications for the environmental water body condition and management policies. The fingerprint left on the landscape by some of the dominant changes found, such as urbanization and industrial expansion, is, as expected, low due to their proportion in the geographic regions under study, yet their magnitude of change and consistency reveal strong signals of change regarding the pressures acting in the system. Assessing dominant LULC changes is vital for a comprehensive study of driving forces with potential impacts on water condition.
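
    A minimal way to flag systematic rather than random transitions, the distinction the abstract builds on, is to compare each observed transition with a size-based expectation. The sketch below uses a simple independence expectation as a stand-in for the fuller random-gain/random-loss decomposition used in the LULC-change literature; the matrix and names are illustrative:

```python
import numpy as np

def transition_departures(T):
    """Relative departure of each transition from what class sizes alone
    would predict (E = rowsum * colsum / total).  Strongly positive
    cells are candidates for systematic, driver-related change; cells
    near zero look random.  T[i, j] = area moving from class i at the
    earlier date to class j at the later date."""
    T = np.asarray(T, dtype=float)
    E = T.sum(axis=1, keepdims=True) * T.sum(axis=0, keepdims=True) / T.sum()
    return (T - E) / E

# Illustrative 3-class cross-tabulation (say forest, agriculture, urban)
T = [[80, 10, 10],
     [10, 10, 30],
     [10, 10, 30]]
D = transition_departures(T)  # D[1, 2] > 0: the class-1-to-2 transfer is
                              # larger than the class sizes explain
```

    Transitions that show strong positive departures consistently across observation dates are the "strong signals of change" worth tracing back to driving forces.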

  1. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  2. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.

    2012-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outlying observations may result from errors (e.g. in data transcription) or from local perturbations in the processes that are responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by emissions from local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation (the so-called external drift or trend), the estimation of the spatial dependence of the residual variation, and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise, because one needs parameter estimates to decide which observation is a potential outlier; moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and on ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation. Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled

  3. Exploiting Cloud Radar Doppler Spectra of Mixed-Phase Clouds during ACCEPT Field Experiment to Identify Microphysical Processes

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Myagkov, A.; Seifert, P.; Buehl, J.

    2015-12-01

    Cloud radar Doppler spectra offer much information about cloud processes. By analyzing millimeter-wavelength radar Doppler spectra from cloud top to cloud base in mixed-phase clouds in which supercooled liquid layers are present, we try to trace the microphysical evolution of the particles present by disentangling the contributions of solid and liquid particles to the total radar returns. Instead of considering vertical profiles, dynamical effects are taken into account by following the particle population evolution along the slanted paths caused by horizontal advection of the cloud. The goal is to identify regions in which different microphysical processes occur, such as new particle formation (nucleation), water vapor deposition, aggregation, riming, or sublimation. Cloud radar measurements are supplemented by Doppler lidar and Raman lidar observations as well as observations with MWR, wind profiler, and radiosondes. The presence of supercooled liquid layers is identified by positive liquid water paths in MWR measurements; the vertical location of liquid layers (in non-raining systems and below lidar extinction) is derived from regions of high backscatter and low depolarization in Raman lidar observations. In collocated cloud radar measurements, we try to identify cloud phase in the radar Doppler spectrum via the location of the Doppler peak(s), the existence of multi-modalities, or the spectral skewness. Additionally, within the supercooled liquid layers, the radar-identified liquid droplets are used as air motion tracers to correct the radar Doppler spectrum for vertical air motion w. These radar-derived estimates of w are validated against independent estimates of w from collocated Doppler lidar measurements. A 35 GHz vertically pointing cloud Doppler radar (METEK MIRA-35) in linear depolarization (LDR) mode is used. Data are from the deployment of the Leipzig Aerosol and Cloud Remote Observations System (LACROS) during the Analysis of the Composition of

  4. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin; Schirmer, Mario; Hoehn, Eduard; Vogt, Tobias

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  5. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems

    PubMed Central

    Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary

    2011-01-01

    Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201

  6. Proteomic analyses identify a diverse array of nuclear processes affected by small ubiquitin-like modifier conjugation in Arabidopsis

    PubMed Central

    Miller, Marcus J.; Barrett-Wilt, Gregory A.; Hua, Zhihua; Vierstra, Richard D.

    2010-01-01

    The covalent attachment of SUMO (small ubiquitin-like modifier) to other intracellular proteins affects a broad range of nuclear processes in yeast and animals, including chromatin maintenance, transcription, and transport across the nuclear envelope, as well as protects proteins from ubiquitin addition. Substantial increases in SUMOylated proteins upon various stresses have also implicated this modification in the general stress response. To help understand the role(s) of SUMOylation in plants, we developed a stringent method to isolate SUMO-protein conjugates from Arabidopsis thaliana that exploits a tagged SUMO1 variant that faithfully replaces the wild-type protein. Following purification under denaturing conditions, SUMOylated proteins were identified by tandem mass spectrometry from both nonstressed plants and those exposed to heat and oxidative stress. The list of targets is enriched for factors that direct SUMOylation and for nuclear proteins involved in chromatin remodeling/repair, transcription, RNA metabolism, and protein trafficking. Targets of particular interest include histone H2B, components in the LEUNIG/TOPLESS corepressor complexes, and proteins that control histone acetylation and DNA methylation, which affect genome-wide transcription. SUMO attachment site(s) were identified in a subset of targets, including SUMO1 itself to confirm the assembly of poly-SUMO chains. SUMO1 also becomes conjugated with ubiquitin during heat stress, thus connecting these two posttranslational modifications in plants. Taken together, we propose that SUMOylation represents a rapid and global mechanism for reversibly manipulating plant chromosomal functions, especially during environmental stress. PMID:20813957

  7. Microcalcifications versus artifacts: initial evaluation of a new ultrasound image processing technique to identify breast microcalcifications in a screening population.

    PubMed

    Machado, Priscilla; Eisenbrey, John R; Cavanaugh, Barbara; Forsberg, Flemming

    2014-09-01

    A new commercial image processing technique (MicroPure, Toshiba America Medical Systems, Tustin, CA, USA) that identifies breast microcalcifications was evaluated at the time of patients' annual screening mammograms. Twenty women scheduled for annual screening mammography were enrolled in the study. Patients underwent bilateral outer-upper-quadrant real-time dual gray scale ultrasound and MicroPure imaging using an Aplio XG scanner (Toshiba). MicroPure combines non-linear imaging and speckle suppression to mark suspected calcifications as white spots in a blue overlay image. Four independent and blinded readers analyzed digital clips to determine the presence or absence of microcalcifications and artifacts. The presence of microcalcifications determined by readers was not significantly different from that of mammography (p = 0.57). However, the accuracy was low overall (52%) and also in younger women (<50 years, 54%). In conclusion, although microcalcifications can be identified using MicroPure imaging, this method is not currently appropriate for a screening population and should be used in more focused applications. PMID:25023105

  8. A multivariate statistical approach to identify the spatio-temporal variation of geochemical process in a hard rock aquifer.

    PubMed

    Thivya, C; Chidambaram, S; Thilagavathi, R; Prasanna, M V; Singaraja, C; Adithya, V S; Nepolian, M

    2015-09-01

    A study has been carried out in the crystalline hard rock aquifers of Madurai district, Tamil Nadu, to identify the spatial and temporal variations and to understand the sources responsible for hydrogeochemical processes in the region. In total, 216 samples were collected over four seasons [premonsoon (PRM), southwest monsoon (SWM), northeast monsoon (NEM), and postmonsoon (POM)]. The Na and K ions are attributed to weathering of feldspars in charnockite and fissile hornblende gneiss. The results also indicate that the monsoon leaches U ions into the groundwater, which is later reflected in the (222)Rn levels. The statistical relationships in the temporal data reflect the fact that Ca, Mg, Na, Cl, HCO3, and SO4 are the chief ions playing a significant role in the geochemistry of the region. The factor loadings of the temporal data reveal that the predominant factor is anthropogenic, followed by natural weathering and U dissolution. The spatial analysis of the temporal data reveals that weathering is prominent in the NW part of the study area, while U and (222)Rn are distributed along the NE part. This is also reflected in the cluster analysis, and it is understood that lithology, land use pattern, lineaments, and groundwater flow direction determine the spatial variation of these ions with respect to season. PMID:26239570

  10. The role of various amino acids in enzymatic browning process in potato tubers, and identifying the browning products.

    PubMed

    Ali, Hussein M; El-Gizawy, Ahmed M; El-Bassiouny, Rawia E I; Saleh, Mahmoud A

    2016-02-01

    The effects of five structurally variant amino acids, glycine, valine, methionine, phenylalanine and cysteine, were examined as inhibitors and/or stimulators of fresh-cut potato browning. The first four amino acids showed conflicting effects: high concentrations (⩾100 mM for glycine and ⩾1.0 M for the other three amino acids) induced potato browning, while lower concentrations reduced the browning process. In contrast, increasing cysteine concentration consistently reduced browning, owing to reaction with the quinone to give a colorless adduct. In the PPO assay, high concentrations (⩾1.11 mM) of the four amino acids developed more color than the control samples. Visible spectra indicated a continuous condensation of the quinone and glycine to give colored adducts absorbing at 610-630 nm, which were separated and identified by LC-ESI-MS as a catechol-diglycine adduct that undergoes polymerization with further glycine molecules to form peptide side chains. At lower concentrations, less color developed as the concentration decreased. PMID:26304424

  12. Identifying Armed Respondents to Domestic Violence Restraining Orders and Recovering Their Firearms: Process Evaluation of an Initiative in California

    PubMed Central

    Frattaroli, Shannon; Claire, Barbara E.; Vittes, Katherine A.; Webster, Daniel W.

    2014-01-01

    Objectives. We evaluated a law enforcement initiative to screen respondents to domestic violence restraining orders for firearm ownership or possession and recover their firearms. Methods. The initiative was implemented in San Mateo and Butte counties in California from 2007 through 2010. We used descriptive methods to evaluate the screening process and recovery effort in each county, relying on records for individual cases. Results. Screening relied on an archive of firearm transactions, court records, and petitioner interviews; no single source was adequate. Screening linked 525 respondents (17.7%) in San Mateo County to firearms; 405 firearms were recovered from 119 (22.7%) of them. In Butte County, 88 (31.1%) respondents were linked to firearms; 260 firearms were recovered from 45 (51.1%) of them. Nonrecovery occurred most often when orders were never served or respondents denied having firearms. There were no reports of serious violence or injury. Conclusions. Recovering firearms from persons subject to domestic violence restraining orders is possible. We have identified design and implementation changes that may improve the screening process and the yield from recovery efforts. Larger implementation trials are needed. PMID:24328660

  13. Identifying key processes in the hydrochemistry of a basin through the combined use of factor and regression models

    NASA Astrophysics Data System (ADS)

    Yidana, Sandow Mark; Banoeng-Yakubo, Bruce; Sakyi, Patrick Asamoah

    2012-04-01

    An innovative technique for measuring the intensities of the major sources of variation in the hydrochemistry of (ground)water in a basin has been developed. This technique, based on the combination of R-mode factor analysis and multiple regression, measures the degree of influence of each major source of variation in the hydrochemistry without requiring the concentrations of the entire set of physico-chemical parameters often used to characterize water systems. R-mode factor analysis was applied to data on 13 physico-chemical parameters from 50 samples in order to determine the major sources of variation in the hydrochemistry of some aquifers in the western region of Ghana. In this study, three sources of variation in the hydrochemistry were distinguished: the dissolution of chlorides and sulfates of the major cations, carbonate mineral dissolution, and silicate mineral weathering. Two key parameters were identified for each process, and multiple regression models were developed for each. These models were tested, found to predict the processes quite accurately, and can be applied anywhere within the terrain. This technique can be reliably applied in areas where logistical constraints limit water sampling for whole-basin hydrochemical characterization. Q-mode hierarchical cluster analysis (HCA) applied to the data revealed three major groundwater associations, distinguished on the basis of the major causes of variation in the hydrochemistry: Na-HCO3, Ca-HCO3, and Na-Cl groundwater types. Silicate stability diagrams indicate that all three groundwater types plot mainly in the kaolinite and montmorillonite stability fields, suggesting moderately restricted flow conditions.
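    The two-stage factor/regression idea described above can be sketched in a few lines. This is a minimal illustration on synthetic data, with PCA standing in for R-mode factor analysis; the "50 samples, 13 parameters" shape mirrors the study, but the data, the 3-factor choice, and the two "key" columns are assumptions, not the authors' values.

    ```python
    # Stage 1: extract major sources of variation (PCA as a simple factor model).
    # Stage 2: regress a factor's scores on two key parameters, so process
    # intensity can be estimated without measuring the full parameter suite.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_params = 50, 13
    X = rng.normal(size=(n_samples, n_params))   # stand-in for measured parameters

    # Standardize, then extract factor scores via SVD
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :3] * s[:3]                    # scores on 3 retained factors

    # Multiple regression of factor 1 on two key parameters (columns 0 and 1)
    A = np.column_stack([np.ones(n_samples), Z[:, 0], Z[:, 1]])
    coef, *_ = np.linalg.lstsq(A, scores[:, 0], rcond=None)
    pred = A @ coef
    r2 = 1 - np.sum((scores[:, 0] - pred) ** 2) / np.sum(scores[:, 0] ** 2)
    print(f"R^2 of two-parameter model for factor 1: {r2:.2f}")
    ```

    On real data, a high R² for such a reduced model is what justifies monitoring only the key parameters in logistically constrained areas.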

  14. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    ERIC Educational Resources Information Center

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  15. Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO sub 2 and Climate Change --- The MINK Project

    SciTech Connect

    Easterling, W.E. III; McKenney, M.S.; Rosenberg, N.J.; Lemon, K.M.

    1991-08-01

    The second report of a series Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO{sub 2} and Climate Change -- The MINK Project is composed of two parts. This Report (IIB) deals with agriculture at the level of farms and Major Land Resource Areas (MLRAs). The Erosion Productivity Impact Calculator (EPIC), a crop growth simulation model developed by scientists at the US Department of Agriculture, is used to study the impacts of the analog climate on yields of main crops in both the 1984/87 and the 2030 baselines. The results of this work with EPIC are the basis for the analysis of the climate change impacts on agriculture at the region-wide level undertaken in this report. Report IIA treats agriculture in MINK in terms of state and region-wide production and resource use for the main crops and animals in the baseline periods of 1984/87 and 2030. The effects of the analog climate on the industry at this level of aggregation are considered in both baseline periods. 41 refs., 40 figs., 46 tabs.

  16. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes, such as clean streams in a river network becoming contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting on a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness accounts only for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual-perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN), composed of the unaffected nodes, and (ii) the Idle Network (IN), composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building up the IN. This approach is motivated by concrete applied problems: for example, when studying the dynamics of contamination in river systems, it is necessary to know the connectivity of both the healthy and the contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
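    The dual AN/IN bookkeeping is easy to make concrete. The sketch below tracks, after each node removal on a toy undirected network, the giant-component size of both the Active and the Idle network; the toy graph, attack order, and the choice of giant-component size as the summary statistic are illustrative, not the authors' exact metric.

    ```python
    # Track Active Network (unaffected nodes) and Idle Network (removed nodes)
    # giant-component sizes as an attack removes nodes one by one.
    def components(nodes, edges):
        """Connected components of the subgraph induced by `nodes`."""
        adj = {n: set() for n in nodes}
        for u, v in edges:
            if u in adj and v in adj:
                adj[u].add(v); adj[v].add(u)
        seen, comps = set(), []
        for n in nodes:
            if n in seen:
                continue
            stack, comp = [n], set()
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x); seen.add(x)
                stack.extend(adj[x] - comp)
            comps.append(comp)
        return comps

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]  # ring + chord
    nodes = set(range(6))
    attack_order = [1, 4, 2]           # nodes removed in sequence

    removed, history = set(), []
    for step, n in enumerate(attack_order, 1):
        removed.add(n)
        active = nodes - removed
        an = max((len(c) for c in components(active, edges)), default=0)
        idle = max((len(c) for c in components(removed, edges)), default=0)
        history.append((an, idle))
        print(f"step {step}: |AN giant|={an}, |IN giant|={idle}")
    ```

    A combined robustness score would then weigh how quickly the AN fragments against how quickly the IN coalesces, which is where the re-ranking of attack strategies arises.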

  17. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  18. A natural class of robust networks.

    PubMed

    Aldana, Maximino; Cluzel, Philippe

    2003-07-22

    As biological studies shift from molecular description to system analysis we need to identify the design principles of large intracellular networks. In particular, without knowing the molecular details, we want to determine how cells reliably perform essential intracellular tasks. Recent analyses of signaling pathways and regulatory transcription networks have revealed a common network architecture, termed scale-free topology. Although the structural properties of such networks have been thoroughly studied, their dynamical properties remain largely unexplored. We present a prototype for the study of dynamical systems to predict the functional robustness of intracellular networks against variations of their internal parameters. We demonstrate that the dynamical robustness of these complex networks is a direct consequence of their scale-free topology. By contrast, networks with homogeneous random topologies require fine-tuning of their internal parameters to sustain stable dynamical activity. Considering the ubiquity of scale-free networks in nature, we hypothesize that this topology is not only the result of aggregation processes such as preferential attachment; it may also be the result of evolutionary selective processes. PMID:12853565
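    The dynamical prototype discussed above is in the spirit of a Kauffman-style random Boolean network. The following sketch builds such a network (each node updated by a random Boolean rule of K random inputs), iterates it synchronously, and reports the attractor length reached from a random initial state; all parameters are illustrative, and a scale-free wiring would replace the uniform input choice used here.

    ```python
    # Minimal random Boolean network: random wiring, random update rules,
    # synchronous dynamics, cycle detection to find the attractor period.
    import random

    def rbn_attractor_length(n=12, k=2, seed=1):
        rng = random.Random(seed)
        inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
        rules = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
        state = tuple(rng.randrange(2) for _ in range(n))

        seen, t = {}, 0
        while state not in seen:
            seen[state] = t
            # each node reads its k inputs as a binary index into its rule table
            idx = [sum(state[j] << b for b, j in enumerate(inputs[i]))
                   for i in range(n)]
            state = tuple(rules[i][idx[i]] for i in range(n))
            t += 1
        return t - seen[state]        # period of the attractor reached

    print("attractor length:", rbn_attractor_length())
    ```

    Comparing attractor statistics between scale-free and homogeneous wirings, over many realizations, is the kind of experiment the paper's robustness claim rests on.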

  19. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  20. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

    Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment (Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014).
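    The classification step combines an Sv threshold with a two-frequency dB difference, and can be sketched as a simple per-cell test. The threshold and acceptance window below are placeholders, not the paper's calibrated values.

    ```python
    # Flag a cell as a euphausiid aggregation if its 120 kHz mean volume
    # backscattering strength exceeds a threshold AND the Sv(120) - Sv(43)
    # dB difference falls in a krill-like window. Values are illustrative.
    SV_THRESHOLD_DB = -70.0         # minimum Sv for an aggregation (assumed)
    DB_DIFF_WINDOW = (2.0, 12.0)    # accepted Sv(120) - Sv(43) range (assumed)

    def is_euphausiid(sv120_db, sv43_db):
        diff = sv120_db - sv43_db
        return sv120_db > SV_THRESHOLD_DB and DB_DIFF_WINDOW[0] <= diff <= DB_DIFF_WINDOW[1]

    # Three cells: (Sv at 120 kHz, Sv at 43 kHz), in dB re 1 m^-1
    cells = [(-65.0, -72.0), (-82.0, -85.0), (-60.0, -61.0)]
    flags = [is_euphausiid(a, b) for a, b in cells]
    print(flags)   # [True, False, False]
    ```

    Cells passing this test would then be handed to the four-frequency inversion for length and density estimation.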

  1. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  2. Identifying Complex Cultural Interactions in the Instructional Design Process: A Case Study of a Cross-Border, Cross-Sector Training for Innovation Program

    ERIC Educational Resources Information Center

    Russell, L. Roxanne; Kinuthia, Wanjira L.; Lokey-Vega, Anissa; Tsang-Kosma, Winnie; Madathany, Reeny

    2013-01-01

    The purpose of this research is to identify complex cultural dynamics in the instructional design process of a cross-sector, cross-border training environment by applying Young's (2009) Culture-Based Model (CBM) as a theoretical framework and taxonomy for description of the instructional design process under the conditions of one case. This…

  3. Identifying causal networks linking cancer processes and anti-tumor immunity using Bayesian network inference and metagene constructs.

    PubMed

    Kaiser, Jacob L; Bland, Cassidy L; Klinke, David J

    2016-03-01

    Cancer arises from a deregulation of both intracellular and intercellular networks that maintain system homeostasis. Identifying the architecture of these networks and how they are changed in cancer is a pre-requisite for designing drugs to restore homeostasis. Since intercellular networks only appear in intact systems, it is difficult to identify how these networks become altered in human cancer using many of the common experimental models. To overcome this, we used the diversity in normal and malignant human tissue samples from the Cancer Genome Atlas (TCGA) database of human breast cancer to identify the topology associated with intercellular networks in vivo. To improve the underlying biological signals, we constructed Bayesian networks using metagene constructs, which represented groups of genes that are concomitantly associated with different immune and cancer states. We also used bootstrap resampling to establish the significance associated with the inferred networks. In short, we found opposing relationships between cell proliferation and epithelial-to-mesenchymal transformation (EMT) with regards to macrophage polarization. These results were consistent across multiple carcinomas in that proliferation was associated with a type 1 cell-mediated anti-tumor immune response and EMT was associated with a pro-tumor anti-inflammatory response. To address the identifiability of these networks from other datasets, we could identify the relationship between EMT and macrophage polarization with fewer samples when the Bayesian network was generated from malignant samples alone. However, the relationship between proliferation and macrophage polarization was identified with fewer samples when the samples were taken from a combination of the normal and malignant samples. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:470-479, 2016.
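    The bootstrap-significance idea used above can be illustrated compactly: resample the samples with replacement and record how often a candidate edge is recovered. In this sketch a simple correlation test stands in for full Bayesian structure learning, and the metagene data, effect size, and recovery cutoff are all synthetic assumptions.

    ```python
    # Bootstrap support for a putative edge between two metagene constructs.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200
    prolif = rng.normal(size=n)                               # "proliferation" metagene
    m1_polar = 0.8 * prolif + rng.normal(scale=0.6, size=n)   # "M1 polarization" metagene

    def edge_recovered(x, y):
        # Stand-in for structure learning: is the dependence detectable?
        return abs(np.corrcoef(x, y)[0, 1]) > 0.3

    B, hits = 500, 0
    for _ in range(B):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        hits += edge_recovered(prolif[idx], m1_polar[idx])
    confidence = hits / B
    print(f"bootstrap support for edge: {confidence:.2f}")
    ```

    Edges retained in nearly all bootstrap replicates, as here, are the ones reported as significant network features.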

  4. Socially Shared Metacognitive Regulation during Reciprocal Peer Tutoring: Identifying Its Relationship with Students' Content Processing and Transactive Discussions

    ERIC Educational Resources Information Center

    De Backer, Liesje; Van Keer, Hilde; Valcke, Martin

    2015-01-01

    Although successful collaborative learning requires socially shared metacognitive regulation (SSMR) of the learning process among multiple students, empirical research on SSMR is limited. The present study contributes to the emerging research on SSMR by examining its correlation with both collaborative learners' content processing strategies and…

  5. A MULTI-SCALE SCREENING PROCESS TO IDENTIFY LEAST-DISTURBED STREAM SITES FOR USE IN WATER QUALITY MONITORING

    EPA Science Inventory

    We developed a four-step screening procedure to identify least-disturbed stream sites for an EPA Environmental Monitoring and Assessment Program (EMAP) pilot project being conducted in twelve western states. In this project, biological attributes at least-disturbed sites are use...

  6. Examining the Cognitive Processes Used by Adolescent Girls and Women Scientists in Identifying Science Role Models: A Feminist Approach

    ERIC Educational Resources Information Center

    Buck, Gayle A.; Plano Clark, Vicki L.; Leslie-Pelecky, Diandra; Lu, Yun; Cerda-Lizarraga, Particia

    2008-01-01

    Women remain underrepresented in science professions. Studies have shown that students are more likely to select careers when they can identify a role model in that career path. Further research has shown that the success of this strategy is enhanced by the use of gender-matched role models. While prior work provides insights into the value of…

  7. Seventeen Projects Carried out by Students Designing for and with Disabled Children: Identifying Designers' Difficulties during the Whole Design Process

    ERIC Educational Resources Information Center

    Magnier, Cecile; Thomann, Guillaume; Villeneuve, Francois

    2012-01-01

    This article aims to identify the difficulties that may arise when designing assistive devices for disabled children. Seventeen design projects involving disabled children, engineering students, and special schools were analysed. A content analysis of the design reports was performed. For this purpose, a coding scheme was built based on a review…

  8. Use of a marker organism in poultry processing to identify sites of cross-contamination and evaluate possible control measures.

    PubMed

    Mead, G C; Hudson, W R; Hinton, M H

    1994-07-01

    1. Nine different sites at a poultry processing plant were selected in the course of a hazard analysis to investigate the degree of microbial cross-contamination that could occur during processing and the effectiveness of possible control measures. 2. At each site, carcases, equipment or working surfaces were inoculated with a non-pathogenic strain of nalidixic acid-resistant Escherichia coli K12; transmission of the organism among carcases being processed was followed qualitatively and, where appropriate, quantitatively. 3. The degree of cross-contamination and the extent to which it could be controlled by the proposed measures varied from one site to another. PMID:7953779

  9. THE ORIGINS OF LIGHT AND HEAVY R-PROCESS ELEMENTS IDENTIFIED BY CHEMICAL TAGGING OF METAL-POOR STARS

    SciTech Connect

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

    Interest in neutron star (NS) mergers as the origin of r-process elements has grown since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers successful in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers in the peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we successfully interpret an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation exhibited by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, stars in the outlier, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that is inclined to be massive and consist of both light and heavy r-process nuclides.

  10. Assembly interruptability robustness model with applications to Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Wade, James William

    1991-02-01

    Interruptability robustness of a construction project together with its assembly sequence may be measured by calculating the probability of its survival and successful completion in the face of unplanned interruptions of the assembly process. Such an interruption may jeopardize the survival of the structure being assembled, the survival of the support equipment, and/or the safety of the members of the construction crew, depending upon the stage in the assembly sequence when the interruption occurs. The interruption may be due to a number of factors such as: machinery breakdowns, environmental damage, worker emergency illness or injury, etc. Each source of interruption has a probability of occurring, and adds an associated probability of loss, schedule delay, and cost to the project. Several options may exist for reducing the consequences of an interruption at a given point in the assembly sequence, including altering the assembly sequence, adding extra components or equipment as interruptability 'insurance', increasing the capability of support facilities, etc. Each option may provide a different overall performance of the project as it relates to success, assembly time, and project cost. The Interruptability Robustness Model was devised and provides a method which allows the overall interruptability robustness of construction of a project design and its assembly sequence to be quantified. In addition, it identifies the susceptibility to interruptions at all points within the assembly sequence. The model is applied to the present problem of quantifying and improving interruptability robustness during the construction of Space Station Freedom. This application was used as a touchstone for devising the Interruptability Robustness Model. However, the model may be utilized to assist in the analysis of interruptability robustness for other space-related construction projects such as the lunar base and orbital assembly of the manned Mars

  11. Identifying low-dimensional dynamics in type-I edge-localised-mode processes in JET plasmas

    SciTech Connect

    Calderon, F. A.; Chapman, S. C.; Nicol, R. M.; Dendy, R. O.; Webster, A. J.; Alper, B. [EURATOM Collaboration: JET EFDA Contributors

    2013-04-15

    Edge localised mode (ELM) measurements from reproducibly similar plasmas in the Joint European Torus (JET) tokamak, which differ only in their gas puffing rate, are analysed in terms of the pattern in the sequence of inter-ELM time intervals. It is found that the category of ELM defined empirically as type I (typically more regular, less frequent, and larger in amplitude than other ELM types) embraces substantially different ELMing processes. By quantifying the structure in the sequence of inter-ELM time intervals using delay time plots, we reveal transitions between distinct phase space dynamics, implying transitions between distinct underlying physical processes. The control parameter for transitions between these different ELMing processes is the gas puffing rate.
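    A delay time plot of the kind used above is simply the sequence of inter-ELM intervals plotted against its own one-step shift. The sketch below builds the (dt_i, dt_{i+1}) point cloud from synthetic ELM occurrence times; structure (banding, clustering) in this plane is what distinguishes different ELMing processes.

    ```python
    # Construct delay-plot coordinates from a sequence of ELM times.
    import numpy as np

    # Synthetic ELM occurrence times (seconds): quasi-periodic with modulation
    elm_times = np.cumsum(0.02 + 0.005 * np.sin(np.arange(50)))

    dt = np.diff(elm_times)                       # inter-ELM time intervals
    pairs = np.column_stack([dt[:-1], dt[1:]])    # (dt_i, dt_{i+1}) delay-plot points
    print("delay-plot point cloud shape:", pairs.shape)
    ```

    A fixed point in this plane corresponds to strictly periodic ELMing, a closed loop to quasi-periodic modulation, and a diffuse cloud to irregular dynamics, which is how the transitions between processes become visible.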

  12. The covariation of independent and dependant variables in neurofeedback: a proposal framework to identify cognitive processes and brain activity variables.

    PubMed

    Micoulaud-Franchi, Jean-Arthur; Quiles, Clélia; Fond, Guillaume; Cermolacce, Michel; Vion-Dury, Jean

    2014-05-01

    This methodological article proposes a framework for analysing the relationship between cognitive processes and brain activity using variables measured by neurofeedback (NF) carried out with functional magnetic resonance imaging (fMRI NF). Cognitive processes and brain activity variables can each be analysed as either the dependent variable or the independent variable. Firstly, we propose two traditional approaches, defined in the article as the "neuropsychological" approach (NP) and the "psychophysiology" approach (PP), for extracting dependent and independent variables in NF protocols. Secondly, we suggest that NF can be inspired by the style of inquiry used in neurophenomenology. fMRI NF allows participants to experiment with their own cognitive processes and their effects on brain region of interest (ROI) activations simultaneously. Thus, we suggest that fMRI NF could be improved by implementing "the elicitation interview method", which allows the investigator to gather relevant verbatim reports from participants' introspection on their subjective experiences.

  13. On the Nature and Evolutionary Impact of Phenotypic Robustness Mechanisms

    PubMed Central

    Leu, Jun-Yi

    2015-01-01

    Biologists have long observed that physiological and developmental processes are insensitive, or robust, to many genetic and environmental perturbations. A complete understanding of the evolutionary causes and consequences of this robustness is lacking. Recent progress has been made in uncovering the regulatory mechanisms that underlie environmental robustness in particular. Less is known about robustness to the effects of mutations, and indeed the evolution of mutational robustness remains a controversial topic. The controversy has spread to related topics, in particular the evolutionary relevance of cryptic genetic variation. This review aims to synthesize current understanding of robustness mechanisms and to cut through the controversy by shedding light on what is and is not known about mutational robustness. Some studies have confused mutational robustness with non-additive interactions between mutations (epistasis). We conclude that a profitable way forward is to focus investigations (and rhetoric) less on mutational robustness and more on epistasis. PMID:26034410

  14. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    Polymers of bacterial origin, produced either by cell secretion or as the degraded product of cell lysis, form isolated mucoidal strands as well as well-developed biofilms on interfaces. Biofilms are structurally and compositionally complex and are readily distinguishable from abiogenic films. These structures range in size from micrometers to decimeters, the latter occurring as the well-known mineralised biofilms called stromatolites. Compositionally, bacterial polymers are greater than 90% water, while the majority of the macromolecules forming the framework of the polymers are polysaccharides, with some nucleic acids and proteins. These macromolecules carry a vast number of functional groups, such as carboxyls, hydroxyls, and phosphoryls, which are implicated in cation binding. It is this elevated metal-binding capacity which provides the bacterial polymer with structural support and also helps to preserve it for up to 3.5 b.y. in the terrestrial rock record. The macromolecules can thus become rapidly mineralised and trapped in a mineral matrix. Through early and late diagenesis (bacterial degradation, burial, heat, pressure and time) many of them break down, losing their functional groups and, gradually, their hydrogen atoms; the degraded product is known as "kerogen". With further diagenesis and metamorphism, the remaining hydrogen atoms are lost and the carbonaceous matter becomes graphitised, although not all macromolecules are degraded to this end point. We have traced fossilised polymers and biofilms in rocks from throughout Earth's history, including rocks as old as 3.5 b.y. Furthermore, Time of Flight Secondary Ion Mass Spectrometry has been able to identify individual macromolecules of bacterial origin, the identities of which are still being investigated, in all the samples

  15. The Role of the Speech-Language Pathologist in Identifying and Treating Children with Auditory Processing Disorder

    ERIC Educational Resources Information Center

    Richard, Gail J.

    2011-01-01

    Purpose: The purpose of this prologue is to provide a historical perspective regarding the controversial issues surrounding auditory processing disorder (APD), as well as a summary of the current issues and perspectives that will be discussed in the articles in this forum. Method: An evidence-based systematic review was conducted to examine…

  16. An Exploration of Strategic Planning Perspectives and Processes within Community Colleges Identified as Being Distinctive in Their Strategic Planning Practices

    ERIC Educational Resources Information Center

    Augustyniak, Lisa J.

    2015-01-01

    Community college leaders face unprecedented change, and some have begun reexamining their institutional strategic planning processes. Yet, studies in higher education strategic planning spend little time examining how community colleges formulate their strategic plans. This mixed-method qualitative study used an expert sampling method to identify…

  17. The Concept of a Zone of Intervention for Identifying the Role of Intermediaries in the Information Search Process.

    ERIC Educational Resources Information Center

    Kuhlthau, Carol C.

    1996-01-01

    Examines patterns of uncertainty, complexity, and process in the perceptions of information users from different work environments. Zone intervention based on Vygotsky's concept of the zone of proximal development is presented. A study with two early career professionals showed need for a more interactive, collaborative role for the library…

  18. Identifying the Associated Factors of Mediation and Due Process in Families of Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Burke, Meghan M.; Goldman, Samantha E.

    2015-01-01

    Compared to families of students with other types of disabilities, families of students with autism spectrum disorder (ASD) are significantly more likely to enact their procedural safeguards such as mediation and due process. However, we do not know which school, child, and parent characteristics are associated with the enactment of safeguards.…

  19. The AP Chemistry Course Audit: A Fertile Ground for Identifying and Addressing Misconceptions about the Course and Process

    ERIC Educational Resources Information Center

    Schwenz, Richard W.; Miller, Sheldon

    2014-01-01

    The advanced placement course audit was implemented to standardize the college-level curricular and resource requirements for AP courses. While the process has had this effect, it has brought with it misconceptions about how much the College Board intends to control what happens within the classroom, what information is required to be included in…

  20. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2005-09-06

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer is accomplished by low-pass filtering the windowed reflections from the target layers, retaining frequencies below the lowest corner (or full width at half maximum) of the recorded frequency spectrum. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.
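    The low-pass imaging step can be sketched with a synthetic windowed reflection: transform, zero everything above a corner frequency, and use the amplitude of the filtered trace as the image value. The trace, sampling rate, and 25 Hz corner below are placeholders, not values from the patent.

    ```python
    # FDPI-style low-pass step on a synthetic windowed reflection:
    # keep only spectral content below an assumed corner frequency.
    import numpy as np

    fs = 500.0                                    # sampling rate, Hz (assumed)
    t = np.arange(500) / fs                       # 1 s reflection window
    # Synthetic reflection: a 15 Hz "reservoir" component plus 60 Hz energy
    trace = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    spec[freqs > 25.0] = 0.0                      # low-pass below the corner
    low = np.fft.irfft(spec, n=t.size)

    print(f"low-frequency image amplitude: {np.abs(low).max():.2f}")
    ```

    In the method, ratios of such low-frequency image amplitudes between traces carry the permeability/viscosity/saturation information.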

  1. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2006-11-14

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer is accomplished by low-pass filtering the windowed reflections from the target layers, retaining frequencies below the lowest corner (or full width at half maximum) of the recorded frequency spectrum. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.

  2. Identifying consumer preferences for specific beef flavor characteristics in relation to cattle production and postmortem processing parameters.

    PubMed

    O'Quinn, T G; Woerner, D R; Engle, T E; Chapman, P L; Legako, J F; Brooks, J C; Belk, K E; Tatum, J D

    2016-02-01

    Sensory analysis of ground LL samples representing 12 beef product categories was conducted in 3 different regions of the U.S. to identify flavor preferences of beef consumers. Treatments characterized production-related flavor differences associated with USDA grade, cattle type, finishing diet, growth enhancement, and postmortem aging method. Consumers (N=307) rated cooked samples for 12 flavors and overall flavor desirability. Samples were analyzed to determine fatty acid content. Volatile compounds produced by cooking were extracted and quantified. Overall, consumers preferred beef that rated high for beefy/brothy, buttery/beef fat, and sweet flavors and disliked beef with fishy, livery, gamey, and sour flavors. Flavor attributes of samples higher in intramuscular fat with greater amounts of monounsaturated fatty acids and lesser proportions of saturated, odd-chain, omega-3, and trans fatty acids were preferred by consumers. Of the volatiles identified, diacetyl and acetoin were most closely correlated with desirable ratings for overall flavor and dimethyl sulfide was associated with an undesirable sour flavor. PMID:26560806

  3. Modeling Efforts to Aid in the Prediction of Process Enrichment Levels with the Intent of Identifying Potential Material Diversion

    SciTech Connect

    Guenther, C F; Elayat, H A; O'Connell, W J; Lambert, H E

    2007-05-31

    As part of an ongoing effort at Lawrence Livermore National Laboratory (LLNL) to enhance analytical models that simulate enrichment and conversion facilities, efforts are underway to develop routines to estimate the total gamma-ray flux and that of specific lines around process piping containing UF{sub 6}. The intent of the simulation modeling effort is to aid in the identification of possible areas where material diversion could occur, as input to an overall safeguards strategy. The operation of an enrichment facility for the production of low enriched uranium (LEU) presents certain proliferation concerns, including both the possibility of diversion of LEU and the potential for producing material enriched to higher-than-declared, weapons-usable levels. Safeguards applied by the International Atomic Energy Agency (IAEA) are designed to provide assurance against diversion or misuse. Among the measures being considered for use is the measurement of radiation fields at various locations in the cascade hall. Our prior efforts in this area have focused on developing a model to predict neutron fields and how they would change during diversion or misuse. The neutron models indicated that while neutron detection was useful in monitoring feed and product containers, it was not useful for monitoring process lines. Our current effort is aimed at developing algorithms that provide estimates of the gamma radiation field outside any process line for the purpose of determining the most effective locations for placing in-plant gamma-monitoring equipment. These algorithms could also be modified to provide both dose and spectral information and, ultimately, detector responses that could be physically measured at various points on the process line. Such information could be used to optimize detector locations in support of real-time on-site monitoring to determine the enrichment levels within a process stream. The results of parametric analyses to establish expected variations for
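    For orientation, the scale of such a gamma estimate can be sketched with a heavily simplified, uncollided point-source approximation of the 186 keV U-235 line. The emission rate, attenuation coefficient, and geometry below are rough textbook-style assumptions, not the LLNL algorithms:

```python
import numpy as np

# Illustrative constants (approximate; not from the LLNL model):
S_PER_GRAM_U235 = 4.3e4   # 186 keV photons/s emitted per gram of U-235
MU_STEEL = 1.24           # linear attenuation of steel near 186 keV, 1/cm

def line_186keV_flux(mass_u_g, enrichment, wall_cm, r_cm):
    """Uncollided 186 keV flux (photons/cm^2/s) at r_cm from mass_u_g
    grams of uranium at the given U-235 fraction, behind a steel wall."""
    source = S_PER_GRAM_U235 * mass_u_g * enrichment
    return source * np.exp(-MU_STEEL * wall_cm) / (4.0 * np.pi * r_cm ** 2)

# The line intensity scales linearly with enrichment, which is what makes
# gamma monitoring a candidate for detecting higher-than-declared levels.
flux_leu = line_186keV_flux(100.0, 0.05, 0.5, 30.0)
flux_heu = line_186keV_flux(100.0, 0.90, 0.5, 30.0)
```

    In this simplified picture the measured line rate is directly proportional to the U-235 fraction in the process stream, so the ratio of two fluxes tracks the ratio of enrichments.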

  4. Identifying the sources and processes of mercury in subtropical estuarine and ocean sediments using Hg isotopic composition.

    PubMed

    Yin, Runsheng; Feng, Xinbin; Chen, Baowei; Zhang, Junjun; Wang, Wenxiong; Li, Xiangdong

    2015-02-01

    The concentrations and isotopic compositions of mercury (Hg) in surface sediments of the Pearl River Estuary (PRE) and the South China Sea (SCS) were analyzed. The data revealed significant differences between the total Hg (THg) in fine-grained sediments collected from the PRE (8-251 μg kg(-1)) and those collected from the SCS (12-83 μg kg(-1)). Large spatial variations in Hg isotopic compositions were observed in the SCS (δ(202)Hg, from -2.82 to -2.10‰; Δ(199)Hg, from +0.21 to +0.45‰) and PRE (δ(202)Hg, from -2.80 to -0.68‰; Δ(199)Hg, from -0.15 to +0.16‰). The large positive Δ(199)Hg in the SCS indicated that a fraction of Hg has undergone Hg(2+) photoreduction processes prior to incorporation into the sediments. The relatively negative Δ(199)Hg values in the PRE indicated that photoreduction of Hg is not the primary route for the removal of Hg from the water column. The riverine input of fine particles played an important role in transporting Hg to the PRE sediments. In the deep ocean bed of the SCS, source-related signatures of Hg isotopes may have been altered by natural geochemical processes (e.g., Hg(2+) photoreduction and preferential adsorption processes). Using Hg isotope compositions, we estimate that river deliveries of Hg from industrial and urban sources and natural soils could be the main inputs of Hg to the PRE. However, the use of Hg isotopes as tracers in source attribution could be limited because of the isotope fractionation by natural processes in the SCS.

  5. SU-C-304-02: Robust and Efficient Process for Acceptance Testing of Varian TrueBeam Linacs Using An Electronic Portal Imaging Device (EPID)

    SciTech Connect

    Yaddanapudi, S; Cai, B; Sun, B; Li, H; Noel, C; Goddu, S; Mutic, S; Harry, T; Pawlicki, T

    2015-06-15

    Purpose: The purpose of this project was to develop a process that utilizes the onboard kV and MV electronic portal imaging devices (EPIDs) to perform rapid acceptance testing (AT) of linacs in order to improve efficiency and standardize AT equipment and processes. Methods: In this study a Varian TrueBeam linac equipped with an amorphous silicon based EPID (aSi1000) was used. The conventional set of AT tests and tolerances was used as a baseline guide, and a novel methodology was developed to perform as many tests as possible using the EPID exclusively. The developer mode on the Varian TrueBeam linac was used to automate the process. In the current AT process there are about 45 tests that call for customer demos. Many of the geometric tests, such as jaw alignment and MLC positioning, are performed with highly manual methods, such as using graph paper. The goal of the new methodology was to achieve quantitative testing while reducing variability in data acquisition, analysis and interpretation of the results. The developed process was validated on two machines at two different institutions. Results: At least 25 of the 45 tests (56%) that required a customer demo can be streamlined and performed using EPIDs. More than half of the AT tests can be fully automated using the developer mode, while others still require some user interaction. Overall, the preliminary data shows that EPID-based linac AT can be performed in less than a day, compared to 2–3 days using conventional methods. Conclusions: Our preliminary results show that performance of onboard imagers is quite suitable for both geometric and dosimetric testing of TrueBeam systems. A standardized AT process can tremendously improve efficiency, and minimize the variability related to third party quality assurance (QA) equipment and the available onsite expertise. Research funding provided by Varian Medical Systems. Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical

  6. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process.

    PubMed

    Springelkamp, Henriët; Höhn, René; Mishra, Aniket; Hysi, Pirro G; Khor, Chiea-Chuen; Loomis, Stephanie J; Bailey, Jessica N Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F; Luo, Xiaoyan; Ramdas, Wishal D; Vithana, Eranga; Nongpiur, Monisha E; Montgomery, Grant W; Xu, Liang; Mountain, Jenny E; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C; Sim, Kar-Seng; van Leeuwen, Elisabeth M; Iglesias, Adriana I; Verhoeven, Virginie J M; Hauser, Michael A; Loon, Seng-Chee; Despriet, Dominiek D G; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G; Schillert, Arne; Kang, Jae H; Landers, John; Jonasson, Fridbert; Cree, Angela J; van Koolwijk, Leonieke M E; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Weinreb, Robert N; de Jong, Paulus T V M; Oostra, Ben A; Uitterlinden, André G; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P; Spector, Timothy D; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R; Teo, Yik-Ying; Haines, Jonathan L; Wolfs, Roger C W; Lemij, Hans G; Tai, E-Shyong; Jansonius, Nomdo M; Jonas, Jost B; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C; Klaver, Caroline C W; Craig, Jamie E; Macgregor, Stuart; Mackey, David A; Lotery, Andrew J; Stefansson, Kari; Bergen, Arthur A B; Young, Terri L; Wiggs, Janey L; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R; Hewitt, Alex W; van Duijn, Cornelia M; Hammond, Christopher J

    2014-09-22

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition.
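    The quintile-based risk-score comparison works by weighting each risk allele by its estimated effect size and binning individuals by total score. The cohort size, loci, dosages, and effect sizes below are synthetic stand-ins for the sketch, not the consortium's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: allele dosages (0/1/2) at 10 risk loci for 1000
# individuals, with invented per-allele effect sizes.
dosages = rng.integers(0, 3, size=(1000, 10))
effects = rng.normal(0.1, 0.05, size=10)

risk_score = dosages @ effects                      # weighted allele count
edges = np.quantile(risk_score, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(risk_score, edges)           # 0 = lowest, 4 = highest

top_mean = risk_score[quintile == 4].mean()
bottom_mean = risk_score[quintile == 0].mean()
```

    In the study it is the case-control status of the top versus bottom quintiles that yields the reported 2.5-fold risk ratio; the sketch shows only the score construction and binning.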

  7. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process.

    PubMed

    Springelkamp, Henriët; Höhn, René; Mishra, Aniket; Hysi, Pirro G; Khor, Chiea-Chuen; Loomis, Stephanie J; Bailey, Jessica N Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F; Luo, Xiaoyan; Ramdas, Wishal D; Vithana, Eranga; Nongpiur, Monisha E; Montgomery, Grant W; Xu, Liang; Mountain, Jenny E; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C; Sim, Kar-Seng; van Leeuwen, Elisabeth M; Iglesias, Adriana I; Verhoeven, Virginie J M; Hauser, Michael A; Loon, Seng-Chee; Despriet, Dominiek D G; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G; Schillert, Arne; Kang, Jae H; Landers, John; Jonasson, Fridbert; Cree, Angela J; van Koolwijk, Leonieke M E; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Weinreb, Robert N; de Jong, Paulus T V M; Oostra, Ben A; Uitterlinden, André G; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P; Spector, Timothy D; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R; Teo, Yik-Ying; Haines, Jonathan L; Wolfs, Roger C W; Lemij, Hans G; Tai, E-Shyong; Jansonius, Nomdo M; Jonas, Jost B; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C; Klaver, Caroline C W; Craig, Jamie E; Macgregor, Stuart; Mackey, David A; Lotery, Andrew J; Stefansson, Kari; Bergen, Arthur A B; Young, Terri L; Wiggs, Janey L; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R; Hewitt, Alex W; van Duijn, Cornelia M; Hammond, Christopher J

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763

  8. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process

    PubMed Central

    Springelkamp, Henriët.; Höhn, René; Mishra, Aniket; Hysi, Pirro G.; Khor, Chiea-Chuen; Loomis, Stephanie J.; Bailey, Jessica N. Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F.; Luo, Xiaoyan; Ramdas, Wishal D.; Vithana, Eranga; Nongpiur, Monisha E.; Montgomery, Grant W.; Xu, Liang; Mountain, Jenny E.; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C.; Sim, Kar-Seng; van Leeuwen, Elisabeth M.; Iglesias, Adriana I.; Verhoeven, Virginie J. M.; Hauser, Michael A.; Loon, Seng-Chee; Despriet, Dominiek D. G.; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G.; Schillert, Arne; Kang, Jae H.; Landers, John; Jonasson, Fridbert; Cree, Angela J.; van Koolwijk, Leonieke M. E.; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Mitchell, Paul; Wang, Jie Jin; Rochtchina, Elena; Attia, John; Scott, Rodney; Holliday, Elizabeth G.; Wong, Tien-Yin; Baird, Paul N.; Xie, Jing; Inouye, Michael; Viswanathan, Ananth; Sim, Xueling; Weinreb, Robert N.; de Jong, Paulus T. V. M.; Oostra, Ben A.; Uitterlinden, André G.; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P.; Allingham, R. Rand; Brilliant, Murray H.; Budenz, Donald L.; Cooke Bailey, Jessica N.; Christen, William G.; Fingert, John; Friedman, David S.; Gaasterland, Douglas; Gaasterland, Terry; Haines, Jonathan L.; Hauser, Michael A.; Kang, Jae Hee; Kraft, Peter; Lee, Richard K.; Lichter, Paul R.; Liu, Yutao; Loomis, Stephanie J.; Moroi, Sayoko E.; Pasquale, Louis R.; Pericak-Vance, Margaret A.; Realini, Anthony; Richards, Julia E.; Schuman, Joel S.; Scott, William K.; Singh, Kuldev; Sit, Arthur J.; Vollrath, Douglas; Weinreb, Robert N.; Wiggs, Janey L.; Wollstein, Gadi; Zack, Donald J.; Zhang, Kang; Donnelly (Chair), Peter; Barroso (Deputy Chair), Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N. 
A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C. A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew (Chair), Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C. A.; Spector, Timothy D.; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R.; Teo, Yik-Ying; Haines, Jonathan L.; Wolfs, Roger C. W.; Lemij, Hans G.; Tai, E-Shyong; Jansonius, Nomdo M.; Jonas, Jost B.; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C.; Klaver, Caroline C. W.; Craig, Jamie E.; Macgregor, Stuart; Mackey, David A.; Lotery, Andrew J.; Stefansson, Kari; Bergen, Arthur A. B.; Young, Terri L.; Wiggs, Janey L.; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R.; Hewitt, Alex W.; van Duijn, Cornelia M.; Hammond, Christopher J.

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763

  9. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
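    The percolation model can be sketched as follows: build a small random geometric graph, fail each link with probability proportional to its length, and track the giant component. Network size and parameters below are illustrative, not the paper's:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

# Toy spatial micronetwork: N nodes in the unit square, linked when
# closer than radius r (a random geometric graph).
N, r = 50, 0.3
pos = rng.random((N, 2))
edges = [(i, j, float(np.linalg.norm(pos[i] - pos[j])))
         for i in range(N) for j in range(i + 1, N)
         if np.linalg.norm(pos[i] - pos[j]) < r]

def largest_component(n, kept_edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i, j, _ in kept_edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return max(Counter(find(k) for k in range(n)).values())

def percolate(beta):
    """Keep each link with probability 1 - min(1, beta * length), so
    longer links fail more often (the length-dependent failure model)."""
    kept = [e for e in edges if rng.random() >= min(1.0, beta * e[2])]
    return largest_component(N, kept) / N

# Increasing beta (stronger length dependence) erodes the giant component.
sizes = [percolate(b) for b in (0.0, 2.0, 8.0)]
```

    With beta = 0 no links fail and the giant component is maximal; as beta grows, the long links that stitch distant regions together are the first to go, which is why spatial networks are more fragile than uniform-failure intuition suggests.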

  10. Design principles for robust oscillatory behavior.

    PubMed

    Castillo-Hair, Sebastian M; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Oscillatory responses are ubiquitous in regulatory networks of living organisms, a fact that has led to extensive efforts to study and replicate the circuits involved. However, to date, design principles that underlie the robustness of natural oscillators are not completely known. Here we study a three-component enzymatic network model in order to determine the topological requirements for robust oscillation. First, by simulating every possible topological arrangement and varying their parameter values, we demonstrate that robust oscillators can be obtained by augmenting the number of both negative feedback loops and positive autoregulations while maintaining an appropriate balance of positive and negative interactions. We then identify network motifs, whose presence in more complex topologies is a necessary condition for obtaining oscillatory responses. Finally, we pinpoint a series of simple architectural patterns that progressively render more robust oscillators. Together, these findings can help in the design of more reliable synthetic biomolecular networks and may also have implications in the understanding of other oscillatory systems.
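    As a minimal concrete instance of the topology class studied (a stand-in, not the authors' enzymatic model), a three-component negative-feedback ring with Hill-type repression oscillates once the loop gain is high enough. Parameters below are illustrative:

```python
import numpy as np

# Three-component negative-feedback ring (repressilator-style sketch):
# each component is repressed by its predecessor in the ring.
alpha, n = 50.0, 3            # repression strength, Hill coefficient
dt, steps = 0.01, 10000       # simple forward-Euler integration
x = np.array([1.0, 2.0, 3.0]) # asymmetric start to break symmetry
history = np.empty((steps, 3))

for k in range(steps):
    dx = alpha / (1.0 + np.roll(x, 1) ** n) - x   # production - decay
    x = x + dt * dx
    history[k] = x
```

    With these values the symmetric fixed point is unstable (the magnitude of the linearized feedback gain exceeds the instability threshold for a three-ring), so the trajectory settles onto a sustained limit cycle rather than damping out.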

  11. Process development of a New Haemophilus influenzae type b conjugate vaccine and the use of mathematical modeling to identify process optimization possibilities.

    PubMed

    Hamidi, Ahd; Kreeftenberg, Hans; V D Pol, Leo; Ghimire, Saroj; V D Wielen, Luuk A M; Ottens, Marcel

    2016-05-01

    Vaccination is one of the most successful public health interventions, being a cost-effective tool in preventing deaths among young children. The earliest vaccines were developed following empirical methods, creating vaccines by trial and error. New process development tools, for example mathematical modeling, as well as new regulatory initiatives requiring better understanding of both the product and the process, are being applied to well-characterized biopharmaceuticals (for example, recombinant proteins). The vaccine industry still lags behind these industries. A production process for a new Haemophilus influenzae type b (Hib) conjugate vaccine, including related quality control (QC) tests, was developed and transferred to a number of emerging vaccine manufacturers. This contributed to a sustainable global supply of affordable Hib conjugate vaccines, as illustrated by the market launch of the first Hib vaccine based on this technology in 2007 and a concomitant price reduction of Hib vaccines. This paper describes the development approach followed for this Hib conjugate vaccine as well as the mathematical modeling tool applied recently in order to indicate options for further improvements of the initial Hib process. The strategy followed during the process development of this Hib conjugate vaccine was a targeted and integrated approach based on prior knowledge and experience with similar products, using multi-disciplinary expertise. Mathematical modeling was used to develop a predictive model for the initial Hib process (the 'baseline' model) as well as an 'optimized' model, by proposing a number of process changes which could lead to further reduction in price. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:568-580, 2016.

  12. Niche Divergence versus Neutral Processes: Combined Environmental and Genetic Analyses Identify Contrasting Patterns of Differentiation in Recently Diverged Pine Species

    PubMed Central

    Moreno-Letelier, Alejandra; Ortíz-Medrano, Alejandra; Piñero, Daniel

    2013-01-01

    Background and Aims Resolving relationships of recently diverged taxa poses a challenge due to shared polymorphism and weak reproductive barriers. Multiple lines of evidence are needed to identify independently evolving lineages. This is especially true of long-lived species with large effective population sizes and slow rates of lineage sorting. North American pines are an interesting group in which to test this multiple-evidence approach. Our aim is to combine cytoplasmic genetic markers with environmental information to clarify species boundaries and relationships of the species complex of Pinus flexilis, Pinus ayacahuite, and Pinus strobiformis. Methods Mitochondrial and chloroplast sequences were combined with previously obtained microsatellite data and contrasted with environmental information to reconstruct phylogenetic relationships of the species complex. Ecological niche models were compared to test if ecological divergence is significant among species. Key Results and Conclusion Separately, both genetic and ecological evidence support a clear differentiation of all three species, but with different topology; they also reveal an ancestral contact zone between P. strobiformis and P. ayacahuite. The marked ecological differentiation of P. flexilis suggests that ecological speciation has occurred in this lineage, but this is not reflected in neutral markers. The inclusion of environmental traits in phylogenetic reconstruction improved the resolution of internal branches. We suggest that combining environmental and genetic information would be useful for species delimitation and phylogenetic studies in other recently diverged species complexes. PMID:24205167

  13. Genomic analysis of snub-nosed monkeys (Rhinopithecus) identifies genes and processes related to high-altitude adaptation.

    PubMed

    Yu, Li; Wang, Guo-Dong; Ruan, Jue; Chen, Yong-Bin; Yang, Cui-Ping; Cao, Xue; Wu, Hong; Liu, Yan-Hu; Du, Zheng-Lin; Wang, Xiao-Ping; Yang, Jing; Cheng, Shao-Chen; Zhong, Li; Wang, Lu; Wang, Xuan; Hu, Jing-Yang; Fang, Lu; Bai, Bing; Wang, Kai-Le; Yuan, Na; Wu, Shi-Fang; Li, Bao-Guo; Zhang, Jin-Guo; Yang, Ye-Qin; Zhang, Cheng-Lin; Long, Yong-Cheng; Li, Hai-Shu; Yang, Jing-Yuan; Irwin, David M; Ryder, Oliver A; Li, Ying; Wu, Chung-I; Zhang, Ya-Ping

    2016-08-01

    The snub-nosed monkey genus Rhinopithecus includes five closely related species distributed across altitudinal gradients from 800 to 4,500 m. Rhinopithecus bieti, Rhinopithecus roxellana, and Rhinopithecus strykeri inhabit high-altitude habitats, whereas Rhinopithecus brelichi and Rhinopithecus avunculus inhabit lowland regions. We report the de novo whole-genome sequence of R. bieti and genomic sequences for the four other species. Eight shared substitutions were found in six genes related to lung function, DNA repair, and angiogenesis in the high-altitude snub-nosed monkeys. Functional assays showed that the high-altitude variant of CDT1 (Ala537Val) renders cells more resistant to UV irradiation, and the high-altitude variants of RNASE4 (Asn89Lys and Thr128Ile) confer enhanced ability to induce endothelial tube formation in vitro. Genomic scans in the R. bieti and R. roxellana populations identified signatures of selection between and within populations at genes involved in functions relevant to high-altitude adaptation. These results provide valuable insights into the adaptation to high altitude in the snub-nosed monkeys. PMID:27399969

  14. Genomic analysis of snub-nosed monkeys (Rhinopithecus) identifies genes and processes related to high-altitude adaptation.

    PubMed

    Yu, Li; Wang, Guo-Dong; Ruan, Jue; Chen, Yong-Bin; Yang, Cui-Ping; Cao, Xue; Wu, Hong; Liu, Yan-Hu; Du, Zheng-Lin; Wang, Xiao-Ping; Yang, Jing; Cheng, Shao-Chen; Zhong, Li; Wang, Lu; Wang, Xuan; Hu, Jing-Yang; Fang, Lu; Bai, Bing; Wang, Kai-Le; Yuan, Na; Wu, Shi-Fang; Li, Bao-Guo; Zhang, Jin-Guo; Yang, Ye-Qin; Zhang, Cheng-Lin; Long, Yong-Cheng; Li, Hai-Shu; Yang, Jing-Yuan; Irwin, David M; Ryder, Oliver A; Li, Ying; Wu, Chung-I; Zhang, Ya-Ping

    2016-08-01

    The snub-nosed monkey genus Rhinopithecus includes five closely related species distributed across altitudinal gradients from 800 to 4,500 m. Rhinopithecus bieti, Rhinopithecus roxellana, and Rhinopithecus strykeri inhabit high-altitude habitats, whereas Rhinopithecus brelichi and Rhinopithecus avunculus inhabit lowland regions. We report the de novo whole-genome sequence of R. bieti and genomic sequences for the four other species. Eight shared substitutions were found in six genes related to lung function, DNA repair, and angiogenesis in the high-altitude snub-nosed monkeys. Functional assays showed that the high-altitude variant of CDT1 (Ala537Val) renders cells more resistant to UV irradiation, and the high-altitude variants of RNASE4 (Asn89Lys and Thr128Ile) confer enhanced ability to induce endothelial tube formation in vitro. Genomic scans in the R. bieti and R. roxellana populations identified signatures of selection between and within populations at genes involved in functions relevant to high-altitude adaptation. These results provide valuable insights into the adaptation to high altitude in the snub-nosed monkeys.

  15. Robust Three-Metallization Back End of Line Process for 0.18 μm Embedded Ferroelectric Random Access Memory

    NASA Astrophysics Data System (ADS)

    Kang, Seung-Kuk; Rhie, Hyoung-Seub; Kim, Hyun-Ho; Koo, Bon-Jae; Joo, Heung-Jin; Park, Jung-Hun; Kang, Young-Min; Choi, Do-Hyun; Lee, Sung-Young; Jeong, Hong-Sik; Kim, Kinam

    2005-04-01

    We developed ferroelectric random access memory (FRAM)-embedded smartcards in which FRAM replaces electrically erasable PROM (EEPROM) and static random access memory (SRAM) to improve the read/write cycle time and endurance of data memories during operation; the main time delay observed in EEPROM-embedded smartcards arises from the slow data update time. EEPROM-embedded smartcards have EEPROM, ROM, and SRAM. To realize FRAM-embedded smartcards, we must integrate submicron ferroelectric capacitors into embedded logic complementary metal oxide semiconductor (CMOS) without degradation of the ferroelectric properties. We resolved this process issue from the viewpoint of the back end of line (BEOL) process. As a result, we realized a highly reliable sensing window for FRAM-embedded smartcards through novel integration schemes such as tungsten and barrier metal (BM) technology, a multilevel encapsulating (EBL) layer scheme and optimized intermetallic dielectrics (IMD) technology.

  16. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence.

    PubMed

    McLaughlin, Katie A; Garrad, Megan C; Somerville, Leah H

    2015-12-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds.

  17. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence

    PubMed Central

    McLaughlin, Katie A.; Garrad, Megan C.; Somerville, Leah H.

    2015-01-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds. PMID:26869841

  18. Using empirical models of species colonization under multiple threatening processes to identify complementary threat-mitigation strategies.

    PubMed

    Tulloch, Ayesha I T; Mortelliti, Alessio; Kay, Geoffrey M; Florance, Daniel; Lindenmayer, David

    2016-08-01

    Approaches to prioritize conservation actions are gaining popularity. However, limited empirical evidence exists on which species might benefit most from threat mitigation and on what combination of threats, if mitigated simultaneously, would result in the best outcomes for biodiversity. We devised a way to prioritize threat mitigation at a regional scale with empirical evidence based on predicted changes to population dynamics, information that is lacking in most threat-management prioritization frameworks that rely on expert elicitation. We used dynamic occupancy models to investigate the effects of multiple threats (tree cover, grazing, and the presence of a hyperaggressive competitor, the Noisy Miner (Manorina melanocephala)) on bird-population dynamics in an endangered woodland community in southeastern Australia. The 3 threatening processes had different effects on different species. We used predicted patch-colonization probabilities to estimate the benefit to each species of removing one or more threats. We then determined the complementary set of threat-mitigation strategies that maximized colonization of all species while ensuring that redundant actions with little benefit were avoided. The single action that resulted in the highest colonization was increasing tree cover, which increased patch colonization by 5% and 11% on average across all species and for declining species, respectively. Combining Noisy Miner control with increasing tree cover increased species colonization by 10% and 19% on average for all species and for declining species, respectively, and was a higher priority than changing grazing regimes. Guidance for prioritizing threat mitigation is critical in the face of cumulative threatening processes. By incorporating population dynamics in prioritization of threat management, our approach helps ensure funding is not wasted on ineffective management programs that target the wrong threats or species.
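    The complementarity idea, picking the action with the largest marginal colonization gain and stopping when further actions are redundant, can be sketched greedily. The species, actions, benefit values, and the best-action-so-far combining rule below are invented for illustration, not the paper's estimates or model:

```python
import numpy as np

# Hypothetical benefit table: predicted gain in patch-colonization
# probability for each species (rows) under each single-threat action
# (columns). Values are illustrative only.
actions = ["increase tree cover", "control Noisy Miner", "change grazing"]
benefit = np.array([
    [0.11, 0.08, 0.01],   # declining species A
    [0.05, 0.12, 0.02],   # species B
    [0.09, 0.00, 0.01],   # species C
])

def complementary_set(benefit, min_gain=0.02):
    """Greedily add the action with the largest remaining mean benefit,
    assuming per-species benefits combine by taking the best action
    chosen so far; stop when the marginal gain falls below min_gain
    (i.e., the action is redundant)."""
    chosen, achieved = [], np.zeros(benefit.shape[0])
    remaining = list(range(benefit.shape[1]))
    while remaining:
        gains = [np.maximum(achieved, benefit[:, a]).mean() - achieved.mean()
                 for a in remaining]
        best = int(np.argmax(gains))
        if gains[best] < min_gain:
            break                     # redundant action: little added benefit
        achieved = np.maximum(achieved, benefit[:, remaining[best]])
        chosen.append(remaining.pop(best))
    return chosen

selected = [actions[i] for i in complementary_set(benefit)]
```

    With these invented numbers the greedy pass selects tree cover first and Noisy Miner control second, while grazing change is dropped as redundant, mirroring the qualitative ordering reported in the abstract.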

  19. A systematic study of process windows and MEF for line end shortening under various photo conditions for more effective and robust OPC correction

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Zhu, Jun; Wu, Peng; Jiang, Yuntao

    2006-03-01

    Line end shortening (LES) is a classical phenomenon in photolithography, caused primarily by the finite resolution of the optics at the position of the line ends. The shortening varies from a couple of tens of nanometers for processes with a k1 around 0.5 to as much as 100 nanometers for advanced processes with more aggressive k1 numbers. Besides illumination, effective resist diffusion has been found to worsen the situation. The effective diffusion length for a typical chemically amplified resist, which has been demonstrated to be critical to the performance of the photolithographic process, can be as much as 30 to 60 nm and has been found to generate an extra 30 nm of LES. Experiments have indicated that wider lines suffer less LES. However, under certain CD through-pitch conditions, when the lines or spaces are very wide, the opposing line ends may even merge. Currently, two methods have been widely used to improve the situation. One is to extend the line ends on the mask, moving them closer toward each other to compensate for the shortening. However, for a more conservatively defined minimum external separation rule, this method by itself may not fully offset the LES: there is a limit because, when the line ends are too close to each other on the mask, any perturbation of the mask CD may cause the line ends to merge on the wafer. The other method is to add hammerheads, i.e., wider line endings. This is equivalent to having effectively wider line ends, which shorten less and can also tolerate a rather conservative minimum external separation. But some designs leave no room for this luxury, e.g., when the line ends are sandwiched between dense lines at minimum ground rules. Therefore, to best minimize or completely characterize the LES effect, one needs to study both the process window and mask error factor (MEF) under a variety
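The mask error factor (MEF, also called MEEF) studied above is conventionally the derivative of wafer CD with respect to mask CD expressed at wafer scale. A minimal finite-difference sketch, with illustrative numbers (a 4x reduction system is assumed):

```python
# Hedged sketch: MEF estimated by finite differences from paired mask/wafer
# CD measurements. All numbers are illustrative, not from the study.
def mef(mask_cds_nm, wafer_cds_nm, magnification=4.0):
    """MEF = d(wafer CD) / d(mask CD at wafer scale)."""
    assert len(mask_cds_nm) == len(wafer_cds_nm) >= 2
    d_mask = (mask_cds_nm[-1] - mask_cds_nm[0]) / magnification
    d_wafer = wafer_cds_nm[-1] - wafer_cds_nm[0]
    return d_wafer / d_mask

# Mask CD stepped by 8 nm (2 nm at wafer scale on a 4x reticle);
# wafer CD responds by 5 nm, so MEF = 2.5 (an aggressive-k1 regime).
print(mef([400.0, 408.0], [100.0, 105.0]))
```

An MEF well above 1 is exactly the regime the abstract warns about: small mask perturbations at nearly-touching line ends are amplified on the wafer.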

  20. Genome-wide transcriptome study in wheat identified candidate genes related to processing quality, majority of them showing interaction (quality x development) and having temporal and spatial distributions

    PubMed Central

    2014-01-01

    Background The cultivated bread wheat (Triticum aestivum L.) possesses unique flour quality, which can be processed into many end-use food products such as bread, pasta, chapatti (unleavened flat bread), biscuit, etc. The present wheat varieties require improvement in processing quality to meet the increasing demand for better quality food products. However, processing quality is very complex and controlled by many genes, which have not been completely explored. To identify the candidate genes whose expression changed due to variation in processing quality and interaction (quality x development), genome-wide transcriptome studies were performed in two sets of diverse Indian wheat varieties differing in chapatti quality. It is also important to understand the temporal and spatial distribution of their expression for designing tissue- and growth-specific functional genomics experiments. Results Gene-specific two-way ANOVA analysis of the expression of about 55 K transcripts in two diverse sets of Indian wheat varieties for chapatti quality at three seed developmental stages identified 236 differentially expressed probe sets (10-fold). Out of the 236, 110 probe sets were identified for chapatti quality. Many key processing-quality-related genes, such as glutenins and gliadins, puroindolines, grain softness protein, alpha- and beta-amylases, and proteases, were identified, along with many other candidate genes related to cellular and molecular functions. The ANOVA analysis revealed that the expression of 56 of the 110 probe sets was involved in the interaction (quality x development). The majority of the probe sets showed differential expression at an early stage of seed development, i.e., temporal expression. Meta-analysis revealed that the majority of the genes were expressed in one or a few growth stages, indicating spatial distribution of their expression. The differential expression of a few candidate genes, such as pre-alpha/beta-gliadin and gamma-gliadin, was validated by RT
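The gene-specific two-way ANOVA screen can be sketched for a single probe set on a balanced design, testing the quality x development interaction. The expression values, factor levels, and replicate counts below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hedged sketch: balanced two-way ANOVA for one probe set, testing the
# quality x development interaction the study screens for. Data are invented.
rng = np.random.default_rng(0)
# expression[quality, stage, replicate]: 2 quality groups, 3 stages, 4 reps;
# the second group diverges with developmental stage (an interaction).
base = np.array([[5.0, 5.5, 6.0],
                 [5.0, 6.5, 8.0]])
y = base[:, :, None] + rng.normal(0, 0.3, (2, 3, 4))

a, b, r = y.shape
grand = y.mean()
ss_a = b * r * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()   # quality effect
ss_b = a * r * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()   # stage effect
cell = y.mean(axis=2)
ss_cells = r * ((cell - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b                               # interaction
ss_err = ((y - cell[:, :, None]) ** 2).sum()

df_ab, df_err = (a - 1) * (b - 1), a * b * (r - 1)
F = (ss_ab / df_ab) / (ss_err / df_err)
p = stats.f.sf(F, df_ab, df_err)
print(f"interaction F = {F:.1f}, p = {p:.2g}")
```

In the study this test would be repeated per probe set, with a fold-change filter applied on top of the ANOVA result.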

  1. Biological Robustness: Paradigms, Mechanisms, and Systems Principles

    PubMed Central

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated, yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  2. Biological robustness: paradigms, mechanisms, and systems principles.

    PubMed

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated, yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  3. Identifying a Network of Brain Regions Involved in Aversion-Related Processing: A Cross-Species Translational Investigation

    PubMed Central

    Hayes, Dave J.; Northoff, Georg

    2011-01-01

    The ability to detect and respond appropriately to aversive stimuli is essential for all organisms, from fruit flies to humans. This suggests the existence of a core neural network which mediates aversion-related processing. Human imaging studies on aversion have highlighted the involvement of various cortical regions, such as the prefrontal cortex, while animal studies have focused largely on subcortical regions like the periaqueductal gray and hypothalamus. However, whether and how these regions form a core neural network of aversion remains unclear. To help determine this, a translational cross-species investigation in humans (i.e., meta-analysis) and other animals (i.e., systematic review of functional neuroanatomy) was performed. Our results highlighted the recruitment of the anterior cingulate cortex, the anterior insula, and the amygdala as well as other subcortical (e.g., thalamus, midbrain) and cortical (e.g., orbitofrontal) regions in both animals and humans. Importantly, involvement of these regions remained independent of sensory modality. This study provides evidence for a core neural network mediating aversion in both animals and humans. This not only contributes to our understanding of the trans-species neural correlates of aversion but may also carry important implications for psychiatric disorders where abnormal aversive behavior can often be observed. PMID:22102836

  4. Pattern recognition and data mining techniques to identify factors in wafer processing and control determining overlay error

    NASA Astrophysics Data System (ADS)

    Lam, Auguste; Ypma, Alexander; Gatefait, Maxime; Deckers, David; Koopman, Arne; van Haren, Richard; Beltman, Jan

    2015-03-01

    On-product overlay can be improved through the use of context data from the fab and the scanner. Continuous improvements in lithography and processing performance over the past years have resulted in consequent overlay performance improvement for critical layers. Identification of the remaining factors causing systematic disturbances and inefficiencies will further reduce overlay. By building a context database, mappings between context, fingerprints, and alignment & overlay metrology can be learned through techniques from pattern recognition and data mining. We relate structure ('patterns') in the metrology data to relevant contextual factors. Once understood, these factors could be moved to the known effects (e.g., the presence of systematic fingerprints from reticle writing error or lens and reticle heating). Hence, we build up a knowledge base of known effects based on data. Outcomes from such an integral ('holistic') approach to lithography data analysis may be exploited in a model-based predictive overlay controller that combines feedback and feedforward control [1]. In this way, the available measurements from scanner, fab, and metrology equipment are combined to reveal opportunities for further overlay improvement that would otherwise go unnoticed.

  5. Significance of silica in identifying the processes affecting groundwater chemistry in parts of Kali watershed, Central Ganga Plain, India

    NASA Astrophysics Data System (ADS)

    Khan, Arina; Umar, Rashid; Khan, Haris Hasan

    2015-03-01

    Chemical geothermometry using silica was employed in the present study to estimate the sub-surface groundwater temperature and the corresponding depth of the groundwater in parts of the Kali watershed in Bulandshahr and Aligarh districts. Forty-two groundwater samples were collected from borewells in each of the pre-monsoon and post-monsoon seasons of 2012 and analysed for all major ions and silica. Silica values in the area range from 18.72 to 50.64 mg/l in May 2012 and from 18.89 to 52.23 mg/l in November 2012. Chalcedony temperatures >60 °C were deduced for five different locations in each season, corresponding to depths of more than 1,000 metres. The spatial variation of silica shows high values along a considerable stretch of the River Kali during the pre-monsoon season. The relationship of silica with total dissolved solids and chloride was examined to infer the roles of geogenic and anthropogenic processes in solute acquisition. It was found that both water-rock interaction and anthropogenic influences are responsible for the observed water chemistry.
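A chalcedony geothermometer of the Fournier type is presumably close to what was applied here; the coefficients below are the commonly quoted ones, and the silica inputs are the extremes reported in the abstract.

```python
import math

# Hedged sketch: Fournier-style chalcedony geothermometer, SiO2 in mg/l.
# Coefficients are the commonly quoted ones, assumed rather than taken
# from the study itself.
def chalcedony_temp_c(sio2_mg_l):
    """Estimated reservoir temperature (deg C) from dissolved silica."""
    return 1032.0 / (4.69 - math.log10(sio2_mg_l)) - 273.15

for sio2 in (18.72, 50.64, 52.23):   # extreme values reported in the abstract
    print(f"SiO2 = {sio2:5.2f} mg/l -> T = {chalcedony_temp_c(sio2):4.1f} deg C")
```

The highest reported silica values yield temperatures above 60 deg C, consistent with the abstract's deduction for five locations per season.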

  6. A High-Resolution Dynamic Approach to Identifying and Characterizing Slow Slip and Subduction Locking Processes in Cascadia

    NASA Astrophysics Data System (ADS)

    Dimitrova, L. L.; Haines, A. J.; Wallace, L. M.; Bartlow, N. M.

    2014-12-01

    Slow slip events (SSEs) in Cascadia occur at ~30-50 km depth, every 10-19 months, and typically involve slip of a few cm, producing surface displacements on the order of a few mm up to ~1 cm. There is a well-known association between tremor and SSEs; however, there are more frequent tremor episodes that are not clearly associated with geodetically detected SSEs (Wech and Creager 2011). This motivates the question: Are there smaller SSE signals that we are currently not recognizing geodetically? Most existing methods to investigate transient deformation with continuous GPS (cGPS) data employ kinematic, smoothed approaches to fit the cGPS data, limiting SSE identification and characterization. Recently, Haines et al. (submitted) showed that Vertical Derivatives of Horizontal Stress (VDoHS) rates, calculated from GPS data by solving the force balance equations at the Earth's surface, represent the most inclusive and spatially compact surface expressions of subsurface deformation sources: VDoHS rate vectors are tightly localized above the sources and point in the direction of push or pull. We adapt this approach, previously applied to campaign GPS data in New Zealand (e.g., Dimitrova et al. 2013), to daily cGPS time series from Cascadia and compare our results with those from the Network Inversion Filter (NIF) for 2009 (Bartlow et al. 2011). In both NIF and VDoHS rate inversions, the main 2009 SSE pulse reaches a peak slip value and splits into northern and southern sections. However, our inversion shows that the SSE started prior to July 27-28, compared to August 6-7 from the NIF results. Furthermore, we detect a smaller (~1 mm surface displacement) event from June 29-July 7 in southern Cascadia, which had not been identified previously. VDoHS rates also reveal the boundaries between the locked and unlocked portions of the megathrust, and we can track how this varies throughout the SSE cycle. Above the locked interface, the pull of the subducted plate generates shear

  7. Robust acoustic object detection

    NASA Astrophysics Data System (ADS)

    Amit, Yali; Koloydenko, Alexey; Niyogi, Partha

    2005-10-01

    We consider a novel approach to the problem of detecting phonological objects like phonemes, syllables, or words, directly from the speech signal. We begin by defining local features in the time-frequency plane with built in robustness to intensity variations and time warping. Global templates of phonological objects correspond to the coincidence in time and frequency of patterns of the local features. These global templates are constructed by using the statistics of the local features in a principled way. The templates have clear phonetic interpretability, are easily adaptable, have built in invariances, and display considerable robustness in the face of additive noise and clutter from competing speakers. We provide a detailed evaluation of the performance of some diphone detectors and a word detector based on this approach. We also perform some phonetic classification experiments based on the edge-based features suggested here.

  8. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609
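The inverse-probability-of-censoring weighting (IPCW) that the doubly robust estimators improve on can be sketched as follows: an uncensored subject gets weight 1/G(T), where G is a Kaplan-Meier estimate of the censoring survivor function, and censored subjects get weight 0. The tiny data set is invented.

```python
import numpy as np

# Hedged sketch: IPCW weights, the "singly robust" ingredient that the
# paper's doubly robust loss estimators augment with an outcome model.
def ipcw_weights(times, events):
    """events: 1 = death observed, 0 = censored. weight_i = event_i / G(T_i-)."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    n = len(t)
    g = 1.0
    G = np.empty(n)
    for i in range(n):
        G[i] = g                       # G just before time t[i]
        if e[i] == 0:                  # a censoring "event" updates G
            g *= 1.0 - 1.0 / (n - i)
    w = np.zeros(n)
    w[e == 1] = 1.0 / G[e == 1]
    out = np.zeros(n)
    out[order] = w                     # back to input order
    return out

times = np.array([2.0, 3.0, 5.0, 7.0])
events = np.array([1, 0, 1, 1])
print(ipcw_weights(times, events))
```

The subject censored at t = 3 gets weight 0; the later deaths are up-weighted by 1/G = 1.5 to compensate. A doubly robust estimator adds an augmentation term from a model of conditional survival, so the loss estimate stays consistent if either model is correct.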

  9. Enhancing the Performance of a robust sol-gel-processed p-type delafossite CuFeO2 photocathode for solar water reduction.

    PubMed

    Prévot, Mathieu S; Guijarro, Néstor; Sivula, Kevin

    2015-04-24

    Delafossite CuFeO2 is a promising material for solar hydrogen production, but is limited by poor photocurrent. Strategies are demonstrated herein to improve the performance of CuFeO2 electrodes prepared directly on transparent conductive substrates by using a simple sol-gel technique. Optimizing the delafossite layer thickness and increasing the majority carrier concentration (through the thermal intercalation of oxygen) give insights into the limitations of photogenerated charge extraction and enable performance improvements. In oxygen-saturated electrolyte, (sacrificial) photocurrents (1 sun illumination) up to 1.51 mA cm(-2) at +0.35 V versus a reversible hydrogen electrode (RHE) are observed. Water photoreduction with bare delafossite is limited by poor hydrogen evolution catalysis, but employing methyl viologen as an electron acceptor verifies that photogenerated electrons can be extracted from the conduction band before recombination into mid-gap trap states identified by electrochemical impedance spectroscopy. Through the use of suitable oxide overlayers and a platinum catalyst, sustained solar hydrogen production photocurrents of 0.4 mA cm(-2) at 0 V versus RHE (0.8 mA cm(-2) at -0.2 V) are demonstrated. Importantly, bare CuFeO2 is highly stable at potentials at which photocurrent is generated. No degradation is observed after 40 h under operating conditions in oxygen-saturated electrolyte. PMID:25572288

  10. Robust reinforcement learning.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2005-02-01

    This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H(infinity) control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H(infinity) control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
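The min-max idea described above can be sketched with robust value iteration on a toy one-dimensional chain: the control agent maximises, the disturbing agent minimises, and (in the H-infinity spirit) the disturbance is charged a quadratic penalty. The MDP is invented for illustration; it is not the inverted-pendulum task from the letter.

```python
import numpy as np

# Hedged sketch: robust (min-max) value iteration on an invented 1-D chain.
n_states, gamma, c = 5, 0.9, 0.5     # state 4 is the absorbing goal
actions, disturbances = (-1, +1), (-1, 0, +1)

def step(s, a, d):
    if s == n_states - 1:
        return s, 0.0                # absorbing goal, no further reward
    s2 = min(max(s + a + d, 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

def worst_case_q(s, a, V):
    # the adversary minimises the return but pays c*d^2 for disturbing
    return min(r + c * d * d + gamma * V[s2]
               for d in disturbances
               for s2, r in [step(s, a, d)])

V = np.zeros(n_states)
for _ in range(200):                 # iterate to the min-max fixed point
    V = np.array([max(worst_case_q(s, a, V) for a in actions)
                  for s in range(n_states)])
print(np.round(V, 3))
```

Because the disturbance is penalised, the adversary cannot profitably cancel every move, and the robust values stay positive: the controller is guaranteed progress even under the worst admissible disturbance.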

  11. Robust facial expression recognition via compressive sensing.

    PubMed

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision, and pattern recognition. In this paper, a new method based on CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., raw pixels, the Gabor wavelet representation, and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM), and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion in facial expression recognition tasks.
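A sparse representation classifier of the kind described can be sketched as follows: code the test sample over all training samples with an l1-regularised solver (a few ISTA iterations here), then assign the class whose training columns give the smallest reconstruction residual. The data below are synthetic Gaussians, not the facial features used in the paper.

```python
import numpy as np

# Hedged sketch of a sparse representation classifier (SRC) on toy data.
rng = np.random.default_rng(0)

def ista(A, y, lam=0.01, iters=500):
    """l1-regularised least squares via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

def src_predict(A, labels, y):
    x = ista(A, y)
    residuals = {}
    for cls in set(labels):
        mask = np.array([l == cls for l in labels])
        residuals[cls] = np.linalg.norm(y - A[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)   # smallest class residual wins

# toy dictionary: two classes, 10 training samples each, 20 dimensions
A0 = rng.normal(0, 1, (20, 10))
A1 = rng.normal(5, 1, (20, 10))
A = np.hstack([A0, A1])
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
labels = [0] * 10 + [1] * 10
print(src_predict(A, labels, A[:, 3]))         # query a class-0 sample
```

The residual-per-class rule is what gives SRC its robustness to corruption: occluded pixels mostly inflate all class residuals, while the sparse code still concentrates on the correct class.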

  12. Metabolic engineering of industrial platform microorganisms for biorefinery applications--optimization of substrate spectrum and process robustness by rational and evolutive strategies.

    PubMed

    Buschke, Nele; Schäfer, Rudolf; Becker, Judith; Wittmann, Christoph

    2013-05-01

    Bio-based production promises a sustainable route to myriad chemicals, materials, and fuels. With regard to eco-efficiency, its future success strongly depends on a next level of bio-processes using raw materials beyond glucose. Such renewables, i.e., polymers, complex substrate mixtures, and diluted waste streams, often cannot be metabolized naturally by the producing organisms. This particularly holds for well-known microorganisms from traditional sugar-based biotechnology, including Escherichia coli, Corynebacterium glutamicum, and Saccharomyces cerevisiae, which have been engineered successfully to produce a broad range of products from glucose. In order to make full use of their production potential within the biorefinery value chain, they have to be adapted to various feedstocks of interest. This review focuses on the strategies to be applied for this purpose, which combine rational and evolutive approaches. Here, the three industrial platform microorganisms E. coli, C. glutamicum, and S. cerevisiae are highlighted due to their particular importance.

  13. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
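The watershed-based idea, associating each tree crown with an inverted watershed of the rasterized point cloud, can be sketched on a synthetic canopy height model (CHM). The two-Gaussian "forest" and every parameter below are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch: inverted-watershed crown segmentation on a synthetic CHM.
yy, xx = np.mgrid[0:60, 0:60]
chm = (10 * np.exp(-((xx - 20) ** 2 + (yy - 30) ** 2) / 60.0) +
       12 * np.exp(-((xx - 42) ** 2 + (yy - 30) ** 2) / 60.0))

# markers: one label per local maximum above a height threshold (a treetop)
peaks = (chm == ndimage.maximum_filter(chm, size=9)) & (chm > 2.0)
markers, n_trees = ndimage.label(peaks)

# invert the CHM so treetops become catchment minima, then flood
cost = np.uint8(255 * (chm.max() - chm) / chm.max())
segments = ndimage.watershed_ift(cost, markers)
print(n_trees)
```

Each pixel is assigned to the basin of its nearest treetop, which is the crown delineation the abstract refers to; the simultaneous representation alternative would additionally model each crown's relation to its neighbors.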

  14. The maternal-to-zygotic transition targets actin to promote robustness during morphogenesis.

    PubMed

    Zheng, Liuliu; Sepúlveda, Leonardo A; Lua, Rhonald C; Lichtarge, Olivier; Golding, Ido; Sokac, Anna Marie

    2013-11-01

    Robustness is a property built into biological systems to ensure stereotypical outcomes despite fluctuating inputs from gene dosage, biochemical noise, and the environment. During development, robustness safeguards embryos against structural and functional defects. Yet, our understanding of how robustness is achieved in embryos is limited. While much attention has been paid to the role of gene and signaling networks in promoting robust cell fate determination, little has been done to rigorously assay how mechanical processes like morphogenesis are designed to buffer against variable conditions. Here we show that the cell shape changes that drive morphogenesis can be made robust by mechanisms targeting the actin cytoskeleton. We identified two novel members of the Vinculin/α-Catenin Superfamily that work together to promote robustness during Drosophila cellularization, the dramatic tissue-building event that generates the primary epithelium of the embryo. We find that zygotically-expressed Serendipity-α (Sry-α) and maternally-loaded Spitting Image (Spt) share a redundant, actin-regulating activity during cellularization. Spt alone is sufficient for cellularization at an optimal temperature, but both Spt plus Sry-α are required at high temperature and when actin assembly is compromised by genetic perturbation. Our results offer a clear example of how the maternal and zygotic genomes interact to promote the robustness of early developmental events. Specifically, the Spt and Sry-α collaboration is informative when it comes to genes that show both a maternal and zygotic requirement during a given morphogenetic process. For the cellularization of Drosophilids, Sry-α and its expression profile may represent a genetic adaptive trait with the sole purpose of making this extreme event more reliable. Since all morphogenesis depends on cytoskeletal remodeling, both in embryos and adults, we suggest that robustness-promoting mechanisms aimed at actin could be effective at

  15. The Maternal-to-Zygotic Transition Targets Actin to Promote Robustness during Morphogenesis

    PubMed Central

    Zheng, Liuliu; Sepúlveda, Leonardo A.; Lua, Rhonald C.; Lichtarge, Olivier; Golding, Ido; Sokac, Anna Marie

    2013-01-01

    Robustness is a property built into biological systems to ensure stereotypical outcomes despite fluctuating inputs from gene dosage, biochemical noise, and the environment. During development, robustness safeguards embryos against structural and functional defects. Yet, our understanding of how robustness is achieved in embryos is limited. While much attention has been paid to the role of gene and signaling networks in promoting robust cell fate determination, little has been done to rigorously assay how mechanical processes like morphogenesis are designed to buffer against variable conditions. Here we show that the cell shape changes that drive morphogenesis can be made robust by mechanisms targeting the actin cytoskeleton. We identified two novel members of the Vinculin/α-Catenin Superfamily that work together to promote robustness during Drosophila cellularization, the dramatic tissue-building event that generates the primary epithelium of the embryo. We find that zygotically-expressed Serendipity-α (Sry-α) and maternally-loaded Spitting Image (Spt) share a redundant, actin-regulating activity during cellularization. Spt alone is sufficient for cellularization at an optimal temperature, but both Spt plus Sry-α are required at high temperature and when actin assembly is compromised by genetic perturbation. Our results offer a clear example of how the maternal and zygotic genomes interact to promote the robustness of early developmental events. Specifically, the Spt and Sry-α collaboration is informative when it comes to genes that show both a maternal and zygotic requirement during a given morphogenetic process. For the cellularization of Drosophilids, Sry-α and its expression profile may represent a genetic adaptive trait with the sole purpose of making this extreme event more reliable. Since all morphogenesis depends on cytoskeletal remodeling, both in embryos and adults, we suggest that robustness-promoting mechanisms aimed at actin could be effective at

  17. Robust Systems Test Framework

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  18. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1{/}√{ln N}) which is exponentially better than quantum walk algorithms for which tolerable error size is only O(ln N{/}√{N}). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.
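    The error scalings quoted above can be compared numerically. The sketch below is a hypothetical illustration only (constant factors are assumed to be 1); it shows how much larger the recursive algorithm's tolerable error is as N grows.

```python
import math

# Hypothetical comparison of the tolerable-error scalings quoted in the
# abstract; constant factors are assumed to be 1 for illustration.
def recursive_tolerance(n):
    return 1.0 / math.sqrt(math.log(n))      # O(1/sqrt(ln N))

def walk_tolerance(n):
    return math.log(n) / math.sqrt(n)        # O(ln N / sqrt N)

# The recursive algorithm's tolerance shrinks only logarithmically with N,
# while the walk algorithms' tolerance collapses polynomially.
gap = {n: recursive_tolerance(n) / walk_tolerance(n) for n in (10**3, 10**6, 10**9)}
```

    The ratio `gap[n]` grows quickly with n, which is one way to read the claim of an "exponentially better" error tolerance.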

  19. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  20. Robust telescope scheduling

    NASA Technical Reports Server (NTRS)

    Swanson, Keith; Bresina, John; Drummond, Mark

    1994-01-01

    This paper presents a technique for building robust telescope schedules that tend not to break. The technique is called Just-In-Case (JIC) scheduling and it implements the common sense idea of being prepared for likely errors, just in case they should occur. The JIC algorithm analyzes a given schedule, determines where it is likely to break, reinvokes a scheduler to generate a contingent schedule for each highly probable break case, and produces a 'multiply contingent' schedule. The technique was developed for an automatic telescope scheduling problem, and the paper presents empirical results showing that Just-In-Case scheduling performs extremely well for this problem.
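    The core idea is simple enough to sketch. The following is an assumed simplification, not the paper's scheduler: tasks judged likely to break get a pre-computed contingent alternative, yielding a "multiply contingent" schedule.

```python
# Toy sketch of Just-In-Case scheduling (assumed simplification): any task
# whose break probability exceeds a threshold gets a fallback prepared in
# advance, just in case the break occurs during execution.
def build_jic_schedule(tasks, break_prob, threshold=0.3):
    """tasks: ordered task names; break_prob: task name -> failure probability."""
    schedule = []
    for task in tasks:
        entry = {"task": task, "contingency": None}
        if break_prob.get(task, 0.0) >= threshold:
            # Pre-compute a contingent alternative before execution time.
            entry["contingency"] = f"retry-{task}"
        schedule.append(entry)
    return schedule

plan = build_jic_schedule(["obs-A", "obs-B"], {"obs-A": 0.6, "obs-B": 0.1})
```

    All task names and the threshold are hypothetical; the real algorithm re-invokes a full scheduler for each probable break point rather than attaching a simple retry.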

  1. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  2. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  3. Experimental Robust Control Studies on an Unstable Magnetic Suspension System

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.; Cox, David E.

    1993-01-01

    This study is an experimental investigation of the robustness of various controllers designed for the Large Angle Magnetic Suspension Test Fixture (LAMSTF). Both analytical and identified nominal models are used for designing controllers along with two different types of uncertainty models. Robustness refers to maintaining tracking performance under analytical model errors and dynamically induced eddy currents, while external disturbances are not considered. Results show that incorporating robustness into analytical models gives significantly better results. However, incorporating incorrect uncertainty models may lead to poorer performance than not designing for robustness at all. Designing controllers based on accurate identified models gave the best performance. In fact, incorporating a significant level of robustness into an accurate nominal model resulted in reduced performance. This paper discusses an assortment of experimental results in a consistent manner using robust control theory.

  4. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces—randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  5. Quality by Design Approaches to Formulation Robustness-An Antibody Case Study.

    PubMed

    Wurth, Christine; Demeule, Barthelemy; Mahler, Hanns-Christian; Adler, Michael

    2016-05-01

    The International Conference on Harmonization Q8 (R2) includes a requirement that "Critical formulation attributes and process parameters are generally identified through an assessment of the extent to which their variation can impact the quality of the drug product," that is, the need to assess the robustness of a formulation. In this article, a quality-by-design-based definition of a "robust formulation" for a biopharmaceutical product is proposed and illustrated with a case study. A multivariate formulation robustness study was performed for a selected formulation of a monoclonal antibody to demonstrate acceptable quality at the target composition as well as at the edges of the allowable composition ranges and fulfillment of the end-of-shelf-life stability requirements of 36 months at the intended storage temperature (2°C-8°C). Extrapolation of 24 months' formulation robustness data to end of shelf life showed that the MAb formulation was robust within the claimed formulation composition ranges. Based on this case study, we propose that a formulation can be claimed as "robust" if all drug substance and drug product critical quality attributes remain within their respective end-of-shelf-life critical quality attribute-acceptance criteria throughout the entire claimed formulation composition range. PMID:27001536

  6. Invariants reveal multiple forms of robustness in bifunctional enzyme systems.

    PubMed

    Dexter, Joseph P; Dasgupta, Tathagata; Gunawardena, Jeremy

    2015-08-01

    Experimental and theoretical studies have suggested that bifunctional enzymes catalyzing opposing modification and demodification reactions can confer steady-state concentration robustness to their substrates. However, the types of robustness and the biochemical basis for them have remained elusive. Here we report a systematic study of the most general biochemical reaction network for a bifunctional enzyme acting on a substrate with one modification site, along with eleven sub-networks with more specialized biochemical assumptions. We exploit ideas from computational algebraic geometry, introduced in previous work, to find a polynomial expression (an invariant) between the steady state concentrations of the modified and unmodified substrate for each network. We use these invariants to identify five classes of robust behavior: robust upper bounds on concentration, robust two-sided bounds on concentration ratio, hybrid robustness, absolute concentration robustness (ACR), and robust concentration ratio. This analysis demonstrates that robustness can take a variety of forms and that the type of robustness is sensitive to many biochemical details, with small changes in biochemistry leading to very different steady-state behaviors. In particular, we find that the widely-studied ACR requires highly specialized assumptions in addition to bifunctionality. An unexpected result is that the robust bounds derived from invariants are strictly tighter than those derived by ad hoc manipulation of the underlying differential equations, confirming the value of invariants as a tool to gain insight into biochemical reaction networks. Furthermore, invariants yield multiple experimentally testable predictions and illuminate new strategies for inferring enzymatic mechanisms from steady-state measurements.

  7. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R
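    The principle of bounding an outlier's influence, which underlies the robustified estimating equations above, can be illustrated on the simplest possible case: M-estimation of a location parameter with a Huber weight function. This is an analogy only, not the paper's REML method; the tuning constant 1.345 is the conventional choice.

```python
# Minimal sketch of Huber M-estimation of a location parameter via
# iteratively reweighted least squares: residuals larger than c * scale
# are downweighted, so a gross outlier cannot dominate the estimate.
def huber_location(data, c=1.345, iters=50):
    mu = sorted(data)[len(data) // 2]                 # start at the median
    for _ in range(iters):
        mad = sorted(abs(x - mu) for x in data)[len(data) // 2]
        scale = mad / 0.6745 if mad > 0 else 1.0      # MAD-based scale
        w = [1.0 if abs(x - mu) <= c * scale else c * scale / abs(x - mu)
             for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

clean = [9.8, 10.1, 9.9, 10.2, 10.0]
contaminated = clean + [100.0]        # one gross outlier
```

    The sample mean of `contaminated` is pulled to about 25, while the Huber estimate stays near 10 — the automatic influence-bounding the abstract describes, in miniature.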

  8. On the robustness of Herlihy's hierarchy

    NASA Technical Reports Server (NTRS)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in Z(+) U (infinity) and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  9. A New Robust Watermarking Scheme to Increase Image Security

    NASA Astrophysics Data System (ADS)

    Rahmani, Hossein; Mortezaei, Reza; Ebrahimi Moghaddam, Mohsen

    2010-12-01

    In digital image watermarking, an image is embedded into a picture for a variety of purposes such as captioning and copyright protection. In this paper, a robust private watermarking scheme for embedding a gray-scale watermark is proposed. In the proposed method, the watermark and original image are processed by applying blockwise DCT. Also, a Dynamic Fuzzy Inference System (DFIS) is used to identify the best place for watermark insertion by approximating the relationships established by the properties of the HVS model. In the insertion phase, the DC coefficients of the original image are modified according to the DC value of the watermark and the output of the fuzzy system. In the experiment phase, the CheckMark (StirMark MATLAB) software was used to verify the method's robustness by applying several conventional attacks on the watermarked image. The results showed that the proposed scheme provided high image quality while it was robust against various attacks, such as Compression, Filtering, additive Noise, Cropping, Scaling, Changing aspect ratio, Copy attack, and Composite attack in comparison with related methods.
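    The blockwise-DCT, DC-coefficient embedding step can be sketched directly. The following toy version omits the fuzzy inference stage entirely and uses an assumed fixed strength `alpha`; it only demonstrates the embed/extract mechanics on 8x8 DCT blocks.

```python
import numpy as np

# Toy DC-coefficient watermarking sketch (no fuzzy inference step): embed a
# small watermark into the DC terms of 8x8 DCT blocks, then recover it.
def dct_matrix(n=8):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)          # orthonormal DCT-II matrix

def embed(image, watermark, alpha=0.1):
    C = dct_matrix()
    out = image.astype(float).copy()
    for i in range(watermark.shape[0]):
        for j in range(watermark.shape[1]):
            d = C @ out[8*i:8*i+8, 8*j:8*j+8] @ C.T   # blockwise 2-D DCT
            d[0, 0] += alpha * watermark[i, j]        # perturb DC coefficient
            out[8*i:8*i+8, 8*j:8*j+8] = C.T @ d @ C   # inverse DCT
    return out

def extract(marked, original, shape, alpha=0.1):
    C = dct_matrix()
    w = np.empty(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            dm = (C @ marked[8*i:8*i+8, 8*j:8*j+8] @ C.T)[0, 0]
            do = (C @ original[8*i:8*i+8, 8*j:8*j+8] @ C.T)[0, 0]
            w[i, j] = (dm - do) / alpha
    return w

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (16, 16))
wm = np.array([[40.0, -40.0], [-40.0, 40.0]])
marked = embed(img, wm)
recovered = extract(marked, img, (2, 2))
```

    In the paper, `alpha` and the insertion locations would instead come from the DFIS output; here they are fixed assumptions.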

  10. A model to assess the Mars Telecommunications Network relay robustness

    NASA Technical Reports Server (NTRS)

    Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.

    2005-01-01

    The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.

  11. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  12. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  13. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  14. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
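    A standard form of the channel equalization step is cepstral mean subtraction; the abstract does not specify the exact algorithm, so the sketch below shows the textbook technique only. A fixed linear channel adds a constant offset to each cepstral vector, so subtracting the per-utterance mean removes it exactly.

```python
import numpy as np

# Cepstral mean subtraction: a fixed channel appears as a constant additive
# offset in the cepstral domain, so removing the utterance mean cancels it.
def cmn(features):
    """features: (frames, dims) array of cepstral vectors."""
    return features - features.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
speech = rng.normal(size=(100, 12))           # stand-in cepstral features
channel = rng.normal(size=(1, 12)) * 3.0      # fixed channel offset
equalized = cmn(speech + channel)             # identical to cmn(speech)
```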

  15. Robust Crossfeed Design for Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

    Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  16. The Robust Beauty of Ordinary Information

    ERIC Educational Resources Information Center

    Katsikopoulos, Konstantinos V.; Schooler, Lael J.; Hertwig, Ralph

    2010-01-01

    Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues…
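    A lexicographic heuristic of the kind the abstract describes can be stated in a few lines. The example below is a hypothetical sketch in the spirit of take-the-best: cues are tried in a fixed order (e.g., by validity), and the first cue that discriminates decides, with no further processing.

```python
# Hypothetical sketch of a noncompensatory, limited-search heuristic:
# the first cue in the given order that discriminates makes the decision.
def take_the_best(option_a, option_b, ordered_cues):
    """Each option is a dict of binary cue values; cues are tried in order."""
    for cue in ordered_cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:
            return "A" if a > b else "B"
    return "tie"   # no cue discriminates; fall back to guessing

city_a = {"capital": 1, "airport": 1}
city_b = {"capital": 0, "airport": 1}
choice = take_the_best(city_a, city_b, ["airport", "capital"])
```

    The criticism the abstract mentions targets `ordered_cues`: the decision rule itself is trivial, but computing a good cue order can hide substantial complexity.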

  17. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
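    For the simplest of these sets, the interval (box) set, the robust counterpart of a linear constraint has a closed form: with coefficients a_j ranging over [a_nom_j - a_hat_j, a_nom_j + a_hat_j], the worst-case left-hand side equals the nominal term plus the sum of a_hat_j * |x_j|. The sketch below verifies this identity against brute-force enumeration of the box vertices (example data is illustrative).

```python
import itertools

# Interval-set robust counterpart of sum_j a_j x_j <= b: the worst case
# over the box is the nominal value plus sum_j a_hat_j * |x_j|.
def robust_lhs(a_nom, a_hat, x):
    return (sum(an * xj for an, xj in zip(a_nom, x))
            + sum(ah * abs(xj) for ah, xj in zip(a_hat, x)))

def brute_force_worst(a_nom, a_hat, x):
    # The maximum of a linear function over a box is attained at a vertex.
    worst = float("-inf")
    for signs in itertools.product((-1, 1), repeat=len(x)):
        a = [an + s * ah for an, ah, s in zip(a_nom, a_hat, signs)]
        worst = max(worst, sum(aj * xj for aj, xj in zip(a, x)))
    return worst

a_nom, a_hat, x = [2.0, -1.0, 3.0], [0.5, 0.2, 1.0], [1.0, -2.0, 0.5]
```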

  19. Information theory perspective on network robustness

    NASA Astrophysics Data System (ADS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network when facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on Information Theory that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process occurring in a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology.
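    One concrete way to measure dissimilarity between topologies, assumed here purely for illustration (the paper's exact dissimilarity measure is not given in the abstract), is the Jensen-Shannon divergence between degree distributions before and after a failure.

```python
import math
from collections import Counter

# Toy illustration: damage from one node failure, measured as the
# Jensen-Shannon divergence between degree distributions (base-2 logs,
# so the value lies in [0, 1]).
def degree_dist(edges, nodes):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.get(n, 0) for n in nodes)
    return {k: c / len(nodes) for k, c in counts.items()}

def js_divergence(p, q):
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log2(a[k] / m[k]) for k in keys if a.get(k, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]        # small hub-like graph
nodes = [0, 1, 2, 3]
fail = 0                                        # remove the hub
after_edges = [(u, v) for u, v in edges if fail not in (u, v)]
after_nodes = [n for n in nodes if n != fail]
damage = js_divergence(degree_dist(edges, nodes),
                       degree_dist(after_edges, after_nodes))
```

    Applied after each step of a failure sequence, a measure like `damage` yields the dynamical damage profile the abstract describes.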

  20. Robust fuzzy logic stabilization with disturbance elimination.

    PubMed

    Danapalasingam, Kumeresan A

    2014-01-01

    A robust fuzzy logic controller is proposed for stabilization and disturbance rejection in nonlinear control systems of a particular type. The dynamic feedback controller is designed as a combination of a control law that compensates for nonlinear terms in a control system and a dynamic fuzzy logic controller that addresses unknown model uncertainties and an unmeasured disturbance. Since it is challenging to derive a highly accurate mathematical model, the proposed controller requires only nominal functions of a control system. In this paper, a mathematical derivation is carried out to prove that the controller is able to achieve asymptotic stability by processing state measurements. Robustness here refers to the ability of the controller to asymptotically steer the state vector towards the origin in the presence of model uncertainties and a disturbance input. Simulation results of the robust fuzzy logic controller application in a magnetic levitation system demonstrate the feasibility of the control design. PMID:25177713

  3. FPGA implementation of robust Capon beamformer

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Zmuda, Henry; Li, Jian; Du, Lin; Sheplak, Mark

    2012-03-01

    The Capon Beamforming algorithm is an optimal spatial filtering algorithm used in various signal processing applications where excellent interference rejection performance is required, such as Radar and Sonar systems, Smart Antenna systems for wireless communications. Its lack of robustness, however, means that it is vulnerable to array calibration errors and other model errors. To overcome this problem, numerous robust Capon Beamforming algorithms have been proposed, which are much more promising for practical applications. In this paper, an FPGA implementation of a robust Capon Beamforming algorithm is investigated and presented. This realization takes an array output with 4 channels, computes the complex-valued adaptive weight vectors for beamforming with an 18 bit fixed-point representation and runs at a 100 MHz clock on Xilinx V4 FPGA. This work will be applied in our medical imaging project for breast cancer detection.
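    The reference computation behind such an implementation is the Capon weight vector w = R^{-1}a / (a^H R^{-1}a). The sketch below adds diagonal loading, a common (simpler) robustification against covariance and steering errors; the paper's robust formulation is more involved, and the 4-element half-wavelength array geometry is an assumption for illustration.

```python
import numpy as np

# Capon beamformer with diagonal loading: w = Rl^{-1} a / (a^H Rl^{-1} a),
# where Rl adds a small scaled identity to the sample covariance for
# robustness. The weight keeps unit gain toward the look direction while
# suppressing the strong interferer.
def capon_weights(R, a, loading=1e-2):
    Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

def steering(theta, n=4):
    # uniform linear array, half-wavelength spacing (assumed geometry)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

a_sig = steering(0.0)               # look direction: broadside
a_int = steering(0.6)               # interferer direction
R = (np.outer(a_sig, a_sig.conj()) + 100 * np.outer(a_int, a_int.conj())
     + 0.1 * np.eye(4))             # signal + strong interferer + noise
w = capon_weights(R, a_sig)
```

    The distortionless constraint w^H a_sig = 1 holds by construction, while the response toward the interferer is driven far below unity.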

  4. Horse metabolism and the photocatalytic process as a tool to identify metabolic products formed from dopant substances: the case of sildenafil.

    PubMed

    Medana, Claudio; Calza, Paola; Giancotti, Valeria; Dal Bello, Federica; Pasello, Emanuela; Montana, Marco; Baiocchi, Claudio

    2011-10-01

    Two horses were treated with sildenafil, and its metabolic products were sought in both urine and plasma samples. Prior to this, a simulated laboratory study had been carried out using a photocatalytic process to identify all possible main and secondary transformation products in a clean matrix; these were then sought in the biological samples. The transformation of sildenafil and the formation of intermediate products were evaluated adopting titanium dioxide as the photocatalyst. Several products were formed and characterized using the HPLC/HRMS(n) technique. The main intermediates identified in these experimental conditions were the same as the major sildenafil metabolites found in in vivo studies on rats and horses. Concerning horse metabolism, sildenafil and the demethylated product (UK 103,320) were quantified in blood samples. Sildenafil propyloxide, de-ethyl sildenafil, and demethyl sildenafil were the main metabolites quantified in urine. Some more oxidized species, already formed in the photocatalytic process, were also found in urine and plasma samples of treated animals. Their formation involved hydroxylation on the aromatic ring, combined oxidation and dihydroxylation, N-demethylation on the pyrazole ring, and hydroxylation. These new findings could be of interest in further metabolism studies. PMID:21964727

  5. Footprint Reduction Process: Using Remote Sensing and GIS Technologies to Identify Non-Contaminated Land Parcels on the Oak Ridge Reservation National Priorities List Site

    SciTech Connect

    Halsey, P.A.; Kendall, D.T.; King, A.L.; Storms, R.A.

    1998-12-09

    In 1989, the Agency for Toxic Substances and Disease Registry evaluated the entire 35,000-acre U.S. Department of Energy (DOE) Oak Ridge Reservation (ORR, located in Oak Ridge, TN) and placed it on the National Priorities List (NPL), making the ORR subject to Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations. Although much of the ORR has not been impacted by previous federal activities, without investigation it is difficult to discern which parcels of land are free of surface contamination. In 1996, the DOE Oak Ridge Environmental Management Program (EM) funded the Footprint Reduction Project to: 1) develop a process to study the large areas of the ORR that are believed to be free of surface contamination and 2) initiate the delisting of the "clean" areas from the NPL. Although this project's goals do not include the transfer of federal property to non-federal entities, the process development team aimed to provide a final product with multiple uses. Therefore, the process was developed to meet the requirements of NPL delisting and the transfer of non-contaminated federal lands to future land users. Section 120 (h) of the CERCLA law identifies the requirements for the transfer of federal property that is currently part of an NPL site. Reviews of historical information (including aerial photography), field inspections, and the recorded chain of title documents for the property are required for the delisting of property prior to transfer from the federal government. Despite the widespread availability of remote sensing and other digital geographic data and geographic information systems (GIS) for the analysis of such data, historical aerial photography is the only geographic data source required for review under the CERCLA 120 (h) process.
However, since the ORR Environmental Management Program had an established Remote Sensing Program, the Footprint Reduction Project included the development and application of a methodology

  6. The robustness of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Reka

    2002-03-01

    Many complex networks display a surprising degree of tolerance against errors. For example, organisms and ecosystems exhibit remarkable robustness to large variations in temperature, moisture, and nutrients, and communication networks continue to function despite local failures. This presentation will explore the effects of the network topology on its robust functioning. First, we will consider the topological integrity of several networks under node disruption. Then we will focus on the functional robustness of biological signaling networks, and the decisive role played by the network topology in this robustness.
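
    The tolerance-versus-attack contrast described above can be reproduced on a toy hub-and-spoke network. The sketch below (not the author's code) measures the size of the largest connected component after removing nodes: removing a peripheral node barely matters, while removing the hub shatters the network.

```python
def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        best = max(best, size)
    return best

# Star graph: node 0 is a hub connected to 10 leaves.
adj = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}
intact = largest_component(adj, removed=set())    # 11: the whole network
after_leaf = largest_component(adj, removed={5})  # 10: random failure is tolerated
after_hub = largest_component(adj, removed={0})   # 1: targeted attack fragments it
```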

  7. Learning robust plans for mobile robots from a single trial

    SciTech Connect

    Engelson, S.P.

    1996-12-31

    We address the problem of learning robust plans for robot navigation by observing particular robot behaviors. In this paper we present a method which can learn a robust reactive plan from a single example of a desired behavior, by translating a sequence of events arising from the robot-environment system into a plan which represents the relationships among such events. This method allows us to rely on the underlying stability properties of low-level behavior processes in order to produce robust plans. Since the resultant plan reproduces the original behavior of the robot at a high level, it generalizes over small environmental changes and is robust to sensor and effector noise.

  8. Robust detection-isolation-accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.

    1985-01-01

    The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.

  9. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  10. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  11. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, the face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas in the images using a new robust correlation coefficient, which down-weights regions in the face violating the symmetry. Raw images of faces without the usual preliminary transformations are considered. The robust correlation coefficient based on the least weighted squares regression also yields very promising results in localizing faces that are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis can be applicable to other problems of forensic anthropology.
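
    The down-weighting idea can be illustrated with an ordinary weighted correlation over a pixel row and its mirror image. This is a simplified stand-in for the paper's least-weighted-squares-based coefficient: the rank-based weighting scheme below is an assumption for the sketch, not Kalina's estimator.

```python
def weighted_corr(x, y, w):
    """Pearson correlation with per-sample weights w (w >= 0, sum(w) > 0)."""
    s = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / s
    my = sum(wi * yi for wi, yi in zip(w, y)) / s
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / s
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / s
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / s
    return cov / (vx * vy) ** 0.5

def symmetry_score(row):
    """Correlate the left half of a pixel row with the mirrored right half,
    down-weighting the pixel pairs that disagree most (least-weighted-squares
    flavour: weight decreases with the rank of the absolute difference)."""
    half = len(row) // 2
    left = row[:half]
    right = list(reversed(row[-half:]))
    resid = [abs(a - b) for a, b in zip(left, right)]
    order = sorted(range(half), key=lambda i: resid[i])
    w = [0.0] * half
    for rank, i in enumerate(order):
        w[i] = 1.0 - rank / half
    return weighted_corr(left, right, w)

sym = symmetry_score([1, 2, 3, 4, 4, 3, 2, 1])   # perfectly symmetric row -> 1.0
asym = symmetry_score([1, 2, 3, 4, 4, 3, 2, 9])  # one asymmetric pixel lowers the score
```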

  12. Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)

    2001-01-01

    This research considers the application of robust Z(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z(sub 1) estimators based on multiplier theory and then develops a fixed threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust Z(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
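
    A fuzzy residual evaluation of the kind described, where a residual magnitude maps to continuous degrees of fault severity rather than a hard yes/no, can be sketched with triangular membership functions. The breakpoints below are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_degree(residual):
    """Map a residual magnitude to fuzzy degrees of fault severity.
    Breakpoints are illustrative, not taken from the paper."""
    r = abs(residual)
    return {
        "no_fault": max(0.0, 1.0 - r / 0.5),                 # full membership at r = 0
        "small_fault": tri(r, 0.25, 0.75, 1.25),
        "large_fault": min(1.0, max(0.0, (r - 1.0) / 0.5)),  # saturates above r = 1.5
    }

healthy = fault_degree(0.0)   # no_fault = 1.0, others 0.0
marginal = fault_degree(0.75) # small_fault dominates
failed = fault_degree(2.0)    # large_fault = 1.0
```

    Unlike a fixed threshold, the degrees vary continuously with the residual, so a small fault masked by disturbance still raises "small_fault" membership instead of being silently rejected.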

  13. Analytical redundancy and the design of robust failure detection systems

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653.
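
    The parity-space idea can be made concrete with three redundant sensors measuring one scalar. The parity matrix below is the standard construction for the observation matrix H = [1, 1, 1]^T (its rows span the left null space of H); it is a textbook example, not the paper's.

```python
def residual(y):
    """Parity residual for three redundant sensors y = [y1, y2, y3] of one
    scalar x.  Because each row of V is orthogonal to H = [1, 1, 1]^T,
    r = V y is zero for ANY value of x when no sensor is faulty."""
    V = [[1.0, -1.0, 0.0],
         [1.0, 1.0, -2.0]]
    return [sum(vij * yj for vij, yj in zip(row, y)) for row in V]

x = 7.3                            # unknown true value; any number works
r_ok = residual([x, x, x])         # fault-free: r = [0, 0] regardless of x
r_bad = residual([x, x + 0.5, x])  # bias fault on sensor 2 -> nonzero residual
```

    Decision making then reduces to testing whether r departs significantly from zero, and the pattern of which parity relations fire isolates the faulty sensor.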

  14. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/.

  15. Population genetics of translational robustness.

    PubMed

    Wilke, Claus O; Drummond, D Allan

    2006-05-01

    Recent work has shown that expression level is the main predictor of a gene's evolutionary rate and that more highly expressed genes evolve slower. A possible explanation for this observation is selection for proteins that fold properly despite mistranslation, in short selection for translational robustness. Translational robustness leads to the somewhat paradoxical prediction that highly expressed genes are extremely tolerant to missense substitutions but nevertheless evolve very slowly. Here, we study a simple theoretical model of translational robustness that allows us to gain analytic insight into how this paradoxical behavior arises.

  16. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route network. As a result, LCC route networks are more robust than FSC networks.

  17. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.

  18. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
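
    The risk-averse criterion the authors minimize can be illustrated with an empirical conditional value-at-risk over Monte Carlo samples of protocol cost. The sketch below is generic, not the authors' PCR protocol model.

```python
def cvar(costs, alpha=0.9):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of sampled costs."""
    ordered = sorted(costs)
    k = max(1, round(len(ordered) * (1.0 - alpha)))
    tail = ordered[-k:]
    return sum(tail) / len(tail)

# Ten sampled costs; at alpha = 0.8 the worst 20% is {9, 10}.
worst_case = cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], alpha=0.8)  # -> 9.5
```

    Minimizing CVaR of cost over the protocol's design parameters favours designs whose worst outcomes under experimental variation are still acceptable, rather than designs that are merely good on average.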

  19. Robust multi-objective calibration strategies - chances for improving flood forecasting

    NASA Astrophysics Data System (ADS)

    Krauße, T.; Cullmann, J.; Saile, P.; Schmitz, G. H.

    2011-04-01

    Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently automatic calibration methods became popular in order to identify parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This holds regardless of whether the objective aggregates several criteria that measure different (possibly opposing) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. One possible approach to estimate the Pareto set effectively and efficiently is particle swarm optimisation (PSO). It has already been successfully applied in various other fields and has been reported to show effective and efficient performance. Krauße and Cullmann (2011b) presented a method entitled ROPEPSO which merges the strengths of PSO and data depth measures in order to identify robust parameter vectors for hydrological models. In this paper we present a multi-objective parameter estimation algorithm, entitled the Multi-Objective Robust Particle Swarm Parameter Estimation (MO-ROPE). The algorithm is a further development of the previously mentioned single-objective ROPEPSO approach. It applies a newly developed multi-objective particle swarm optimisation algorithm in order to identify non-dominated robust model parameter vectors. Subsequently it samples robust parameter vectors by the
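
    A minimal single-objective PSO conveys the mechanics underlying this family of methods; MO-ROPE's multi-objective, data-depth-based machinery is far richer, and the test function and swarm parameters below are illustrative assumptions only.

```python
import random

def pso(f, lo, hi, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1-D particle swarm optimisation (illustrative sketch only)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                     # each particle's best-known position
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity update: inertia + pull toward personal and global bests.
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval

# Toy objective standing in for a model-performance criterion.
best_x, best_f = pso(lambda x: (x - 3.0) ** 2, lo=-10.0, hi=10.0)
```

    In a calibration setting, f would evaluate the rainfall-runoff model against observed discharge; the multi-objective variant maintains an archive of non-dominated positions instead of a single global best.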

  20. A robust method to heterogenise and recycle group 9 catalysts.

    PubMed

    Lucas, Stephanie J; Crossley, Benjamin D; Pettman, Alan J; Vassileiou, Antony D; Screen, Thomas E O; Blacker, A John; McGowan, Patrick C

    2013-06-21

    This paper provides a viable, reproducible and robust method for immobilising hydroxyl tethered iridium-rhodium complexes. The materials have been shown to be both effective and recyclable in the process of catalytic transfer hydrogenation with minimal metal leaching.

  1. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; together with its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications, it now constitutes an extensive and coherent theory of robust parametric stability that is summarized in the results contained here.
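
    Kharitonov's theorem reduces robust Hurwitz stability of an interval polynomial to checking just four vertex polynomials. A sketch, using a Routh-Hurwitz test in the regular (no zero pivot) case; the interval cubic at the end is an illustrative example, not one from the report.

```python
def hurwitz_stable(coeffs_desc):
    """Routh-Hurwitz test for a polynomial in descending powers with positive
    leading coefficient.  True iff all roots lie in the open left half-plane
    (regular case; a zero pivot is treated as not robustly stable)."""
    n = len(coeffs_desc)
    rows = [list(coeffs_desc[0::2]), list(coeffs_desc[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))
    for _ in range(n - 2):
        up, lo = rows[-2], rows[-1]
        if lo[0] == 0:
            return False
        rows.append([(lo[0] * up[j + 1] - up[0] * lo[j + 1]) / lo[0]
                     for j in range(width - 1)] + [0.0])
    return all(r[0] > 0 for r in rows[:n])

def kharitonov_stable(lo, hi):
    """Robust stability of sum q_k s^k, q_k in [lo[k], hi[k]] (ascending powers),
    via the four Kharitonov vertex polynomials."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0)]
    for pat in patterns:
        asc = [hi[k] if pat[k % 4] else lo[k] for k in range(len(lo))]
        if not hurwitz_stable(asc[::-1]):
            return False
    return True

# s^3 + [5,6] s^2 + [8,10] s + [3,4] is robustly stable ...
stable = kharitonov_stable([3, 8, 5, 1], [4, 10, 6, 1])
# ... but widening the constant term to [3,60] destroys robust stability.
unstable = kharitonov_stable([3, 8, 5, 1], [60, 10, 6, 1])
```

    Checking four fixed polynomials instead of the whole coefficient box is exactly the kind of extremal-subset result the report generalizes.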

  2. Building robust conservation plans.

    PubMed

    Visconti, Piero; Joppa, Lucas

    2015-04-01

    Systematic conservation planning optimizes trade-offs between biodiversity conservation and human activities by accounting for socioeconomic costs while aiming to achieve prescribed conservation objectives. However, the most cost-efficient conservation plan can be very dissimilar to any other plan achieving the set of conservation objectives. This is problematic under conditions of implementation uncertainty (e.g., if all or part of the plan becomes unattainable). We determined through simulations of parallel implementation of conservation plans and habitat loss the conditions under which optimal plans have limited chances of implementation and where implementation attempts would fail to meet objectives. We then devised a new, flexible method for identifying conservation priorities and scheduling conservation actions. This method entails generating a number of alternative plans, calculating the similarity in site composition among all plans, and selecting the plan with the highest density of neighboring plans in similarity space. We compared our method with the classic method that maximizes cost efficiency with synthetic and real data sets. When implementation was uncertain--a common reality--our method provided higher likelihood of achieving conservation targets. We found that χ, a measure of the shortfall in objectives achieved by a conservation plan if the plan could not be implemented entirely, was the main factor determining the relative performance of a flexibility enhanced approach to conservation prioritization. Our findings should help planning authorities prioritize conservation efforts in the face of uncertainty about future condition and availability of sites.
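
    The selection step described above (generate alternative plans, compare their site composition, and pick the plan with the densest neighbourhood in similarity space) can be sketched as follows. Jaccard similarity and the fixed threshold are assumptions standing in for the paper's similarity-space density measure.

```python
def jaccard(a, b):
    """Similarity of two plans represented as sets of selected sites."""
    return len(a & b) / len(a | b)

def most_central_plan(plans, tau=0.5):
    """Index of the plan with the most 'neighbours': other plans whose site
    composition is similar (Jaccard >= tau).  A flexible plan with many close
    alternatives is more likely to survive implementation uncertainty."""
    def density(i):
        return sum(1 for j, q in enumerate(plans)
                   if j != i and jaccard(plans[i], q) >= tau)
    return max(range(len(plans)), key=density)

# Four alternative plans achieving the same targets; three form a cluster.
plans = [{1, 2, 3, 4}, {1, 2, 3, 5}, {2, 3, 4, 5}, {7, 8, 9, 10}]
chosen = most_central_plan(plans)   # one of the clustered plans, never the outlier
```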

  3. Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies.

    PubMed

    Petre, Benjamin; Saunders, Diane G O; Sklenar, Jan; Lorrain, Cécile; Krasileva, Ksenia V; Win, Joe; Duplessis, Sébastien; Kamoun, Sophien

    2016-01-01

    Rust fungal pathogens of wheat (Triticum spp.) affect crop yields worldwide. The molecular mechanisms underlying the virulence of these pathogens remain elusive, due to the limited availability of suitable molecular genetic research tools. Notably, the inability to perform high-throughput analyses of candidate virulence proteins (also known as effectors) impairs progress. We previously established a pipeline for the fast-forward screens of rust fungal candidate effectors in the model plant Nicotiana benthamiana. This pipeline involves selecting candidate effectors in silico and performing cell biology and protein-protein interaction assays in planta to gain insight into the putative functions of candidate effectors. In this study, we used this pipeline to identify and characterize sixteen candidate effectors from the wheat yellow rust fungal pathogen Puccinia striiformis f sp tritici. Nine candidate effectors targeted a specific plant subcellular compartment or protein complex, providing valuable information on their putative functions in plant cells. One candidate effector, PST02549, accumulated in processing bodies (P-bodies), protein complexes involved in mRNA decapping, degradation, and storage. PST02549 also associates with the P-body-resident ENHANCER OF mRNA DECAPPING PROTEIN 4 (EDC4) from N. benthamiana and wheat. We propose that P-bodies are a novel plant cell compartment targeted by pathogen effectors.

  6. Metal impurities provide useful tracers for identifying exposures to airborne single-wall carbon nanotubes released from work-related processes

    NASA Astrophysics Data System (ADS)

    Rasmussen, Pat E.; Jayawardene, Innocent; Gardner, H. David; Chénier, Marc; Levesque, Christine; Niu, Jianjun

    2013-04-01

    This study investigated the use of metal impurities in single-wall carbon nanotubes (SWCNT) as potential tracers to distinguish engineered nanomaterials from background aerosols. TEM and SEM were used to characterize parent material and aerosolized agglomerates collected on PTFE filters using a cascade impactor. SEM image analysis indicated that the SWCNT agglomerates contained about 45% amorphous carbon and backscatter electron analysis indicated that metal impurities were concentrated within the amorphous carbon component. Two elements present as impurities (Y and Ni) were selected as appropriate tracers in this case as their concentrations were found to be highly elevated in the SWCNT parent material (% range) compared to ambient air particles (μg/g range), and background air concentrations were below detection limits for both elements. Bioaccessibility was also determined using physiologically-based extractions at pH conditions relevant to both ingestion and inhalation pathways. A portable wet electrostatic precipitation system effectively captured airborne Y and Ni released during sieving processes, in proportions similar to the bulk sample. These observations support the potential for catalysts and other metal impurities in carbon nanotubes to serve as tracers that uniquely identify emissions at source, after an initial analysis to select appropriate tracers.

  7. Use of chemical modification and mass spectrometry to identify substrate-contacting sites in proteinaceous RNase P, a tRNA processing enzyme

    PubMed Central

    Chen, Tien-Hao; Tanimoto, Akiko; Shkriabai, Nikoloz; Kvaratskhelia, Mamuka; Wysocki, Vicki; Gopalan, Venkat

    2016-01-01

    Among all enzymes in nature, RNase P is unique in that it can use either an RNA- or a protein-based active site for its function: catalyzing cleavage of the 5′-leader from precursor tRNAs (pre-tRNAs). The well-studied catalytic RNase P RNA uses a specificity module to recognize the pre-tRNA and a catalytic module to perform cleavage. Similarly, the recently discovered proteinaceous RNase P (PRORP) possesses two domains – pentatricopeptide repeat (PPR) and metallonuclease (NYN) – that are present in some other RNA processing factors. Here, we combined chemical modification of lysines and multiple-reaction monitoring mass spectrometry to identify putative substrate-contacting residues in Arabidopsis thaliana PRORP1 (AtPRORP1), and subsequently validated these candidate sites by site-directed mutagenesis. Using biochemical studies to characterize the wild-type (WT) and mutant derivatives, we found that AtPRORP1 exploits specific lysines strategically positioned at the tips of its V-shaped arms, in the first PPR motif and in the NYN domain proximal to the catalytic center, to bind and cleave pre-tRNA. Our results confirm that the protein- and RNA-based forms of RNase P have distinct modules for substrate recognition and cleavage, an unanticipated parallel in their mode of action. PMID:27166372

  8. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to withdraw metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby withdrawing the bearing bias as a function of frequency

  9. A digital signal processing-based bioinformatics approach to identifying the origins of HIV-1 non-B subtypes infecting US Army personnel serving abroad.

    PubMed

    Nwankwo, Norbert

    2013-06-01

    Two HIV-1 non-B isolates, 98US_MSC5007 and 98US_MSC5016, which have been identified amongst the US Army personnel serving abroad, are known to have originated from other nations. Notwithstanding, they are categorized as American strains. This is because their countries of origin are unknown. American isolates are basically B subtype. 98US_MSC5007 belongs to a Circulating Recombinant Form (CRF02_AG) while 98US_MSC5016 is of the C clade. Both sub-groups are recognized to have originated from the African and Asian continents. It has become necessary to properly determine the countries of origin of microbes and viruses. This is because diversity and cross-subtyping have been found to mitigate the designing and development of vaccine and therapeutic interventions. The aim of this study therefore is to identify the countries of origin of the two American isolates found amongst US Army personnel serving abroad. A Digital Signal Processing-based Bioinformatics technique called the Informational Spectrum Method (ISM) has been engaged. ISM entails translating the amino acid sequences of the protein into numerical sequences (signals) by means of one biological parameter (an amino acid scale). The signals are then processed using the Discrete Fourier Transform (DFT) in order to uncover and present the embedded biological information as Informational Spectra (IS). The Spectral Position of Maximum Binding Interaction (SPMBI) is used. Several approaches including phylogeny have preliminarily been employed in the determination of evolutionary trends of organisms and viruses. SPMBI has preliminarily been used to re-establish the semblance and common originality that exist between human and chimpanzee, and evolutionary roadmaps in the influenza and HIV viruses. The results disclosed that 98US_MSC5007 shared the same semblance and originality with a Nigerian isolate (92NG083) while 98US_MSC5016 did so with the Zairian isolates (ELI, MAL, and Z2/CDC-34). These results appear to demonstrate that the American soldiers
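
    The Informational Spectrum Method described above can be sketched in a few lines: map each residue to a numerical value, take the magnitude of the DFT, and report the peak frequency (the SPMBI). The EIIP table below uses commonly cited electron-ion interaction potential values, and a naive O(n²) DFT stands in for an FFT; both are assumptions of this sketch, not details from the paper.

```python
import math
import cmath

# Commonly cited EIIP values (in Rydbergs) for the 20 amino acids.
EIIP = {"L": 0.0000, "I": 0.0000, "N": 0.0036, "G": 0.0050, "V": 0.0057,
        "E": 0.0058, "P": 0.0198, "H": 0.0242, "K": 0.0371, "A": 0.0373,
        "Y": 0.0516, "W": 0.0548, "Q": 0.0761, "M": 0.0823, "S": 0.0829,
        "C": 0.0829, "T": 0.0941, "F": 0.0946, "R": 0.0959, "D": 0.1263}

def informational_spectrum(seq):
    """|DFT| of the EIIP-encoded sequence at frequencies k = 1 .. n//2
    (the DC component is excluded, as it carries no positional information)."""
    x = [EIIP[aa] for aa in seq]
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)))
            for k in range(1, n // 2 + 1)]

def spmbi(seq):
    """Spectral Position of Maximum Binding Interaction: the peak frequency index."""
    spec = informational_spectrum(seq)
    return 1 + max(range(len(spec)), key=spec.__getitem__)

# A strictly period-2 sequence concentrates its spectral energy at the Nyquist bin.
peak = spmbi("AG" * 16)   # -> 16 (= n/2 for n = 32)
```

    To compare two sequences, the ISM multiplies their spectra pointwise (the cross-spectrum); a shared dominant peak is taken as evidence of shared biological character, which is the basis for the originality comparisons reported above.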

  10. Learning robust pulses for generating universal quantum gates

    PubMed Central

    Dong, Daoyi; Wu, Chengzhi; Chen, Chunlin; Qi, Bo; Petersen, Ian R.; Nori, Franco

    2016-01-01

    Constructing a set of universal quantum gates is a fundamental task for quantum computation. Noise, disturbances, and fluctuations are unavoidable during the implementation of quantum gates in most practical quantum systems. This paper employs a sampling-based learning method to find robust control pulses for generating a set of universal quantum gates. Numerical results show that the learned robust control fields are insensitive to disturbances, uncertainties, and fluctuations during the realization of universal quantum gates. PMID:27782219

  11. Genome Sequencing Identifies Two Nearly Unchanged Strains of Persistent Listeria monocytogenes Isolated at Two Different Fish Processing Plants Sampled 6 Years Apart

    PubMed Central

    Holch, Anne; Webb, Kristen; Lukjancenko, Oksana; Ussery, David; Rosenthal, Benjamin M.

    2013-01-01

    Listeria monocytogenes is a food-borne human-pathogenic bacterium that can cause infections with a high mortality rate. It has a remarkable ability to persist in food processing facilities. Here we report the genome sequences for two L. monocytogenes strains (N53-1 and La111) that were isolated 6 years apart from two different Danish fish processing plants. Both strains are of serotype 1/2a and belong to a highly persistent DNA subtype (random amplified polymorphic DNA [RAPD] type 9). We demonstrate using in silico analyses that both strains belong to the multilocus sequence typing (MLST) type ST121 that has been isolated as a persistent subtype in several European countries. The purpose of this study was to use genome analyses to identify genes or proteins that could contribute to persistence. In a genome comparison, the two persistent strains were extremely similar and collectively differed from the reference lineage II strain, EGD-e. They also differed markedly from a lineage I strain (F2365). On the proteome level, the two strains were almost identical, with a predicted protein homology of 99.94%, differing at only 2 proteins. No single-nucleotide polymorphism (SNP) differences were seen between the two strains; in contrast, N53-1 and La111 differed from the EGD-e reference strain by 3,942 and 3,471 SNPs, respectively. We included a persistent L. monocytogenes strain from the United States (F6854) in our comparisons. Compared to nonpersistent strains, all three persistent strains were distinguished by two genome deletions: one, of 2,472 bp, typically contains the gene for inlF, and the other, of 3,017 bp, includes three genes potentially related to bacteriocin production and transport (lmo2774, lmo2775, and the 3′-terminal part of lmo2776). Further studies of highly persistent strains are required to determine if the absence of these genes promotes persistence. While the genome comparison did not point to a clear physiological explanation of the persistent
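    The whole-genome SNP comparison reported above (zero SNPs between N53-1 and La111, thousands against EGD-e) reduces, at its core, to counting point differences between aligned sequences. A toy sketch, assuming pre-aligned sequences of equal length with '-' as the gap character; the function name is illustrative, not from the paper:

```python
def count_snps(aligned_a, aligned_b):
    """Count single-nucleotide differences between two aligned sequences
    of equal length, ignoring positions where either sequence has a gap."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(1 for x, y in zip(aligned_a, aligned_b)
               if x != y and x != "-" and y != "-")
```

    Real SNP calling additionally handles read mapping, quality filtering, and ambiguous bases; this sketch only illustrates the final pairwise count.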

  12. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  13. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is in various policy domains an increasing call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan is based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret based robustness metrics compare the
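    The two families of metrics contrasted above can be made concrete with a small sketch. Assuming a cost-like performance table (lower is better) indexed by plan and plausible future, a satisficing metric counts the futures in which a plan meets a threshold, while a regret-based metric measures how far the plan falls short of the best candidate plan in each future. The plan names and numbers here are illustrative, not from the Vadsby or Delta Program cases:

```python
def satisficing_robustness(performance, threshold):
    """Fraction of plausible futures in which a plan meets the threshold
    (performance is cost-like here, so 'meets' means <= threshold)."""
    return sum(1 for p in performance if p <= threshold) / len(performance)

def max_regret(performance_by_plan, plan):
    """Largest regret of `plan` across futures: its cost minus the best
    cost any candidate plan achieves in that same future."""
    n_futures = len(performance_by_plan[plan])
    regrets = []
    for f in range(n_futures):
        best = min(costs[f] for costs in performance_by_plan.values())
        regrets.append(performance_by_plan[plan][f] - best)
    return max(regrets)

# Illustrative table: cost of each plan in three plausible futures.
perf = {"dike_raising": [3.0, 9.0, 4.0],
        "room_for_river": [5.0, 5.0, 5.0]}
```

    Note that a plan can win on one metric and lose on the other (here, dike raising satisfices less often but room-for-the-river carries less regret in some futures), which is exactly why the choice of robustness metric shapes which policy is adopted.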

  14. Meristem size contributes to the robustness of phyllotaxis in Arabidopsis

    PubMed Central

    Landrein, Benoit; Refahi, Yassin; Besnard, Fabrice; Hervieux, Nathan; Mirabet, Vincent; Boudaoud, Arezki; Vernoux, Teva; Hamant, Olivier

    2015-01-01

    Using the plant model Arabidopsis, the relationship between day length, the size of the shoot apical meristem, and the robustness of phyllotactic patterns were analysed. First, it was found that reducing day length leads to an increased meristem size and an increased number of alterations in the final positions of organs along the stem. Most of the phyllotactic defects could be related to an altered tempo of organ emergence, while not affecting the spatial positions of organ initiations at the meristem. A correlation was also found between meristem size and the robustness of phyllotaxis in two accessions (Col-0 and WS-4) and a mutant (clasp-1), independent of growth conditions. A reduced meristem size in clasp-1 was even associated with an increased robustness of the phyllotactic pattern, beyond what is observed in the wild type. Interestingly it was also possible to modulate the robustness of phyllotaxis in these different genotypes by changing day length. To conclude, it is shown first that robustness of the phyllotactic pattern is not maximal in the wild type, suggesting that, beyond its apparent stereotypical order, the robustness of phyllotaxis is regulated. Secondly, a role for day length in the robustness of the phyllotaxis was also identified, thus providing a new example of a link between patterning and environment in plants. Thirdly, the experimental results validate previous model predictions suggesting a contribution of meristem size in the robustness of phyllotaxis via the coupling between the temporal sequence and spatial pattern of organ initiations. PMID:25504644

  15. Robust disturbance rejection for flexible mechanical structures

    NASA Astrophysics Data System (ADS)

    Enzmann, Marc R.; Doeschner, Christian

    2000-06-01

    The topic of this presentation is a procedure for determining controller parameters using principles from Internal Model Control (IMC) in combination with Quantitative Feedback Theory (QFT) for robust vibration control of flexible mechanical structures. IMC design is based on a parameterization of all controllers that stabilize a given nominal plant, known as the Q-parameter or Youla parameter. It will be shown that the controller structure and the Q-parameter can be chosen in a very straightforward manner, so that a low-order controller results which stabilizes the given nominal model. Additional constraints can be imposed, so that the method allows a direct and transparent trade-off between control performance and controller complexity and facilitates the inclusion of low-pass filters. In order to test (and if necessary augment) the inherent robust performance of the resulting controllers, boundaries based on the work of Kidron and Yaniv are calculated in the Nichols charts of the open loop and the complementary sensitivity function. The application of these boundaries is presented. Very simple uncertainty models for resonant modes are used to assess the robustness of the design. Using a simply structured plant as an illustrative example, we demonstrate the design process. This illuminates several important features of the design process, e.g. the trade-off between conflicting objectives and the trade-off between controller complexity and achievable performance.

  16. Robust control technique for nuclear power plants

    SciTech Connect

    Murphy, G.V.; Bailey, J.M.

    1989-03-01

    This report summarizes the linear quadratic Gaussian (LQG) design technique with loop transfer recovery (LQG/LTR) for the design of control systems. The concepts of return ratio, return difference, inverse return difference, and singular values are summarized. The LQG/LTR design technique allows the synthesis of a robust control system. To illustrate the LQG/LTR technique, a linearized model of a simple process has been chosen. The process has three state variables, one input, and one output. Three control system design methods are compared: LQG, LQG/LTR, and a proportional plus integral controller (PI). 7 refs., 20 figs., 6 tabs.

  17. Reactive Transport Modeling of Chemical and Isotope Data to Identify Degradation Processes of Chlorinated Ethenes in a Diffusion-Dominated Media

    NASA Astrophysics Data System (ADS)

    Chambon, J. C.; Damgaard, I.; Jeannottat, S.; Hunkeler, D.; Broholm, M. M.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Chlorinated ethenes are among the most widespread contaminants in the subsurface and a major threat to groundwater quality at numerous contaminated sites. Many of these contaminated sites are found in low-permeability media, such as clay tills, where contaminant transport is controlled by diffusion. Degradation and transport processes of chlorinated ethenes are not well understood in such geological settings, so risk assessment and remediation at these sites are particularly challenging. In this work, a combined approach of chemical and isotope analysis on core samples, and reactive transport modeling, has been used to identify the degradation processes occurring at the core scale. The field data were from a site located at Vadsby, Denmark, where chlorinated solvents were spilled during the 1960s and '70s, resulting in contamination of the clay till and the underlying sandy layer (15 meters below surface). The clay till is heavily contaminated between 4 and 15 mbs, both with the mother compounds PCE/TCE and TCA and with the daughter products (DCE, VC, ethene, DCA), indicating the occurrence of natural dechlorination of both PCE/TCE and TCA. Intact core samples of length 0.5 m were collected from the source zone (between 6 and 12 mbs). Concentrations and stable isotope ratios of the mother compounds and their daughter products, as well as redox parameters, fatty acids, and microbial data, were analyzed with discrete sub-sampling along the cores. More samples (each 5 mm) were collected around the observed higher-permeability zones such as sand lenses, sand stringers, and fractures, where higher degradation activity was expected. This study made use of a reactive transport model to investigate the appropriateness of several conceptual models. The conceptual models considered the location of dechlorination and the degradation pathways (biotic reductive dechlorination or abiotic β-elimination with iron minerals) in three core profiles. The model includes diffusion in the matrix

  18. Vehicle active steering control research based on two-DOF robust internal model control

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun

    2016-07-01

    Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee robustness, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw-rate tracking performance with a degree of robustness. The proposed algorithm inherits the good model tracking ability of IMC and guarantees robustness to model uncertainties. In order to separate the model tracking design from the robustness design, an improved 2-degree-of-freedom (DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of double lane change maneuvers and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between the performance and robustness of active steering control algorithms is thus resolved, and higher control performance with a degree of robustness to model uncertainties is obtained.

  19. 3D robust digital image correlation for vibration measurement.

    PubMed

    Chen, Zhong; Zhang, Xianmin; Fatikow, Sergej

    2016-03-01

    Discrepancies between speckle images taken from different viewing angles during dynamic measurement deteriorate the correspondence in 3D digital image correlation (3D-DIC) for vibration measurement. To address this bottleneck, this paper presents two types of robust 3D-DIC methods for vibration measurement, SSD-robust and SWD-robust, which use a sum-of-square-difference (SSD) estimator plus a Geman-McClure regulating term and a Welch estimator plus a Geman-McClure regulating term, respectively. Because the regulating term with an adaptive rejecting bound can lessen the influence of abnormal pixel data in the dynamic measurement process, the robustness of the algorithm is enhanced. Robustness and precision evaluation experiments using a dual-frequency laser interferometer are implemented. The experimental results indicate that the two presented robust estimators can suppress the effects of abnormalities in the speckle images while maintaining higher precision in vibration measurement than the traditional SSD method; the SWD-robust and SSD-robust methods are suitable for weak image noise and strong image noise, respectively. PMID:26974624
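    One plausible reading of an "SSD estimator plus a Geman-McClure regulating term" is a per-pixel squared difference passed through the Geman-McClure influence function, which saturates for large residuals so that abnormal pixels cannot dominate the match score. The exact cost in the paper may differ; this sketch only illustrates the saturation idea:

```python
def ssd_robust(patch_a, patch_b, sigma=10.0):
    """Robustified patch-matching score: each squared residual r^2 is
    mapped through the Geman-McClure function r^2 / (r^2 + sigma^2),
    which approaches 1 for large residuals instead of growing without
    bound, so outlier pixels contribute at most ~1 each."""
    total = 0.0
    for a, b in zip(patch_a, patch_b):
        r2 = float(a - b) ** 2
        total += r2 / (r2 + sigma * sigma)
    return total
```

    With a plain SSD, a single corrupted pixel with residual 1000 adds 10^6 to the score; here it adds just under 1, so the match ranking is driven by the well-behaved pixels.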

  20. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often corrupted by noise, which makes them difficult to segment with traditional level-set-based segmentation models. In this paper, we propose a robust level set image segmentation model combining local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) of the input image, which can efficiently reduce the effects of noise, and the global fitting energy utilizes the correntropy-based K-means (CK) method, which adaptively emphasizes samples that are close to their corresponding cluster centers. By integrating the advantages of global information and local robust statistics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. A level set regularization term is then used to avoid re-initialization procedures during curve evolution. In addition, a Gaussian filter is utilized to keep the level set smooth during the curve evolution process. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results show the advantages of our model in terms of accuracy and robustness to noise. In particular, our method has been applied to synthetic and real images with desirable results.
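    The correntropy-based K-means idea of adaptively emphasizing samples close to their cluster centers can be sketched in one dimension: each sample is weighted by a Gaussian kernel of its distance to the center, so outliers barely move the center update. This is an illustrative half-quadratic-style update, not the paper's implementation:

```python
import math

def correntropy_weights(points, center, sigma=1.0):
    """Gaussian-kernel weights used in correntropy-based clustering:
    samples near the cluster center get weight ~1, far samples
    (likely noise) are exponentially down-weighted."""
    return [math.exp(-((p - center) ** 2) / (2 * sigma ** 2)) for p in points]

def ck_update_center(points, center, sigma=1.0):
    """One weighted center update: a correntropy-weighted mean."""
    w = correntropy_weights(points, center, sigma)
    return sum(wi * p for wi, p in zip(w, points)) / sum(w)
```

    On data such as [0.0, 0.1, -0.1, 10.0], the plain mean is dragged to 2.5 by the outlier, while the weighted update stays near 0; iterating the update to convergence gives the CK analogue of a K-means center step.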

  1. Robust control of hypersonic aircraft

    NASA Astrophysics Data System (ADS)

    Fan, Yong-hua; Yang, Jun; Zhang, Yu-zhuo

    2007-11-01

    The design of a robust controller for the longitudinal dynamics of a hypersonic aircraft using the parameter space method is presented. In this method, the desirable poles are mapped to the parameter space of the controller using a pole placement approach. The intersection of the parameter spaces yields a common controller for the multiple-mode system, which can meet the needs of the different phases of flight. Simulation has shown that the controller achieves high precision and robustness against the disturbances caused by separation, cowl opening, fuel on and fuel off, and the perturbations caused by unknown dynamics.

  2. Robust Sparse Blind Source Separation

    NASA Astrophysics Data System (ADS)

    Chenot, Cecile; Bobin, Jerome; Rapin, Jeremy

    2015-11-01

    Blind Source Separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm coined rGMCA (robust Generalized Morphological Component Analysis) is introduced to retrieve sparse sources in the presence of outliers. It explicitly estimates the sources, the mixing matrix, and the outliers. It also takes advantage of the estimation of the outliers to further implement a weighting scheme, which provides a highly robust separation procedure. Numerical experiments demonstrate the efficiency of rGMCA to estimate the mixing matrix in comparison with standard BSS techniques.

  3. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
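    The signal-to-noise ratios used in Robust Design have standard Taguchi forms; for a smaller-the-better response, SNR = -10·log10(mean(y²)), and for a larger-the-better response, SNR = -10·log10(mean(1/y²)). In both cases a higher SNR means better and more consistent performance across noise conditions. A direct transcription:

```python
import math

def snr_smaller_the_better(values):
    """Taguchi S/N ratio for a smaller-the-better response:
    SNR = -10 * log10(mean(y_i^2))."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

def snr_larger_the_better(values):
    """Taguchi S/N ratio for a larger-the-better response:
    SNR = -10 * log10(mean(1 / y_i^2))."""
    mean_inv_sq = sum(1.0 / (v * v) for v in values) / len(values)
    return -10.0 * math.log10(mean_inv_sq)
```

    In an orthogonal-array experiment, each row (parameter combination) is run under several noise conditions, its responses are collapsed into one SNR value, and the parameter levels with the highest average SNR are selected.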

  4. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574

  5. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., >63 pixels²) and with a larger offset length (i.e., >7 pixels), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment. PMID:26158105
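    The Hodges-Lehmann statistic at the heart of these equivalence tests is, in its two-sample form, the median of all pairwise differences between the samples; a shift estimate that is small relative to the intrasystem spread supports equivalence. A minimal sketch of the estimator (not the paper's full equivalence procedure):

```python
import statistics

def hodges_lehmann_shift(sample_a, sample_b):
    """Two-sample Hodges-Lehmann estimate of location shift: the median
    of all pairwise differences b - a. Being a median of differences,
    it is far less sensitive to outlying feature values than the
    difference of means."""
    diffs = [b - a for a in sample_a for b in sample_b]
    return statistics.median(diffs)
```

    In the framework above, `sample_a` and `sample_b` would be the values of one texture feature measured on the phantom by two FFDM systems; a feature is a robustness candidate when this shift is negligible for every system pair.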

  6. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices

    PubMed Central

    Keller, Brad M.; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J.; Zheng, Yuanjie; Ray, Shonket; Gee, James C.; Maidment, Andrew D. A.; Kontos, Despina

    2015-01-01

    Abstract. An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges–Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., >63 pixels²) and with a larger offset length (i.e., >7 pixels), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment. PMID:26158105

  7. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems: Digital Equipment's OSF/1, Hewlett-Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  8. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images, in copyright transactions. Many kinds of digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm for gray images, based on a chaos array in the DCT (discrete cosine transform) domain, is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have shown that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations, and geometric transformations. Furthermore, anyone without the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so the scheme is secure and robust.
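    Chaos-based watermarking schemes of this kind typically derive a key-dependent, hard-to-guess embedding order from a chaotic iteration such as the logistic map. The sketch below is a generic illustration of that idea, not the paper's algorithm: the key (x0, r) deterministically yields a permutation of embedding positions, and without the key the positions are impractical to recover.

```python
def logistic_sequence(x0, n, r=3.99):
    """Chaotic logistic-map sequence x_{k+1} = r * x_k * (1 - x_k),
    fully determined by the key (x0, r). For r near 4 and x0 in (0, 1),
    the iterates stay in (0, 1) and are extremely sensitive to the key."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def embed_order(x0, n, r=3.99):
    """Key-dependent permutation of n embedding positions (e.g. DCT
    coefficient slots): rank the positions by their chaotic values."""
    xs = logistic_sequence(x0, n, r)
    return sorted(range(n), key=lambda i: xs[i])
```

    Embedding then writes watermark bits into coefficients in this scrambled order, and extraction with the same key reverses it one-to-one; an attacker without (x0, r) sees no usable structure in the positions.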

  9. Enhancing robustness of coupled networks under targeted recoveries

    PubMed Central

    Gong, Maoguo; Ma, Lijia; Cai, Qing; Jiao, Licheng

    2015-01-01

    Coupled networks are extremely fragile because a node failure in one network can trigger a cascade of failures across the entire system. Existing studies have mainly focused on cascading failures and the robustness of coupled networks when the networks suffer attacks. In reality, it is necessary to recover damaged networks, and cascading failures also occur during recovery. In this study, we first analyze the cascading failures of coupled networks during recoveries. Then, a recovery robustness index is presented for evaluating the resilience of coupled networks to cascading failures in the recovery process. Finally, we propose a technique that protects several influential nodes in order to enhance the robustness of coupled networks under recovery, and adopt six strategies based on network centrality to find the influential nodes. Experiments on three coupled networks demonstrate that with a small number of influential nodes protected, the robustness of coupled networks under recovery can be greatly enhanced. PMID:25675980

  10. Robust Sliding Window Synchronizer Developed

    NASA Technical Reports Server (NTRS)

    Chun, Kue S.; Xiong, Fuqin; Pinchak, Stanley

    2004-01-01

    The development of an advanced robust timing synchronization scheme is crucial for the support of two NASA programs--Advanced Air Transportation Technologies and Aviation Safety. A mobile aeronautical channel is a dynamic channel where various adverse effects--such as Doppler shift, multipath fading, and shadowing due to precipitation, landscape, foliage, and buildings--cause the loss of symbol timing synchronization.

  11. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  12. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  13. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Az + K(y − Cz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
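    The open-loop fragility claimed above can be demonstrated numerically for a scalar unstable plant. In this illustrative Euler simulation (all parameter values are arbitrary choices, not from the paper), the same small modeling error delta is injected into the open-loop observer's dynamics matrix and into the closed-loop observer's gain; the open-loop estimation error inherits the plant's unstable growth, while the closed-loop error still decays:

```python
def simulate(a=1.0, k=3.0, c=1.0, delta=0.05, dt=0.001, steps=8000):
    """Compare observer realizations for the scalar plant x' = a*x,
    y = c*x (a > 0, so the plant is unstable).
      open loop:   z' = (m + delta)*z + k*y,  m = a - k*c nominal
      closed loop: z' = a*z + (k + delta)*(y - c*z)
    Returns the final estimation errors |x - z| of both realizations."""
    m = a - k * c                      # nominal open-loop observer pole (-2 here)
    x, z_open, z_closed = 1.0, 0.0, 0.0
    for _ in range(steps):
        y = c * x
        # open-loop realization: small error delta in its dynamics matrix
        z_open += dt * ((m + delta) * z_open + k * y)
        # closed-loop realization: same-size error, but in the gain K
        z_closed += dt * (a * z_closed + (k + delta) * (y - c * z_closed))
        x += dt * a * x                # true (unstable) plant
    return abs(x - z_open), abs(x - z_closed)
```

    The mechanism is visible in the error dynamics: open loop, e' = m·e − delta·z, so the forcing term delta·z grows like the unstable state x and the error diverges with it; closed loop, e' = (a − (k+delta)·c)·e, which stays stable for small delta because the observer contains an exact replica of the plant dynamics a.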

  14. Robust multiplatform RF emitter localization

    NASA Astrophysics Data System (ADS)

    Al Issa, Huthaifa; Ordóñez, Raúl

    2012-06-01

    In recent years, demand for position-based services has increased. Recent developments in communications and RF technology have enabled system concept formulations and designs for low-cost radar systems using state-of-the-art software radio modules. This research investigates a novel multi-platform RF emitter localization technique denoted Position-Adaptive RF Direction Finding (PADF). The formulation is based on iterative path-loss (i.e., Path Loss Exponent, or PLE) estimates measured across multiple platforms in order to autonomously adapt (i.e., self-adjust) the location of each distributed/cooperative platform. Experiments conducted at the Air Force Research Laboratory (AFRL) indicate that this position-adaptive approach exhibits potential for accurate emitter localization in challenging embedded multipath environments such as urban environments. The focus of this paper is on the robustness of the distributed approach to RF-based location tracking. In order to localize the transmitter, we use Received Signal Strength Indicator (RSSI) data to approximate the distance from the transmitter to the revolving receivers. We provide an algorithm for on-line estimation of the Path Loss Exponent (PLE), which is used to model distance based on Received Signal Strength (RSS) measurements. The emitter position is estimated from the surrounding sensors' RSS values using Least-Squares Estimation (LSE). The PADF has been tested on a number of different configurations in the laboratory via the design and implementation of four IRIS wireless sensor nodes as receivers and one hidden sensor node as a transmitter during the localization phase. The robustness of detecting the transmitter's position is assessed by collecting RSSI data through experiments; data manipulation in MATLAB then determines the robustness of each node and ultimately that of each configuration. 
The parameters that are used in the functions are
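
    The two estimation steps named in the abstract — fitting a path-loss exponent to RSS data and least-squares multilateration — can be sketched as follows. The log-distance model, anchor layout, and all numbers are invented for a noise-free synthetic check:

```python
import numpy as np

def estimate_ple(dists, rssi, p0, d0=1.0):
    """Least-squares fit of the path-loss exponent n in the
    log-distance model RSSI = p0 - 10*n*log10(d/d0)."""
    x = -10.0 * np.log10(np.asarray(dists, float) / d0)
    y = np.asarray(rssi, float) - p0
    return float(np.sum(x * y) / np.sum(x * x))

def localize(anchors, rssi, p0, n, d0=1.0):
    """RSSI-to-range conversion followed by linearized
    least-squares multilateration against the first anchor."""
    d = d0 * 10.0 ** ((p0 - np.asarray(rssi, float)) / (10.0 * n))
    (x1, y1), d1 = anchors[0], d[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2.0 * (x1 - xi), 2.0 * (y1 - yi)])
        b.append(di**2 - d1**2 - xi**2 + x1**2 - yi**2 + y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Noise-free synthetic check: four receivers, emitter at (3, 4), n = 2.5.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
p0, n_true = -40.0, 2.5
emitter = np.array([3.0, 4.0])
d_true = [float(np.hypot(emitter[0] - x, emitter[1] - y)) for x, y in anchors]
rssi = [p0 - 10.0 * n_true * np.log10(d) for d in d_true]
n_est = estimate_ple(d_true, rssi, p0)
pos_est = localize(anchors, rssi, p0, n_est)   # recovers (3, 4)
```

    With real multipath-corrupted RSSI the ranges are noisy and the PLE varies per link, which is why the paper estimates it on-line per node.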

  15. System identification for robust control design

    SciTech Connect

    Dohner, J.L.

    1995-04-01

    System identification for the purpose of robust control design involves estimating a nominal model of a physical system and the uncertainty bounds of that nominal model via the use of experimentally measured input/output data. Although many algorithms have been developed to identify nominal models, little effort has been directed towards identifying uncertainty bounds. Therefore, in this document, a discussion of both nominal model identification and bounded output multiplicative uncertainty identification will be presented. This document is divided into several sections. Background information relevant to system identification and control design will be presented. A derivation of eigensystem realization type algorithms will be presented. An algorithm will be developed for calculating the maximum singular value of output multiplicative uncertainty from measured data. An application will be given involving the identification of a complex system with aliased dynamics, feedback control, and exogenous noise disturbances. And, finally, a short discussion of results will be presented.

  16. Robustness measure of hybrid intra-particle entanglement, discord, and classical correlation with initial Werner state

    NASA Astrophysics Data System (ADS)

    Saha, P.; Sarkar, D.

    2016-02-01

    Quantum information processing is largely dependent on the robustness of non-classical correlations, such as entanglement and quantum discord. However, all the realistic quantum systems are thermodynamically open and lose their coherence with time through environmental interaction. The time evolution of quantum entanglement, discord, and the respective classical correlation for a single, spin-1/2 particle under spin and energy degrees of freedom, with an initial Werner state, has been investigated in the present study. The present intra-particle system is considered to be easier to produce than its inter-particle counterpart. Experimentally, this type of system may be realized in the well-known Penning trap. The most stable correlation was identified through maximization of a system-specific global objective function. Quantum discord was found to be the most stable, followed by the classical correlation. Moreover, all the correlations were observed to attain highest robustness under initial Bell state, with minimum possible dephasing and decoherence parameters.

  17. Metadata-driven comparative analysis tool for sequences (meta-CATS): an automated process for identifying significant sequence variations that correlate with virus attributes.

    PubMed

    Pickett, B E; Liu, M; Sadat, E L; Squires, R B; Noronha, J M; He, S; Jen, W; Zaremba, S; Gu, Z; Zhou, L; Larsen, C N; Bosch, I; Gehrke, L; McGee, M; Klem, E B; Scheuermann, R H

    2013-12-01

    The Virus Pathogen Resource (ViPR; www.viprbrc.org) and Influenza Research Database (IRD; www.fludb.org) have developed a metadata-driven Comparative Analysis Tool for Sequences (meta-CATS), which performs statistical comparative analyses of nucleotide and amino acid sequence data to identify correlations between sequence variations and virus attributes (metadata). Meta-CATS guides users through: selecting a set of nucleotide or protein sequences; dividing them into multiple groups based on any associated metadata attribute (e.g. isolation location, host species); performing a statistical test at each aligned position; and identifying all residues that significantly differ between the groups. As proofs of concept, we have used meta-CATS to identify sequence biomarkers associated with dengue viruses isolated from different hemispheres, and to identify variations in the NS1 protein that are unique to each of the 4 dengue serotypes. Meta-CATS is made freely available to virology researchers to identify genotype-phenotype correlations for development of improved vaccines, diagnostics, and therapeutics.
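
    The per-position test can be sketched as a chi-square test of independence between group membership and residue identity — a plausible reading of the abstract's "statistical test at each aligned position" (the exact test used by meta-CATS may differ, and in practice a multiple-testing correction would be applied across positions):

```python
import numpy as np
from scipy.stats import chi2_contingency

def column_test(col_a, col_b):
    """P-value of a chi-square test of independence between group
    membership and residue identity at one aligned position."""
    alphabet = sorted(set(col_a) | set(col_b))
    table = np.array([[col_a.count(r) for r in alphabet],
                      [col_b.count(r) for r in alphabet]])
    _, p, _, _ = chi2_contingency(table)
    return p

# A position fixed differently in each group is highly significant ...
p_diverged = column_test(list("AAAAAAAAAA"), list("GGGGGGGGGG"))
# ... while an identically distributed position is not.
p_same = column_test(list("AAAAAGGGGG"), list("AAAAAGGGGG"))
```

    Running such a test over every aligned column and keeping the significant ones yields the candidate genotype-phenotype markers the tool reports.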

  18. Metadata-driven Comparative Analysis Tool for Sequences (meta-CATS): an Automated Process for Identifying Significant Sequence Variations that Correlate with Virus Attributes

    PubMed Central

    Pickett, BE; Liu, M; Sadat, EL; Squires, RB; Noronha, JM; He, S; Jen, W; Zaremba, S; Gu, Z; Zhou, L; Larsen, CN; Bosch, I; Gehrke, L; McGee, M; Klem, EB; Scheuermann, RH

    2016-01-01

    The Virus Pathogen Resource (ViPR; www.viprbrc.org) and Influenza Research Database (IRD; www.fludb.org) have developed a metadata-driven Comparative Analysis Tool for Sequences (meta-CATS), which performs statistical comparative analyses of nucleotide and amino acid sequence data to identify correlations between sequence variations and virus attributes (metadata). Meta-CATS guides users through: selecting a set of nucleotide or protein sequences; dividing them into multiple groups based on any associated metadata attribute (e.g. isolation location, host species); performing a statistical test at each aligned position; and identifying all residues that significantly differ between the groups. As proofs of concept, we have used meta-CATS to identify sequence biomarkers associated with dengue viruses isolated from different hemispheres, and to identify variations in the NS1 protein that are unique to each of the 4 dengue serotypes. Meta-CATS is made freely available to virology researchers to identify genotype-phenotype correlations for development of improved vaccines, diagnostics, and therapeutics. PMID:24210098

  19. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  20. Robust technique allowing manufacturing superoleophobic surfaces

    NASA Astrophysics Data System (ADS)

    Bormashenko, Edward; Grynyov, Roman; Chaniel, Gilad; Taitelbaum, Haim; Bormashenko, Yelena

    2013-04-01

    We report a robust technique for manufacturing superhydrophobic and oleophobic (omniphobic) surfaces from industrial-grade low-density polyethylene. The reported process includes two stages: (1) hot embossing of polyethylene with micro-scaled steel gauzes; (2) treatment of the embossed surfaces with cold radiofrequency plasma of tetrafluoromethane. The reported surfaces demonstrate not only pronounced superhydrophobicity but also superoleophobicity, which results from the hierarchical nano-scaled topography of the fluorinated polyethylene surface. The observed superoleophobicity is strengthened by hydrophobic recovery. The stability of the Cassie wetting regime was studied.

  1. Identifying Training Needs to Improve Indigenous Community Representatives Input into Environmental Resource Management Consultative Processes: A Case Study of the Bundjalung Nation

    ERIC Educational Resources Information Center

    Lloyd, David; Norrie, Fiona

    2004-01-01

    Despite increased engagement of Indigenous representatives as participants on consultative panels charged with processes of natural resource management, concerns have been raised by both Indigenous representatives and management agencies regarding the ability of Indigenous people to have quality input into the decisions these processes produce. In…

  2. Methodology for the conceptual design of a robust and opportunistic system-of-systems

    NASA Astrophysics Data System (ADS)

    Talley, Diana Noonan

    Systems are becoming more complicated, complex, and interrelated. Designers have recognized the need to develop systems from a holistic perspective and design them as Systems-of-Systems (SoS). The design of the SoS, especially in the conceptual design phase, is generally characterized by significant uncertainty. As a result, it is possible for all three types of uncertainty (aleatory, epistemic, and error) and the associated factors of uncertainty (randomness, sampling, confusion, conflict, inaccuracy, ambiguity, vagueness, coarseness, and simplification) to affect the design process. While there are a number of existing SoS design methods, several gaps have been identified: the ability to model all of the factors of uncertainty at varying levels of knowledge; the ability to consider both the pernicious and propitious aspects of uncertainty; and the ability to determine the value of reducing the uncertainty in the design process. While there are numerous uncertainty modeling theories, no one theory can effectively model every kind of uncertainty. This research presents a Hybrid Uncertainty Modeling Method (HUMM) that integrates techniques from the following theories: Probability Theory, Evidence Theory, Fuzzy Set Theory, and Info-Gap theory. The HUMM is capable of modeling all of the different factors of uncertainty and can model the uncertainty for multiple levels of knowledge. In the design process, there are both pernicious and propitious characteristics associated with the uncertainty. Existing design methods typically focus on developing robust designs that are insensitive to the associated uncertainty. These methods do not capitalize on the possibility of maximizing the potential benefit associated with the uncertainty. This research demonstrates how these deficiencies can be overcome by identifying the most robust and opportunistic design. 
In a design process it is possible that the most robust and opportunistic design will not be selected from the set

  3. Key molecular processes of the diapause to post-diapause quiescence transition in the alfalfa leafcutting bee Megachile rotundata identified by comparative transcriptome analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Insect diapause (dormancy) synchronizes an insect’s life cycle to seasonal changes in the abiotic and biotic resources required for development and reproduction. Transcription analysis of Megachile rotundata diapause termination identified 399 post-diapause upregulated and 144 post-diapause down-reg...

  4. Chevron: Refinery Identifies $4.4 Million in Annual Savings by Using Process Simulation Models to Perform Energy-Efficiency Assessment

    SciTech Connect

    Not Available

    2004-05-01

    In an energy-efficiency study at its refinery near Salt Lake City, Utah, Chevron focused on light hydrocarbons processing. The company found it could recover hydrocarbons from its fuel gas system and sell them. By using process simulation models of special distillation columns and associated reboilers and condensers, Chevron could predict the performance of potential equipment configuration changes and process modifications. More than 25,000 MMBtu in natural gas could be saved annually if a debutanizer upgrade project and a new saturated gas plant project were completed. Together, these projects would save $4.4 million annually.

  5. Robust multi-objective calibration strategies - possibilities for improving flood forecasting

    NASA Astrophysics Data System (ADS)

    Krauße, T.; Cullmann, J.; Saile, P.; Schmitz, G. H.

    2012-10-01

    Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods became popular in order to identify parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This holds regardless of whether the objective is aggregated from several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat model calibration simply as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to a very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify robust model
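
    The non-dominated filtering step that multi-objective calibration relies on can be sketched in a few lines (the parameter scores are invented; both objectives are error measures to be minimized):

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated points (all objectives minimized).

    A point is dominated if another point is no worse in every
    objective and strictly better in at least one.
    """
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(obj):
        dominated = np.any(np.all(obj <= p, axis=1) &
                           np.any(obj < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Candidate parameter vectors scored on two error measures:
scores = [[0.30, 0.60], [0.45, 0.40], [0.50, 0.65], [0.35, 0.35]]
front = pareto_front(scores)   # the two non-dominated candidates
```

    ROPE-style approaches then go a step further, using data depth to pick parameter vectors that sit deep inside the good-performing region rather than on its boundary.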

  6. Using stable isotopes and major ions to identify hydrological processes and geochemical characteristics in a typical karstic basin, Guizhou, Southwest China.

    PubMed

    Han, Zhiwei; Tang, Changyuan; Wu, Pan; Zhang, Ruixue; Zhang, Chipeng

    2014-01-01

    The investigation of hydrological processes is very important for water resource development in karst basins. In order to understand these processes associated with complex hydrogeochemical evolution, a typical basin was chosen in Houzai, southwest China. The basin was hydrogeologically classified into three zones based on hydrogen and oxygen isotopes as well as the field surveys. Isotopic values were found to be enriched in zone 2 where paddy fields were prevailing with well-developed underground flow systems, and heavier than those in zone 1. Zone 3 was considered as the mixture of zones 1 and 2 with isotopic values falling in the range between the two zones. A conceptual hydrological model was thus proposed to reveal the probable hydrological cycle in the basin. In addition, major processes of long-term chemical weathering in the karstic basin were discussed, and reactions between water and carbonate rocks proved to be the main geochemical processes in karst aquifers.

  7. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
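
    The quantities compared in the abstract are easy to compute directly. The sketch below uses two small illustrative graphs: a 4-cycle, where Fiedler's lower bound on node connectivity is tight, and a 4-path, where it is conservative:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Fiedler value: second-smallest eigenvalue of the graph
    Laplacian L = D - A. For non-complete graphs it is a lower
    bound on the node connectivity."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return float(np.sort(np.linalg.eigvalsh(lap))[1])

# Cycle C4: node connectivity 2; the bound is tight (lambda_2 = 2).
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
lam_c4 = algebraic_connectivity(c4)

# Path P4: node connectivity 1, but lambda_2 = 2 - sqrt(2) ~ 0.59,
# illustrating how conservative the bound can be.
p4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
lam_p4 = algebraic_connectivity(p4)
```

    The paper's observation is about the converse direction: raising lambda_2 does not necessarily raise node connectivity, so the bound should not be read as a proxy for robustness.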

  8. Robust dynamic mitigation of instabilities

    NASA Astrophysics Data System (ADS)

    Kawata, S.; Karino, T.

    2015-04-01

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of the dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. Generally, instability would emerge from the perturbation of the physical quantity. Normally, the perturbation phase is unknown so that the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: If the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.
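
    The cancellation at the heart of the mechanism can be checked numerically: a perturbation seeded continuously with a wobbling (rotating) phase and growing at rate gamma accumulates far less than one seeded with a fixed phase. The growth rate, wobble frequency, and time horizon below are arbitrary illustrative values, not parameters from the paper:

```python
import numpy as np

gamma, omega, t_end = 1.0, 10.0, 5.0   # growth rate, wobble freq, horizon
tau = np.linspace(0.0, t_end, 20001)
dtau = tau[1] - tau[0]

def accumulated(phase_rate):
    """|integral_0^t exp(gamma*(t - tau)) * exp(i*phase_rate*tau) dtau|:
    superposed amplitude of continuously seeded, growing perturbations
    (trapezoidal quadrature)."""
    kern = np.exp(gamma * (t_end - tau)) * np.exp(1j * phase_rate * tau)
    return float(abs(((kern[:-1] + kern[1:]) / 2.0).sum() * dtau))

static = accumulated(0.0)     # fixed phase: ~(exp(gamma*t) - 1)/gamma
wobble = accumulated(omega)   # wobbling phase: later seeds partially
                              # cancel earlier, still-growing ones
```

    With these values the wobbling case accumulates roughly a tenth of the static growth, matching the analytic reduction factor gamma/sqrt(gamma^2 + omega^2).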

  9. Robust dynamic mitigation of instabilities

    SciTech Connect

    Kawata, S.; Karino, T.

    2015-04-15

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of the dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. Generally, instability would emerge from the perturbation of the physical quantity. Normally, the perturbation phase is unknown so that the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: If the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.

  10. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes, specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61 and the military's UH-60A Black Hawk, also produced by Sikorsky. The S-61 model was chosen because its details are readily available and because it can be limited to pitch and roll motion, reducing the number of degrees of freedom while retaining the two degrees of freedom that are the minimum required to demonstrate the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk was included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  11. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to guard against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
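
    A much-simplified sketch of the idea follows, with an SVD of the (frames x pixels) unfolding standing in for the paper's reduced-rank PARAFAC projection, no sub-cube randomization, and made-up video sizes:

```python
import numpy as np

def video_hash(frames, k=2):
    """Toy robust hash: project each frame onto the top-k right
    singular vectors of the (frames x pixels) unfolding and keep
    the signs. SVD stands in here for the PARAFAC decomposition
    used in the paper."""
    X = frames.reshape(frames.shape[0], -1)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v = vt[:k]
    # Resolve SVD sign ambiguity: make each vector's largest
    # component positive so hashes are comparable across videos.
    signs = np.sign(v[np.arange(k), np.abs(v).argmax(axis=1)])
    v = v * signs[:, None]
    return (X @ v.T > 0).astype(np.uint8).ravel()

rng = np.random.default_rng(1)
video = rng.normal(size=(16, 8, 8))
noisy = video + 1e-3 * rng.normal(size=video.shape)  # mild distortion
other = rng.normal(size=(16, 8, 8))                  # unrelated video

h = video_hash(video)
same_dist = int(np.sum(h != video_hash(noisy)))
other_dist = int(np.sum(h != video_hash(other)))
# same_dist stays near 0; other_dist lands near half the hash length.
```

    The robustness comes from quantizing projections onto dominant subspaces: small distortions barely move the projections, so the sign bits survive, while an unrelated video produces essentially random bits.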

  12. Gearbox design for uncertain load requirements using active robust optimization

    NASA Astrophysics Data System (ADS)

    Salomon, Shaul; Avigad, Gideon; Purshouse, Robin C.; Fleming, Peter J.

    2016-04-01

    Design and optimization of gear transmissions have been intensively studied, but surprisingly the robustness of the resulting optimal design to uncertain loads has never been considered. Active Robust (AR) optimization is a methodology to design products that attain robustness to uncertain or changing environmental conditions through adaptation. In this study the AR methodology is utilized to optimize the number of transmissions, as well as their gearing ratios, for an uncertain load demand. The problem is formulated as a bi-objective optimization problem where the objectives are to satisfy the load demand in the most energy efficient manner and to minimize production cost. The results show that this approach can find a set of robust designs, revealing a trade-off between energy efficiency and production cost. This can serve as a useful decision-making tool for the gearbox design process, as well as for other applications.

  13. DBSI: DNA-binding site identifier

    PubMed Central

    Zhu, Xiaolei; Ericksen, Spencer S.; Mitchell, Julie C.

    2013-01-01

    In this study, we present the DNA-Binding Site Identifier (DBSI), a new structure-based method for predicting protein interaction sites for DNA binding. DBSI was trained and validated on a data set of 263 proteins (TRAIN-263), tested on an independent set of protein-DNA complexes (TEST-206) and data sets of 29 unbound (APO-29) and 30 bound (HOLO-30) protein structures distinct from the training data. We computed 480 candidate features for identifying protein residues that bind DNA, including new features that capture the electrostatic microenvironment within shells near the protein surface. Our iterative feature selection process identified features important in other models, as well as features unique to the DBSI model, such as a banded electrostatic feature with spatial separation comparable with the canonical width of the DNA minor groove. Validations and comparisons with established methods using a range of performance metrics clearly demonstrate the predictive advantage of DBSI, and its comparable performance on unbound (APO-29) and bound (HOLO-30) conformations demonstrates robustness to binding-induced protein conformational changes. Finally, we offer our feature data table to others for integration into their own models or for testing improved feature selection and model training strategies based on DBSI. PMID:23873960

  14. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage grows, so does their number, and thereby the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR), and a calculation of the hash collision probability is included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a practical tool for all applications in practice.

  15. Understanding and Identifying the Child at Risk for Auditory Processing Disorders: A Case Method Approach in Examining the Interdisciplinary Role of the School Nurse

    ERIC Educational Resources Information Center

    Neville, Kathleen; Foley, Marie; Gertner, Alan

    2011-01-01

    Despite receiving increased professional and public awareness since the initial American Speech Language Hearing Association (ASHA) statement defining Auditory Processing Disorders (APDs) in 1993 and the subsequent ASHA statement (2005), many misconceptions remain regarding APDs in school-age children among health and academic professionals. While…

  16. Designing Flood Management Systems for Joint Economic and Ecological Robustness

    NASA Astrophysics Data System (ADS)

    Spence, C. M.; Grantham, T.; Brown, C. M.; Poff, N. L.

    2015-12-01

    Freshwater ecosystems across the United States are threatened by hydrologic change caused by water management operations and non-stationary climate trends. Nonstationary hydrology also threatens flood management systems' performance. Ecosystem managers and flood risk managers need tools to design systems that achieve flood risk reduction objectives while sustaining ecosystem functions and services in an uncertain hydrologic future. Robust optimization is used in water resources engineering to guide system design under climate change uncertainty. Using principles introduced by Eco-Engineering Decision Scaling (EEDS), we extend robust optimization techniques to design flood management systems that meet both economic and ecological goals simultaneously across a broad range of future climate conditions. We use three alternative robustness indices to identify flood risk management solutions that preserve critical ecosystem functions in a case study from the Iowa River, where recent severe flooding has tested the limits of the existing flood management system. We seek design modifications to the system that both reduce expected cost of flood damage while increasing ecologically beneficial inundation of riparian floodplains across a wide range of plausible climate futures. The first robustness index measures robustness as the fraction of potential climate scenarios in which both engineering and ecological performance goals are met, implicitly weighting each climate scenario equally. The second index builds on the first by using climate projections to weight each climate scenario, prioritizing acceptable performance in climate scenarios most consistent with climate projections. The last index measures robustness as mean performance across all climate scenarios, but penalizes scenarios with worse performance than average, rewarding consistency. Results stemming from alternate robustness indices reflect implicit assumptions about attitudes toward risk and reveal the
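
    The three indices can be written down schematically as follows. The exact definitions in the paper are not given in the abstract, so the thresholds, scenario weights, and shortfall penalty below are invented for illustration:

```python
import numpy as np

# Performance of one candidate design across 5 plausible climate
# scenarios (all numbers illustrative), and whether both the
# flood-risk and ecological goals are met in each scenario.
perf = np.array([3.0, 2.0, 4.0, 1.0, 2.5])
goals_met = np.array([True, True, False, False, True])
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # from projections

# Index 1: fraction of scenarios meeting both goals (equal weights).
idx1 = goals_met.mean()

# Index 2: projection-weighted fraction of acceptable scenarios.
idx2 = float(weights @ goals_met)

# Index 3: mean performance, penalized for below-average scenarios
# (rewards consistency across climates).
shortfall = np.maximum(0.0, perf.mean() - perf)
idx3 = perf.mean() - shortfall.mean()
```

    Ranking designs by idx1, idx2, or idx3 encodes different risk attitudes — scenario-agnostic satisficing, trust in projections, or aversion to inconsistent performance — which is the trade-off the study examines.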

  17. Robust Bearing Estimation for 3-Component Stations

    SciTech Connect

    Claassen, John P.

    1999-06-03

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular, the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The method was applied to a small but challenging set of events in a seismically active region. The method demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  18. Robust bearing estimation for 3-component stations

    SciTech Connect

    Claassen, John P.

    2000-02-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.
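The SEEC algorithm itself is not reproduced in the abstract, but the core bearing estimate it builds on can be illustrated with standard polarization analysis: take the principal eigenvector of the horizontal covariance matrix and resolve the 180-degree ambiguity with the vertical component. The function name and conventions below are illustrative assumptions:

```python
import math

def bearing_estimate(north, east, vert):
    """Polarization-based bearing estimate from 3-component data (a
    generic sketch, not the SEEC method). The dominant horizontal
    particle-motion direction is the principal eigenvector of the 2x2
    covariance of (N, E); the 180-degree ambiguity is resolved via the
    sign of the vertical-radial correlation (for a P arrival, radial
    and vertical motion are correlated).
    """
    n = len(north)
    mn = sum(north) / n; me = sum(east) / n; mz = sum(vert) / n
    cnn = sum((x - mn) ** 2 for x in north) / n
    cee = sum((x - me) ** 2 for x in east) / n
    cne = sum((a - mn) * (b - me) for a, b in zip(north, east)) / n
    # principal-axis angle of [[cnn, cne], [cne, cee]], measured from north
    theta = 0.5 * math.atan2(2 * cne, cnn - cee)
    un, ue = math.cos(theta), math.sin(theta)
    radial = [(a - mn) * un + (b - me) * ue for a, b in zip(north, east)]
    czr = sum((z - mz) * r for z, r in zip(vert, radial))
    if czr < 0:                 # flip to the other end of the axis
        theta += math.pi
    return math.degrees(theta) % 360.0
```

On a purely rectilinear synthetic arrival this recovers the motion azimuth exactly; real records would require the time-frequency windowing and quality metrics the abstract describes.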

  19. Robust and efficient overset grid assembly for partitioned unstructured meshes

    SciTech Connect

    Roget, Beatrice Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning. Another challenge arises because of the large variation in the type of mesh-block overlap and the resulting large load imbalance on multiple processors. Desirable traits for the grid assembly method are efficiency (requiring only a small fraction of the solver time), robustness (correct identification of all point types), and full automation (no user input required other than the mesh system). Additionally, the method should be scalable, which is an important challenge due to the inherent load imbalance. This paper describes a fully-automated grid assembly method, which can use two different donor search algorithms. One is based on the use of auxiliary grids and Exact Inverse Maps (EIM), and the other is based on the use of Alternating Digital Trees (ADT). The EIM method is demonstrated to be more efficient than the ADT method, while retaining robustness. An adaptive load re-balance algorithm is also designed and implemented, which considerably improves the scalability of the method.
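The EIM and ADT acceleration structures are beyond a short sketch, but the elementary check at the heart of any donor search, deciding whether a candidate cell contains a query point, can be shown for a 2-D triangular cell via barycentric coordinates (the weights double as interpolation weights for a receptor point). The function name and tolerance handling are illustrative assumptions:

```python
def point_in_triangle(p, a, b, c, tol=1e-12):
    """Barycentric point-containment test for a 2-D triangular cell.

    Returns (inside, weights); `weights` are the barycentric coordinates
    of p with respect to (a, b, c), usable as interpolation weights.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)
    if abs(det) < tol:
        return False, None          # degenerate cell
    w1 = ((px - ax) * (cy - ay) - (cx - ax) * (py - ay)) / det
    w2 = ((bx - ax) * (py - ay) - (px - ax) * (by - ay)) / det
    w0 = 1.0 - w1 - w2
    inside = all(w >= -tol for w in (w0, w1, w2))
    return inside, (w0, w1, w2)
```

A donor search repeats this test over candidate cells; the EIM/ADT structures described in the paper exist to keep the number of candidates small.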

  20. Determining the rp-process flow through 56Ni: resonances in 57Cu(p,γ)58Zn identified with GRETINA.

    PubMed

    Langer, C; Montes, F; Aprahamian, A; Bardayan, D W; Bazin, D; Brown, B A; Browne, J; Crawford, H; Cyburt, R H; Domingo-Pardo, C; Gade, A; George, S; Hosmer, P; Keek, L; Kontos, A; Lee, I-Y; Lemasson, A; Lunderberg, E; Maeda, Y; Matos, M; Meisel, Z; Noji, S; Nunes, F M; Nystrom, A; Perdikakis, G; Pereira, J; Quinn, S J; Recchia, F; Schatz, H; Scott, M; Siegl, K; Simon, A; Smith, M; Spyrou, A; Stevens, J; Stroberg, S R; Weisshaar, D; Wheeler, J; Wimmer, K; Zegers, R G T

    2014-07-18

    An approach is presented to experimentally constrain previously unreachable (p, γ) reaction rates on nuclei far from stability in the astrophysical rp process. Energies of all critical resonances in the 57Cu(p,γ)58Zn reaction are deduced by populating states in 58Zn with a (d, n) reaction in inverse kinematics at 75 MeV/u, and detecting γ-ray-recoil coincidences with the state-of-the-art γ-ray tracking array GRETINA and the S800 spectrograph at the National Superconducting Cyclotron Laboratory. The results reduce the uncertainty in the 57Cu(p,γ) reaction rate by several orders of magnitude. The effective lifetime of 56Ni, an important waiting point in the rp process in x-ray bursts, can now be determined entirely from experimentally constrained reaction rates.

  1. Using Ground based LiDAR to identify the processes of bluff erosion in the LeSueur River basin in Southern Minnesota

    NASA Astrophysics Data System (ADS)

    Day, S. S.; Gran, K. B.; Belmont, P.; Jennings, C. E.; Wawrzyniec, T. F.

    2009-12-01

    As with aerial LiDAR, the many benefits of ground-based LiDAR are still being discovered. Ground-based LiDAR offers high-resolution surveys of vertical features like bluffs and stream banks. When the surveys are repeated and compared, the data can reveal not only the rate of erosion, but in some cases the processes responsible for erosion. Three years of LiDAR surveys at fifteen sites in the Le Sueur River basin in southern Minnesota have revealed that the bluff composition, as well as the degree of consolidation, dictates both the erosion rate and the processes by which the bluffs erode. The bluffs in this basin are composed of glacial tills at varying degrees of compaction. Because this basin is incising, many of the bluffs are cut into terraces capped by alluvium. Over-consolidated bluffs tend to fail along joints, as large blocks. The tills which are not over-consolidated fail due to undercutting as the toe of the bluff is eroded by the river. Finally, those bluffs which are capped by alluvium fail due to undercutting at the toe, as well as by groundwater flow through the gravel layer at the base of the alluvium. Bluffs in the Le Sueur River contribute significantly towards a turbidity impairment in the basin, and understanding both the rate of erosion and the processes by which these bluffs erode can help managers target remediation practices. In this basin, ground-based LiDAR has not only helped to quantify bluff erosion, but has also shown the dominant erosion process.
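The survey-differencing step described above (repeat scans compared to yield an erosion rate) can be sketched as follows. The function name, the gridded distance-to-face representation, and the sign convention (positive change = retreat) are illustrative assumptions, not the authors' workflow:

```python
def bluff_change(z1, z2, cell_area, years):
    """Difference two co-registered gridded surveys of a bluff face.

    z1, z2: per-cell face positions (m) from repeat LiDAR scans on a
    common grid; positive difference means retreat (erosion).
    Returns (mean retreat rate in m/yr, eroded volume in m^3).
    """
    diffs = [b - a for a, b in zip(z1, z2)]
    mean_rate = sum(diffs) / len(diffs) / years
    volume = sum(d for d in diffs if d > 0) * cell_area
    return mean_rate, volume
```

In practice the scans must first be co-registered and meshed; this sketch only shows the final differencing arithmetic.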

  2. Autoantibodies against the exocrine pancreas in autoimmune pancreatitis: gene and protein expression profiling and immunoassays identify pancreatic enzymes as a major target of the inflammatory process

    PubMed Central

    Löhr, J.-Matthias; Faissner, Ralf; Koczan, Dirk; Bewerunge, Peter; Bassi, Claudio; Brors, Benedikt; Eils, Roland; Frulloni, Luca; Funk, Anette; Halangk, Walter; Jesenofsky, Ralf; Kaderali, Lars; Kleeff, Jörg; Krüger, Burkhard; Lerch, Markus M.; Lösel, Ralf; Magnani, Mauro; Neumaier, Michael; Nittka, Stephanie; Sahin-Tóth, Miklós; Sänger, Julian; Serafini, Sonja; Schnölzer, Martina; Thierse, Hermann-Josef; Wandschneider, Silke; Zamboni, Giuseppe; Klöppel, Günter

    2011-01-01

    Objectives Autoimmune pancreatitis (AIP) is thought to be an immune-mediated inflammatory process, directed against the epithelial components of the pancreas. Methods In order to explore key targets of the inflammatory process we analysed the expression of proteins at the RNA and protein level using genomics and proteomics, immunohistochemistry, Western blot and immunoassay. An animal model of AIP with LP-BM5 murine leukemia virus infected mice was studied in parallel. RNA microarrays of pancreatic tissue from 12 patients with AIP were compared to those of 8 patients with non-AIP chronic pancreatitis (CP). Results Expression profiling revealed 272 upregulated genes, including those encoding for immunoglobulins, chemokines and their receptors, and 86 downregulated genes, including those for pancreatic proteases such as three trypsinogen isoforms. Protein profiling showed that the expression of trypsinogens and other pancreatic enzymes was greatly reduced. Immunohistochemistry demonstrated a near-loss of trypsin positive acinar cells, which was also confirmed by Western blotting. The serum of AIP patients contained high titres of autoantibodies against the trypsinogens PRSS1, and PRSS2 but not against PRSS3. In addition, there were autoantibodies against the trypsin inhibitor PSTI (the product of the SPINK1 gene). In the pancreas of AIP animals we found similar protein patterns and a reduction in trypsinogen. Conclusion These data indicate that the immune-mediated process characterizing AIP involves pancreatic acinar cells and their secretory enzymes such as trypsin isoforms. Demonstration of trypsinogen autoantibodies may be helpful for the diagnosis of AIP. PMID:20407433

  3. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  4. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  5. On identified predictive control

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1993-01-01

    Self-tuning control algorithms are potential successors to manually tuned PID controllers traditionally used in process control applications. A very attractive design method for self-tuning controllers, which has been developed over recent years, is long-range predictive control (LRPC). The success of LRPC is due to its effectiveness with plants of unknown order and dead-time which may be simultaneously nonminimum phase and unstable or have multiple lightly damped poles (as in the case of flexible structures or flexible robot arms). LRPC is a receding horizon strategy and can be, in general terms, summarized as follows. Using an assumed long-range (or multi-step) cost function, the optimal control law is found in terms of unknown parameters of the predictor model of the process, the current input-output sequence, and the future reference signal sequence. The common approach is to assume that the input-output process model is known or separately identified and then to find the parameters of the predictor model. Once these are known, the optimal control law determines the control signal at the current time t, which is applied at the process input, and the whole procedure is repeated at the next time instant. Most of the recent research in this field is apparently centered around the LRPC formulation developed by Clarke et al., known as generalized predictive control (GPC). GPC uses an ARIMAX/CARIMA model of the process in its input-output formulation. In this paper, the GPC formulation is used but the process predictor model is derived from the state space formulation of the ARIMAX model and is directly identified over the receding horizon, i.e., using the current input-output sequence. The underlying technique in the design of the identified predictive control (IPC) algorithm is the identification algorithm of observer/Kalman filter Markov parameters developed by Juang et al. at NASA Langley Research Center and successfully applied to identification of flexible structures.
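The receding-horizon idea summarized above can be illustrated with a deliberately stripped-down sketch, not Clarke's GPC or the IPC algorithm itself: a known first-order plant, with the input held constant over the horizon so the quadratic cost has a closed-form minimizer. All names and parameter values are assumptions for illustration:

```python
def receding_horizon_step(a, b, y0, ref, horizon, lam):
    """One receding-horizon update for the plant y[t+1] = a*y[t] + b*u[t].

    For illustration the input is held constant over the horizon, so the
    cost J(u) = sum_k (y_k - ref)^2 + lam*u^2 is quadratic in the single
    unknown u and its minimizer is closed-form.
    """
    free, gains = [], []
    g = 0.0
    for k in range(1, horizon + 1):
        g = a * g + b                  # g_k = b*(1 + a + ... + a^(k-1))
        free.append(a ** k * y0)       # zero-input response at step k
        gains.append(g)                # response to a unit constant input
    num = sum(g * (ref - f) for g, f in zip(gains, free))
    den = sum(g * g for g in gains) + lam
    return num / den                   # from dJ/du = 0

# receding-horizon loop: re-optimize at every step, apply the first move
a, b, y, ref = 0.9, 0.5, 0.0, 1.0
for _ in range(30):
    u = receding_horizon_step(a, b, y, ref, horizon=5, lam=0.01)
    y = a * y + b * u
```

Full GPC optimizes the whole future control sequence and identifies the predictor model from data; the sketch only conveys the optimize-apply-repeat structure.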

  6. TARGET Researchers Identify Mutations in SIX1/2 and microRNA Processing Genes in Favorable Histology Wilms Tumor | Office of Cancer Genomics

    Cancer.gov

    TARGET researchers molecularly characterized favorable histology Wilms tumor (FHWT), a pediatric renal cancer. Comprehensive genome and transcript analyses revealed single-nucleotide substitution/deletion mutations in microRNA processing genes (15% of FHWT patients) and Sine Oculis Homeobox Homolog 1/2 (SIX1/2) genes (7% of FHWT patients). SIX1/2 genes play a critical role in renal development and were not previously associated with FHWT, thus presenting a novel role for SIX1/2 pathway aberrations in this disease.

  7. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014

    PubMed Central

    2016-01-01

    Purpose: The objective of this research was to identify weaknesses of undergraduate programs in terms of personnel and finances, organizational management and facilities in the view of faculty and library staff, and to determine factors that may facilitate program quality improvement. Methods: This is a descriptive analytical survey and, in terms of purpose, an applied evaluation study, in which undergraduate groups of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context input process product model in 2014. The statistical population consisted of three subgroups: department heads (n=10), faculty members (n=61), and library staff (n=10), for a total of 81 people. Data were collected through three researcher-made questionnaires based on a Likert scale. The data were then analyzed using descriptive and inferential statistics. Results: Results showed a desirable or relatively desirable situation for factors in the context, input, process, and product fields, except for the factors of administration and finance, and of research and educational spaces and equipment, which were in an undesirable situation. Conclusion: Based on the results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, the educational curriculum, and administration and finance, and recommended steps regarding finances, organizational management and communication with graduates in order to improve the quality of this system. PMID:27240892

  8. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    PubMed

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures. PMID:25998815

  9. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    PubMed

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures.

  10. Robust radiometric calibration and vignetting correction.

    PubMed

    Kim, Seon Joo; Pollefeys, Marc

    2008-04-01

    In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics which are more representative of the scene than normal mosaics.
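The robust-estimation principle the abstract relies on (estimates that tolerate noise and outliers) can be illustrated with a much simpler piece of the same problem: estimating the exposure ratio between two aligned images from the median of per-pixel ratios rather than the mean. This is a generic sketch, not the paper's algorithm; the function name and clipping bounds are assumptions:

```python
def exposure_ratio(img_a, img_b, lo=5, hi=250):
    """Robust exposure-ratio estimate between two aligned images.

    Uses the median of per-pixel brightness ratios, so a minority of
    outlier pixels (clipped values, specular highlights, slight
    misalignment) does not bias the estimate the way a mean would.
    Pixels outside [lo, hi] are excluded as likely clipped.
    """
    ratios = sorted(b / a for a, b in zip(img_a, img_b)
                    if lo <= a <= hi and lo <= b <= hi)
    if not ratios:
        raise ValueError("no usable pixel pairs")
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])
```

The paper's contribution is to apply this kind of outlier-resistant estimation jointly to the response function, exposures, and vignetting; the sketch shows only the robustness idea in isolation.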

  11. Robust excitons inhabit soft supramolecular nanotubes

    PubMed Central

    Eisele, Dörthe M.; Arias, Dylan H.; Fu, Xiaofeng; Bloemsma, Erik A.; Steiner, Colby P.; Jensen, Russell A.; Rebentrost, Patrick; Eisele, Holger; Tokmakoff, Andrei; Lloyd, Seth; Nelson, Keith A.; Nicastro, Daniela; Knoester, Jasper; Bawendi, Moungi G.

    2014-01-01

    Nature's highly efficient light-harvesting antennae, such as those found in green sulfur bacteria, consist of supramolecular building blocks that self-assemble into a hierarchy of close-packed structures. In an effort to mimic the fundamental processes that govern nature’s efficient systems, it is important to elucidate the role of each level of hierarchy: from molecule, to supramolecular building block, to close-packed building blocks. Here, we study the impact of hierarchical structure. We present a model system that mirrors nature’s complexity: cylinders self-assembled from cyanine-dye molecules. Our work reveals that even though close-packing may alter the cylinders’ soft mesoscopic structure, robust delocalized excitons are retained: Internal order and strong excitation-transfer interactions—prerequisites for efficient energy transport—are both maintained. Our results suggest that the cylindrical geometry strongly favors robust excitons; it presents a rational design that is potentially key to nature’s high efficiency, allowing construction of efficient light-harvesting devices even from soft, supramolecular materials. PMID:25092336

  12. Robust Inflation from fibrous strings

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cicoli, M.; de Alwis, S.; Quevedo, F.

    2016-05-01

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models—including the key (though often ignored) issue of modulus stabilisation—to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index ns of the form r ∝ (ns − 1)² where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.

  13. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q⃗) = 0 into a nominal sound source S(q⃗) and a sound propagation operator L such that L(q⃗) = S(q⃗). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  14. Processes for identifying regional influences of and responses to increasing atmospheric CO2 and climate change - the MINK project: An overview

    SciTech Connect

    Rosenberg, N.J.; Crosson, P.R.

    1991-08-01

    Scientists believe that a serious change in the climate of the earth could occur in the course of the next two to five decades as a result of warming caused by the rapid accumulation of radiatively active trace gases in the atmosphere. There is concern that not only the amount of warming but the rate at which it occurs could be unprecedented, at least since the current interglacial period began. Scientific uncertainties remain in our understanding of the climatic changes that may follow from greenhouse warming. Nevertheless, large and rapid changes in regional climate are conceivable. General circulation models (GCMs) predict changes for the central U.S. as large as an 8°C increase in mean summertime temperature accompanied by a 1 mm/day decrease in mean precipitation. Most predictions are less extreme but, so long as the direction of change is credible, efforts are warranted to identify just what kinds of impacts to expect if society chooses to allow climate to change or cannot stop it from changing, and just what might be done to adjust to those impacts.

  15. Identifying phonological processing deficits in Northern Sotho-speaking children: The use of non-word repetition as a language assessment tool in the South African context.

    PubMed

    Wilsenach, Carien

    2016-01-01

    Diagnostic testing of speech/language skills in the African languages spoken in South Africa is a challenging task, as standardised language tests in the official languages of South Africa barely exist. Commercially available language tests are in English, and have been standardised in other parts of the world. Such tests are often translated into African languages, a practice that speech language therapists deem linguistically and culturally inappropriate. In response to the need for developing clinical language assessment instruments that could be used in South Africa, this article reports on data collected with a Northern Sotho non-word repetition task (NRT). Non-word repetition measures various aspects of phonological processing, including phonological working memory (PWM), and is used widely by speech language therapists, linguists, and educational psychologists in the Western world. The design of a novel Northern Sotho NRT is described, and it is argued that the task could be used successfully in the South African context to discriminate between children with weak and strong Northern Sotho phonological processing ability, regardless of the language of learning and teaching. The NRT was piloted with 120 third graders, and showed moderate to strong correlations with other measures of PWM, such as digit span and English non-word repetition. Furthermore, the task was positively associated with both word and fluent reading in Northern Sotho, and it reliably predicted reading outcomes in the tested population. Suggestions are made for improving the current version of the Northern Sotho NRT, whereafter it should be suitable to test learners from various age groups.

  16. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
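The kind of nonparametric wavelet processing described above can be illustrated with the simplest possible instance: one-level Haar decomposition with soft thresholding of the detail coefficients. This is a minimal stand-in, not the paper's filter bank; the function name and threshold handling are assumptions (even-length input assumed):

```python
import math

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage on an even-length signal.

    Decomposes x into approximation and detail coefficients, soft-
    thresholds the detail (high-frequency) part, and reconstructs.
    """
    root2 = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / root2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / root2 for i in range(0, len(x) - 1, 2)]
    # soft-threshold: shrink toward zero, zeroing small coefficients
    detail = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / root2, (a - d) / root2]
    return out
```

With the threshold at zero the transform is exactly invertible; raising it suppresses high-frequency content while leaving the smooth part of the signal intact, which is the effect exploited before estimating stability margins.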

  17. Complexity and robustness in hypernetwork models of metabolism.

    PubMed

    Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J

    2016-10-01

    Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
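The site-percolation measurement the abstract extends to hypernetworks can be sketched on a toy example: delete a set of nodes, restrict every hyperedge to the survivors, and report the largest connected component as a fraction of the original node count. The representation (hyperedges as sets of nodes) and the robustness proxy are illustrative assumptions:

```python
def largest_component_fraction(nodes, hyperedges, removed):
    """Site percolation on a hypernetwork (union-find over survivors).

    Deletes `removed` nodes, restricts every hyperedge to the remaining
    nodes, and returns the largest connected component's size as a
    fraction of the original node count.
    """
    alive = set(nodes) - set(removed)
    parent = {v: v for v in alive}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for edge in hyperedges:
        members = [v for v in edge if v in alive]
        for u, v in zip(members, members[1:]):
            parent[find(u)] = find(v)
    if not alive:
        return 0.0
    sizes = {}
    for v in alive:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / len(nodes)
```

Averaging this quantity over many random choices of `removed` at a given deletion fraction gives a percolation curve, the kind of robustness profile compared across bacterial metabolic hypernetworks in the paper.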

  18. Robust Burg estimation of stationary autoregressive mixtures covariance

    NASA Astrophysics Data System (ADS)

    Decurninge, Alexis; Barbaresco, Frédéric

    2015-01-01

    Burg estimators are classically used for the estimation of the autocovariance of a stationary autoregressive process. We propose to consider scale mixtures of stationary autoregressive processes, a non-Gaussian extension of the latter. The traces of such processes are Spherically Invariant Random Vectors (SIRV) with a constraint on the scatter matrix due to the autoregressive model. We propose adaptations of the Burg estimators to the considered models and their associated robust versions based on geometrical considerations.
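The classical Burg recursion that the abstract's robust SIRV extensions build on is compact enough to sketch directly; the robust, geometry-based adaptations themselves are beyond this illustration. The coefficient convention below (prediction form) is an assumption:

```python
import random

def burg(x, order):
    """Classical Burg estimation of AR coefficients, minimizing the sum
    of forward and backward prediction errors. Returns coefficients c
    such that x_hat[n] = c[0]*x[n-1] + c[1]*x[n-2] + ...
    """
    n = len(x)
    f = list(x)                 # forward prediction errors
    b = list(x)                 # backward prediction errors
    a = [1.0]                   # AR polynomial, a[0] = 1
    for m in range(order):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 for i in range(m + 1, n)) + \
              sum(b[i] ** 2 for i in range(m, n - 1))
        k = num / den           # reflection coefficient
        # Levinson update of the AR polynomial
        a = [1.0] + [a[i] + k * a[m + 1 - i] for i in range(1, m + 1)] + [k]
        for i in range(n - 1, m, -1):   # update errors using old values
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
    return [-c for c in a[1:]]

# usage: recover the coefficient of a simulated Gaussian AR(1) process
rng = random.Random(0)
x = [0.0]
for _ in range(2999):
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))
coeffs = burg(x, 1)
```

For the heavy-tailed scale mixtures considered in the paper this plain Gaussian estimator degrades, which is the motivation for the robust versions the authors propose.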

  19. Mutant alpha-galactosidase A enzymes identified in Fabry disease patients with residual enzyme activity: biochemical characterization and restoration of normal intracellular processing by 1-deoxygalactonojirimycin.

    PubMed

    Ishii, Satoshi; Chang, Hui-Hwa; Kawasaki, Kunito; Yasuda, Kayo; Wu, Hui-Li; Garman, Scott C; Fan, Jian-Qiang

    2007-09-01

    Fabry disease is a lysosomal storage disorder caused by the deficiency of alpha-Gal A (alpha-galactosidase A) activity. In order to understand the molecular mechanism underlying alpha-Gal A deficiency in Fabry disease patients with residual enzyme activity, enzymes with different missense mutations were purified from transfected COS-7 cells and the biochemical properties were characterized. The mutant enzymes detected in variant patients (A20P, E66Q, M72V, I91T, R112H, F113L, N215S, Q279E, M296I, M296V and R301Q), and those found mostly in mild classic patients (A97V, A156V, L166V and R356W) appeared to have normal Km and Vmax values. The degradation of all mutants (except E59K) was partially inhibited by treatment with kifunensine, a selective inhibitor of ER (endoplasmic reticulum) alpha-mannosidase I. Metabolic labelling and subcellular fractionation studies in COS-7 cells expressing the L166V and R301Q alpha-Gal A mutants indicated that the mutant protein was retained in the ER and degraded without processing. Addition of DGJ (1-deoxygalactonojirimycin) to the culture medium of COS-7 cells transfected with a large set of missense mutant alpha-Gal A cDNAs effectively increased both enzyme activity and protein yield. DGJ was capable of normalizing intracellular processing of mutant alpha-Gal A found in both classic (L166V) and variant (R301Q) Fabry disease patients. In addition, the residual enzyme activity in fibroblasts or lymphoblasts from both classic and variant hemizygous Fabry disease patients carrying a variety of missense mutations could be substantially increased by cultivation of the cells with DGJ. These results indicate that a large proportion of mutant enzymes in patients with residual enzyme activity are kinetically active. Excessive degradation in the ER could be responsible for the deficiency of enzyme activity in vivo, and the DGJ approach may be broadly applicable to Fabry disease patients with missense mutations.

  20. Mutant α-galactosidase A enzymes identified in Fabry disease patients with residual enzyme activity: biochemical characterization and restoration of normal intracellular processing by 1-deoxygalactonojirimycin

    PubMed Central

    Ishii, Satoshi; Chang, Hui-Hwa; Kawasaki, Kunito; Yasuda, Kayo; Wu, Hui-Li; Garman, Scott C.; Fan, Jian-Qiang

    2007-01-01

    Fabry disease is a lysosomal storage disorder caused by the deficiency of α-Gal A (α-galactosidase A) activity. In order to understand the molecular mechanism underlying α-Gal A deficiency in Fabry disease patients with residual enzyme activity, enzymes with different missense mutations were purified from transfected COS-7 cells and the biochemical properties were characterized. The mutant enzymes detected in variant patients (A20P, E66Q, M72V, I91T, R112H, F113L, N215S, Q279E, M296I, M296V and R301Q), and those found mostly in mild classic patients (A97V, A156V, L166V and R356W) appeared to have normal Km and Vmax values. The degradation of all mutants (except E59K) was partially inhibited by treatment with kifunensine, a selective inhibitor of ER (endoplasmic reticulum) α-mannosidase I. Metabolic labelling and subcellular fractionation studies in COS-7 cells expressing the L166V and R301Q α-Gal A mutants indicated that the mutant protein was retained in the ER and degraded without processing. Addition of DGJ (1-deoxygalactonojirimycin) to the culture medium of COS-7 cells transfected with a large set of missense mutant α-Gal A cDNAs effectively increased both enzyme activity and protein yield. DGJ was capable of normalizing intracellular processing of mutant α-Gal A found in both classic (L166V) and variant (R301Q) Fabry disease patients. In addition, the residual enzyme activity in fibroblasts or lymphoblasts from both classic and variant hemizygous Fabry disease patients carrying a variety of missense mutations could be substantially increased by cultivation of the cells with DGJ. These results indicate that a large proportion of mutant enzymes in patients with residual enzyme activity are kinetically active. Excessive degradation in the ER could be responsible for the deficiency of enzyme activity in vivo, and the DGJ approach may be broadly applicable to Fabry disease patients with missense mutations. PMID:17555407

  1. Robust, multifunctional flood defenses in the Dutch rural riverine area

    NASA Astrophysics Data System (ADS)

    van Loon-Steensma, J. M.; Vellinga, P.

    2014-05-01

    This paper reviews the possible functions as well as strengths, weaknesses, opportunities, and threats for robust flood defenses in the rural riverine areas of the Netherlands on the basis of the recent literature and case studies at five locations in the Netherlands where dike reinforcement is planned. For each of the case studies semi-structured interviews with experts and stakeholders were conducted. At each of the five locations, suitable robust flood defenses could be identified that would contribute to the envisaged functions and ambitions for the respective areas. Primary strengths of a robust, multifunctional dike in comparison to a traditional dike appeared to be the more efficient space use due to the combination of different functions, a longer-term focus and greater safety.

  2. The robustness and restoration of a network of ecological networks.

    PubMed

    Pocock, Michael J O; Evans, Darren M; Memmott, Jane

    2012-02-24

    Understanding species' interactions and the robustness of interaction networks to species loss is essential to understand the effects of species' declines and extinctions. In most studies, different types of networks (such as food webs, parasitoid webs, seed dispersal networks, and pollination networks) have been studied separately. We sampled such multiple networks simultaneously in an agroecosystem. We show that the networks varied in their robustness; networks including pollinators appeared to be particularly fragile. We show that, overall, networks did not strongly covary in their robustness, which suggests that ecological restoration (for example, through agri-environment schemes) benefitting one functional group will not inevitably benefit others. Some individual plant species were disproportionately well linked to many other species. This type of information can be used in restoration management, because it identifies the plant taxa that can potentially lead to disproportionate gains in biodiversity.
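    The robustness measure described in this record, removing species and counting the secondary losses that follow, can be sketched with a toy simulation. The function name, the bipartite plant/animal encoding, and all parameters below are illustrative assumptions, not the authors' code.

```python
import random

def robustness_curve(plants, interactions, trials=200, seed=0):
    """Estimate network robustness: remove plant species in random order
    and track the fraction of animal species that still have at least one
    surviving partner. Returns the mean area under that survival curve.
    `interactions` maps each animal to the set of plants it depends on."""
    rng = random.Random(seed)
    n_animals = len(interactions)
    auc_total = 0.0
    for _ in range(trials):
        order = list(plants)
        rng.shuffle(order)
        removed = set()
        surviving = []
        for plant in order:
            removed.add(plant)
            # an animal survives while at least one of its partners remains
            alive = sum(1 for partners in interactions.values()
                        if not partners <= removed)
            surviving.append(alive / n_animals)
        auc_total += sum(surviving) / len(surviving)
    return auc_total / trials
```

    A network in which animals have redundant partners scores higher than one in which each animal depends on a single plant, matching the record's point that robustness varies with network structure.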

  3. Topological properties of robust biological and computational networks

    PubMed Central

    Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv

    2014-01-01

    Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields. PMID:24789562

  4. Automated robust registration of grossly misregistered whole-slide images with varying stains

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.
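    The normalized mutual information used here as the template-matching similarity metric has a compact definition, NMI(A, B) = (H(A) + H(B)) / H(A, B). A minimal sketch of the metric itself (not the authors' full registration pipeline; the histogram bin count is an illustrative choice):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B): a similarity metric that is
    insensitive to intensity differences between stains, which is why it
    suits multi-modal (multi-stain) image registration."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    p = hist / hist.sum()

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    # marginal entropies over joint entropy
    return (entropy(p.sum(axis=1)) + entropy(p.sum(axis=0))) / entropy(p.ravel())
```

    Identical images score the maximum value of 2, while unrelated images score lower, so maximizing NMI over translation and rotation yields the initial alignment described in the abstract.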

  5. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators in the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, that can be used for change detection. We apply this method to detect the cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The appeal of working with images is that processing is much faster, proven and robust. However there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.
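    The region-growing segmentation step can be sketched as connected-component grouping of occupied cells. The sketch below uses a simple uniform voxel grid as a stand-in for the paper's octree, and omits the descriptor/Random Forest classification stage; the function name and voxel size are illustrative assumptions.

```python
import numpy as np
from collections import deque

def segment_by_region_growing(points, voxel=0.5):
    """Group 3-D points into 26-connected components of occupied voxels,
    a simplified, uniform-grid stand-in for octree-based region growing.
    Each point inherits the label of its voxel's component."""
    pts = np.asarray(points, dtype=float)
    keys = [tuple(k) for k in np.floor(pts / voxel).astype(int)]
    occupied = set(keys)
    labels = {}
    next_label = 0
    for seed in occupied:
        if seed in labels:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                      # breadth-first region growing
            x, y, z = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (x + dx, y + dy, z + dz)
                        if nb in occupied and nb not in labels:
                            labels[nb] = next_label
                            queue.append(nb)
        next_label += 1
    return [labels[k] for k in keys]
```

    Working directly on voxels preserves the vertical dimension, so a car under a tree stays a separate segment instead of being merged in an elevation image.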

  6. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  7. Contextualizing the Genes Altered in Bladder Neoplasms in Pediatric and Teen Patients Allows Identifying Two Main Classes of Biological Processes Involved and New Potential Therapeutic Targets

    PubMed Central

    Porrello, A.; Piergentili, R.

    2016-01-01

    Research on bladder neoplasms in pediatric and teen patients (BNPTP) has described 21 genes, which are variously involved in this disease and are mostly responsible for deregulated cell proliferation. However, due to the limited number of publications on this subject, it is still unclear what type of relationships there are among these genes and which are the chances that, while having different molecular functions, they i) act as downstream effector genes of well-known pro- or anti-proliferative stimuli and/or interplay with biochemical pathways having oncological relevance or ii) are specific and, possibly, early biomarkers of these pathologies. A Gene Ontology (GO)-based analysis showed that these 21 genes are involved in biological processes, which can be split into two main classes: cell regulation-based and differentiation/development-based. In order to understand the involvement/overlapping with main cancer-related pathways, we performed a meta-analysis dependent on the 189 oncogenic signatures of the Molecular Signatures Database (OSMSD) curated by the Broad Institute. We generated a binary matrix with 53 gene signatures having at least one hit; this analysis i) suggests that some genes of the original list show inconsistencies and might need to be experimentally re-assessed or evaluated as biomarkers (in particular, ACTA2) and ii) allows hypothesizing that important (proto)oncogenes (E2F3, ERBB2/HER2, CCND1, WNT1, and YAP1) and (putative) tumor suppressors (BRCA1, RBBP8/CTIP, and RB1-RBL2/p130) may participate in the onset of this disease or worsen the observed phenotype, thus expanding the list of possible molecular targets for the treatment of BNPTP. PMID:27013923

  8. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1991-01-01

    This semi-annual report describes continued progress on the research. Among the several approaches in this area of research, our treatment of parametric uncertainties continues to mature. This approach deals with real parameter uncertainties, which other techniques such as H∞ optimal control, μ-analysis and synthesis, and l1 optimal control cannot handle. The primary assumption of this approach is that the mathematical models are obtained well enough that most system uncertainties can be translated into parameter uncertainties of their linear system representations. These uncertainties may be due to modeling error, nonlinearity of the physical system, time-varying parameters, etc. In this reporting period, we concentrated on implementing a computer-aided analysis and design tool based on new results on parametric robust stability. This implementation will help us reveal further details of the approach.
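    A representative computation from the parametric robust stability literature this report draws on (not necessarily the tool described here) is Kharitonov's theorem: an interval polynomial family is robustly Hurwitz-stable if and only if four specific vertex polynomials are stable. A minimal sketch, assuming coefficients are given in ascending powers of s:

```python
import numpy as np

def kharitonov_stable(lo, hi):
    """Robust Hurwitz stability of the interval polynomial family
    a0 + a1*s + ... + an*s^n with each ai in [lo[i], hi[i]].
    Kharitonov's theorem reduces the infinite family to four vertex
    polynomials; 0 = take lo[i], 1 = take hi[i], with the choice
    pattern repeating every four coefficients."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    bounds = (lo, hi)
    for pat in patterns:
        coeffs = [bounds[pat[i % 4]][i] for i in range(len(lo))]
        roots = np.roots(coeffs[::-1])    # np.roots expects s^n first
        if np.any(roots.real >= 0):       # any root in the closed RHP
            return False
    return True
```

    Checking four polynomials instead of a continuum is what makes parametric robustness tests practical inside a computer-aided design tool.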

  9. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, in that the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for designing robust observers possessing the above structural features is established and discussed.

  10. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. It is important to the successful use of distributed systems, in situations in which humans cannot provide the predictable realtime responsiveness of a computer, that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  11. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of the laser array. This paper therefore presents a proposal for a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes results of an experimental demonstration.

  12. Robust Identification of Noncoding RNA from Transcriptomes Requires Phylogenetically-Informed Sampling

    PubMed Central

    Lai, Alicia Sook-Wei; Eldai, Hisham; Liu, Wenting; McGimpsey, Stephanie; Wheeler, Nicole E.; Biggs, Patrick J.; Thomson, Nick R.; Barquist, Lars; Poole, Anthony M.; Gardner, Paul P.

    2014-01-01

    Noncoding RNAs are integral to a wide range of biological processes, including translation, gene regulation, host-pathogen interactions and environmental sensing. While genomics is now a mature field, our capacity to identify noncoding RNA elements in bacterial and archaeal genomes is hampered by the difficulty of de novo identification. The emergence of new technologies for characterizing transcriptome outputs, notably RNA-seq, are improving noncoding RNA identification and expression quantification. However, a major challenge is to robustly distinguish functional outputs from transcriptional noise. To establish whether annotation of existing transcriptome data has effectively captured all functional outputs, we analysed over 400 publicly available RNA-seq datasets spanning 37 different Archaea and Bacteria. Using comparative tools, we identify close to a thousand highly-expressed candidate noncoding RNAs. However, our analyses reveal that capacity to identify noncoding RNA outputs is strongly dependent on phylogenetic sampling. Surprisingly, and in stark contrast to protein-coding genes, the phylogenetic window for effective use of comparative methods is perversely narrow: aggregating public datasets only produced one phylogenetic cluster where these tools could be used to robustly separate unannotated noncoding RNAs from a null hypothesis of transcriptional noise. Our results show that for the full potential of transcriptomics data to be realized, a change in experimental design is paramount: effective transcriptomics requires phylogeny-aware sampling. PMID:25357249

  13. Problems Identifying Independent and Dependent Variables

    ERIC Educational Resources Information Center

    Leatham, Keith R.

    2012-01-01

    This paper discusses one step from the scientific method--that of identifying independent and dependent variables--from both scientific and mathematical perspectives. It begins by analyzing an episode from a middle school mathematics classroom that illustrates the need for students and teachers alike to develop a robust understanding of…

  14. Robust micromachining of compliant mechanisms using silicides

    NASA Astrophysics Data System (ADS)

    Khosraviani, Kourosh; Leung, Albert M.

    2013-01-01

    We introduce an innovative sacrificial surface micromachining process that enhances the mechanical robustness of freestanding microstructures and compliant mechanisms. This process facilitates the fabrication and improves the assembly yield of out-of-plane micro sensors and actuators. Fabrication of a compliant mechanism using conventional sacrificial surface micromachining results in a non-planar structure with a step between the structure and its anchor. During mechanism actuation or assembly, stress accumulation at the structure step can easily exceed the yield strength of the material and lead to structural failure. Our process overcomes this topographic issue by virtually eliminating the step between the structure and its anchor, and achieves planarization without using chemical mechanical polishing. The process is based on low temperature and post-CMOS compatible nickel silicide technology. We use a layer of amorphous silicon (a-Si) as a sacrificial layer, which is locally converted to nickel silicide to form the anchors. High etch selectivity between silicon and nickel silicide in xenon difluoride gas (the sacrificial layer etchant) enables us to use the silicide to anchor the structures to the substrate. The formed silicide has the same thickness as the sacrificial layer; therefore, the structure is virtually flat. The maximum measured step between the anchor and the sacrificial layer is about 10 nm on a 300 nm thick sacrificial layer.

  15. Robust statistical fusion of image labels.

    PubMed

    Landman, Bennett A; Asman, Andrew J; Scoggins, Andrew G; Bogovic, John A; Xing, Fangxu; Prince, Jerry L

    2012-02-01

    Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone as images are corrupted by noise and artifacts. Even expert interpretations are subject to subjectivity and the precision of the individual raters. Hence, all labels must be considered imperfect with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. As well, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability. PMID:22010145
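    The core idea, fusing labels from many partially overlapping raters while estimating each rater's reliability, can be sketched with a simple agreement-weighted iteration. This is an illustrative, binary STAPLE-style sketch, not the authors' estimator; the function name and the NaN convention for missing labels are assumptions.

```python
import numpy as np

def fuse_labels(votes, iters=10):
    """Fuse binary labels from several raters while estimating per-rater
    reliability. `votes` is a (raters, voxels) array of {0, 1}, with NaN
    marking voxels a rater did not label (allowing partial datasets)."""
    votes = np.asarray(votes, dtype=float)
    consensus = np.nanmean(votes, axis=0)            # initial estimate
    for _ in range(iters):
        # reliability = mean agreement with the current consensus
        reliability = np.nanmean(1.0 - np.abs(votes - consensus), axis=1)
        # reliability-weighted vote, skipping missing entries
        w = np.where(np.isfinite(votes), reliability[:, None], 0.0)
        v = np.where(np.isfinite(votes), votes, 0.0)
        consensus = (w * v).sum(axis=0) / w.sum(axis=0)
    return (consensus > 0.5).astype(int), reliability
```

    An unreliable rater's agreement with the consensus drops, so their votes are progressively down-weighted, which is the mechanism that lets many raters label small, overlapping portions of a large dataset.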

  16. Identifying depositional and post-depositional processes using high-resolution elemental distribution in sedimentary cores from the Eastern Mediterranean and Black Sea

    NASA Astrophysics Data System (ADS)

    Gennari, G.; Tamburini, F.; Ariztegui, D.; Hajdas, I.; Wacker, L.; Mart, Y.; Spezzaferri, S.

    2009-04-01

    The Mediterranean Sea is an extremely complex system, subdivided into several basins interconnected by often very shallow straits and sills. As a result, its sediments can amplify the geochemical signal of both climate and sea level changes. Thus, together with its eastern marginal basins - the Marmara and Black Seas - the Mediterranean Sea provides us with a natural laboratory for paleoenvironmental studies. Climatically-driven changes in paleoenvironmental conditions are often reflected in the relative abundance of major and minor elements (e.g., Wehausen and Brumsack, 1998). Hence, their variation in marine sedimentary sequences may provide high-resolution records of past environments. Here we present two examples of ultra-high resolution geochemical studies on sedimentary cores from the upper Pleistocene-Holocene of the Eastern Mediterranean (core SIN97-01GC) and Black Sea (core MedEx05-10), and their application in paleoceanographic reconstructions. Ultra high-resolution qualitative analyses of major and minor elements (Mn, Fe, Ca, Mg, Al, Sr, Cl, Ti) were performed on macroscopic contiguous samples (average spacing between analytical points was 0.35 mm) by X-ray microfluorescence (μ-XRF), using an EDAX Eagle III XPL μ-probe with an analytical spot size of 50 μm. The geochemical characterization of core SIN97-01GC (Cretan Ridge, Eastern Mediterranean) provides evidence of the diagenetic alteration of sapropel S1. Spectral analysis on this very high-resolution proxy record further allowed us to identify high-frequency millennial to decennial-scale solar cycles. The latter suggests that climate in the Mediterranean region during sapropel S1 deposition was paced by solar variability even at short periodicities (Gennari et al., 2008). The elemental distribution on core MedEx05-10 located in the south-western Black Sea shelf allows us to separate two main intervals. According to the Ca and Ti/Ca contents, that reflect variations in biogenic/authigenic calcite versus

  17. The GODDESS ionization chamber: developing robust windows

    NASA Astrophysics Data System (ADS)

    Blanchard, Rose; Baugher, Travis; Cizewski, Jolie; Pain, Steven; Ratkiewicz, Andrew; Goddess Collaboration

    2015-10-01

    Reaction studies of nuclei far from stability require high-efficiency arrays of detectors and the ability to identify beam-like particles, especially when the beam is a cocktail beam. The Gammasphere ORRUBA Dual Detectors for Experimental Structure Studies (GODDESS) is made up of the Oak Ridge-Rutgers University Barrel Array (ORRUBA) of silicon detectors for charged particles inside of the gamma-ray detector array Gammasphere. A high-rate ionization chamber is being developed to identify beam-like particles. Consisting of twenty-one alternating anode and cathode grids, the ionization chamber sits downstream of the target chamber and is used to measure the energy loss of recoiling ions. A critical component of the system is a thin and robust mylar window which serves to separate the gas-filled ionization chamber from the vacuum of the target chamber with minimal energy loss. After construction, windows were tested to assure that they would not break below the required pressure, causing harm to the wire grids. This presentation will summarize the status of the ionization chamber and the results of the first tests with beams. This work is supported in part by the U.S. Department of Energy and National Science Foundation.

  18. Robust fluidic connections to freestanding microfluidic hydrogels

    PubMed Central

    Baer, Bradly B.; Larsen, Taylor S. H.

    2015-01-01

    Biomimetic scaffolds approaching physiological scale, whose size and large cellular load far exceed the limits of diffusion, require incorporation of a fluidic means to achieve adequate nutrient/metabolite exchange. This need has driven the extension of microfluidic technologies into the area of biomaterials. While construction of perfusable scaffolds is essentially a problem of microfluidic device fabrication, functional implementation of free-standing, thick-tissue constructs depends upon successful integration of external pumping mechanisms through optimized connective assemblies. However, a critical analysis to identify optimal materials/assembly components for hydrogel substrates has received little focus to date. This investigation addresses this issue directly by evaluating the efficacy of a range of adhesive and mechanical fluidic connection methods to gelatin hydrogel constructs based upon both mechanical property analysis and cell compatibility. Results identify a novel bioadhesive, comprised of two enzymatically modified gelatin compounds, for connecting tubing to hydrogel constructs that is both structurally robust and non-cytotoxic. Furthermore, outcomes from this study provide clear evidence that fluidic interconnect success varies with substrate composition (specifically hydrogel versus polydimethylsiloxane), highlighting not only the importance of selecting the appropriately tailored components for fluidic hydrogel systems but also that of encouraging ongoing, targeted exploration of this issue. The optimization of such interconnect systems will ultimately promote exciting scientific and therapeutic developments provided by microfluidic, cell-laden scaffolds. PMID:26045731

  19. Sensitive Periods for Developing a Robust Trait of Appetitive Aggression

    PubMed Central

    Köbach, Anke; Elbert, Thomas

    2015-01-01

    Violent behavior can be intrinsically rewarding; combatants fighting in current civil wars especially present with elevated traits of appetitive aggression. The majority of these fighters were recruited as children or adolescents. In the present study, we test whether there is a developmental period during which combatants are sensitive to developing a robust trait of appetitive aggression. We investigated 95 combatants in their demobilization process who were recruited at different ages in the Kivu regions of the eastern Democratic Republic of Congo. Using random forest with conditional inference trees, we identified recruitment at the ages of 16 and 17 years as being predictive of the level of appetitive aggression; the number of lifetime perpetrated acts was the most important predictor. We conclude that high levels of appetitive aggression develop in ex-combatants, especially in those recruited during their middle to late teenage years, a developmental period marked by a natural inclination to exercise physical force. Consequently, ex-combatants may remain vulnerable to aggressive behavior patterns and re-recruitment unless they are provided alternative strategies for dealing with their aggression. PMID:26528191
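    The tree-based analysis used here repeatedly answers one elementary question: which threshold on a predictor (such as recruitment age) best separates the response? A minimal, dependency-free sketch of that split search (illustrative only; the study's conditional inference trees add permutation-based significance testing on top of this step, and the data below are made up):

```python
def best_split(x, y):
    """Return the threshold on a single predictor x that minimises the
    total within-group sum of squared errors of the response y: the
    elementary split search behind regression trees."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]

    def sse(values):
        if not values:
            return 0.0
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values)

    best_cost, best_threshold = float("inf"), None
    for k in range(1, len(xs)):
        if xs[k] == xs[k - 1]:
            continue  # no usable threshold between equal predictor values
        cost = sse(ys[:k]) + sse(ys[k:])
        if cost < best_cost:
            best_cost = cost
            best_threshold = (xs[k - 1] + xs[k]) / 2
    return best_threshold
```

    On data where the response jumps between recruitment ages, the search places the threshold between those ages, which is how an age band like 16-17 emerges as predictive.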

  20. Robust photon entanglement via quantum interference in optomechanical interfaces.

    PubMed

    Tian, Lin

    2013-06-01

    Entanglement is a key element in quantum information processing. Here, we present schemes to generate robust photon entanglement via optomechanical quantum interfaces in the strong coupling regime. The schemes explore the excitation of the Bogoliubov dark mode and the destructive quantum interference between the bright modes of the interface, similar to electromagnetically induced transparency, to eliminate leading-order effects of the mechanical noise. Both continuous-variable and discrete-state entanglements that are robust against the mechanical noise can be achieved. The schemes can be used to generate entanglement in hybrid quantum systems between, e.g., microwave photon and optical photon.

  1. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  2. Robust reflective pupil slicing technology

    NASA Astrophysics Data System (ADS)

    Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.

    2014-07-01

    Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications by performing the same spectral narrowing as a slit without throwing away light on the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement; HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly

  3. Robustness of Auditory Teager Energy Cepstrum Coefficients for Classification of Pathological and Normal Voices in Noisy Environments

    PubMed Central

    Salhi, Lotfi; Cherif, Adnane

    2013-01-01

    This paper focuses on a robust feature extraction algorithm for automatic classification of pathological and normal voices in noisy environments. The proposed algorithm is based on human auditory processing and the nonlinear Teager-Kaiser energy operator. The robust features, labeled Teager Energy Cepstrum Coefficients (TECCs), are computed in three steps. Firstly, each speech signal frame is passed through a Gammatone or Mel-scale triangular filter bank. Then, the absolute value of the Teager energy operator of the short-time spectrum is calculated. Finally, the discrete cosine transform of the log-filtered Teager energy spectrum is applied. This feature is used to identify pathological voices with a multilayer perceptron (MLP) neural system. We evaluate the developed method using a mixed voice database composed of recorded voice samples from normophonic and dysphonic speakers. To show the robustness of the proposed feature for detecting pathological voices at different white Gaussian noise levels, we compare its performance with results for clean environments. The experimental results show that TECCs computed from a Gammatone filter bank are more robust in noisy environments than the other extracted features, while their performance remains practically the same as in clean environments. PMID:23818821
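
    A minimal numeric sketch of the three-step TECC pipeline described above (frame length, FFT size, filter count and coefficient count are illustrative assumptions, not the paper's settings, and a Mel-scale triangular bank stands in for the Gammatone option):

```python
import numpy as np

def teager(x):
    # Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # triangular filters spaced evenly on the mel scale
    pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):
            fb[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fb[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def dct_ii(x):
    # plain (unnormalized) DCT-II, enough for cepstral coefficients
    n = np.arange(x.size)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * x.size)))
                     for k in range(x.size)])

def tecc(frame, sr=16000, n_fft=512, n_filters=20, n_ceps=12):
    spec = np.abs(np.fft.rfft(frame, n_fft))    # short-time spectrum of the frame
    psi = np.abs(teager(spec))                  # |Teager energy| of the spectrum
    # filter-bank energies (teager() trims one bin at each end)
    energies = mel_filterbank(n_filters, n_fft, sr)[:, 1:-1] @ psi
    return dct_ii(np.log(energies + 1e-10))[:n_ceps]  # DCT of the log energies
```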

  4. Asymmetric disassembly and robustness in declining networks

    PubMed Central

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2008-01-01

    Mechanisms that enable declining networks to avert structural collapse and performance degradation are not well understood. This knowledge gap reflects a shortage of data on declining networks and an emphasis on models of network growth. Analyzing >700,000 transactions between firms in the New York garment industry over 19 years, we tracked this network's decline and measured how its topology and global performance evolved. We find that favoring asymmetric (disassortative) links is key to preserving the topology and functionality of the declining network. Based on our findings, we tested a model of network decline that combines an asymmetric disassembly process for contraction with a preferential attachment process for regrowth. Our simulation results indicate that the model can explain robustness under decline even if the total population of nodes contracts by more than an order of magnitude, in line with our observations for the empirical network. These findings suggest that disassembly mechanisms are not simply assembly mechanisms in reverse and that our model is relevant to understanding the process of decline and collapse in a broad range of biological, technological, and financial networks. PMID:18936489
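
    The contraction/regrowth mechanism can be sketched as a toy simulation (the degree-similarity rule used to prune assortative links first, and all parameter values, are illustrative assumptions, not the authors' calibrated model):

```python
import random

def pa_network(n, m=2, seed=0):
    # grow a preferential-attachment network, stored as adjacency sets
    random.seed(seed)
    adj = {0: {1}, 1: {0}}
    targets = [0, 1]                      # nodes repeated roughly by degree
    for new in range(2, n):
        adj[new] = set()
        while len(adj[new]) < min(m, new):
            t = random.choice(targets)
            if t != new and t not in adj[new]:
                adj[new].add(t)
                adj[t].add(new)
        targets.extend(list(adj[new]) + [new] * m)
    return adj

def most_assortative_edge(adj):
    # edge whose endpoints have the most similar degrees
    return min(((u, v) for u in adj for v in adj[u] if u < v),
               key=lambda e: abs(len(adj[e[0]]) - len(adj[e[1]])))

def decline(adj, n_remove):
    # asymmetric disassembly: delete assortative links first, preserving
    # disassortative hub-spoke links; isolated nodes drop out
    for _ in range(n_remove):
        u, v = most_assortative_edge(adj)
        adj[u].discard(v)
        adj[v].discard(u)
        for node in (u, v):
            if not adj[node]:
                del adj[node]
    return adj
```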

  6. S-system-based analysis of the robust properties common to many biochemical network models.

    PubMed

    Matsuoka, Yu; Jahan, Nusrat; Kurata, Hiroyuki

    2016-05-01

    Robustness is a key feature characterizing how organisms adapt to changes in their internal and external environments. A broad range of kinetic or dynamic models of biochemical systems have been developed. Robustness analyses are attractive for exploring properties common to many biochemical models. To reveal such features, we transform different types of mathematical equations into a standard or intelligible formula and use the multiple parameter sensitivity (MPS) to identify the factors critically responsible for the total robustness to many perturbations. The MPS would be determined by the top quarter of the highly sensitive parameters rather than the single parameter with the maximum sensitivity. The MPS did not show any correlation with network size. The MPS is closely related to the standard deviation of the sensitivity profile. A decrease in the standard deviation enhanced the total robustness, the hallmark of distributed robustness, in which many factors (pathways) contribute to the total robustness.
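
    One plausible reading of the MPS aggregation described above, as the root-mean-square of the top quarter of parameter sensitivities (the paper's exact formula may differ):

```python
import numpy as np

def mps(sensitivities, top_fraction=0.25):
    # sort |sensitivities| in decreasing order and aggregate the top quarter
    s = np.sort(np.abs(np.asarray(sensitivities, dtype=float)))[::-1]
    k = max(1, int(np.ceil(top_fraction * s.size)))
    return float(np.sqrt(np.mean(s[:k] ** 2)))
```

    Under this reading, a sensitivity profile spread evenly over many parameters (low standard deviation) scores lower than one concentrated in a single parameter, matching the abstract's link between a smaller standard deviation and higher total robustness.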

  7. Robust template matching for affine resistant image watermarks.

    PubMed

    Pereira, S; Pun, T

    2000-01-01

    Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. This paper describes a method for the secure and robust copyright protection of digital images. We present an approach for embedding a digital watermark into an image using the Fourier transform. To this watermark is added a template in the Fourier transform domain to render the method robust against general linear transformations. We detail a new algorithm based on polar maps for the accurate and efficient recovery of the template in an image which has undergone a general affine transformation. We also present results which demonstrate the robustness of the method against some common image processing operations such as compression, rotation, scaling, and aspect ratio changes. PMID:18255481
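
    The Fourier-domain embedding can be sketched as additive spread-spectrum marking of mid-band FFT magnitudes (the template insertion and log-polar template recovery that make the paper's method affine-resistant are omitted; the band, strength and image size below are assumptions):

```python
import numpy as np

def _ring(shape, band):
    # symmetric mask of mid frequencies (distance from DC with wraparound)
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(np.minimum(yy, h - yy), np.minimum(xx, w - xx))
    return (r >= band[0]) & (r < band[1])

def _sym_field(shape, key):
    # keyed random field with W[k] == W[-k], so a real image stays real
    W = np.random.default_rng(key).standard_normal(shape)
    flip = W[(-np.arange(shape[0])) % shape[0]][:, (-np.arange(shape[1])) % shape[1]]
    return (W + flip) / 2.0

def embed(img, key=0, strength=40.0, band=(10, 20)):
    # add the keyed pattern to mid-band magnitudes, keep phases unchanged
    F = np.fft.fft2(img.astype(float))
    mag = np.abs(F) + strength * _sym_field(img.shape, key) * _ring(img.shape, band)
    return np.real(np.fft.ifft2(np.maximum(mag, 0.0) * np.exp(1j * np.angle(F))))

def detect(img, key=0, band=(10, 20)):
    # correlation between mid-band |FFT| and the keyed watermark pattern
    F = np.fft.fft2(img.astype(float))
    mask = _ring(img.shape, band)
    return float(np.corrcoef(np.abs(F)[mask], _sym_field(img.shape, key)[mask])[0, 1])
```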

  8. Robust image modeling techniques with an image restoration application

    NASA Astrophysics Data System (ADS)

    Kashyap, Rangasami L.; Eom, Kie-Bum

    1988-08-01

    A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
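
    The role of robust estimation under mixed Gaussian/outlier noise can be illustrated on a 1-D AR model fitted by Huber-weighted iteratively reweighted least squares (a simplified stand-in for the paper's 2-D NSHP estimator; the tuning constant 1.345 and the AR order are conventional assumptions):

```python
import numpy as np

def robust_ar_fit(x, p=2, c=1.345, iters=20):
    # design matrix of lagged samples: x[t] regressed on x[t-1..t-p]
    X = np.column_stack([x[p - k - 1:x.size - k - 1] for k in range(p)])
    y = x[p:]
    w = np.ones(y.size)
    for _ in range(iters):
        Xw = X * w[:, None]
        a = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted least squares
        r = y - X @ a
        s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)           # Huber weights down-weight outliers
    return a
```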

  9. Probabilistic collocation for simulation-based robust concept exploration

    NASA Astrophysics Data System (ADS)

    Rippel, Markus; Choi, Seung-Kyum; Allen, Janet K.; Mistree, Farrokh

    2012-08-01

    In the early stages of an engineering design process it is necessary to explore the design space to find a feasible range that satisfies design requirements. When robustness of the system is among the requirements, the robust concept exploration method can be used. In this method, a global metamodel, such as a global response surface of the design space, is used to evaluate robustness. However, for large design spaces, this is computationally expensive and may be relatively inaccurate for some local regions. In this article, a method is developed for successively generating local response models at points of interest as the design space is explored. This approach is based on the probabilistic collocation method. Although the focus of this article is on the method, it is demonstrated using an artificial performance function and a linear cellular alloy heat exchanger. For these problems, this approach substantially reduces computation time while maintaining accuracy.

  10. Origin of robustness in generating drug-resistant malaria parasites.

    PubMed

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W; Kamchonwongpaisan, Sumalee; Fidock, David A; Kirkman, Laura A; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-07-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs.

  11. Origin of Robustness in Generating Drug-Resistant Malaria Parasites

    PubMed Central

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H.; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W.; Kamchonwongpaisan, Sumalee; Fidock, David A.; Kirkman, Laura A.; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-01-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs. PMID:24739308

  12. A network property necessary for concentration robustness

    PubMed Central

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-01-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications. PMID:27759015

  13. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when training CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can train the RCRF model very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the O(1/k) rate of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
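
    The ingredients described above (l1-regularized objective, gradient direction combining current and historical gradients, Lipschitz-based step size, optimal first-order rate) can be illustrated with an accelerated proximal gradient (FISTA-style) solver on a stand-in l1-regularized logistic objective rather than an actual CRF:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_l1_logistic(X, y, lam=0.05, iters=300):
    # step size from the Lipschitz constant of the mean logistic-loss gradient
    n, d = X.shape
    step = 4.0 * n / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    z = w.copy()
    t = 1.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ z))            # predicted probabilities
        grad = X.T @ (p - y) / n
        w_new = soft_threshold(z - step * grad, step * lam)   # prox of the l1 term
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)         # momentum (history)
        w, t = w_new, t_new
    return w
```

    The l1 proximal step zeroes out irrelevant features, while the momentum term gives the O(1/k²) rate the abstract contrasts with plain O(1/k) gradient descent.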

  14. Robust satisficing and the probability of survival

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2014-01-01

    Concepts of robustness are sometimes employed when decisions under uncertainty are made without probabilistic information. We present a theorem that establishes necessary and sufficient conditions for non-probabilistic robustness to be equivalent to the probability of satisfying the specified outcome requirements. When this holds, probability is enhanced (or maximised) by enhancing (or maximising) robustness. Two further theorems establish important special cases. These theorems have implications for success or survival under uncertainty. Applications to foraging and finance are discussed.
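
    A numeric sketch of the non-probabilistic robustness notion used here (info-gap style): the largest horizon of uncertainty h such that the outcome requirement holds for every realization within h of the nominal estimate. The interval uncertainty model and scalar reward below are illustrative assumptions:

```python
import numpy as np

def robustness(reward, u_nominal, r_crit, h_max=10.0, n_h=2001, n_u=201):
    # largest h such that the reward requirement holds for EVERY u
    # within h of the nominal estimate (worst case over the interval)
    best = 0.0
    for h in np.linspace(0.0, h_max, n_h):
        u = np.linspace(u_nominal - h, u_nominal + h, n_u)
        if reward(u).min() >= r_crit:
            best = float(h)
        else:
            break
    return best
```

    A decision with larger robustness tolerates more estimation error before failing its requirement, which is the quantity the theorems relate to the probability of survival.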

  15. Robustness enhancement of neurocontroller and state estimator

    NASA Technical Reports Server (NTRS)

    Troudet, Terry

    1993-01-01

    The feasibility of enhancing neurocontrol robustness, through training of the neurocontroller and state estimator in the presence of system uncertainties, is investigated on the example of a multivariable aircraft control problem. The performance and robustness of the newly trained neurocontroller are compared to those for an existing neurocontrol design scheme. The newly designed dynamic neurocontroller exhibits a better trade-off between phase and gain stability margins, and it is significantly more robust to degradations of the plant dynamics.

  16. Robust quantum receivers for coherent state discrimination

    NASA Astrophysics Data System (ADS)

    Becerra, Francisco Elohim

    2014-05-01

    Quantum state discrimination is a central task for quantum information and a fundamental problem in quantum mechanics. Nonorthogonal states, such as coherent states, which have intrinsic quantum noise, cannot be discriminated with total certainty because of their intrinsic overlap. This nonorthogonality is at the heart of quantum key distribution for ensuring absolutely secure communication between a transmitter and a receiver, and can enable many quantum information protocols based on coherent states. At the same time, while coherent states are used for communications because of their robustness to loss and the simplicity of their generation and detection, their nonorthogonality inherently produces errors in the process of decoding the information. The minimum error probability in the discrimination of nonorthogonal coherent states measured by an ideal lossless and noiseless conventional receiver is given by the standard quantum limit (SQL). This limit sets strict bounds on the ultimate performance of coherent communications and many coherent-state-based quantum information protocols. However, measurement strategies based on the quantum properties of these states can allow for better measurements that surpass the SQL and approach the ultimate measurement limits allowed by quantum mechanics. These strategies can optimally extract the information encoded in these states for coherent and quantum communications. We present the demonstration of a receiver based on adaptive measurements and single-photon counting that unconditionally discriminates multiple nonorthogonal coherent states below the SQL. We also discuss the potential of photon-number-resolving detection to provide robustness and high sensitivity under realistic conditions for an adaptive coherent receiver with detectors with finite photon-number resolution.

  17. Improved robust point matching with label consistency

    NASA Astrophysics Data System (ADS)

    Bhagalia, Roshni; Miller, James V.; Roy, Arunabha

    2010-03-01

    Robust point matching (RPM) jointly estimates correspondences and non-rigid warps between unstructured point-clouds. RPM does not, however, utilize information of the topological structure or group memberships of the data it is matching. In numerous medical imaging applications, each extracted point can be assigned group membership attributes or labels based on segmentation, partitioning, or clustering operations. For example, points on the cortical surface of the brain can be grouped according to the four lobes. Estimated warps should enforce the topological structure of such point-sets, e.g. points belonging to the temporal lobe in the two point-sets should be mapped onto each other. We extend the RPM objective function to incorporate group membership labels by including a Label Entropy (LE) term. LE discourages mappings that transform points within a single group in one point-set onto points from multiple distinct groups in the other point-set. The resulting Labeled Point Matching (LPM) algorithm requires a very simple modification to the standard RPM update rules. We demonstrate the performance of LPM on coronary trees extracted from cardiac CT images. We partitioned the point sets into coronary sections without a priori anatomical context, yielding potentially disparate labelings (e.g. [1,2,3] --> [a,b,c,d]). LPM simultaneously estimated label correspondences, point correspondences, and a non-linear warp. Non-matching branches were treated wholly through the standard RPM outlier process akin to non-matching points. Results show LPM produces warps that are more physically meaningful than RPM alone. In particular, LPM mitigates unrealistic branch crossings and results in more robust non-rigid warp estimates.
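
    The Label Entropy term can be sketched as follows for a soft correspondence matrix: for each source point, collect the correspondence mass it sends to each destination label and penalize spread over several labels (an illustrative form; the paper's exact expression may differ):

```python
import numpy as np

def label_entropy(M, dst_labels):
    # M[i, j]: soft correspondence mass from source point i to destination j
    labs = np.unique(dst_labels)
    # mass each source point sends to each destination label
    mass = np.stack([M[:, dst_labels == l].sum(axis=1) for l in labs], axis=1)
    p = mass / np.maximum(mass.sum(axis=1, keepdims=True), 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.where(p > 0, p * np.log(p), 0.0).sum(axis=1)
    return float(ent.mean())
```

    A mapping that keeps each source point within a single destination label costs zero; a point whose mass is split evenly across two labels costs log 2, which is what discourages, for example, temporal-lobe points mapping onto several lobes.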

  18. A robust multilevel simultaneous eigenvalue solver

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems often face several types of difficulties: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solutions on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat these difficulties appropriately usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm that treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and robustness tests which monitor the algorithm's efficiency and convergence. The eigenvector separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a backrotation technique. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine-level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second-order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine-level relaxations per eigenvector.
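
    The separation step that the projection technique generalizes can be sketched with a plain Rayleigh-Ritz projection: project A onto a subspace, solve the small dense eigenproblem, and rotate the basis so its columns approximate individual eigenvectors instead of mixtures:

```python
import numpy as np

def rayleigh_ritz(A, V):
    # project A onto span(V), solve the small q x q problem, rotate back;
    # the returned columns approximate individual eigenvectors, not mixtures
    Q, _ = np.linalg.qr(V)                   # orthonormal basis of the subspace
    theta, S = np.linalg.eigh(Q.T @ A @ Q)   # small dense eigenproblem
    return theta, Q @ S                      # Ritz values and Ritz vectors
```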

  19. A robust and cost-effective integrated process for nitrogen and bio-refractory organics removal from landfill leachate via short-cut nitrification, anaerobic ammonium oxidation in tandem with electrochemical oxidation.

    PubMed

    Wu, Li-Na; Liang, Da-Wei; Xu, Ying-Ying; Liu, Ting; Peng, Yong-Zhen; Zhang, Jie

    2016-07-01

    A cost-effective process, consisting of a denitrifying upflow anaerobic sludge blanket (UASB), an oxygen-limited anoxic/aerobic (A/O) process for short-cut nitrification, and an anaerobic reactor (ANR) for anaerobic ammonia oxidation (anammox), followed by an electrochemical oxidation process with a Ti-based SnO₂-Sb₂O₅ anode, was developed to remove organics and nitrogen from a sewage-diluted leachate. Final chemical oxygen demand (COD), ammonia nitrogen (NH₄⁺-N) and total nitrogen (TN) concentrations of 70, 11.3 and 39 mg/L, respectively, were obtained. TN removal in the UASB, A/O and ANR was 24.6%, 49.6% and 16.1%, respectively. According to the water quality and molecular biology analyses, a high degree of anammox, besides short-cut nitrification and denitrification, occurred in the A/O. Counting the 16.1% of TN removed in the ANR, at least 43.2-49% of TN was removed via anammox. The anammox bacteria in the A/O and ANR were present at titers of (2.5-5.9)×10⁹ and 2.01×10¹⁰ copy numbers/(g SS), respectively.

  1. Robust Fixed-Structure Controller Synthesis

    NASA Technical Reports Server (NTRS)

    Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)

    2000-01-01

    The ability to develop an integrated control system design methodology for robust, high-performance controllers satisfying multiple design criteria and real-world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal-complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal-complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.

  2. Robustness: confronting lessons from physics and biology.

    PubMed

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems. PMID:18823391

  3. Robust Hypothesis Testing with α-Divergence

    NASA Astrophysics Data System (ADS)

    Gul, Gokhan; Zoubir, Abdelhak M.

    2016-09-01

    A robust minimax test for two composite hypotheses, which are determined by the neighborhoods of two nominal distributions with respect to a set of distances called α-divergences, is proposed. Sion's minimax theorem is adopted to characterize the saddle value condition. The least favorable distributions, the robust decision rule and the robust likelihood ratio test are derived. If the nominal probability distributions satisfy a symmetry condition, the design procedure is shown to simplify considerably. The parameters controlling the degree of robustness are bounded from above, and the bounds are shown to result from the solution of a set of equations. The simulations performed evaluate and exemplify the theoretical derivations.
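
    The flavor of a robust likelihood-ratio test can be sketched with Huber-style clipping, under which no single (possibly outlying) observation can dominate the decision (the fixed clipping levels here are hand-picked assumptions; the paper derives them, and the least favorable distributions, from the α-divergence uncertainty model):

```python
import numpy as np

def clipped_lr_test(x, p0, p1, c_lower, c_upper, threshold=1.0):
    # clip each sample's likelihood ratio to [c_lower, c_upper],
    # then decide H1 if the clipped product exceeds the threshold
    lr = np.clip(p1(x) / p0(x), c_lower, c_upper)
    return int(np.prod(lr) > threshold)
```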

  4. Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels

    NASA Astrophysics Data System (ADS)

    Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.

    2015-12-01

    Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
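
    The metamodel idea can be sketched end-to-end: fit a cheap surrogate to a handful of expensive simulator runs, then estimate robustness as the fraction of sampled scenarios meeting the requirement (a least-squares polynomial stands in for the article's artificial neural networks, and the "simulator" is an assumed toy function):

```python
import numpy as np

def true_model(x):
    # stand-in "expensive" simulator (assumed toy): supply margin vs. demand factor
    return 1.5 - 0.8 * x - 0.1 * x ** 2

def fit_metamodel(xs, ys, degree=2):
    # cheap surrogate fitted to a handful of simulator runs
    return np.polynomial.Polynomial.fit(xs, ys, degree)

def robustness(metamodel, scenarios, threshold=0.0):
    # fraction of plausible scenarios with adequate performance,
    # evaluated on the surrogate so thousands of scenarios are cheap
    return float(np.mean(metamodel(scenarios) >= threshold))
```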

  5. Persistent Identifiers, Discoverability and Open Science (Communication)

    NASA Astrophysics Data System (ADS)

    Murphy, Fiona; Lehnert, Kerstin; Hanson, Brooks

    2016-04-01

    Early in 2016, the American Geophysical Union announced it was incorporating ORCIDs into its submission workflows. This was accompanied by a strong statement supporting the use of other persistent identifiers - such as IGSNs, and the CrossRef open registry 'funding data'. This was partly in response to funders' desire to track and manage their outputs. However the more compelling argument, and the reason why the AGU has also signed up to the Center for Open Science's Transparency and Openness Promotion (TOP) Guidelines (http://cos.io/top), is that ultimately science and scientists will be the richer for these initiatives due to increased opportunities for interoperability, reproducibility and accreditation. The AGU has appealed to the wider community to engage with these initiatives, recognising that - unlike the introduction of Digital Object Identifiers (DOIs) for articles by CrossRef - full, enriched use of persistent identifiers throughout the scientific process requires buy-in from a range of scholarly communications stakeholders. At the same time, across the general research landscape, initiatives such as Project CRediT (contributor roles taxonomy), Publons (reviewer acknowledgements) and the forthcoming CrossRef DOI Event Tracker are contributing to our understanding and accreditation of contributions and impact. More specifically for earth science and scientists, the cross-functional Coalition for Publishing Data in the Earth and Space Sciences (COPDESS) was formed in October 2014 and is working to 'provide an organizational framework for Earth and space science publishers and data facilities to jointly implement and promote common policies and procedures for the publication and citation of data across Earth Science journals'. Clearly, the judicious integration of standards, registries and persistent identifiers such as ORCIDs and International Geo Sample Numbers (IGSNs) to the research and research output processes is key to the success of this venture.

  6. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

    Two important problems in the area of control systems design and analysis are discussed. The first is robust stability analysis using the characteristic polynomial, treated first in coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. The method is then extended to reveal a much larger stability region. This result has been extended to parameter space so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by choosing a better stabilizing controller. The second problem is the lower-order stabilization problem, motivated as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state-space and transfer function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller on completion of the design, for example after dynamic response requirements are included. It is therefore reasonable to obtain the lowest possible order stabilizing controller first and then adjust it to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically logical, and simulation results show that it works in general.
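
    The coefficient-space idea in this abstract can be illustrated with a small sketch (the function names and the Monte-Carlo scheme are ours, not the report's algorithm): a Routh-Hurwitz test decides stability from the characteristic polynomial's coefficients, and the l2 stability margin is estimated as the smallest Euclidean coefficient perturbation, over sampled directions, that destabilizes the polynomial.

```python
import random

def routh_stable(coeffs):
    """Hurwitz stability test via the Routh array. `coeffs` lists the
    characteristic polynomial's coefficients in descending powers of s,
    leading coefficient assumed positive. Returns True when every
    first-column entry is positive (generic case; zero pivots are
    simply reported as not robustly stable)."""
    n = len(coeffs) - 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    if len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)
    for _ in range(n - 1):
        r1, r2 = rows[-2], rows[-1]
        pivot = r2[0]
        if pivot == 0:
            return False
        rows.append([(pivot * r1[k + 1] - r1[0] * r2[k + 1]) / pivot
                     for k in range(len(r1) - 1)] + [0.0])
    return all(row[0] > 0 for row in rows[:n + 1])

def l2_margin_estimate(coeffs, n_dirs=200, max_radius=10.0, seed=0):
    """Randomized estimate of the l2 stability margin: the smallest
    Euclidean perturbation of the coefficient vector that destabilizes
    the polynomial, minimized over `n_dirs` sampled directions (so the
    estimate can only upper-bound the true margin)."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(n_dirs):
        d = [rng.gauss(0.0, 1.0) for _ in coeffs]
        norm = sum(x * x for x in d) ** 0.5
        d = [x / norm for x in d]
        perturbed = lambda t: [c + t * dx for c, dx in zip(coeffs, d)]
        if routh_stable(perturbed(max_radius)):
            continue                     # not destabilized within range
        lo, hi = 0.0, max_radius         # bisect the destabilizing radius
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            if routh_stable(perturbed(mid)):
                lo = mid
            else:
                hi = mid
        best = min(best, hi)
    return best
```

    For p(s) = s^2 + 3s + 2, stability requires all coefficients to stay positive, so the true l2 margin over all-coefficient perturbations is 1; the sampled estimate approaches this from above.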

  7. Noise and Robustness in Phyllotaxis

    PubMed Central

    Mirabet, Vincent; Besnard, Fabrice; Vernoux, Teva; Boudaoud, Arezki

    2012-01-01

    A striking feature of vascular plants is the regular arrangement of lateral organs on the stem, known as phyllotaxis. The most common phyllotactic patterns can be described using spirals, numbers from the Fibonacci sequence and the golden angle. This rich mathematical structure, along with the experimental reproduction of phyllotactic spirals in physical systems, has led to a view of phyllotaxis focusing on regularity. However, all organisms are affected by natural stochastic variability, raising questions about the effect of this variability on phyllotaxis and the achievement of such regular patterns. Here we address these questions theoretically using a dynamical system of interacting sources of inhibitory field. Previous work has shown that phyllotaxis can emerge deterministically from the self-organization of such sources and that inhibition is primarily mediated by the depletion of the plant hormone auxin through polarized transport. We incorporated stochasticity in the model and found three main classes of defects in spiral phyllotaxis – the reversal of the handedness of spirals, the concomitant initiation of organs and the occurrence of distichous angles – and we investigated whether a secondary inhibitory field filters out defects. Our results are consistent with available experimental data and yield a prediction of the main source of stochasticity during organogenesis. Our model can be related to cellular parameters and thus provides a framework for the analysis of phyllotactic mutants at both cellular and tissular levels. We propose that secondary fields associated with organogenesis, such as other biochemical signals or mechanical forces, are important for the robustness of phyllotaxis. More generally, our work sheds light on how a target pattern can be achieved within a noisy background. PMID:22359496
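
    The inhibitory-field dynamics described above can be sketched with a minimal Douady-Couder-style simulation (our simplification; the paper's model, its auxin-transport terms and its noise sources are richer than this): each new primordium appears on the meristem rim at the angle minimizing the summed 1/d^2 inhibition from older, outward-drifting primordia, with optional Gaussian noise jittering the field.

```python
import math
import random

GOLDEN_ANGLE = 180.0 * (3.0 - math.sqrt(5.0))  # ~137.5 deg, for reference

def simulate_phyllotaxis(n_organs=60, growth=1.1, noise=0.0, seed=1):
    """Inhibitory-field phyllotaxis sketch. Primordia are (angle,
    radius) pairs; each step they drift outward by factor `growth`,
    then a new primordium appears on the unit circle at the sampled
    angle minimizing the summed 1/d^2 inhibition, jittered by `noise`.
    Returns the successive divergence angles in degrees."""
    rng = random.Random(seed)
    primordia = [(0.0, 1.0)]            # (angle in radians, radius)
    divergences = []
    for _ in range(n_organs):
        primordia = [(a, r * growth) for a, r in primordia]  # radial aging
        best_angle, best_e = 0.0, float("inf")
        for k in range(360):            # candidate insertion angles
            theta = 2.0 * math.pi * k / 360.0
            ct, st = math.cos(theta), math.sin(theta)
            e = sum(1.0 / ((ct - r * math.cos(a)) ** 2 +
                           (st - r * math.sin(a)) ** 2)
                    for a, r in primordia)
            e += noise * rng.gauss(0.0, 1.0)
            if e < best_e:
                best_e, best_angle = e, theta
        prev_angle = primordia[-1][0]
        div = math.degrees((best_angle - prev_angle) % (2.0 * math.pi))
        divergences.append(min(div, 360.0 - div))
        primordia.append((best_angle, 1.0))
    return divergences
```

    With noise = 0 the first divergence is exactly 180 degrees (a single inhibitor repels the new organ to the opposite side); raising `noise` lets one look for the abstract's defect classes, such as handedness reversals, in the divergence series.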

  8. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
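
    A minimal, hypothetical example of what structural identifiability rules out (ours, not from the paper): when two parameters enter a model only through their product, no input-output experiment can separate them, so no unique parameter set exists.

```python
def response(a, b, u_seq):
    """Toy input-output model y_k = (a * b) * u_k. The parameters enter
    only through their product, so the pair (a, b) is structurally
    unidentifiable: distinct pairs with the same product produce
    identical outputs for every possible input sequence."""
    return [a * b * u for u in u_seq]
```

    response(2, 3, u) and response(6, 1, u) coincide for every input sequence u; only the product a*b is identifiable, and the two parameterizations are exactly (not just nearly) equivalent.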

  9. Robust approach to ocular fundus image analysis

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-07-01

    The analysis of morphological and structural modifications of retinal blood vessels plays an important role both in establishing the presence of systemic diseases such as hypertension and diabetes and in studying their course. The paper describes a robust set of techniques developed to quantitatively evaluate morphometric aspects of the ocular fundus vascular and microvascular network. Three techniques are defined: (1) the concept of 'Local Direction of a vessel' (LD); (2) a special form of edge detection, named Signed Edge Detection (SED), which uses LD to choose the convolution kernel in the edge detection process and is able to distinguish between the left and the right vessel edge; (3) an iterative tracking (IT) method. The developed techniques use both LD and SED intensively in: (a) the automatic detection of the number, position and size of blood vessels departing from the optic papilla; (b) the tracking of the body and edges of the vessels; (c) the recognition of vessel branches and crossings; (d) the extraction of a set of features such as blood vessel length and average diameter, artery and arteriole tortuosity, crossing position and the angle between two vessels. The algorithms, implemented in the C language, have an execution time that depends on the complexity of the currently processed vascular network.
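
    The core of SED, as the abstract describes it, is that the edge operator is steered by the local vessel direction LD and that the sign of its response separates the two vessel edges. A toy sketch of that idea (our function names and sign convention; the paper uses direction-dependent convolution kernels rather than this two-sample central difference):

```python
import math

def sample(img, x, y):
    """Bilinear sample of a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    x = min(max(x, 0.0), w - 1.001)
    y = min(max(y, 0.0), h - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return (img[y0][x0] * (1 - fx) * (1 - fy)
            + img[y0][x0 + 1] * fx * (1 - fy)
            + img[y0 + 1][x0] * (1 - fx) * fy
            + img[y0 + 1][x0 + 1] * fx * fy)

def signed_edge(img, x, y, ld_angle, step=1.0):
    """Signed edge response at (x, y): a central difference taken along
    the normal to the local vessel direction `ld_angle` (radians).
    For a dark vessel on a bright background the two vessel edges give
    responses of opposite sign (which sign is 'left' is our choice)."""
    nx, ny = -math.sin(ld_angle), math.cos(ld_angle)  # unit normal to LD
    return (sample(img, x + step * nx, y + step * ny)
            - sample(img, x - step * nx, y - step * ny)) / (2.0 * step)
```

    On a synthetic vertical dark vessel, probing the two sides of the vessel with the same LD yields responses of opposite sign, which is what lets a tracker know which edge it is following.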

  10. Robust mechanisms of ventral furrow invagination require the combination of cellular shape changes

    NASA Astrophysics Data System (ADS)

    Conte, Vito; Muñoz, José J.; Baum, Buzz; Miodownik, Mark

    2009-03-01

    Ventral furrow formation in Drosophila is the first large-scale morphogenetic movement during the life of the embryo, and is driven by co-ordinated changes in the shape of individual epithelial cells within the cellular blastoderm. Although many of the genes involved have been identified, the details of the mechanical processes that convert local changes in gene expression into whole-scale changes in embryonic form remain to be fully understood. Biologists have identified two main cell deformation modes responsible for ventral furrow invagination: constriction of the apical ends of the cells (apical wedging) and deformation along their apical-basal axes (radial lengthening/shortening). In this work, we used a 2D finite element computer model of ventral furrow formation to investigate the ability of different combinations of three plausible elementary active cell shape changes to bring about epithelial invagination: ectodermal apical-basal shortening, mesodermal apical-basal lengthening/shortening and mesodermal apical constriction. We undertook a systems analysis of the biomechanical system, which revealed that many different combinations of active forces (invagination mechanisms) were able to generate a ventral furrow. Two important general features were revealed. First, combinations of shape changes are the most robust to environmental and mutational perturbation, in particular those combining ectodermal pushing and mesodermal wedging. Second, ectodermal pushing plays a major part in all of the robust mechanisms (mesodermal forces alone do not close the furrow), which provides evidence that it may be an important element in the mechanics of invagination in Drosophila.

  11. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts
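
    Step (iii) of the methodology, merging quasi-straight segments into filaments, can be sketched as a union-find grouping under a similarity criterion (our simplified criterion and names; the paper's merging algorithm is more elaborate): two detected line segments belong to the same filament when their orientations nearly agree and an endpoint gap is small.

```python
import math

def seg_angle(seg):
    """Undirected orientation of a segment (x1, y1, x2, y2) in [0, pi)."""
    x1, y1, x2, y2 = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def mergeable(a, b, max_gap=3.0, max_dangle=math.radians(10.0)):
    """Quasi-straightness criterion: near-equal orientations and a
    small gap between the facing endpoints of the two segments."""
    dangle = abs(seg_angle(a) - seg_angle(b))
    dangle = min(dangle, math.pi - dangle)
    gap = min(math.hypot(a[2] - b[0], a[3] - b[1]),
              math.hypot(b[2] - a[0], b[3] - a[1]))
    return dangle <= max_dangle and gap <= max_gap

def merge_filaments(segments):
    """Union-find grouping of mergeable segments; returns one list of
    segment indices per extracted filament."""
    parent = list(range(len(segments)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if mergeable(segments[i], segments[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

    Two nearly collinear horizontal segments with a small gap are merged into one filament, while a perpendicular segment stays separate; filament length and orientation statistics can then be read off each group.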

  12. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  13. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
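
    The innovations-based scheme can be sketched for a scalar first-order plant (illustrative parameters, model and names, not the paper's turbojet engine example): an observer generates innovations, and a failure is declared when they exceed a threshold built from a modeling-error bound and the noise level for several consecutive samples.

```python
def detect_sensor_failure(y_meas, u, a=0.9, b=1.0, obs_gain=0.5,
                          noise_std=0.05, model_err=0.02,
                          k_sigma=4.0, persist=3):
    """Innovations-based sensor failure detection for a scalar plant
    x[k+1] = a*x[k] + b*u[k] with y = x. A Luenberger observer
    generates innovations y[k] - x_hat[k]; a failure is declared when
    the innovation magnitude exceeds a threshold (modeling-error bound
    plus k_sigma noise standard deviations) for `persist` consecutive
    samples. Returns the alarm sample indices."""
    threshold = model_err + k_sigma * noise_std
    x_hat, run, alarms = 0.0, 0, []
    for k, (y, uk) in enumerate(zip(y_meas, u)):
        innovation = y - x_hat
        run = run + 1 if abs(innovation) > threshold else 0
        if run >= persist:
            alarms.append(k)
        # observer update driven by the innovation
        x_hat = a * x_hat + b * uk + obs_gain * innovation
    return alarms
```

    The persistence requirement trades detection delay for insensitivity to isolated outliers, a one-parameter version of the performance-robustness versus DIA-sensitivity trade-off the abstract quantifies.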

  14. Best Practices for Reliable and Robust Spacecraft Structures

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Murthy, P. L. N.; Patel, Naresh R.; Bonacuse, Peter J.; Elliott, Kenny B.; Gordon, S. A.; Gyekenyesi, J. P.; Daso, E. O.; Aggarwal, P.; Tillman, R. F.

    2007-01-01

    A study was undertaken to capture the best practices for the development of reliable and robust spacecraft structures for NASA's next generation cargo and crewed launch vehicles. In this study, NASA heritage programs such as Mercury, Gemini, Apollo, and the Space Shuttle program were examined. A series of lessons learned during the NASA and DoD heritage programs are captured. The processes that "make the right structural system" are examined along with the processes to "make the structural system right". The impact of technology advancements in materials and analysis and testing methods on the reliability and robustness of spacecraft structures is studied. The best practices and lessons learned are extracted from these studies. Since the first human space flight, the best practices for reliable and robust spacecraft structures appear to be well established, understood, and articulated by each generation of designers and engineers. However, these best practices apparently have not always been followed. When the best practices are ignored or short cuts are taken, risks accumulate, and reliability suffers. Thus program managers need to be vigilant of circumstances and situations that tend to violate best practices. Adherence to the best practices may help develop spacecraft systems with high reliability and robustness against certain anomalies and unforeseen events.

  15. Low-power micromachined structures for gas sensors with improved robustness

    NASA Astrophysics Data System (ADS)

    Gracia, Isabel; Goetz, Andreas; Plaza, Jose A.; Cane, Carles; Roetsch, Patrice; Boettner, Harald; Seibert, Klaus

    2000-08-01

    Current research on microstructures for semiconductor gas sensors focuses on the development of low-power, robust substrates. In this paper a microstructure based on the combination of micromachined silicon substrates and glass wafers is presented. This device shows high robustness and can reach temperatures up to 700 °C with low power consumption. The optimisation of the design and of the fabrication process is described.

  16. Fast and robust entanglement using Rydberg atoms

    NASA Astrophysics Data System (ADS)

    Côté, Robin

    2001-05-01

    In recent years, numerous proposals to build quantum information processors have been suggested. Due to their very long coherence times and the well-developed techniques for cooling and trapping them, neutral atoms are particularly attractive for quantum computing. To design fast quantum gates, one needs to identify strong and controllable two-body interactions. However, large interactions are usually associated with strong mechanical forces on the trapped atoms: their internal states (the qubits) may become entangled with their motional degrees of freedom, leading to rapid decoherence. A new system for implementing quantum logic gates based on ultracold Rydberg atoms is presented. Atoms in excited Rydberg states have long lifetimes and enormous dipole moments. When excited in a constant electric field, their controllable strong dipole-dipole interactions provide the large interaction energy required to perform fast gate operations. The mechanical effects can also be greatly suppressed by using the 'dipole blockade' resulting from the strong dipole-dipole interactions. The gate becomes insensitive to the temperature of the atoms and to the variations in atom-atom separation. Hence, a fast and robust two-qubit quantum gate with an operation time much faster than the time scale of the atomic motion is possible (D. Jaksch et al., Phys. Rev. Lett. 85, 2208 (2000)). The generalization to collective states of mesoscopic ensembles can be accomplished using the same dipole blockade (M. D. Lukin et al., quant-ph/0011028).
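
    The dipole blockade argument sketched in the abstract can be stated as a back-of-the-envelope condition (our summary, with mu the transition dipole moment, R the interatomic separation and Omega the excitation Rabi frequency): the interaction shift of the doubly excited state must dominate the excitation bandwidth.

```latex
% Blockade condition for two atoms a distance R apart (illustrative):
% the dipole-dipole shift of |rr> exceeds the excitation bandwidth,
% so only one atom of the pair can reach the Rydberg state.
V_{dd}(R) \sim \frac{\mu^{2}}{4\pi\varepsilon_{0}R^{3}},
\qquad
\hbar\Omega \ll V_{dd}(R)
\quad\Rightarrow\quad
\text{the doubly excited state } |rr\rangle \text{ is off-resonant.}
```

    Because the blockade suppresses double excitation for any separation inside the blockade radius, the gate time of order pi/Omega can be made much shorter than motional timescales, which is the robustness to temperature and atom-atom separation claimed above.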

  17. The Utility of Robust Means in Statistics