Science.gov

Sample records for identifying robust process

  1. Robust regulation of anaerobic digestion processes.

    PubMed

    Mailleret, L; Bernard, O; Steyer, J P

    2003-01-01

    This paper deals with the problem of controlling anaerobic digestion processes. A two-step (i.e. acidogenesis-methanization) mass balance model is considered for a 1 m3 fixed-bed digester treating industrial wine distillery wastewater. The control law aims at regulating the organic pollution level while avoiding washout of biomass. To this end, a simple output feedback controller is considered which regulates a variable strongly related to the Chemical Oxygen Demand (COD). Numerical simulations assuming noisy measurements first illustrate the robustness of this control procedure. The regulation procedure is then implemented on the anaerobic digestion process itself to validate it and demonstrate its efficiency in real-life experiments.
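
    A rough sketch of this style of output feedback, assuming a one-step chemostat approximation of the digester rather than the paper's two-step model; the Monod kinetics, all parameter values, and the proportional law on the noisy COD measurement are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a one-step chemostat approximation of a digester
mu_max, K_s = 0.35, 7.0     # max growth rate (1/d), half-saturation (g COD/L)
k_y, alpha = 10.0, 0.5      # substrate consumed per biomass, biomass retention
S_in, S_ref = 16.0, 2.0     # influent and target effluent COD (g/L)
Kp, D_min, D_max = 0.05, 0.01, 0.6

def derivs(S, X, D):
    mu = mu_max * S / (K_s + S)                    # Monod growth kinetics
    return D * (S_in - S) - k_y * mu * X, (mu - alpha * D) * X

S, X, dt = 10.0, 0.8, 0.01
for _ in range(int(600 / dt)):                     # 600 simulated days
    y = S + rng.normal(0.0, 0.1)                   # noisy COD-related output
    D = np.clip(0.15 + Kp * (S_ref - y), D_min, D_max)  # output feedback law
    dS, dX = derivs(S, X, D)
    S, X = S + dt * dS, X + dt * dX

print(f"effluent COD {S:.2f} g/L, biomass {X:.2f} g/L, dilution {D:.3f} 1/d")
```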

  2. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image the word conjures up is of a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded, along with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must be sure he understands the definition and goals of robustness. This paper deals with these issues as well as the need to make robustness a requirement.

  3. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    The conventional prediction of milling stability has been extensively studied based on the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process at the shop floor level, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking into account the uncertain parameters of the milling system. The time finite element method, a milling stability theory, is adopted as the conventional deterministic model. The uncertain dynamic parameters are handled by a non-probabilistic model in which the uncertain parameters are assumed to be bounded, so no probability density functions are needed. By doing so, an interval stability lobe diagram is obtained instead of a deterministic one, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from changes in the modal parameters of the spindle-tool system and in the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and obtains satisfactory bounds on the lobe diagrams. The proposed method helps practitioners on the shop floor make decisions on machining parameter selection.
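
    A minimal numerical sketch of the interval idea: sweep the corners of a bounded box of uncertain modal and cutting parameters, compute a conservative stability lobe for each corner, and take the pointwise envelope as the lower/upper bounds. The 1-DOF FRF, the Nyquist-style bound, and every number below are illustrative assumptions, not the paper's time finite element formulation.

```python
import numpy as np
from itertools import product

# Hypothetical 1-DOF tool-point parameters with bounded uncertainty
bounds = {"wn": (2*np.pi*580, 2*np.pi*620),  # natural frequency (rad/s)
          "zeta": (0.028, 0.042),            # damping ratio
          "k": (1.8e7, 2.2e7),               # modal stiffness (N/m)
          "Kt": (5.2e8, 6.8e8)}              # cutting force coefficient (N/m^2)

w = np.linspace(2*np.pi*300, 2*np.pi*900, 4000)   # chatter frequency grid
speeds = np.linspace(2000, 12000, 200)            # spindle speeds (rpm)
N_teeth = 4

def lobe(wn, zeta, k, Kt):
    G = 1.0 / (k * (1 - (w/wn)**2 + 2j*zeta*(w/wn)))  # tool-point FRF
    a = np.empty_like(speeds)
    for i, n in enumerate(speeds):
        T = 60.0 / (n * N_teeth)                  # tooth passing period (s)
        phi = G * (1 - np.exp(-1j*w*T))           # regenerative transfer term
        a[i] = 1.0 / (Kt * np.max(-phi.real))     # conservative Nyquist bound
    return a

corners = [lobe(*c) for c in product(*bounds.values())]  # 2^4 = 16 corners
a_low, a_high = np.min(corners, axis=0), np.max(corners, axis=0)
i8k = np.argmin(np.abs(speeds - 8000))
print(f"depth-of-cut band at 8000 rpm: {a_low[i8k]*1e3:.2f}-{a_high[i8k]*1e3:.2f} mm")
```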

  4. Identifying robust and sensitive frequency bands for interrogating neural oscillations.

    PubMed

    Shackman, Alexander J; McMenamin, Brenton W; Maxwell, Jeffrey S; Greischar, Lawrence L; Davidson, Richard J

    2010-07-15

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic ("resting" or "spontaneous") electroencephalogram (EEG) into five bands: delta (1-5Hz), alpha-low (6-9Hz), alpha-high (10-11Hz), beta (12-19Hz), and gamma (>21Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains.
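
    Band-limited mean power of the kind analyzed here can be computed from a Welch power spectral density estimate. The sketch below uses synthetic data, an assumed 256 Hz sampling rate, and an arbitrary 50 Hz upper edge for the gamma band.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                    # sampling rate (Hz), assumed
eeg = np.random.randn(60 * fs)              # stand-in for 1 min of resting EEG

# Bands as reported in the abstract (Hz); gamma capped at 50 Hz for illustration
bands = {"delta": (1, 5), "alpha-low": (6, 9), "alpha-high": (10, 11),
         "beta": (12, 19), "gamma": (21, 50)}

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows, 0.5 Hz resolution
for name, (lo, hi) in bands.items():
    sel = (f >= lo) & (f <= hi)
    power = trapezoid(psd[sel], f[sel])     # integrated power in the band
    print(f"{name:>10}: {power:.3f}")
```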

  6. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability
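
    A toy version of the robustness-screening step: score hypothetical candidate portfolios over many sampled scenarios with a domain criterion (fraction of scenarios satisfying targets) and a regret-style metric. The performance matrices are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scenarios, n_plans = 1000, 4

# Fabricated outcomes: supply reliability and relative cost of each candidate
# portfolio under each sampled future scenario
reliability = rng.beta(8, 2, size=(n_scenarios, n_plans))
cost = rng.normal([1.0, 1.2, 0.9, 1.1], 0.05, size=(n_scenarios, n_plans))

# Domain criterion: fraction of scenarios in which a plan meets all targets
meets = (reliability > 0.90) & (cost < 1.15)
for p, r in enumerate(meets.mean(axis=0)):
    print(f"plan {p}: meets criteria in {100*r:.1f}% of scenarios")

# Regret metric: worst shortfall from the best-performing plan per scenario
regret = reliability.max(axis=1, keepdims=True) - reliability
print("max regret per plan:", regret.max(axis=0).round(3))
```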

  7. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines the conventional Bayesian, adaptive, and robust approaches, which are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  8. Robust Motion Processing in the Visual Cortex

    NASA Astrophysics Data System (ADS)

    Sederberg, Audrey; Liu, Julia; Kaschube, Matthias

    2009-03-01

    Direction selectivity is an important model system for studying cortical processing. The role of inhibition in models of direction selectivity in the visual cortex is not well understood. We probe the selectivity of an integrate-and-fire neuron with a noisy background on top of a deterministic input current determined by a temporal-lag model for selectivity, including first only excitatory inputs and later both excitatory and inhibitory input. In this model, postsynaptic potentials are fully synchronous for the preferred direction and maximally dispersed in time for the null direction. Further, any inhibitory inputs lag excitatory inputs, as Priebe and Ferster have observed (2005). At any level of input strength, the selectivity is weak when only excitatory inputs are considered. The inclusion of inhibition significantly strengthens selectivity, and this selectivity is preserved over a wide range of background noise levels and for short stimulus durations. We conclude that inhibition likely plays an essential role in the mechanism underlying direction selectivity.

  9. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
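
    Nonparametric wavelet denoising of this general kind can be sketched with PyWavelets: decompose, soft-threshold the detail coefficients, and reconstruct. The synthetic "flight data", the db6 wavelet, and the universal threshold rule are assumptions for illustration, not the paper's filters.

```python
import numpy as np
import pywt  # PyWavelets

fs = 200.0
t = np.arange(0, 20, 1 / fs)
clean = np.exp(-0.05 * t) * np.sin(2 * np.pi * 8 * t)   # lightly damped 8 Hz mode
signal = clean + 0.4 * np.random.randn(t.size)          # broadband disturbance

coeffs = pywt.wavedec(signal, "db6", level=5)           # wavelet decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale, finest level
thr = sigma * np.sqrt(2 * np.log(signal.size))          # universal threshold
denoised = pywt.waverec(
    [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]],
    "db6")[:t.size]

print(f"disturbance RMS before: {np.std(signal - clean):.3f}, "
      f"after: {np.std(denoised - clean):.3f}")
```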

  10. Modelling System Processes to Support Uncertainty Analysis and Robustness Evaluation

    NASA Technical Reports Server (NTRS)

    Blackwell, Charles; Cuzzi, Jeffrey (Technical Monitor)

    1996-01-01

    In the use of advanced systems control techniques in the development of a dynamic system, effective mathematical modelling is required. Historically, in some cases the use of a model which reflects only the "expected" or "nominal" information about the system's internal processes has resulted in acceptable system performance, but it should be recognized that in those cases success was due to a combination of the remarkable inherent potential of feedback control for robustness and fortuitously wide margins between system performance requirements and system performance capability. In the case of CELSS development, no such fortuitous combinations should be expected, and the uncertainty in the information on the system's processes will have to be taken into account in order to generate a performance-robust design. In this paper, we develop one perspective on the issue of providing robustness as mathematical modelling impacts it, and present some examples of model formats which serve the needed purpose.

  11. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    NASA Astrophysics Data System (ADS)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    The Pearl Harbor aquifer is the most important freshwater resource on Oahu (Hawaii, USA), providing water to nearly half a million people. Recent studies show that current water use is reaching or exceeding sustainable yield. Climate change and increasing resident and tourist populations are predicted to further stress the aquifer. The island has lost huge tracts of freshwater and estuarine wetlands since human settlement; the dependence of many endemic, endangered species on these wetlands, as well as the ecosystem benefits wetlands provide, links humans and wildlife through water management. After the collapse of the sugar industry on Oahu (mid-1990s), the Waiahole ditch--a massive stream diversion bringing water from the island's windward to the leeward side--became a hotly disputed resource. Commercial interests and traditional farmers have clashed over the water, which could also serve to support the Pearl Harbor aquifer. Considering competing interests, impending scarcity, and uncertain future conditions, how can groundwater be managed most effectively? Complex water networks like this are characterized by conflicts between stakeholders, coupled human-natural systems, and future uncertainty. The Water Diplomacy Framework offers a model for analyzing such complex issues by integrating multiple disciplinary perspectives, identifying intervention points, and proposing sustainable solutions. The Water Diplomacy Framework is a theory and practice of implementing adaptive water management for complex problems by shifting the discussion from 'allocation of water' to 'benefit from water resources'. This is accomplished through an interactive process that includes stakeholder input, joint fact finding, collaborative scenario development, and a negotiated approach to value creation. Presented here are the results of the initial steps in a long-term project to resolve water limitations on Oahu. We developed a conceptual model of the Pearl Harbor Aquifer system and identified

  12. Processing Robustness for A Phenylethynyl Terminated Polyimide Composite

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2004-01-01

    The processability of a phenylethynyl terminated imide resin matrix (designated as PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated as PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one is made by NASA in-house, and the other is from an industrial source. The composite processing robustness is investigated with respect to the prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that the PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 deg C to 300 deg C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of the response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. A good consolidation is achievable at 371 deg C (700 deg F)/100 psi, which yields a room-temperature OHC strength of 62 ksi. However, processability declines dramatically at temperatures below 350 deg C (662 deg F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.

  13. Robust design of binary countercurrent adsorption separation processes

    SciTech Connect

    Storti, G.; Mazzotti, M.; Morbidelli, M.; Carra, S.

    1993-03-01

    The separation of a binary mixture, using a third component having intermediate adsorptivity as desorbent, in a four-section countercurrent adsorption separation unit is considered. A procedure for the optimal and robust design of the unit is developed within the framework of Equilibrium Theory, using a model where the adsorption equilibria are described through the constant selectivity stoichiometric model, while mass-transfer resistances and axial mixing are neglected. By requiring that the unit achieves complete separation, it is possible to identify a set of implicit constraints on the operating parameters, that is, the flow rate ratios in the four sections of the unit. From these constraints explicit bounds on the operating parameters are obtained, thus yielding a region in the operating parameter space which can be drawn a priori in terms of the adsorption equilibrium constants and the feed composition. This result provides a very convenient tool to determine both optimal and robust operating conditions. The latter issue is addressed by first analyzing the various possible sources of disturbances, as well as their effect on the separation performance. Next, the criteria for the robust design of the unit are discussed. Finally, these theoretical findings are compared with a set of experimental results obtained in a six-port simulated moving bed adsorption separation unit operated in the vapor phase.
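
    For the simpler linear-isotherm case, the explicit bounds reduce to the familiar "triangle theory" inequalities on the four flow-rate ratios. The sketch below, with hypothetical Henry constants, contrasts the optimal vertex of the complete-separation region with a backed-off robust operating point under random flow disturbances.

```python
import numpy as np

# Complete separation of A (more retained) from B with linear isotherms:
#   m1 >= H_A,   H_B <= m2 <= m3 <= H_A,   m4 <= H_B
H_A, H_B = 2.5, 1.4                     # hypothetical Henry constants

def complete_separation(m):
    m1, m2, m3, m4 = m
    return (m1 >= H_A) and (H_B <= m2 <= m3 <= H_A) and (m4 <= H_B)

m_opt = np.array([2.5, 1.4, 2.5, 1.4])        # vertex: maximal throughput
m_rob = np.array([2.7, 1.62, 2.28, 1.18])     # backed off toward the interior

for label, m in [("optimal", m_opt), ("robust", m_rob)]:
    # +/-5% random disturbances on the flow-rate ratios
    ok = sum(complete_separation(m * np.random.uniform(0.95, 1.05, 4))
             for _ in range(10000))
    print(f"{label}: complete separation in {ok / 100:.1f}% of disturbed runs")
```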

  14. Exploring critical pathways for urban water management to identify robust strategies under deep uncertainties.

    PubMed

    Urich, Christian; Rauch, Wolfgang

    2014-12-01

    Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections and, if a projection goes unfulfilled, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful. However, a modelling tool to support strategic planning to test the implications of adaptation strategies under deeply uncertain conditions for urban water management does not exist yet. This paper presents a first step towards a new generation of such strategic planning tools, by combining innovative modelling tools, which coevolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is evolved spatially explicitly 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper show that: (1) such an approach can successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, which is an important indicator of the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy.
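
    Finding (1), identifying driver ranges in which a performance criterion fails, can be mimicked with a simple scenario-discovery pass: simulate a performance measure over sampled drivers and summarize where the failures concentrate. The drivers, performance model, and thresholds below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000   # future scenarios, mirroring the 1000-scenario experiment

pop_growth = rng.uniform(0.0, 0.03, n)    # annual population growth rate
rain_change = rng.uniform(-0.3, 0.1, n)   # relative change in rainfall

# Stand-in performance model: supply criterion degrades when high growth
# coincides with strong drying (purely illustrative)
performance = 1.0 - 20 * pop_growth * np.maximum(0.0, -rain_change)
fails = performance < 0.9

print(f"criterion fails in {fails.mean():.1%} of scenarios")
for name, x in [("pop_growth", pop_growth), ("rain_change", rain_change)]:
    lo, hi = np.percentile(x[fails], [5, 95])
    print(f"  {name}: 90% of failures fall in [{lo:.3f}, {hi:.3f}]")
```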

  15. Process Architecture for Managing Digital Object Identifiers

    NASA Astrophysics Data System (ADS)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first 3 years, ESDIS evolved the process, involving the data provider community in the development of procedures for creating and assigning DOIs and of guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014; and the other recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually driven system to one that largely automates the DOI process. The new automated features include: (a) reviewing the DOI metadata, (b) assigning an opaque DOI name if the data provider chooses, and (c) reserving, registering, and updating the DOIs. The flexibility of reserving the DOI allows data providers to embed and test the DOI in the data product metadata before formally registering it with EZID. The DOI update process allows the changing of any DOI metadata except the DOI name, which can only be changed while it is still unregistered. Currently, ESDIS has processed a total of 557 DOIs, of which 379 DOIs are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page. This poster will describe in detail the unique automated process and

  16. Robust interval-based regulation for anaerobic digestion processes.

    PubMed

    Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C

    2005-01-01

    A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. The approach is experimentally applied and validated on a pilot-scale (1 m3) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, driving the output COD towards its set-point and keeping it inside a pre-specified interval.

  17. Product and Process Improvement Using Mixture-Process Variable Designs and Robust Optimization Techniques

    SciTech Connect

    Sahni, Narinder S.; Piepel, Gregory F.; Naes, Tormod

    2009-04-01

    The quality of an industrial product depends on the raw material proportions and the process variable levels, both of which need to be taken into account in designing a product. This article presents a case study from the food industry in which both kinds of variables were studied by combining a constrained mixture experiment design and a central composite process variable design. Based on the natural structure of the situation, a split-plot experiment was designed and models involving the raw material proportions and process variable levels (separately and combined) were fitted. Combined models were used to study: (i) the robustness of the process to variations in raw material proportions, and (ii) the robustness of the raw material recipes with respect to fluctuations in the process variable levels. Further, the expected variability in the robust settings was studied using the bootstrap.

  18. Application of NMR Methods to Identify Detection Reagents for Use in the Development of Robust Nanosensors

    SciTech Connect

    Cosman, M; Krishnan, V V; Balhorn, R

    2004-04-29

    Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins, and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low affinity binders are identified, they can be linked together to produce multidentate synthetic high affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site, or a different site, on the surface of TeNT fragment C (TetC) than a known "marker" ligand, doxorubicin. Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

  19. Microphone-Independent Robust Signal Processing Using Probabilistic Optimum Filtering

    DTIC Science & Technology

    1994-01-01

    11. A. Acero , "Acoustical and Environmental Robustness in Automatic Speech Recognition," Ph.D. Thesis, Carnegie-MeLton University, Sep- tember 1990...12. R.M. Stem, FJ-I. Leu, Y. Ohshima, T.M. Sullivan, and A. Acero , "Multiple Approaches to Robust Speech Recognition," 1992 Interna- tional

  20. Surrogate models for identifying robust, high yield regions of parameter space for ICF implosion simulations

    NASA Astrophysics Data System (ADS)

    Humbird, Kelli; Peterson, J. Luc; Brandon, Scott; Field, John; Nora, Ryan; Spears, Brian

    2016-10-01

    Next-generation supercomputer architecture and in-transit data analysis have been used to create a large collection of 2-D ICF capsule implosion simulations. The database includes metrics for approximately 60,000 implosions, with x-ray images and detailed physics parameters available for over 20,000 simulations. To map and explore this large database, surrogate models for numerous quantities of interest are built using supervised machine learning algorithms. Response surfaces constructed using the predictive capabilities of the surrogates allow for continuous exploration of parameter space without requiring additional simulations. High performing regions of the input space are identified to guide the design of future experiments. In particular, a model for the yield built using a random forest regression algorithm has a cross validation score of 94.3% and is consistently conservative for high yield predictions. The model is used to search for robust volumes of parameter space where high yields are expected, even given variations in other input parameters. Surrogates for additional quantities of interest relevant to ignition are used to further characterize the high yield regions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697277.
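
    A sketch of the surrogate workflow under fabricated assumptions (invented inputs and yield surface): fit a random-forest regressor, check it by cross-validation, then keep only candidate designs whose worst predicted yield under input perturbations stays high, i.e., designs sitting on the wide part of the ridge.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(5000, 4))        # normalized design inputs
# Fabricated yield surface: a ridge whose width grows with x2
y = np.exp(-(X[:, 0] - 0.6)**2 / 0.02
           - (X[:, 1] - 0.4)**2 / (0.005 + 0.1 * X[:, 2]))

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("CV R^2:", cross_val_score(surrogate, X, y, cv=5).mean().round(3))

# Robust-region search: worst predicted yield under input perturbations
cand = rng.uniform(0, 1, size=(2000, 4))
worst = np.array([surrogate.predict(
    np.clip(c + rng.normal(0, 0.03, (64, 4)), 0, 1)).min() for c in cand])
robust = cand[worst > 0.5]
print(f"{len(robust)} of {len(cand)} candidates stay high-yield under perturbation")
if len(robust):
    print("mean x2 among robust candidates:", robust[:, 2].mean().round(2))
```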

  1. Stretching the limits of forming processes by robust optimization: A demonstrator

    SciTech Connect

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    2013-12-16

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.

  2. Robust syntaxin-4 immunoreactivity in mammalian horizontal cell processes

    PubMed Central

    HIRANO, ARLENE A.; BRANDSTÄTTER, JOHANN HELMUT; VILA, ALEJANDRO; BRECHA, NICHOLAS C.

    2009-01-01

    Horizontal cells mediate inhibitory feed-forward and feedback communication in the outer retina; however, mechanisms that underlie transmitter release from mammalian horizontal cells are poorly understood. Toward determining whether the molecular machinery for exocytosis is present in horizontal cells, we investigated the localization of syntaxin-4, a SNARE protein involved in targeting vesicles to the plasma membrane, in mouse, rat, and rabbit retinae using immunocytochemistry. We report robust expression of syntaxin-4 in the outer plexiform layer of all three species. Syntaxin-4 occurred in processes and tips of horizontal cells, with regularly spaced, thicker sandwich-like structures along the processes. Double labeling with syntaxin-4 and calbindin antibodies, a horizontal cell marker, demonstrated syntaxin-4 localization to horizontal cell processes; whereas, double labeling with PKC antibodies, a rod bipolar cell (RBC) marker, showed a lack of co-localization, with syntaxin-4 immunolabeling occurring just distal to RBC dendritic tips. Syntaxin-4 immunolabeling occurred within VGLUT-1-immunoreactive photoreceptor terminals and underneath synaptic ribbons, labeled by CtBP2/RIBEYE antibodies, consistent with localization in invaginating horizontal cell tips at photoreceptor triad synapses. Vertical sections of retina immunostained for syntaxin-4 and peanut agglutinin (PNA) established that the prominent patches of syntaxin-4 immunoreactivity were adjacent to the base of cone pedicles. Horizontal sections through the OPL indicate a one-to-one co-localization of syntaxin-4 densities at likely all cone pedicles, with syntaxin-4 immunoreactivity interdigitating with PNA labeling. Pre-embedding immuno-electron microscopy confirmed the subcellular localization of syntaxin-4 labeling to lateral elements at both rod and cone triad synapses. Finally, co-localization with SNAP-25, a possible binding partner of syntaxin-4, indicated co-expression of these SNARE proteins in

  3. Integrated, multicohort analysis of systemic sclerosis identifies robust transcriptional signature of disease severity

    PubMed Central

    Lofgren, Shane; Aren, Kathleen; Arroyo, Esperanza; Cheung, Peggie; Kuo, Alex; Valenzuela, Antonia; Haemel, Anna; Wolters, Paul J.; Gordon, Jessica; Spiera, Robert; Assassi, Shervin; Boin, Francesco; Chung, Lorinda; Fiorentino, David; Utz, Paul J.; Whitfield, Michael L.

    2016-01-01

    Systemic sclerosis (SSc) is a rare autoimmune disease with the highest case-fatality rate of all connective tissue diseases. Current efforts to determine patient response to a given treatment using the modified Rodnan skin score (mRSS) are complicated by interclinician variability, confounding, and the time required between sequential mRSS measurements to observe meaningful change. There is an unmet critical need for an objective metric of SSc disease severity. Here, we performed an integrated, multicohort analysis of SSc transcriptome data across 7 datasets from 6 centers composed of 515 samples. Using 158 skin samples from SSc patients and healthy controls recruited at 2 centers as a discovery cohort, we identified a 415-gene expression signature specific for SSc, and validated its ability to distinguish SSc patients from healthy controls in an additional 357 skin samples from 5 independent cohorts. Next, we defined the SSc skin severity score (4S). In every SSc cohort of skin biopsy samples analyzed in our study, 4S correlated significantly with mRSS, allowing objective quantification of SSc disease severity. Using transcriptome data from the largest longitudinal trial of SSc patients to date, we showed that 4S allowed us to objectively monitor individual SSc patients over time, as (a) the change in 4S of a patient is significantly correlated with change in the mRSS, and (b) the change in 4S at 12 months of treatment could predict the change in mRSS at 24 months. Our results suggest that 4S could be used to distinguish treatment responders from nonresponders prior to mRSS change. Our results demonstrate the potential clinical utility of a novel robust molecular signature and a computational approach to SSc disease severity quantification. PMID:28018971

  4. Working Toward Robust Process Monitoring for Safeguards Applications

    SciTech Connect

    Krichinsky, Alan M; Bell, Lisa S; Gilligan, Kimberly V; Laughter, Mark D; Miller, Paul; Pickett, Chris A; Richardson, Dave; Rowe, Nathan C; Younkin, James R

    2010-01-01

    New safeguards technologies allow continuous monitoring of plant processes. Efforts to deploy these technologies, as described in a preponderance of literature, typically have consisted of case studies attempting to prove their efficacy in proof-of-principle installations. While the enhanced safeguards capabilities of continuous monitoring have been established, studies thus far have not addressed such challenges as manipulation of a system by a host nation. To prevent this and other such vulnerabilities, one technology, continuous load cell monitoring, was reviewed. This paper will present vulnerabilities as well as mitigation strategies that were identified.

  5. Impact of genetic background and experimental reproducibility on identifying chemical compounds with robust longevity effects

    PubMed Central

    Lucanic, Mark; Plummer, W. Todd; Chen, Esteban; Harke, Jailynn; Foulger, Anna C.; Onken, Brian; Coleman-Hulbert, Anna L.; Dumas, Kathleen J.; Guo, Suzhen; Johnson, Erik; Bhaumik, Dipa; Xue, Jian; Crist, Anna B.; Presley, Michael P.; Harinath, Girish; Sedore, Christine A.; Chamoli, Manish; Kamat, Shaunak; Chen, Michelle K.; Angeli, Suzanne; Chang, Christina; Willis, John H.; Edgar, Daniel; Royal, Mary Anne; Chao, Elizabeth A.; Patel, Shobhna; Garrett, Theo; Ibanez-Ventoso, Carolina; Hope, June; Kish, Jason L; Guo, Max; Lithgow, Gordon J.; Driscoll, Monica; Phillips, Patrick C.

    2017-01-01

    Limiting the debilitating consequences of ageing is a major medical challenge of our time. Robust pharmacological interventions that promote healthy ageing across diverse genetic backgrounds may engage conserved longevity pathways. Here we report results from the Caenorhabditis Intervention Testing Program in assessing longevity variation across 22 Caenorhabditis strains spanning 3 species, using multiple replicates collected across three independent laboratories. Reproducibility between test sites is high, whereas individual trial reproducibility is relatively low. Of ten pro-longevity chemicals tested, six significantly extend lifespan in at least one strain. Three reported dietary restriction mimetics are mainly effective across C. elegans strains, indicating species and strain-specific responses. In contrast, the amyloid dye ThioflavinT is both potent and robust across the strains. Our results highlight promising pharmacological leads and demonstrate the importance of assessing lifespans of discrete cohorts across repeat studies to capture biological variation in the search for reproducible ageing interventions. PMID:28220799

  6. Commonsense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    2005-01-01

    This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in…

  7. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  8. Robust Prediction for Stationary Processes. 2D Enriched Version.

    DTIC Science & Technology

    1987-11-24

    When i=1, then for any ε the error at any of the nominal processes 1, 2, 3, or 4 is the same for the predictors in (29) and (30), as expected. For any m>1 the error at the nominal process, viz. e(·, G_i), i=1,2, converges to e(f, m*) as ε→1. Except for

  9. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  10. Probability fold change: a robust computational approach for identifying differentially expressed gene lists.

    PubMed

    Deng, Xutao; Xu, Jun; Hui, James; Wang, Charles

    2009-02-01

    Identifying genes that are differentially expressed under different experimental conditions is a fundamental task in microarray studies. However, different ranking methods generate very different gene lists, and this could profoundly impact follow-up analyses and biological interpretation. Therefore, developing improved ranking methods is critical in microarray data analysis. We developed a new algorithm, the probabilistic fold change (PFC), which ranks genes based on a confidence interval estimate of fold change. We performed extensive testing using multiple benchmark data sources including the MicroArray Quality Control (MAQC) data sets. We corroborated our observations with MAQC data sets using qRT-PCR data sets and Latin square spike-in data sets. Along with PFC, we tested six other popular ranking algorithms including Mean Fold Change (FC), SAM, t-statistic (T), Bayesian-t (BAYT), Intensity-Conditional Fold Change (CFC), and Rank Product (RP). PFC achieved reproducibility and accuracy that are consistently among the best of the seven ranking algorithms, while other ranking algorithms showed weakness in some cases. Contrary to common belief, our results demonstrated that statistical accuracy does not translate to biological reproducibility, and therefore both quality aspects need to be evaluated.
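
    The idea of ranking by a confidence estimate of fold change can be sketched by scoring each gene with the bound of its fold-change confidence interval nearest zero, so a large fold change only ranks highly if it is also measured precisely. This is a generic construction on simulated data, not the published PFC algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_genes, n_rep = 1000, 5
a = rng.normal(8.0, 1.0, (n_genes, n_rep))        # log2 expression, condition A
b = a + rng.normal(0.0, 0.5, (n_genes, n_rep))    # condition B replicates
b[:50] += 1.5                                     # 50 truly up-regulated genes

diff = b.mean(1) - a.mean(1)                      # mean log2 fold change
se = np.sqrt((a.var(1, ddof=1) + b.var(1, ddof=1)) / n_rep)
t_crit = stats.t.ppf(0.975, df=2 * n_rep - 2)

# Conservative score: the CI bound nearest zero (zero if the CI spans zero)
score = np.sign(diff) * np.maximum(0.0, np.abs(diff) - t_crit * se)
order = np.argsort(-np.abs(score))
print("true positives among top 50:", int(np.sum(order[:50] < 50)))
```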

  11. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use

  12. Robustness of the Process of Nucleoid Exclusion of Protein Aggregates in Escherichia coli

    PubMed Central

    Neeli-Venkata, Ramakanth; Martikainen, Antti; Gupta, Abhishekh; Gonçalves, Nadia; Fonseca, Jose

    2016-01-01

    Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. Combined with cell divisions, this generates heterogeneous aggregate distributions in subsequent cell generations. We studied the robustness of this process with differing medium richness and antibiotics stress, which affect nucleoid size, using multimodal, time-lapse microscopy of live cells expressing both a fluorescently tagged chaperone (IbpA), which identifies in vivo the location of aggregates, and HupA-mCherry, a fluorescent variant of a nucleoid-associated protein. We find that the relative sizes of the nucleoid's major and minor axes change widely, in a positively correlated fashion, with medium richness and antibiotic stress. The aggregate's distribution along the major cell axis also changes between conditions and in agreement with the nucleoid exclusion phenomenon. Consequently, the fraction of aggregates at the midcell region prior to cell division differs between conditions, which will affect the degree of asymmetries in the partitioning of aggregates between cells of future generations. Finally, from the location of the peak of anisotropy in the aggregate displacement distribution, the nucleoid relative size, and the spatiotemporal aggregate distribution, we find that the exclusion of detectable aggregates from midcell is most pronounced in cells with mid-sized nucleoids, which are most common under optimal conditions. We conclude that the aggregate management mechanisms of E. coli are significantly robust but are not immune to stresses due to the tangible effect that these have on nucleoid size. IMPORTANCE Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. From live single-cell microscopy studies of the robustness of this process to various stresses known to affect nucleoid size, we find that nucleoid size and aggregate preferential locations change concordantly between conditions. Also, the degree of influence of the nucleoid

  13. Identifying robustness in the regulation of collective foraging of ant colonies using an interaction-based model with backward bifurcation.

    PubMed

    Udiani, Oyita; Pinter-Wollman, Noa; Kang, Yun

    2015-02-21

    Collective behaviors in social insect societies often emerge from simple local rules. However, little is known about how these behaviors are dynamically regulated in response to environmental changes. Here, we use a compartmental modeling approach to identify factors that allow harvester ant colonies to regulate collective foraging activity in response to their environment. We propose a set of differential equations describing the dynamics of: (1) available foragers inside the nest, (2) active foragers outside the nest, and (3) successful returning foragers, to understand how colony-specific parameters, such as baseline number of foragers, interactions among foragers, food discovery rates, successful forager return rates, and foraging duration might influence collective foraging dynamics, while maintaining functional robustness to perturbations. Our analysis indicates that the model can undergo a forward (transcritical) bifurcation or a backward bifurcation depending on colony-specific parameters. In the former case, foraging activity persists when the average number of recruits per successful returning forager is larger than one. In the latter case, the backward bifurcation creates a region of bistability in which the size and fate of foraging activity depends on the distribution of the foraging workforce among the model's compartments. We validate the model with experimental data from harvester ants (Pogonomyrmex barbatus) and perform sensitivity analysis. Our model provides insights on how simple, local interactions can achieve an emergent and robust regulatory system of collective foraging activity in ant colonies.
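
    One plausible, purely hypothetical reading of the three compartments as an ODE system, with recruitment proportional to encounters between successful returners and available nestmates. The parameter values are invented; the run shows the outcome depending on the initial in-nest workforce, echoing the bistability created by the backward bifurcation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# N: available foragers in nest, A: active outside, S: successful returners
phi, c = 0.002, 2.0       # spontaneous exit rate; recruits per returner
f, mu, r = 0.4, 0.1, 1.5  # food discovery, giving-up, and unloading rates

def colony(t, y):
    N, A, S = y
    exits = phi * N + c * r * S * N / (N + 10.0)  # interaction-based recruitment
    return [r * S + mu * A - exits,               # returners/quitters re-enter pool
            exits - (f + mu) * A,                 # foragers succeed or give up
            f * A - r * S]                        # successful trips await unloading

for N0 in (5.0, 80.0):
    sol = solve_ivp(colony, (0, 400), [N0, 1.0, 0.0], rtol=1e-8)
    print(f"initial workforce {N0:>4.0f}: "
          f"foraging activity settles near {sol.y[1, -1]:.1f}")
```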

  14. Methylome-wide association study of whole blood DNA in the Norfolk Island isolate identifies robust loci associated with age.

    PubMed

    Benton, Miles C; Sutherland, Heidi G; Macartney-Coxson, Donia; Haupt, Larisa M; Lea, Rodney A; Griffiths, Lyn R

    2017-02-28

    Epigenetic regulation of various genomic functions, including gene expression, provides mechanisms whereby an organism can dynamically respond to changes in its environment and modify gene expression accordingly. One epigenetic mechanism implicated in human aging and age-related disorders is DNA methylation. Isolated populations such as Norfolk Island (NI) should be advantageous for the identification of epigenetic factors related to aging due to reduced genetic and environmental variation. Here we conducted a methylome-wide association study of age using whole blood DNA in 24 healthy female individuals from the NI genetic isolate (aged 24-47 years). We analysed 450K methylation array data using a machine learning approach (GLMnet) to identify age-associated CpGs. We identified 497 CpG sites, mapping to 422 genes, associated with age, with 11 sites previously associated with age. The strongest associations identified were for a single CpG site in MYOF and an extended region within the promoter of DDO. These hits were validated in curated public data from 2316 blood samples (MARMAL-AID). This study is the first to report robust age associations for MYOF and DDO, both of which have plausible functional roles in aging. This study also illustrates the value of genetic isolates to reveal new associations with epigenome-level data.
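
    The GLMnet-style selection step can be sketched with scikit-learn's cross-validated elastic net; the sample size, CpG count, and effect sizes below are fabricated, and real 450K data would need the usual normalization first.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(5)
n, p = 24, 5000                       # 24 samples; stand-in for a 450K subset
age = rng.uniform(24, 47, n)

X = rng.normal(0, 1, (n, p))          # methylation values after normalization
X[:, :20] += 0.08 * (age[:, None] - age.mean())   # 20 CpGs drift with age

# Elastic net with internal cross-validation over the penalty path
model = ElasticNetCV(l1_ratio=0.5, cv=5, max_iter=20000).fit(X, age)
picked = np.flatnonzero(model.coef_)
print(f"{picked.size} CpGs selected; "
      f"{int(np.sum(picked < 20))} of the 20 true age-associated CpGs recovered")
```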

  15. Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing

    DTIC Science & Technology

    2015-08-27

    AFRL-AFOSR-VA-TR-2015-0254, Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing, Xiang-Gen Xia, University of Delaware, Grant FA9550-12-1-0055. This report describes the main research achievements during the time period cited above on the research project in the area of digital signal processing.

  16. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    PubMed

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
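
    The data-compression step this architecture targets can be sketched in a few lines: wavelet-transform the trace, keep only the largest coefficients, and reconstruct. The sampling rate, sym4 wavelet, and 5% retention level are illustrative assumptions, not the paper's design.

```python
import numpy as np
import pywt  # PyWavelets

fs = 24000
t = np.arange(0, 0.2, 1 / fs)
x = 0.1 * np.random.randn(t.size)            # background noise
for t0 in (0.05, 0.13):                      # two spike-like events
    x += -1.2 * np.exp(-((t - t0) / 3e-4) ** 2)

coeffs = pywt.wavedec(x, "sym4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
keep = np.abs(arr) >= np.quantile(np.abs(arr), 0.95)   # top 5% of coefficients
x_hat = pywt.waverec(
    pywt.array_to_coeffs(arr * keep, slices, output_format="wavedec"),
    "sym4")[:x.size]

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"~20:1 coefficient reduction, relative reconstruction error {err:.1%}")
```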

  17. Identifying influential factors of business process performance using dependency analysis

    NASA Astrophysics Data System (ADS)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
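
    A miniature version of the dependency analysis: regress a KPI on lower-level process and QoS metrics with a shallow regression tree and print the resulting dependency structure. The metric names and the data-generating model are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(6)
n = 2000  # monitored process instances

approval_time = rng.exponential(4.0, n)    # hours in the approval step
service_latency = rng.gamma(2.0, 0.3, n)   # seconds, external service QoS
retries = rng.poisson(0.5, n).astype(float)

# KPI (order fulfillment time) mostly driven by approval time and retries
kpi = 24 + 2.0 * approval_time + 6.0 * retries + rng.normal(0, 1, n)

X = np.column_stack([approval_time, service_latency, retries])
tree = DecisionTreeRegressor(max_depth=3).fit(X, kpi)
print(export_text(tree,
                  feature_names=["approval_time", "service_latency", "retries"]))
```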

  18. Intuitive robust stability metric for PID control of self-regulating processes.

    PubMed

    Arbogast, Jeffrey E; Beauregard, Brett M; Cooper, Douglas J

    2008-10-01

    Published methods establish how plant-model mismatch in the process gain and dead time impacts closed-loop stability. However, these methods assume no plant-model mismatch in the process time constant. The work presented here proposes the robust stability factor metric, RSF, to examine the effect of plant-model mismatch in the process gain, dead time, and time constant. The RSF is presented in two forms: an equation form and a visual form displayed on robustness plots derived from the Bode and Nyquist stability criteria. This understanding of robust stability is reinforced through visual examples of how closed-loop performance changes with various levels of plant-model mismatch. One example shows how plant-model mismatch in the time constant can impact closed-loop stability as much as plant-model mismatch in the gain and/or dead time. Theoretical discussion shows that the impact is greater for small dead time to time constant ratios. As the closed-loop time constant used in Internal Model Control (IMC) tuning decreases, the impact becomes significant for a larger range of dead time to time constant ratios. To complete the presentation, the RSF is used to compare the robust stability of IMC-PI tuning to other PI, PID, and PID with Filter tuning correlations.
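
    The central point, that mismatch in the time constant alone can destabilize a loop tuned on the nominal model, is easy to check numerically with a simple Bode criterion for an IMC-PI controller on a first-order-plus-dead-time (FOPDT) plant. This generic stability check and its numbers are illustrative; it is not the paper's RSF formula.

```python
import numpy as np

w = np.logspace(-2, 2, 20000)   # frequency grid (rad/time)

def stable(Kp, theta, tau, Kc, Ti):
    # Loop transfer L(jw) = PI(jw) * FOPDT(jw); stable if |L| < 1 where the
    # phase first crosses -180 deg (simple Bode criterion)
    L = (Kc * (1 + 1 / (1j * w * Ti))
         * Kp * np.exp(-1j * w * theta) / (1j * w * tau + 1))
    phase = np.unwrap(np.angle(L))
    idx = np.argmax(phase <= -np.pi)        # first -180 deg crossing
    return np.abs(L[idx]) < 1.0

# Nominal FOPDT model and IMC-PI tuning with closed-loop time constant lambda
Kp0, th0, tau0, lam = 2.0, 1.0, 10.0, 2.0
Kc, Ti = tau0 / (Kp0 * (lam + th0)), tau0

# Sweep mismatch in the plant time constant only
for tau in (10.0, 5.0, 2.5, 1.0):
    print(f"plant tau = {tau:>4}: "
          f"{'stable' if stable(Kp0, th0, tau, Kc, Ti) else 'unstable'}")
```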

  19. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), so proper design methods to reduce time and costs have to be developed, mostly based on computer-aided procedures. Optimization methods have accordingly been widely applied in sheet metal forming. At the same time, variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping processes. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. The final results indicate the feasibility of applying the proposed method to multi-response robust design.
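
    A compact version of the Kriging-plus-sampling recipe, under invented assumptions (a toy springback response in one control variable x and one noise factor z): fit a Gaussian-process metamodel on a small design of experiments, then minimize a mean-plus-spread objective estimated by Monte Carlo over the noise factor. Note how the robust setting shifts away from the deterministic optimum of x = 0.6.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)

def springback(x, z):                        # toy stamping response
    return (x - 0.6) ** 2 + 0.3 * z * (1.2 - x)

X = rng.uniform(0, 1, (40, 2))               # small DOE over (x, z)
y = springback(X[:, 0], X[:, 1]) + rng.normal(0, 0.01, 40)

gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4).fit(X, y)

xs = np.linspace(0, 1, 101)
z_mc = rng.uniform(0, 1, 500)                # Monte Carlo sample of the noise
obj = []
for x in xs:
    pred = gp.predict(np.column_stack([np.full(z_mc.size, x), z_mc]))
    obj.append(pred.mean() + 3 * pred.std())  # robust mean-plus-spread objective
print(f"robust setting x* = {xs[int(np.argmin(obj))]:.2f} "
      f"(deterministic optimum is 0.60)")
```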

  20. Phosphoproteomic profiling of tumor tissues identifies HSP27 Ser82 phosphorylation as a robust marker of early ischemia

    PubMed Central

    Zahari, Muhammad Saddiq; Wu, Xinyan; Pinto, Sneha M.; Nirujogi, Raja Sekhar; Kim, Min-Sik; Fetics, Barry; Philip, Mathew; Barnes, Sheri R.; Godfrey, Beverly; Gabrielson, Edward; Nevo, Erez; Pandey, Akhilesh

    2015-01-01

    Delays between tissue collection and tissue fixation result in ischemia and ischemia-associated changes in protein phosphorylation levels, which can misguide the examination of signaling pathway status. To identify a biomarker that serves as a reliable indicator of the ischemic changes that tumor tissues undergo, we subjected harvested xenograft tumors to room temperature for 0, 2, 10 and 30 minutes before freezing in liquid nitrogen. Multiplex TMT-labeling was conducted to achieve precise quantitation, followed by TiO2 phosphopeptide enrichment and high resolution mass spectrometry profiling. LC-MS/MS analyses revealed phosphorylation level changes at a number of phosphosites in the ischemic samples. The phosphorylation of one of these sites, S82 of the 27 kDa heat shock protein (HSP27), was especially abundant and consistently upregulated in tissues with delays in freezing as short as 2 minutes. In order to eliminate the effects of ischemia, we employed a novel cryogenic biopsy device which begins freezing tissues in situ before they are excised. Using this device, we showed that the upregulation of phosphorylation of S82 on HSP27 was abrogated. We thus demonstrate that our cryogenic biopsy device can eliminate ischemia-induced phosphoproteome alterations, and that measurements of S82 on HSP27 can be used as a robust marker of ischemia in tissues. PMID:26329039

  1. A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.

    PubMed

    Zang, Yong; Lee, J Jack

    2017-01-15

    We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Amino acid positions subject to multiple co-evolutionary constraints can be robustly identified by their eigenvector network centrality scores

    PubMed Central

    Parente, Daniel J.; Ray, J. Christian J.; Swint-Kruse, Liskin

    2015-01-01

    As proteins evolve, amino acid positions key to protein structure or function are subject to mutational constraints. These positions can be detected by analyzing sequence families for amino acid conservation or for co-evolution between pairs of positions. Co-evolutionary scores are usually rank-ordered and thresholded to reveal the top pairwise scores, but they also can be treated as weighted networks. Here, we used network analyses to bypass a major complication of co-evolution studies: For a given sequence alignment, alternative algorithms usually identify different, top pairwise scores. We reconciled results from five commonly-used, mathematically divergent algorithms (ELSC, McBASC, OMES, SCA, and ZNMI), using the LacI/GalR and 1,6-bisphosphate aldolase protein families as models. Calculations used unthresholded co-evolution scores from which column-specific properties such as sequence entropy and random noise were subtracted; “central” positions were identified by calculating various network centrality scores. When compared among algorithms, network centrality methods, particularly eigenvector centrality, showed markedly better agreement than comparisons of the top pairwise scores. Positions with large centrality scores occurred at key structural locations and/or were functionally sensitive to mutations. Further, the top central positions often differed from those with top pairwise co-evolution scores: Instead of a few strong scores, central positions often had multiple, moderate scores. We conclude that eigenvector centrality calculations reveal a robust evolutionary pattern of constraints – detectable by divergent algorithms – that occur at key protein locations. Finally, we discuss the fact that multiple patterns co-exist in evolutionary data that, together, give rise to emergent protein functions. PMID:26503808
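
    Eigenvector centrality on such a weighted network is a standard computation; below is a minimal power-iteration sketch on a toy score matrix (not the output of any of the five algorithms named above).

    ```python
    import numpy as np

    def eigenvector_centrality(W, tol=1e-10, max_iter=1000):
        """Power iteration on a symmetric non-negative weight matrix W whose
        entries are (noise-corrected) pairwise co-evolution scores."""
        x = np.ones(W.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(max_iter):
            x_new = W @ x
            x_new /= np.linalg.norm(x_new)
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # toy score matrix for 6 positions: symmetric, zero diagonal
    rng = np.random.default_rng(1)
    S = rng.random((6, 6))
    W = np.triu(S, 1) + np.triu(S, 1).T
    print("most central position:", int(np.argmax(eigenvector_centrality(W))))
    ```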

  3. Optical wafer metrology sensors for process-robust CD and overlay control in semiconductor device manufacturing

    NASA Astrophysics Data System (ADS)

    den Boef, Arie J.

    2016-06-01

    This paper presents three optical wafer metrology sensors that are used in lithography for robustly measuring the shape and position of wafers and device patterns on these wafers. The first two sensors are a level sensor and an alignment sensor that measure, respectively, a wafer height map and a wafer position before a new pattern is printed on the wafer. The third sensor is an optical scatterometer that measures critical dimension-variations and overlay after the resist has been exposed and developed. These sensors have different optical concepts but they share the same challenge that sub-nm precision is required at high throughput on a large variety of processed wafers and in the presence of unknown wafer processing variations. It is the purpose of this paper to explain these challenges in more detail and give an overview of the various solutions that have been introduced over the years to come to process-robust optical wafer metrology.

  4. Robust Low Cost Aerospike/RLV Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Ellis, David; McKechnie

    1999-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. At the same time, fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of a shrinking NASA budget. In recent years, combustion chambers of equivalent size to the Aerospike chamber have been fabricated at NASA-Marshall Space Flight Center (MSFC) using innovative, relatively low-cost, vacuum-plasma-spray (VPS) techniques. Typically, such combustion chambers are made of the copper alloy NARloy-Z. However, current research and development conducted by NASA-Lewis Research Center (LeRC) has identified a Cu-8Cr-4Nb alloy which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. In fact, researchers at NASA-LeRC have demonstrated that powder metallurgy (P/M) Cu-8Cr-4Nb exhibits better mechanical properties at 1,200 F than NARloy-Z does at 1,000 F. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost, VPS process to deposit Cu-8Cr-4Nb with mechanical properties that match or exceed those of P/M Cu-8Cr-4Nb. In addition, oxidation resistant and thermal barrier coatings can be incorporated as an integral part of the hot wall of the liner during the VPS process. Tensile properties of Cu-8Cr-4Nb material produced by VPS are reviewed and compared to material produced previously by extrusion. VPS formed combustion chamber liners have also been prepared and will be reported on following scheduled hot firing tests at NASA-Lewis.

  5. Robust Estimation of Transition Matrices in High Dimensional Heavy-tailed Vector Autoregressive Processes

    PubMed Central

    Qiu, Huitong; Xu, Sheng; Han, Fang; Liu, Han; Caffo, Brian

    2016-01-01

    Gaussian vector autoregressive (VAR) processes have been extensively studied in the literature. However, Gaussian assumptions are stringent for the heavy-tailed time series that frequently arise in finance and economics. In this paper, we develop a unified framework for modeling and estimating heavy-tailed VAR processes. In particular, we generalize the Gaussian VAR model to an elliptical VAR model that naturally accommodates heavy-tailed time series. Under this model, we develop a quantile-based robust estimator for the transition matrix of the VAR process. We show that the proposed estimator achieves parametric rates of convergence in high dimensions. This is the first work to analyze heavy-tailed high dimensional VAR processes. As an application of the proposed framework, we investigate Granger causality in the elliptical VAR process, and show that the robust transition matrix estimator induces sign-consistent estimators of Granger causality. The empirical performance of the proposed methodology is demonstrated on both synthetic and real data. We show that the proposed estimator is robust to heavy tails and exhibits superior performance in stock price prediction. PMID:28133642
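
    The exact quantile-based estimator is not given in this record; the sketch below is a stand-in in the same spirit, plugging rank-based correlations (consistent under elliptical distributions) and a MAD scale into the Yule-Walker relation A = Sigma_1 inv(Sigma_0).

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def robust_corr(x, y):
        """sin(pi/2 * Kendall tau): a consistent estimate of the Pearson
        correlation under elliptical distributions, resistant to heavy tails."""
        return np.sin(np.pi / 2 * kendalltau(x, y)[0])

    def robust_var1_transition(X):
        """Robust transition estimate for X_t = A X_{t-1} + noise, X: (T, d),
        via A = Sigma_lag1 @ inv(Sigma_lag0) built from rank correlations
        and a MAD-based scale instead of sample moments."""
        T, d = X.shape
        scale = 1.4826 * np.median(np.abs(X - np.median(X, axis=0)), axis=0)
        S0 = np.empty((d, d))
        S1 = np.empty((d, d))
        for i in range(d):
            for j in range(d):
                S0[i, j] = robust_corr(X[:, i], X[:, j]) * scale[i] * scale[j]
                S1[i, j] = robust_corr(X[1:, i], X[:-1, j]) * scale[i] * scale[j]
        return S1 @ np.linalg.inv(S0)
    ```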

  6. Identifying microorganisms responsible for ecologically significant biogeochemical processes.

    PubMed

    Madsen, Eugene L

    2005-05-01

    Throughout evolutionary time, and each day in every habitat throughout the globe, microorganisms have been responsible for maintaining the biosphere. Despite the crucial part that they play in the cycling of nutrients in habitats such as soils, sediments and waters, only rarely have the microorganisms actually responsible for key processes been identified. Obstacles that have traditionally impeded fundamental microbial ecology inquiries are now yielding to technical advancements that have important parallels in medical microbiology. The pace of new discoveries that document ecological processes and their causative agents will no doubt accelerate in the near future, and might assist in ecosystem management.

  7. Robust control chart for change point detection of process variance in the presence of disturbances

    NASA Astrophysics Data System (ADS)

    Huat, Ng Kooi; Midi, Habshah

    2015-02-01

    A conventional control chart for detecting shifts in the variance of a process is typically developed for the common situation in which the nominal value of the variance is unknown, and rests on the essential assumption that the underlying distribution of the quality characteristic is normal. This is not always the case in practice, and it is fairly evident that the statistical estimates used for these charts are very sensitive to the occurrence of occasional outliers. Robust control charts have therefore been put forward for situations where the underlying normality assumption is not met, serving as a remedial measure for contamination in process data. The existing approach, the Biweight A pooled residuals method, is resistant to localized disturbances but lacks efficiency when disturbances are diffuse. To be concrete, diffuse disturbances are those equally likely to perturb any observation, while a localized disturbance affects every member of one or more particular subsamples. The efficiency of an estimator in the presence of disturbances can depend heavily on whether the disturbances are distributed throughout the observations or concentrated in a few subsamples. Hence, in this paper we propose a new robust MBAS control chart that uses a subsample-based robust Modified Biweight A scale estimator of the process standard deviation. It has strong resistance to both localized and diffuse disturbances, as well as high efficiency when no disturbances are present. The performance of the proposed robust chart was evaluated against several decision criteria in a Monte Carlo simulation study.
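
    The Modified Biweight A scale estimator itself is not specified in this record; as a sketch of the family it belongs to, here is the classic Tukey biweight scale applied subsample by subsample and pooled.

    ```python
    import numpy as np

    def biweight_scale(x, c=9.0):
        """Tukey biweight (midvariance-style) scale estimate: observations
        far from the median (|u| >= 1) receive zero weight."""
        x = np.asarray(x, dtype=float)
        m = np.median(x)
        mad = np.median(np.abs(x - m))
        if mad == 0.0:
            return 0.0
        u = (x - m) / (c * mad)
        keep = np.abs(u) < 1
        num = np.sum((x[keep] - m) ** 2 * (1 - u[keep] ** 2) ** 4)
        den = np.sum((1 - u[keep] ** 2) * (1 - 5 * u[keep] ** 2))
        return np.sqrt(len(x) * num) / abs(den)

    # pooled subsample estimate of sigma for a robust S-type chart
    rng = np.random.default_rng(2)
    subsamples = rng.normal(0.0, 1.0, size=(25, 5))
    subsamples[3, 0] += 8.0                 # one localized disturbance
    print(np.mean([biweight_scale(s) for s in subsamples]))
    ```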

  8. Some Results on the Analysis of Stochastic Processes with Uncertain Transition Probabilities and Robust Optimal Control

    SciTech Connect

    Keyong Li; Seong-Cheol Kang; I. Ch. Paschalidis

    2007-09-01

    This paper investigates stochastic processes that are modeled by a finite number of states but whose transition probabilities are uncertain and possibly time-varying. The treatment of uncertain transition probabilities is important because there appears to be a disconnection between the practice and theory of stochastic processes due to the difficulty of assigning exact probabilities to real-world events. Also, when the finite-state process comes as a reduced model of one that is more complicated in nature (possibly in a continuous state space), existing results do not facilitate rigorous analysis. Two approaches are introduced here. The first focuses on processes with one terminal state and the properties that affect their convergence rates. When a process is on a complicated graph, the bound of the convergence rate is not trivially related to that of the probabilities of individual transitions. Discovering the connection between the two led us to define two concepts, which we call 'progressivity' and 'sortedness', and to a new comparison theorem for stochastic processes. An optimality criterion for robust optimal control also derives from this comparison theorem. In addition, this result is applied to the case of mission-oriented autonomous robot control to produce performance estimates within a control framework that we propose. The second approach is in the MDP framework. We introduce our preliminary work on optimistic robust optimization, which aims at finding solutions that guarantee upper bounds on the cumulative discounted cost with prescribed probabilities. The motivation here is to address the issue that the standard robust optimal solution tends to be overly conservative.
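
    The paper's optimistic formulation is only described above, but the standard robust counterpart it modifies is easy to sketch: value iteration in which an adversary picks the worst transition law inside interval bounds. All names and values below are illustrative.

    ```python
    import numpy as np

    def worst_case_expectation(v, p_lo, p_hi):
        """max p.v over the box p_lo <= p <= p_hi with sum(p) = 1: the
        adversary moves the free probability mass to the costliest states."""
        p = p_lo.copy()
        budget = 1.0 - p.sum()
        for i in np.argsort(-v):              # costliest successors first
            add = min(p_hi[i] - p_lo[i], budget)
            p[i] += add
            budget -= add
        return p @ v

    def robust_value_iteration(cost, P_lo, P_hi, gamma=0.95, n_iter=300):
        """cost[s, a]; P_lo/P_hi[s, a, s'] bound the uncertain transitions."""
        S, A = cost.shape
        v = np.zeros(S)
        for _ in range(n_iter):
            q = np.array([[cost[s, a] + gamma *
                           worst_case_expectation(v, P_lo[s, a], P_hi[s, a])
                           for a in range(A)] for s in range(S)])
            v = q.min(axis=1)
        return v
    ```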

  9. Robust matched-field processing using a coherent broadband white noise constraint processor.

    PubMed

    Debever, Claire; Kuperman, W A

    2007-10-01

    Adaptive matched-field processing (MFP) is not only very sensitive to mismatch, but also requires the received sound levels to exceed a threshold signal-to-noise ratio. Furthermore, acoustic sources and interferers have to move slowly enough across resolution cells so that a full rank cross-spectral density matrix can be constructed. Coherent-broadband MFP takes advantage of the temporal complexity of the signal, and therefore offers an additional gain over narrow-band processing by augmenting the dimension of the data space. However, the sensitivity to mismatch is also increased in the process, since a single constraint is usually not enough to achieve robustness and the snapshot requirement becomes even more problematic. The white noise constraint method, typically used for narrow-band processing, is applied to a previously derived broadband processor to enhance its robustness to environmental mismatch and snapshot deficiency. The broadband white noise constraint theory is presented and validated through simulation and experimental data. The dynamic range bias obtained from the snapshot-deficient processing is shown to be consistent with that previously presented in the literature for a single frequency.
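
    In the narrow-band case, the white noise constraint is commonly enforced by raising a diagonal loading level until the white noise gain meets the constraint; the paper applies the same idea to a coherently stacked broadband data vector. A minimal narrowband sketch, assuming the classic Cox-style implementation:

    ```python
    import numpy as np

    def wnc_weights(R, d, delta_db=-3.0, eps0=1e-6, growth=2.0, max_iter=60):
        """MVDR weights with a white noise gain (WNG) constraint: diagonal
        loading eps is increased until WNG >= delta. delta_db is relative
        to the conventional beamformer's WNG of N."""
        N = len(d)
        delta = N * 10.0 ** (delta_db / 10.0)
        eps = eps0
        for _ in range(max_iter):
            w = np.linalg.solve(R + eps * np.eye(N), d)
            w = w / (d.conj() @ w)              # distortionless: w^H d = 1
            wng = 1.0 / np.real(w.conj() @ w)   # |w^H d|^2 / (w^H w), |w^H d| = 1
            if wng >= delta:
                break
            eps *= growth
        return w
    ```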

  10. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivates the investigation of information processing paradigms. One such paradigm is concept based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  11. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
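
    The similarity kernel at the heart of multiplet identification is the peak normalized cross-correlation between event waveforms; here is a sketch of that kernel alone (the console offloading itself is out of scope for a snippet).

    ```python
    import numpy as np

    def max_normalized_xcorr(a, b):
        """Peak of the normalized cross-correlation between two waveforms;
        values near 1 flag candidate multiplet (repeating) events."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.correlate(a, b, mode="full").max()

    rng = np.random.default_rng(3)
    w = rng.normal(size=512)
    event1 = w + 0.1 * rng.normal(size=512)    # same source, fresh noise
    event2 = rng.normal(size=512)              # unrelated event
    print(max_normalized_xcorr(w, event1))     # close to 1
    print(max_normalized_xcorr(w, event2))     # close to 0
    ```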

  12. CORROSION PROCESS IN REINFORCED CONCRETE IDENTIFIED BY ACOUSTIC EMISSION

    NASA Astrophysics Data System (ADS)

    Kawasaki, Yuma; Kitaura, Misuzu; Tomoda, Yuichi; Ohtsu, Masayasu

    Deterioration of Reinforced Concrete (RC) due to salt attack is known as one of the most serious problems. Thus, development of non-destructive evaluation (NDE) techniques is important to assess the corrosion process. Reinforcement in concrete normally does not corrode because of a passive film on the surface of the reinforcement. When the chloride concentration at the reinforcement exceeds the threshold level, the passive film is destroyed. Thus maintenance is desirable at an early stage. In this study, to identify the onset of corrosion and the nucleation of corrosion-induced cracking in concrete due to expansion of corrosion products, continuous acoustic emission (AE) monitoring is applied. Accelerated corrosion and cyclic wet and dry tests are performed in a laboratory. The SiGMA (Simplified Green's functions for Moment tensor Analysis) procedure is applied to AE waveforms to clarify the source kinematics of micro-cracks: their locations, types, and orientations. Results show that the onset of corrosion and the nucleation of corrosion-induced cracking in concrete are successfully identified. Additionally, cross-sections inside the reinforcement are observed by a scanning electron microscope (SEM). From these results, a great promise for AE techniques to monitor salt damage at an early stage in RC structures is demonstrated.

  13. RECORD processing - A robust pathway to component-resolved HR-PGSE NMR diffusometry

    NASA Astrophysics Data System (ADS)

    Stilbs, Peter

    2010-12-01

    It is demonstrated that very robust spectral component separation can be achieved through global least-squares CORE data analysis of automatically or manually selected spectral regions in complex NMR spectra in a high-resolution situation. This procedure (acronym RECORD) only takes a few seconds and quite significantly improves the effective signal/noise of the experiment as compared to individual frequency channel fitting, like in the generic HR-DOSY approach or when using basic peak height or integral fitting. Results from RECORD processing can be further used as starting value estimates for subsequent CORE analysis of spectral data with higher degree of spectral overlap.

  14. Inference of Longevity-Related Genes from a Robust Coexpression Network of Seed Maturation Identifies Regulators Linking Seed Storability to Biotic Defense-Related Pathways

    PubMed Central

    Righetti, Karima; Vu, Joseph Ly; Pelletier, Sandra; Vu, Benoit Ly; Glaab, Enrico; Lalanne, David; Pasha, Asher; Patel, Rohan V.; Provart, Nicholas J.; Verdier, Jerome; Leprince, Olivier

    2015-01-01

    Seed longevity, the maintenance of viability during storage, is a crucial factor for preservation of genetic resources and ensuring proper seedling establishment and high crop yield. We used a systems biology approach to identify key genes regulating the acquisition of longevity during seed maturation of Medicago truncatula. Using 104 transcriptomes from seed developmental time courses obtained in five growth environments, we generated a robust, stable coexpression network (MatNet), thereby capturing the conserved backbone of maturation. Using a trait-based gene significance measure, a coexpression module related to the acquisition of longevity was inferred from MatNet. Comparative analysis of the maturation processes in M. truncatula and Arabidopsis thaliana seeds and mining Arabidopsis interaction databases revealed conserved connectivity for 87% of longevity module nodes between both species. Arabidopsis mutant screening for longevity and maturation phenotypes demonstrated high predictive power of the longevity cross-species network. Overrepresentation analysis of the network nodes indicated biological functions related to defense, light, and auxin. Characterization of defense-related wrky3 and nf-x1-like1 (nfxl1) transcription factor mutants demonstrated that these genes regulate some of the network nodes and exhibit impaired acquisition of longevity during maturation. These data suggest that seed longevity evolved by co-opting existing genetic pathways regulating the activation of defense against pathogens. PMID:26410298

  15. Quantifying Community Assembly Processes and Identifying Features that Impose Them

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Chen, Xingyuan; Kennedy, David W.; Murray, Christopher J.; Rockhold, Mark L.; Konopka, Allan

    2013-06-06

    Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  16. Development of a robust calibration model for nonlinear in-line process data

    PubMed

    Despagne; Massart; Chabot

    2000-04-01

    A comparative study involving a global linear method (partial least squares), a local linear method (locally weighted regression), and a nonlinear method (neural networks) has been performed in order to implement a calibration model on an industrial process. The models were designed to predict the water content in a reactor during a distillation process, using in-line measurements from a near-infrared analyzer. Curved effects due to changes in temperature and variations between the different batches make the problem particularly challenging. The influence of spectral range selection and data preprocessing has been studied. With each calibration method, specific procedures have been applied to promote model robustness. In particular, the use of a monitoring set with neural networks does not always prevent overfitting. Therefore, we developed a model selection criterion based on the determination of the median of monitoring error over replicate trials. The back-propagation neural network models selected were found to outperform the other methods on independent test data.
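
    The proposed selection criterion, the median of the monitoring error over replicate trials, can be sketched as follows; sklearn's MLPRegressor stands in for the paper's back-propagation networks, and X, y and the candidate sizes are placeholders.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    def median_monitoring_error(X, y, n_hidden, n_replicates=10):
        """Median monitoring-set MSE over replicate trainings: a selection
        criterion insensitive to occasional badly converged runs."""
        errors = []
        for seed in range(n_replicates):
            X_tr, X_mon, y_tr, y_mon = train_test_split(
                X, y, test_size=0.3, random_state=seed)
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                               max_iter=2000, random_state=seed).fit(X_tr, y_tr)
            errors.append(np.mean((net.predict(X_mon) - y_mon) ** 2))
        return np.median(errors)

    # choose the architecture with the lowest median monitoring error, e.g.
    # best = min((2, 4, 8), key=lambda h: median_monitoring_error(X, y, h))
    ```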

  17. Robust Canonical Coherence for Quasi-Cyclostationary Processes: Geomagnetism and Seismicity in Peru

    NASA Astrophysics Data System (ADS)

    Lepage, K.; Thomson, D. J.

    2007-12-01

    Preliminary results suggesting a connection between long-period, geomagnetic fluctuations and long-period, seismic fluctuations are presented. Data from the seismic detector, NNA, situated in Ñaña, Peru, is compared to geomagnetic data from HUA, located in Huancayo, Peru. The high-pass filtered data from the two stations exhibits quasi-cyclostationary pulsation with daily periodicity, and suggests correspondence. The pulsation contains power predominantly between 2000 μHz and 8000 μHz, with the geomagnetic pulses leading by approximately 4 to 5 hours. A many-data-section, multitaper, robust canonical coherence analysis of the two, three-component data sets is performed. The method, involving an adaptation, suitable for quasi-cyclostationary processes, of the technique presented in "Robust estimation of power spectra" (by Kleiner, Martin and Thomson, Journal of the Royal Statistical Society, Series B Methodological, 1979), is described. Simulations are presented exploring the applicability of the method. Canonical coherence is detected, predominantly between the geomagnetic field and the vertical component of seismic velocity, in the band of frequencies between 1500 μHz and 2500 μHz. Subsequent group delay estimates between the geomagnetic components and seismic velocity vertical at frequencies corresponding to large canonical coherence are computed. The estimated group delays are 8 min between geomagnetic east and seismic velocity vertical, 16 min between geomagnetic north and seismic velocity vertical, and 11 min between geomagnetic vertical and seismic velocity vertical. Possible coupling mechanisms are discussed.

  18. Low Power and Robust Domino Circuit with Process Variations Tolerance for High Speed Digital Signal Processing

    NASA Astrophysics Data System (ADS)

    Wang, Jinhui; Peng, Xiaohong; Li, Xinxin; Hou, Ligang; Wu, Wuchen

    Utilizing the sleep switch transistor technique and the dual threshold voltage technique, a source following evaluation gate (SEFG) based domino circuit is presented in this paper for simultaneously suppressing the leakage current and enhancing noise immunity. Simulation results show that the leakage current of the proposed design can be reduced by 43%, 62%, and 67%, while improving the noise margin by 19.7%, 3.4%, and 12.5%, as compared to the standard low threshold voltage circuit, the standard dual threshold voltage circuit, and the SEFG structure, respectively. The dependence of the static-state leakage current on the combination of input and clock signals is also analyzed, and the minimum-leakage states of different domino AND gates are obtained. Finally, the leakage power characteristic under process variations is discussed.

  19. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372

  20. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice.

    PubMed

    Plint, Simon; Patterson, Fiona

    2010-06-01

    The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, and is perceived to be fair by candidates and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine markable short listing assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control rather than the imposition of change without perceived legitimate authority.

  1. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The capability to automatically design GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness, and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although the framework generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055

  3. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  4. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

  5. Processing and Properties of Fiber Reinforced Polymeric Matrix Composites. Part 2; Processing Robustness of IM7/PETI Polyimide Composites

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    1996-01-01

    The processability of a phenylethynyl terminated imide (PETI) resin matrix composite was investigated. Unidirectional prepregs were made by coating an N-methylpyrrolidone solution of the amide acid oligomer onto unsized IM7. Two batches of prepregs were used: one was made by NASA in-house, and the other was from an industrial source. The composite processing robustness was investigated with respect to the effect of B-staging conditions, the prepreg shelf life, and the optimal processing window. Rheological measurements indicated that PETI's processability was only slightly affected over a wide range of B-staging temperatures (from 250 C to 300 C). The open hole compression (OHC) strength values were statistically indistinguishable among specimens consolidated using various B-staging conditions. Prepreg rheology and OHC strengths were also found not to be affected by prolonged (i.e., up to 60 days) ambient storage. An optimal processing window was established using response surface methodology. It was found that IM7/PETI composite is more sensitive to the consolidation temperature than to the consolidation pressure. A good consolidation was achievable at 371 C/100 Psi, which yielded an OHC strength of 62 Ksi at room temperature. However, processability declined dramatically at temperatures below 350 C.

  6. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes.

    PubMed

    Stöckel, Jana; Welsh, Eric A; Liberton, Michelle; Kunnvakkam, Rangesh; Aurora, Rajeev; Pakrasi, Himadri B

    2008-04-22

    Cyanobacteria are photosynthetic organisms and are the only prokaryotes known to have a circadian lifestyle. Unicellular diazotrophic cyanobacteria such as Cyanothece sp. ATCC 51142 produce oxygen and can also fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. To accommodate such antagonistic processes, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of the two processes: photosynthesis during the day and nitrogen fixation at night. Although previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5,000 genes in Cyanothece 51142 during two consecutive diurnal periods. Our analysis showed that approximately 30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece 51142. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes encoding enzymes in numerous individual biochemical pathways, such as glycolysis, oxidative pentose phosphate pathway, and glycogen metabolism, were coregulated and maximally expressed at distinct phases during the diurnal cycle. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism in a photosynthetic bacterium.
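
    The statistics used to call roughly 30% of profiles "robustly oscillating" are not detailed in this record; a generic way to score diurnal oscillation is the R-squared of a 24 h harmonic (cosinor-style) fit, sketched below on synthetic data.

    ```python
    import numpy as np

    def diurnal_r2(expr, t, period=24.0):
        """R^2 of a single-harmonic (cosinor-style) fit; values near 1
        indicate a strongly oscillating diurnal expression profile."""
        X = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * t / period),
                             np.sin(2 * np.pi * t / period)])
        beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
        resid = expr - X @ beta
        return 1.0 - resid.var() / expr.var()

    t = np.arange(0.0, 48.0, 4.0)       # two diurnal cycles, 4 h sampling
    rng = np.random.default_rng(4)
    gene = 5 + 2 * np.cos(2 * np.pi * (t - 6) / 24) + rng.normal(0, 0.3, t.size)
    print(round(diurnal_r2(gene, t), 2))   # close to 1 -> oscillating
    ```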

  7. Directed International Technological Change and Climate Policy: New Methods for Identifying Robust Policies Under Conditions of Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Molina-Perez, Edmundo

    It is widely recognized that international environmental technological change is key to reduce the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members, and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement arrived at the COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation as it is now necessary to align large scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options and ultimately the presence of climate and technological change deep uncertainty, climate financing institutions such as the GCF, require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that the application of new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003) to the study of international technological change and climate policy provides useful insights that can be used for designing a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM) which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios. These scenarios combine five sources of uncertainty

  8. Using DRS during breast conserving surgery: identifying robust optical parameters and influence of inter-patient variation

    PubMed Central

    de Boer, Lisanne L.; Hendriks, Benno H. W.; van Duijnhoven, Frederieke; Peeters-Baas, Marie-Jeanne T. F. D. Vrancken; Van de Vijver, Koen; Loo, Claudette E.; Jóźwiak, Katarzyna; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2016-01-01

    Successful breast conserving surgery consists of complete removal of the tumor while sparing healthy surrounding tissue. Despite currently available imaging and margin assessment tools, recognizing tumor tissue at a resection margin during surgery is challenging. Diffuse reflectance spectroscopy (DRS), which uses light for tissue characterization, can potentially guide surgeons to prevent tumor positive margins. However, inter-patient variation and changes in tissue physiology occurring during the resection might hamper this light-based technology. Here we investigate how inter-patient variation and tissue status (in vivo vs ex vivo) affect the performance of the DRS optical parameters. In vivo and ex vivo measurements of 45 breast cancer patients were obtained and quantified with an analytical model to acquire the optical parameters. The optical parameter representing the ratio between fat and water provided the best discrimination between normal and tumor tissue, with an area under the receiver operating characteristic curve of 0.94. There was no substantial influence of other patient factors such as menopausal status on optical measurements. Contrary to expectations, normalization of the optical parameters did not improve the discriminative power. Furthermore, measurements taken in vivo were not significantly different from the measurements taken ex vivo. These findings indicate that DRS is a robust technology for the detection of tumor tissue during breast conserving surgery. PMID:28018735

  9. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
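
    The first three tests in the sequence map directly onto scipy; the sketch below (not the authors' code) applies them to one scatterplot, with quantile bins of the input defining the groups for the central-tendency test.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr, kruskal

    def scatterplot_tests(x, y, n_bins=5):
        """(i) linear, (ii) monotonic, and (iii) central-tendency pattern
        tests for one scatterplot; small p-values flag an influential input."""
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
        groups = [y[idx == k] for k in range(n_bins)]
        return {"pearson_p": pearsonr(x, y)[1],
                "spearman_p": spearmanr(x, y)[1],
                "kruskal_p": kruskal(*groups)[1]}

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 1, 300)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)  # non-monotonic effect
    print(scatterplot_tests(x, y))  # Kruskal-Wallis catches what Pearson misses
    ```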

  10. Method for processing seismic data to identify anomalous absorption zones

    DOEpatents

    Taner, M. Turhan

    2006-01-03

    A method is disclosed for identifying zones anomalously absorptive of seismic energy. The method includes jointly time-frequency decomposing seismic traces, low frequency bandpass filtering the decomposed traces to determine a general trend of mean frequency and bandwidth of the seismic traces, and high frequency bandpass filtering the decomposed traces to determine local variations in the mean frequency and bandwidth of the seismic traces. Anomalous zones are determined where there is difference between the general trend and the local variations.
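
    Here is a sketch of the claimed computation, with the joint time-frequency decomposition realized as an STFT and the two bandpass steps approximated by a moving-average split into trend and local variation; parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.signal import stft

    def absorption_indicator(trace, fs, nperseg=128, win=25):
        """Centroid (mean) frequency of each STFT column, split into a
        smooth general trend and a local variation; anomalously absorptive
        zones depress the local mean frequency relative to the trend."""
        f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
        power = np.abs(Z) ** 2 + 1e-12
        mean_f = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)
        trend = np.convolve(mean_f, np.ones(win) / win, mode="same")
        return t, trend, mean_f - trend    # times, trend, local variation
    ```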

  11. Identifying and tracking dynamic processes in social networks

    NASA Astrophysics Data System (ADS)

    Chung, Wayne; Savell, Robert; Schütt, Jan-Peter; Cybenko, George

    2006-05-01

    The detection and tracking of embedded malicious subnets in an active social network can be computationally daunting due to the quantity of transactional data generated in the natural interaction of large numbers of actors comprising a network. In addition, detection of illicit behavior may be further complicated by evasive strategies designed to camouflage the activities of the covert subnet. In this work, we move beyond traditional static methods of social network analysis to develop a set of dynamic process models which encode various modes of behavior in active social networks. These models will serve as the basis for a new application of the Process Query System (PQS) to the identification and tracking of covert dynamic processes in social networks. We present a preliminary result from application of our technique in a real-world data stream: the Enron email corpus.

  12. Application of thermoluminescence technique to identify radiation processed foods

    NASA Astrophysics Data System (ADS)

    Kiyak, N.

    1995-02-01

    Research studies reported by various authors have shown that a few methods, one of which is the thermoluminescence technique, may be suitable for identification of certain irradiated species and foods containing bones. This study is an application of the thermoluminescence technique for identifying irradiated samples. The investigation was carried out on different types of foodstuffs, such as onions, potatoes, and kiwi. Measurements show that the technique can be applied as a reliable method to distinguish irradiated food products from non-irradiated ones. The results also demonstrate that it is possible to use this method for determining the absorbed dose of irradiated samples from the established dose-effect curve.

  13. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes

    SciTech Connect

    Stockel, Jana; Welsh, Eric A.; Liberton, Michelle L.; Kunnavakkam, Rangesh V.; Aurora, Rajeev; Pakrasi, Himadri B.

    2008-04-22

    Cyanobacteria are oxygenic photosynthetic organisms, and the only prokaryotes known to have a circadian cycle. Unicellular diazotrophic cyanobacteria such as Cyanothece 51142 can fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. Thus, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of two processes: photosynthesis during the day, and nitrogen fixation at night. While previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5000 genes in Cyanothece 51142 during two consecutive diurnal periods. We found that ~30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes in numerous individual pathways, such as glycolysis, the pentose phosphate pathway and glycogen metabolism, were co-regulated and maximally expressed at distinct phases during the diurnal cycle. Our analyses suggest that the demands of nitrogen fixation greatly influence major metabolic activities inside Cyanothece cells and thus drive various cellular activities. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism of a photosynthetic bacterium.

  14. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust de-convolution algorithm, the Gold algorithm, which was used to de-convolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel de-convolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R, and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold de-convolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
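
    Gold's algorithm is a multiplicative, positivity-preserving iteration; the sketch below shows it on a toy two-echo waveform, with a dense circular-convolution matrix standing in for the FFT-based implementation a real lidar pipeline would use.

    ```python
    import numpy as np

    def gold_deconvolution(y, h, n_iter=2000):
        """Gold's iterative deconvolution: x <- x * (A^T y) / (A^T A x),
        which keeps the solution non-negative (y and h assumed non-negative)."""
        n = len(y)
        h_pad = np.pad(h, (0, n - len(h)))
        A = np.array([np.roll(h_pad, k) for k in range(n)]).T  # conv matrix
        At_y = A.T @ y
        AtA = A.T @ A
        x = np.full(n, float(np.mean(y)) + 1e-12)
        for _ in range(n_iter):
            x = x * At_y / (AtA @ x + 1e-12)
        return x

    # two overlapping echoes blurred by a Gaussian system response
    h = np.exp(-0.5 * ((np.arange(41) - 20) / 4.0) ** 2)
    truth = np.zeros(200)
    truth[[80, 95]] = [1.0, 0.6]
    y = np.convolve(truth, h)[:200]
    x = gold_deconvolution(y, h)
    print("recovered peaks near:", sorted(np.argsort(x)[-2:]))
    ```

    The multiplicative update never produces negative amplitudes, which is the property that lets it sharpen overlapping echoes without the ringing a naive inverse filter would introduce.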

  15. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method.

  16. Identifiability and Estimation in Random Translations of Marked Point Processes.

    DTIC Science & Technology

    1982-10-01

    [Garbled OCR of a DTIC report form; recoverable keywords: inversion of the Laplace transform, Hermite distribution, parametric and non-parametric estimation.] Parametric estimation of h(.) and P(.): if h(.) and P(.) belong to a certain known family of functions with some unknown parameters, the expression (3)... complex type of data is required and nothing can be learned about the arrival process. Non-parametric estimation of P(.): let h(.) be a completely...

  17. An Excel Workbook for Identifying Redox Processes in Ground Water

    USGS Publications Warehouse

    Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.

    2009-01-01

    The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. The number of samples that can be analyzed
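
    A minimal sketch of the kind of threshold logic the workbook's VBA program encodes is given below. The thresholds and branch order here are illustrative placeholders, not the published McMahon and Chapelle (2008) criteria; consult the report for the actual decision rules.

```python
# Illustrative threshold-based redox assignment in the spirit of the
# workbook's logic. Threshold values are placeholders, NOT the published
# framework's criteria.
def assign_redox(o2, no3, mn, fe, so4, sulfide):   # all concentrations in mg/L
    if o2 >= 0.5:
        return "Oxic: O2 reduction"
    if no3 >= 0.5:
        return "Anoxic: NO3 reduction"
    if mn >= 0.05 and fe < 0.1:
        return "Anoxic: Mn(IV) reduction"
    if fe >= 0.1:
        # sulfide relative to Fe2+ separates Fe(III)- from SO4-reducing
        # conditions (the Chapelle and others extension); ratio is illustrative
        if so4 >= 0.5 and sulfide > 0 and fe / sulfide < 10:
            return "Anoxic: SO4 reduction"
        return "Anoxic: Fe(III) reduction"
    return "Indeterminate"

print(assign_redox(o2=0.2, no3=0.1, mn=0.02, fe=0.8, so4=20.0, sulfide=0.3))
```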

  18. Robust superhydrophobic transparent coatings fabricated by a low-temperature sol-gel process

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Heng; Lin, Chao-Sung

    2014-06-01

    A coating with robust, superhydrophobic, and transparent properties was fabricated on glass substrates by a sol-gel method at a temperature of 80 °C. The coating was formed in a solution containing silica nanoparticles and silicic acid, in which the ratio of silica nanoparticles to silicic acid was varied to tune the roughness of the coating. Subsequently, the as-deposited coating was dip-coated with a low-surface-energy material, 1H,1H,2H,2H-perfluorooctyltrichlorosilane. The coated glass substrate was characterized in terms of surface morphology, optical transmittance, and water and CH2I2 contact angles, and its chemical as well as mechanical stability was evaluated by ultrasonication in ethanol for 120 min. The results showed that the coating had a water contact angle exceeding 160°, a sliding angle lower than 10°, and a CH2I2 static contact angle of approximately 150°. The transmittance of the coating was reduced by less than 5% compared to that of the bare glass substrate at wavelengths above 500 nm. Moreover, the properties of the coating hardly changed after the ultrasonication test, and the coating retained its superhydrophobicity after water droplet impact. Because the fabrication process is performed at low temperature, it is feasible for scale-up production with low energy consumption.

  19. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2012-12-07

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
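
    A minimal sketch of the surrogate idea follows: train a Gaussian process on a few evaluated scenarios, then query it in place of the expensive patient model. The one-dimensional "setup shift to quality metric" mapping and all numbers are toy assumptions; the paper's patient model is a full dose calculation.

```python
# Sketch: GP surrogate trained on very few samples of an expensive model,
# then used for a cheap Monte Carlo over the geometric uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def patient_model(shift_mm):        # expensive simulation (toy stand-in)
    return np.exp(-0.5 * (shift_mm / 3.0) ** 2)   # dose metric vs. setup shift

X_train = np.linspace(-6, 6, 7).reshape(-1, 1)    # only 7 evaluated scenarios
y_train = patient_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)).fit(X_train, y_train)

# Monte Carlo over the geometric uncertainty, now nearly free:
shifts = np.random.default_rng(0).normal(0.0, 2.0, 10000).reshape(-1, 1)
metrics = gp.predict(shifts)
print("mean, std of quality metric:", metrics.mean(), metrics.std())
```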

  20. On the robustness of prime response retrieval processes: evidence from auditory negative priming without probe interference.

    PubMed

    Mayr, Susanne; Buchner, Axel

    2014-02-01

    Visual negative priming has been shown to depend on the presence of probe distractors, a finding that has traditionally been seen to support the episodic retrieval model of negative priming; however, facilitated prime-to-probe contingency learning might also underlie this effect. In four sound identification experiments, the role of probe distractor interference in auditory negative priming was investigated. In each experiment, a group of participants was exposed to probe distractor interference while another group ran the task in the absence of probe distractors. Experiments 1A, 1B, and 1C varied in the extent to which fast versus accurate responding was required. Between Experiments 1 and 2, the spatial cueing of the to-be-attended ear was varied. Whereas participants switched ears from prime to probe in Experiment 1, they kept a stable attentional focus throughout Experiment 2. For trials with probe distractors, a negative priming effect was present in all experiments. For trials without probe distractors, the only ubiquitous after-effect of ignoring a prime distractor was an increase in prime response errors in ignored repetition compared with control trials, indicating that prime response retrieval processes took place. Whether negative priming beyond this error increase was found depended on the stability of the attentional focus. The findings suggest that several mechanisms underlie auditory negative priming, with the only robust one being prime response retrieval.

  1. Fabrication of robust micro-patterned polymeric films via static breath-figure process and vulcanization.

    PubMed

    Li, Lei; Zhong, Yawen; Gong, Jianliang; Li, Jian; Huang, Jin; Ma, Zhi

    2011-02-15

    Here, we present the preparation of thermally stable and solvent-resistant micro-patterned polymeric films via a static breath-figure process and subsequent vulcanization, using a commercially available triblock polymer, polystyrene-b-polyisoprene-b-polystyrene (SIS). The vulcanized honeycomb-structured SIS films became self-supporting, resistant to a wide range of organic solvents, and thermally stable up to 350 °C for 2 h, an increase of more than 300 K compared to the uncross-linked films. This superior robustness could be attributed to the high degree of polyisoprene cross-linking. The versatility of the methodology was demonstrated by applying it to another commercially available triblock polymer, polystyrene-b-polybutadiene-b-polystyrene (SBS). In particular, hydroxy groups were introduced into SBS by hydroboration. Functionalized two-dimensional micro-patterns suitable for site-directed grafting were created from the hydroxyl-containing polymers. In addition, the fixed microporous structures could be replicated to fabricate textured positive PDMS stamps. This simple technique offers new prospects in the fields of micro-patterning, soft lithography, and templates.

  2. A Robust Power Remote Manipulator for Use in Waste Sorting, Processing, and Packaging - 12158

    SciTech Connect

    Cole, Matt; Martin, Scott

    2012-07-01

    Disposition of radioactive waste is one of the Department of Energy's (DOE's) highest priorities. A critical component of the waste disposition strategy is shipment of Transuranic (TRU) waste from DOE's Oak Ridge Reservation to the Waste Isolation Plant Project (WIPP) in Carlsbad, New Mexico. This is the mission of the DOE TRU Waste Processing Center (TWPC). The remote-handled TRU waste at the Oak Ridge Reservation is currently in a mixed waste form that must be repackaged to meet WIPP Waste Acceptance Criteria (WAC). Because this remote-handled legacy waste is very diverse, sorting, size reducing, and packaging will require equipment flexibility and strength that are not possible with standard master-slave manipulators. To perform the wide range of tasks necessary with such diverse, highly contaminated material, TWPC worked with S.A. Technology (SAT) to modify SAT's Power Remote Manipulator (PRM) technology to provide the processing center with an added degree of dexterity and high load handling capability inside its shielded cells. TWPC and SAT incorporated innovative technologies into the PRM design to better suit the operations required at TWPC, and to increase the overall capability of the PRM system. Improving on an already proven PRM system will ensure that TWPC gains the capabilities necessary to efficiently complete its TRU waste disposition mission. The collaborative effort between TWPC and S.A. Technology has yielded an extremely capable and robust solution to perform the wide range of tasks necessary to repackage TRU waste containers at TWPC. Incorporating innovative technologies into a proven manipulator system, these PRMs are expected to be an important addition to the capabilities available to shielded cell operators. The PRMs provide operators with the ability to reach anywhere in the cell, lift heavy objects, and perform size reduction associated with the disposition of noncompliant waste. Factory acceptance testing of the TWPC Powered Remote

  3. Robust Low Cost Liquid Rocket Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Elam, Sandra; Ellis, David L.; McKechnie, Timothy; Hickman, Robert; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. Fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of shrinking budgets. Three technologies have been combined to produce an advanced liquid rocket engine combustion chamber at NASA-Marshall Space Flight Center (MSFC) using relatively low-cost, vacuum-plasma-spray (VPS) techniques. Copper alloy NARloy-Z was replaced with a new high performance Cu-8Cr-4Nb alloy developed by NASA-Glenn Research Center (GRC), which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. Functional gradient technology, developed for building composite cartridges for space furnaces, was incorporated to add oxidation-resistant and thermal barrier coatings as an integral part of the hot wall of the liner during the VPS process. NiCrAlY, utilized to produce a durable protective coating for the space shuttle high pressure fuel turbopump (HPFTP) turbine blades, was used as the functional gradient material coating (FGM). The FGM not only serves as protection from oxidation or blanching, the main cause of engine failure, but also serves as a thermal barrier because of its lower thermal conductivity, reducing the temperature of the combustion liner by 200 °F, from 1000 °F to 800 °F, producing longer life. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost VPS process. VPS formed combustion chamber test articles have been formed with the FGM hot wall built in and hot fire tested, demonstrating for the first time a coating that will remain intact through the hot firing test, and with

  4. Quantitative morphometry of electrophysiologically identified CA3b interneurons reveals robust local geometry and distinct cell classes.

    PubMed

    Ascoli, Giorgio A; Brown, Kerry M; Calixto, Eduardo; Card, J Patrick; Galván, E J; Perez-Rosello, T; Barrionuevo, Germán

    2009-08-20

    The morphological and electrophysiological diversity of inhibitory cells in hippocampal area CA3 may underlie specific computational roles and is not yet fully elucidated. In particular, interneurons with somata in strata radiatum (R) and lacunosum-moleculare (L-M) receive converging stimulation from the dentate gyrus and entorhinal cortex as well as within CA3. Although these cells express different forms of synaptic plasticity, their axonal trees and connectivity are still largely unknown. We investigated the branching and spatial patterns, plus the membrane and synaptic properties, of rat CA3b R and L-M interneurons digitally reconstructed after intracellular labeling. We found considerable variability within but no difference between the two layers, and no correlation between morphological and biophysical properties. Nevertheless, two cell types were identified based on the number of dendritic bifurcations, with significantly different anatomical and electrophysiological features. Axons generally branched an order of magnitude more than dendrites. However, interneurons on both sides of the R/L-M boundary revealed surprisingly modular axodendritic arborizations with consistently uniform local branch geometry. Both axons and dendrites followed a lamellar organization, and axons displayed a spatial preference toward the fissure. Moreover, only a small fraction of the axonal arbor extended to the outer portion of the invaded volume, and tended to return toward the proximal region. In contrast, dendritic trees demonstrated more limited but isotropic volume occupancy. These results suggest a role of predominantly local feedforward and lateral inhibitory control for both R and L-M interneurons. Such a role may be essential to balance the extensive recurrent excitation of area CA3 underlying hippocampal autoassociative memory function.

  5. Identifying User Needs and the Participative Design Process

    NASA Astrophysics Data System (ADS)

    Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan; Bergvall-Kåreborn, Birgitta; Andersson, Anna-Lena

    As the number of persons with dementia increases, and with it the demands on care and support at home, additional solutions to support persons with dementia are needed. The COGKNOW project aims to develop an integrated, user-driven cognitive prosthetic device to help persons with dementia. The project focuses on support in the areas of memory, social contact, daily living activities and feelings of safety. The design process is user-participatory and consists of iterative cycles at three test sites across Europe. In the first cycle, persons with dementia and their carers (n = 17) actively participated in the developmental process. Based on their priorities of needs and solutions, on their disabilities and after discussion within the team, a top four list of Information and Communication Technology (ICT) solutions was made and now serves as the basis for development: in the area of remembering - day and time orientation support, find mobile service and reminding service; in the area of social contact - telephone support by picture dialling; in the area of daily activities - media control support through a music playback and radio function; and finally, in the area of safety - a warning service to indicate when the front door is open and an emergency contact service to enhance feelings of safety. The results of this first project phase show that, in general, the people with mild dementia as well as their carers were able to express and prioritize their (unmet) needs, and the kind of technological assistance they preferred in the selected areas. In the next phases it will be tested whether the user-participatory design and multidisciplinary approach employed in the COGKNOW project result in a user-friendly, useful device that positively impacts the autonomy and quality of life of persons with dementia and their carers.

  6. Processing of Perceptual Information Is More Robust than Processing of Conceptual Information in Preschool-Age Children: Evidence from Costs of Switching

    ERIC Educational Resources Information Center

    Fisher, Anna V.

    2011-01-01

    Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…

  7. Use of uncertainty polytope to describe constraint processes with uncertain time-delay for robust model predictive control applications.

    PubMed

    Huang, Gongsheng; Wang, Shengwei

    2009-10-01

    This paper studies the application of robust model predictive control (MPC) to a constrained process subject to time-delay uncertainty. The process is described by a transfer function and sampled into a discrete model for computer control design. A polytope is first developed to describe the uncertain discrete model arising from the process's time-delay uncertainty. Based on the proposed description, a linear matrix inequality (LMI) based MPC algorithm is employed and modified to design a robust controller for such a constrained process. In case studies, the effect of time-delay uncertainty on the control performance of a standard MPC algorithm is investigated, and the proposed description and the modified control algorithm are validated on the temperature control of a typical air-handling unit.
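
    A sketch of how such a polytope can be constructed is shown below: a first-order-plus-time-delay model is discretized with the delay at its lower and upper bound, and the two augmented state-space models serve as polytope vertices. The plant numbers and delay bounds are assumptions, not taken from the paper.

```python
# Sketch: bound a delay-uncertain process by a polytope of discrete models.
# Each vertex is the discretized plant with the delay fixed at one bound;
# LMI-based robust MPC then covers every convex combination of the vertices.
import numpy as np
from scipy.signal import cont2discrete

K, tau, Ts = 2.0, 10.0, 1.0                 # gain, time constant, sample time
Ad, Bd, _, _, _ = cont2discrete(([[-1/tau]], [[K/tau]], [[1.0]], [[0.0]]), Ts)
a, b = float(Ad[0, 0]), float(Bd[0, 0])

d_min, d_max = 2, 4                          # delay bounds, in samples
n = 1 + d_max                                # plant state + input shift register
vertices = []
for d in (d_min, d_max):
    Aa = np.zeros((n, n))
    Aa[0, 0] = a
    Aa[0, 1 + d_max - d] = b                 # input enters the plant d samples late
    for i in range(1, d_max):
        Aa[i, i + 1] = 1.0                   # shift register for past inputs
    Ba = np.zeros((n, 1))
    Ba[n - 1, 0] = 1.0
    vertices.append((Aa, Ba))

print("vertex state dimension:", n, "| number of vertices:", len(vertices))
```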

  8. Integrated Process Monitoring based on Systems of Sensors for Enhanced Nuclear Safeguards Sensitivity and Robustness

    SciTech Connect

    Humberto E. Garcia

    2014-07-01

    This paper illustrates safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas the objective of NMA-based methods for inferring the possible existence of proliferation-driven activities is often to statistically evaluate materials unaccounted for (MUF), computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP), a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within an MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or less reliably observed than others. The proposed similarity between NMA- and PM-based approaches is important, as performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream via incomplete fuel dissolution in a dissolver unit operation was selected, as this diversion scenario is considered problematic to detect using NMA-based methods alone. Results demonstrate benefits of conducting PM under a system

  9. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the information to be extracted from an image. Robustness, i.e., the quality of an algorithm as a function of the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects. PMID:26191792

  10. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the information to be extracted from an image. Robustness, i.e., the quality of an algorithm as a function of the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects.

  11. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    PubMed

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
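
    The resolution question can be made concrete with the short simulation below: a fine-resolution random walk is subsampled, and the apparent mean step length changes accordingly. The walk parameters are illustrative.

```python
# Sketch: the same movement path looks different at different temporal
# resolutions; parameters fitted at one resolution need not transfer.
import numpy as np

rng = np.random.default_rng(1)
fine = np.cumsum(rng.normal(0, 1.0, size=(10000, 2)), axis=0)  # 2-D walk
coarse = fine[::10]                          # data sampled 10x more coarsely

step = lambda path: np.linalg.norm(np.diff(path, axis=0), axis=1).mean()
print("mean step, fine:  ", step(fine))      # ~ sqrt(pi/2) for unit steps
print("mean step, coarse:", step(coarse))    # ~ sqrt(10) larger for Brownian-like motion
# A model is "robust" in the paper's sense if its parameters transform
# predictably under such changes of temporal resolution.
```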

  12. Robust Library Building for Autonomous Classification of Downhole Geophysical Logs Using Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Silversides, Katherine L.; Melkumyan, Arman

    2016-12-01

    Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Each machine learning technique has unique properties that will affect the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is insufficient information in the library. The impact these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral or negative output. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.

  13. Robust Library Building for Autonomous Classification of Downhole Geophysical Logs Using Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Silversides, Katherine L.; Melkumyan, Arman

    2017-03-01

    Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Each machine learning technique has unique properties that will affect the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is insufficient information in the library. The impact these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral or negative output. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.
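
    The mean-reversion behavior referred to in both records above can be reproduced in a few lines: far from the training library, a GP's prediction is pulled back to its prior mean, which acts as the "neutral" output. The one-dimensional stand-in for log signatures below is an assumption for illustration.

```python
# Sketch of GP mean reversion: near the library the output follows the
# training labels; far from it, the posterior collapses to the prior mean (0).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.0], [1.0], [2.0]])      # library signatures (toy 1-D stand-in)
y = np.array([1.0, 1.0, 1.0])            # positive label for the marker shale

gp = GaussianProcessRegressor(kernel=RBF(1.0), optimizer=None).fit(X, y)
print(gp.predict([[1.0]]))               # near the library: output ~ 1 (marker)
print(gp.predict([[50.0]]))              # far from the library: output ~ 0 (prior mean)
```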

  14. Deep transcriptome-sequencing and proteome analysis of the hydrothermal vent annelid Alvinella pompejana identifies the CvP-bias as a robust measure of eukaryotic thermostability

    PubMed Central

    2013-01-01

    Background: Alvinella pompejana is an annelid worm that inhabits deep-sea hydrothermal vent sites in the Pacific Ocean. Living at a depth of approximately 2500 meters, these worms experience extreme environmental conditions, including high temperature and pressure as well as high levels of sulfide and heavy metals. A. pompejana is one of the most thermotolerant metazoans, making this animal a subject of great interest for studies of eukaryotic thermoadaptation. Results: In order to complement existing EST resources we performed deep sequencing of the A. pompejana transcriptome. We identified several thousand novel protein-coding transcripts, nearly doubling the sequence data for this annelid. We then performed an extensive survey of previously established prokaryotic thermoadaptation measures to search for global signals of thermoadaptation in A. pompejana in comparison with mesophilic eukaryotes. In an orthologous set of 457 proteins, we found that the best indicator of thermoadaptation was the difference in frequency of charged versus polar residues (CvP-bias), which was highest in A. pompejana. CvP-bias robustly distinguished prokaryotic thermophiles from prokaryotic mesophiles, as well as the thermophilic fungus Chaetomium thermophilum from mesophilic eukaryotes. Experimental values for thermophilic proteins supported higher CvP-bias as a measure of thermal stability when compared to their mesophilic orthologs. Proteome-wide mean CvP-bias also correlated with the body temperatures of homeothermic birds and mammals. Conclusions: Our work extends the transcriptome resources for A. pompejana and identifies the CvP-bias as a robust and widely applicable measure of eukaryotic thermoadaptation. Reviewers: This article was reviewed by Sándor Pongor, L. Aravind and Anthony M. Poole. PMID:23324115
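
    The CvP-bias itself is simple to compute; a sketch follows, taking charged residues as D, E, K, R and polar residues as N, Q, S, T (the conventional definition of the measure; the example sequence is arbitrary).

```python
# Sketch: CvP-bias = percentage of charged residues (D, E, K, R) minus
# percentage of polar residues (N, Q, S, T) in a protein sequence.
CHARGED, POLAR = set("DEKR"), set("NQST")

def cvp_bias(seq: str) -> float:
    charged = sum(aa in CHARGED for aa in seq)
    polar = sum(aa in POLAR for aa in seq)
    return 100.0 * (charged - polar) / len(seq)

print(f"CvP-bias: {cvp_bias('MKREEDDLKASNTQSVRK'):+.1f}%")
# Higher proteome-wide values are the thermostability signal reported above.
```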

  15. Highly robust hydrogels via a fast, simple and cytocompatible dual crosslinking-based process.

    PubMed

    Costa, Ana M S; Mano, João F

    2015-11-07

    A highly robust hydrogel device made from a single biopolymer formulation is reported. Owing to the presence of covalent and non-covalent crosslinks, these engineered systems were able to (i) sustain a compressive strength of ca. 20 MPa, (ii) quickly recover upon unloading, and (iii) encapsulate cells with high viability rates.

  16. Individualized relapse prediction: personality measures and striatal and insular activity during reward-processing robustly predict relapse*

    PubMed Central

    Gowin, Joshua L.; Ball, Tali M.; Wittmann, Marc; Tapert, Susan F.; Paulus, Martin P.

    2015-01-01

    Background: Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with a high risk of relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Methods: 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, and a combined model) to generate predictions for each participant regarding their relapse likelihood. Results: 18 individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics, such that a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. Conclusions: These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing the tools providers can use to make decisions about individualized treatment of substance use disorders. PMID:25977206
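
    The quoted test characteristics translate into likelihood ratios as LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. The sketch below reproduces this computation around a random forest on synthetic data; features, labels and the 0.5 threshold are assumptions, not the study's measures.

```python
# Sketch: random forest prediction with out-of-bag estimates, summarized as
# positive and negative likelihood ratios. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(68, 6))                  # stand-in predictors
y = (X[:, 0] + rng.normal(0, 1, 68)) > 0.8    # 1 = relapse (synthetic label)

clf = RandomForestClassifier(n_estimators=200, oob_score=True,
                             random_state=0).fit(X, y)
pred = clf.oob_decision_function_[:, 1] > 0.5 # out-of-bag predictions

tp = np.sum(pred & y);  fn = np.sum(~pred & y)
fp = np.sum(pred & ~y); tn = np.sum(~pred & ~y)
sens, spec = tp / (tp + fn), tn / (tn + fp)
print("LR+ =", sens / (1 - spec), " LR- =", (1 - sens) / spec)
```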

  17. A two-step patterning process increases the robustness of periodic patterning in the fly eye.

    PubMed

    Gavish, Avishai; Barkai, Naama

    2016-06-01

    Complex periodic patterns can self-organize through dynamic interactions between diffusible activators and inhibitors. In the biological context, self-organized patterning is challenged by spatial heterogeneities ('noise') inherent to biological systems. How spatial variability impacts the periodic patterning mechanism and how it can be buffered to ensure precise patterning is not well understood. We examine the effect of spatial heterogeneity on the periodic patterning of the fruit fly eye, an organ composed of ∼800 miniature eye units (ommatidia) whose periodic arrangement along a hexagonal lattice self-organizes during early stages of fly development. The patterning follows a two-step process, with an initial formation of evenly spaced clusters of ∼10 cells followed by a subsequent refinement of each cluster into a single selected cell. Using a probabilistic approach, we calculate the rate of patterning errors resulting from spatial heterogeneities in cell size, position and biosynthetic capacity. Notably, error rates were largely independent of the desired cluster size but followed the distributions of signaling speeds. Pre-formation of large clusters therefore greatly increases the reproducibility of the overall periodic arrangement, suggesting that the two-stage patterning process functions to guard the pattern against errors caused by spatial heterogeneities. Our results emphasize the constraints imposed on self-organized patterning mechanisms by the need to buffer stochastic effects. Author summary: Complex periodic patterns are common in nature and are observed in physical, chemical and biological systems. Understanding how these patterns are generated in a precise manner is a key challenge. Biological patterns are especially intriguing, as they are generated in a noisy environment; cell position and cell size, for example, are subject to stochastic variations, as are the strengths of the chemical signals mediating cell-to-cell communication. The need

  18. New results on the robust stability of PID controllers with gain and phase margins for UFOPTD processes.

    PubMed

    Jin, Q B; Liu, Q; Huang, B

    2016-03-01

    This paper considers the problem of determining all the robust PID (proportional-integral-derivative) controllers in terms of the gain and phase margins (GPM) for open-loop unstable first order plus time delay (UFOPTD) processes. It is the first time that the feasible ranges of the GPM specifications provided by a PID controller are given for UFOPTD processes. A gain and phase margin tester is used to modify the original model, and the ranges of the margin specifications are derived such that the modified model can be stabilized by a stabilizing PID controller based on the Hermite-Biehler Theorem. Furthermore, we obtain all the controllers satisfying a given margin specification. Simulation studies show how to use the results to design a robust PID controller.
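
    A numerical read-off of gain and phase margins for a candidate PID on a UFOPTD plant is sketched below. The plant and controller parameters are illustrative, and for an unstable plant the Nyquist stability criterion must be checked separately; this is not the paper's Hermite-Biehler construction.

```python
# Sketch: evaluate gain/phase margins of a PID loop around an unstable
# first-order plus time delay plant G(s) = K*exp(-theta*s)/(tau*s - 1).
import numpy as np

K, tau, theta = 1.0, 1.0, 0.2          # plant gain, time constant, delay
Kp, Ki, Kd = 3.0, 1.0, 0.5             # candidate PID gains

w = np.logspace(-2, 2, 20000)          # frequency grid (rad/s)
s = 1j * w
L = (Kp + Ki / s + Kd * s) * K * np.exp(-theta * s) / (tau * s - 1.0)

mag, ph = np.abs(L), np.unwrap(np.angle(L))
i_gc = np.argmin(np.abs(mag - 1.0))    # gain crossover, |L(jw)| = 1
i_pc = np.argmin(np.abs(ph + np.pi))   # phase crossover, arg L(jw) = -180 deg
print("phase margin (deg):", np.degrees(ph[i_gc] + np.pi))
print("gain margin:", 1.0 / mag[i_pc])
```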

  19. The Self in Movement: Being Identified and Identifying Oneself in the Process of Migration and Asylum Seeking.

    PubMed

    Watzlawik, Meike; Brescó de Luna, Ignacio

    2017-03-15

    How migration influences the processes of identity development has been under longstanding scrutiny in the social sciences. Usually, stage models have been suggested, and different strategies for acculturation (e.g., integration, assimilation, separation, and marginalization) have been considered as ways to make sense of the psychological transformations of migrants as a group. On an individual level, however, identity development is a more complex endeavor: identity does not just develop by itself, but is constructed as an ongoing process. To capture these processes, we look at different aspects of migration and asylum seeking; for example, the culture-specific values and expectations of the hosting (European) countries (e.g., in the role of identifier), but also of the arriving individuals/groups (e.g., those identified as refugees). Since the two may contradict each other, negotiations between identity claims and identity assignments become necessary. Ways to solve these contradictions are discussed, with a special focus on the experienced (and often missing) agency in different settings upon arrival in a new country. In addition, it is shown how sudden events (e.g., 9/11, the Charlie Hebdo attack) may challenge identity processes in different ways.

  20. Multiple feedback loop design in the tryptophan regulatory network of Escherichia coli suggests a paradigm for robust regulation of processes in series

    PubMed Central

    Bhartiya, Sharad; Chaudhary, Nikhil; Venkatesh, K.V; Doyle, Francis J

    2005-01-01

    Biological networks have evolved through adaptation in uncertain environments. Of the different possible design paradigms, some may offer functional advantages over others. These designs can be quantified by the structure of the network resulting from molecular interactions and the parameter values. One may, therefore, like to identify the design motif present in the evolved network that makes it preferable over other alternatives. In this work, we focus on the regulatory networks characterized by serially arranged processes, which are regulated by multiple feedback loops. Specifically, we consider the tryptophan system present in Escherichia coli, which may be conceptualized as three processes in series, namely transcription, translation and tryptophan synthesis. The multiple feedback loop motif results from three distinct negative feedback loops, namely genetic repression, mRNA attenuation and enzyme inhibition. A framework is introduced to identify the key design components of this network responsible for its physiological performance. We demonstrate that the multiple feedback loop motif, as seen in the tryptophan system, enables robust performance to variations in system parameters while maintaining a rapid response to achieve homeostasis. Superior performance, if arising from a design principle, is intrinsic and, therefore, inherent to any similarly designed system, either natural or engineered. An experimental engineering implementation of the multiple feedback loop design on a two-tank system supports the generality of the robust attributes offered by the design. PMID:16849267
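
    A minimal sketch of the motif is given below: three serial stages with the end product feeding back on each of them. Lumping repression, attenuation and inhibition into one shared Hill term, and all rate constants, are simplifying assumptions for illustration.

```python
# Sketch of the multiple feedback loop motif: mRNA -> enzyme -> tryptophan,
# with the end product repressing all three stages. Rates are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def trp_system(t, x, demand=1.0):
    m, e, trp = x                                  # mRNA, enzyme, tryptophan
    inhib = 1.0 / (1.0 + (trp / 1.0) ** 2)         # shared feedback term
    dm = 1.0 * inhib - 0.5 * m                     # transcription (repression)
    de = 1.0 * m * inhib - 0.2 * e                 # translation (attenuation)
    dtrp = 2.0 * e * inhib - demand * trp          # synthesis (inhibition)
    return [dm, de, dtrp]

sol = solve_ivp(trp_system, (0, 100), [0.0, 0.0, 0.0])
print("steady-state [mRNA, enzyme, Trp]:", sol.y[:, -1].round(3))
# Perturbing rate constants shifts the steady state only weakly: the serial
# feedback loops trade a small loss in gain for robustness and fast recovery.
```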

  1. A robust hidden semi-Markov model with application to aCGH data processing.

    PubMed

    Ding, Jiarui; Shah, Sohrab

    2013-01-01

    Hidden semi-Markov models are effective at modelling sequences with a succession of homogeneous zones by choosing appropriate state duration distributions. To compensate for model mis-specification and provide protection against outliers, we design a robust hidden semi-Markov model with Student's t mixture models as the emission distributions. The proposed approach is used to model array-based comparative genomic hybridization data. Experiments conducted on benchmark data from the Coriell cell lines and on glioblastoma multiforme data illustrate the reliability of the technique.
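
    The robustness argument can be seen directly from the log-densities: an outlier is far less surprising under a heavy-tailed Student's t than under a Gaussian, so it pulls the segment fit far less. A short comparison (values illustrative):

```python
# Sketch: log-likelihood of a typical point vs. an outlier under Gaussian
# and Student's t emission densities.
from scipy.stats import norm, t

x_typical, x_outlier = 0.5, 8.0        # in units of emission std deviations
for name, dist in [("Gaussian", norm(0, 1)), ("Student-t (df=3)", t(3))]:
    print(f"{name:16s} logpdf: typical {dist.logpdf(x_typical):7.2f}, "
          f"outlier {dist.logpdf(x_outlier):7.2f}")
# The Gaussian penalizes the outlier by roughly -33 log-units vs. about -7
# for the t, so a t-emission HSMM does not break segments to chase outliers.
```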

  2. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers that are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
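
    A sketch of the leveling idea follows, with iteratively reweighted least squares standing in for the paper's robust local regression: a plane is fitted while Huber-type weights downweight features and outliers, and the plane is then subtracted. The image and weight constants are assumptions.

```python
# Sketch: robust plane leveling of a scan image via IRLS with Huber weights.
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
img = 0.03 * xx + 0.01 * yy + rng.normal(0, 0.1, (ny, nx))   # tilt + noise
img[20:30, 20:30] += 5.0                                     # a tall feature

A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(nx * ny)])
z, w = img.ravel(), np.ones(nx * ny)
for _ in range(10):                             # IRLS iterations
    coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    r = z - A @ coef                            # residuals
    s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
    w = np.minimum(1.0, 1.345 * s / (np.abs(r) + 1e-12))  # Huber weights

leveled = (z - A @ coef).reshape(ny, nx)        # tilt removed, feature kept
print("fitted tilt (x, y, offset):", coef.round(4))        # ~ (0.03, 0.01, 0)
```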

  3. Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the Taguchi method.

    PubMed

    Kim, Dongcheol; Rhee, Sehun

    2002-01-01

    CO(2) welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during welding. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without using a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.

  4. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned (operator-splitting) solution may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated over 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that even the exact Newton-Raphson approach can be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and a modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
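
    The stabilizing ingredient mentioned above, a Newton-Raphson iteration damped by a backtracking line search, can be sketched on a scalar toy residual (the real system is the coupled hydrothermal problem; everything below is illustrative):

```python
# Sketch: Newton-Raphson with backtracking line search. The line search
# halves the step until the residual norm actually decreases, damping the
# overshoot that a full Newton step would produce on stiff nonlinearities.
import numpy as np

def residual(x):                       # toy nonlinear residual F(x) = 0
    return np.array([np.tanh(5.0 * x[0]) + 0.1 * x[0]])

def jacobian(x):
    return np.array([[5.0 / np.cosh(5.0 * x[0]) ** 2 + 0.1]])

x = np.array([3.0])
for it in range(30):
    F = residual(x)
    if np.linalg.norm(F) < 1e-10:
        break
    dx = np.linalg.solve(jacobian(x), -F)
    lam = 1.0
    while np.linalg.norm(residual(x + lam * dx)) > (1 - 0.5 * lam) * np.linalg.norm(F):
        lam *= 0.5                     # backtrack until the step reduces ||F||
        if lam < 1e-4:
            break
    x = x + lam * dx
print("converged to x =", x, "after", it, "iterations")
```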

  5. Processivity of peptidoglycan synthesis provides a built-in mechanism for the robustness of straight-rod cell morphology

    PubMed Central

    Sliusarenko, Oleksii; Cabeen, Matthew T.; Wolgemuth, Charles W.; Jacobs-Wagner, Christine; Emonet, Thierry

    2010-01-01

    The propagation of cell shape across generations is remarkably robust in most bacteria. Even when deformations are acquired, growing cells progressively recover their original shape once the deforming factors are eliminated. For instance, straight-rod-shaped bacteria grow curved when confined to circular microchambers, but straighten in a growth-dependent fashion when released. Bacterial cell shape is maintained by the peptidoglycan (PG) cell wall, a giant macromolecule of glycan strands that are synthesized by processive enzymes and cross-linked by peptide chains. Changes in cell geometry require modifying the PG and therefore depend directly on the molecular-scale properties of PG structure and synthesis. Using a mathematical model we quantify the straightening of curved Caulobacter crescentus cells after disruption of the cell-curving crescentin structure. We observe that cells straighten at a rate that is about half (57%) the cell growth rate. Next we show that in the absence of other effects there exists a mathematical relationship between the rate of cell straightening and the processivity of PG synthesis—the number of subunits incorporated before termination of synthesis. From the measured rate of cell straightening this relationship predicts processivity values that are in good agreement with our estimates from published data. Finally, we consider the possible role of three other mechanisms in cell straightening. We conclude that regardless of the involvement of other factors, intrinsic properties of PG processivity provide a robust mechanism for cell straightening that is hardwired to the cell wall synthesis machinery. PMID:20479277

  6. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for the prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and on real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
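
    The sparsity mechanism can be sketched with the closed-form solution of the scalar l1 subproblem, i.e., soft-thresholding of the prediction residuals; sites whose residual exceeds the threshold are flagged as outliers. Data and the regularization weight below are synthetic assumptions.

```python
# Sketch: l1-regularized outlier identification via soft-thresholding of
# residuals between measurements and the (kriged) prediction.
import numpy as np

rng = np.random.default_rng(2)
y_pred = np.sin(np.linspace(0, 3, 20))          # prediction at 20 sites
y_meas = y_pred + rng.normal(0, 0.05, 20)       # nominal measurement noise
y_meas[[4, 13]] += np.array([1.5, -2.0])        # two anomalous sensors

lam = 0.5                                       # l1 regularization weight
r = y_meas - y_pred
# soft threshold = minimizer of 0.5*(r - o)^2 + lam*|o| per site
outliers = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
print("flagged sensors:", np.nonzero(outliers)[0])          # -> [ 4 13]
print("cleaned residual std:", (r - outliers).std().round(3))
```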

  7. Robust Multi-Length Scale Deformation Process Design for the Control of Microstructure-Sensitive Material Properties

    DTIC Science & Technology

    2007-07-18

    [Garbled OCR fragment.] Die underfill caused by material porosity: this problem studies the effect of random voids in the design of flashless closed-die forging processes... provides a robust way to estimate the statistics of the extent of die underfill as a result of a random distribution of voids in the billet. The initial... [equation garbled in OCR] where f0 = 0.03 is the mean void fraction. A 9x9 grid was used for computing the statistics. The mean underfill was estimated to be...

  8. Identification, characterization and HPLC quantification of process-related impurities in Trelagliptin succinate bulk drug: Six identified as new compounds.

    PubMed

    Zhang, Hui; Sun, Lili; Zou, Liang; Hui, Wenkai; Liu, Lei; Zou, Qiaogen; Ouyang, Pingkai

    2016-09-05

    A sensitive, selective, and stability-indicating reversed-phase LC method was developed for the determination of process-related impurities of Trelagliptin succinate in bulk drug. Six impurities were identified by LC-MS. Further, their structures were characterized and confirmed using LC-MS/MS, IR and NMR spectral data. The most probable mechanisms for the formation of these impurities were also discussed. To the best of our knowledge, six structures among these impurities are new compounds and have not been reported previously. Superior separation was achieved on an InertSustain C18 (250 mm × 4.6 mm, 5 μm) column with a gradient mixture of acetonitrile and 20 mmol/L potassium dihydrogen phosphate containing 0.25% triethylamine (pH adjusted to 3.5 with phosphoric acid). The method was validated as per regulatory guidelines to demonstrate system suitability, specificity, sensitivity, linearity, robustness, and stability.

  9. Robust factor selection in early cell culture process development for the production of a biosimilar monoclonal antibody.

    PubMed

    Sokolov, Michael; Ritscher, Jonathan; MacKinnon, Nicola; Bielser, Jean-Marc; Brühlmann, David; Rothenhäusler, Dominik; Thanei, Gian; Soos, Miroslav; Stettler, Matthieu; Souquet, Jonathan; Broly, Hervé; Morbidelli, Massimo; Butté, Alessandro

    2017-01-01

    This work presents a multivariate methodology combining principal component analysis, the Mahalanobis distance, and decision trees for the selection of process factors and their levels in the early process development of generic molecules. It is applied to a high-throughput study testing more than 200 conditions for the production of a biosimilar monoclonal antibody at microliter scale. The methodology provides the most important selection criteria for the process design in order to improve product quality towards the quality attributes of the originator molecule. Robustness of the selections is ensured by cross-validation of each analysis step. The resulting selections are then successfully validated with an external data set. Finally, the results are compared to those obtained with a widely used software package, revealing similarities and clear advantages of the presented methodology. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:181-191, 2017.
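
    A compact sketch of the pipeline, on synthetic data, is given below: PCA scores, Mahalanobis distance to a reference (originator-like) cloud, and a shallow decision tree over process factors. Factor names, levels and the median split are assumptions for illustration.

```python
# Sketch: PCA + Mahalanobis distance + decision tree for factor selection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))                  # 200 runs x 8 quality attributes
scores = PCA(n_components=3).fit_transform(X)

target = scores[:20]                           # runs spanning the reference space (toy)
mu, cov_inv = target.mean(0), np.linalg.inv(np.cov(target.T))
d = np.sqrt(np.einsum('ij,jk,ik->i', scores - mu, cov_inv, scores - mu))

close = d < np.median(d)                       # binary "similar to originator"
factors = rng.integers(0, 3, size=(200, 4))    # 4 process factors, 3 levels each
tree = DecisionTreeClassifier(max_depth=2).fit(factors, close)
print(export_text(tree, feature_names=["temp", "pH", "feed", "DO"]))
```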

  10. MIMO model of an interacting series process for Robust MPC via System Identification.

    PubMed

    Wibowo, Tri Chandra S; Saad, Nordin

    2010-07-01

    This paper discusses empirical modeling using system identification techniques, with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant as the process, whose dynamics exhibit the typical behavior of an interacting series process. Three practical approaches are investigated and their performances are evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state error in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state space model for an interacting series process are discussed.
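
    The simplest instance of the identification step is a first-order ARX fit by least squares, sketched below on simulated data; the paper's MIMO state-space models generalize the same regression idea.

```python
# Sketch: fit y[k] = a*y[k-1] + b*u[k-1] by least squares from I/O records.
import numpy as np

rng = np.random.default_rng(4)
N, a_true, b_true = 500, 0.9, 0.5
u = rng.normal(size=N)                         # excitation input
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 0.02 * rng.normal()

Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", theta.round(3))     # ~ (0.9, 0.5)
```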

  11. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  12. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process

    PubMed Central

    Mohamed, Amr E.; Dorrah, Hassen T.

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many respects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in control system design. Typically, such a process is decoupled into several input/output pairings (loops) so that a single controller can be assigned to each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes that ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized to minimize the sum of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, the hybridized technique ensures a low computational burden together with high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over the conventional PID controller. PMID:27807444

  13. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process.

    PubMed

    Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many respects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in control system design. Typically, such a process is decoupled into several input/output pairings (loops) so that a single controller can be assigned to each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes that ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized to minimize the sum of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, the hybridized technique ensures a low computational burden together with high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over the conventional PID controller.
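
    The cost minimized by the BSO hybrid, the integral time-weighted squared error, is straightforward to evaluate numerically; a sketch with an illustrative error trace follows.

```python
# Sketch: ITSE = integral of t * e(t)^2 dt, approximated by the trapezoidal
# rule over a simulated loop error signal.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
err = np.exp(-0.8 * t) * np.cos(3.0 * t)        # decaying loop error e(t)
itse = np.trapz(t * err**2, t)
print("ITSE =", round(itse, 4))
# The optimizer searches decoupler/BELBIC parameters to minimize the sum of
# such ITSE terms over all loops.
```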

  14. Use of a qualitative methodological scaffolding process to design robust interprofessional studies.

    PubMed

    Wener, Pamela; Woodgate, Roberta L

    2013-07-01

    Increasingly, researchers are using qualitative methodology to study interprofessional collaboration (IPC). With this increase in use, there seems to be an appreciation for how qualitative studies allow us to understand the unique individual or group experience in more detail and form a basis for policy change and innovative interventions. Furthermore, there is an increased understanding of the potential of studying new or emerging phenomena qualitatively to inform further large-scale studies. Although there is a current trend toward greater acceptance of the value of qualitative studies describing the experiences of IPC, these studies are mostly descriptive in nature. Applying a process suggested by Crotty (1998) may encourage researchers to consider the value of situating research questions within a broader theoretical framework that informs the overall research approach, including methodology and methods. This paper describes the application of this process to a research project and then illustrates how the process encouraged iterative cycles of thinking and doing. The authors describe each step of the process, share decision-making points, and suggest an additional step to the process. Applying this approach to selecting data collection methods may serve to guide and support the qualitative researcher in creating a well-designed study.

  15. Robust carrier formation process in low-band gap organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Yonezawa, Kouhei; Kamioka, Hayato; Yasuda, Takeshi; Han, Liyuan; Moritomo, Yutaka

    2013-10-01

    By means of femtosecond time-resolved spectroscopy, we investigated the carrier formation process as a function of film morphology and temperature (T) in the highly efficient organic photovoltaic poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b']dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  16. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work also examined the cogency and acceptability of Taguchi's methodology in the manufacture of throat back-up liners. The quality characteristic identified was back-wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The optimized parameters were confirmed experimentally and successfully used to achieve the minimum back-wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests, and the influence of back-wall temperature on liner performance was verified by a ground firing test.
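
    As a concrete illustration of the smaller-the-better criterion used in such an analysis, the sketch below computes Taguchi's signal-to-noise ratio in Python; the replicate back-wall temperatures are made-up placeholders, not the study's measurements.

```python
# A minimal sketch of Taguchi's smaller-the-better signal-to-noise ratio,
# applied to hypothetical back-wall temperature replicates for each L9 run.
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 log10(mean(y^2)); larger S/N means lower, more consistent response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical replicate measurements (deg C) for three of the nine runs.
runs = {1: [182.0, 187.0], 2: [171.0, 169.0], 3: [195.0, 201.0]}
for run, temps in runs.items():
    print(run, round(sn_smaller_the_better(temps), 2))
```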

  17. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the net present value calculated over a 20-year lifespan (NPV20) of each technology, and the robustness of each technology towards typical process fluctuations and operational upsets, were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is weighted as heavily as overall cost (NPV20) in an economic evaluation, the hybrid technology moves up alongside biotrickling filtration as the most preferred technology.
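
    A minimal sketch of the NPV20 figure of merit used in such a comparison is shown below; the discount rate, cost figures, and technology names are illustrative assumptions, not values from the study.

```python
# A minimal sketch of a 20-year net present value (NPV20) comparison;
# all cost figures and the discount rate are illustrative assumptions.
def npv20(capex, annual_opex, discount_rate=0.05, years=20):
    """NPV of total costs (negative cash flows) over a 20-year lifespan."""
    return -capex - sum(annual_opex / (1.0 + discount_rate) ** t
                        for t in range(1, years + 1))

# Hypothetical technologies: (capital cost, yearly operating cost) in arbitrary units.
for name, (capex, opex) in {"biofilter": (100.0, 8.0),
                            "chemical scrubber": (80.0, 30.0)}.items():
    print(name, round(npv20(capex, opex), 1))
```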

  18. Observations on the Use of SCAN To Identify Children at Risk for Central Auditory Processing Disorder.

    ERIC Educational Resources Information Center

    Emerson, Maria F.; And Others

    1997-01-01

    The SCAN: A Screening Test for Auditory Processing Disorders was administered to 14 elementary children with a history of otitis media and 14 typical children, to evaluate the validity of the test in identifying children with central auditory processing disorder. Another experiment found that test results differed based on the testing environment…

  19. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with 1-2 km or finer horizontal resolutions. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through the cloud coverage (vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecast, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010), a 4-ICE (cloud ice, snow, graupel and hail) scheme, a spectral bin microphysics scheme, and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present the high-resolution (1 km) GCE and WRF model simulations and compare the simulated model results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  20. Uncertainties and robustness of the ignition process in type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Iapichino, L.; Lesaffre, P.

    2010-03-01

    Context. It is widely accepted that the onset of explosive carbon burning in the core of a carbon-oxygen white dwarf (CO WD) triggers the ignition of a type Ia supernova (SN Ia). The features of the ignition are among the few free parameters of SN Ia explosion theory. Aims: We explore the role of two different issues in the ignition process: first, the ignition is studied in WD models resulting from different accretion histories; second, we estimate how a different reaction rate for C-burning can affect the ignition. Methods: Two-dimensional hydrodynamical simulations of temperature perturbations in the WD core (“bubbles”) are performed with the FLASH code. In order to evaluate the impact of the C-burning reaction rate on the WD model, the evolution code FLASH_THE_TORTOISE from Lesaffre et al. (2006, MNRAS, 368, 187) is used. Results: In different WD models a key role is played by the different gravitational acceleration in the progenitor's core. As a consequence, ignition is disfavored at a large distance from the WD center in models with a larger central density, resulting from the evolution of initially more massive progenitors. Changes in the C reaction rate at T ≲ 5 × 10^8 K slightly influence the ignition density in the WD core, while the ignition temperature is almost unaffected. Recent measurements of new resonances in the C-burning reaction rate (Spillane et al. 2007, Phys. Rev. Lett., 98, 122501) do not affect the core conditions of the WD significantly. Conclusions: This simple analysis, performed on the features of the temperature perturbations in the WD core, should be extended in the framework of state-of-the-art numerical tools for studying turbulent convection and ignition in the WD core. Future measurements of the C-burning reaction cross section at low energy, though certainly useful, are not expected to affect our current understanding of the ignition process dramatically.

  1. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
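
    To make the modeling idea concrete, the sketch below evaluates a discrete-time approximation of a history-dependent point-process log-likelihood and a likelihood ratio against a history-free null; the spike train, bin width, and coefficients are synthetic stand-ins, not the paper's two-state model or data.

```python
# A minimal sketch of a discrete-time point-process (GLM) log-likelihood with
# spiking-history terms; the spike train and coefficients are illustrative.
import numpy as np

def loglik(spikes, beta0, beta_hist, dt=0.001):
    """Log-likelihood of a Poisson approximation to a point process whose
    conditional intensity depends on the previous len(beta_hist) bins."""
    p = len(beta_hist)
    ll = 0.0
    for t in range(p, len(spikes)):
        # Intensity (spikes/s); history window reversed so index 0 is most recent.
        lam = np.exp(beta0 + np.dot(beta_hist, spikes[t - p:t][::-1]))
        ll += spikes[t] * np.log(lam * dt) - lam * dt
    return ll

rng = np.random.default_rng(0)
spikes = (rng.random(5000) < 0.02).astype(float)   # hypothetical 5 s of 1 ms bins
# Likelihood ratio between a history-dependent and a history-free model:
lr = 2 * (loglik(spikes, 3.0, [0.5, -0.2]) - loglik(spikes, 3.0, [0.0, 0.0]))
print(lr)
```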

  2. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to establish protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in setting plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment; a new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties; for such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
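
    One plausible reading of an error-bar dose distribution is the per-voxel dose spread across recomputed error scenarios; the sketch below illustrates that idea on a toy dose grid. The scenario generation and the max-minus-min definition are assumptions for illustration, not necessarily the paper's exact construction.

```python
# A minimal sketch of an error-bar dose distribution (ebDD): the per-voxel
# spread of dose across recomputed error scenarios. The scenario doses are
# illustrative stand-ins for range/set-up error recalculations.
import numpy as np

rng = np.random.default_rng(1)
nominal = rng.uniform(0.0, 2.0, size=(4, 4))                 # hypothetical dose grid (Gy)
scenarios = nominal + rng.normal(0.0, 0.05, size=(8, 4, 4))  # 8 perturbed recalculations

eb_dose = scenarios.max(axis=0) - scenarios.min(axis=0)      # per-voxel error bar (Gy)
print("worst-case voxel spread:", eb_dose.max())
```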

  3. Towards a robust assessment of bridge clogging processes in flood risk management

    NASA Astrophysics Data System (ADS)

    Gschnitzer, T.; Gems, B.; Mazzorana, B.; Aufleger, M.

    2017-02-01

    River managers are aware that wood-clogging mechanisms frequently trigger damage-causing processes such as structural damage at bridges, sudden channel outbursts and, occasionally, major displacements of the water course. To successfully mitigate flood risks related to the transport of large wood (LW), river managers need a guideline for an accurate and reliable risk assessment procedure and for the design of river sections and bridges that are prone to LW clogging. In recent years, comprehensive research dealing with the triggers of wood-clogging mechanisms at bridges and the corresponding impacts on flood risk was carried out at the University of Innsbruck, based on a large set of laboratory experiments in a rectangular flume. In this paper we provide an overall view of these tests and present our findings. By applying a logistic regression analysis, the available knowledge on the influence of geometrical, hydraulic, and wood-related parameters on LW clogging probabilities is processed in a generalized form. Based on the experimental modeling results, a practice-oriented guideline that supports the assessment of flood risk induced by LW clogging is presented. In this context, two specific local structural protection measures at the bridge, aiming for a significant decrease of the entrapment probabilities, are illustrated: (i) a deflecting baffle installed on the upstream face of the bridge and (ii) a channel constriction leading to a change in flow state and a corresponding increase of the flow velocities and the freeboard at the bridge cross section. The presented guideline is based on a three-step approach: estimation of LW potential, entrainment, and transport; clogging scenario at the bridge; and the impact on channel and floodplain hydraulics. For a specific bridge susceptible to potential clogging caused by LW entrapment, it allows for a qualitative evaluation of potential LW entrainment in the upstream river segments, its transport toward the
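
    The logistic-regression step can be illustrated compactly; in the sketch below, the predictors (relative log length, Froude number, relative freeboard) and the training outcomes are hypothetical stand-ins for the flume data.

```python
# A minimal sketch of a logistic regression for large-wood entrapment
# probability; predictors and training data are hypothetical, not the
# flume measurements used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: relative log length L/b (log length over bridge opening width),
# Froude number, relative freeboard f/h.
X = np.array([[1.4, 0.6, 0.30], [0.8, 0.4, 0.60], [1.9, 0.9, 0.10],
              [0.6, 0.3, 0.70], [1.6, 0.8, 0.15], [1.1, 0.5, 0.40]])
y = np.array([1, 0, 1, 0, 1, 0])          # 1 = clogging observed in the run

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1.5, 0.7, 0.2]])[0, 1])  # estimated clogging probability
```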

  4. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    SciTech Connect

    Riza, Nabeel Agha; Perez, Frank

    2012-01-17

    Silicon carbide (SiC) probe designs for extreme-temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme-temperature sensing. Measuring the collective peak-to-peak (or null-to-null) spectrum spread to detect the wavelength peak/null shift forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity black-body radiator, and monitoring the shift in the radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  5. Robust Suppression of HIV Replication by Intracellularly Expressed Reverse Transcriptase Aptamers Is Independent of Ribozyme Processing

    PubMed Central

    Lange, Margaret J; Sharma, Tarun K; Whatley, Angela S; Landon, Linda A; Tempesta, Michael A; Johnson, Marc C; Burke, Donald H

    2012-01-01

    RNA aptamers that bind human immunodeficiency virus 1 (HIV-1) reverse transcriptase (RT) also inhibit viral replication, making them attractive as therapeutic candidates and potential tools for dissecting viral pathogenesis. However, it is not well understood how aptamer-expression context and cellular RNA pathways govern aptamer accumulation and net antiviral bioactivity. Using a previously-described expression cassette in which aptamers were flanked by two “minimal core” hammerhead ribozymes, we observed only weak suppression of pseudotyped HIV. To evaluate the importance of the minimal ribozymes, we replaced them with extended, tertiary-stabilized hammerhead ribozymes with enhanced self-cleavage activity, in addition to noncleaving ribozymes with active site mutations. Both the active and inactive versions of the extended hammerhead ribozymes increased inhibition of pseudotyped virus, indicating that processing is not necessary for bioactivity. Clonal stable cell lines expressing aptamers from these modified constructs strongly suppressed infectious virus, and were more effective than minimal ribozymes at high viral multiplicity of infection (MOI). Tertiary stabilization greatly increased aptamer accumulation in viral and subcellular compartments, again regardless of self-cleavage capability. We therefore propose that the increased accumulation is responsible for increased suppression, that the bioactive form of the aptamer is one of the uncleaved or partially cleaved transcripts, and that tertiary stabilization increases transcript stability by reducing exonuclease degradation. PMID:22948672

  6. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost-effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  7. Development of a Robust and Cost-Effective Friction Stir Welding Process for Use in Advanced Military Vehicles

    DTIC Science & Technology

    2011-01-01

    ...fusion welding processes. FSW has established itself as a preferred joining technique for aluminum components and its applications for... influences the FSW joint profile as well as the weld material microstructure and properties. Initially, one-piece steel tools were used with both the pin... parameters for a given choice of the aluminum-alloy grades and plate thicknesses. While attempting to identify optimal FSW process and weld

  8. Magnetoencephalography identifies rapid temporal processing deficit in autism and language impairment.

    PubMed

    Oram Cardy, Janis E; Flagg, Elissa J; Roberts, Wendy; Brian, Jessica; Roberts, Timothy P L

    2005-03-15

    Deficient rapid temporal processing may contribute to impaired language development by interfering with the processing of brief acoustic transitions crucial for speech perception. Using magnetoencephalography, evoked neural activity (M50, M100) to two 40 ms tones passively presented in rapid succession was recorded in 10 neurologically normal adults and 40 8-17-year-olds with autism, specific language impairment, Asperger syndrome or typical development. While 80% of study participants with intact language (Asperger syndrome, typical development, adults) showed identifiable responses to the second tone, which presented rapid temporal processing demands, 65% of study participants with impaired language (autism, specific language impairment) did not, despite having shown identifiable responses to the first tone. Rapid temporal processing impairments may be fundamentally associated with impairments in language rather than autism spectrum disorder.

  9. A robust post-processing method to determine skin friction in turbulent boundary layers from the velocity profile

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2015-04-01

    The present paper describes a method to extrapolate the mean wall shear stress and the accurate relative position of a velocity probe with respect to the wall from an experimentally measured mean velocity profile in a turbulent boundary layer. Validation is made between experimental and direct numerical simulation data of turbulent boundary layer flows with independent measurement of the shear stress. The set of parameters which minimizes the residual error with respect to the canonical description of the boundary layer profile is taken as the solution. Several methods are compared, testing different descriptions of the canonical mean velocity profile (with and without an overshoot over the logarithmic law) and different definitions of the residual function of the optimization. The von Kármán constant is used as a parameter of the fitting process in order to avoid any hypothesis regarding its value, which may be affected by different initial or boundary conditions of the flow. Results show that the best method provides accurate estimates of the friction velocity and of the position of the wall. The robustness of the method is tested including unconverged near-wall measurements, pressure gradient, and a reduced number of points; the importance of the location of the first point is also tested, and it is shown that the method remains highly robust even in highly distorted flows, retaining its accuracy provided at least one data point is acquired sufficiently close to the wall. The wake component and the thickness of the boundary layer are also simultaneously extrapolated from the mean velocity profile. This results in the first study, to the knowledge of the authors, where a five-parameter fitting is carried out without any assumption on the von Kármán constant or on the limits of the logarithmic layer beyond its mere existence.
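
    The core of such a method is a nonlinear fit of the canonical profile with the friction velocity, wall offset, and von Kármán constant left free. The sketch below shows a simplified version on synthetic data, using a plain log law without the wake component; the viscosity, initial guesses, and bounds are assumptions.

```python
# A minimal sketch of extrapolating friction velocity and wall position by
# fitting a logarithmic mean-velocity profile; the synthetic data and the
# simple log-law (no wake term) are illustrative simplifications of the method.
import numpy as np
from scipy.optimize import least_squares

nu = 1.5e-5                                  # kinematic viscosity (m^2/s)

def residuals(params, y_meas, u_meas):
    u_tau, dy, kappa, B = params
    y_plus = (y_meas + dy) * u_tau / nu      # corrected wall distance in wall units
    return u_meas / u_tau - (np.log(y_plus) / kappa + B)

# Synthetic "measurements" in the log region with a small wall-position offset.
u_tau_true, dy_true = 0.045, 2.0e-4
y = np.linspace(2e-3, 2e-2, 15)
u = u_tau_true * (np.log((y + dy_true) * u_tau_true / nu) / 0.39 + 4.5)

fit = least_squares(residuals, x0=[0.05, 0.0, 0.40, 5.0],
                    bounds=([1e-3, -1e-3, 0.2, 0.0], [0.2, 1e-3, 0.6, 10.0]),
                    args=(y, u))
print("u_tau =", fit.x[0], " wall offset =", fit.x[1])
```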

  10. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging.

    PubMed

    Schiller, Bastian; Gianotti, Lorena R R; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-03-08

    Why do people take longer to associate the word "love" with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition.

  11. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging

    PubMed Central

    Schiller, Bastian; Gianotti, Lorena R. R.; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-01-01

    Why do people take longer to associate the word “love” with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition. PMID:26903643

  12. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the joint work of specialists in medicine and in applied mathematics and computer science on developing new algorithms and methods for medical 2D and 3D imaging.
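
    For readers unfamiliar with the contour-identification step, the sketch below shows a generic version using OpenCV on a synthetic slice; the threshold-based approach is a common baseline, not the authors' algorithm.

```python
# A minimal sketch of contour identification on a CT-like grayscale slice
# using OpenCV; the threshold value and the synthetic image are assumptions.
import numpy as np
import cv2

slice_img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(slice_img, (64, 64), 30, 200, thickness=-1)   # stand-in anatomical structure

_, binary = cv2.threshold(slice_img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("contours found:", len(contours), "points in first:", len(contours[0]))
```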

  13. Identifying Children Who Use a Perseverative Text Processing Strategy. Technical Report #15.

    ERIC Educational Resources Information Center

    Kimmel, Susan; MacGinitie, Walter H.

    To identify children who use a perseverative text processing strategy and to examine the effects of this strategy on recall and comprehension, 255 fifth and sixth graders were screened for large differences between regressed standard scores for inductively (main idea last) and deductively (main idea first) structured paragraphs. Sixteen Ss were…

  14. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    PubMed Central

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A−1 under 70% strain, is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  15. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process.

    PubMed

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-17

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A(-1) under 70% strain, is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  16. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    NASA Astrophysics Data System (ADS)

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A-1 under 70% strain, is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  17. Model of areas for identifying risks influencing the compliance of technological processes and products

    NASA Astrophysics Data System (ADS)

    Misztal, A.; Belu, N.

    2016-08-01

    The operation of every company is associated with the risk of interference with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as with the environment in which it operates. From the point of view of ensuring that specific technological processes run in compliance with requirements and, consequently, that products conform to requirements, it is important to identify these threats and eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of the areas for identifying risks affecting the compliance of processes and products, based on multiregional targeted monitoring of typical places of interference and on risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.

  18. Are all letters really processed equally and in parallel? Further evidence of a robust first letter advantage.

    PubMed

    Scaltritti, Michele; Balota, David A

    2013-10-01

    The present study examined the accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of four experiments, target strings were briefly displayed (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of the mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across the different lengths, there was a robust advantage in performance when the alternative responses differed at the first letter position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first-position advantage with legal nonwords and strings of consonants, but did not provide any first-position advantage for non-alphabetic symbols. The lack of a first-position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together, these results suggest that the first-position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the assumption, common in visual word recognition models, that all letters are processed equally and in parallel independent of letter position.

  19. Pilot-scale investigation of the robustness and efficiency of a copper-based treated wood wastes recycling process.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Gastonguay, Louis; Morris, Paul; Janin, Amélie; Reynier, Nicolas

    2013-10-15

    The disposal of metal-bearing treated wood wastes is becoming an environmental challenge. An efficient recycling process based on sulfuric acid leaching has been developed to remove metals from copper-based treated wood chips. This study explored the robustness of this technology in removing metals from copper-based treated wood wastes at a pilot plant scale (130-L reactor tank). After 3 × 2 h leaching steps followed by 3 × 7 min rinsing steps, up to 97.5% of As, 87.9% of Cr, and 96.1% of Cu were removed from CCA-treated wood wastes with different initial metal loadings (>7.3 kg m(-3)), and more than 94.5% of Cu was removed from ACQ-, CA- and MCQ-treated wood. The treatment of effluents by precipitation-coagulation was highly efficient, allowing removal of more than 93% of the As, Cr, and Cu contained in the effluent. The economic analysis included operating costs, indirect costs, and revenues related to remediated wood sales. It concluded that the remediation of CCA-treated wood wastes can lead to a benefit of 53.7 US$ t(-1) or a cost of 35.5 US$ t(-1), and that recycling of ACQ-, CA- and MCQ-treated wood wastes leads to benefits ranging from 9.3 to 21.2 US$ t(-1).

  20. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  1. A hybrid approach identifies metabolic signatures of high-producers for Chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic
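
    The multivariate-analysis step can be illustrated with a principal component analysis of a clone-by-rate matrix; in the sketch below, the metabolic rates are synthetic and the high/low-producer split is contrived for illustration.

```python
# A minimal sketch of multivariate analysis (PCA) on a clone-by-metabolite
# rate matrix to separate high- from low-producer phenotypes; the data matrix
# is synthetic and the two clusters are contrived for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Rows: 10 hypothetical clones; columns: specific rates (e.g. qGln, qCys, qAsp...).
low = rng.normal(0.0, 0.3, size=(5, 6))
high = rng.normal(1.0, 0.3, size=(5, 6))    # shifted metabolic signature
rates = np.vstack([low, high])

scores = PCA(n_components=2).fit_transform(rates)
print("PC1 separates the groups:", scores[:5, 0].mean(), "vs", scores[5:, 0].mean())
```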

  2. A novel mini-DNA barcoding assay to identify processed fins from internationally protected shark species.

    PubMed

    Fields, Andrew T; Abercrombie, Debra L; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus, oceanic whitetip, Carcharhinus longimanus, scalloped hammerhead, Sphyrna lewini, smooth hammerhead, S. zygaena, and great hammerhead, S. mokarran) in addition to three species listed in the early part of this century (whale, Rhincodon typus, basking, Cetorhinus maximus, and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region, the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA ("processed fins"). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples).

  3. A Novel Mini-DNA Barcoding Assay to Identify Processed Fins from Internationally Protected Shark Species

    PubMed Central

    Fields, Andrew T.; Abercrombie, Debra L.; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D.

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus, oceanic whitetip, Carcharhinus longimanus, scalloped hammerhead, Sphyrna lewini, smooth hammerhead, S. zygaena, and great hammerhead, S. mokarran) in addition to three species listed in the early part of this century (whale, Rhincodon typus, basking, Cetorhinus maximus, and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region, the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA (“processed fins”). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789
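
    At its core, assigning a mini-barcode to a species is a closest-reference-sequence problem. The sketch below shows the idea with a simple percent-identity comparison; the sequences are short made-up fragments, not real COI barcodes, and real assays rely on curated reference databases and alignment tools.

```python
# A minimal sketch of assigning a query mini-barcode to its closest reference
# by percent identity; the reference sequences are short made-up stand-ins,
# not real cytochrome oxidase I (COI) sequences.
def percent_identity(a, b):
    """Ungapped percent identity between two equal-length sequences."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / min(len(a), len(b))

references = {                      # hypothetical COI mini-barcode fragments
    "Sphyrna lewini":   "ATGGCACTAGCCGGAATCGT",
    "Lamna nasus":      "ATGGCGCTTGCTGGTATCGT",
    "Rhincodon typus":  "ATGACACTGGCAGGAATTGT",
}
query = "ATGGCACTAGCCGGAATTGT"
best = max(references, key=lambda sp: percent_identity(query, references[sp]))
print(best, percent_identity(query, references[best]))
```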

  4. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.

  5. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.

  6. A stable isotope approach and its application for identifying nitrate source and transformation process in water.

    PubMed

    Xu, Shiguo; Kang, Pingping; Sun, Ya

    2016-01-01

    Nitrate contamination of water is a worldwide environmental problem. Recent studies have demonstrated that the nitrogen (N) and oxygen (O) isotopes of nitrate (NO3(-)) can be used to trace nitrogen dynamics, including identifying nitrate sources and nitrogen transformation processes. This paper analyzes the current state of identifying nitrate sources and nitrogen transformation processes using the N and O isotopes of nitrate. With regard to nitrate sources, δ(15)N-NO3(-) and δ(18)O-NO3(-) values typically vary between sources, allowing the sources to be isotopically fingerprinted. δ(15)N-NO3(-) is often effective at tracing NO3(-) sources from areas with different land use, while δ(18)O-NO3(-) is more useful for identifying NO3(-) from atmospheric sources. Isotopic data can be combined with statistical mixing models to quantify the relative contributions of NO3(-) from multiple delineated sources. With regard to N transformation processes, the N and O isotopes of nitrate can be used to decipher the degree of nitrogen transformation by such processes as nitrification, assimilation, and denitrification. In some cases, however, isotopic fractionation may alter the isotopic fingerprint associated with the delineated NO3(-) source(s). This problem may be addressed by combining the N and O isotopic data with other types of data, including the concentrations of selected conservative elements such as chloride (Cl(-)), boron isotope (δ(11)B) data, and sulfur isotope (δ(35)S) data. Future studies should focus on improving stable isotope mixing models and furthering our understanding of isotopic fractionation by conducting laboratory and field experiments in different environments.
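
    The simplest statistical mixing model referred to above is a two-end-member balance, sketched below; the δ15N values for the mixture and the two sources are illustrative, and practical studies typically use Bayesian mixing models over more sources and both isotopes.

```python
# A minimal sketch of a two-end-member isotope mixing model for source
# apportionment; all delta values are illustrative placeholders.
def source_fraction(delta_mix, delta_src1, delta_src2):
    """Fraction f of source 1 when delta_mix = f*delta_src1 + (1-f)*delta_src2."""
    return (delta_mix - delta_src2) / (delta_src1 - delta_src2)

# Hypothetical d15N-NO3 values (per mil): manure/sewage vs. soil nitrification.
f_manure = source_fraction(delta_mix=9.0, delta_src1=14.0, delta_src2=4.0)
print("manure/sewage fraction:", f_manure)
```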

  7. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades, mobile mapping systems have performed their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with

  8. Identifying influential nodes based on graph signal processing in complex networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Yu, Li; Li, Jing-Ru; Zhou, Peng

    2015-05-01

    Identifying influential nodes in complex networks is of both theoretical and practical importance. Existing methods identify influential nodes based on their positions in the network and assume that the nodes are homogeneous. However, node heterogeneity (i.e., different attributes such as interest, energy, age, and so on) is ubiquitous and needs to be taken into consideration. In this paper, we conduct an investigation into node attributes and propose a graph signal processing based centrality (GSPC) method to identify influential nodes considering both the node attributes and the network topology. We first evaluate our GSPC method using two real-world datasets. The results show that our GSPC method effectively identifies influential nodes, which correspond well with the underlying ground truth, and that it is consistent with the previous eigenvector centrality and principal component centrality methods under circumstances where the nodes are homogeneous. In addition, spreading analysis shows that the GSPC method has a positive effect on the spreading dynamics. Project supported by the National Natural Science Foundation of China (Grant No. 61231010) and the Fundamental Research Funds for the Central Universities, China (Grant No. HUST No. 2012QN076).
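
    The flavor of a graph-signal-processing centrality can be conveyed in a few lines: treat the node attributes as a signal on the graph and rank nodes by a low-pass-filtered version of it. The sketch below is a loose illustration of this idea, not the paper's exact GSPC construction; the network, attribute values, and filter are all assumptions.

```python
# A minimal sketch in the spirit of a graph-signal-processing centrality:
# a node-attribute signal is low-pass filtered by the normalized adjacency,
# and nodes are ranked by the filtered value.
import numpy as np

A = np.array([[0, 1, 1, 0],        # small hypothetical network
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
attr = np.array([0.2, 0.9, 0.5, 0.1])        # hypothetical node attribute (e.g. energy)

d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))         # symmetric normalization
score = attr.copy()
for _ in range(3):                           # filter: repeated neighborhood averaging
    score = 0.5 * score + 0.5 * A_norm @ score

print("influence ranking:", np.argsort(score)[::-1])
```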

  9. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis

    PubMed Central

    2013-01-01

    Background: Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. Methods: We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. Results and conclusion: As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods. PMID:24267545
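
    The profile-likelihood idea is sketched below for a one-dimensional diffusion model with log-normal observation noise: the noise scale is optimized away at each fixed diffusion coefficient, leaving a one-dimensional profile. The model, data, and grid are synthetic simplifications of the PDE-constrained setting in the paper.

```python
# A minimal sketch of a profile likelihood for a diffusion coefficient D from
# noisy observations of the 1D fundamental solution; data are synthetic and
# the second parameter (noise scale sigma) is profiled out numerically.
import numpy as np
from scipy.optimize import minimize_scalar

def model(x, t, D):
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

rng = np.random.default_rng(2)
x, t, D_true = np.linspace(-2, 2, 40), 1.0, 0.3
obs = model(x, t, D_true) * np.exp(rng.normal(0, 0.1, x.size))   # log-normal noise

def neg_loglik(D, sigma):
    r = np.log(obs) - np.log(model(x, t, D))    # log-residuals
    return x.size * np.log(sigma) + np.sum(r**2) / (2 * sigma**2)

def profile(D):
    """Profile out sigma: minimize the negative log-likelihood at fixed D."""
    return minimize_scalar(lambda s: neg_loglik(D, s), bounds=(1e-3, 1.0),
                           method="bounded").fun

grid = np.linspace(0.1, 0.6, 11)
best = grid[np.argmin([profile(D) for D in grid])]
print("profile-likelihood estimate of D:", best)
```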

  10. Determination of all feasible robust PID controllers for open-loop unstable plus time delay processes with gain margin and phase margin specifications.

    PubMed

    Wang, Yuan-Jay

    2014-03-01

    This paper proposes a novel alternative method to graphically compute all feasible gain and phase margin specifications-oriented robust PID controllers for open-loop unstable plus time delay (OLUPTD) processes. This method is applicable to general OLUPTD processes without constraint on system order. To retain robustness for OLUPTD processes subject to positive or negative gain variations, the downward gain margin (GM(down)), upward gain margin (GM(up)), and phase margin (PM) are considered. A virtual gain-phase margin tester compensator is incorporated to guarantee the concerned system satisfies certain robust safety margins. In addition, the stability equation method and the parameter plane method are exploited to portray the stability boundary and the constant gain margin (GM) boundary as well as the constant PM boundary. The overlapping region of these boundaries is graphically determined and denotes the GM and PM specifications-oriented region (GPMSOR). Alternatively, the GPMSOR characterizes all feasible robust PID controllers which achieve the pre-specified safety margins. In particular, to achieve optimal gain tuning, the controller gains are searched within the GPMSOR to minimize the integral of the absolute error (IAE) or the integral of the squared error (ISE) performance criterion. Thus, an optimal PID controller gain set is successfully found within the GPMSOR and guarantees the OLUPTD processes with a pre-specified GM and PM as well as a minimum IAE or ISE. Consequently, both robustness and performance can be simultaneously assured. Further, the design procedures are summarized as an algorithm to help rapidly locate the GPMSOR and search an optimal PID gain set. Finally, three highly cited examples are provided to illustrate the design process and to demonstrate the effectiveness of the proposed method.
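
    Checking whether a candidate PID gain set meets gain and phase margin targets reduces to scanning the loop frequency response. The sketch below does this crudely for a hypothetical first-order unstable plant with a time delay; the plant, gains, delay, and crossover search are all illustrative, and a rigorous assessment of open-loop unstable systems also requires Nyquist encirclement counting.

```python
# A minimal sketch of checking gain and phase margins for a delayed loop
# L(s) = C(s)P(s)e^{-sT} on a frequency grid; all numbers are illustrative.
import numpy as np

def loop(jw, kp, ki, kd, T=0.2):
    C = kp + ki / jw + kd * jw                 # PID controller
    P = 1.0 / (jw - 0.5)                       # hypothetical unstable plant 1/(s - 0.5)
    return C * P * np.exp(-jw * T)             # loop transfer with time delay

w = np.logspace(-2, 2, 20000)
L = loop(1j * w, kp=3.0, ki=0.8, kd=0.4)
mag, phase = np.abs(L), np.unwrap(np.angle(L))

i_gc = np.argmin(np.abs(mag - 1.0))            # gain-crossover frequency index
pm_deg = np.degrees(phase[i_gc]) + 180.0       # phase margin
i_pc = np.argmin(np.abs(phase + np.pi))        # phase-crossover frequency index
gm_db = -20.0 * np.log10(mag[i_pc])            # gain margin
print("PM (deg):", pm_deg, " GM (dB):", gm_db)
```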

  11. Robust computational techniques for studying Stokes flow within deformable domains: Applications to global scale planetary formation processes

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; May, D.

    2014-12-01

    We develop numerical schemes for solving global scale Stokes flow systems employing the "sticky air" (approximate free surface) boundary condition. Our target application considers the dynamics of planetary growth involving the long time-scale global core formation process, for which the interaction between the surface geometry and the interior dynamics plays an important role (e.g. Golabek et al. 2009, Lin et al. 2009). The solution of Stokes flow problems including a free surface is one of the grand challenges of computational geodynamics due to the numerical instability arising at the deformable surface (e.g. Kaus et al. 2010, Duretz et al. 2011, Kramer et al. 2012). Here, we present two strategies for the efficient solution of the Stokes flow system using the "spherical Cartesian" approach (Gerya and Yuen 2007). The first technique addresses the robustness of the Stokes solver with respect to the viscosity jump arising between the sticky air and the planetary body (e.g. Furuichi et al. 2009). For this we employ preconditioned iterative solvers utilising mixed precision arithmetic (Furuichi et al. 2011). Our numerical experiments show that the mixed precision approach using double-double precision arithmetic improves convergence of the Krylov solver with respect to an increasing viscosity jump without significantly increasing the calculation time (~20%). The second strategy introduces an implicit advection scheme for stable time integration of the deformable free surface. The Stokes flow system becomes numerically stiff when the dynamical time scale associated with the surface deformation is very short in comparison to the time scales associated with other physical processes, such as thermal convection. In this work, we propose to treat the advection as a coordinate non-linearity coupled to the momentum equation, thereby defining a fully implicit time integration scheme. Such an integrator permits large time step sizes to be used without introducing spurious numerical
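
    The benefit of mixed precision can be illustrated with the classic iterative-refinement pattern: solve cheaply in low precision and accumulate residuals in high precision. The record pairs double with double-double arithmetic inside a preconditioned Krylov solver; the NumPy sketch below only mimics that pattern with float32/float64 and a direct solve on a fabricated well-conditioned system.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        A64 = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
        x_true = rng.standard_normal(n)
        b64 = A64 @ x_true

        A32 = A64.astype(np.float32)                       # low-precision copy
        x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)
        for it in range(5):
            r = b64 - A64 @ x                                # residual in high precision
            dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in low precision
            x += dx.astype(np.float64)
            err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            print("iteration %d: relative error %.2e" % (it, err))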

  12. Identifying potential misfit items in cognitive process of learning engineering mathematics based on Rasch model

    NASA Astrophysics Data System (ADS)

    Ataei, Sh; Mahmud, Z.; Khalid, M. N.

    2014-04-01

    Students' learning outcomes clarify what students should know and be able to demonstrate after completing their course, so one of the issues in the process of teaching and learning is how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model in measuring the cognitive process of engineering students' learning of mathematics. The study provides insights into the cognitive ability of 54 engineering students learning Calculus III, based on Bloom's taxonomy, using 31 items. The results show that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields fit statistics that identify departures of the data from the theoretical Rasch model. The study identified potential misfit items based on ZSTD, removing items whose outfit MNSQ was above 1.3 or below 0.7. It is therefore recommended that these items be reviewed or revised to better match the range of students' ability in the respective course.
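
    The quoted misfit criterion is straightforward to compute. A minimal sketch, assuming person abilities and item difficulties have already been estimated (here they are simulated): it evaluates the outfit mean-square (MNSQ) per item, i.e. the mean squared standardized residual, and flags items outside the 0.7-1.3 acceptance band.

        import numpy as np

        def outfit_mnsq(X, theta, beta):
            # X: persons x items matrix of 0/1 responses.
            P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))  # Rasch probability
            z2 = (X - P) ** 2 / (P * (1 - P))   # squared standardized residuals
            return z2.mean(axis=0)              # average over persons, per item

        rng = np.random.default_rng(0)
        theta = rng.normal(0, 1, 54)            # 54 students, as in the record
        beta = rng.normal(0, 1, 31)             # 31 items
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        X = (rng.random(P.shape) < P).astype(float)

        mnsq = outfit_mnsq(X, theta, beta)
        print("potential misfit items:", np.where((mnsq > 1.3) | (mnsq < 0.7))[0])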

  13. Transposon mutagenesis identifies genes and cellular processes driving epithelial-mesenchymal transition in hepatocellular carcinoma

    PubMed Central

    Kodama, Takahiro; Newberg, Justin Y.; Kodama, Michiko; Rangel, Roberto; Yoshihara, Kosuke; Tien, Jean C.; Parsons, Pamela H.; Wu, Hao; Finegold, Milton J.; Copeland, Neal G.; Jenkins, Nancy A.

    2016-01-01

    Epithelial-mesenchymal transition (EMT) is thought to contribute to metastasis and chemoresistance in patients with hepatocellular carcinoma (HCC), leading to their poor prognosis. The genes driving EMT in HCC are not yet fully understood, however. Here, we show that mobilization of Sleeping Beauty (SB) transposons in immortalized mouse hepatoblasts induces mesenchymal liver tumors on transplantation to nude mice. These tumors show significant down-regulation of epithelial markers, along with up-regulation of mesenchymal markers and EMT-related transcription factors (EMT-TFs). Sequencing of transposon insertion sites from tumors identified 233 candidate cancer genes (CCGs) that were enriched for genes and cellular processes driving EMT. Subsequent trunk driver analysis identified 23 CCGs that are predicted to function early in tumorigenesis and whose mutation or alteration in patients with HCC is correlated with poor patient survival. Validation of the top trunk drivers identified in the screen, including MET (MET proto-oncogene, receptor tyrosine kinase), GRB2-associated binding protein 1 (GAB1), HECT, UBA, and WWE domain containing 1 (HUWE1), lysine-specific demethylase 6A (KDM6A), and protein-tyrosine phosphatase, nonreceptor-type 12 (PTPN12), showed that deregulation of these genes activates an EMT program in human HCC cells that enhances tumor cell migration. Finally, deregulation of these genes in human HCC was found to confer sorafenib resistance through apoptotic tolerance and reduced proliferation, consistent with recent studies showing that EMT contributes to the chemoresistance of tumor cells. Our unique cell-based transposon mutagenesis screen appears to be an excellent resource for discovering genes involved in EMT in human HCC and potentially for identifying new drug targets. PMID:27247392

  14. Exon-level expression analyses identify MYCN and NTRK1 as major determinants of alternative exon usage and robustly predict primary neuroblastoma outcome

    PubMed Central

    Schramm, A; Schowe, B; Fielitz, K; Heilmann, M; Martin, M; Marschall, T; Köster, J; Vandesompele, J; Vermeulen, J; de Preter, K; Koster, J; Versteeg, R; Noguera, R; Speleman, F; Rahmann, S; Eggert, A; Morik, K; Schulte, J H

    2012-01-01

    Background: Using mRNA expression-derived signatures as predictors of individual patient outcome has been a goal ever since the introduction of microarrays. Here, we addressed whether analyses of tumour mRNA at the exon level can improve on the predictive power and classification accuracy of gene-based expression profiles, using neuroblastoma as a model. Methods: In a patient cohort comprising 113 primary neuroblastoma specimens, expression profiling using exon-level analyses was performed to define predictive signatures using various machine-learning techniques. Alternative transcript use was calculated from relative exon expression. Validation of alternative transcripts was achieved using qPCR- and cell-based approaches. Results: Predictors derived from either the gene or the exon level achieved prediction accuracies >80% for both event-free and overall survival and proved to be independent prognostic markers in multivariate analyses. Alternative transcript use was most prominently linked to the amplification status of the MYCN oncogene, expression of the TrkA/NTRK1 neurotrophin receptor, and survival. Conclusion: As exon level-based prediction yields comparable, but not significantly better, accuracy than gene expression-based predictors, gene-based assays seem sufficiently precise for predicting the outcome of neuroblastoma patients. However, exon-level analyses provide added knowledge by identifying alternative transcript use, which should deepen the understanding of neuroblastoma biology. PMID:23047593

  15. Efficiency of operation and robustness of internal image processing for a new type of phosphorplate scanner connected to HIS and PACS

    NASA Astrophysics Data System (ADS)

    Bellon, Erwin; Feron, Michel; Van den Bosch, Bart; Pauwels, Herman; Dhaenens, Frans; Houtput, Wilfried; Vanautgaerden, Mark; Baert, Albert L.; Suetens, Paul; Marchal, Guy

    1996-05-01

    We report on our experience with a recently introduced phosphorplate system (AGFA Diagnostic Center) from the viewpoint of overall operational efficiency. A first factor that determines efficiency is the time it takes to enter patient and examination information. A second factor is the robustness of the automated image processing algorithms provided in the CR system, as this determines the need for interactive image reprocessing on the workstation or for film retakes. Both factors are strongly influenced by the integration of the modality within the HIS, whereby information about the patient and the examination request is automatically transferred to the phosphorplate system. Problems related to wrongly entered patient information are virtually eliminated, and compared with manual entry of patient demographic data, efficiency has increased significantly. The examination information provided by the HIS helps the CR system select optimal processing parameters automatically in the majority of situations. Furthermore, the image processing algorithms turn out to be rather robust and independent of pathology. We believe that both the HIS connection and the robustness of the internal image processing help keep the current rates of retakes and reprocessing at roughly 1.2% and 0.9%, respectively, compared with more than 8% retakes with the previous analogue systems.

  16. Towards a typology of business process management professionals: identifying patterns of competences through latent semantic analysis

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Schmiedel, Theresa; Gorbacheva, Elena; vom Brocke, Jan

    2016-01-01

    While researchers have analysed the organisational competences that are required for successful Business Process Management (BPM) initiatives, individual BPM competences have not yet been studied in detail. In this study, latent semantic analysis is used to examine a collection of 1507 BPM-related job advertisements in order to develop a typology of BPM professionals. This empirical analysis reveals distinct ideal types and profiles of BPM professionals at several levels of abstraction. A closer look at these ideal types and profiles confirms that BPM is a boundary-spanning field that requires interdisciplinary sets of competences, ranging from technical to business and systems competences. Based on the study's findings, it is posited that individual and organisational alignment with the identified ideal types and profiles is likely to result in high employability and organisational BPM success.
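
    A minimal sketch of the latent semantic analysis pipeline, run on fabricated stand-ins for job advertisements (the study used 1507 real ads): TF-IDF weighting, a truncated SVD of the term-document matrix, and clustering of the documents in the latent space into candidate competence profiles.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        ads = [
            "process modelling BPMN workflow analysis requirements",
            "java development process automation integration ERP",
            "change management stakeholder communication process improvement",
            "process mining data analysis statistics SQL",
            "project management governance process strategy alignment",
        ]

        # Latent semantic analysis = TF-IDF followed by truncated SVD.
        tfidf = TfidfVectorizer().fit_transform(ads)
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

        # Clustering documents in the latent space yields candidate profiles.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
        print(labels)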

  17. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them make up chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it has become possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose DPNuc, a novel approach for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model, with no need for prior knowledge about nucleosomes. Compared with three existing methods, our approach provides more detailed information on the detected nucleosomes and more reasonably reveals the real configurations of the chromosomes; in particular, it performs better in complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, we show that our approach achieves better performance in identifying nucleosome positions and a higher F-score.
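
    The modelling idea, though not DPNuc itself, can be sketched with scikit-learn's Dirichlet-process mixture, which fits by variational inference where DPNuc uses MCMC. Synthetic read centres from two overlapping nucleosome positions plus a uniform background stand in for real data.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        reads = np.concatenate([rng.normal(1000, 40, 300),   # nucleosome near bp 1000
                                rng.normal(1120, 40, 200),   # overlapping neighbour
                                rng.uniform(800, 1400, 50)])[:, None]  # background noise

        # The truncation level n_components bounds, but does not fix, the number of
        # clusters; the Dirichlet-process prior prunes unneeded components.
        dpmm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            random_state=0).fit(reads)
        active = dpmm.weights_ > 0.05
        print("inferred nucleosome positions:", np.sort(dpmm.means_[active].ravel()))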

  18. An event-specific DNA microarray to identify genetically modified organisms in processed foods.

    PubMed

    Kim, Jae-Hwan; Kim, Su-Youn; Lee, Hyungjae; Kim, Young-Rok; Kim, Hae-Yeong

    2010-05-26

    We developed an event-specific DNA microarray system to identify 19 genetically modified organisms (GMOs), including two GM soybeans (GTS-40-3-2 and A2704-12), thirteen GM maizes (Bt176, Bt11, MON810, MON863, NK603, GA21, T25, TC1507, Bt10, DAS59122-7, TC6275, MIR604, and LY038), three GM canolas (GT73, MS8xRF3, and T45), and one GM cotton (LLcotton25). The microarray included 27 oligonucleotide probes optimized to identify endogenous reference targets, event-specific targets, screening targets (35S promoter and nos terminator), and an internal target (18S rRNA gene). Thirty-seven maize-containing food products purchased from South Korean and US markets were tested for the presence of GM maize using this microarray system. Thirteen GM maize events were simultaneously detected using multiplex PCR coupled with microarray on a single chip, at a limit of detection of approximately 0.5%. Using the system described here, we detected GM maize in 11 of the 37 food samples tested. These results suggest that an event-specific DNA microarray system can reliably detect GMOs in processed foods.

  19. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    Atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly reported in previous research. The present study examined whether face scanning patterns could be useful for identifying children with ASD by adopting machine learning algorithms for classification. In particular, we applied machine learning to an eye movement dataset from a face recognition task [Yi et al., 2016] to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. The results provide promising evidence for applying machine learning algorithms based on face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should shed light on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
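
    The three reported evaluation metrics follow directly from a confusion matrix; a minimal sketch with fabricated labels:

        import numpy as np
        from sklearn.metrics import confusion_matrix

        y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])   # 1 = ASD, 0 = control
        y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)     # ASD cases correctly identified
        specificity = tn / (tn + fp)     # controls correctly identified
        print(accuracy, sensitivity, specificity)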

  20. An ENU mutagenesis screen identifies novel and known genes involved in epigenetic processes in the mouse

    PubMed Central

    2013-01-01

    Background We have used a sensitized ENU mutagenesis screen to produce mouse lines that carry mutations in genes required for epigenetic regulation. We call these lines Modifiers of murine metastable epialleles (Mommes). Results We report a basic molecular and phenotypic characterization for twenty of the Momme mouse lines, and in each case we also identify the causative mutation. Three of the lines carry a mutation in a novel epigenetic modifier, Rearranged L-myc fusion (Rlf), and one gene, Rap-interacting factor 1 (Rif1), has not previously been reported to be involved in transcriptional regulation in mammals. Many of the other lines are novel alleles of known epigenetic regulators. For two genes, Rlf and Widely-interspaced zinc finger (Wiz), we describe the first mouse mutants. All of the Momme mutants show some degree of homozygous embryonic lethality, emphasizing the importance of epigenetic processes. The penetrance of lethality is incomplete in a number of cases. Similarly, abnormalities in phenotype seen in the heterozygous individuals of some lines occur with incomplete penetrance. Conclusions Recent advances in sequencing enhance the power of sensitized mutagenesis screens to identify the function of previously uncharacterized factors and to discover additional functions for previously characterized proteins. The observation of incomplete penetrance of phenotypes in these inbred mutant mice, at various stages of development, is of interest. Overall, the Momme collection of mouse mutants provides a valuable resource for researchers across many disciplines. PMID:24025402

  1. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  2. Robustness. [in space systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    1993-01-01

    The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded with functional concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which comprises seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, formulation of robustness criteria, concept selection, detail design, manufacturing and verification, and operations.

  3. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks.

    PubMed

    Razaque, Abdul; Elleithy, Khaled

    2015-07-06

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion, increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art contribution of this work is the improvement of both QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes.

  4. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks

    PubMed Central

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion, increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art contribution of this work is the improvement of both QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  5. Multicomponent statistical analysis to identify flow and transport processes in a highly-complex environment

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; Radny, Dirk; Borer, Paul; Rothardt, Judith; Auckenthaler, Adrian; Berg, Michael; Schirmer, Mario

    2016-11-01

    A combined approach of multivariate statistical analysis, namely factor analysis (FA) and hierarchical cluster analysis (HCA), interpretation of geochemical processes, stable water isotope data and organic micropollutants was used to assess spatial patterns of water types in a study area in Switzerland, where drinking water production lies close to several potential input pathways for contamination. To avoid drinking water contamination, artificial groundwater recharge with surface water into an aquifer is used to create a hydraulic barrier between potential contamination pathways and the drinking water extraction wells. Inter-aquifer mixing in the subsurface is identified, whereby a large amount of artificially infiltrated surface water mixes with a smaller amount of water originating from the regional flow pathway in the vicinity of the drinking water extraction wells. The spatial distribution of the different water types can be estimated and a conceptual system understanding developed. The results of the multivariate statistical analysis are consistent with the information gained from the isotopic data and the organic micropollutant analyses. The integrated approach, using different kinds of observations, can easily be transferred to a variety of hydrological settings to synthesise and evaluate large hydrochemical datasets, and it readily accommodates additional data with different information content, enabling effective interpretation of hydrological processes. The approach leads to a sounder conceptual system understanding, which is the very basis for developing improved and sustainable water resources management practices.
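
    A minimal sketch of the FA-plus-HCA pipeline on a fabricated hydrochemical table (two synthetic water types described by five parameters); the study's real input additionally included isotope and micropollutant data.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler
        from scipy.cluster.hierarchy import linkage, fcluster

        # Samples x parameters (e.g. Ca, Mg, HCO3, SO4, Cl), invented values.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal([80, 20, 250, 30, 10], 5, (10, 5)),
                       rng.normal([60, 15, 120, 180, 15], 5, (10, 5))])
        Xs = StandardScaler().fit_transform(X)

        # Factor analysis condenses correlated parameters into latent factors ...
        scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(Xs)

        # ... and Ward-linkage hierarchical clustering groups samples into water types.
        clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
        print(clusters)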

  6. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record

    PubMed Central

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I. M.

    2012-01-01

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records. PMID:22734056

  7. Robust Speech Processing & Recognition: Speaker ID, Language ID, Speech Recognition/Keyword Spotting, Diarization/Co-Channel/Environmental Characterization, Speaker State Assessment

    DTIC Science & Technology

    2015-10-01

    …by a person is a rich and valuable piece of information for several applications such as health monitoring, second language learning or language …

  8. Use of Sulphur and Boron Isotopes to Identify Natural Gas Processing Emissions Sources

    NASA Astrophysics Data System (ADS)

    Bradley, C. E.; Norman, A.; Wieser, M. E.

    2003-12-01

    Natural gas processing results in the emission of large amounts of gaseous pollutants through planned and/or emergency flaring, sulphur incineration, and normal operation. Since many gas plants contribute to the same air shed, it is not possible to conclusively determine the sources, amounts, and characteristics of pollution from a particular processing facility using traditional methods. However, sulphur isotopes have proven useful in the apportionment of sources of atmospheric sulphate (Norman et al., 1999), and boron isotopes have been shown to be of use in tracing coal contamination through groundwater (Davidson and Bassett, 1993). In this study, both sulphur and boron isotopes have been measured at source, receptor, and control sites, and, if emissions prove to be sufficiently distinct isotopically, they will be used to identify and apportion emissions downwind. Sulphur is present in natural gas as hydrogen sulphide (H2S), which is combusted to sulphur dioxide (SO2) prior to its release to the atmosphere, while boron is present both in hydrocarbon deposits and in any water used in the process. Little is known about the isotopic abundance variations of boron in hydrocarbon reservoirs, but Krouse (1991) has shown that the sulphur isotope composition of H2S in reservoirs varies according to both the concentration and the method of formation of H2S. As a result, gas plants processing gas from different reservoirs are expected to produce emissions with unique isotopic compositions. Samples were collected using a high-volume air sampler placed directly downwind of several gas plants, as well as at a receptor site and a control site. Aerosol sulphate and boron were collected on quartz fibre filters, while SO2 was collected on potassium hydroxide-impregnated cellulose filters. Solid sulphur samples were taken from those plants that process sulphur in order to compare the isotopic composition with atmospheric measurements.

  9. Identifying vegetation's influence on multi-scale fluvial processes based on plant trait adaptations

    NASA Astrophysics Data System (ADS)

    Manners, R.; Merritt, D. M.; Wilcox, A. C.; Scott, M.

    2015-12-01

    Riparian vegetation-geomorphic interactions are critical to the physical and biological function of riparian ecosystems, yet we lack a mechanistic understanding of these interactions and predictive ability at the reach to watershed scale. Plant functional groups, or groupings of species that have similar traits, either in terms of a plant's life-history strategy (e.g., drought tolerance) or morphology (e.g., growth form), may provide an expression of vegetation-geomorphic interactions. We are developing an approach that 1) identifies where along a river corridor plant functional groups exist and 2) links the traits that define functional groups to their impact on fluvial processes. The Green and Yampa Rivers in Dinosaur National Monument have wide variations in hydrology, hydraulics, and channel morphology, as well as a large dataset of species presence. For these rivers, we build a predictive model of the probable presence of plant functional groups based on site-specific aspects of the flow regime (e.g., inundation probability and duration), hydraulic characteristics (e.g., velocity), and substrate size. Functional group traits are collected from the literature and measured in the field. We found that life-history traits more strongly predicted functional group presence than did morphological traits. However, some life-history traits, important for determining the likelihood of a plant existing along an environmental gradient, are directly related to the morphological properties of the plant, which are important for the plant's impact on fluvial processes. For example, stem density (i.e., dry mass divided by stem volume) is positively correlated with drought tolerance and is also related to the modulus of elasticity. Growth form, which is related to the plant's susceptibility to biomass-removing fluvial disturbances, is also related to frontal area. Using this approach, we can identify how plant community composition and distribution shifts with a change to the flow

  10. Identifying sources and processes controlling the sulphur cycle in the Canyon Creek watershed, Alberta, Canada.

    PubMed

    Nightingale, Michael; Mayer, Bernhard

    2012-01-01

    Sources and processes affecting the sulphur cycle in the Canyon Creek watershed in Alberta (Canada) were investigated. The catchment is important for water supply and recreational activities and is also a source of oil and natural gas. Water was collected from 10 locations along an 8 km stretch of Canyon Creek, including three so-called sulphur pools, followed by chemical and isotopic analyses of the water and its major dissolved species. The δ²H and δ¹⁸O values of the water plotted near the regional meteoric water line, indicating a meteoric origin of the water and no contribution from deeper formation waters. Calcium, magnesium and bicarbonate were the dominant ions in the upstream portion of the watershed, whereas sulphate was the dominant anion in the water from the three sulphur pools. The isotopic composition of sulphate (δ³⁴S and δ¹⁸O) revealed three major sulphate sources with distinct isotopic compositions throughout the catchment: (1) a combination of sulphate from soils and sulphide oxidation in the bedrock in the upper reaches of Canyon Creek; (2) sulphide oxidation in pyrite-rich shales in the lower reaches of Canyon Creek and (3) dissolution of Devonian anhydrite, constituting the major sulphate source for the three sulphur pools in the central portion of the watershed. The presence of H₂S in the sulphur pools, with δ³⁴S values ∼30 ‰ lower than those of sulphate, further indicated the occurrence of bacterial (dissimilatory) sulphate reduction. This case study reveals that δ³⁴S values of surface water systems can vary by more than 20 ‰ over short geographic distances and that isotope analyses are an effective tool to identify sources and processes that govern the sulphur cycle in watersheds.
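
    Where two isotopically distinct sulphate sources mix, their contributions follow from a simple two-end-member balance; the δ³⁴S values below are invented for illustration, not taken from the study.

        # Fraction f of anhydrite-derived sulphate from d34S of a sample and of the
        # two end members: d_sample = f*d_anhydrite + (1 - f)*d_pyrite.
        d_sample, d_anhydrite, d_pyrite = 18.0, 25.0, -5.0
        f = (d_sample - d_pyrite) / (d_anhydrite - d_pyrite)
        print("anhydrite-derived fraction: %.0f%%" % (100 * f))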

  11. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process

    PubMed Central

    Erzurumluoglu, A. Mesut; Shihab, Hashem A.; Baird, Denis; Richardson, Tom G.; Day, Ian N. M.; Gaunt, Tom R.

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). This wide range of methods and the diversity of file formats used in sequence analysis are a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many researchers possess “just enough” knowledge to analyse their data, they do not make full use of the tools and databases that are available and often do not fully understand how their data were created. The primary aim of this review is to document some of the key approaches and to provide an analysis schema that makes the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review also compares the methods used to identify highly penetrant variants when data are obtained from consanguineous as opposed to nonconsanguineous individuals, and when Mendelian disorders are analysed as opposed to common-complex disorders. PMID:26106619

  12. Disinvestment for re-allocation: a process to identify priorities in healthcare.

    PubMed

    Nuti, Sabina; Vainieri, Milena; Bonini, Anna

    2010-05-01

    Resource scarcity and increasing service demand force health systems to make choices within constrained budgets. The aim of this paper is to describe a study carried out in the Tuscan Health System in Italy on how to set priorities in the disinvestment process for re-allocation. The analysis was based on 2007 benchmarking data for the Tuscan Health System, focusing on indicators with an impact on the level of resources used. For each indicator, the first step was to estimate the gap between the performance of each Health Authority (HA) and the best performance or the regional average. The second step was to express this gap as a financial value. The results of the analysis demonstrated that, at the regional level, 2-7% of the healthcare budget could be re-allocated if all the institutions achieved the regional average or the best practice. The implications of this study are useful for policy makers and HA top management: in a context of resource scarcity, it allows managers to identify the areas where institutions can achieve a higher level of efficiency without negative effects on quality of care, and to re-allocate resources toward services with more value for patients.

  13. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. The techniques compared include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient, calculated from error matrices which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection. The PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
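
    The accuracy measure used above is easy to reproduce; a minimal sketch computing the Kappa coefficient from an error (confusion) matrix with illustrative counts:

        import numpy as np

        error_matrix = np.array([[120, 10],    # rows: reference (damage, no damage)
                                 [15, 855]])   # columns: classified (damage, no damage)

        n = error_matrix.sum()
        observed = np.trace(error_matrix) / n  # overall agreement
        expected = (error_matrix.sum(0) * error_matrix.sum(1)).sum() / n**2  # by chance
        kappa = (observed - expected) / (1 - expected)
        print("kappa = %.3f" % kappa)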

  14. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. The techniques compared include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient, calculated from error matrices which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection. The PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  15. Parallel processing in an identified neural circuit: the Aplysia californica gill-withdrawal response model system.

    PubMed

    Leonard, Janet L; Edstrom, John P

    2004-02-01

    /or 'Back-propagation' type. Such models may offer a more biologically realistic representation of nervous system organisation than has been thought. In this model, the six parallel GMNs of the CNS correspond to a hidden layer within one module of the gill-control system. That is, the gill-control system appears to be organised as a distributed system with several parallel modules, some of which are neural networks in their own right. A new model is presented here which predicts that the six GMNs serve as components of a 'push-pull' gain control system, along with known but largely unidentified inhibitory motor neurons from the PVG. This 'push-pull' gain control system sets the responsiveness of the peripheral gill motor system. Neither causal nor correlational links between specific forms of neural plasticity and behavioural plasticity have been demonstrated in the GWR model system. However, the GWR model system does provide an opportunity to observe and describe directly the physiological and biochemical mechanisms of distributed representation and parallel processing in a largely identifiable 'wetware' neural network.

  16. Identifying the influential aquifer heterogeneity factor on nitrate reduction processes by numerical simulation

    NASA Astrophysics Data System (ADS)

    Jang, E.; He, W.; Savoy, H.; Dietrich, P.; Kolditz, O.; Rubin, Y.; Schüth, C.; Kalbacher, T.

    2017-01-01

    Nitrate reduction reactions in groundwater systems are strongly influenced by various aquifer heterogeneity factors that affect the transport of chemical species and the spatial distribution of redox-reactive substances and, as a result, the overall nitrate reduction efficiency. In this study, we investigated the influence of physical and chemical aquifer heterogeneity, with a focus on nitrate transport and redox transformation processes. A numerical modeling study simulating coupled hydrological-geochemical aquifer heterogeneity was conducted in order to improve our understanding of the influence of aquifer heterogeneity on nitrate reduction reactions and to identify the most influential heterogeneity factors over the course of the simulation. Results show that the most influential aquifer heterogeneity factors can change over time. With an abundant presence of electron donors in the highly permeable zones (initial stage), physical aquifer heterogeneity significantly influences nitrate reduction, since it enables the preferential transport of nitrate to these zones and enhances mixing of reactive partners; chemical aquifer heterogeneity plays a comparatively minor role. Increasing the spatial variability of the hydraulic conductivity also increases the nitrate removal efficiency of the system. However, ignoring chemical aquifer heterogeneity can lead to an underestimation of long-term nitrate removal. As the spatial variability of the electron donor, i.e. chemical heterogeneity, increases, so does the number of "hot spots", i.e. zones with comparably higher reactivity. Hence, nitrate removal efficiencies will also be spatially variable, but the overall removal efficiency will be sustained if longer time scales are considered and nitrate fronts reach these high-reactivity zones.
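
    The interplay of transport and a spatially variable reduction rate can be caricatured in one dimension. The sketch below, with invented parameter values, advects nitrate through a lognormal reaction-rate field using an explicit upwind scheme; it illustrates the modelling ingredients only, not the study's coupled hydrological-geochemical simulator.

        import numpy as np

        nx, L = 200, 100.0                      # grid cells, domain length [m]
        x = np.linspace(0.0, L, nx)
        dx = x[1] - x[0]
        v, dt = 0.5, 0.1                        # velocity [m/d], time step [d]
        rng = np.random.default_rng(0)
        k = np.exp(rng.normal(-3.0, 1.0, nx))   # lognormal reduction rate field [1/d]

        c = np.ones(nx)                         # normalised nitrate concentration
        for _ in range(5000):                   # run to (near) steady state
            c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1]) - k[1:] * c[1:] * dt
            c[0] = 1.0                          # constant nitrate input at the inlet
        print("fraction of nitrate removed at the outlet: %.2f" % (1 - c[-1]))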

  17. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    SciTech Connect

    Hurlbert, Allen H.; Stegen, James C.

    2014-12-02

    Many processes have been put forward to explain the latitudinal gradient in species richness. Here, we use a simulation model to examine four of the most common hypotheses and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The productivity, or energetic constraints, hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The tropical stability hypothesis argues that major climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. (4) Finally, the speciation rates hypothesis suggests that the latitudinal richness gradient arises from a parallel gradient in rates of speciation. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all exhibited phylogenies which were more imbalanced than expected, showed a negative relationship between mean root distance and richness, and diversity-dependence of speciation rate estimates through time. Using Bayesian Analysis of Macroevolutionary Mixtures on the simulated phylogenies, we found that the relationship between speciation rates and latitude could distinguish among these three scenarios. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with more than one hypothesis.

  18. A more robust model of the biodiesel reaction, allowing identification of process conditions for significantly enhanced rate and water tolerance.

    PubMed

    Eze, Valentine C; Phan, Anh N; Harvey, Adam P

    2014-03-01

    A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally and validated numerically in a model implemented in MATLAB. Including the saponification of RSO and FAME as side reactions, together with hydroxide-methoxide equilibrium data, explained various effects that are not captured by simpler conventional models. Both experiment and modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor.
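
    A toy version of such a kinetic model: the stepwise transesterification TG -> DG -> MG -> glycerol, each step consuming methanol and releasing FAME, with a single lumped saponification loss of FAME. All rate constants and initial amounts are invented for illustration and do not reproduce the paper's fitted values.

        import numpy as np
        from scipy.integrate import solve_ivp

        k1, k2, k3, ks = 0.05, 0.10, 0.20, 0.001   # step rates and saponification loss

        def rhs(t, y):
            tg, dg, mg, fame, meoh = y
            r1, r2, r3 = k1 * tg * meoh, k2 * dg * meoh, k3 * mg * meoh
            return [-r1,                        # triglyceride
                    r1 - r2,                    # diglyceride
                    r2 - r3,                    # monoglyceride
                    r1 + r2 + r3 - ks * fame,   # FAME, minus saponification loss
                    -(r1 + r2 + r3)]            # methanol

        y0 = [1.0, 0.0, 0.0, 0.0, 6.0]          # 6:1 methanol-to-oil molar ratio
        sol = solve_ivp(rhs, (0.0, 10.0), y0)
        conversion = 1.0 - sol.y[0, -1] / y0[0]
        print("TG conversion after %.0f min: %.1f%%" % (sol.t[-1], 100 * conversion))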

  19. Energy Landscape Reveals That the Budding Yeast Cell Cycle Is a Robust and Adaptive Multi-stage Process

    PubMed Central

    Lv, Cheng; Li, Xiaoguang; Li, Fangting; Li, Tiejun

    2015-01-01

    Quantitatively understanding the robustness, adaptivity and efficiency of cell cycle dynamics under the influence of noise is a fundamental but difficult question to answer for most eukaryotic organisms. Using a simplified budding yeast cell cycle model perturbed by intrinsic noise, we systematically explore these issues from an energy landscape point of view by constructing an energy landscape for the considered system based on large deviation theory. Analysis shows that the cell cycle trajectory is sharply confined by the ambient energy barrier, and the landscape along this trajectory exhibits a generally flat shape. We explain the evolution of the system on this flat path by incorporating its non-gradient nature. Furthermore, we illustrate how this global landscape changes in response to external signals, observing a nice transformation of the landscapes as the excitable system approaches a limit cycle system when nutrients are sufficient, as well as the formation of additional energy wells when the DNA replication checkpoint is activated. By taking into account the finite volume effect, we find additional pits along the flat cycle path in the landscape associated with the checkpoint mechanism of the cell cycle. The difference between the landscapes induced by intrinsic and extrinsic noise is also discussed. In our opinion, this meticulous structure of the energy landscape for our simplified model is of general interest to other cell cycle dynamics, and the proposed methods can be applied to study similar biological systems. PMID:25794282

  20. Robust conversion of marrow cells to skeletal muscle with formation of marrow-derived muscle cell colonies: A multifactorial process

    SciTech Connect

    Abedi, Mehrdad; Greer, Deborah A.; Colvin, Gerald A.; Demers, Delia A.; Dooner, Mark S.; Harpel, Jasha A.; Weier, Heinz-Ulrich G.; Lambert, Jean-Francois; Quesenberry, Peter J.

    2004-01-10

    Murine marrow cells are capable of repopulating skeletal muscle fibers. A point of concern has been the robustness of such conversions. We have investigated the impact of the type of cell delivery, muscle injury, the nature of the delivered cells, and stem cell mobilization on marrow-to-muscle conversion. We transplanted GFP transgenic marrow into irradiated C57BL/6 mice and then injured the anterior tibialis muscle with cardiotoxin. One month after injury, sections were analyzed by standard and deconvolutional microscopy for expression of muscle and hematopoietic markers. Irradiation was essential to conversion, although whether by injury or induction of chimerism is not clear. Cardiotoxin-injected and, to a lesser extent, PBS-injected muscles showed significant numbers of GFP+ muscle fibers, while uninjected muscles showed only rare GFP+ cells. Marrow conversion to muscle was increased by two cycles of G-CSF mobilization and, to a lesser extent, by G-CSF with steel factor or GM-CSF. Transplantation of female GFP marrow into male C57BL/6 mice and of GFP marrow into Rosa26 mice showed fusion of donor cells to recipient muscle. High numbers of donor-derived muscle colonies and up to 12 percent GFP-positive muscle cells were seen after mobilization or direct injection. These levels of donor muscle chimerism approach levels which could be clinically significant in developing strategies for the treatment of muscular dystrophies. In summary, the conversion of marrow to skeletal muscle cells is based on cell fusion and is critically dependent on injury. This conversion is also numerically significant and increases with mobilization.

  1. Dynamics robustness of cascading systems.

    PubMed

    Young, Jonathan T; Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko

    2017-03-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be experimentally robust. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profile and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a
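
    The masking of upstream perturbations is easy to reproduce in a toy linear cascade. The sketch below, with invented rates, drives a three-stage cascade with a square stimulus and shows that the half-maximum response duration of the final module, whose decay rate is rate-limiting, is unchanged when the upstream activation rate is perturbed; because the cascade is linear, scaling the input only rescales the amplitude, not the normalized time course.

        import numpy as np
        from scipy.integrate import solve_ivp

        def cascade(t, x, a1):
            s = 1.0 if t < 5.0 else 0.0          # square stimulus of duration 5
            x1, x2, x3 = x
            return [a1 * s - 1.0 * x1,           # stage 1 (perturbed activation a1)
                    1.0 * x1 - 1.0 * x2,         # stage 2
                    1.0 * x2 - 0.1 * x3]         # stage 3: slowest decay, rate-limiting

        def response_duration(a1):
            sol = solve_ivp(cascade, (0.0, 40.0), [0.0, 0.0, 0.0], args=(a1,),
                            max_step=0.05)
            x3 = sol.y[2]
            above = sol.t[x3 > 0.5 * x3.max()]   # time spent above half-maximum
            return above[-1] - above[0]

        for a1 in (0.5, 1.0, 2.0):               # perturb the upstream module
            print("a1 = %.1f -> response duration %.1f" % (a1, response_duration(a1)))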

  2. Dynamics robustness of cascading systems

    PubMed Central

    Kaneko, Kunihiko

    2017-01-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be experimentally robust. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profile and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade’s kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a

  3. Phenotypic Screening Identifies Modulators of Amyloid Precursor Protein Processing in Human Stem Cell Models of Alzheimer's Disease.

    PubMed

    Brownjohn, Philip W; Smith, James; Portelius, Erik; Serneels, Lutgarde; Kvartsberg, Hlin; De Strooper, Bart; Blennow, Kaj; Zetterberg, Henrik; Livesey, Frederick J

    2017-03-06

    Human stem cell models have the potential to provide platforms for phenotypic screens to identify candidate treatments and cellular pathways involved in the pathogenesis of neurodegenerative disorders. Amyloid precursor protein (APP) processing and the accumulation of APP-derived amyloid β (Aβ) peptides are key processes in Alzheimer's disease (AD). We designed a phenotypic small-molecule screen to identify modulators of APP processing in trisomy 21/Down syndrome neurons, a complex genetic model of AD. We identified the avermectins, commonly used as anthelmintics, as compounds that increase the relative production of short Aβ peptides at the expense of longer, potentially more toxic peptides. Further studies demonstrated that this effect is not due to an interaction with the core γ-secretase responsible for Aβ production. This study demonstrates the feasibility of phenotypic drug screening in human stem cell models of Alzheimer-type dementia, and points to possibilities for indirectly modulating APP processing, independently of γ-secretase modulation.

  4. Identifying Gifted Students: Educator Beliefs regarding Various Policies, Processes, and Procedures

    ERIC Educational Resources Information Center

    Schroth, Stepen T.; Helfer, Jason A.

    2008-01-01

    Issues regarding the identification of gifted students have perplexed the field almost since its inception. How one identifies gifted students has tremendous ramifications for a gifted education program's size, curriculum, instructional methods, and administration. Little is known, however, about educator beliefs regarding gifted…

  5. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  6. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  7. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  8. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  9. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  10. Identifying Leadership Potential: The Process of Principals within a Charter School Network

    ERIC Educational Resources Information Center

    Waidelich, Lynn A.

    2012-01-01

    The importance of strong educational leadership for American K-12 schools cannot be overstated. As such, school districts need to actively recruit and develop leaders. One way to do so is for school officials to become more strategic in leadership identification and development. If contemporary leaders are strategic about whom they identify and…

  11. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  12. Hypochondriasis Differs From Panic Disorder and Social Phobia: Specific Processes Identified Within Patient Groups.

    PubMed

    Höfling, Volkmar; Weck, Florian

    2017-03-01

    Studies of the comorbidity of hypochondriasis have indicated high rates of co-occurrence with other anxiety disorders. In this study, the contrast among hypochondriasis, panic disorder, and social phobia was investigated using specific processes drawing on cognitive-perceptual models of hypochondriasis. Affective, behavioral, cognitive, and perceptual processes specific to hypochondriasis were assessed with 130 diagnosed participants based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria (66 with hypochondriasis, 32 with panic disorder, and 32 with social phobia). All processes specific to hypochondriasis were more intense for patients with hypochondriasis than for those with panic disorder or social phobia (0.61 < d < 2.67). No differences were found between those with hypochondriasis with comorbid disorders and those without comorbid disorders. Perceptual processes were shown to best discriminate between patients with hypochondriasis and those with panic disorder.

  13. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    NASA Astrophysics Data System (ADS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-09-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer a new alternative mechanism for understanding the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to -1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic, transport and supply-chain management.
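
    A quick way to see the noise-free SSRP result is to simulate one: from state n, jump uniformly to any strictly lower state until state 1 is reached, and count state visits. The classic result is that state i is visited with probability 1/i, i.e., Zipf's law; the parameters below are arbitrary.

        # Noise-free sample space reducing process; visit frequencies ~ 1/rank.
        import numpy as np

        rng = np.random.default_rng(0)
        N, runs = 1000, 20000
        visits = np.zeros(N + 1)

        for _ in range(runs):
            x = N
            while x > 1:
                visits[x] += 1
                x = rng.integers(1, x)    # uniform jump to a strictly lower state
            visits[1] += 1

        ranks = np.arange(1, N + 1)
        freq = visits[1:] / visits[1:].sum()
        # log-log slope over ranks 1..N-1 (the start state N is always visited);
        # the estimate should be close to -1 (Zipf's law)
        slope = np.polyfit(np.log(ranks[:-1]), np.log(freq[:-1]), 1)[0]
        print(f"estimated exponent: {slope:.2f}")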

  14. Identifying the hazard characteristics of powder byproducts generated from semiconductor fabrication processes.

    PubMed

    Choi, Kwang-Min; An, Hee-Chul; Kim, Kwan-Sick

    2015-01-01

    Semiconductor manufacturing processes generate powder particles as byproducts which could potentially affect workers' health. The chemical composition, size, shape, and crystal structure of these powder particles were investigated by scanning electron microscopy equipped with an energy dispersive spectrometer, Fourier transform infrared spectrometry, and X-ray diffractometry. The powders generated in the diffusion and chemical mechanical polishing processes were amorphous silica. The particles from the chemical vapor deposition (CVD) and etch processes were TiO(2) and Al(2)O(3), and Al(2)O(3), respectively. As for metallization, WO(3), TiO(2), and Al(2)O(3) particles were generated from equipment used for tungsten and barrier metal (TiN) operations. In photolithography, the powder particles were 1-10 μm in size and spherical in shape. In addition, the powders generated from the high-current and medium-current ion implantation processes included arsenic (As), whereas the high-energy process did not. For all samples collected using a personal air sampler during preventive maintenance of process equipment, the mass of total airborne particles was < 1 μg, the detection limit of the microbalance. In addition, the mean mass concentrations of airborne PM10 (particles less than 10 μm in diameter), measured by area sampling with a direct-reading aerosol monitor, were between 0.00 and 0.02 μg/m(3). Although the exposure concentration of airborne particles during preventive maintenance is extremely low, it is necessary to make continuous improvements to the process and work environment, because the influence of chronic low-level exposure cannot be excluded.

  15. Identifying Cortical Lateralization of Speech Processing in Infants Using Near-Infrared Spectroscopy

    PubMed Central

    Bortfeld, Heather; Fava, Eswen; Boas, David A.

    2010-01-01

    We investigate the utility of near-infrared spectroscopy (NIRS) as an alternative technique for studying infant speech processing. NIRS is an optical imaging technology that uses relative changes in total hemoglobin concentration and oxygenation as an indicator of neural activation. Procedurally, NIRS has the advantage over more common methods (e.g., fMRI) in that it can be used to study the neural responses of behaviorally active infants. Older infants (aged 6–9 months) were allowed to sit on their caretakers’ laps during stimulus presentation to determine relative differences in focal activity in the temporal region of the brain during speech processing. Results revealed a dissociation of sensory-specific processing in two cortical regions, the left and right temporal lobes. These findings are consistent with those obtained using other neurophysiological methods and point to the utility of NIRS as a means of establishing neural correlates of language development in older (and more active) infants. PMID:19142766

  16. You can't get there from here: identifying process routes to replication.

    PubMed

    Primavera, Judy

    2004-06-01

    All too often the reports of our community research and action are presented in an ahistorical and decontextualized fashion focused more on the content of what was done than on the process of how the work was done and why. The story of the university-community partnership and the family literacy intervention that was developed illustrates the importance of several key process variables in project development and implementation. More specifically, the role of the social-ecological context, prehistory, personality, self-correction, and unexpected serendipitous events are discussed. If, as community psychologists, we are serious about conducting our work in the most efficient and effective manner possible, if we truly wish to make our work available for replication, and if we seek to develop standards of "best practice" that are meaningful, our communication regarding process must shift from the anecdotal to a position of central importance.

  17. On the robustness of the r-process in neutron-star mergers against variations of nuclear masses

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Wu, M. R.; Martínez-Pinedo, G.; Langanke, K.; Bauswein, A.; Janka, H.-T.; Frank, A.

    2016-07-01

    r-process calculations have been performed for matter ejected dynamically in neutron star mergers (NSM); such calculations are based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamics (SPH) simulation. Our calculations consider an extended nuclear reaction network, including spontaneous, β- and neutron-induced fission and adopting fission yield distributions from the ABLA code. In this contribution we have studied the sensitivity of the r-process abundances to nuclear masses by using different mass models for the calculation of neutron capture cross sections via the statistical model. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion allowing for all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundance pattern (the second and third peaks, the rare-earth peak and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130.

  18. Process development for robust removal of aggregates using cation exchange chromatography in monoclonal antibody purification with implementation of quality by design.

    PubMed

    Xu, Zhihao; Li, Jason; Zhou, Joe X

    2012-01-01

    Aggregate removal is one of the most important aspects in monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related impurities and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen because of its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and the risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on the process performance with respect to aggregate removal. Accordingly, a process design space was established. Aggregate level in load is the critical parameter. Its operating range is set at 0-3% and the acceptable range is set at 0-5%. Equilibration buffer is the key parameter. Its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and acceptable range is set at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating ranges and acceptable ranges are equally set at 250 ± 10 mM acetate, pH 6.0 ± 0.2, 45 ± 10 g/L resin, and 10 ± 20% CV respectively. Finally, the process was scaled up 80 times and the impurities removal profiles were revealed. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, thereby proving that the process is both consistent and robust.
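
    As a small illustration of how a design space like the one above can be encoded operationally, the sketch below checks a batch's parameters against the stated operating ranges (the numbers come from the abstract; the parameter names are invented, and a real QbD assessment would of course be multivariate).

        # Toy operating-range check for the CEX step; ranges from the abstract.
        OPERATING_RANGES = {
            "aggregate_pct_in_load": (0.0, 3.0),     # %
            "equil_acetate_mM": (35.0, 45.0),        # 40 +/- 5 mM
            "equil_pH": (4.9, 5.1),                  # 5.0 +/- 0.1
        }

        def check_batch(params):
            """Return {parameter: True/False} for within-operating-range."""
            return {name: lo <= params[name] <= hi
                    for name, (lo, hi) in OPERATING_RANGES.items()}

        print(check_batch({"aggregate_pct_in_load": 2.1,
                           "equil_acetate_mM": 41.0,
                           "equil_pH": 5.05}))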

  19. A national effort to identify fry processing clones with low acrylamide-forming potential

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Acrylamide is a suspected human carcinogen. Processed potato products, such as chips and fries, contribute to dietary intake of acrylamide. One of the most promising approaches to reducing acrylamide consumption is to develop and commercialize new potato varieties with low acrylamide-forming potenti...

  20. Sociometric Effects in Small Classroom Groups Using Curricula Identified as Process-Oriented.

    ERIC Educational Resources Information Center

    Nickse, Ruth S.; Ripple, Richard E.

    This study was an attempt to document aspects of small group work in classrooms engaged in the process education curricula called "Materials and Activities for Teachers and Children" (MATCH). Data on student-student interaction were related to small group work and gathered by paper-and-pencil sociometric questionnaires and measures of group…

  1. Identifying the Neural Correlates Underlying Social Pain: Implications for Developmental Processes

    ERIC Educational Resources Information Center

    Eisenberger, Naomi I.

    2006-01-01

    Although the need for social connection is critical for early social development as well as for psychological well-being throughout the lifespan, relatively little is known about the neural processes involved in maintaining social connections. The following review summarizes what is known regarding the neural correlates underlying feeling of…

  2. Stress test: identifying crowding stress-tolerant hybrids in processing sweet corn

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improvement in tolerance to intense competition at high plant populations (i.e. crowding stress) is a major genetic driver of corn yield gain the last half-century. Recent research found differences in crowding stress tolerance among a few modern processing sweet corn hybrids; however, a larger asse...

  3. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  4. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project.

    PubMed

    Cutting, Elizabeth M; Overby, Casey L; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R; Beitelshees, Amber L

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease.

  5. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics

    SciTech Connect

    Suh, C.; Biagioni, D.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.

    2011-01-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuIn{sub x}Ga{sub 1-x}Se{sub 2} (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.

  6. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics: Preprint

    SciTech Connect

    Suh, C.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.; Biagioni, D.

    2011-07-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.

  7. Identifying scale-emergent, nonlinear, asynchronous processes of wetland methane exchange

    NASA Astrophysics Data System (ADS)

    Sturtevant, Cove; Ruddell, Benjamin L.; Knox, Sara Helen; Verfaillie, Joseph; Matthes, Jaclyn Hatala; Oikawa, Patricia Y.; Baldocchi, Dennis

    2016-01-01

    Methane (CH4) exchange in wetlands is complex, involving nonlinear asynchronous processes across diverse time scales. These processes and time scales are poorly characterized at the whole-ecosystem level, yet are crucial for accurate representation of CH4 exchange in process models. We used a combination of wavelet analysis and information theory to analyze interactions between whole-ecosystem CH4 flux and biophysical drivers in two restored wetlands of Northern California from hourly to seasonal time scales, explicitly questioning assumptions of linear, synchronous, single-scale analysis. Although seasonal variability in CH4 exchange was dominantly and synchronously controlled by soil temperature, water table fluctuations, and plant activity were important synchronous and asynchronous controls at shorter time scales that propagated to the seasonal scale. Intermittent, subsurface water table decline promoted short-term pulses of methane emission but ultimately decreased seasonal CH4 emission through subsequent inhibition after rewetting. Methane efflux also shared information with evapotranspiration from hourly to multiday scales and the strength and timing of hourly and diel interactions suggested the strong importance of internal gas transport in regulating short-term emission. Traditional linear correlation analysis was generally capable of capturing the major diel and seasonal relationships, but mesoscale, asynchronous interactions and nonlinear, cross-scale effects were unresolved yet important for a deeper understanding of methane flux dynamics. We encourage wider use of these methods to aid interpretation and modeling of long-term continuous measurements of trace gas and energy exchange.
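
    One information-theoretic ingredient of such an analysis is lagged mutual information between a driver and the CH4 flux. The sketch below estimates it with simple 2-D histograms on synthetic series; the paper combines ideas like this with wavelet decomposition across scales, and nothing here uses the actual data.

        # Histogram-based lagged mutual information on synthetic series.
        import numpy as np

        def mutual_info(x, y, bins=16):
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

        rng = np.random.default_rng(3)
        t = np.arange(5000)
        driver = np.sin(2*np.pi*t/24) + 0.3*rng.standard_normal(t.size)
        flux = np.roll(driver, 3) + 0.5*rng.standard_normal(t.size)  # 3-step lag

        mi = [mutual_info(driver[:-lag or None], flux[lag:]) for lag in range(12)]
        print("most informative lag:", int(np.argmax(mi)))   # expected: 3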

  8. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several failure percentages and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
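
    A rough sketch of the surface construction, not the authors' exact R*-value definition: rows index (failure level, realization) pairs, columns are robustness metrics; the metrics are standardized so they can be combined, PCA picks the most informative direction, and the projected scores are reshaped into the surface Ω. The metric values below are random placeholders.

        # Toy robustness-surface construction via PCA (synthetic metric matrix).
        import numpy as np

        rng = np.random.default_rng(1)
        levels, reps, n_metrics = 10, 30, 5           # failure levels x realizations
        X = rng.random((levels * reps, n_metrics))    # placeholder metric values

        Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # dimension metrics comparably
        U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
        scores = Xs @ Vt[0]                           # first principal component

        omega = scores.reshape(levels, reps)          # the robustness surface
        print(omega.shape)                            # one row per failure level
        print("metric weights:", np.round(Vt[0], 2))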

  9. Comparative assessment of genomic DNA extraction processes for Plasmodium: Identifying the appropriate method.

    PubMed

    Mann, Riti; Sharma, Supriya; Mishra, Neelima; Valecha, Neena; Anvikar, Anupkumar R

    2015-12-01

    Plasmodium DNA, in addition to being used for the molecular diagnosis of malaria, finds utility in monitoring patient responses to antimalarial drugs, drug resistance studies, genotyping and sequencing. Over the years, numerous protocols have been proposed for extracting Plasmodium DNA from a variety of sources. Given that DNA isolation is fundamental to successful molecular studies, here we review the most commonly used methods for Plasmodium genomic DNA isolation, emphasizing their pros and cons. A comparison of these existing methods has been made to evaluate their appropriateness for use in different applications and to identify the method suitable for a particular laboratory-based study. Selection of a suitable and accessible DNA extraction method for Plasmodium requires consideration of many factors, the most important being sensitivity, cost-effectiveness and the purity and stability of the isolated DNA. The need of the hour is to focus on the development of a method that performs well on all of these parameters.

  10. Development of a Robust and Cost-Effective Friction Stir Welding Process for Use in Advanced Military Vehicles

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Hariharan, A.; Yen, C.-F.; Cheeseman, B. A.

    2011-02-01

    To respond to the advent of more lethal threats, recently designed aluminum-armor-based military-vehicle systems have resorted to an increasing use of higher strength aluminum alloys (with superior ballistic resistance against armor piercing (AP) threats and with high vehicle light-weighting potential). Unfortunately, these alloys are not very amenable to conventional fusion-based welding technologies, and in order to obtain high-quality welds, solid-state joining technologies such as friction stir welding (FSW) have to be employed. However, since FSW is a relatively new and fairly complex joining technology, its introduction into advanced military vehicle structures is not straightforward and entails a comprehensive multi-step approach. One such (three-step) approach is developed in the present work. Within the first step, experimental and computational techniques are utilized to determine the optimal tool design and the optimal FSW process parameters which result in maximal productivity of the joining process and the highest quality of the weld. Within the second step, techniques are developed for the identification and qualification of the optimal weld-joint designs in different sections of a prototypical military vehicle structure. In the third step, problems associated with the fabrication of a sub-scale military vehicle test structure and the blast survivability of the structure are assessed. The results obtained and the lessons learned are used to judge the potential of the current approach in shortening the development time and in enhancing the reliability and blast survivability of military vehicle structures.

  11. Repurposing the clinical record: can an existing natural language processing system de-identify clinical notes?

    PubMed

    Morrison, Frances P; Li, Li; Lai, Albert M; Hripcsak, George

    2009-01-01

    Electronic clinical documentation can be useful for activities such as public health surveillance, quality improvement, and research, but existing methods of de-identification may not provide sufficient protection of patient data. The general-purpose natural language processor MedLEE retains medical concepts while excluding the remaining text, so, in addition to processing text into structured data, it may be able to provide a secondary benefit of de-identification. Without modifying the system, the authors tested the ability of MedLEE to remove protected health information (PHI) by comparing 100 outpatient clinical notes with the corresponding XML-tagged output. Of 809 instances of PHI, 26 (3.2%) were detected in output as a result of processing and identification errors. However, PHI in the output was highly transformed, much of it appearing as normalized terms for medical concepts, potentially making re-identification more difficult. The MedLEE processor may be a good enhancement to other de-identification systems, both removing PHI and providing coded data from clinical text.

  12. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be reluctant to this idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  13. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A.; Pitman, A.; Decker, M. R.; De Kauwe, M. G.; Abramowitz, G.; Wang, Y.; Kala, J.

    2015-12-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. Previous studies have noted the limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions, but very few studies have systematically evaluated LSMs during rainfall deficits. We investigate the performance of the Community Atmosphere Biosphere Land Exchange (CABLE) LSM in simulating latent heat fluxes in offline mode. CABLE is evaluated against eddy covariance measurements of latent heat flux across 20 flux tower sites at sub-annual to inter-annual time scales, with a focus on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, soil properties, leaf area index and stomatal conductance. We demonstrate the critical role of hydrological processes for capturing observed declines in latent heat. The effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. The default CABLE performs reasonably well at annual scales despite grossly underestimating latent heat during rainfall deficits, highlighting the importance of evaluating models explicitly under water-stressed conditions across multiple vegetation and climate regimes. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining deficiencies point to future research needs.

  14. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  15. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A. M.; Pitman, A. J.; Decker, M.; De Kauwe, M. G.; Abramowitz, G.; Kala, J.; Wang, Y.-P.

    2015-10-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat flux simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual time scales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were shown to be highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  16. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    SciTech Connect

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  17. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGES

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; ...

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  18. Key biological processes driving metastatic spread of pancreatic cancer as identified by multi-omics studies.

    PubMed

    Le Large, T Y S; Bijlsma, M F; Kazemier, G; van Laarhoven, H M W; Giovannetti, E; Jimenez, C R

    2017-03-30

    Pancreatic ductal adenocarcinoma (PDAC) is an extremely aggressive malignancy, characterized by a high metastatic burden already at the time of diagnosis. The metastatic potential of PDAC is one of the main reasons for the poor outcome, next to the lack of significant improvement in effective treatments in the last decade. Key mutated driver genes, such as activating KRAS mutations, are concordantly expressed in primary and metastatic tumors. However, the biology behind the metastatic potential of PDAC is not fully understood. Recently, large-scale omic approaches have revealed new mechanisms by which PDAC cells gain their metastatic potency. In particular, genomic studies have shown that multiple heterogeneous subclones with different metastatic potential reside in the primary tumor. The development of metastases may be correlated with a more mesenchymal transcriptomic subtype. However, for cancer cells to survive in a distant organ, metastatic sites need to be modulated into pre-metastatic niches. Proteomic studies identified the influence of exosomes on the Kupffer cells in the liver, which could function to prepare this tissue for metastatic colonization. Phosphoproteomics adds an extra layer to the established omic techniques by unravelling key functional signalling. Future studies integrating results from these large-scale omic approaches will hopefully improve PDAC prognosis through identification of new therapeutic targets and patient selection tools. In this article, we review the current knowledge on the biology of PDAC metastasis unravelled by large-scale multi-omic approaches.

  19. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…
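
    The chart-building arithmetic is simple enough to sketch. Below is an individuals (X-mR) control chart on made-up yearly home-run rates, not real MLB data: limits are set from a baseline period, the moving range estimates process variation, and points beyond the 3-sigma limits are flagged, which is how a sustained shift such as the Steroids Era would surface.

        # Individuals (X-mR) chart with limits set from a baseline period.
        import numpy as np

        # Hypothetical per-game home-run rates; NOT real MLB data.
        rates = np.array([1.8, 1.9, 1.7, 1.8, 2.0, 1.9,   # baseline seasons
                          2.4, 2.5, 2.6, 2.5, 2.7, 2.6])  # later seasons
        baseline = rates[:6]
        center = baseline.mean()
        sigma_hat = np.abs(np.diff(baseline)).mean() / 1.128  # d2 constant for n = 2
        ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

        for i, x in enumerate(rates):
            flag = "OUT OF CONTROL" if not (lcl <= x <= ucl) else ""
            print(f"season {i:2d}: {x:.2f} {flag}")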

  20. ESL Teachers' Perceptions of the Process for Identifying Adolescent Latino English Language Learners with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Ferlis, Emily C.

    2012-01-01

    This dissertation examines the question "how do ESL teachers perceive the prereferral process for identifying adolescent Latino English language learners with specific learning disabilities?" The study fits within the Latino Critical Race Theory framework and employs an interpretive phenomenological qualitative research approach.…

  1. Progression after AKI: Understanding Maladaptive Repair Processes to Predict and Identify Therapeutic Treatments

    PubMed Central

    Bonventre, Joseph V.; Mehta, Ravindra; Nangaku, Masaomi; Unwin, Robert; Rosner, Mitchell H.; Kellum, John A.; Ronco, Claudio

    2016-01-01

    Recent clinical studies indicate a strong link between AKI and progression of CKD. The increasing prevalence of AKI must compel the nephrology community to consider the long-term ramifications of this syndrome. Considerable gaps in knowledge exist regarding the connection between AKI and CKD. The 13th Acute Dialysis Quality Initiative meeting entitled “Therapeutic Targets of Human Acute Kidney Injury: Harmonizing Human and Experimental Animal Acute Kidney Injury” convened in April of 2014 and assigned a working group to focus on issues related to progression after AKI. This article provides a summary of the key conclusions and recommendations of the group, including an emphasis on terminology related to injury and repair processes for both clinical and preclinical studies, elucidation of pathophysiologic alterations of AKI, identification of potential treatment strategies, identification of patients predisposed to progression, and potential management strategies. PMID:26519085

  2. Identifying influential nodes in a wound healing-related network of biological processes using mean first-passage time

    NASA Astrophysics Data System (ADS)

    Arodz, Tomasz; Bonchev, Danail

    2015-02-01

    In this study we offer an approach to network physiology, which proceeds from transcriptomic data and uses gene ontology analysis to identify the biological processes most enriched at several critical time points of the wound healing process (days 0, 3 and 7). The top-ranking differentially expressed genes for each process were used to build two networks: one with all proteins regulating the transcription of the selected genes, and a second one involving the proteins from the signaling pathways that activate the transcription factors. The information from these networks is used to build a network of the most enriched processes, with undirected links weighted proportionally to the count of shared genes between the pair of processes, and directed links weighted by the count of relationships connecting genes from one process to genes from the other. In analyzing the network thus built, we used an approach based on random walks that accounts for the temporal aspects of the spread of a signal in the network (mean first-passage time, MFPT). The MFPT scores allowed us to identify the top influential, as well as the top essential, biological processes, which vary with the progress of the healing process. Thus, the most essential process for day 0 was found to be the Wnt-receptor signaling pathway, well known for its crucial role in wound healing, while on day 3 this was the regulation of the NF-kB cascade, essential for matrix remodeling in the wound healing process. The MFPT-based scores correctly reflected the pattern of the healing process dynamics: highly concentrated around several processes between day 0 and day 3, and becoming more diffuse by day 7.
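
    For a network given as a weighted directed adjacency matrix, MFPTs to a chosen target can be computed exactly by making the target absorbing and solving a small linear system: with Q the transition probabilities among the non-target nodes, the hitting times satisfy (I - Q) m = 1. The 4-node matrix below is an arbitrary illustration, not the wound-healing process network.

        # Mean first-passage times to a target node on a weighted directed graph.
        import numpy as np

        W = np.array([[0, 2, 1, 0],      # weighted directed adjacency (toy)
                      [1, 0, 2, 1],
                      [0, 1, 0, 3],
                      [1, 0, 1, 0]], dtype=float)
        P = W / W.sum(axis=1, keepdims=True)      # row-stochastic random walk

        target = 3
        others = [i for i in range(len(P)) if i != target]
        Q = P[np.ix_(others, others)]             # walk restricted to non-target nodes
        m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
        for node, hit in zip(others, m):
            print(f"MFPT {node} -> {target}: {hit:.2f} steps")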

  3. Development and optimization of a process for automated recovery of single cells identified by microengraving.

    PubMed

    Choi, Jae Hyeok; Ogunniyi, Adebola O; Du, Mindy; Du, Minna; Kretschmann, Marcel; Eberhardt, Jens; Love, J Christopher

    2010-01-01

    Microfabricated devices are useful tools for manipulating and interrogating large numbers of single cells in a rapid and cost-effective manner, but connecting these systems to the existing platforms used in routine high-throughput screening of libraries of cells remains challenging. Methods to sort individual cells of interest from custom microscale devices to standardized culture dishes in an efficient and automated manner without affecting the viability of the cells are critical. Combining a commercially available instrument for colony picking (CellCelector, AVISO GmbH) and a customized software module, we have established an optimized process for the automated retrieval of individual antibody-producing cells, secreting desirable antibodies, from dense arrays of subnanoliter containers. The selection of cells for retrieval is guided by data obtained from a high-throughput, single-cell screening method called microengraving. Using this system, 100 clones from a mixed population of two cell lines secreting different antibodies (12CA5 and HYB099-01) were sorted with 100% accuracy (50 clones of each) in approximately 2 h, and the cells retained viability.

  4. The June 2014 eruption at Piton de la Fournaise: Robust methods developed for monitoring challenging eruptive processes

    NASA Astrophysics Data System (ADS)

    Villeneuve, N.; Ferrazzini, V.; Di Muro, A.; Peltier, A.; Beauducel, F.; Roult, G. C.; Lecocq, T.; Brenguier, F.; Vlastelic, I.; Gurioli, L.; Guyard, S.; Catry, T.; Froger, J. L.; Coppola, D.; Harris, A. J. L.; Favalli, M.; Aiuppa, A.; Liuzzo, M.; Giudice, G.; Boissier, P.; Brunet, C.; Catherine, P.; Fontaine, F. J.; Henriette, L.; Lauret, F.; Riviere, A.; Kowalski, P.

    2014-12-01

    After almost 3.5 years of quiescence, Piton de la Fournaise (PdF) produced a small summit eruption on 20 June 2014 at 21:35 (GMT). The eruption lasted 20 hours and was preceded by: i) onset of deep eccentric seismicity (15-20 km bsl; 9 km NW of the volcano summit) in March and April 2014; ii) enhanced CO2 soil flux along the NW rift zone; iii) an increase in the number and energy of shallow (<1.5 km asl) VT events. The increase in VT events occurred on 9 June. Their signature, and shallow location, was not characteristic of an eruptive crisis. However, at 20:06 on 20/06 their character changed, 74 minutes before the onset of tremor. Deformations then began at 20:20. Since 2007, PdF has emitted small magma volumes (<3 Mm3) in events preceded by weak and short precursory phases. To respond to this challenging activity style, new monitoring methods were deployed at OVPF. While the JERK and MSNoise methods were developed for the processing of seismic data, borehole tiltmeters and permanent monitoring of summit gas emissions, plus CO2 soil flux, were used to track precursory activity. JERK, based on an analysis of the acceleration slope of broad-band seismometer data, gave advance notice of the new eruption by 50 minutes. MSNoise, based on seismic velocity determination, showed a significant decrease 7 days before the eruption. These signals were coupled with a change in summit fumarole composition. Remote sensing allowed the following syn-eruptive observations: - INSAR confirmed measurements made by the OVPF geodetic network, showing that deformation was localized around the eruptive fissures; - A SPOT5 image acquired at 05:41 on 21/06 allowed definition of the flow field area (194 500 m2); - A MODIS image acquired at 06:35 on 21/06 gave a lava discharge rate of 6.9±2.8 m3 s-1, giving an erupted volume of between 0.3 and 0.4 Mm3. - This rate was used with the DOWNFLOW and FLOWGO models, calibrated with the textural data from Piton's 2010 lava, to run lava flow

  5. Isotopic investigations of dissolved organic N in soils identifies N mineralization as a major sink process

    NASA Astrophysics Data System (ADS)

    Wanek, Wolfgang; Prommer, Judith; Hofhansl, Florian

    2016-04-01

    Dissolved organic nitrogen (DON) is a major component of transfer processes in the global nitrogen (N) cycle, contributing to atmospheric N deposition, terrestrial N losses and aquatic N inputs. In terrestrial ecosystems several sources and sinks contribute to belowground DON pools but are hard to quantify. In soils, DON is released by desorption of soil organic N and by microbial lysis. Major losses from the DON pool occur via sorption, hydrological export and soil N mineralization. Sorption/desorption, lysis and hydrological losses are expected to exhibit no 15N fractionation, thereby allowing different DON sources to be traced. Soil N mineralization of DON has been commonly assumed to have no or only a small isotope effect of between 0-4‰; however, isotope fractionation by N mineralization has rarely been measured and might be larger than anticipated. Depending on the degree of 15N fractionation by soil N mineralization, we would expect DON to become 15N-enriched relative to bulk soil N, and dissolved inorganic N (DIN; ammonium and nitrate) to become 15N-depleted relative to both bulk soil N and DON. Isotopic analyses of soil organic N, DON and DIN might therefore provide insights into the relative contributions of different sources and sink processes. This study therefore aimed at a better understanding of the isotopic signatures of DON and its controls in soils. We investigated the concentration and isotopic composition of bulk soil N, DON and DIN in a wide range of sites, covering arable, grassland and forest ecosystems in Austria across an altitudinal transect. The isotopic compositions of ammonium, nitrate and DON were measured in soil extracts after chemical conversion to N2O by purge-and-trap isotope ratio mass spectrometry. We found that δ15N values of DON ranged between -0.4 and 7.6‰, closely tracking the δ15N values of bulk soils. However, DON was 15N-enriched relative to bulk soil N by 1.5±1.3‰ (1 SD), and inorganic N was 15N

  6. Computational methods using genome-wide association studies to predict radiotherapy complications and to identify correlative molecular processes

    PubMed Central

    Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.

    2017-01-01

    The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints. PMID:28233873

  7. Computational methods using genome-wide association studies to predict radiotherapy complications and to identify correlative molecular processes

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.

    2017-02-01

    The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints.
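
    The pre-conditioning idea described above can be caricatured in a few lines: replace the noisy binary endpoint with a smoothed continuous score from a regularized linear model, then fit a random forest to the SNP matrix against that score and rank SNPs by importance. This is a loose sketch under those assumptions, with random placeholder data, not the authors' exact PRFR algorithm.

        # Loose PRFR-style sketch: precondition the endpoint, then forest-rank SNPs.
        import numpy as np
        from sklearn.linear_model import LogisticRegressionCV
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)
        X = rng.integers(0, 3, size=(368, 500))    # SNP dosages 0/1/2 (synthetic)
        y = rng.integers(0, 2, size=368)           # binary toxicity endpoint (synthetic)

        precond = LogisticRegressionCV(max_iter=2000).fit(X, y)
        y_smooth = precond.predict_proba(X)[:, 1]  # pre-conditioned continuous outcome

        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y_smooth)
        top = np.argsort(rf.feature_importances_)[::-1][:10]
        print("top-ranked SNP columns:", top)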

  8. SU-E-T-452: Identifying Inefficiencies in Radiation Oncology Workflow and Prioritizing Solutions for Process Improvement and Patient Safety

    SciTech Connect

    Bennion, N; Driewer, J; Denniston, K; Zhen, W; Enke, C; Jacobs, K; Poole, M; McMahon, R; Wilson, K; Yager, A

    2015-06-15

    Purpose: Successful radiation therapy requires multi-step processes susceptible to unnecessary delays that can negatively impact clinic workflow, patient satisfaction, and safety. This project applied process improvement tools to assess workflow bottlenecks and identify solutions to barriers for effective implementation. Methods: We utilized the DMAIC (define, measure, analyze, improve, control) methodology, limiting our scope to the treatment planning process. From May through December of 2014, times and dates of each step from simulation to treatment were recorded for 507 cases. A value-stream map created from this dataset directed our selection of outcome measures (Y metrics). Critical goals (X metrics) that would accomplish the Y metrics were identified. Barriers to actions were binned into control-impact matrices in order to stratify them into four groups: in/out of control and high/low impact. Solutions to each barrier were then categorized into benefit-effort matrices to identify those of high benefit and low effort. Results: For 507 cases, the mean time from simulation to treatment was 235 total hours. The mean process and wait times were 60 and 132 hours, respectively. The Y metric was to increase the ratio of all non-emergent plans completed the business day prior to treatment from 47% to 75%. Project X metrics included increasing the number of IMRT QAs completed at least 24 hours prior to treatment from 19% to 80% and the number of non-IMRT plans approved at least 24 hours prior to treatment from 33% to 80%. Intervals from simulation to target contour and from initial plan completion to plan approval were identified as periods that could benefit from intervention. Barriers to actions were binned into control-impact matrices and solutions by benefit-effort matrices. Conclusion: The DMAIC method can be successfully applied in radiation therapy clinics to identify inefficiencies and prioritize solutions for the highest impact.
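
    As an illustration of the Y metric above, the fragment below computes the share of plans completed by the business day before treatment. The column names and dates are hypothetical stand-ins, not the study's actual data schema.

        # Toy computation of the project's Y metric: the fraction of non-emergent
        # plans completed at least one business day before treatment start.
        import pandas as pd

        plans = pd.DataFrame({
            "plan_done": pd.to_datetime(["2014-06-02", "2014-06-05", "2014-06-09"]),
            "treatment": pd.to_datetime(["2014-06-04", "2014-06-05", "2014-06-10"]),
        })
        # BusinessDay arithmetic: plan must be done by the prior business day.
        prior_bday = plans["treatment"] - pd.offsets.BDay(1)
        y_metric = (plans["plan_done"].dt.normalize() <= prior_bday.dt.normalize()).mean()
        print(f"share of plans done by prior business day: {y_metric:.0%}")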

  9. An evaluation of a natural language processing tool for identifying and encoding allergy information in emergency department clinical notes.

    PubMed

    Goss, Foster R; Plasek, Joseph M; Lau, Jason J; Seger, Diane L; Chang, Frank Y; Zhou, Li

    2014-01-01

    Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no known allergies, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility of using NLP to extract and encode allergy information from clinical notes.
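
    The scores reported above are F-measures, i.e., harmonic means of precision and recall against the annotated gold standard. A minimal sketch with illustrative counts (not the study's actual counts):

        # F-measure as used for the allergy module evaluation (toy counts only).
        def f_measure(tp: int, fp: int, fn: int) -> float:
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        print(round(f_measure(tp=876, fp=120, fn=127), 3))  # ~0.876, i.e., 87.6%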

  10. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers.

    PubMed

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m³/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans.
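
    A sketch of the segment-scale trapping-efficiency idea, assuming it reduces to a time-integrated sediment budget; the paper's actual formulation may differ.

        # Assumed form of a trapping-efficiency check per analysis segment:
        # (inflow - outflow) / inflow, integrated over time. A strongly positive
        # value would flag an aggrading (vertically active) segment.
        def trapping_efficiency(q_in, q_out):
            """q_in, q_out: time series of sediment influx/outflux (same units)."""
            total_in, total_out = sum(q_in), sum(q_out)
            return (total_in - total_out) / total_in

        print(trapping_efficiency([120.0, 90.0, 200.0], [100.0, 85.0, 160.0]))  # ~0.16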

  11. Reasoning about anomalies: a study of the analytical process of detecting and identifying anomalous behavior in maritime traffic data

    NASA Astrophysics Data System (ADS)

    Riveiro, Maria; Falkman, Göran; Ziemke, Tom; Kronhamn, Thomas

    2009-05-01

    The goal of visual analytical tools is to support the analytical reasoning process, maximizing human perceptual, understanding, and reasoning capabilities in complex and dynamic situations. Visual analytics software must be built upon an understanding of the reasoning process, since it must provide appropriate interactions that allow a true discourse with the information. In order to deepen our understanding of the human analytical process and guide developers in the creation of more efficient anomaly detection systems, this paper investigates the human analytical process of detecting and identifying anomalous behavior in maritime traffic data. The main focus of this work is to capture the entire analysis process that an analyst goes through, from the raw data to the detection and identification of anomalous behavior. Three different sources are used in this study: a literature survey of the science of analytical reasoning, requirements specified by experts from organizations with an interest in port security, and user field studies conducted in different marine surveillance control centers. Furthermore, this study elaborates on how to support the human analytical process using data mining, visualization and interaction methods. The contribution of this paper is twofold: (1) within visual analytics, contribute to the science of analytical reasoning with practical understanding of users' tasks in order to develop a taxonomy of interactions that support the analytical reasoning process and (2) within anomaly detection, facilitate the design of future anomaly detector systems when fully automatic approaches are not viable and human participation is needed.

  12. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness from the "metabolite" framework by using the novel concept of the "flux-sum," which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of metabolites essential to cell survival, and then discovered "acclimator" metabolites that can control cell growth. Furthermore, this concept of "metabolite essentiality" should be useful in developing new metabolic engineering strategies for improved production of various bioproducts, and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on dynamic properties of cellular metabolism.
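
    A minimal sketch of the flux-sum, assuming the common form Phi_i = 0.5 * sum_j |S_ij * v_j| for metabolite i at pseudo-steady state (inflow equals outflow, so half the absolute total):

        # Flux-sum per metabolite from a stoichiometric matrix S and flux vector v.
        import numpy as np

        S = np.array([[ 1, -1,  0],     # toy stoichiometry (metabolites x reactions)
                      [ 0,  1, -1]])
        v = np.array([2.0, 2.0, 2.0])   # steady-state flux vector

        flux_sum = 0.5 * np.abs(S * v).sum(axis=1)
        print(flux_sum)                 # one flux-sum value per metabolite -> [2. 2.]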

  13. Formosa Plastics Corporation: Plant-Wide Assessment of Texas Plant Identifies Opportunities for Improving Process Efficiency and Reducing Energy Costs

    SciTech Connect

    2005-01-01

    At Formosa Plastics Corporation's plant in Point Comfort, Texas, a plant-wide assessment team analyzed process energy requirements, reviewed new technologies for applicability, and found ways to improve the plant's energy efficiency. The assessment team identified the energy requirements of each process and compared actual energy consumption with theoretical process requirements. The team estimated that total annual energy savings would be about 115,000 MBtu for natural gas and nearly 14 million kWh for electricity if the plant makes several improvements, which include upgrading the gas compressor impeller, improving the vent blower system, and recovering steam condensate for reuse. Total annual cost savings could be $1.5 million. The U.S. Department of Energy's Industrial Technologies Program cosponsored this assessment.

  14. Strategy for identifying dendritic cell-processed CD4+ T cell epitopes from the HIV gag p24 protein.

    PubMed

    Bozzacco, Leonia; Yu, Haiqiang; Dengjel, Jörn; Trumpfheller, Christine; Zebroski, Henry A; Zhang, Nawei; Küttner, Victoria; Ueberheide, Beatrix M; Deng, Haiteng; Chait, Brian T; Steinman, Ralph M; Mojsov, Svetlana; Fenyö, David

    2012-01-01

    Mass Spectrometry (MS) is becoming a preferred method to identify class I and class II peptides presented on major histocompatibility complexes (MHC) on antigen presenting cells (APC). We describe a combined computational and MS approach to identify exogenous MHC II peptides presented on mouse spleen dendritic cells (DCs). This approach enables rapid, effective screening of a large number of possible peptides by a computer-assisted strategy that utilizes the extraordinary human ability for pattern recognition. To test the efficacy of the approach, a mixture of epitope peptide mimics (mimetopes) from the HIV gag p24 sequence was added exogenously to Fms-like tyrosine kinase 3 ligand (Flt3L)-mobilized splenic DCs. We identified the exogenously added peptide, VDRFYKTLRAEQASQ, and a second peptide, DRFYKLTRAEQASQ, derived from the original exogenously added 15-mer peptide. Furthermore, we demonstrated that our strategy works efficiently with HIV gag p24 protein when delivered, as vaccine protein, to Flt3L expanded mouse splenic DCs in vitro through the DEC-205 receptor. We found that the same MHC II-bound HIV gag p24 peptides, VDRFYKTLRAEQASQ and DRFYKLTRAEQASQ, were naturally processed from anti-DEC-205 HIV gag p24 protein and presented on DCs. The two identified VDRFYKTLRAEQASQ and DRFYKLTRAEQASQ MHC II-bound HIV gag p24 peptides elicited CD4(+) T-cell mediated responses in vitro. Their presentation by DCs to antigen-specific T cells was inhibited by chloroquine (CQ), indicating that optimal presentation of these exogenously added peptides required uptake and vesicular trafficking in mature DCs. These results support the application of our strategy to identify and characterize peptide epitopes derived from vaccine proteins processed by DCs, and thus the strategy has the potential to greatly accelerate DC-based vaccine development.

  15. Heterochronic process in hominid evolution. The dental development in 'robust' australopithecines

    NASA Astrophysics Data System (ADS)

    Ramirez Rozzi, Fernando V.

    2000-10-01

    Heterochrony is defined as an evolutionary modification in time and in the relative rate of development [6]. Growth (size), development (shape), and age (adult) are the three fundamental factors of ontogeny and have to be known to carry out a study on heterochronies. These three factors have been analysed in 24 Plio-Pleistocene hominid molars from Omo, Ethiopia, attributed to A. afarensis and robust australopithecines (A. aethiopicus and A. aff. aethiopicus). Molars were grouped into three chronological periods. The analysis suggests that morphological modifications through time are due to heterochronic processes: a neoteny (A. afarensis - robust australopithecine clade) and a time hypermorphosis (A. aethiopicus - A. aff. aethiopicus).

  16. Simple process-led algorithms for simulating habitats (SPLASH v.1.0): robust indices of radiation, evapotranspiration and plant-available moisture

    NASA Astrophysics Data System (ADS)

    Davis, Tyler W.; Prentice, I. Colin; Stocker, Benjamin D.; Thomas, Rebecca T.; Whitley, Rhys J.; Wang, Han; Evans, Bradley J.; Gallego-Sala, Angela V.; Sykes, Martin T.; Cramer, Wolfgang

    2017-02-01

    Bioclimatic indices for use in studies of ecosystem function, species distribution, and vegetation dynamics under changing climate scenarios depend on estimates of surface fluxes and other quantities, such as radiation, evapotranspiration and soil moisture, for which direct observations are sparse. These quantities can be derived indirectly from meteorological variables, such as near-surface air temperature, precipitation and cloudiness. Here we present a consolidated set of simple process-led algorithms for simulating habitats (SPLASH) allowing robust approximations of key quantities at ecologically relevant timescales. We specify equations, derivations, simplifications, and assumptions for the estimation of daily and monthly quantities of top-of-the-atmosphere solar radiation, net surface radiation, photosynthetic photon flux density, evapotranspiration (potential, equilibrium, and actual), condensation, soil moisture, and runoff, based on analysis of their relationship to fundamental climatic drivers. The climatic drivers include a minimum of three meteorological inputs: precipitation, air temperature, and fraction of bright sunshine hours. Indices, such as the moisture index, the climatic water deficit, and the Priestley-Taylor coefficient, are also defined. The SPLASH code is transcribed in C++, FORTRAN, Python, and R. A total of 1 year of results is presented at the local and global scales to exemplify the spatiotemporal patterns of daily and monthly model outputs along with comparisons to other model results.
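
    As one concrete example of the indices named above, a moisture index can be taken as the ratio of precipitation to potential evapotranspiration. This is an assumed simplification for illustration; SPLASH itself derives PET internally from radiation.

        # Assumed moisture-index form: annual precipitation over annual PET.
        def moisture_index(precip_mm: float, pet_mm: float) -> float:
            return precip_mm / pet_mm

        print(round(moisture_index(precip_mm=800.0, pet_mm=1100.0), 2))  # 0.73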

  17. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.

  18. Robust verification analysis

    SciTech Connect

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-15

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
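
    A minimal sketch of the idea, assuming the usual error ansatz E(h) = A * h^p: fit it repeatedly under expert bounds on the convergence rate p and report the median rate rather than a single least-squares fit. This is an illustration, not the authors' full optimization framework.

        # Fit E(h) = A * h**p on leave-one-out subsets with bounded p, then take
        # the median convergence rate (robust to one anomalous data point).
        import numpy as np
        from scipy.optimize import curve_fit

        h = np.array([0.2, 0.1, 0.05, 0.025])          # mesh sizes
        E = np.array([4.1e-2, 1.1e-2, 2.9e-3, 8.0e-4]) # observed errors

        def model(h, A, p):
            return A * h ** p

        rates = []
        for drop in range(len(h)):
            keep = np.arange(len(h)) != drop
            (A, p), _ = curve_fit(model, h[keep], E[keep], p0=(1.0, 2.0),
                                  bounds=([0.0, 0.5], [np.inf, 4.0]))  # expert bounds on p
            rates.append(p)
        print("median convergence rate:", np.median(rates))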

  19. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
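
    The exact bounds in the paper come from mathematical programming; as a simple illustration of flow-aware robustness, the fragment below measures the share of nodal interaction (edge flow) that survives a given arc deletion in one epoch.

        # Flow-weighted robustness proxy under arc deletion (illustration only).
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 5.0), ("b", "c", 2.0),
                                   ("c", "d", 4.0), ("a", "d", 1.0)])  # weight = flow
        total_flow = G.size(weight="weight")

        G.remove_edge("b", "c")                        # the disruption (arc deletion)
        surviving = G.size(weight="weight")
        print(f"flow-weighted robustness: {surviving / total_flow:.2f}")  # 0.83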

  20. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering about 20 residential product plans in total. As of January 2012, there are 115 energy providers in Texas offering over 300 residential product plans for customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meets their needs. To gather customer data, a survey tool has been developed for customers to complete the pairwise comparison process. Results are compared for the Incomplete AHP and AHP methods to determine if the Incomplete AHP method is just as accurate as, but more efficient than, the traditional AHP method.
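
    The core AHP step is extracting priority weights from a pairwise comparison matrix via its principal eigenvector, then checking Saaty's consistency ratio. A minimal sketch with an illustrative 3-criteria matrix (not the praxis's survey data):

        # AHP priorities from a pairwise comparison matrix, plus consistency ratio.
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],      # e.g., price vs. rate type vs. service
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                        # priority weights, sum to 1

        ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
        print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))  # RI(n=3) = 0.58; CR < 0.1 is acceptable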

  1. Robust indexing for automatic data collection

    SciTech Connect

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  2. Stable Isotope Composition of Molecular Oxygen in Soil Gas and Groundwater: A Potentially Robust Tracer for Diffusion and Oxygen Consumption Processes

    NASA Astrophysics Data System (ADS)

    Aggarwal, Pradeep K.; Dillon, M. A.

    1998-02-01

    We have measured the concentration and isotopic composition of molecular oxygen in soil gas and groundwater. At a site near Lincoln, Nebraska, USA, soil gas oxygen concentrations ranged from 13.8 to 17.6% at depths of 3-4 m and the δ18O values ranged mostly from 24.0 to 27.2‰ (SMOW). The concentration of dissolved oxygen in a perched aquifer in the Texas Panhandle (depth to water ~76 m) was about 5 mg/L and the δ18O values were 21.2-22.9‰. The δ18O values of soil gas oxygen in our study are higher, and those of dissolved oxygen lower, than the δ18O of atmospheric oxygen (23.5‰). A model for the oxygen concentration and isotopic composition in soil gas was developed using molecular diffusion theory. The higher δ18O values in soil gas at the Nebraska site can be explained by the effects of diffusion and soil respiration (plant root and bacterial) on the isotopic composition of molecular oxygen. The lower δ18O of dissolved oxygen at the Texas site indicates that oxygen consumption below the root zone in the relatively thick unsaturated zone here may have occurred with a different fractionation factor (either due to inorganic consumption or due to low respiration rates) than that observed for the dominant pathways of plant root and bacterial respiration. It is concluded that the use of the concentration and isotopic composition of soil gas and dissolved oxygen should provide a robust tool for studying subsurface gaseous diffusion and oxygen consumption processes.
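
    All values above use delta notation: the per-mil deviation of a sample's 18O/16O ratio from the standard. A minimal sketch; the standard ratio shown is illustrative.

        # Delta-18-O in per mil relative to a reference standard.
        def delta18o(r_sample: float, r_standard: float = 2.0052e-3) -> float:
            """r values are 18O/16O ratios; the default standard value is illustrative."""
            return (r_sample / r_standard - 1.0) * 1000.0

        print(round(delta18o(2.0523e-3), 1))  # ~23.5 per mil, like atmospheric O2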

  3. Information-processing alternatives to holistic perception: identifying the mechanisms of secondary-level holism within a categorization paradigm.

    PubMed

    Fifić, Mario; Townsend, James T

    2010-09-01

    Failure to selectively attend to a facial feature, in the part-to-whole paradigm, has been taken as evidence of holistic perception in a large body of face perception literature. In this article, we demonstrate that although failure of selective attention is a necessary property of holistic perception, its presence alone is not sufficient to conclude holistic processing has occurred. One must also consider the cognitive properties that are a natural part of information-processing systems, namely, mental architecture (serial, parallel), a stopping rule (self-terminating, exhaustive), and process dependency. We demonstrate that an analytic model (nonholistic) based on a parallel mental architecture and a self-terminating stopping rule can predict failure of selective attention. The new insights in our approach are based on the systems factorial technology, which provides a rigorous means of identifying the holistic-analytic distinction. Our main goal in the study was to compare potential changes in architecture when 2 second-order relational facial features are manipulated across different face contexts. Supported by simulation data, we suggest that the critical concept for modeling holistic perception is the interactive dependency between features. We argue that without conducting tests for architecture, stopping rule, and dependency, apparent holism could be confounded with analytic perception. This research adds to the list of converging operations for distinguishing between analytic forms and holistic forms of face perception.

  4. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems

    PubMed Central

    Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary

    2011-01-01

    Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201

  5. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin; Schirmer, Mario; Hoehn, Eduard; Vogt, Tobias

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  6. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  7. Proteomic analyses identify a diverse array of nuclear processes affected by small ubiquitin-like modifier conjugation in Arabidopsis.

    PubMed

    Miller, Marcus J; Barrett-Wilt, Gregory A; Hua, Zhihua; Vierstra, Richard D

    2010-09-21

    The covalent attachment of SUMO (small ubiquitin-like modifier) to other intracellular proteins affects a broad range of nuclear processes in yeast and animals, including chromatin maintenance, transcription, and transport across the nuclear envelope, as well as protects proteins from ubiquitin addition. Substantial increases in SUMOylated proteins upon various stresses have also implicated this modification in the general stress response. To help understand the role(s) of SUMOylation in plants, we developed a stringent method to isolate SUMO-protein conjugates from Arabidopsis thaliana that exploits a tagged SUMO1 variant that faithfully replaces the wild-type protein. Following purification under denaturing conditions, SUMOylated proteins were identified by tandem mass spectrometry from both nonstressed plants and those exposed to heat and oxidative stress. The list of targets is enriched for factors that direct SUMOylation and for nuclear proteins involved in chromatin remodeling/repair, transcription, RNA metabolism, and protein trafficking. Targets of particular interest include histone H2B, components in the LEUNIG/TOPLESS corepressor complexes, and proteins that control histone acetylation and DNA methylation, which affect genome-wide transcription. SUMO attachment site(s) were identified in a subset of targets, including SUMO1 itself to confirm the assembly of poly-SUMO chains. SUMO1 also becomes conjugated with ubiquitin during heat stress, thus connecting these two posttranslational modifications in plants. Taken together, we propose that SUMOylation represents a rapid and global mechanism for reversibly manipulating plant chromosomal functions, especially during environmental stress.

  8. Identifying and prioritizing the preference criteria using analytical hierarchical process for a student-lecturer allocation problem of internship programme

    NASA Astrophysics Data System (ADS)

    Faudzi, Syakinah; Abdul-Rahman, Syariza; Rahman, Rosshairy Abd; Hew, Jafri Hj. Zulkepli

    2016-10-01

    This paper discusses identifying and prioritizing students' preference criteria towards supervisors using the Analytical Hierarchical Process (AHP) for the student-lecturer allocation problem of an internship programme. Typically, a large number of students undertake internships every semester, and many preference criteria may be involved when assigning students to lecturers for supervision. Thus, identifying and prioritizing the preference criteria for assigning students to lecturers is critically needed, especially when many preferences are involved. The AHP technique is used to prioritize seven criteria: capacity, specialization, academic position, availability, professional support, relationship, and gender. Students' preference alternatives are classified based on the lecturer's academic position: lecturer, senior lecturer, associate professor, and professor. Criteria are ranked to find the best preference criteria and the supervisor alternatives that students prefer. The problem is solved using Expert Choice 11 software. A sample of 30 respondents from semester 6 and above was randomly selected to participate in the study. Using a questionnaire as the medium for collecting student data, a consistency index was produced to validate the proposed study. Findings showed that the most important preference criterion is professional support, followed by specialization, availability, relationship, gender, academic position, and capacity. The study found that students would like a supportive supervisor, because a lack of supervision can lead students to gain lower grades and less knowledge from the internship.

  9. Spectral identifiers from roasting process of Arabica and Robusta green beans using Laser-Induced Breakdown Spectroscopy (LIBS)

    NASA Astrophysics Data System (ADS)

    Wirani, Ayu Puspa; Nasution, Aulia; Suyanto, Hery

    2016-11-01

    Coffee (Coffea spp.) is one of the most widely consumed beverages in the world. Around 70% of world coffee consumption comes from Arabica, 26% from Robusta, and the remaining 4% from other varieties. The characteristics of coffee beverages are related to the chemical compositions of the roasted beans. Usually, coffee quality is tasted subjectively by an experienced coffee tester. An objective quantitative technique to analyze the chemical contents of coffee beans using LIBS is reported in this paper. Optimum experimental conditions were a laser energy of 120 mJ and a delay time of 1 μs. Elements contained in coffee beans are Ca, W, Sr, Mg, Na, H, K, O, Rb, and Be. Calcium (Ca) is the main element in the coffee beans. The roasting process caused the emission intensity of Ca to decrease by 42.45%. In addition, discriminant analysis was used to distinguish the Arabica and Robusta variants, in both green and roasted coffee beans. The observed identifier elements are Ca, W, Sr, and Mg. The overall chemical composition of roasted coffee beans is affected by many factors, such as the composition of the soil, the location, the weather around the plantation, and the post-harvest processing of the green coffee beans (drying, storage, fermentation, and roasting methods used).
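
    A sketch of the discriminant-analysis step, with synthetic intensities for the identifier elements (Ca, W, Sr, Mg) standing in for real LIBS emission lines:

        # Classify bean spectra as Arabica or Robusta from identifier-line
        # intensities (values are synthetic, for illustration only).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X = np.array([[9.1, 1.2, 0.8, 3.0], [8.9, 1.1, 0.9, 3.1],   # Arabica-like
                      [6.0, 2.0, 1.5, 2.2], [6.2, 2.1, 1.4, 2.0]])  # Robusta-like
        y = np.array(["arabica", "arabica", "robusta", "robusta"])

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print(lda.predict([[8.8, 1.15, 0.85, 3.05]]))  # -> ['arabica']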

  10. Identifying Armed Respondents to Domestic Violence Restraining Orders and Recovering Their Firearms: Process Evaluation of an Initiative in California

    PubMed Central

    Frattaroli, Shannon; Claire, Barbara E.; Vittes, Katherine A.; Webster, Daniel W.

    2014-01-01

    Objectives. We evaluated a law enforcement initiative to screen respondents to domestic violence restraining orders for firearm ownership or possession and recover their firearms. Methods. The initiative was implemented in San Mateo and Butte counties in California from 2007 through 2010. We used descriptive methods to evaluate the screening process and recovery effort in each county, relying on records for individual cases. Results. Screening relied on an archive of firearm transactions, court records, and petitioner interviews; no single source was adequate. Screening linked 525 respondents (17.7%) in San Mateo County to firearms; 405 firearms were recovered from 119 (22.7%) of them. In Butte County, 88 (31.1%) respondents were linked to firearms; 260 firearms were recovered from 45 (51.1%) of them. Nonrecovery occurred most often when orders were never served or respondents denied having firearms. There were no reports of serious violence or injury. Conclusions. Recovering firearms from persons subject to domestic violence restraining orders is possible. We have identified design and implementation changes that may improve the screening process and the yield from recovery efforts. Larger implementation trials are needed. PMID:24328660

  11. Acetylome study in mouse adipocytes identifies targets of SIRT1 deacetylation in chromatin organization and RNA processing.

    PubMed

    Kim, Sun-Yee; Sim, Choon Kiat; Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2016-05-15

    SIRT1 is a key protein deacetylase that regulates cellular metabolism through lysine deacetylation on both histones and non-histone proteins. Lysine acetylation is a widespread post-translational modification found on many regulatory proteins and it plays an essential role in cell signaling, transcription and metabolism. In mice, SIRT1 has known protective functions during a high-fat diet, but the acetylome regulated by SIRT1 in adipocytes is not completely understood. Here we conducted acetylome analyses in murine adipocytes treated with small-molecule modulators that inhibit or activate the deacetylase activity of SIRT1. We identified a total of 302 acetylated peptides from 78 proteins in this study. From the list of potential SIRT1 targets, we selected seven candidates and further verified that six of them can be deacetylated by SIRT1 in vitro. Among them, half of the SIRT1 targets are involved in regulating chromatin structure and the other half in RNA processing. Our results provide a resource for further SIRT1 target validation in fat cells and suggest a potential role of SIRT1 in the regulation of chromatin structure and RNA processing, which may possibly extend to other cell types as well.

  12. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic
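
    The topoindex defined above is a one-line computation per pixel, shown here with toy arrays for contributing area and local slope:

        # Topoindex = ln(contributing area / local slope), per pixel.
        import numpy as np

        area = np.array([[1200.0, 800.0], [150.0, 90.0]])   # contributing area (m^2)
        slope = np.array([[0.02, 0.05], [0.30, 0.45]])      # local slope (m/m)
        topoindex = np.log(area / slope)
        print(topoindex.round(2))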

  13. Mobile Phone Apps to Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps

    PubMed Central

    Richtering, Sarah S; Chalmers, John; Thiagalingam, Aravinda; Chow, Clara K; Redfern, Julie

    2016-01-01

    Background There are a growing number of mobile phone apps available to support people in taking their medications and to improve medication adherence. However, little is known about how these apps differ in terms of features, quality, and effectiveness. Objective We aimed to systematically review the medication reminder apps available in the Australian iTunes store and Google Play to assess their features and their quality in order to identify high-quality apps. Methods This review was conducted in a similar manner to a systematic review by using a stepwise approach that included (1) a search strategy; (2) eligibility assessment; (3) app selection process through an initial screening of all retrieved apps and full app review of the included apps; (4) data extraction using a predefined set of features considered important or desirable in medication reminder apps; (5) analysis by classifying the apps as basic and advanced medication reminder apps and scoring and ranking them; and (6) a quality assessment by using the Mobile App Rating Scale (MARS), a reliable tool to assess mobile health apps. Results We identified 272 medication reminder apps, of which 152 were found only in Google Play, 87 only in iTunes, and 33 in both app stores. Apps found in Google Play had more customer reviews, higher star ratings, and lower cost compared with apps in iTunes. Only 109 apps were available for free and 124 were recently updated in 2015 or 2016. Overall, the median number of features per app was 3.0 (interquartile range 4.0) and only 18 apps had ≥9 of the 17 desirable features. The most common features were flexible scheduling that was present in 56.3% (153/272) of the included apps, medication tracking history in 54.8% (149/272), snooze option in 34.9% (95/272), and visual aids in 32.4% (88/272). We classified 54.8% (149/272) of the included apps as advanced medication reminder apps and 45.2% (123/272) as basic medication reminder apps. The advanced apps had a higher number

  14. Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO2 and Climate Change --- The MINK Project

    SciTech Connect

    Easterling, W.E. III; McKenney, M.S.; Rosenberg, N.J.; Lemon, K.M.

    1991-08-01

    The second report of a series, Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO2 and Climate Change -- The MINK Project, is composed of two parts. This Report (IIB) deals with agriculture at the level of farms and Major Land Resource Areas (MLRAs). The Erosion Productivity Impact Calculator (EPIC), a crop growth simulation model developed by scientists at the US Department of Agriculture, is used to study the impacts of the analog climate on yields of main crops in both the 1984/87 and the 2030 baselines. The results of this work with EPIC are the basis for the analysis of the climate change impacts on agriculture at the region-wide level undertaken in this report. Report IIA treats agriculture in MINK in terms of state and region-wide production and resource use for the main crops and animals in the baseline periods of 1984/87 and 2030. The effects of the analog climate on the industry at this level of aggregation are considered in both baseline periods. 41 refs., 40 figs., 46 tabs.

  15. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    ERIC Educational Resources Information Center

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  16. Robust Detection, Discrimination, and Remediation of UXO: Statistical Signal Processing Approaches to Address Uncertainties Encountered in Field Test Scenarios SERDP Project MR-1663

    DTIC Science & Technology

    2012-01-03

    2001. [6] S. L. Tantum and L. M. Collins. A comparison of algorithms for subsurface target detection and identification using time domain ... excellent classification performance can be achieved. Here, we aim to develop techniques to improve target characterization and reduce classifier ... Robust Target Classification with Limited Training Data

  17. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed; the presentation is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research are suggested which combine different approaches currently known.

  18. A Systematic Approach of Employing Quality by Design Principles: Risk Assessment and Design of Experiments to Demonstrate Process Understanding and Identify the Critical Process Parameters for Coating of the Ethylcellulose Pseudolatex Dispersion Using Non-Conventional Fluid Bed Process.

    PubMed

    Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W

    2016-07-14

    The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fish bone diagram and failure mode and effect analysis (FMEA) risk assessment methods. The high risk attributes identified by the FMEA analysis were further explored using resolution V fractional factorial design. To gain an understanding of the processing parameters, a resolution V fractional factorial study was conducted. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was conducted; the purpose of this IV study was to identify the critical process parameters (CPP) that impact the critical quality attributes and understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically develop models of the coating process parameters like process efficiency and the extent of curing during the coating process.
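
    For intuition on the resolution V design used above: with five two-level factors, a 2^(5-1) half fraction generated by E = ABCD needs 16 runs instead of 32 while keeping main effects and two-factor interactions unconfounded with each other. A generic sketch, not this study's factor set:

        # Build a 2^(5-1) resolution V fractional factorial: full 2^4 base design
        # for factors A-D, plus a fifth column from the generator E = ABCD.
        import itertools
        import numpy as np

        base = np.array(list(itertools.product([-1, 1], repeat=4)))  # factors A-D
        E = base.prod(axis=1, keepdims=True)                         # generator E = ABCD
        design = np.hstack([base, E])
        print(design.shape)  # (16, 5): 16 runs instead of 32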

  19. Association Study with 77 SNPs Confirms the Robust Role for the rs10830963/G of MTNR1B Variant and Identifies Two Novel Associations in Gestational Diabetes Mellitus Development

    PubMed Central

    Rosta, Klara; Al-Aissa, Zahra; Hadarits, Orsolya; Harreiter, Jürgen; Nádasdi, Ákos; Kelemen, Fanni; Bancher-Todesca, Dagmar; Komlósi, Zsolt; Németh, László; Rigó, János; Sziller, István; Somogyi, Anikó; Kautzky-Willer, Alexandra; Firneisz, Gábor

    2017-01-01

    Context Genetic variation in human maternal DNA contributes to the susceptibility for development of gestational diabetes mellitus (GDM). Objective We assessed 77 maternal single nucleotide gene polymorphisms (SNPs) for associations with GDM or plasma glucose levels at OGTT in pregnancy. Methods 960 pregnant women (after dropouts 820: case/control: m99’WHO: 303/517, IADPSG: 287/533) were enrolled in two countries into this case-control study. After genomic DNA isolation the 820 samples were collected in a GDM biobank and assessed using KASP (LGC Genomics) genotyping assay. Logistic regression risk models were used to calculate ORs according to IADPSG/m’99WHO criteria based on standard OGTT values. Results The most important risk alleles associated with GDM were rs10830963/G of MTNR1B (OR = 1.84/1.64 [IADPSG/m’99WHO], p = 0.0007/0.006), rs7754840/C (OR = 1.51/NS, p = 0.016) of CDKAL1 and rs1799884/T (OR = 1.4/1.56, p = 0.04/0.006) of GCK. The rs13266634/T (SLC30A8, OR = 0.74/0.71, p = 0.05/0.02) and rs7578326/G (LOC646736/IRS1, OR = 0.62/0.60, p = 0.001/0.006) variants were associated with lower risk to develop GDM. Carrying a minor allele of rs10830963 (MTNR1B); rs7903146 (TCF7L2); rs1799884 (GCK) SNPs were associated with increased plasma glucose levels at routine OGTT. Conclusions We confirmed the robust association of MTNR1B rs10830963/G variant with GDM binary and glycemic traits in this Caucasian case-control study. As novel associations we report the minor, G allele of the rs7578326 SNP in the LOC646736/IRS1 region as a significant and the rs13266634/T SNP (SLC30A8) as a suggestive protective variant against GDM development. Genetic susceptibility appears to be more preponderant in individuals who meet both the modified 99’WHO and the IADPSG GDM diagnostic criteria. PMID:28072873
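
    The odds ratios above are the usual logistic-regression quantity OR = exp(beta) per copy of the risk allele. A minimal sketch on synthetic genotypes, not the study's data:

        # Estimate a per-allele odds ratio by logistic regression.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        g = rng.integers(0, 3, 820)                    # risk-allele dosage (0/1/2)
        p = 1 / (1 + np.exp(-(-1.0 + 0.61 * g)))       # true beta 0.61 -> OR ~ 1.84
        gdm = (rng.random(820) < p).astype(float)      # simulated GDM status

        fit = sm.Logit(gdm, sm.add_constant(g)).fit(disp=0)
        print("OR per risk allele:", round(float(np.exp(fit.params[1])), 2))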

  20. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes, such as clean streams in a river network becoming contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting in a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness accounts only for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building up the IN. This approach is motivated by concrete applied problems, since, for example, if we study the dynamics of contamination in river systems, it is necessary to know both the connectivity of the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
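
    A minimal sketch of the dual perspective on a toy chain network: after an attack removes nodes, connectivity is measured on both the Active Network (unaffected nodes) and the Idle Network (affected nodes).

        # Dual-perspective connectivity after a node-removal attack.
        import networkx as nx

        G = nx.path_graph(10)                  # toy river-like chain of 10 nodes
        attacked = {3, 4}                      # e.g., contaminated reaches

        AN = G.subgraph(n for n in G if n not in attacked)   # Active Network
        IN = G.subgraph(attacked)                            # Idle Network

        def largest_component(H):
            return max((len(c) for c in nx.connected_components(H)), default=0)

        print("AN largest component:", largest_component(AN))  # 5 (nodes 5-9)
        print("IN largest component:", largest_component(IN))  # 2 (nodes 3-4)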

  1. Robust Critical Point Detection

    SciTech Connect

    Bhatia, Harsh

    2016-07-28

    Robust Critical Point Detection is a software package for computing critical points in a 2D or 3D vector field robustly. The software was developed as part of the author's work at the lab as a PhD student under the Livermore Scholar Program (now called the Livermore Graduate Scholar Program).
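
    The software's robust method is combinatorial; as a simple contrast, the sign test below flags grid cells that may contain a critical point of a 2D vector field. Sign changes in both components are a necessary, not sufficient, condition.

        # Flag candidate cells for critical points of a sampled 2D field (u, v).
        import numpy as np

        def candidate_cells(u, v):
            """Cells whose four corners show sign changes in both components."""
            def mixed(f):
                corners = np.stack([f[:-1, :-1], f[1:, :-1], f[:-1, 1:], f[1:, 1:]])
                return (corners.min(0) < 0) & (corners.max(0) > 0)
            return mixed(u) & mixed(v)

        y, x = np.mgrid[-1:1:20j, -1:1:20j]
        print(candidate_cells(x, y).sum())   # field (u, v) = (x, y): 1 cell, at the origin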

  2. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  3. Identifying Complex Cultural Interactions in the Instructional Design Process: A Case Study of a Cross-Border, Cross-Sector Training for Innovation Program

    ERIC Educational Resources Information Center

    Russell, L. Roxanne; Kinuthia, Wanjira L.; Lokey-Vega, Anissa; Tsang-Kosma, Winnie; Madathany, Reeny

    2013-01-01

    The purpose of this research is to identify complex cultural dynamics in the instructional design process of a cross-sector, cross-border training environment by applying Young's (2009) Culture-Based Model (CBM) as a theoretical framework and taxonomy for description of the instructional design process under the conditions of one case. This…

  4. Socially Shared Metacognitive Regulation during Reciprocal Peer Tutoring: Identifying Its Relationship with Students' Content Processing and Transactive Discussions

    ERIC Educational Resources Information Center

    De Backer, Liesje; Van Keer, Hilde; Valcke, Martin

    2015-01-01

    Although successful collaborative learning requires socially shared metacognitive regulation (SSMR) of the learning process among multiple students, empirical research on SSMR is limited. The present study contributes to the emerging research on SSMR by examining its correlation with both collaborative learners' content processing strategies and…

  5. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

    Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014].
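
    The frequency-differencing step reduces to a backscatter threshold plus a dB-difference window; the threshold and window values below are illustrative, not the paper's calibrated values.

        # Flag a cell as euphausiid-like when its mean volume backscattering
        # strength (Sv, dB) passes a threshold and the 120-minus-43 kHz
        # difference falls in an assumed krill-like window.
        def is_euphausiid(sv43: float, sv120: float,
                          sv_min: float = -70.0, lo: float = 2.0, hi: float = 16.0) -> bool:
            """Threshold and window values are illustrative assumptions."""
            return sv120 >= sv_min and lo <= (sv120 - sv43) <= hi

        print(is_euphausiid(sv43=-78.0, sv120=-66.0))  # True: 12 dB difference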

  6. Identifying causal networks linking cancer processes and anti-tumor immunity using Bayesian network inference and metagene constructs

    PubMed Central

    Kaiser, Jacob L.; Bland, Cassidy L.; Klinke, David J.

    2017-01-01

    Cancer arises from a deregulation of both intracellular and intercellular networks that maintain system homeostasis. Identifying the architecture of these networks and how they are changed in cancer is a pre-requisite for designing drugs to restore homeostasis. Since intercellular networks only appear in intact systems, it is difficult to identify how these networks become altered in human cancer using many of the common experimental models. To overcome this, we used the diversity in normal and malignant human tissue samples from the Cancer Genome Atlas (TCGA) database of human breast cancer to identify the topology associated with intercellular networks in vivo. To improve the underlying biological signals, we constructed Bayesian networks using metagene constructs, which represented groups of genes that are concomitantly associated with different immune and cancer states. We also used bootstrap resampling to establish the significance associated with the inferred networks. In short, we found opposing relationships between cell proliferation and epithelial-to-mesenchymal transformation (EMT) with regards to macrophage polarization. These results were consistent across multiple carcinomas in that proliferation was associated with a type 1 cell-mediated anti-tumor immune response and EMT was associated with a pro-tumor anti-inflammatory response. To address the identifiability of these networks from other datasets, we could identify the relationship between EMT and macrophage polarization with fewer samples when the Bayesian network was generated from malignant samples alone. However, the relationship between proliferation and macrophage polarization was identified with fewer samples when the samples were taken from a combination of the normal and malignant samples. PMID:26785356
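
    A sketch of the bootstrap-support idea, with a thresholded correlation graph standing in for the paper's Bayesian structure learning (an assumption made here for brevity):

        # Re-learn a simple dependency graph on resampled data; edges that recur
        # across bootstrap replicates earn high support.
        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(size=(200, 3))            # samples x metagenes (toy)
        data[:, 1] += 0.8 * data[:, 0]              # plant one real dependency

        counts = np.zeros((3, 3))
        for _ in range(200):
            boot = data[rng.integers(0, len(data), len(data))]
            counts += np.abs(np.corrcoef(boot, rowvar=False)) > 0.4
        print((counts / 200).round(2))              # support for edge (0,1) is ~1.0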

  7. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
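
    The linear precursor of this idea is easy to demonstrate numerically. In the sketch below, the noise covariance contains an information-limiting component proportional to the outer product of the tuning-curve derivative with itself (the linear construction the abstract generalizes to nonlinear codes), so linear Fisher information saturates near 1/eps no matter how many neurons are added. All values are illustrative.

    ```python
    import numpy as np

    # Linear Fisher information I = f'^T Sigma^{-1} f' for a population code
    # whose noise covariance contains an information-limiting component,
    # Sigma = Sigma0 + eps * f' f'^T. As N grows, I saturates near 1/eps
    # instead of growing without bound.

    rng = np.random.default_rng(0)
    eps = 0.01  # sets the information ceiling at 1/eps = 100

    for n_neurons in (10, 100, 1000):
        fprime = rng.normal(1.0, 0.2, n_neurons)  # tuning-curve slopes
        sigma0 = np.eye(n_neurons)                # private (independent) noise
        sigma = sigma0 + eps * np.outer(fprime, fprime)
        info = fprime @ np.linalg.solve(sigma, fprime)
        print(f"N = {n_neurons:5d}  Fisher information = {info:7.2f}")
    ```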

  8. Examining the Cognitive Processes Used by Adolescent Girls and Women Scientists in Identifying Science Role Models: A Feminist Approach

    ERIC Educational Resources Information Center

    Buck, Gayle A.; Plano Clark, Vicki L.; Leslie-Pelecky, Diandra; Lu, Yun; Cerda-Lizarraga, Particia

    2008-01-01

    Women remain underrepresented in science professions. Studies have shown that students are more likely to select careers when they can identify a role model in that career path. Further research has shown that the success of this strategy is enhanced by the use of gender-matched role models. While prior work provides insights into the value of…

  9. Seventeen Projects Carried out by Students Designing for and with Disabled Children: Identifying Designers' Difficulties during the Whole Design Process

    ERIC Educational Resources Information Center

    Magnier, Cecile; Thomann, Guillaume; Villeneuve, Francois

    2012-01-01

    This article aims to identify the difficulties that may arise when designing assistive devices for disabled children. Seventeen design projects involving disabled children, engineering students, and special schools were analysed. A content analysis of the design reports was performed. For this purpose, a coding scheme was built based on a review…

  10. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  11. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

    Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding... Keywords: signal processing, image compression, independent component analysis, sparse...

  12. THE ORIGINS OF LIGHT AND HEAVY R-PROCESS ELEMENTS IDENTIFIED BY CHEMICAL TAGGING OF METAL-POOR STARS

    SciTech Connect

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

    Growing interest in neutron star (NS) mergers as the origin of r-process elements has sprouted since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers, which succeeds in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers in the peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we successfully interpret an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation exhibited by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, the outliers, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that tends to be massive and to eject both light and heavy r-process nuclides.

  13. The Origins of Light and Heavy R-process Elements Identified by Chemical Tagging of Metal-poor Stars

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

    Growing interest in neutron star (NS) mergers as the origin of r-process elements has sprouted since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers, which succeeds in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers in the peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we successfully interpret an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation exhibited by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, the outliers, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that tends to be massive and to eject both light and heavy r-process nuclides.

  14. Feel No Guilt! Your Statistics Are Probably Robust.

    ERIC Educational Resources Information Center

    Micceri, Theodore

    This paper reports an attempt to identify appropriate and robust location estimators for situations that tend to occur among various types of empirical data. Emphasizing robustness across broad unidentifiable ranges of contamination, an attempt was made to replicate, on a somewhat smaller scale, the definitive Princeton Robustness Study of 1972 to…

  15. Acetylome Analysis Identifies SIRT1 Targets in mRNA-Processing and Chromatin-Remodeling in Mouse Liver

    PubMed Central

    Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2015-01-01

    Lysine acetylation is a post-translational modification found on numerous proteins, a strategy used in cell signaling to change protein activity in response to internal or external cues. Sirtuin 1 (SIRT1) is a central lysine deacetylase involved in a variety of cellular processes including metabolism, apoptosis, and DNA repair. Here we characterize the lysine acetylome in mouse liver and, using a Sirt1-/- knockout mouse model, show that SIRT1 regulates the deacetylation of 70 proteins in the liver in vivo. Amongst these SIRT1-regulated proteins, we find that four RNA-processing proteins and a chromatin-remodeling protein can be deacetylated by SIRT1 directly in vitro. The discovery that SIRT1 has a potential role in RNA processing suggests a new layer of regulation in the variety of functions performed by SIRT1. PMID:26468954

  16. Identifying low-dimensional dynamics in type-I edge-localised-mode processes in JET plasmas

    SciTech Connect

    Calderon, F. A.; Chapman, S. C.; Nicol, R. M.; Dendy, R. O.; Webster, A. J.; Alper, B. [EURATOM Collaboration: JET EFDA Contributors]

    2013-04-15

    Edge localised mode (ELM) measurements from reproducibly similar plasmas in the Joint European Torus (JET) tokamak, which differ only in their gas puffing rate, are analysed in terms of the pattern in the sequence of inter-ELM time intervals. It is found that the category of ELM defined empirically as type I (typically more regular, less frequent, and larger in amplitude than other ELM types) embraces substantially different ELMing processes. By quantifying the structure in the sequence of inter-ELM time intervals using delay time plots, we reveal transitions between distinct phase space dynamics, implying transitions between distinct underlying physical processes. The control parameter for the transitions between these different ELMing processes is the gas puffing rate.
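
    A delay-time plot is simply the sequence of inter-ELM intervals plotted against a lagged copy of itself. A minimal sketch, with invented interval data:

    ```python
    import numpy as np

    def delay_pairs(intervals, lag=1):
        """(dt_n, dt_{n+lag}) pairs for a delay-time plot of inter-ELM intervals."""
        x = np.asarray(intervals, dtype=float)
        return np.column_stack([x[:-lag], x[lag:]])

    # Invented interval sequences: a near-periodic process clusters near one
    # point in the delay plane, while alternating small/large intervals split
    # into two clusters, revealing different underlying dynamics.
    print(delay_pairs([10.0, 10.1, 9.9, 10.0, 10.1]))
    print(delay_pairs([5.0, 15.0, 5.1, 14.8, 5.2]))
    ```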

  17. Deep-UV positive resist image by dry etching (DUV PRIME): a robust process for 0.3-μm contact holes

    NASA Astrophysics Data System (ADS)

    Louis, Didier; Laporte, Philippe; Molle, Pascale; Ullmann, H.

    1994-05-01

    A classical positive resist process for DUV is not yet available and stabilized. We noted several limiting points, such as the delay-time sensitivity of the resist material, the limitation of thickness related to ultimate resolution, and the bulk effect. The P.R.I.M.E. (Positive Resist Image by dry Etching) process technology, using the DUV 248 nm exposure wavelength, improves on each of these process parameters; for example, a well-known and stable resist (J.S.R.-U.C.B. PLASMASK 200G) is used with hexamethyldisilazane (HMDS) as the silylating compound. The combination of DUV exposure and the top-surface-imaging P.R.I.M.E. process can open contact holes down to 0.3 μm with a large process window and good wafer uniformity. This publication shows the improvement of each process parameter, with extended information given for process latitude (focus and exposure). We demonstrated and verified the feasibility of the contact-hole process by etching 1 μm of oxide (BPSG + USG) through the P.R.I.M.E. process lithography.

  18. An Exploration of Strategic Planning Perspectives and Processes within Community Colleges Identified as Being Distinctive in Their Strategic Planning Practices

    ERIC Educational Resources Information Center

    Augustyniak, Lisa J.

    2015-01-01

    Community college leaders face unprecedented change, and some have begun reexamining their institutional strategic planning processes. Yet, studies in higher education strategic planning spend little time examining how community colleges formulate their strategic plans. This mixed-method qualitative study used an expert sampling method to identify…

  19. Identifying the Associated Factors of Mediation and Due Process in Families of Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Burke, Meghan M.; Goldman, Samantha E.

    2015-01-01

    Compared to families of students with other types of disabilities, families of students with autism spectrum disorder (ASD) are significantly more likely to enact their procedural safeguards such as mediation and due process. However, we do not know which school, child, and parent characteristics are associated with the enactment of safeguards.…

  20. The AP Chemistry Course Audit: A Fertile Ground for Identifying and Addressing Misconceptions about the Course and Process

    ERIC Educational Resources Information Center

    Schwenz, Richard W.; Miller, Sheldon

    2014-01-01

    The advanced placement course audit was implemented to standardize the college-level curricular and resource requirements for AP courses. While the process has had this effect, it has brought with it misconceptions about how much the College Board intends to control what happens within the classroom, what information is required to be included in…

  1. Calculation of the Helfferich number to identify the rate-controlling step of ion exchange for a batch process

    SciTech Connect

    Bunzl, K.

    1995-08-01

    The Helfferich number He is frequently used as a valuable criterion to decide whether film diffusion or particle diffusion of the ions is the rate-determining step of an ion exchange process. The corresponding equation given by Helfferich is restricted, however, to the boundary condition of an infinite solution volume. In the present paper, the Helfferich number is also calculated for a finite solution volume, i.e., for a typical batch process. Because the resulting equation can be solved only numerically, the results are presented in graphical form. It is also examined for which batch processes the conventional Helfferich number already yields a conservative, and thus very simple and useful, estimate of the rate-determining step. Information on the kinetics of ion exchange reactions is required not only for the economic employment of synthetic ion exchangers in industry and the laboratory but also for a better understanding of these processes in natural systems, e.g., the sorption of nutrient and toxic ions by the soil.
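
    For reference, a minimal sketch of the conventional (infinite-solution-volume) criterion, using the textbook form He = (C̄ D̄ δ)/(C D r0) · (5 + 2α); the parameter values are invented, and the finite-volume correction derived in the paper is not reproduced here.

    ```python
    def helfferich_number(c_bar, d_bar, delta, c, d, r0, alpha):
        """Helfferich criterion, infinite-solution-volume case (textbook form;
        the cited paper extends it to finite batch volumes).

        c_bar : fixed-group concentration in the exchanger (mol/L)
        d_bar : interdiffusion coefficient inside the bead (m^2/s)
        delta : Nernst film thickness (m)
        c     : solution concentration (mol/L)
        d     : diffusion coefficient in the film (m^2/s)
        r0    : bead radius (m)
        alpha : separation factor of the exchanging ions
        """
        return (c_bar * d_bar * delta) / (c * d * r0) * (5.0 + 2.0 * alpha)

    # He >> 1 -> film diffusion controls; He << 1 -> particle diffusion controls.
    he = helfferich_number(c_bar=2.0, d_bar=1e-10, delta=1e-5,
                           c=0.01, d=1e-9, r0=5e-4, alpha=1.5)
    print(f"He = {he:.2f}")
    ```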

  2. Identifying thresholds in pattern-process relationships: a new cross-scale interactions experiment at the Jornada Basin LTER

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interactions among ecological patterns and processes at multiple scales play a significant role in threshold behaviors in arid systems. Black grama grasslands and mesquite shrublands are hypothesized to operate under unique sets of feedbacks: grasslands are maintained by fine-scale biotic feedbacks ...

  3. Robust control of ionic polymer metal composites

    NASA Astrophysics Data System (ADS)

    Kang, Sunhyuk; Shin, Jongho; Kim, Seong Jun; Kim, H. Jin; Hyup Kim, Yong

    2007-12-01

    Ionic polymer-metal composites (IPMCs) have been considered for various applications due to their light weight, large bending, and low actuation voltage requirements. However, their response can be slow and can vary widely, depending on factors such as fabrication processes, water content, and contact conditions with the electrodes. In order to utilize their capability in various high-performance microelectromechanical systems, controllers need to address this uncertainty and non-repeatability while improving the response speed. In this work, we identified an empirical model for the dynamic relationship between the applied voltage and the IPMC beam deflection, which includes the uncertainties and variations of the response. Four types of controller were then designed and their performances compared: a proportional-integral-derivative (PID) controller with gains optimized using a co-evolutionary algorithm, and three types of robust controller based on H∞, H∞ with loop shaping, and μ-synthesis, respectively. Our results show that the robust control techniques can significantly improve IPMC performance against non-repeatability and parametric uncertainties, achieving faster response and lower overshoot than the PID control while using a lower actuation voltage.
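
    The non-repeatability problem the paper tackles can be illustrated with a toy simulation: the same fixed PID gains applied to a first-order plant whose gain varies between samples, standing in for IPMC deflection dynamics. Everything below is invented for illustration; it is not the paper's identified model or controller design.

    ```python
    def simulate_pid(kp, ki, kd, plant_gain, tau=0.5, dt=0.01, t_end=3.0, ref=1.0):
        """Discrete PID on a first-order plant y' = (plant_gain*u - y)/tau,
        a toy stand-in for IPMC deflection dynamics (all values invented)."""
        y, integ, prev_err, t = 0.0, 0.0, ref, 0.0
        while t < t_end:
            err = ref - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            y += dt * (plant_gain * u - y) / tau  # explicit Euler plant update
            prev_err, t = err, t + dt
        return y

    # Same PID gains against two plant gains mimicking IPMC non-repeatability:
    # the response at t = 3 s differs markedly between samples.
    for g in (1.0, 0.4):
        print(f"plant gain {g}: deflection at t_end = {simulate_pid(2.0, 1.5, 0.05, g):.3f}")
    ```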

  4. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2005-09-06

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer occurs by low-pass filtering of the windowed reflections from the target porous or fractured layers, retaining frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, the viscosity of the fluid, and the fluid saturation of the porous or fractured layers.
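
    A minimal sketch of the low-pass filtering step, using a Butterworth filter from SciPy on a synthetic trace; the corner frequency and signal content are invented, not taken from the patent.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def lowpass_window(trace, fs, corner_hz):
        """Low-pass a windowed reflection so only energy below the
        low-frequency corner of the spectrum remains (illustrative of the
        FDPI imaging step; the corner choice here is a placeholder)."""
        b, a = butter(4, corner_hz, btype="low", fs=fs)
        return filtfilt(b, a, trace)  # zero-phase filtering

    # Synthetic trace: 10 Hz component buried under 60 Hz energy.
    fs = 500.0
    t = np.arange(0, 1, 1 / fs)
    trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
    low = lowpass_window(trace, fs, corner_hz=20.0)
    print(f"rms before: {np.sqrt(np.mean(trace**2)):.2f}, after: {np.sqrt(np.mean(low**2)):.2f}")
    ```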

  5. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2006-11-14

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer occurs by low-pass filtering of the windowed reflections from the target porous or fractured layers, retaining frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, the viscosity of the fluid, and the fluid saturation of the porous or fractured layers.

  6. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation

    PubMed Central

    Yang, Huan; Meijer, Hil G. E.; Buitenweg, Jan R.; van Gils, Stephan A.

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model in this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system. PMID:27994563
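
    The model comparison step uses the Bayesian Information Criterion, which penalizes the maximized log-likelihood by the number of parameters. A minimal sketch with invented likelihood values (the direction of the comparison mirrors the abstract's conclusion):

    ```python
    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian Information Criterion: lower is better."""
        return n_params * np.log(n_obs) - 2.0 * log_likelihood

    # Hypothetical fits to the same yes-no detection data (values invented):
    # a 6-parameter mechanistic model vs. a 2-parameter logistic model.
    n_obs = 300
    print("mechanistic:", bic(log_likelihood=-150.0, n_params=6, n_obs=n_obs))
    print("logistic:   ", bic(log_likelihood=-170.0, n_params=2, n_obs=n_obs))
    ```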

  7. Novel Dendritic Kinesin Sorting Identified by Different Process Targeting of Two Related Kinesins: KIF21A and KIF21B

    PubMed Central

    Marszalek, Joseph R.; Weiner, Joshua A.; Farlow, Samuel J.; Chun, Jerold; Goldstein, Lawrence S.B.

    1999-01-01

    Neurons use kinesin and dynein microtubule-dependent motor proteins to transport essential cellular components along axonal and dendritic microtubules. In a search for new kinesin-like proteins, we identified two neuronally enriched mouse kinesins that provide insight into a unique intracellular kinesin targeting mechanism in neurons. KIF21A and KIF21B share colinear amino acid similarity to each other, but not to any previously identified kinesins outside of the motor domain. Each protein also contains a domain of seven WD-40 repeats, which may be involved in binding to cargoes. Despite the amino acid sequence similarity between KIF21A and KIF21B, these proteins localize differently to dendrites and axons. KIF21A protein is localized throughout neurons, while KIF21B protein is highly enriched in dendrites. The plus end-directed motor activity of KIF21B and its enrichment in dendrites indicate that models suggesting that minus end-directed motor activity is sufficient for dendrite specific motor localization are inadequate. We suggest that a novel kinesin sorting mechanism is used by neurons to localize KIF21B protein to dendrites since its mRNA is restricted to the cell body. PMID:10225949

  8. Identifying consumer preferences for specific beef flavor characteristics in relation to cattle production and postmortem processing parameters.

    PubMed

    O'Quinn, T G; Woerner, D R; Engle, T E; Chapman, P L; Legako, J F; Brooks, J C; Belk, K E; Tatum, J D

    2016-02-01

    Sensory analysis of ground LL samples representing 12 beef product categories was conducted in 3 different regions of the U.S. to identify flavor preferences of beef consumers. Treatments characterized production-related flavor differences associated with USDA grade, cattle type, finishing diet, growth enhancement, and postmortem aging method. Consumers (N=307) rated cooked samples for 12 flavors and overall flavor desirability. Samples were analyzed to determine fatty acid content. Volatile compounds produced by cooking were extracted and quantified. Overall, consumers preferred beef that rated high for beefy/brothy, buttery/beef fat, and sweet flavors and disliked beef with fishy, livery, gamey, and sour flavors. Flavor attributes of samples higher in intramuscular fat with greater amounts of monounsaturated fatty acids and lesser proportions of saturated, odd-chain, omega-3, and trans fatty acids were preferred by consumers. Of the volatiles identified, diacetyl and acetoin were most closely correlated with desirable ratings for overall flavor and dimethyl sulfide was associated with an undesirable sour flavor.

  9. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model in this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.

  10. Identifying component-processes of executive functioning that serve as risk factors for the alcohol-aggression relation.

    PubMed

    Giancola, Peter R; Godlaski, Aaron J; Roth, Robert M

    2012-06-01

    The present investigation determined how different component-processes of executive functioning (EF) served as risk factors for intoxicated aggression. Participants were 512 (246 males and 266 females) healthy social drinkers between 21 and 35 years of age. EF was measured using the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A) that assesses nine EF components. After the consumption of either an alcohol or a placebo beverage, participants were tested on a modified version of the Taylor Aggression Paradigm in which mild electric shocks were received from, and administered to, a fictitious opponent. Aggressive behavior was operationalized as the shock intensities and durations administered to the opponent. Although a general BRIEF-A EF construct consisting of all nine components predicted intoxicated aggression, the best predictor involved one termed the Behavioral Regulation Index that comprises component processes such as inhibition, emotional control, flexible thinking, and self-monitoring.

  11. Identifying the sources and processes of mercury in subtropical estuarine and ocean sediments using Hg isotopic composition.

    PubMed

    Yin, Runsheng; Feng, Xinbin; Chen, Baowei; Zhang, Junjun; Wang, Wenxiong; Li, Xiangdong

    2015-02-03

    The concentrations and isotopic compositions of mercury (Hg) in surface sediments of the Pearl River Estuary (PRE) and the South China Sea (SCS) were analyzed. The data revealed significant differences between the total Hg (THg) in fine-grained sediments collected from the PRE (8-251 μg kg⁻¹) and those collected from the SCS (12-83 μg kg⁻¹). Large spatial variations in Hg isotopic compositions were observed in the SCS (δ²⁰²Hg, from -2.82 to -2.10‰; Δ¹⁹⁹Hg, from +0.21 to +0.45‰) and PRE (δ²⁰²Hg, from -2.80 to -0.68‰; Δ¹⁹⁹Hg, from -0.15 to +0.16‰). The large positive Δ¹⁹⁹Hg in the SCS indicated that a fraction of the Hg has undergone Hg²⁺ photoreduction prior to incorporation into the sediments. The relatively negative Δ¹⁹⁹Hg values in the PRE indicated that photoreduction of Hg is not the primary route for the removal of Hg from the water column. The riverine input of fine particles played an important role in transporting Hg to the PRE sediments. In the deep ocean bed of the SCS, source-related signatures of Hg isotopes may have been altered by natural geochemical processes (e.g., Hg²⁺ photoreduction and preferential adsorption). Using Hg isotope compositions, we estimate that river deliveries of Hg from industrial and urban sources and natural soils could be the main inputs of Hg to the PRE. However, the use of Hg isotopes as tracers in source attribution could be limited because of isotope fractionation by natural processes in the SCS.
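
    Source apportionment with isotopes often reduces to end-member mixing. The sketch below shows the simplest two-end-member version for δ²⁰²Hg; the end-member and sample values are invented, and real applications weight by concentration and typically use both δ²⁰²Hg and Δ¹⁹⁹Hg.

    ```python
    def mixing_fraction(delta_sample, delta_a, delta_b):
        """Fraction of end-member A in a two-end-member mixture,
        assuming simple linear mixing of the isotope signature."""
        return (delta_sample - delta_b) / (delta_a - delta_b)

    # Hypothetical d202Hg end members (permil): industrial/urban vs. natural
    # soil. These values are invented, not taken from the study.
    f_industrial = mixing_fraction(delta_sample=-1.50,
                                   delta_a=-0.70,   # industrial/urban sources
                                   delta_b=-2.60)   # natural soils
    print(f"industrial fraction ~ {f_industrial:.2f}")
    ```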

  12. Geostatistical analysis to identify hydrogeochemical processes in complex aquifers: a case study (Aguadulce unit, Almeria, SE Spain).

    PubMed

    Daniele, Linda; Pulido Bosch, Antonio; Vallejos, Angela; Molina, Luis

    2008-06-01

    The Aguadulce aquifer unit in southeastern Spain is a complex hydrogeological system because of the varied lithology of the aquifer strata and the variability of the processes that can take place within the unit. Factorial analysis of the data allowed the number of variables to be reduced to 3 factors, which were found to be related to physico-chemical processes such as marine intrusion and leaching of saline deposits. Variographic analysis was applied to these factors, culminating in a study of spatial distribution using ordinary kriging. Mapping of the factors allowed rapid differentiation of some of the processes that affect the waters of the Gador carbonate aquifer within the Aguadulce unit, without the need to resort to purely hydrogeochemical techniques. The results indicate the existence of several factors related to salinity: marine intrusion, paleowaters, and/or leaching of marls and evaporitic deposits. The techniques employed are effective, and the results conform to those obtained using hydrogeochemical methods (vertical records of conductivity and temperature, ion ratios, and others). The findings of this study confirm that the application of such analytical methods can provide a useful assessment of factors affecting groundwater composition.
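
    The dimensionality-reduction step can be sketched with scikit-learn's FactorAnalysis: variables driven by a common latent process (here a stand-in for marine intrusion) load on the same factor. The data and loadings are synthetic, and the study's variographic analysis and kriging of factor scores are not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Reduce hydrochemical variables to a common factor, as in the study
    # (variables and data are invented; kriging of factor scores omitted).
    rng = np.random.default_rng(0)
    intrusion = rng.normal(size=100)             # latent process
    data = np.column_stack([
        3.0 * intrusion + rng.normal(size=100),  # Cl- (intrusion-driven)
        2.5 * intrusion + rng.normal(size=100),  # Na+ (intrusion-driven)
        rng.normal(size=100),                    # unrelated variable
    ])
    fa = FactorAnalysis(n_components=1).fit(data)
    # Loads mainly on the first two columns (sign of loadings is arbitrary).
    print(fa.components_.round(2))
    ```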

  13. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process

    PubMed Central

    Springelkamp, Henriët.; Höhn, René; Mishra, Aniket; Hysi, Pirro G.; Khor, Chiea-Chuen; Loomis, Stephanie J.; Bailey, Jessica N. Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F.; Luo, Xiaoyan; Ramdas, Wishal D.; Vithana, Eranga; Nongpiur, Monisha E.; Montgomery, Grant W.; Xu, Liang; Mountain, Jenny E.; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C.; Sim, Kar-Seng; van Leeuwen, Elisabeth M.; Iglesias, Adriana I.; Verhoeven, Virginie J. M.; Hauser, Michael A.; Loon, Seng-Chee; Despriet, Dominiek D. G.; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G.; Schillert, Arne; Kang, Jae H.; Landers, John; Jonasson, Fridbert; Cree, Angela J.; van Koolwijk, Leonieke M. E.; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Mitchell, Paul; Wang, Jie Jin; Rochtchina, Elena; Attia, John; Scott, Rodney; Holliday, Elizabeth G.; Wong, Tien-Yin; Baird, Paul N.; Xie, Jing; Inouye, Michael; Viswanathan, Ananth; Sim, Xueling; Weinreb, Robert N.; de Jong, Paulus T. V. M.; Oostra, Ben A.; Uitterlinden, André G.; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P.; Allingham, R. Rand; Brilliant, Murray H.; Budenz, Donald L.; Cooke Bailey, Jessica N.; Christen, William G.; Fingert, John; Friedman, David S.; Gaasterland, Douglas; Gaasterland, Terry; Haines, Jonathan L.; Hauser, Michael A.; Kang, Jae Hee; Kraft, Peter; Lee, Richard K.; Lichter, Paul R.; Liu, Yutao; Loomis, Stephanie J.; Moroi, Sayoko E.; Pasquale, Louis R.; Pericak-Vance, Margaret A.; Realini, Anthony; Richards, Julia E.; Schuman, Joel S.; Scott, William K.; Singh, Kuldev; Sit, Arthur J.; Vollrath, Douglas; Weinreb, Robert N.; Wiggs, Janey L.; Wollstein, Gadi; Zack, Donald J.; Zhang, Kang; Donnelly (Chair), Peter; Barroso (Deputy Chair), Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N. A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C. A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew (Chair), Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C. A.; Spector, Timothy D.; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R.; Teo, Yik-Ying; Haines, Jonathan L.; Wolfs, Roger C. W.; Lemij, Hans G.; Tai, E-Shyong; Jansonius, Nomdo M.; Jonas, Jost B.; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C.; Klaver, Caroline C. W.; Craig, Jamie E.; Macgregor, Stuart; Mackey, David A.; Lotery, Andrew J.; Stefansson, Kari; Bergen, Arthur A. B.; Young, Terri L.; Wiggs, Janey L.; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R.; Hewitt, Alex W.; van Duijn, Cornelia M.; Hammond, Christopher J.

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763

  14. Process development of a New Haemophilus influenzae type b conjugate vaccine and the use of mathematical modeling to identify process optimization possibilities.

    PubMed

    Hamidi, Ahd; Kreeftenberg, Hans; V D Pol, Leo; Ghimire, Saroj; V D Wielen, Luuk A M; Ottens, Marcel

    2016-05-01

    Vaccination is one of the most successful public health interventions, being a cost-effective tool in preventing deaths among young children. The earliest vaccines were developed following empirical methods, creating vaccines by trial and error. New process development tools, for example mathematical modeling, as well as new regulatory initiatives requiring better understanding of both the product and the process, are being applied to well-characterized biopharmaceuticals (for example recombinant proteins). The vaccine industry still lags behind in this respect. A production process for a new Haemophilus influenzae type b (Hib) conjugate vaccine, including the related quality control (QC) tests, was developed and transferred to a number of emerging vaccine manufacturers. This contributed to a sustainable global supply of affordable Hib conjugate vaccines, as illustrated by the market launch of the first Hib vaccine based on this technology in 2007 and the concomitant price reduction of Hib vaccines. This paper describes the development approach followed for this Hib conjugate vaccine as well as the mathematical modeling tool applied recently to indicate options for further improvement of the initial Hib process. The strategy followed during the process development of this Hib conjugate vaccine was a targeted and integrated approach based on prior knowledge and experience with similar products, using multi-disciplinary expertise. Mathematical modeling was used to develop a predictive model for the initial Hib process (the 'baseline' model) as well as an 'optimized' model, proposing a number of process changes that could lead to further reduction in price. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:568-580, 2016.

  15. Application of surface area measurement for identifying the source of batch-to-batch variation in processability.

    PubMed

    Vippagunta, Radha R; Pan, Changkang; Vakil, Ronak; Meda, Vindhya; Vivilecchia, Richard; Motto, Michael

    2009-01-01

    The primary goal of this study was to evaluate the use of specific surface area as a measurable physical property of materials for understanding batch-to-batch variation in flow behavior. Specific surface area measurements provide information about the nature of the surface making up the solid, which may include defects or void space on the surface. These void spaces are often present in crystalline material due to varying degrees of disorder and can be considered amorphous regions. In the present work, the specific surface area of 10 batches of the same active pharmaceutical ingredient (compound 1) with varying quantities of amorphous content was investigated. Some of these batches showed different flow behavior when processed using roller compaction. The surface area value was found to increase in the presence of low amorphous content and to decrease with high amorphous content, as compared to crystalline material. To complement the information obtained from the above study, physical blends of another crystalline active pharmaceutical ingredient (compound 2) and its amorphous form were prepared in known proportions. A similar trend in specific surface area was found. Tablets prepared from a known formulation with varying amorphous content of the active ingredient (compound 3) also exhibited the same trend. A hypothesis to explain the correlation between amorphous content and specific surface area has been proposed. The results strongly support the use of specific surface area as a measurable tool for investigating the source of batch-to-batch variation in processability.

  16. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions which are designed for robust operation and worst-case situations, such as day/night cameras or rain and snow operation. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  17. Content uniformity determination of pharmaceutical tablets using five near-infrared reflectance spectrometers: a process analytical technology (PAT) approach using robust multivariate calibration transfer algorithms.

    PubMed

    Sulub, Yusuf; LoBrutto, Rosario; Vivilecchia, Richard; Wabuyele, Busolo Wa

    2008-03-24

    Near-infrared calibration models were developed for the determination of content uniformity of pharmaceutical tablets containing 29.4% drug load for two dosage strengths (X and Y). Both dosage strengths have a circular geometry and differ only in size and weight. Strength X samples weigh approximately 425 mg with a diameter of 12 mm, while strength Y samples weigh approximately 1700 mg with a diameter of 20 mm. Data used in this study were acquired from five NIR instruments manufactured by two different vendors. One of these spectrometers is a dispersive NIR system while the other four are Fourier transform (FT) based. The transferability of the optimized partial least-squares (PLS) calibration models developed on the primary instrument (A), located in a research facility, was evaluated using spectral data acquired from secondary instruments B, C, D, and E. Instruments B and E were located in the same research facility as spectrometer A, while instruments C and D were located in a production facility 35 miles away. The same set of tablet samples was used to acquire spectral data from all instruments. This scenario mimics the conventional pharmaceutical technology transfer from research and development to production. Direct cross-instrument prediction without standardization was performed between the primary and each secondary instrument to evaluate the robustness of the primary instrument calibration model. For the strength Y samples, this approach was successful for data acquired on instruments B, C, and D, producing root mean square errors of prediction (RMSEP) of 1.05, 1.05, and 1.22%, respectively. However, for instrument E data, this approach was not successful, producing an RMSEP value of 3.40%. A similar deterioration was observed for the strength X samples, with RMSEP values of 2.78, 5.54, 3.40, and 5.78% corresponding to spectral data acquired on instruments B, C, D, and E, respectively. To minimize the effect of instrument variability
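
    RMSEP, the figure of merit used throughout this study, is just the root mean squared difference between NIR-predicted and reference values. A minimal sketch with invented content-uniformity data:

    ```python
    import numpy as np

    def rmsep(y_true, y_pred):
        """Root mean square error of prediction (same units as y)."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

    # Hypothetical reference vs. NIR-predicted content values (% label claim)
    # for tablets measured on a secondary instrument without standardization.
    reference = [98.2, 101.5, 99.8, 100.4, 97.6]
    predicted = [99.0, 100.1, 101.2, 99.0, 98.9]
    print(f"RMSEP = {rmsep(reference, predicted):.2f}%")
    ```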

  18. Comparative Transcriptional Analysis of Loquat Fruit Identifies Major Signal Networks Involved in Fruit Development and Ripening Process.

    PubMed

    Song, Huwei; Zhao, Xiangxiang; Hu, Weicheng; Wang, Xinfeng; Shen, Ting; Yang, Liming

    2016-11-04

    Loquat (Eriobotrya japonica Lindl.) is an important non-climacteric fruit that is rich in essential nutrients such as minerals and carotenoids. During fruit development and ripening, thousands of differentially expressed genes (DEGs) from various metabolic pathways cause a series of physiological and biochemical changes. To better understand the underlying mechanism of fruit development, Solexa/Illumina RNA-seq high-throughput sequencing was used to evaluate global changes in gene transcription levels. More than 51,610,234 high-quality reads from ten runs of fruit development were sequenced and assembled into 48,838 unigenes. Among 3256 DEGs, 2304 unigenes could be annotated to the Gene Ontology database. These DEGs were distributed into 119 pathways described in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. A large number of DEGs were involved in carbohydrate metabolism, hormone signaling, and cell-wall degradation. The real-time reverse transcription (qRT)-PCR analyses revealed that several genes related to cell expansion, auxin signaling, and ethylene response were differentially expressed during fruit development. Other members of transcription factor families were also identified. There were 952 DEGs considered novel genes with no annotation in any database. These unigenes will serve as an invaluable genetic resource for loquat molecular breeding and postharvest storage.

  19. Niche Divergence versus Neutral Processes: Combined Environmental and Genetic Analyses Identify Contrasting Patterns of Differentiation in Recently Diverged Pine Species

    PubMed Central

    Moreno-Letelier, Alejandra; Ortíz-Medrano, Alejandra; Piñero, Daniel

    2013-01-01

    Background and Aims Resolving the relationships of recently diverged taxa poses a challenge due to shared polymorphism and weak reproductive barriers. Multiple lines of evidence are needed to identify independently evolving lineages. This is especially true of long-lived species with large effective population sizes and slow rates of lineage sorting. North American pines are an interesting group in which to test this multiple-evidence approach. Our aim is to combine cytoplasmic genetic markers with environmental information to clarify species boundaries and relationships of the species complex of Pinus flexilis, Pinus ayacahuite, and Pinus strobiformis. Methods Mitochondrial and chloroplast sequences were combined with previously obtained microsatellite data and contrasted with environmental information to reconstruct the phylogenetic relationships of the species complex. Ecological niche models were compared to test whether ecological divergence is significant among species. Key Results and Conclusion Separately, both genetic and ecological evidence support a clear differentiation of all three species, though with different topologies, and also reveal an ancestral contact zone between P. strobiformis and P. ayacahuite. The marked ecological differentiation of P. flexilis suggests that ecological speciation has occurred in this lineage, but this is not reflected in neutral markers. The inclusion of environmental traits in phylogenetic reconstruction improved the resolution of internal branches. We suggest that combining environmental and genetic information would be useful for species delimitation and phylogenetic studies in other recently diverged species complexes. PMID:24205167

  20. Comparative Transcriptional Analysis of Loquat Fruit Identifies Major Signal Networks Involved in Fruit Development and Ripening Process

    PubMed Central

    Song, Huwei; Zhao, Xiangxiang; Hu, Weicheng; Wang, Xinfeng; Shen, Ting; Yang, Liming

    2016-01-01

    Loquat (Eriobotrya japonica Lindl.) is an important non-climacteric fruit that is rich in essential nutrients such as minerals and carotenoids. During fruit development and ripening, thousands of differentially expressed genes (DEGs) from various metabolic pathways cause a series of physiological and biochemical changes. To better understand the underlying mechanism of fruit development, Solexa/Illumina RNA-seq high-throughput sequencing was used to evaluate global changes in gene transcription levels. More than 51,610,234 high-quality reads from ten runs of fruit development were sequenced and assembled into 48,838 unigenes. Among 3256 DEGs, 2304 unigenes could be annotated to the Gene Ontology database. These DEGs were distributed into 119 pathways described in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. A large number of DEGs were involved in carbohydrate metabolism, hormone signaling, and cell-wall degradation. The real-time reverse transcription (qRT)-PCR analyses revealed that several genes related to cell expansion, auxin signaling, and ethylene response were differentially expressed during fruit development. Other members of transcription factor families were also identified. There were 952 DEGs considered novel genes with no annotation in any database. These unigenes will serve as an invaluable genetic resource for loquat molecular breeding and postharvest storage. PMID:27827928

  1. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence

    PubMed Central

    McLaughlin, Katie A.; Garrad, Megan C.; Somerville, Leah H.

    2015-01-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds. PMID:26869841

  2. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence.

    PubMed

    McLaughlin, Katie A; Garrad, Megan C; Somerville, Leah H

    2015-12-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds.

  3. Using empirical models of species colonization under multiple threatening processes to identify complementary threat-mitigation strategies.

    PubMed

    Tulloch, Ayesha I T; Mortelliti, Alessio; Kay, Geoffrey M; Florance, Daniel; Lindenmayer, David

    2016-08-01

    Approaches to prioritize conservation actions are gaining popularity. However, limited empirical evidence exists on which species might benefit most from threat mitigation and on what combination of threats, if mitigated simultaneously, would result in the best outcomes for biodiversity. We devised a way to prioritize threat mitigation at a regional scale with empirical evidence based on predicted changes to population dynamics, information that is lacking in most threat-management prioritization frameworks, which rely on expert elicitation. We used dynamic occupancy models to investigate the effects of multiple threats (tree cover, grazing, and the presence of a hyperaggressive competitor, the Noisy Miner (Manorina melanocephala)) on bird-population dynamics in an endangered woodland community in southeastern Australia. The 3 threatening processes had different effects on different species. We used predicted patch-colonization probabilities to estimate the benefit to each species of removing one or more threats. We then determined the complementary set of threat-mitigation strategies that maximized colonization of all species while ensuring that redundant actions with little benefit were avoided. The single action that resulted in the highest colonization was increasing tree cover, which increased patch colonization by 5% and 11% on average across all species and for declining species, respectively. Combining Noisy Miner control with increasing tree cover increased species colonization by 10% and 19% on average for all species and for declining species, respectively, and was a higher priority than changing grazing regimes. Guidance for prioritizing threat mitigation is critical in the face of cumulative threatening processes. By incorporating population dynamics in the prioritization of threat management, our approach helps ensure funding is not wasted on ineffective management programs that target the wrong threats or species.
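
    The complementarity idea (avoid adding actions whose benefits are already delivered by earlier picks) can be sketched as a greedy selection over per-species gains. The numbers below are invented, but the outcome mirrors the abstract's ordering: tree cover first, Noisy Miner control second, and the grazing change left out as redundant.

    ```python
    def greedy_threat_actions(action_gain, n_actions=2):
        """Greedy complementary selection of threat-mitigation actions.

        action_gain: dict mapping action -> {species: colonization gain}.
        A species' realized benefit is the best gain among chosen actions,
        so an action redundant with earlier picks adds little marginal value.
        """
        chosen, best = [], {}
        remaining = dict(action_gain)
        for _ in range(min(n_actions, len(remaining))):
            def marginal(action):
                return sum(max(g - best.get(sp, 0.0), 0.0)
                           for sp, g in remaining[action].items())
            pick = max(remaining, key=marginal)
            chosen.append(pick)
            for sp, g in remaining.pop(pick).items():
                best[sp] = max(best.get(sp, 0.0), g)
        return chosen, best

    # Invented per-species colonization gains for three candidate actions.
    gains = {
        "increase tree cover": {"sp1": 0.11, "sp2": 0.05, "sp3": 0.04},
        "control Noisy Miner": {"sp1": 0.08, "sp2": 0.09, "sp3": 0.02},
        "change grazing":      {"sp1": 0.10, "sp2": 0.04, "sp3": 0.03},
    }
    actions, benefit = greedy_threat_actions(gains)
    print(actions)   # tree cover first, then Noisy Miner control
    print(benefit)
    ```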

  4. Demonstration of the efficiency and robustness of an acid leaching process to remove metals from various CCA-treated wood samples.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis

    2014-01-01

    In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process using chromated copper arsenate (CCA) treated wood samples with different initial metal loading and elapsed time between wood preservation treatment and remediation. The sulfuric acid leaching process resulted in the solubilization of more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA-chips and in the solubilization of more than 96% of the As, 78% of the Cr and 91% of the Cu from CCA-sawdust. The results showed that the performance of this leaching process might be influenced by the initial metal loading of the treated wood wastes and the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge in municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and then from the effluents.

  5. SU-C-304-02: Robust and Efficient Process for Acceptance Testing of Varian TrueBeam Linacs Using An Electronic Portal Imaging Device (EPID)

    SciTech Connect

    Yaddanapudi, S; Cai, B; Sun, B; Li, H; Noel, C; Goddu, S; Mutic, S; Harry, T; Pawlicki, T

    2015-06-15

    Purpose: The purpose of this project was to develop a process that utilizes the onboard kV and MV electronic portal imaging devices (EPIDs) to perform rapid acceptance testing (AT) of linacs, in order to improve efficiency and standardize AT equipment and processes. Methods: In this study a Varian TrueBeam linac equipped with an amorphous-silicon-based EPID (aSi1000) was used. The conventional set of AT tests and tolerances was used as a baseline guide, and a novel methodology was developed to perform as many tests as possible using the EPID exclusively. The developer mode on the Varian TrueBeam linac was used to automate the process. In the current AT process there are about 45 tests that call for customer demos. Many of the geometric tests, such as jaw alignment and MLC positioning, are performed with highly manual methods, such as using graph paper. The goal of the new methodology was to achieve quantitative testing while reducing variability in data acquisition, analysis, and interpretation of the results. The developed process was validated on two machines at two different institutions. Results: At least 25 of the 45 (56%) tests which required customer demos can be streamlined and performed using EPIDs. More than half of the AT tests can be fully automated using the developer mode, while others still require some user interaction. Overall, the preliminary data show that EPID-based linac AT can be performed in less than a day, compared to 2-3 days using conventional methods. Conclusions: Our preliminary results show that the performance of onboard imagers is quite suitable for both geometric and dosimetric testing of TrueBeam systems. A standardized AT process can tremendously improve efficiency and minimize the variability related to third-party quality assurance (QA) equipment and the available onsite expertise. Research funding provided by Varian Medical Systems. Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical Systems.

  6. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    Polymers of bacterial origin, either secreted by cells or the degraded product of cell lysis, form isolated mucoidal strands as well as well-developed biofilms on interfaces. Biofilms are structurally and compositionally complex and are readily distinguishable from abiogenic films. These structures range in size from micrometers to decimeters, the latter occurring as the well-known mineralised biofilms called stromatolites. Compositionally, bacterial polymers are greater than 90% water, while the majority of the macromolecules forming the framework of the polymers are polysaccharides (with some nucleic acids and proteins). These macromolecules contain a vast number of functional groups, such as carboxyls, hydroxyls, and phosphoryls, which are implicated in cation binding. It is this elevated metal-binding capacity which provides the bacterial polymer with structural support and also helps to preserve it, for up to 3.5 b.y., in the terrestrial rock record. The macromolecules can thus become rapidly mineralised and trapped in a mineral matrix. Through early and late diagenesis (bacterial degradation, burial, heat, pressure, and time) they break down, losing their functional groups and, gradually, their hydrogen atoms. The degraded product is known as "kerogen". With further diagenesis and metamorphism, hydrogen atoms continue to be lost and much of the remnant carbonaceous material becomes graphitised, although not all of these macromolecules end up as graphite. We have traced fossilised polymers and biofilms in rocks from throughout Earth's history, the oldest being 3.5 b.y. old. Furthermore, Time of Flight Secondary Ion Mass Spectrometry has been able to identify individual macromolecules of bacterial origin, the identities of which are still being investigated, in all the samples

  7. The interprocess NIR sampling as an alternative approach to multivariate statistical process control for identifying sources of product-quality variability.

    PubMed

    Marković, Snežana; Kerč, Janez; Horvat, Matej

    2017-03-01

    We present a new approach to identifying sources of variability within a manufacturing process: NIR measurements of samples of intermediate material after each consecutive unit operation (the interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches lead to the same conclusion, identifying the parameters of extrusion/spheronization and the characteristics of lactose as having the greatest influence on the end-product's enteric coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially when historical process data are not readily available. In the presented case, changes in lactose characteristics influence the performance of the extrusion/spheronization process step. The pellet cores produced using one (less suitable) lactose source were on average larger and more fragile, leading to breakage of the cores during subsequent fluid-bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet coating process, which leads to a compromised film coating.
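
    As background, MSPC models of this kind are often built by fitting principal components to in-control reference batches and charting Hotelling's T² for new observations. The sketch below shows that generic construction in Python; the variable names and the F-distribution control limit are standard textbook choices, not the authors' exact model.

    ```python
    import numpy as np
    from scipy import stats

    def fit_mspc(X_ref, n_components=3):
        """Fit a PCA-based MSPC model on in-control reference data (rows = batches)."""
        mu, sd = X_ref.mean(0), X_ref.std(0, ddof=1)
        Z = (X_ref - mu) / sd
        _, s, Vt = np.linalg.svd(Z, full_matrices=False)
        P = Vt[:n_components].T                             # loadings
        lam = s[:n_components] ** 2 / (len(X_ref) - 1)      # score variances
        return mu, sd, P, lam

    def t2_statistic(x, mu, sd, P, lam):
        """Hotelling's T² of one new observation in the PCA score space."""
        t = P.T @ ((x - mu) / sd)
        return float(np.sum(t ** 2 / lam))

    def t2_limit(n_ref, n_components, alpha=0.05):
        """T² control limit for future observations, from the F distribution."""
        a, n = n_components, n_ref
        return a * (n - 1) * (n + 1) / (n * (n - a)) * stats.f.ppf(1 - alpha, a, n - a)
    ```

    A new batch whose T² exceeds the limit is flagged, and the contributions of individual NIR variables to T² point at the unit operation where the variability enters.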

  8. Cascading failure and robustness in metabolic networks.

    PubMed

    Smart, Ashley G; Amaral, Luis A N; Ottino, Julio M

    2008-09-09

    We investigate the relationship between structure and robustness in the metabolic networks of Escherichia coli, Methanosarcina barkeri, Staphylococcus aureus, and Saccharomyces cerevisiae, using a cascading failure model based on a topological flux balance criterion. We find that, compared to appropriate null models, the metabolic networks are exceptionally robust. Furthermore, by decomposing each network into rigid clusters and branched metabolites, we demonstrate that the enhanced robustness is related to the organization of branched metabolites, as rigid cluster formations in the metabolic networks appear to be consistent with null model behavior. Finally, we show that cascading in the metabolic networks can be described as a percolation process.
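
    A toy illustration of the cascading idea, under the simplifying assumption that a reaction fails when any of its substrate metabolites loses all producing reactions (a crude stand-in for the paper's topological flux-balance criterion; the network below is hypothetical):

    ```python
    def cascade(reactions, knocked_out):
        """Propagate failures through a toy metabolic network.

        reactions: dict mapping reaction name -> (substrate set, product set)
        knocked_out: set of initially failed reactions
        Returns the set of reactions dead once the cascade settles.
        """
        dead = set(knocked_out)
        changed = True
        while changed:
            changed = False
            producers = set()
            for r, (subs, prods) in reactions.items():
                if r not in dead:
                    producers |= prods          # metabolites still being made
            for r, (subs, prods) in reactions.items():
                if r not in dead and not subs <= producers:
                    dead.add(r)                 # a substrate is no longer produced
                    changed = True
        return dead

    rxns = {"r1": (set(), {"A"}), "r2": ({"A"}, {"B"}), "r3": ({"B"}, {"C"})}
    print(cascade(rxns, {"r1"}))                # {'r1', 'r2', 'r3'}: full cascade
    ```

    Comparing cascade sizes on the real network against degree-preserving null models is then a direct way to quantify the "exceptionally robust" claim.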

  9. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.

  10. An audit of the processes involved in identifying and assessing bilingual learners suspected of being dyslexic: a Scottish study.

    PubMed

    Deponio, P; Landon, J; Mullin, K; Reid, G

    2000-01-01

    The Commission for Racial Equality (Special Educational Needs Assessment in Strathclyde: Report of a Formal Investigation, CRE, London, 1996) highlighted the significant under-representation of bilingual children among pupils assessed as having specific learning difficulties/dyslexia. In this present study an audit was undertaken in order to explore issues arising from the Commission's report, initially using 53 schools from one education authority. This revealed an extremely low incidence of suspected dyslexia among bilingual pupils. A second study was carried out in a further nine education authorities, surveying 91 schools with bilingual pupils. The incidence of suspected dyslexia in bilingual pupils was found to be extremely low. Twenty-seven cases were examined. Most cases concerned pupils aged 7:0-9:0. Difficulties associated with conventional indicators of dyslexia are discussed. A wide variety of assessment approaches were reported and the use of first language (L1) assessment varied. The process of assessment tended to be lengthy and inconclusive. However, this report suggests that caution is necessary when considering dyslexia in the early stages of second language (L2) development.

  11. Identifying a Network of Brain Regions Involved in Aversion-Related Processing: A Cross-Species Translational Investigation

    PubMed Central

    Hayes, Dave J.; Northoff, Georg

    2011-01-01

    The ability to detect and respond appropriately to aversive stimuli is essential for all organisms, from fruit flies to humans. This suggests the existence of a core neural network which mediates aversion-related processing. Human imaging studies on aversion have highlighted the involvement of various cortical regions, such as the prefrontal cortex, while animal studies have focused largely on subcortical regions like the periaqueductal gray and hypothalamus. However, whether and how these regions form a core neural network of aversion remains unclear. To help determine this, a translational cross-species investigation in humans (i.e., meta-analysis) and other animals (i.e., systematic review of functional neuroanatomy) was performed. Our results highlighted the recruitment of the anterior cingulate cortex, the anterior insula, and the amygdala as well as other subcortical (e.g., thalamus, midbrain) and cortical (e.g., orbitofrontal) regions in both animals and humans. Importantly, involvement of these regions remained independent of sensory modality. This study provides evidence for a core neural network mediating aversion in both animals and humans. This not only contributes to our understanding of the trans-species neural correlates of aversion but may also carry important implications for psychiatric disorders where abnormal aversive behavior can often be observed. PMID:22102836

  12. Significance of silica in identifying the processes affecting groundwater chemistry in parts of Kali watershed, Central Ganga Plain, India

    NASA Astrophysics Data System (ADS)

    Khan, Arina; Umar, Rashid; Khan, Haris Hasan

    2015-03-01

    Chemical geothermometry using silica was employed in the present study to estimate the sub-surface groundwater temperature and the corresponding depth of the groundwater in parts of the Kali watershed in Bulandshahr and Aligarh districts. Forty-two groundwater samples were collected from borewells in each of the pre-monsoon and post-monsoon seasons of 2012 and analysed for all major ions and silica. Silica values in the area range from 18.72 to 50.64 mg/l in May 2012 and from 18.89 to 52.23 mg/l in November 2012. Chalcedony temperatures >60 °C were deduced for five different locations in each season, which corresponds to a depth of more than 1,000 metres. The spatial variation of silica shows high values along a considerable stretch of the River Kali during the pre-monsoon season. The relationship of silica with total dissolved solids and chloride was used to infer the role of geogenic and anthropogenic processes in solute acquisition. It was found that both water-rock interaction and anthropogenic influences are responsible for the observed water chemistry.
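
    For reference, the chalcedony geothermometer is commonly written with Fournier-style empirical coefficients, where S is dissolved silica in mg/l (the exact calibration used by the authors is not stated in the abstract):

    ```latex
    T(^{\circ}\mathrm{C}) \;=\; \frac{1032}{4.69 - \log_{10} S} \;-\; 273.15
    ```

    With S ≈ 40 mg/l this gives T ≈ 61 °C, so the higher silica values reported above plausibly yield the >60 °C chalcedony temperatures.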

  13. A High-Resolution Dynamic Approach to Identifying and Characterizing Slow Slip and Subduction Locking Processes in Cascadia

    NASA Astrophysics Data System (ADS)

    Dimitrova, L. L.; Haines, A. J.; Wallace, L. M.; Bartlow, N. M.

    2014-12-01

    Slow slip events (SSEs) in Cascadia occur at ~30-50 km depth, every 10-19 months, and typically involve slip of a few cm, producing surface displacements on the order of a few mm up to ~1 cm. There is a well-known association between tremor and SSEs; however, there are more frequent tremor episodes that are not clearly associated with geodetically detected SSEs (Wech and Creager 2011). This motivates the question: Are there smaller SSE signals that we are currently not recognizing geodetically? Most existing methods to investigate transient deformation with continuous GPS (cGPS) data employ kinematic, smoothed approaches to fit the cGPS data, limiting SSE identification and characterization. Recently, Haines et al. (submitted) showed that Vertical Derivatives of Horizontal Stress (VDoHS) rates, calculated from GPS data by solving the force balance equations at the Earth's surface, represent the most inclusive and spatially compact surface expressions of subsurface deformation sources: VDoHS rate vectors are tightly localized above the sources and point in the direction of push or pull. We adapt this approach, previously applied to campaign GPS data in New Zealand (e.g., Dimitrova et al. 2013), to daily cGPS time series from Cascadia and compare our results with those from the Network Inversion Filter (NIF) for 2009 (Bartlow et al. 2011). In both NIF and VDoHS rate inversions, the main 2009 SSE pulse reaches a peak slip value and splits into northern and southern sections. However, our inversion shows that the SSE started prior to July 27-28, compared to August 6-7 from the NIF results. Furthermore, we detect a smaller (~1 mm surface displacement) event from June 29-July 7 in southern Cascadia, which had not been identified previously. VDoHS rates also reveal the boundaries between the locked and unlocked portions of the megathrust, and we can track how this varies throughout the SSE cycle. Above the locked interface, the pull of the subducted plate generates shear

  14. Identifying the Parent Body of the Tagish Lake Meteorite and Characterizing its Internal Heating History and Surface Processes

    NASA Technical Reports Server (NTRS)

    Hiroi, Takahiro

    2004-01-01

    This short (1-year) funded research encompassed laboratory measurements of Tagish Lake meteorite samples, experiments of simulated space weathering on them, and comparison with D, T, and P asteroids in reflectance spectrum. In spite of its limited funding and period, we performed these experiments here at Brown University and at the University of Tokyo. Some of the major results were reported at the Lunar and Planetary Science Conference held in Houston in March 2004. The Tagish Lake meteorite shows a unique visible reflectance spectrum identical to that of the D and T type asteroids. After the present heating experiments at even the lowest temperature of 100 C, the characteristic spectral slope of the Tagish Lake meteorite sample increased. On the other hand, after irradiating its pellet sample with a pulse laser, the slope decreased. As a result, the Tagish Lake meteorite and its processed samples have come to cover a wide range of visible reflectance spectral slopes, from the C-type asteroids to some extreme T/D-type asteroids, including the P-type asteroids in between. Therefore, logically speaking, our initial affirmation that the Tagish Lake meteorite must have come from one of the D-type asteroids could be wrong if such a meteoritic material is hidden under a space-weathered surface regolith of a C-type asteroid. However, such a case is, in general, improbable. Other major results of this research include the first spectral fitting of the P-type asteroids using reflectance spectra derived from the present research. This topic needs more experiments and analysis to be addressed uniquely, and thus further efforts will be proposed.

  15. Quantitative image analysis as a diagnostic tool for identifying structural changes during a revival process of anaerobic granular sludge.

    PubMed

    Abreu, A A; Costa, J C; Araya-Kroff, P; Ferreira, E C; Alves, M M

    2007-04-01

    Due to unspecified operational problems, the specific acetoclastic activity (SAA) of the anaerobic granular sludge present in an industrial UASB reactor was considerably damaged (from 250 to less than 10 mL CH4@STP/gVSS.d), significantly reducing the biogas production of that industrial unit. The hydrogenotrophic methanogenic activity exhibited a value of 600 mL CH4@STP/gVSS.d, the settling velocity was 31.4+/-9.8 m/h, the average equivalent diameter was 0.92+/-0.43 mm, and about 70% of the VSS were structured in aggregates larger than 1 mm. In order to study the recovery of the SAA, this sludge was collected and inoculated in a lab-scale expanded granular sludge blanket (EGSB) reactor. Ethanol was fed as the sole carbon source during a trial period of 106 days. Process monitoring included COD removal efficiency, methane production, and periodic determination of the specific methanogenic activity in the presence of acetate, propionate, butyrate, ethanol and H2/CO2. Quantitative image analysis allowed for information to be obtained on granular fragmentation/erosion and filaments release. During the first operational period, biogas production was mainly due to the hydrogenotrophic activity. However, after 40 days, the SAA steadily increased, achieving a maximum value of 183+/-13 mL CH4@STP/gVSS.d. The onset of SAA recovery, granules breakdown and filaments release to the bulk occurred simultaneously. Further increase in SAA was accompanied by granular growth. In the last 25 days of operation, the size distribution was stable with more than 80% of projected area of aggregates corresponding to granules larger than 1 mm (equivalent diameter). Confocal images from FISH hybridized sections of the granules showed that after SAA recovery, the granules developed an organized structure where an acidogenic/acetogenic external layer was apparent. Granular fragmentation and increase of filaments in the bulk, simultaneously with the increase in the acetoclastic activity are
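
    The equivalent diameter quoted here is, by the usual image-analysis convention, the diameter of the circle whose area equals the measured projected area A of the aggregate (assumed; the paper's exact shape descriptor may differ):

    ```latex
    d_{\mathrm{eq}} = 2\sqrt{A/\pi}
    ```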

  16. A systematic study of process windows and MEF for line end shortening under various photo conditions for more effective and robust OPC correction

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Zhu, Jun; Wu, Peng; Jiang, Yuntao

    2006-03-01

    Line end shortening (LES) is a classical phenomenon in photolithography, primarily caused by the finite resolution of the optics at the position of the line ends. The shortening varies from a few tens of nanometers for processes with a k1 around 0.5 to as much as 100 nanometers for advanced processes with more aggressive k1 numbers. Besides illumination, effective resist diffusion has been found to worsen the situation. The effective diffusion length for a typical chemically amplified resist, which has been demonstrated to be critical to the performance of the photolithographic process, can be as much as 30 to 60 nm, and has been found to generate some extra 30 nm of LES. Experiments have indicated that wider lines suffer less LES. However, under certain CD through-pitch conditions, when the lines or spaces are very wide, the opposing line ends may even merge. Currently, two methods have been widely used to improve the situation. One is to extend the line ends on the mask, moving them closer toward each other to compensate for the shortening. However, for a more conservatively defined minimum external separation rule, this method by itself may not fully offset the LES, because there is a limit to how close line ends can be drawn on the mask: beyond it, any perturbation of the mask CD may cause the line ends to merge on the wafer. The other is to add hammerheads, i.e., wider line endings. This is equivalent to effectively wider line ends, which show less shortening and can also live with a rather conservative minimum external separation. But some designs leave no room for this, e.g., when the line ends are sandwiched between dense lines at minimum ground rules. Therefore, to best minimize the effect of LES, or to completely characterize it, one needs to study both the process window and the mask error factor (MEF) under a variety
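
    For reference, the mask error factor mentioned here is conventionally the sensitivity of the wafer CD to the mask CD (mask dimensions scaled to wafer scale):

    ```latex
    \mathrm{MEF} \;=\; \frac{\partial\, CD_{\mathrm{wafer}}}{\partial\, CD_{\mathrm{mask}}}
    ```

    An MEF well above 1 near line ends at aggressive k1 means that small mask-CD perturbations are amplified on the wafer, which is why extending line ends against a conservative minimum external separation rule can still lead to merging.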

  17. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
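
    A minimal sketch of the length-dependent link-failure model on a random geometric graph, assuming networkx; the failure constant beta and the graph parameters are illustrative, not values from the paper.

    ```python
    import random
    import networkx as nx

    def fail_links_by_length(G, beta=1.0, seed=0):
        """Remove each edge independently with probability proportional to its length."""
        rng = random.Random(seed)
        H = G.copy()
        for u, v in list(H.edges()):
            (x1, y1), (x2, y2) = H.nodes[u]["pos"], H.nodes[v]["pos"]
            length = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            if rng.random() < min(1.0, beta * length):
                H.remove_edge(u, v)
        return H

    G = nx.random_geometric_graph(50, radius=0.25, seed=1)  # nodes carry a 'pos' attribute
    H = fail_links_by_length(G, beta=2.0)
    largest = max(nx.connected_components(H), key=len)
    print(len(largest) / G.number_of_nodes())               # surviving giant-component fraction
    ```

    Sweeping beta and recording the giant-component fraction reproduces the kind of percolation experiment described above, with longer links failing first.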

  18. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

    …operation is preserved. Although some papers (Refs 6 and 13) deal with robustness only in regard to parameter variations within the basic controlled system… since these can often be neglected in actual implementation, a constant-gain time-invariant solution with…

  19. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.

  20. Robust calibration of a global aerosol model

    NASA Astrophysics Data System (ADS)

    Lee, L.; Carslaw, K. S.; Pringle, K. J.; Reddington, C.

    2013-12-01

    Comparison of models and observations is vital for evaluating how well computer models can simulate real world processes. However, many current methods are lacking in their assessment of the model uncertainty, which introduces questions regarding the robustness of the observationally constrained model. In most cases, models are evaluated against observations using a single baseline simulation considered to represent the models' best estimate. The model is then improved in some way so that its comparison to observations is improved. Continuous adjustments in such a way may result in a model that compares better to observations but there may be many compensating features which make prediction with the newly calibrated model difficult to justify. There may also be some model outputs whose comparison to observations becomes worse in some regions/seasons as others improve. In such cases calibration cannot be considered robust. We present details of the calibration of a global aerosol model, GLOMAP, in which we consider not just a single model setup but a perturbed physics ensemble with 28 uncertain parameters. We first quantify the uncertainty in various model outputs (CCN, CN) for the year 2008 and use statistical emulation to identify which of the 28 parameters contribute most to this uncertainty. We then compare the emulated model simulations in the entire parametric uncertainty space to observations. Regions where the entire ensemble lies outside the error of the observations indicate structural model error or gaps in current knowledge which allows us to target future research areas. Where there is some agreement with the observations we use the information on the sources of the model uncertainty to identify geographical regions in which the important parameters are similar. Identification of regional calibration clusters helps us to use information from observation rich regions to calibrate regions with sparse observations and allow us to make recommendations for
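
    Statistical emulation of this kind is often done with Gaussian process regression over the perturbed-parameter space. The sketch below uses scikit-learn and a synthetic stand-in for one scalar model output; GLOMAP's actual 28-parameter design and outputs are not reproduced here.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 28))   # ensemble design: 200 runs x 28 parameters
    y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(200)  # stand-in output

    # one anisotropic length scale per parameter, so the fitted kernel itself
    # indicates which parameters the output is sensitive to
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(28))
    emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # the cheap emulator now stands in for the model everywhere in parameter space
    mean, sd = emulator.predict(rng.uniform(size=(5, 28)), return_std=True)
    print(mean, sd)
    ```

    Comparing the emulated output range, rather than a single baseline run, against observations is what allows regions of structural model error to be separated from parametric uncertainty.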

  1. Identifying Anomality in Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Zhang, Y.

    2014-12-01

    Safety, risk and economic analyses of engineering constructions such as storm sewers, street and urban drainage, and channel design are sensitive to precipitation storm properties. Whether precipitation storm properties exhibit normal or anomalous characteristics remains obscure. In this study, we will decompose a precipitation time series into sequences of average storm intensity, storm duration and interstorm period to examine whether these sequences can be treated as a realization of a continuous time random walk with both "waiting times" (interstorm periods) and "jump sizes" (average storm intensity and storm duration). Starting from this viewpoint, we will analyze the statistics of storm duration, interstorm period, and average storm intensity in four regions in the southwestern United States. We will examine whether the probability distributions are temporally and spatially dependent. Finally, we will use a fractional engine to capture the randomness in precipitation storms.
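
    The decomposition described above can be sketched directly: threshold the series into wet and dry spells, then record each storm's duration, mean intensity, and the following interstorm period. The wet/dry threshold below is an assumption of this sketch.

    ```python
    import numpy as np

    def decompose_storms(rain, wet_threshold=0.1):
        """Split a rainfall series into (storm duration, mean intensity, interstorm period).

        rain: 1-D array of precipitation depths per time step
        wet_threshold: depth above which a time step counts as wet
        """
        wet = rain > wet_threshold
        events, i, n = [], 0, len(rain)
        while i < n:
            if not wet[i]:
                i += 1
                continue
            j = i
            while j < n and wet[j]:
                j += 1                           # storm occupies rain[i:j]
            k = j
            while k < n and not wet[k]:
                k += 1                           # dry spell occupies rain[j:k]
            events.append((j - i, float(rain[i:j].mean()), k - j))
            i = k
        return events
    ```

    The resulting "jump sizes" (intensity, duration) and "waiting times" (interstorm periods) can then be tested for heavy-tailed, anomalous behaviour, as in the continuous time random walk picture.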

  2. Effects-Based Design of Robust Organizations

    DTIC Science & Technology

    2004-06-01

    …in turn, are used to synthesize a robust organizational structure. Keywords: organizational design, Markov decision processes (MDP), reinforcement learning, Monte Carlo control methods, mixed integer optimization. …The approach is based on MDP, Monte Carlo control methods, reinforcement learning, and mixed integer optimization techniques. In section III, we formulate the dynamic…

  3. Comparing dependent robust correlations.

    PubMed

    Wilcox, Rand R

    2016-11-01

    Let r1 and r2 be two dependent estimates of Pearson's correlation. There is a substantial literature on testing H0: ρ1 = ρ2, the hypothesis that the population correlation coefficients are equal. However, it is well known that Pearson's correlation is not robust. Even a single outlier can have a substantial impact on Pearson's correlation, resulting in a misleading understanding of the strength of the association among the bulk of the points. A way of mitigating this concern is to use a correlation coefficient that guards against outliers, many of which have been proposed. But apparently there are no results on how to compare dependent robust correlation coefficients when there is heteroscedasticity. Extant results suggest that a basic percentile bootstrap will perform reasonably well. This paper reports simulation results indicating the extent to which this is true when using Spearman's rho, a Winsorized correlation or a skipped correlation.
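
    A minimal sketch of the basic percentile bootstrap for this problem, in Python, for the overlapping case where both correlations share a common variable y; resampling rows jointly preserves the dependence between the two estimates. Spearman's rho is used here, and the same loop works for Winsorized or skipped correlations.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def boot_compare_rho(x1, x2, y, n_boot=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI for rho1 - rho2 with dependent estimates,
        where rho1 = cor(x1, y) and rho2 = cor(x2, y) on the same subjects."""
        rng = np.random.default_rng(seed)
        n = len(y)
        diffs = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)          # resample subjects, keeping rows intact
            diffs[b] = spearmanr(x1[idx], y[idx])[0] - spearmanr(x2[idx], y[idx])[0]
        lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
        return lo, hi                            # reject H0: rho1 == rho2 if 0 lies outside
    ```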

  4. Robustness in bacterial chemotaxis

    NASA Astrophysics Data System (ADS)

    Alon, U.; Surette, M. G.; Barkai, N.; Leibler, S.

    1999-01-01

    Networks of interacting proteins orchestrate the responses of living cells to a variety of external stimuli, but how sensitive is the functioning of these protein networks to variations in their biochemical parameters? One possibility is that to achieve appropriate function, the reaction rate constants and enzyme concentrations need to be adjusted in a precise manner, and any deviation from these 'fine-tuned' values ruins the network's performance. An alternative possibility is that key properties of biochemical networks are robust; that is, they are insensitive to the precise values of the biochemical parameters. Here we address this issue in experiments using chemotaxis of Escherichia coli, one of the best-characterized sensory systems. We focus on how response and adaptation to attractant signals vary with systematic changes in the intracellular concentration of the components of the chemotaxis network. We find that some properties, such as steady-state behaviour and adaptation time, show strong variations in response to varying protein concentrations. In contrast, the precision of adaptation is robust and does not vary with the protein concentrations. This is consistent with a recently proposed molecular mechanism for exact adaptation, where robustness is a direct consequence of the network's architecture.

  5. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.

  6. Robust watermark technique using masking and Hermite transform.

    PubMed

    Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo

    2016-01-01

    This paper evaluates a watermarking algorithm designed for digital images that uses a perceptive mask and a normalization process, thus preventing detection by the human eye, as well as ensuring robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on the derivatives of Gaussian functions. The embedded watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following measures were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. These allowed us to identify how many bits of the watermark can be modified while still permitting adequate extraction.

  7. The Maternal-to-Zygotic Transition Targets Actin to Promote Robustness during Morphogenesis

    PubMed Central

    Zheng, Liuliu; Sepúlveda, Leonardo A.; Lua, Rhonald C.; Lichtarge, Olivier; Golding, Ido; Sokac, Anna Marie

    2013-01-01

    Robustness is a property built into biological systems to ensure stereotypical outcomes despite fluctuating inputs from gene dosage, biochemical noise, and the environment. During development, robustness safeguards embryos against structural and functional defects. Yet, our understanding of how robustness is achieved in embryos is limited. While much attention has been paid to the role of gene and signaling networks in promoting robust cell fate determination, little has been done to rigorously assay how mechanical processes like morphogenesis are designed to buffer against variable conditions. Here we show that the cell shape changes that drive morphogenesis can be made robust by mechanisms targeting the actin cytoskeleton. We identified two novel members of the Vinculin/α-Catenin Superfamily that work together to promote robustness during Drosophila cellularization, the dramatic tissue-building event that generates the primary epithelium of the embryo. We find that zygotically-expressed Serendipity-α (Sry-α) and maternally-loaded Spitting Image (Spt) share a redundant, actin-regulating activity during cellularization. Spt alone is sufficient for cellularization at an optimal temperature, but both Spt plus Sry-α are required at high temperature and when actin assembly is compromised by genetic perturbation. Our results offer a clear example of how the maternal and zygotic genomes interact to promote the robustness of early developmental events. Specifically, the Spt and Sry-α collaboration is informative when it comes to genes that show both a maternal and zygotic requirement during a given morphogenetic process. For the cellularization of Drosophilids, Sry-α and its expression profile may represent a genetic adaptive trait with the sole purpose of making this extreme event more reliable. Since all morphogenesis depends on cytoskeletal remodeling, both in embryos and adults, we suggest that robustness-promoting mechanisms aimed at actin could be effective at

  8. Cluster analysis of Plasmodium RNA-seq time-course data identifies stage-specific co-regulated biological processes and regulatory elements

    PubMed Central

    Oyelade, Jelili; Adebiyi, Ezekiel

    2016-01-01

    In this study, we interpreted RNA-seq time-course data of three developmental stages of Plasmodium species by clustering genes based on similarities in their expression profiles without prior knowledge of gene function. Functional enrichment of clusters of upregulated genes at specific time-points reveals potential targetable biological processes with information on their timings. We identified common consensus sequences that these clusters share as potential points of coordinated transcriptional control. Five cluster groups showed upregulated profile patterns of biological interest. These included two clusters from the Intraerythrocytic Developmental Cycle (IDC; cluster 4 = 16 genes, and cluster 9 = 32 genes), one from the sexual development stage (cluster 2 = 851 genes), and two from the gamete-fertilization stage in the mosquito host (cluster 4 = 153 genes, and cluster 9 = 258 genes). The IDC expressed the fewest genes, with only 1448 of the 5020 genes (~29%) in the experiment showing any significant activity. Gene ontology (GO) enrichment analysis of these clusters revealed a total of 671 uncharacterized genes implicated in 14 biological processes and components associated with these stages, some of which are currently being investigated as drug targets in on-going research. Five putative transcription regulatory binding motifs shared by members of each cluster were also identified, one of which had also been identified in a previous study by separate researchers. Our study shows stage-specific genes and biological processes that may be important in antimalarial drug research efforts. In addition, timed, coordinated control of separate processes may explain the paucity of transcription factors in these parasites. PMID:27990252
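
    A generic version of this kind of expression-profile clustering, using z-scored time courses and k-means from scikit-learn; the cluster count and the synthetic data below are illustrative, not the study's pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    expr = rng.gamma(2.0, 1.0, size=(500, 8))    # stand-in: genes x time points

    # z-score each gene so clusters group profile *shapes* rather than magnitudes
    z = (expr - expr.mean(1, keepdims=True)) / (expr.std(1, keepdims=True) + 1e-9)

    labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(z)
    for c in range(10):
        members = np.where(labels == c)[0]
        # each cluster would then be tested for GO enrichment and scanned for
        # shared upstream sequence motifs, as in the study
        print(c, len(members))
    ```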

  9. A risk-based approach for identifying constituents of concern in oil sands process-affected water from the Athabasca Oil Sands region.

    PubMed

    McQueen, Andrew D; Kinley, Ciera M; Hendrikse, Maas; Gaspari, Daniel P; Calomeni, Alyssa J; Iwinski, Kyla J; Castle, James W; Haakensen, Monique C; Peru, Kerry M; Headley, John V; Rodgers, John H

    2017-04-01

    Mining leases in the Athabasca Oil Sands (AOS) region produce large volumes of oil sands process-affected water (OSPW) containing constituents that limit beneficial uses and discharge into receiving systems. The aim of this research is to identify constituents of concern (COCs) in OSPW sourced from an active settling basin, with the goal of providing a sound rationale for developing constructed-treatment-wetland mitigation strategies for COCs contained in OSPW. COCs were identified through several lines of evidence: 1) chemical and physical characterization of OSPW and comparisons with numeric water quality guidelines and toxicity endpoints, 2) measuring toxicity of OSPW using a taxonomic range of sentinel organisms (i.e. fish, aquatic invertebrates, and a macrophyte), 3) conducting process-based manipulations (PBMs) of OSPW to alter toxicity and inform treatment processes, and 4) discerning potential treatment pathways to mitigate ecological risks of OSPW based on identification of COCs, toxicological analyses, and PBM results. COCs identified in OSPW included organics (naphthenic acids [NAs], oil and grease [O/G]), metals/metalloids, and suspended solids. In terms of species sensitivities to undiluted OSPW, fish ≥ aquatic invertebrates > macrophytes. Bench-scale manipulations of the organic fractions of OSPW via PBMs (i.e. H2O2+UV254 and granular activated charcoal treatments) eliminated toxicity to Ceriodaphnia dubia (7-8 d), in terms of mortality and reproduction. Results from this study provide critical information to inform mitigation strategies using passive or semi-passive treatment processes (e.g., constructed treatment wetlands) to mitigate ecological risks of OSPW to aquatic organisms.

  10. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates each tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
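
    A common form of the watershed-based approach segments crowns from a canopy height model (CHM): treetops are local maxima and crowns are grown by flooding the inverted CHM. A sketch with scikit-image, where the height and distance thresholds are illustrative:

    ```python
    import numpy as np
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def watershed_crowns(chm, min_height=2.0, min_distance=5):
        """Segment tree crowns from a canopy height model (CHM) raster."""
        mask = chm > min_height                          # drop ground and low vegetation
        tops = peak_local_max(chm, min_distance=min_distance, labels=mask.astype(int))
        markers = np.zeros(chm.shape, dtype=int)
        markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)   # one marker per treetop
        return watershed(-chm, markers=markers, mask=mask)     # integer label per crown
    ```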

  11. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  12. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these are the most important features of complexity, not accidents of evolution or artifices of engineering design, but inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  13. Robustness of Cantor diffractals.

    PubMed

    Verma, Rupesh; Sharma, Manoj Kumar; Banerjee, Varsha; Senthilkumaran, Paramasivam

    2013-04-08

    Diffractals are electromagnetic waves diffracted by a fractal aperture. In an earlier paper, we reported an important property of Cantor diffractals, that of redundancy [R. Verma et al., Opt. Express 20, 8250 (2012)]. In this paper, we report another important property, that of robustness. The question we address is: how much disorder in the Cantor grating can diffractals accommodate while continuing to faithfully yield its fractal dimension and generator? The answer is of consequence in a number of physical problems involving fractal architecture.

  14. Signal Processing for Robust Speech Recognition

    DTIC Science & Technology

    1994-01-01

    …identity. Since this phoneme-based approach relies on information from the acoustic-phonetic and language models to determine the compensation vectors, it…

  15. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    The reliability is a value of the degree of trust in a given measurement. We analyze and compare: ML (classical maximum likelihood), MLE (maximum likelihood weighted by entropy), MLR (maximum likelihood weighted by reliability), MLRE (maximum likelihood weighted by reliability and entropy), DS (credibility plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process. It is composed of three sensors, each of which has its own discriminatory capacity, reliability rate, unknown bias and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type of sensor delivers masses on unions of elementary hypotheses (DS masses). In the second case probabilistic reasoning leads to sharing the mass abusively between elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR and MLRE reveal very good performance in all experiments (more than an 80% correct classification rate). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators reveal good robustness, but the MLR proves to be uniformly dominant across all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
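
    One plausible reading of "maximum likelihood weighted by reliability" is log-linear pooling, in which each sensor's probability vector enters the fused result raised to the power of its reliability; the sketch below makes that assumption explicit and is not necessarily the authors' exact operator.

    ```python
    import numpy as np

    def fuse_mlr(sensor_probs, reliabilities):
        """Fuse per-sensor class probabilities with reliability exponents.

        sensor_probs: (n_sensors, n_classes) rows of class probabilities
        reliabilities: (n_sensors,) degree of trust in each sensor, in [0, 1]
        """
        logp = np.log(np.asarray(sensor_probs) + 1e-12)
        fused = np.exp(np.asarray(reliabilities) @ logp)   # reliability-weighted product
        return fused / fused.sum()

    # one reliable sensor favouring class 0 outweighs two unreliable dissenters
    print(fuse_mlr([[0.8, 0.2], [0.4, 0.6], [0.45, 0.55]], [0.9, 0.2, 0.2]))
    ```

    A sensor with reliability 0 drops out of the product entirely, while a fully trusted sensor contributes its unmodified likelihood, which matches the intuition behind the reliability weighting compared above.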

  16. Bayesian robust principal component analysis.

    PubMed

    Ding, Xinghao; He, Lihan; Carin, Lawrence

    2011-12-01

    A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
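
    For orientation, the optimization-based baseline mentioned at the end (principal component pursuit: minimize ||L||_* + λ||S||_1 subject to L + S = M) can be sketched with a standard inexact augmented-Lagrangian loop; the Bayesian model of the paper is different in kind, and the initialization heuristics below are conventional choices.

    ```python
    import numpy as np

    def rpca_pcp(M, n_iter=200, tol=1e-7):
        """Decompose M into low-rank L plus sparse S by principal component pursuit."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))                     # standard sparsity weight
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)      # penalty parameter heuristic
        L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            # low-rank update: singular value thresholding
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # sparse update: elementwise soft thresholding
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            Y += mu * (M - L - S)
            if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                break
        return L, S
    ```

    In the video setting described above, the rows of L capture the slowly varying background while S collects the sparse moving-object pixels; the Bayesian model additionally infers noise statistics and Markov structure across frames.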

  17. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  18. Invariants reveal multiple forms of robustness in bifunctional enzyme systems.

    PubMed

    Dexter, Joseph P; Dasgupta, Tathagata; Gunawardena, Jeremy

    2015-08-01

    Experimental and theoretical studies have suggested that bifunctional enzymes catalyzing opposing modification and demodification reactions can confer steady-state concentration robustness to their substrates. However, the types of robustness and the biochemical basis for them have remained elusive. Here we report a systematic study of the most general biochemical reaction network for a bifunctional enzyme acting on a substrate with one modification site, along with eleven sub-networks with more specialized biochemical assumptions. We exploit ideas from computational algebraic geometry, introduced in previous work, to find a polynomial expression (an invariant) between the steady state concentrations of the modified and unmodified substrate for each network. We use these invariants to identify five classes of robust behavior: robust upper bounds on concentration, robust two-sided bounds on concentration ratio, hybrid robustness, absolute concentration robustness (ACR), and robust concentration ratio. This analysis demonstrates that robustness can take a variety of forms and that the type of robustness is sensitive to many biochemical details, with small changes in biochemistry leading to very different steady-state behaviors. In particular, we find that the widely-studied ACR requires highly specialized assumptions in addition to bifunctionality. An unexpected result is that the robust bounds derived from invariants are strictly tighter than those derived by ad hoc manipulation of the underlying differential equations, confirming the value of invariants as a tool to gain insight into biochemical reaction networks. Furthermore, invariants yield multiple experimentally testable predictions and illuminate new strategies for inferring enzymatic mechanisms from steady-state measurements.

  19. Identifying Hazards

    EPA Pesticide Factsheets

    The federal government has established a system of labeling hazardous materials to help identify the type of material and threat posed. Summaries of information on over 300 chemicals are maintained in the Envirofacts Master Chemical Integrator.

  20. A model to assess the Mars Telecommunications Network relay robustness

    NASA Technical Reports Server (NTRS)

    Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.

    2005-01-01

    The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.

  1. On the robustness of Herlihy's hierarchy

    NASA Technical Reports Server (NTRS)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in Z+ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  2. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R

  3. Horse metabolism and the photocatalytic process as a tool to identify metabolic products formed from dopant substances: the case of sildenafil.

    PubMed

    Medana, Claudio; Calza, Paola; Giancotti, Valeria; Dal Bello, Federica; Pasello, Emanuela; Montana, Marco; Baiocchi, Claudio

    2011-10-01

    Two horses were treated with sildenafil, and its metabolic products were sought in both urine and plasma samples. Prior to this, a simulative laboratory study using a photocatalytic process had been carried out to identify all possible main and secondary transformation products in a clean matrix; these were then sought in the biological samples. The transformation of sildenafil and the formation of intermediate products were evaluated adopting titanium dioxide as the photocatalyst. Several products were formed and characterized using the HPLC/HRMS(n) technique. The main intermediates identified under these experimental conditions were the same as the major sildenafil metabolites found in in vivo studies on rats and horses. Concerning horse metabolism, sildenafil and the demethylated product (UK 103,320) were quantified in blood samples. Sildenafil propyloxide and de-ethyl and demethyl sildenafil were the main metabolites quantified in urine. Some more oxidized species, already formed in the photocatalytic process, were also found in urine and plasma samples of treated animals. Their formation involved hydroxylation on the aromatic ring, combined oxidation and dihydroxylation, N-demethylation on the pyrazole ring, and hydroxylation. These new findings could be of interest in further metabolism studies.

  4. Footprint Reduction Process: Using Remote Sensing and GIS Technologies to Identify Non-Contaminated Land Parcels on the Oak Ridge Reservation National Priorities List Site

    SciTech Connect

    Halsey, P.A.; Kendall, D.T.; King, A.L.; Storms, R.A.

    1998-12-09

    In 1989, the Agency for Toxic Substances and Disease Registry evaluated the entire 35,000-acre U.S. Department of Energy (DOE) Oak Ridge Reservation (ORR, located in Oak Ridge, TN) and placed it on the National Priorities List (NPL), making the ORR subject to Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations. Although much of the ORR has not been impacted by previous federal activities, without investigation it is difficult to discern which parcels of land are free of surface contamination. In 1996, the DOE Oak Ridge Environmental Management Program (EM) funded the Footprint Reduction Project to: 1) develop a process to study the large areas of the ORR that are believed to be free of surface contamination and 2) initiate the delisting of the "clean" areas from the NPL. Although this project's goals do not include the transfer of federal property to non-federal entities, the process development team aimed to provide a final product with multiple uses. Therefore, the process was developed to meet the requirements of NPL delisting and the transfer of non-contaminated federal lands to future land users. Section 120(h) of the CERCLA law identifies the requirements for the transfer of federal property that is currently part of an NPL site. Reviews of historical information (including aerial photography), field inspections, and the recorded chain of title documents for the property are required for the delisting of property prior to transfer from the federal government. Despite the widespread availability of remote sensing and other digital geographic data and geographic information systems (GIS) for the analysis of such data, historical aerial photography is the only geographic data source required for review under the CERCLA 120(h) process. However, since the ORR Environmental Management Program had an established Remote Sensing Program, the Footprint Reduction Project included the development and application of a methodology

  5. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
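
    The output of such a protocol is typically the population remaining in the subspace of interest as a function of random-sequence length, with the leakage rate extracted from an exponential fit. A sketch of that fitting step, assuming the standard RB-style decay model p(m) = A + B·λ^m (the authors' estimator may differ in detail):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def leakage_decay_rate(seq_lengths, subspace_pops):
        """Fit p(m) = A + B * lam**m and return 1 - lam, the per-step decay rate."""
        model = lambda m, A, B, lam: A + B * lam ** m
        (A, B, lam), _ = curve_fit(model,
                                   np.asarray(seq_lengths, float),
                                   np.asarray(subspace_pops, float),
                                   p0=(0.5, 0.5, 0.99), bounds=(0.0, 1.0))
        return 1.0 - lam

    m = np.array([1, 5, 10, 20, 50, 100])
    p = 0.2 + 0.8 * 0.98 ** m                 # synthetic data with a 2% decay rate
    print(leakage_decay_rate(m, p))           # ~0.02
    ```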

  6. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  7. Robust Crossfeed Design for Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

    Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  8. Accuracy vs. Robustness: Bi-criteria Optimized Ensemble of Metamodels

    DTIC Science & Technology

    2014-12-01

    A bi-criteria (accuracy and robustness) ensemble optimization framework of three well-known metamodel techniques, namely Kriging (Matheron 1960), Support Vector Regression and Radial Basis Function, is proposed; uncertainties are modeled for evaluating robustness, and the framework optimally identifies the contributions from each metamodel. Twenty-eight functions from…

  9. Parameter identifiability and Extended Multiple Studies Analysis of a compartmental model for human vitamin A kinetics: fixing fractional transfer coefficients for the initial steps in the absorptive process.

    PubMed

    Park, Hyunjin; Green, Michael H

    2014-03-28

    In the existing compartmental models of human vitamin A metabolism, parameters related to the absorption of the isotopic oral dose have not been well identified. We hypothesised that fixing some poorly identified parameters related to vitamin A absorption would improve parameter identifiability and add statistical certainty to such models. In the present study, data for serum vitamin A kinetics in nine subjects given [2H8]retinyl acetate orally and a model with absorption fixed at 75 % were used to test this hypothesis. In addition to absorption efficiency, we fixed two other fractional transfer coefficients: one representing the initial processing of the ingested dose and the other representing the direct secretion of retinol bound to retinol-binding protein (RBP) from enterocytes into the plasma. The Windows version of Simulation, Analysis and Modeling software (WinSAAM) was used to fit serum tracer data v. time for each subject. Then, a population model was generated by WinSAAM's Extended Multiple Studies Analysis. All the parameters had fractional standard deviations < 0·5, and none of the pairs of parameters had a correlation coefficient >0·8 (accepted criteria for well-identified parameters). Similar to the values predicted by the original model, total traced mass for retinol was 1160 (sd 468) μmol, and the time for retinol to appear in the plasma bound to RBP was 31·3 (sd 4·4) h. In conclusion, we suggest that this approach holds promise for advancing compartmental modelling of vitamin A kinetics in humans when the dose must be administered orally.

  10. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  11. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  12. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  13. The Robust Beauty of Ordinary Information

    ERIC Educational Resources Information Center

    Katsikopoulos, Konstantinos V.; Schooler, Lael J.; Hertwig, Ralph

    2010-01-01

    Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues…

  14. Designing for Reliability and Robustness

    NASA Technical Reports Server (NTRS)

    Svetlik, Randall G.; Moore, Cherice; Williams, Antony

    2017-01-01

    Long duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on-board the International Space Station (ISS) to minimize bone and muscle loss, combating these effects. Given the importance of these hardware systems to the health of the crew, this equipment must continue to be readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven to be challenging throughout the time crewmembers have been living on the ISS, beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best practices have been leveraged in seeking to maximize the availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of the hardware systems and therefore have contributed to keeping crewmembers healthy upon return to Earth. This paper reviews the failure history of three spaceflight exercise countermeasure systems, identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), the Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) are reviewed and analyzed, and conclusions identified to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path towards reduced system down-time.

  15. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size: combinatorial explosion in experimentation and model building with the number of variables, and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  16. Robust fuzzy logic stabilization with disturbance elimination.

    PubMed

    Danapalasingam, Kumeresan A

    2014-01-01

    A robust fuzzy logic controller is proposed for stabilization and disturbance rejection in nonlinear control systems of a particular type. The dynamic feedback controller is designed as a combination of a control law that compensates for nonlinear terms in a control system and a dynamic fuzzy logic controller that addresses unknown model uncertainties and an unmeasured disturbance. Since it is challenging to derive a highly accurate mathematical model, the proposed controller requires only nominal functions of a control system. In this paper, a mathematical derivation is carried out to prove that the controller is able to achieve asymptotic stability by processing state measurements. Robustness here refers to the ability of the controller to asymptotically steer the state vector towards the origin in the presence of model uncertainties and a disturbance input. Simulation results of the robust fuzzy logic controller application in a magnetic levitation system demonstrate the feasibility of the control design.

  17. Robust Fuzzy Logic Stabilization with Disturbance Elimination

    PubMed Central

    Danapalasingam, Kumeresan A.

    2014-01-01

    A robust fuzzy logic controller is proposed for stabilization and disturbance rejection in nonlinear control systems of a particular type. The dynamic feedback controller is designed as a combination of a control law that compensates for nonlinear terms in a control system and a dynamic fuzzy logic controller that addresses unknown model uncertainties and an unmeasured disturbance. Since it is challenging to derive a highly accurate mathematical model, the proposed controller requires only nominal functions of a control system. In this paper, a mathematical derivation is carried out to prove that the controller is able to achieve asymptotic stability by processing state measurements. Robustness here refers to the ability of the controller to asymptotically steer the state vector towards the origin in the presence of model uncertainties and a disturbance input. Simulation results of the robust fuzzy logic controller application in a magnetic levitation system demonstrate the feasibility of the control design. PMID:25177713

  18. Identifying outcome-based indicators and developing a curriculum for a continuing medical education programme on rational prescribing using a modified Delphi process

    PubMed Central

    Esmaily, Hamideh M; Savage, Carl; Vahidi, Rezagoli; Amini, Abolghasem; Zarrintan, Mohammad Hossein; Wahlstrom, Rolf

    2008-01-01

    Background Continuing medical education (CME) is compulsory for physicians in Iran. Recent studies in Iran show that modifications of CME elements are necessary to improve the effectiveness of the educational programmes. Other studies point to an inappropriate, even irrational drug prescribing. Based on a needs assessment study regarding CME for general physicians in the East Azerbaijan province in Iran, rational prescribing practice was recognized as a high priority issue. Considering different educational methods, outcome-based education has been proposed as a suitable approach for CME. The purpose of the study was to obtain experts' consensus about appropriate educational outcomes of rational prescribing for general physicians in CME and to develop curricular content for this education. Methods The study consisted of two phases: The first phase was conducted using a two-round Delphi consensus process to identify the outcome-based educational indicators regarding rational prescribing for general physicians in primary care (GPs). In the second phase the agreed indicators were submitted to panels of experts for assessment and determination of content for a CME program in the field. Results Twenty-one learning outcomes were identified through a modified Delphi process. The indicators were used by the panels of experts and six educational topics were determined for the CME programme and the curricular content of each was defined. The topics were 1) Principles of prescription writing, 2) Adverse drug reactions, 3) Drug interactions, 4) Injections, 5) Antibiotic therapy, and 6) Anti-inflammatory agents therapy. One of the topics was not directly related to any outcome, raising a question about the need for a discussion on constructive alignment. Conclusions Consensus on learning outcomes was achieved and an educational guideline was designed. Before suggesting widespread use in the country the educational package should be tested in the CME context. PMID:18510774

  19. Identifying fouling events in a membrane-based drinking water treatment process using principal component analysis of fluorescence excitation-emission matrices.

    PubMed

    Peiris, Ramila H; Hallé, Cynthia; Budman, Hector; Moresoli, Christine; Peldszus, Sigrid; Huck, Peter M; Legge, Raymond L

    2010-01-01

    The identification of key foulants and the provision of early warning of high fouling events for drinking water treatment membrane processes is crucial for the development of effective countermeasures to membrane fouling, such as pretreatment. Principal foulants include organic, colloidal and particulate matter present in the membrane feed water. In this research, principal component analysis (PCA) of fluorescence excitation-emission matrices (EEMs) was identified as a viable tool for monitoring the performance of pre-treatment stages (in this case biological filtration), as well as ultrafiltration (UF) and nanofiltration (NF) membrane systems. In addition, fluorescence EEM-based principal component (PC) score plots, generated using the fluorescence EEMs obtained after just 1 hour of UF or NF operation, could be related to high fouling events likely caused by elevated levels of particulate/colloid-like material in the biofilter effluents. The fluorescence EEM-based PCA approach presented here is sensitive enough to be used at low organic carbon levels and has potential as an early detection method to identify high fouling events, allowing appropriate operational countermeasures to be taken.
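
    A minimal sketch of the underlying analysis, using synthetic data (the EEM shapes, peak positions and fouling labels below are invented for illustration): each EEM is unfolded into a vector and projected onto principal components, whose scores can then flag atypical, fouling-prone samples:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)

        # Synthetic EEMs: 40 samples on a 20x30 excitation-emission grid;
        # "high fouling" samples carry an extra colloid-like fluorescence peak.
        n_samples, shape = 40, (20, 30)
        humic = np.outer(np.exp(-np.linspace(-2, 2, 20)**2),
                         np.exp(-np.linspace(-1, 3, 30)**2))
        colloid = np.outer(np.exp(-np.linspace(-3, 1, 20)**2),
                           np.exp(-np.linspace(-2, 2, 30)**2))
        fouling = rng.random(n_samples) < 0.3      # True = high-fouling sample
        eems = np.stack([humic + (0.8 * colloid if f else 0.0)
                         + 0.05 * rng.standard_normal(shape) for f in fouling])

        # Unfold each EEM into a row vector, then project onto two PCs.
        scores = PCA(n_components=2).fit_transform(eems.reshape(n_samples, -1))

        # High-fouling samples separate along the leading components.
        print("mean PC1 score, normal samples :", round(scores[~fouling, 0].mean(), 2))
        print("mean PC1 score, fouling samples:", round(scores[fouling, 0].mean(), 2))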

  20. Robust relativistic bit commitment

    NASA Astrophysics Data System (ADS)

    Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony

    2016-12-01

    Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.

  1. Robust Weak Measurements

    NASA Astrophysics Data System (ADS)

    Tollaksen, Jeff; Aharonov, Yakir

    2006-03-01

    We introduce a new type of weak measurement which yields a quantum average of weak values that is robust, outside the range of eigenvalues, extends the valid regime for weak measurements, and for which the probability of obtaining the pre- and post-selected ensemble is not exponentially rare. This result extends the applicability of weak values, shifts the statistical interpretation previously attributed to weak values and suggests that the weak value is a property of every pre- and post-selected ensemble. We then apply this new weak measurement to Hardy's paradox. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics characterized by weak values (Aharonov Y, Botero A, Popescu S, Reznik B, Tollaksen J, Physics Letters A, 301 (3-4): 130-138, 2002).

  2. Robust Control Feedback and Learning

    DTIC Science & Technology

    2002-11-30

    Final report for AFOSR Grant F49620-98-1-0026, "Robust Control, Feedback and Learning" (principal investigator: Michael G. Safonov). Among the cited work: M. G. Safonov, "Recent advances in robust control, feedback and learning," in S. O. R. Moheimani, editor, Perspectives in Robust…

  3. Severely incapacitating mutations in patients with extreme short stature identify RNA-processing endoribonuclease RMRP as an essential cell growth regulator.

    PubMed

    Thiel, Christian T; Horn, Denise; Zabel, Bernhard; Ekici, Arif B; Salinas, Kelly; Gebhart, Erich; Rüschendorf, Franz; Sticht, Heinrich; Spranger, Jürgen; Müller, Dietmar; Zweier, Christiane; Schmitt, Mark E; Reis, André; Rauch, Anita

    2005-11-01

    The growth of an individual is deeply influenced by the regulation of cell growth and division, both of which also contribute to a wide variety of pathological conditions, including cancer, diabetes, and inflammation. To identify a major regulator of human growth, we performed positional cloning in an autosomal recessive type of profound short stature, anauxetic dysplasia. Homozygosity mapping led to the identification of novel mutations in the RMRP gene, which was previously known to cause two milder types of short stature with susceptibility to cancer, cartilage hair hypoplasia, and metaphyseal dysplasia without hypotrichosis. We show that different RMRP gene mutations lead to decreased cell growth by impairing ribosomal assembly and by altering cyclin-dependent cell cycle regulation. Clinical heterogeneity is explained by a correlation between the level and type of functional impairment in vitro and the severity of short stature or predisposition to cancer. Whereas the cartilage hair hypoplasia founder mutation affects both pathways intermediately, anauxetic dysplasia mutations do not affect B-cyclin messenger RNA (mRNA) levels but do severely incapacitate ribosomal assembly via defective endonucleolytic cleavage. Anauxetic dysplasia mutations thus lead to poor processing of ribosomal RNA while allowing normal mRNA processing and, therefore, genetically separate the different functions of RNase MRP.

  4. Robust compressive sensing of sparse signals: a review

    NASA Astrophysics Data System (ADS)

    Carrillo, Rafael E.; Ramirez, Ana B.; Arce, Gonzalo R.; Barner, Kenneth E.; Sadler, Brian M.

    2016-12-01

    Compressive sensing generally relies on the ℓ2 norm for data fidelity, whereas in many applications, robust estimators are needed. Among the scenarios in which robust performance is required, applications where the sampling process is performed in the presence of impulsive noise, i.e., measurements are corrupted by outliers, are of particular importance. This article overviews robust nonlinear reconstruction strategies for sparse signals based on replacing the commonly used ℓ2 norm by M-estimators as data fidelity functions. The derived methods outperform existing compressed sensing techniques in impulsive environments, while achieving good performance in light-tailed environments, thus offering a robust framework for CS.
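
    The following is a hedged sketch of the general idea (not the authors' specific algorithms): replace the ℓ2 data-fidelity term with an M-estimator such as the Huber function, keep an ℓ1-type sparsity penalty, and solve with a generic smooth optimizer. The problem sizes, Huber threshold and regularization weight are arbitrary choices:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)

        # 5-sparse signal in R^100, 40 Gaussian measurements, 4 impulsive outliers.
        n, m, k = 100, 40, 5
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        y[rng.choice(m, 4, replace=False)] += 10 * rng.standard_normal(4)

        def huber(r, delta=0.5):
            return np.where(np.abs(r) <= delta, 0.5 * r**2,
                            delta * (np.abs(r) - 0.5 * delta))

        def objective(x, lam=0.05, eps=1e-6):
            # Huber data fidelity (robust to the outliers) + smoothed l1 penalty.
            return huber(A @ x - y).sum() + lam * np.sqrt(x**2 + eps).sum()

        x_hat = minimize(objective, np.zeros(n), method="L-BFGS-B").x
        print("true support   :", sorted(np.flatnonzero(x_true)))
        print("largest |x_hat|:", sorted(np.argsort(np.abs(x_hat))[-k:]))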

  5. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data.
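
    A minimal sketch of the two nested optimization levels, under strong simplifying assumptions (a logistic-growth ODE, a Huber loss standing in for the paper's robust penalized smoothing, hand-picked knots and smoothing weight, and plain numeric optimization instead of the implicit-function gradients):

        import numpy as np
        from scipy.interpolate import BSpline
        from scipy.optimize import minimize, minimize_scalar

        rng = np.random.default_rng(3)

        # Noisy observations of logistic growth dx/dt = theta*x*(1-x), with outliers.
        theta_true = 1.5
        t_obs = np.linspace(0, 5, 60)
        x_obs = 1 / (1 + 9 * np.exp(-theta_true * t_obs)) + 0.02 * rng.standard_normal(60)
        x_obs[[10, 30, 50]] += 0.5                  # gross outliers

        # Cubic B-spline representation of the nonparametric trajectory.
        knots = np.r_[[0.0] * 4, np.linspace(0.5, 4.5, 9), [5.0] * 4]
        n_coef = len(knots) - 4
        t_quad = np.linspace(0, 5, 200)             # grid for the ODE penalty

        def spline(c, t, deriv=0):
            return BSpline(knots, c, 3)(t, nu=deriv)

        def huber(r, delta=0.05):
            return np.where(np.abs(r) <= delta, 0.5 * r**2,
                            delta * (np.abs(r) - 0.5 * delta))

        def inner(theta, lam=10.0):
            """Inner level: robust penalized smoothing with theta held fixed."""
            def J(c):
                fit = huber(spline(c, t_obs) - x_obs).sum()
                x_q = spline(c, t_quad)
                ode = spline(c, t_quad, 1) - theta * x_q * (1 - x_q)
                return fit + lam * np.mean(ode**2)
            return minimize(J, np.full(n_coef, 0.5), method="L-BFGS-B")

        # Outer level: pick theta so the smoothed trajectory best honors the ODE.
        res = minimize_scalar(lambda th: inner(th).fun, bounds=(0.1, 5.0),
                              method="bounded", options={"xatol": 1e-2})
        print("estimated theta:", round(res.x, 2), "(true:", theta_true, ")")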

  6. Real Time & Power Efficient Adaptive - Robust Control

    NASA Astrophysics Data System (ADS)

    Ioan Gliga, Lavinius; Constantin Mihai, Cosmin; Lupu, Ciprian; Popescu, Dumitru

    2017-01-01

    A design procedure for a control system suited to processes with varying dynamics is presented in this paper. The proposed adaptive-robust control strategy combines the advantages of adaptive control with the benefits of robust control. It estimates the degradation of the system's performance caused by dynamic variation in the process and then uses this estimate to determine when the system must be adapted by redesigning the robust controller. A single integral criterion is used both for the identification of the process and for the design of the control algorithm, expressed in direct form through a cost function defined in the space of the parameters of both the process and the controller. For the minimization of this nonlinear function, an adequate mathematical programming method is used. The theoretical approach presented in this paper was validated on a closed-loop control system simulated in an application developed in C. Because of the reduced number of operations, this method is suitable for implementation on fast processes. Due to its effectiveness, it increases the idle time of the CPU, thereby saving electrical energy.

  7. Robust detection-isolation-accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.

    1985-01-01

    The results of a one year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.

  8. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  9. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  10. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, the face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas in the images using a new robust correlation coefficient, which down-weights regions in the face violating the symmetry. Raw images of faces without usual preliminary transformations are considered. The robust correlation coefficient based on the least weighted squares regression yields very promising results also in the localization of such faces, which are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis can be applicable to other problems of forensic anthropology.
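
    A toy sketch of a symmetry measure that down-weights the most asymmetric pixels (the simple trimming rule below only stands in for the least weighted squares scheme of the paper; the face image is synthetic):

        import numpy as np

        rng = np.random.default_rng(4)

        def robust_symmetry_corr(left, right_mirrored, trim=0.8):
            """Correlation between two image blocks that discards the pixels
            violating symmetry most strongly (in the spirit of least weighted
            squares; the paper's exact weighting is not reproduced)."""
            a, b = left.ravel(), right_mirrored.ravel()
            resid = np.abs((a - a.mean()) / a.std() - (b - b.mean()) / b.std())
            keep = resid <= np.quantile(resid, trim)   # drop the worst 20%
            return np.corrcoef(a[keep], b[keep])[0, 1]

        # Toy face: symmetric pattern plus a local asymmetric blemish on one side.
        face = rng.random((64, 64))
        face = (face + face[:, ::-1]) / 2              # enforce symmetry
        face[20:28, 8:16] += 1.0                       # asymmetric patch
        left, right = face[:, :32], face[:, 32:]

        plain = np.corrcoef(left.ravel(), right[:, ::-1].ravel())[0, 1]
        robust = robust_symmetry_corr(left, right[:, ::-1])
        print(f"plain correlation:  {plain:.3f}")
        print(f"robust correlation: {robust:.3f}")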

  11. Robust Fault Detection Using Robust ℓ1 Estimation and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)

    2001-01-01

    This research considers the application of robust ℓ1 estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust ℓ1 estimators based on multiplier theory and then develops a fixed threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust ℓ1 estimation with a fixed threshold are demonstrated. FD that combines robust ℓ1 estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.

  12. Efficacy of the core DNA barcodes in identifying processed and poorly conserved plant materials commonly used in South African traditional medicine

    PubMed Central

    Mankga, Ledile T.; Yessoufou, Kowiyou; Moteetee, Annah M.; Daru, Barnabas H.; van der Bank, Michelle

    2013-01-01

    Medicinal plants cover a broad range of taxa, which may be phylogenetically less related but morphologically very similar. Such morphological similarity between species may lead to misidentification and inappropriate use. Also the substitution of a medicinal plant by a cheaper alternative (e.g. other non-medicinal plant species), either due to misidentification, or deliberately to cheat consumers, is an issue of growing concern. In this study, we used DNA barcoding to identify commonly used medicinal plants in South Africa. Using the core plant barcodes, matK and rbcLa, obtained from processed and poorly conserved materials sold at the muthi traditional medicine market, we tested the efficacy of the barcodes in species discrimination. Based on genetic divergence, PCR amplification efficiency and the BLAST algorithm, we revealed varied discriminatory potentials for the DNA barcodes. In general, the barcodes exhibited high discriminatory power, indicating their effectiveness in verifying the identity of the most common plant species traded in South African medicinal markets. The BLAST algorithm successfully matched 61% of the queries against a reference database, suggesting that most of the information supplied by sellers at traditional medicinal markets in South Africa is correct. Our findings reinforce the utility of the DNA barcoding technique in limiting false identification that can harm public health. PMID:24453559

  13. The environmental "risky" region: identifying land degradation processes through integration of socio-economic and ecological indicators in a multivariate regionalization model.

    PubMed

    Salvati, Luca; Zitti, Marco

    2009-11-01

    Although several studies have assessed Land Degradation (LD) states in the Mediterranean basin through the use of composite indices, relatively few have evaluated the impact of specific LD drivers at the local scale. In this work, a computational strategy is introduced to define homogeneous areas at risk and the main factors acting as determinants of LD. The procedure consists of three steps and is applied to a set of ten environmental indicators available at the municipality scale in Latium, central Italy. A principal component analysis extracting latent patterns and simplifying data complexity was carried out on the original data matrix. Subsequently, a k-means cluster analysis was applied on a restricted number of meaningful, latent factors extracted by PCA in order to produce a classification of the study area into homogeneous regions. Finally, a stepwise discriminant analysis was performed to determine which indicators contributed the most to the definition of homogeneous regions. Three classes of "risky" regions were identified according to the main drivers of LD acting at the local scale. These include: (i) soil sealing (coupled with landscape fragmentation, fire risk, and related processes), (ii) soil salinization due to agricultural intensification, and (iii) soil erosion due to farmland depopulation and land abandonment in sloping areas. Areas at risk for LD covered 56 and 63% of the investigated areas in 1970 and 2000, respectively.
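
    The three-step procedure maps naturally onto standard tools; the sketch below uses a random stand-in indicator matrix (so the resulting clusters are arbitrary) and plain LDA coefficients in place of a stepwise discriminant analysis:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)

        # Hypothetical data: 300 municipalities x 10 environmental indicators.
        X = StandardScaler().fit_transform(
            rng.standard_normal((300, 10)) @ rng.random((10, 10)))

        # Step 1: PCA to extract latent factors from the indicator matrix.
        factors = PCA(n_components=3).fit_transform(X)

        # Step 2: k-means on the retained factors to delineate homogeneous regions.
        regions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

        # Step 3: discriminant analysis to see which indicators drive the
        # regionalization (plain LDA stands in for a stepwise run here).
        lda = LinearDiscriminantAnalysis().fit(X, regions)
        importance = np.abs(lda.coef_).sum(axis=0)
        print("indicators ranked by discriminant weight:", np.argsort(importance)[::-1])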

  14. Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies

    PubMed Central

    Petre, Benjamin; Saunders, Diane G. O.; Sklenar, Jan; Lorrain, Cécile; Krasileva, Ksenia V.; Win, Joe; Duplessis, Sébastien; Kamoun, Sophien

    2016-01-01

    Rust fungal pathogens of wheat (Triticum spp.) affect crop yields worldwide. The molecular mechanisms underlying the virulence of these pathogens remain elusive, due to the limited availability of suitable molecular genetic research tools. Notably, the inability to perform high-throughput analyses of candidate virulence proteins (also known as effectors) impairs progress. We previously established a pipeline for the fast-forward screens of rust fungal candidate effectors in the model plant Nicotiana benthamiana. This pipeline involves selecting candidate effectors in silico and performing cell biology and protein-protein interaction assays in planta to gain insight into the putative functions of candidate effectors. In this study, we used this pipeline to identify and characterize sixteen candidate effectors from the wheat yellow rust fungal pathogen Puccinia striiformis f. sp. tritici. Nine candidate effectors targeted a specific plant subcellular compartment or protein complex, providing valuable information on their putative functions in plant cells. One candidate effector, PST02549, accumulated in processing bodies (P-bodies), protein complexes involved in mRNA decapping, degradation, and storage. PST02549 also associates with the P-body-resident ENHANCER OF mRNA DECAPPING PROTEIN 4 (EDC4) from N. benthamiana and wheat. We propose that P-bodies are a novel plant cell compartment targeted by pathogen effectors. PMID:26863009

  15. The Environmental ``Risky'' Region: Identifying Land Degradation Processes Through Integration of Socio-Economic and Ecological Indicators in a Multivariate Regionalization Model

    NASA Astrophysics Data System (ADS)

    Salvati, Luca; Zitti, Marco

    2009-11-01

    Although several studies have assessed Land Degradation (LD) states in the Mediterranean basin through the use of composite indices, relatively few have evaluated the impact of specific LD drivers at the local scale. In this work, a computational strategy is introduced to define homogeneous areas at risk and the main factors acting as determinants of LD. The procedure consists of three steps and is applied to a set of ten environmental indicators available at the municipality scale in Latium, central Italy. A principal component analysis extracting latent patterns and simplifying data complexity was carried out on the original data matrix. Subsequently, a k-means cluster analysis was applied on a restricted number of meaningful, latent factors extracted by PCA in order to produce a classification of the study area into homogeneous regions. Finally, a stepwise discriminant analysis was performed to determine which indicators contributed the most to the definition of homogeneous regions. Three classes of “risky” regions were identified according to the main drivers of LD acting at the local scale. These include: (i) soil sealing (coupled with landscape fragmentation, fire risk, and related processes), (ii) soil salinization due to agricultural intensification, and (iii) soil erosion due to farmland depopulation and land abandonment in sloping areas. Areas at risk for LD covered 56 and 63% of the investigated areas in 1970 and 2000, respectively.

  16. A digital signal processing-based bioinformatics approach to identifying the origins of HIV-1 non B subtypes infecting US Army personnel serving abroad.

    PubMed

    Nwankwo, Norbert

    2013-06-01

    Two HIV-1 non-B isolates, 98US_MSC5007 and 98US_MSC5016, identified among US Army personnel serving abroad, are known to have originated from other nations. Nevertheless, they are categorized as American strains because their countries of origin are unknown. American isolates are basically of the B subtype, whereas 98US_MSC5007 belongs to the Circulating Recombinant Form CRF02_AG and 98US_MSC5016 is of the C clade; both sub-groups are recognized to have originated from the African and Asian continents. Properly determining the countries of origin of microbes and viruses has become necessary because diversity and cross-subtyping have been found to complicate the design and development of vaccines and therapeutic interventions. The aim of this study, therefore, is to identify the countries of origin of the two American isolates found amongst US Army personnel serving abroad. A Digital Signal Processing-based bioinformatics technique called the Informational Spectrum Method (ISM) was employed. ISM entails translating the amino acid sequence of a protein into a numerical sequence (signal) by means of one biological parameter (an amino acid scale). The signals are then processed using the Discrete Fourier Transform (DFT) in order to uncover and present the embedded biological information as Informational Spectra (IS). The Spectral Position of Maximum Binding Interaction (SPMBI) is used. Several approaches, including phylogeny, have previously been employed to determine the evolutionary trends of organisms and viruses; SPMBI has previously been used to re-establish the semblance and common origin of humans and chimpanzees and the evolutionary roadmaps of the influenza and HIV viruses. The results disclosed that 98US_MSC5007 shares semblance and origin with a Nigerian isolate (92NG083), while 98US_MSC5016 does so with the Zairian isolates (ELI, MAL, and Z2/CDC-34). These results appear to demonstrate that the American soldiers
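
    A compact sketch of the ISM pipeline described above (the EIIP amino acid scale is one commonly used choice for the encoding step; the sequences and spectrum length are illustrative assumptions, not data from the study):

        import numpy as np

        # Electron-ion interaction potential (EIIP) values, a commonly used
        # amino acid scale in the Informational Spectrum Method.
        EIIP = {'A': 0.0373, 'R': 0.0959, 'N': 0.0036, 'D': 0.1263, 'C': 0.0829,
                'Q': 0.0761, 'E': 0.0058, 'G': 0.0050, 'H': 0.0242, 'I': 0.0000,
                'L': 0.0000, 'K': 0.0371, 'M': 0.0823, 'F': 0.0946, 'P': 0.0198,
                'S': 0.0829, 'T': 0.0941, 'W': 0.0548, 'Y': 0.0516, 'V': 0.0057}

        def informational_spectrum(seq, n=512):
            """Encode a protein with the amino acid scale, remove the mean,
            and take the DFT magnitude (the informational spectrum)."""
            signal = np.array([EIIP[aa] for aa in seq.upper()])
            return np.abs(np.fft.rfft(signal - signal.mean(), n))

        def common_peak(seq_a, seq_b):
            """Cross-spectrum of two sequences; its maximum marks the frequency
            of strongest shared informational content (SPMBI-style comparison)."""
            cross = informational_spectrum(seq_a) * informational_spectrum(seq_b)
            return np.argmax(cross[1:]) + 1        # skip the DC component

        # Hypothetical short sequences, for illustration only.
        s1 = "MGARASVLSGGELDRWEKIRLRPGGKKKYKLKHIVWASRELERF"
        s2 = "MGARASVLSGGKLDAWEKIRLRPGGKKKYRLKHLVWASRELERF"
        print("shared spectral peak at frequency bin:", common_peak(s1, s2))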

  17. 'Glocal' robustness analysis and model discrimination for circadian oscillators.

    PubMed

    Hafner, Marc; Koeppl, Heinz; Hasler, Martin; Wagner, Andreas

    2009-10-01

    To characterize the behavior and robustness of cellular circuits with many unknown parameters is a major challenge for systems biology. Its difficulty rises exponentially with the number of circuit components. We here propose a novel analysis method to meet this challenge. Our method identifies the region of a high-dimensional parameter space where a circuit displays an experimentally observed behavior. It does so via a Monte Carlo approach guided by principal component analysis, in order to allow efficient sampling of this space. This 'global' analysis is then supplemented by a 'local' analysis, in which circuit robustness is determined for each of the thousands of parameter sets sampled in the global analysis. We apply this method to two prominent, recent models of the cyanobacterial circadian oscillator, an autocatalytic model, and a model centered on consecutive phosphorylation at two sites of the KaiC protein, a key circadian regulator. For these models, we find that the two-sites architecture is much more robust than the autocatalytic one, both globally and locally, based on five different quantifiers of robustness, including robustness to parameter perturbations and to molecular noise. Our 'glocal' combination of global and local analyses can also identify key causes of high or low robustness. In doing so, our approach helps to unravel the architectural origin of robust circuit behavior. Complementarily, identifying fragile aspects of system behavior can aid in designing perturbation experiments that may discriminate between competing mechanisms and different parameter sets.
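
    The global-then-local logic can be illustrated on a deliberately simple stand-in model (a damped linear oscillator replaces the circadian circuit; the "functions" criterion, parameter ranges and perturbation size are all invented for illustration):

        import numpy as np

        rng = np.random.default_rng(11)

        # Stand-in "circuit": x'' + a*x' + b*x = 0 counts as functional when
        # it oscillates with a period inside a target band (arbitrary units).
        def functions(a, b):
            disc = a**2 - 4 * b
            if disc >= 0:                          # overdamped: no oscillation
                return False
            period = 4 * np.pi / np.sqrt(-disc)
            return 20.0 < period < 28.0

        # Global analysis: Monte Carlo sampling of the viable parameter region.
        samples = rng.uniform(0, 1, size=(20000, 2)) * np.array([1.0, 0.5])
        viable = np.array([functions(a, b) for a, b in samples])
        print(f"viable fraction of parameter space: {viable.mean():.3f}")

        # Local analysis: robustness of one parameter set = fraction of
        # randomly perturbed copies that still function.
        def local_robustness(a, b, n=200, scale=0.05):
            pert = np.array([a, b]) * (1 + scale * rng.standard_normal((n, 2)))
            return np.mean([functions(*p) for p in pert])

        subset = samples[viable][:200]
        print("mean local robustness over viable sets:",
              round(float(np.mean([local_robustness(a, b) for a, b in subset])), 3))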

  18. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  19. Robust online Hamiltonian learning

    NASA Astrophysics Data System (ADS)

    Granade, Christopher E.; Ferrie, Christopher; Wiebe, Nathan; Cory, D. G.

    2012-10-01

    In this work we combine two distinct machine learning methodologies, sequential Monte Carlo and Bayesian experimental design, and apply them to the problem of inferring the dynamical parameters of a quantum system. We design the algorithm with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online (during experimental data collection), avoiding the need for storage and post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment-to-experiment, and also when additional noise processes are present and unknown. The algorithm also numerically estimates the Cramer-Rao lower bound, certifying its own performance.
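
    A stripped-down sketch of the sequential Monte Carlo half of such an algorithm, for a single-qubit precession frequency (a fixed measurement schedule replaces the Bayesian experimental design component, and the resampling jitter is an ad hoc simplification):

        import numpy as np

        rng = np.random.default_rng(6)

        # Unknown precession frequency to be learned from single-shot measurements.
        omega_true = 0.73

        def likelihood(outcome, omega, t):
            p1 = np.sin(omega * t / 2) ** 2        # Born-rule outcome probability
            return p1 if outcome == 1 else 1 - p1

        # Particles over omega with a uniform prior on [0, 1].
        n_particles = 2000
        particles = rng.uniform(0, 1, n_particles)
        weights = np.full(n_particles, 1 / n_particles)

        for step in range(100):
            t = (step + 1) * 0.5                   # simple (non-adaptive) schedule
            outcome = int(rng.random() < np.sin(omega_true * t / 2) ** 2)
            weights *= likelihood(outcome, particles, t)
            weights /= weights.sum()
            # Resample with jitter when the effective sample size collapses.
            if 1 / np.sum(weights**2) < n_particles / 2:
                idx = rng.choice(n_particles, n_particles, p=weights)
                particles = particles[idx] + 0.01 * rng.standard_normal(n_particles)
                weights = np.full(n_particles, 1 / n_particles)

        print(f"posterior mean omega: {np.sum(weights * particles):.3f} (true {omega_true})")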

  20. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining routes through supply and demand considerations, paying little attention to network performance indicators such as robustness. However, the collapse of an airline network can produce high financial costs for the airline and for its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central in their route networks than LCC bases. As a result, LCC route networks are more robust than FSC networks.
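
    A small sketch of how such a robustness comparison can be set up with networkx (the two toy graphs merely mimic hub-and-spoke versus meshed topologies; a real analysis would build the graphs from actual route data):

        import networkx as nx

        # Toy route networks: a hub-and-spoke graph (FSC-like) and a more
        # meshed graph (LCC-like).
        fsc = nx.star_graph(9)
        lcc = nx.connected_watts_strogatz_graph(10, 4, 0.3, seed=0)

        def attack_robustness(g, threshold=0.5):
            """Number of targeted removals (highest betweenness first) tolerated
            before the giant component drops below `threshold` of original size."""
            g = g.copy()
            n0 = g.number_of_nodes()
            removed = 0
            while g.number_of_nodes() > 0:
                giant = max(nx.connected_components(g), key=len)
                if len(giant) < threshold * n0:
                    break
                bc = nx.betweenness_centrality(g)
                g.remove_node(max(bc, key=bc.get))
                removed += 1
            return removed

        print("hub-and-spoke survives", attack_robustness(fsc), "targeted removals")
        print("meshed network survives", attack_robustness(lcc), "targeted removals")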

  1. Simple Robust Fixed Lag Smoothing

    DTIC Science & Technology

    1988-12-02

    Simple Robust Fixed Lag Smoothing, by N. D. Le and R. D. Martin, Technical Report No. 149, December 1988, Department of Statistics, GN-22. Also titled "Simple Robust Fixed Lag Smoothing with Application to Radar Glint Noise." The emphasis is on fixed-lag smoothing, as opposed to the use of existing robust fixed-interval smoothers (e.g., as in Martin, 1979).

  2. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
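
    A toy illustration of the reliability-based ingredient (not the paper's synthesis machinery): sample the probabilistic plant uncertainty and estimate the probability that a pole-location requirement is violated for candidate gains. The plant, requirement and gain values below are invented:

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy plant: closed-loop characteristic polynomial
        #   s^2 + (2*zeta*wn + k_d)*s + (wn^2 + k_p),
        # with probabilistic uncertainty on wn and zeta. Requirement (invented):
        # all closed-loop poles must satisfy Re(s) < -0.5.
        def violation_probability(k_p, k_d, n_samples=5000):
            wn = rng.normal(2.0, 0.2, n_samples)
            zeta = rng.normal(0.1, 0.02, n_samples)
            worst = [np.real(np.roots([1, 2 * z * w + k_d, w**2 + k_p])).max()
                     for z, w in zip(zeta, wn)]
            return np.mean(np.array(worst) > -0.5)

        # Reliability-based view: pick the gain with acceptable violation probability.
        for k_d in (0.5, 1.0, 2.0):
            p = violation_probability(4.0, k_d)
            print(f"k_d = {k_d}: P(requirement violated) = {p:.3f}")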

  3. A robust method to heterogenise and recycle group 9 catalysts.

    PubMed

    Lucas, Stephanie J; Crossley, Benjamin D; Pettman, Alan J; Vassileiou, Antony D; Screen, Thomas E O; Blacker, A John; McGowan, Patrick C

    2013-06-21

    This paper provides a viable, reproducible and robust method for immobilising hydroxyl tethered iridium-rhodium complexes. The materials have been shown to be both effective and recyclable in the process of catalytic transfer hydrogenation with minimal metal leaching.

  4. Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis

    PubMed Central

    Dijkstra, Annereinou R.; Alkema, Wynand; Starrenburg, Marjo J. C.; van Hijum, Sacha A. F. T.; Bron, Peter A.

    2016-01-01

    Recently, we demonstrated that fermentation conditions have a strong impact on subsequent survival of Lactococcus lactis strain MG1363 during heat and oxidative stress, two important parameters during spray drying. Moreover, employment of a transcriptome-phenotype matching approach revealed groups of genes associated with robustness towards heat and/or oxidative stress. To investigate if other strains have similar or distinct transcriptome signatures for robustness, we applied an identical transcriptome-robustness phenotype matching approach on the L. lactis strains IL1403, KF147 and SK11, which have previously been demonstrated to display highly diverse robustness phenotypes. These strains were subjected to an identical fermentation regime as was performed earlier for strain MG1363 and consisted of twelve conditions, varying in the level of salt and/or oxygen, as well as fermentation temperature and pH. In the exponential phase of growth, cells were harvested for transcriptome analysis and assessment of heat and oxidative stress survival phenotypes. The variation in fermentation conditions resulted in differences in heat and oxidative stress survival of up to five 10-log units. Effects of the fermentation conditions on stress survival of the L. lactis strains were typically strain-dependent, although the fermentation conditions had mainly similar effects on the growth characteristics of the different strains. By association of the transcriptomes and robustness phenotypes highly strain-specific transcriptome signatures for robustness towards heat and oxidative stress were identified, indicating that multiple mechanisms exist to increase robustness and, as a consequence, robustness of each strain requires individual optimization. However, a relatively small overlap in the transcriptome responses of the strains was also identified and this generic transcriptome signature included genes previously associated with stress (ctsR and lplL) and novel genes, including nan

  5. Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis.

    PubMed

    Dijkstra, Annereinou R; Alkema, Wynand; Starrenburg, Marjo J C; Hugenholtz, Jeroen; van Hijum, Sacha A F T; Bron, Peter A

    2016-01-01

    Recently, we demonstrated that fermentation conditions have a strong impact on subsequent survival of Lactococcus lactis strain MG1363 during heat and oxidative stress, two important parameters during spray drying. Moreover, employment of a transcriptome-phenotype matching approach revealed groups of genes associated with robustness towards heat and/or oxidative stress. To investigate if other strains have similar or distinct transcriptome signatures for robustness, we applied an identical transcriptome-robustness phenotype matching approach on the L. lactis strains IL1403, KF147 and SK11, which have previously been demonstrated to display highly diverse robustness phenotypes. These strains were subjected to an identical fermentation regime as was performed earlier for strain MG1363 and consisted of twelve conditions, varying in the level of salt and/or oxygen, as well as fermentation temperature and pH. In the exponential phase of growth, cells were harvested for transcriptome analysis and assessment of heat and oxidative stress survival phenotypes. The variation in fermentation conditions resulted in differences in heat and oxidative stress survival of up to five 10-log units. Effects of the fermentation conditions on stress survival of the L. lactis strains were typically strain-dependent, although the fermentation conditions had mainly similar effects on the growth characteristics of the different strains. By association of the transcriptomes and robustness phenotypes highly strain-specific transcriptome signatures for robustness towards heat and oxidative stress were identified, indicating that multiple mechanisms exist to increase robustness and, as a consequence, robustness of each strain requires individual optimization. However, a relatively small overlap in the transcriptome responses of the strains was also identified and this generic transcriptome signature included genes previously associated with stress (ctsR and lplL) and novel genes, including nan

  6. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115

  7. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
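
    As a rough illustration of plugging robust estimators into portfolio construction (the minimum covariance determinant estimator from scikit-learn is used as a stand-in; the paper's pseudodistance-based estimators are not reproduced here, and the return data are simulated):

        import numpy as np
        from sklearn.covariance import MinCovDet, EmpiricalCovariance

        rng = np.random.default_rng(7)

        # Simulated daily returns of 5 assets, contaminated by a few outlier days.
        n_days, n_assets = 500, 5
        returns = rng.multivariate_normal(np.full(n_assets, 0.0004),
                                          0.0001 * np.eye(n_assets), size=n_days)
        returns[rng.choice(n_days, 10, replace=False)] += 0.05  # crash/spike days

        def min_variance_weights(cov):
            """Closed-form minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1)."""
            inv = np.linalg.inv(cov)
            ones = np.ones(cov.shape[0])
            return inv @ ones / (ones @ inv @ ones)

        w_classic = min_variance_weights(EmpiricalCovariance().fit(returns).covariance_)
        w_robust = min_variance_weights(MinCovDet(random_state=0).fit(returns).covariance_)
        print("classical weights   :", w_classic.round(3))
        print("robust (MCD) weights:", w_robust.round(3))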

  8. Learning robust pulses for generating universal quantum gates

    NASA Astrophysics Data System (ADS)

    Dong, Daoyi; Wu, Chengzhi; Chen, Chunlin; Qi, Bo; Petersen, Ian R.; Nori, Franco

    2016-10-01

    Constructing a set of universal quantum gates is a fundamental task for quantum computation. The existence of noises, disturbances and fluctuations is unavoidable during the process of implementing quantum gates for most practical quantum systems. This paper employs a sampling-based learning method to find robust control pulses for generating a set of universal quantum gates. Numerical results show that the learned robust control fields are insensitive to disturbances, uncertainties and fluctuations during the process of realizing universal quantum gates.
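
    A minimal sketch of sampling-based robust pulse learning for a single-qubit X gate (the detuning model, pulse parameterization and optimizer are illustrative assumptions, not the authors' method):

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        target = sx                                 # target gate: X (NOT)

        # Sampled uncertainty: unknown detuning term delta*sz in the Hamiltonian.
        train_deltas = rng.normal(0.0, 0.1, size=8)

        def gate(amps, delta, dt=0.2):
            """Piecewise-constant control: H_k = amps[k]*sx + delta*sz per slice."""
            u = np.eye(2, dtype=complex)
            for a in amps:
                u = expm(-1j * dt * (a * sx + delta * sz)) @ u
            return u

        def avg_infidelity(amps, deltas):
            # Phase-insensitive infidelity 1 - |Tr(U_target^dag U)|^2 / 4,
            # averaged over the sampled Hamiltonian perturbations.
            fids = [abs(np.trace(target.conj().T @ gate(amps, d)))**2 / 4
                    for d in deltas]
            return 1 - np.mean(fids)

        res = minimize(avg_infidelity, rng.uniform(0, 2, 10), args=(train_deltas,),
                       method="Nelder-Mead", options={"maxiter": 5000, "maxfev": 5000})

        test_deltas = rng.normal(0.0, 0.1, size=100)  # unseen disturbances
        print("train infidelity:", round(res.fun, 5))
        print("test  infidelity:", round(avg_infidelity(res.x, test_deltas), 5))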

  9. Learning robust pulses for generating universal quantum gates

    PubMed Central

    Dong, Daoyi; Wu, Chengzhi; Chen, Chunlin; Qi, Bo; Petersen, Ian R.; Nori, Franco

    2016-01-01

    Constructing a set of universal quantum gates is a fundamental task for quantum computation. The existence of noises, disturbances and fluctuations is unavoidable during the process of implementing quantum gates for most practical quantum systems. This paper employs a sampling-based learning method to find robust control pulses for generating a set of universal quantum gates. Numerical results show that the learned robust control fields are insensitive to disturbances, uncertainties and fluctuations during the process of realizing universal quantum gates. PMID:27782219

  10. Robust laser speckle recognition system for authenticity identification.

    PubMed

    Yeh, Chia-Hung; Sung, Po-Yi; Kuo, Chih-Hung; Yeh, Ruey-Nan

    2012-10-22

    This paper proposes a laser speckle recognition system for authenticity verification. Because of the unique surface imperfections of objects, laser speckle provides identifiable features for authentication. A Gabor filter, SIFT (Scale-Invariant Feature Transform), and projection were used to extract the features of laser speckle images. To accelerate the matching process, the extracted Gabor features were organized into an indexing structure using the K-means algorithm. Plastic cards were used as the target objects in the proposed system, and the hardware of the speckle capturing system was built. The experimental results showed that the retrieval performance of the proposed method is accurate when the database contains 516 laser speckle images. The proposed system is robust and feasible for authenticity verification.
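
    A minimal sketch of the indexing idea: cluster the database feature vectors with K-means so that a query is matched only against its own cluster. Feature extraction (Gabor/SIFT/projection) is assumed to happen elsewhere; the feature dimensionality and data below are invented.

        # Sketch only: K-means indexing over precomputed feature vectors.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        db_features = rng.normal(size=(516, 128))    # one vector per speckle image

        index = KMeans(n_clusters=16, n_init=10, random_state=0).fit(db_features)

        def query(feature, top_k=5):
            # search only the database entries in the query's cluster
            cluster = index.predict(feature[None, :])[0]
            candidates = np.where(index.labels_ == cluster)[0]
            dists = np.linalg.norm(db_features[candidates] - feature, axis=1)
            return candidates[np.argsort(dists)[:top_k]]

        print(query(db_features[42]))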

  11. Robust Online Hamiltonian Learning

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Wiebe, Nathan; Cory, David

    2013-05-01

    In this talk, we introduce a machine-learning algorithm for the problem of inferring the dynamical parameters of a quantum system, and discuss this algorithm in the example of estimating the precession frequency of a single qubit in a static field. Our algorithm is designed with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online, during experimental data collection, or can be used as a tool for post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment to experiment, and also when additional noise processes are present and unknown. Finally, we discuss the performance of our algorithm by appeal to the Cramér-Rao bound. This work was financially supported by the Canadian government through NSERC and CERC and by the United States government through DARPA. NW would like to acknowledge funding from USARO-DTO.
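
    The frequency-estimation example admits a compact sequential Monte Carlo (particle filter) implementation. The sketch below assumes the textbook single-qubit likelihood P(0 | omega, t) = cos^2(omega*t/2) and an invented experiment schedule; it is not the authors' code.

        # Sketch only: particle-filter inference of a qubit precession frequency.
        import numpy as np

        rng = np.random.default_rng(3)
        true_omega = 0.7
        particles = rng.uniform(0.0, 2.0, size=2000)      # prior samples of omega
        weights = np.full(particles.size, 1.0 / particles.size)

        for t in np.linspace(1.0, 30.0, 60):              # experiment times (assumed)
            outcome = rng.random() < np.cos(true_omega * t / 2.0) ** 2
            like = np.cos(particles * t / 2.0) ** 2
            weights *= like if outcome else (1.0 - like)  # Bayes update
            weights /= weights.sum()
            if 1.0 / (weights ** 2).sum() < particles.size / 2:   # resample
                idx = rng.choice(particles.size, particles.size, p=weights)
                particles = particles[idx] + rng.normal(0, 0.01, particles.size)
                weights[:] = 1.0 / particles.size

        print(particles @ weights)                        # posterior-mean estimate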

  12. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Tradeoff on Phenotype Robustness in Biological Networks Part II: Ecological Networks.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    In ecological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, as well as environmental robustness for resisting environmental disturbances, so that the phenotype stability of ecological networks can be maintained, thus guaranteeing phenotype robustness. However, it is difficult to analyze the network robustness of ecological systems because they are complex nonlinear partial differential stochastic systems. This paper develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance sensitivity in ecological networks. We found that the phenotype robustness criterion for ecological networks is that if intrinsic robustness + environmental robustness ≦ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations and environmental disturbances. These results for robust ecological networks are similar to those for robust gene regulatory networks and evolutionary networks, even though the networks operate on different spatial and temporal scales.

  13. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989. This result, its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications now constitute an extensive and coherent theory of robust parametric stability, which is summarized in the results contained here.

  14. Building robust conservation plans.

    PubMed

    Visconti, Piero; Joppa, Lucas

    2015-04-01

    Systematic conservation planning optimizes trade-offs between biodiversity conservation and human activities by accounting for socioeconomic costs while aiming to achieve prescribed conservation objectives. However, the most cost-efficient conservation plan can be very dissimilar to any other plan achieving the set of conservation objectives. This is problematic under conditions of implementation uncertainty (e.g., if all or part of the plan becomes unattainable). We determined, through simulations of parallel implementation of conservation plans and habitat loss, the conditions under which optimal plans have limited chances of implementation and where implementation attempts would fail to meet objectives. We then devised a new, flexible method for identifying conservation priorities and scheduling conservation actions. This method entails generating a number of alternative plans, calculating the similarity in site composition among all plans, and selecting the plan with the highest density of neighboring plans in similarity space. We compared our method with the classic method that maximizes cost efficiency, using synthetic and real data sets. When implementation was uncertain--a common reality--our method provided a higher likelihood of achieving conservation targets. We found that χ, a measure of the shortfall in objectives achieved by a conservation plan if the plan could not be implemented entirely, was the main factor determining the relative performance of a flexibility-enhanced approach to conservation prioritization. Our findings should help planning authorities prioritize conservation efforts in the face of uncertainty about the future condition and availability of sites.
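
    A minimal sketch of the plan-selection step: among many alternative plans, pick the one with the highest density of neighbours in similarity space, here with Jaccard similarity over selected sites. Plan generation is assumed; the plans below are random mutations of a base plan, and the similarity threshold is an invented choice.

        # Sketch only: densest-neighbourhood plan selection on synthetic plans.
        import numpy as np

        rng = np.random.default_rng(4)
        n_plans, n_sites = 200, 60
        base = rng.random(n_sites) < 0.3                 # a reference site selection
        flips = rng.random((n_plans, n_sites)) < 0.1
        plans = np.where(flips, ~base, base)             # mutated alternative plans

        inter = (plans[:, None, :] & plans[None, :, :]).sum(-1)
        union = (plans[:, None, :] | plans[None, :, :]).sum(-1)
        jaccard = inter / np.maximum(union, 1)

        neighbours = (jaccard > 0.5).sum(axis=1) - 1     # exclude self-similarity
        best = int(np.argmax(neighbours))
        print(best, neighbours[best])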

  15. Meristem size contributes to the robustness of phyllotaxis in Arabidopsis

    PubMed Central

    Landrein, Benoit; Refahi, Yassin; Besnard, Fabrice; Hervieux, Nathan; Mirabet, Vincent; Boudaoud, Arezki; Vernoux, Teva; Hamant, Olivier

    2015-01-01

    Using the plant model Arabidopsis, the relationship between day length, the size of the shoot apical meristem, and the robustness of phyllotactic patterns were analysed. First, it was found that reducing day length leads to an increased meristem size and an increased number of alterations in the final positions of organs along the stem. Most of the phyllotactic defects could be related to an altered tempo of organ emergence, while not affecting the spatial positions of organ initiations at the meristem. A correlation was also found between meristem size and the robustness of phyllotaxis in two accessions (Col-0 and WS-4) and a mutant (clasp-1), independent of growth conditions. A reduced meristem size in clasp-1 was even associated with an increased robustness of the phyllotactic pattern, beyond what is observed in the wild type. Interestingly it was also possible to modulate the robustness of phyllotaxis in these different genotypes by changing day length. To conclude, it is shown first that robustness of the phyllotactic pattern is not maximal in the wild type, suggesting that, beyond its apparent stereotypical order, the robustness of phyllotaxis is regulated. Secondly, a role for day length in the robustness of the phyllotaxis was also identified, thus providing a new example of a link between patterning and environment in plants. Thirdly, the experimental results validate previous model predictions suggesting a contribution of meristem size in the robustness of phyllotaxis via the coupling between the temporal sequence and spatial pattern of organ initiations. PMID:25504644

  16. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case inspired by a river reach in the Netherlands is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan is based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014), which we solve using several alternative robustness metrics, including both satisficing and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures, whereas regret-based robustness metrics compare the performance of each candidate plan in a given future with that of the best-performing plan in that future.
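
    As an illustrative contrast between the two families of metrics, the sketch below applies a satisficing (domain) criterion and minimax regret to a synthetic plan-by-future performance matrix; the data, threshold, and the convention that lower values are better are all invented for illustration. Note that the two metrics need not select the same plan, which is precisely the kind of consequence the paper investigates.

        # Sketch only: two robustness metrics on a synthetic performance matrix.
        import numpy as np

        rng = np.random.default_rng(5)
        perf = rng.gamma(2.0, 1.0, size=(8, 1000))   # plans x futures (lower = better)

        # Satisficing (domain criterion): fraction of futures in which a plan
        # stays below an acceptability threshold -- maximize this.
        satisficing = (perf < 2.5).mean(axis=1)

        # Regret: shortfall from the best plan in each future; minimax regret
        # picks the plan whose worst-case regret is smallest.
        regret = perf - perf.min(axis=0, keepdims=True)
        minimax_regret = regret.max(axis=1)

        print("satisficing choice:", satisficing.argmax())
        print("minimax-regret choice:", minimax_regret.argmin())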

  17. Robust speech coding using microphone arrays

    NASA Astrophysics Data System (ADS)

    Li, Zhao

    1998-09-01

    To achieve robustness and efficiency for voice communication in noise, the noise suppression and bandwidth compression processes are combined to form a joint process using input from an array of microphones. An adaptive beamforming technique with a set of robust linear constraints and a single quadratic inequality constraint is used to preserve the desired signal and to cancel directional plus ambient noise in a small room environment. This robustly constrained array processor is found to be effective in limiting signal cancelation over a wide range of input SNRs (-10 dB to +10 dB). The resulting intelligibility gains (8-10 dB) provide significant improvement to subsequent CELP coding. In addition, the desired speech activity is detected by estimating Target-to-Jammer Ratios (TJR) using subband correlations between different microphone inputs or using signals within the Generalized Sidelobe Canceler directly. These two novel techniques of speech activity detection for coding are studied thoroughly in this dissertation. Each is subsequently incorporated with the adaptive array and a 4.8 kbps CELP coder to form a Variable Bit Rate (VBR) coder with noise canceling and Spatial Voice Activity Detection (SVAD) capabilities. This joint noise suppression and bandwidth compression system demonstrates large improvements in desired speech quality after coding, accurate desired speech activity detection in various types of interference, and a reduction in the information bits required to code the speech.

  18. Robust control technique for nuclear power plants

    SciTech Connect

    Murphy, G.V.; Bailey, J.M.

    1989-03-01

    This report summarizes the linear quadratic Gaussian (LQG) design technique with loop transfer recovery (LQG/LTR) for the design of control systems. The concepts of return ratio, return difference, inverse return difference, and singular values are summarized. The LQG/LTR design technique allows the synthesis of a robust control system. To illustrate the LQG/LTR technique, a linearized model of a simple process has been chosen. The process has three state variables, one input, and one output. Three control system design methods are compared: LQG, LQG/LTR, and a proportional plus integral (PI) controller. 7 refs., 20 figs., 6 tabs.
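
    A minimal sketch of the two Riccati solutions at the heart of an LQG/LTR design, for an illustrative three-state, single-input, single-output model; the matrices are placeholders, not the report's example process.

        # Sketch only: LQG gains with fictitious process noise for loop
        # transfer recovery at the plant input; model matrices are invented.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0., 1., 0.], [0., 0., 1.], [-2., -3., -4.]])
        B = np.array([[0.], [0.], [1.]])
        C = np.array([[1., 0., 0.]])

        # LQR state-feedback gain
        Q, R = np.eye(3), np.array([[1.0]])
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)

        # Kalman gain with added fictitious noise q^2*B*B'; as q grows, the
        # loop transfer function approaches the full-state LQR loop (LTR).
        q = 100.0
        W, V = np.eye(3) + q**2 * (B @ B.T), np.array([[1.0]])
        S = solve_continuous_are(A.T, C.T, W, V)
        L = S @ C.T @ np.linalg.inv(V)
        print(K, L, sep="\n")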

  19. Robust, directed assembly of fluorescent nanodiamonds.

    PubMed

    Kianinia, Mehran; Shimoni, Olga; Bendavid, Avi; Schell, Andreas W; Randolph, Steven J; Toth, Milos; Aharonovich, Igor; Lobo, Charlene J

    2016-10-27

    Arrays of fluorescent nanoparticles are highly sought after for applications in sensing, nanophotonics and quantum communications. Here we present a simple and robust method of assembling fluorescent nanodiamonds into macroscopic arrays. Remarkably, the yield of this directed assembly process is greater than 90% and the assembled patterns withstand ultra-sonication for more than three hours. The assembly process is based on covalent bonding of carboxyl to amine functional carbon seeds and is applicable to any material, and to non-planar surfaces. Our results pave the way to directed assembly of sensors and nanophotonics devices.

  20. Vehicle active steering control research based on two-DOF robust internal model control

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun

    2016-07-01

    Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually sacrifices performance in order to guarantee the robustness of the control algorithm, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to reach high yaw-rate tracking performance with guaranteed robustness. The proposed algorithm inherits the good model tracking ability of IMC and guarantees robustness to model uncertainties. In order to separate the design of model tracking from the robustness design, an improved two-degree-of-freedom (2-DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of double lane change maneuvers and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a Magic Formula tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations of the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness of the active steering control algorithm is thus resolved, and high control performance with guaranteed robustness to model uncertainties is obtained.

  1. Metadata-driven comparative analysis tool for sequences (meta-CATS): an automated process for identifying significant sequence variations that correlate with virus attributes.

    PubMed

    Pickett, B E; Liu, M; Sadat, E L; Squires, R B; Noronha, J M; He, S; Jen, W; Zaremba, S; Gu, Z; Zhou, L; Larsen, C N; Bosch, I; Gehrke, L; McGee, M; Klem, E B; Scheuermann, R H

    2013-12-01

    The Virus Pathogen Resource (ViPR; www.viprbrc.org) and Influenza Research Database (IRD; www.fludb.org) have developed a metadata-driven Comparative Analysis Tool for Sequences (meta-CATS), which performs statistical comparative analyses of nucleotide and amino acid sequence data to identify correlations between sequence variations and virus attributes (metadata). Meta-CATS guides users through: selecting a set of nucleotide or protein sequences; dividing them into multiple groups based on any associated metadata attribute (e.g. isolation location, host species); performing a statistical test at each aligned position; and identifying all residues that significantly differ between the groups. As proofs of concept, we have used meta-CATS to identify sequence biomarkers associated with dengue viruses isolated from different hemispheres, and to identify variations in the NS1 protein that are unique to each of the 4 dengue serotypes. Meta-CATS is made freely available to virology researchers to identify genotype-phenotype correlations for development of improved vaccines, diagnostics, and therapeutics.

  2. Robust Hitting with Dynamics Shaping

    NASA Astrophysics Data System (ADS)

    Yashima, Masahito; Yamawaki, Tasuku

    The present paper proposes trajectory planning based on “dynamics shaping” for a redundant robotic arm to hit a target robustly in the desired direction; the concept is to shape the robot dynamics appropriately by changing the arm's posture in order to achieve robust motion. The positional error of the end-effector caused by unknown disturbances converges toward the singular vector corresponding to the maximum singular value of the output controllability matrix of the robotic arm. Therefore, if we can control the direction of this singular vector by applying dynamics shaping, we can control the direction of the positional error of the end-effector caused by unknown disturbances. We propose a novel trajectory planning method based on dynamics shaping and verify numerically and experimentally that the robotic arm can robustly hit the target in the desired direction with a simple open-loop control system even when a disturbance is applied.

  3. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
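
    A minimal sketch of the kind of sensitivity probe described above: a constant harvest effort tuned to a nominal Gordon-Schaefer model is simulated under perturbed growth rates. The model form is the standard logistic-growth fishery, but every parameter value and the policy itself are illustrative.

        # Sketch only: fixed-effort policy under growth-rate uncertainty.
        def simulate(effort, r, K=1.0, q=1.0, x0=0.5, dt=0.05, T=200.0):
            x, total_yield = x0, 0.0
            for _ in range(int(T / dt)):
                h = q * effort * x                  # harvest rate
                x = max(x + dt * (r * x * (1.0 - x / K) - h), 0.0)
                total_yield += dt * h
            return total_yield, x

        effort = 1.0 / 2.0                          # "optimal" for nominal r = 1, q = 1
        for r in [0.6, 0.8, 1.0, 1.2]:              # parametric uncertainty in r
            y, x_final = simulate(effort, r)
            print(f"r={r:.1f}: total yield {y:.2f}, final stock {x_final:.2f}")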

  4. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly smaller number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
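
    A minimal sketch of the Taguchi machinery mentioned above: an L4 orthogonal array for three two-level factors, a larger-is-better signal-to-noise ratio, and main-effect estimates. The response data are invented.

        # Sketch only: orthogonal-array analysis with invented responses.
        import numpy as np

        # L4 array: 3 two-level factors in 4 runs (levels coded 0/1)
        L4 = np.array([[0, 0, 0],
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])

        # replicated responses per run (e.g., under different noise conditions)
        y = np.array([[20., 22.], [28., 26.], [24., 25.], [31., 29.]])

        # larger-is-better S/N ratio: -10*log10(mean(1/y^2))
        sn = -10.0 * np.log10((1.0 / y**2).mean(axis=1))

        for factor in range(3):
            lvl0, lvl1 = sn[L4[:, factor] == 0].mean(), sn[L4[:, factor] == 1].mean()
            print(f"factor {factor}: best level {int(lvl1 > lvl0)}, "
                  f"effect {abs(lvl1 - lvl0):.2f} dB")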

  5. Enhancing robustness of coupled networks under targeted recoveries.

    PubMed

    Gong, Maoguo; Ma, Lijia; Cai, Qing; Jiao, Licheng

    2015-02-13

    Coupled networks are extremely fragile because a node failure of a network would trigger a cascade of failures on the entire system. Existing studies mainly focused on the cascading failures and the robustness of coupled networks when the networks suffer from attacks. In reality, it is necessary to recover the damaged networks, and there are cascading failures in recovery processes. In this study, firstly, we analyze the cascading failures of coupled networks during recoveries. Then, a recovery robustness index is presented for evaluating the resilience of coupled networks to cascading failures in the recovery processes. Finally, we propose a technique aiming at protecting several influential nodes for enhancing robustness of coupled networks under the recoveries, and adopt six strategies based on the potential knowledge of network centrality to find the influential nodes. Experiments on three coupled networks demonstrate that with a small number of influential nodes protected, the robustness of coupled networks under the recoveries can be greatly enhanced.
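
    A minimal sketch of the node-selection side of such a protection scheme: rank candidate influential nodes with a few standard centrality measures via networkx. The graph is synthetic, and the coupled-network cascading-recovery simulation itself is not reproduced here.

        # Sketch only: centrality-based shortlists of nodes to protect.
        import networkx as nx

        G = nx.barabasi_albert_graph(200, 3, seed=0)
        k = 10                                       # protection budget

        strategies = {
            "degree": nx.degree_centrality(G),
            "betweenness": nx.betweenness_centrality(G),
            "closeness": nx.closeness_centrality(G),
        }
        for name, scores in strategies.items():
            protected = sorted(scores, key=scores.get, reverse=True)[:k]
            print(name, sorted(protected))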

  6. Enhancing robustness of coupled networks under targeted recoveries

    PubMed Central

    Gong, Maoguo; Ma, Lijia; Cai, Qing; Jiao, Licheng

    2015-01-01

    Coupled networks are extremely fragile because a node failure of a network would trigger a cascade of failures on the entire system. Existing studies mainly focused on the cascading failures and the robustness of coupled networks when the networks suffer from attacks. In reality, it is necessary to recover the damaged networks, and there are cascading failures in recovery processes. In this study, firstly, we analyze the cascading failures of coupled networks during recoveries. Then, a recovery robustness index is presented for evaluating the resilience of coupled networks to cascading failures in the recovery processes. Finally, we propose a technique aiming at protecting several influential nodes for enhancing robustness of coupled networks under the recoveries, and adopt six strategies based on the potential knowledge of network centrality to find the influential nodes. Experiments on three coupled networks demonstrate that with a small number of influential nodes protected, the robustness of coupled networks under the recoveries can be greatly enhanced. PMID:25675980

  7. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems: Digital Equipment's OSF/1, Hewlett-Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  8. The robust beauty of ordinary information.

    PubMed

    Katsikopoulos, Konstantinos V; Schooler, Lael J; Hertwig, Ralph

    2010-10-01

    Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues that do not entail individual learning. Then we propose and test the thesis that when orders are learned individually, people's necessarily limited knowledge will curtail computational complexity while also achieving robustness. Using computer simulations, we compare the performance of the take-the-best heuristic--with dichotomized or undichotomized cues--to benchmarks such as the naïve Bayes algorithm across 19 environments. Even with minute sizes of training sets, take-the-best using undichotomized cues excels. For 10 environments, we probe people's intuitions about the direction of the correlation between cues and criterion. On the basis of these intuitions, in most of the environments take-the-best achieves the level of performance that would be expected from learning cue orders from 50% of the objects in the environments. Thus, ordinary information about cues--either gleaned from small training sets or intuited--can support robust performance without requiring Herculean computations.
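
    A minimal sketch of the take-the-best heuristic itself: compare two objects cue by cue, in descending order of cue validity, and decide on the first cue that discriminates. The cues and objects are invented.

        # Sketch only: take-the-best with binary cue profiles.
        def take_the_best(obj_a, obj_b, cue_order):
            """Return 'a', 'b', or 'guess'; objects map cue name -> 0/1."""
            for cue in cue_order:                    # cues sorted by validity
                if obj_a[cue] != obj_b[cue]:         # first discriminating cue decides
                    return 'a' if obj_a[cue] > obj_b[cue] else 'b'
            return 'guess'                           # no cue discriminates

        cue_order = ['capital', 'exposition', 'airport']       # hypothetical cues
        city_a = {'capital': 1, 'exposition': 0, 'airport': 1}
        city_b = {'capital': 1, 'exposition': 1, 'airport': 0}
        print(take_the_best(city_a, city_b, cue_order))        # -> 'b'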

  9. Key molecular processes of the diapause to post-diapause quiescence transition in the alfalfa leafcutting bee Megachile rotundata identified by comparative transcriptome analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Insect diapause (dormancy) synchronizes an insect’s life cycle to seasonal changes in the abiotic and biotic resources required for development and reproduction. Transcription analysis of Megachile rotundata diapause termination identified 399 post-diapause upregulated and 144 post-diapause down-reg...

  10. In Search for a Robust Design of Environmental Sensor Networks.

    PubMed

    Budi, Setia; Susanto, Ferry; de Souza, Paulo; Timms, Greg; Malhotra, Vishv; Turner, Paul

    2017-03-22

    This paper presents an approach to the design of Environmental Sensor Networks (ESN) that aims at providing a robust, fit-for-purpose network with minimum redundancy. A set of near-optimum ESN designs is sought using an Evolutionary Algorithm that incorporates redundancy and robustness as fitness functions. This work can assist the decision-making process when determining the number of sensor nodes and how the nodes are to be deployed in the region of interest.

  11. Designing a robust minimum variance controller using discrete slide mode controller approach.

    PubMed

    Alipouri, Yousef; Poshtan, Javad

    2013-03-01

    Designing minimum variance controllers (MVC) for nonlinear systems is confronted with many difficulties: methods able to identify MIMO nonlinear systems are scarce, the harsh control signals produced by MVC are a further disadvantage, and MVC is not inherently robust. In this article, the Vector ARX (VARX) model is used to model the system and disturbance simultaneously in order to tackle these disadvantages. To ensure the robustness of the control loop, the discrete sliding-mode controller design approach is used in designing the MVC and generalized MVC (GMVC). The proposed controller design method is tested on a nonlinear experimental Four-Tank benchmark process and compared with nonlinear MVCs designed by neural networks. In spite of the simplicity of designing GMVCs for VARX models with uncertainty, the results show that the proposed method is accurate and implementable.

  12. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  13. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Az + K(y − Cz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.

  14. Starfish: Robust spectroscopic inference tools

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-05-01

    Starfish is a set of tools used for spectroscopic inference. It robustly determines stellar parameters using high resolution spectral models and uses Markov Chain Monte Carlo (MCMC) to explore the full posterior probability distribution of the stellar parameters. Additional potential applications include other types of spectra, such as unresolved stellar clusters or supernovae spectra.

  15. Shaping robust system through evolution

    NASA Astrophysics Data System (ADS)

    Kaneko, Kunihiko

    2008-06-01

    Biological functions are generated as a result of developmental dynamics that form phenotypes governed by genotypes. The dynamical system for development is shaped through genetic evolution following natural selection based on the fitness of the phenotype. Here we study how this dynamical system is robust to noise during development and to genetic change by mutation. We adopt a simplified transcription regulation network model to govern gene expression, which gives a fitness function. Through simulations of the network that undergoes mutation and selection, we show that a certain level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during development shapes any network's robustness, not only to noise but also to mutations. We also establish a relationship between developmental and mutational robustness through phenotypic variances caused by genetic variation and epigenetic noise. A universal relationship between the two variances is derived, akin to the fluctuation-dissipation relationship known in physics.

  16. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  17. Performance analysis of robust road sign identification

    NASA Astrophysics Data System (ADS)

    Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.

    2013-12-01

    This study describes a performance analysis of a robust system for road sign identification that incorporates two stages with different algorithms: HSV color filtering in the detection stage and PCA in the recognition stage. The proposed algorithms are able to detect the three standard types of colored signs, namely red, yellow and blue. The hypothesis of the study is that road signs can be detected and identified even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensional size; sign images can be easily recognized and identified by the PCA method, as it has been used in many application areas. The experimental results show that the HSV stage is robust in road sign detection, with minimum success rates of 88% and 77% for non-partially and partially occluded images, respectively. Successful recognition rates using PCA are in the range of 94-98%, with all classes recognized successfully at occlusion levels between 5% and 10%.

  18. Robust influence angle for clustering mixed data sets

    NASA Astrophysics Data System (ADS)

    Aziz, Nazrina

    2014-07-01

    Of great importance in attempting to identify clusters of observations that may be present in a data set is how close the observations are to each other. Two observations are close when their dissimilarity is small. Some traditional distance functions cannot capture the pattern of dissimilarity among the observations, and a further demand is that the dissimilarity measurement should be able to deal with a variety of data types. This article proposes a new dissimilarity measure, the Robust Influence Angle (RIA), based on the eigenstructure of the covariance matrix and robust principal component scores. The proposed measure is able to identify clusters of observations and can also handle data sets with mixed variables.

  19. Identifying Executable Plans

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Jonsson, Ari K.; Frank, Jeremy D.; McGann, Conor

    2003-01-01

    Generating plans for execution imposes a different set of requirements on the planning process than those imposed by planning alone. In highly unpredictable execution environments, a fully-grounded plan may become inconsistent frequently when the world fails to behave as expected. Intelligent execution permits making decisions when the most up-to-date information is available, ensuring fewer failures. Planning should acknowledge the capabilities of the execution system, both to ensure robust execution in the face of uncertainty and to relieve the planner of the burden of making premature commitments. We present Plan Identification Functions (PIFs), which formalize what it means for a plan to be executable and are used in conjunction with a complete model of system behavior to halt the planning process when an executable plan is found. We describe the implementation of plan identification functions for a temporal, constraint-based planner. This particular implementation allows the description of many different plan identification functions. Depending on the characteristics of the execution environment, the best plan to hand to the execution system will contain more or less commitment and information.

  20. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  1. Robustness of network controllability in cascading failure

    NASA Astrophysics Data System (ADS)

    Chen, Shi-Ming; Xu, Yun-Fei; Nie, Sen

    2017-04-01

    It has been demonstrated that controlling complex networks in practice needs more inputs than predicted by the structural controllability framework. Moreover, since networks commonly face external or internal failures, we define parameters to evaluate the control cost and the variation of controllability after cascades, exploring the effect of the number of control inputs on controllability for random networks and scale-free networks during cascading failure. For different network topologies, the results show that the robustness of controllability can be strengthened by allocating different control inputs and edge capacities.

  2. ANN-implemented robust vision model

    NASA Astrophysics Data System (ADS)

    Teng, Chungte; Ligomenides, Panos A.

    1991-02-01

    A robust vision model has been developed and implemented with a self-organizing/unsupervised artificial neural network (ANN) classifier, KART, which is a novel hybrid model of a modified Kohonen feature map and the Carpenter/Grossberg ART architecture. The six moment invariants have been mapped onto a 7-dimensional unit hypersphere and applied to the KART classifier. In this paper the KART model is presented. The non-adaptive neural implementations of the image processing and the moment invariant feature extraction are discussed. In addition, simulation results that illustrate the capabilities of this model are provided.

  3. Intracortical remodeling parameters are associated with measures of bone robustness.

    PubMed

    Goldman, Haviva M; Hampson, Naomi A; Guth, J Jared; Lin, David; Jepsen, Karl J

    2014-10-01

    Prior work identified a novel association between bone robustness and porosity, which may be part of a broader interaction whereby the skeletal system compensates for the natural variation in robustness (bone width relative to length) by modulating tissue-level mechanical properties to increase stiffness of slender bones and to reduce mass of robust bones. To further understand this association, we tested the hypothesis that the relationship between robustness and porosity is mediated through intracortical, BMU-based (basic multicellular unit) remodeling. We quantified cortical porosity, mineralization, and histomorphometry at two sites (38% and 66% of the length) in human cadaveric tibiae. We found significant correlations between robustness and several histomorphometric variables (e.g., % secondary tissue [R(2)  = 0.68, P < 0.004], total osteon area [R(2)  = 0.42, P < 0.04]) at the 66% site. Although these associations were weaker at the 38% site, significant correlations between histological variables were identified between the two sites indicating that both respond to the same global effects and demonstrate a similar character at the whole bone level. Thus, robust bones tended to have larger and more numerous osteons with less infilling, resulting in bigger pores and more secondary bone area. These results suggest that local regulation of BMU-based remodeling may be further modulated by a global signal associated with robustness, such that remodeling is suppressed in slender bones but not in robust bones. Elucidating this mechanism further is crucial for better understanding the complex adaptive nature of the skeleton, and how interindividual variation in remodeling differentially impacts skeletal aging and an individuals' potential response to prophylactic treatments.

  4. Methodology for the conceptual design of a robust and opportunistic system-of-systems

    NASA Astrophysics Data System (ADS)

    Talley, Diana Noonan

    Systems are becoming more complicated, complex, and interrelated. Designers have recognized the need to develop systems from a holistic perspective and design them as Systems-of-Systems (SoS). The design of the SoS, especially in the conceptual design phase, is generally characterized by significant uncertainty. As a result, it is possible for all three types of uncertainty (aleatory, epistemic, and error) and the associated factors of uncertainty (randomness, sampling, confusion, conflict, inaccuracy, ambiguity, vagueness, coarseness, and simplification) to affect the design process. While there are a number of existing SoS design methods, several gaps have been identified: the ability to model all of the factors of uncertainty at varying levels of knowledge; the ability to consider both the pernicious and propitious aspects of uncertainty; and the ability to determine the value of reducing the uncertainty in the design process. While there are numerous uncertainty modeling theories, no one theory can effectively model every kind of uncertainty. This research presents a Hybrid Uncertainty Modeling Method (HUMM) that integrates techniques from the following theories: Probability Theory, Evidence Theory, Fuzzy Set Theory, and Info-Gap theory. The HUMM is capable of modeling all of the different factors of uncertainty and can model the uncertainty for multiple levels of knowledge. In the design process, there are both pernicious and propitious characteristics associated with the uncertainty. Existing design methods typically focus on developing robust designs that are insensitive to the associated uncertainty. These methods do not capitalize on the possibility of maximizing the potential benefit associated with the uncertainty. This research demonstrates how these deficiencies can be overcome by identifying the most robust and opportunistic design. In a design process it is possible that the most robust and opportunistic design will not be selected from the set

  5. Understanding and Identifying the Child at Risk for Auditory Processing Disorders: A Case Method Approach in Examining the Interdisciplinary Role of the School Nurse

    ERIC Educational Resources Information Center

    Neville, Kathleen; Foley, Marie; Gertner, Alan

    2011-01-01

    Despite receiving increased professional and public awareness since the initial American Speech Language Hearing Association (ASHA) statement defining Auditory Processing Disorders (APDs) in 1993 and the subsequent ASHA statement (2005), many misconceptions remain regarding APDs in school-age children among health and academic professionals. While…

  6. Identify Skills and Proficiency Levels Necessary for Entry-Level Employment for All Vocational Programs Using Computers to Process Data. Final Report.

    ERIC Educational Resources Information Center

    Crowe, Jacquelyn

    This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study comprised seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…

  7. Determining the rp-process flow through 56Ni: resonances in 57Cu(p,γ)58Zn identified with GRETINA.

    PubMed

    Langer, C; Montes, F; Aprahamian, A; Bardayan, D W; Bazin, D; Brown, B A; Browne, J; Crawford, H; Cyburt, R H; Domingo-Pardo, C; Gade, A; George, S; Hosmer, P; Keek, L; Kontos, A; Lee, I-Y; Lemasson, A; Lunderberg, E; Maeda, Y; Matos, M; Meisel, Z; Noji, S; Nunes, F M; Nystrom, A; Perdikakis, G; Pereira, J; Quinn, S J; Recchia, F; Schatz, H; Scott, M; Siegl, K; Simon, A; Smith, M; Spyrou, A; Stevens, J; Stroberg, S R; Weisshaar, D; Wheeler, J; Wimmer, K; Zegers, R G T

    2014-07-18

    An approach is presented to experimentally constrain previously unreachable (p, γ) reaction rates on nuclei far from stability in the astrophysical rp process. Energies of all critical resonances in the (57)Cu(p,γ)(58)Zn reaction are deduced by populating states in (58)Zn with a (d, n) reaction in inverse kinematics at 75 MeV/u, and detecting γ-ray-recoil coincidences with the state-of-the-art γ-ray tracking array GRETINA and the S800 spectrograph at the National Superconducting Cyclotron Laboratory. The results reduce the uncertainty in the (57)Cu(p,γ) reaction rate by several orders of magnitude. The effective lifetime of (56)Ni, an important waiting point in the rp process in x-ray bursts, can now be determined entirely from experimentally constrained reaction rates.

  8. Identifying the processes controlling the distribution of H2O2 in surface waters along a meridional transect in the eastern Atlantic

    NASA Astrophysics Data System (ADS)

    Steigenberger, S.; Croot, P. L.

    2008-02-01

    Hydrogen peroxide (H2O2) is an important oxidant for many bio-relevant trace metals and organic compounds and has potential as a tracer for mixing in near surface waters. In this study we combine H2O2 and bio-optical measurements with satellite data for a meridional transect from 46°N to 26°S in the eastern Atlantic in order to determine the key processes affecting its distribution. Surface H2O2 ranged from 21-123 nmol L-1, with maximum inventories (0-200 m) of 5.5-5.9 mmol m-2 found at 30°N and 25°S. Analyses showed a strong positive correlation of surface H2O2 with daily irradiances and recent precipitation, though poor correlations with CDOM suggest sunlight is the limiting reactant for H2O2 formation. Vertical distributions of H2O2 were controlled by a combination of mixing processes and phytoplankton activity. The present study highlights processes controlling global H2O2 distributions and points towards the development of parameterization schemes for prediction via satellite data.

  9. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs, algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
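
    A minimal sketch of the two quantities being compared: the algebraic connectivity (second-smallest eigenvalue of the graph Laplacian) and the node connectivity of a circular lattice graph, computed with networkx and numpy.

        # Sketch only: Fiedler value vs. node connectivity on a ring lattice.
        import networkx as nx
        import numpy as np

        G = nx.watts_strogatz_graph(50, 4, 0.0, seed=0)   # circular lattice, no rewiring
        L = nx.laplacian_matrix(G).toarray().astype(float)
        eigenvalues = np.sort(np.linalg.eigvalsh(L))
        print("algebraic connectivity:", eigenvalues[1])
        print("node connectivity:", nx.node_connectivity(G))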

  10. Robust background modelling in DIALS

    PubMed Central

    Parkhurst, James M.; Winter, Graeme; Waterman, David G.; Fuentes-Montero, Luis; Gildea, Richard J.; Murshudov, Garib N.; Evans, Gwyndaf

    2016-01-01

    A method for estimating the background under each reflection during integration that is robust in the presence of pixel outliers is presented. The method uses a generalized linear model approach that is more appropriate for use with Poisson distributed data than traditional approaches to pixel outlier handling in integration programs. The algorithm is most applicable to data with a very low background level where assumptions of a normal distribution are no longer valid as an approximation to the Poisson distribution. It is shown that traditional methods can result in the systematic underestimation of background values. This then results in the reflection intensities being overestimated and gives rise to a change in the overall distribution of reflection intensities in a dataset such that too few weak reflections appear to be recorded. Statistical tests performed during data reduction may mistakenly attribute this to merohedral twinning in the crystal. Application of the robust generalized linear model algorithm is shown to correct for this bias. PMID:27980508
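
    A minimal sketch in the spirit of (though not identical to) the algorithm described: a constant Poisson background level is estimated while iteratively rejecting pixels whose deviance residuals mark them as outliers. The cutoff and the synthetic data are illustrative.

        # Sketch only: robust constant background for Poisson pixel counts.
        import numpy as np

        def robust_poisson_background(counts, cutoff=3.0, n_iter=10):
            mask = np.ones(counts.shape, dtype=bool)
            mu = counts.mean()
            for _ in range(n_iter):
                mu = max(counts[mask].mean(), 1e-9)
                # Poisson deviance residuals relative to the current estimate
                term = counts * np.log(np.where(counts > 0, counts / mu, 1.0))
                dev = np.sign(counts - mu) * np.sqrt(2.0 * (term - (counts - mu)))
                mask = np.abs(dev) < cutoff          # keep pixels consistent with mu
            return mu

        rng = np.random.default_rng(6)
        pixels = rng.poisson(2.0, size=1000)
        pixels[:20] += 50                            # hot pixels / zingers
        print(robust_poisson_background(pixels))     # close to the true level of 2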

  11. A Robust Streaming Media System

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    Presently, application layer multicast (ALM) protocols are proposed as a substitute for IP multicast and have made extraordinary achievements. Integrated with multi-data-stream modes such as Multiple Description Coding (MDC), ALM becomes more scalable and robust in the highly dynamic Internet environment than single-data-stream approaches. Although MDC provides a flexible data transmission style, synchronizing the different descriptions encoded from one video source is difficult owing to the differing delays on diverse transmission paths. In this paper, an ALM system called HMDC is proposed to improve the received video quality of streaming media: hosts join separate overlay trees in different layers simultaneously, and the maximum set of synchronized descriptions within the same layer is worked out to acquire the best video quality. Simulations implemented on an Internet-like topology indicate that HMDC achieves better video quality, lower link stress, higher robustness and comparable latency compared with traditional ALM protocols.

  12. Single-sweep spectral analysis of contact heat evoked potentials: a novel approach to identify altered cortical processing after morphine treatment

    PubMed Central

    Hansen, Tine M; Graversen, Carina; Frøkjær, Jens B; Olesen, Anne E; Valeriani, Massimiliano; Drewes, Asbjørn M

    2015-01-01

    Aims The cortical response to nociceptive thermal stimuli recorded as contact heat evoked potentials (CHEPs) may be altered by morphine. However, previous studies have averaged CHEPs over multiple stimuli, which are confounded by jitter between sweeps. Thus, the aim was to assess single-sweep characteristics to identify alterations induced by morphine. Methods In a crossover study 15 single-sweep CHEPs were analyzed from 62 electroencephalography electrodes in 26 healthy volunteers before and after administration of morphine or placebo. Each sweep was decomposed by a continuous wavelet transform to obtain normalized spectral indices in the delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–32 Hz) and gamma (32–80 Hz) bands. The average distribution over all sweeps and channels was calculated for the four recordings for each volunteer, and the two recordings before treatments were assessed for reproducibility. Baseline corrected spectral indices after morphine and placebo treatments were compared to identify alterations induced by morphine. Results Reproducibility between baseline CHEPs was demonstrated. As compared with placebo, morphine decreased the spectral indices in the delta and theta bands by 13% (P = 0.04) and 9% (P = 0.007), while the beta and gamma bands were increased by 10% (P = 0.006) and 24% (P = 0.04). Conclusion The decreases in the delta and theta band are suggested to represent a decrease in the pain specific morphology of the CHEPs, which indicates a diminished pain response after morphine administration. Hence, assessment of spectral indices in single-sweep CHEPs can be used to study cortical mechanisms induced by morphine treatment. PMID:25556985
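
    A minimal sketch of the single-sweep spectral decomposition: a continuous wavelet transform of one sweep followed by normalized band indices. It uses the pywt library with an assumed Morlet wavelet, sampling rate, and synthetic data in place of the study's EEG recordings.

        # Sketch only: CWT band indices for one synthetic "sweep".
        import numpy as np
        import pywt

        fs = 500.0                                   # assumed sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(8)
        sweep = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)

        freqs_wanted = np.arange(0.5, 80.0, 0.5)     # 0.5-80 Hz analysis range
        scales = pywt.central_frequency('morl') * fs / freqs_wanted
        coef, freqs = pywt.cwt(sweep, scales, 'morl', sampling_period=1.0 / fs)

        power = (np.abs(coef) ** 2).sum(axis=1)      # power per frequency bin
        bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 12),
                 'beta': (12, 32), 'gamma': (32, 80)}
        total = power.sum()
        for name, (lo, hi) in bands.items():
            sel = (freqs >= lo) & (freqs < hi)
            print(f"{name}: {power[sel].sum() / total:.3f}")   # normalized index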

  13. Gearbox design for uncertain load requirements using active robust optimization

    NASA Astrophysics Data System (ADS)

    Salomon, Shaul; Avigad, Gideon; Purshouse, Robin C.; Fleming, Peter J.

    2016-04-01

    Design and optimization of gear transmissions have been intensively studied, but surprisingly the robustness of the resulting optimal design to uncertain loads has never been considered. Active Robust (AR) optimization is a methodology to design products that attain robustness to uncertain or changing environmental conditions through adaptation. In this study the AR methodology is utilized to optimize the number of transmissions, as well as their gearing ratios, for an uncertain load demand. The problem is formulated as a bi-objective optimization problem where the objectives are to satisfy the load demand in the most energy efficient manner and to minimize production cost. The results show that this approach can find a set of robust designs, revealing a trade-off between energy efficiency and production cost. This can serve as a useful decision-making tool for the gearbox design process, as well as for other applications.

  14. High-content analysis to leverage a robust phenotypic profiling approach to vascular modulation.

    PubMed

    Isherwood, Beverley J; Walls, Rebecca E; Roberts, Mark E; Houslay, Thomas M; Brave, Sandra R; Barry, Simon T; Carragher, Neil O

    2013-12-01

    Phenotypic screening seeks to identify substances that modulate phenotypes in a desired manner with the aim of progressing first-in-class agents. Successful campaigns require physiological relevance, robust screening, and an ability to deconvolute perturbed pathways. High-content analysis (HCA) is increasingly used in cell biology and offers one approach to prosecution of phenotypic screens, but challenges exist in exploitation where data generated are high volume and complex. We combine development of an organotypic model with novel HCA tools to map phenotypic responses to pharmacological perturbations. We describe implementation for angiogenesis, a process that has long been a focus for therapeutic intervention but has lacked robust models that recapitulate more completely mechanisms involved. The study used human primary endothelial cells in co-culture with stromal fibroblasts to model multiple aspects of angiogenic signaling: cell interactions, proliferation, migration, and differentiation. Multiple quantitative descriptors were derived from automated microscopy using custom-designed algorithms. Data were extracted using a bespoke informatics platform that integrates processing, statistics, and feature display into a streamlined workflow for building and interrogating fingerprints. Ninety compounds were characterized, defining mode of action by phenotype. Our approach for assessing phenotypic outcomes in complex assay models is robust and capable of supporting a range of phenotypic screens at scale.

  15. Robust modular product family design

    NASA Astrophysics Data System (ADS)

    Jiang, Lan; Allada, Venkat

    2001-10-01

    This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How to effectively design a product family (PF) that is robust enough to accommodate future customer requirements. (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.

  16. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as reduced-rank parallel factor analysis (PARAFAC), to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization into the hash function by dividing the video into (secret-key-based) pseudo-randomly selected overlapping sub-cubes to protect against intentional guessing and forgery. A detection-theoretic analysis of the proposed hash-based video identification is presented, in which we derive analytical approximations for error probabilities. Remarkably, these theoretical error estimates closely mimic the empirically observed error probabilities of our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions compared with state-of-the-art video hashing techniques.
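
    A loose sketch of the tensor idea, assuming the `tensorly` package is available: slice the video into key-seeded sub-cubes, run a low-rank PARAFAC on each, and quantize factor statistics into bits. The sub-cube count, rank, and quantizer below are placeholders, not the authors' parameters.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def video_hash(video, rank=4, seed=0):
    """Hash an order-3 video tensor (frames x height x width) via PARAFAC factors."""
    rng = np.random.default_rng(seed)      # secret key drives sub-cube selection
    f, h, w = video.shape
    bits = []
    for _ in range(8):                     # pseudo-randomly chosen overlapping sub-cubes
        t0, y0, x0 = rng.integers(0, f // 2), rng.integers(0, h // 2), rng.integers(0, w // 2)
        cube = tl.tensor(video[t0:t0 + f // 2, y0:y0 + h // 2, x0:x0 + w // 2])
        weights, factors = parafac(cube, rank=rank, n_iter_max=100)
        for fac in factors:
            # Placeholder quantizer: one bit per factor matrix.
            bits.append(int(fac.mean() > 0))
    return bits

print(video_hash(np.random.rand(32, 48, 48)))
```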

  17. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes, specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61 and the military's Black Hawk UH-60A, also produced by Sikorsky. The S-61 was chosen because its model details are readily available and because it can be limited to pitch and roll motion, reducing the number of degrees of freedom while retaining the two degrees of freedom that are the minimum required to prove the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk was included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  18. Kaposi's sarcoma-associated herpesvirus microRNA single-nucleotide polymorphisms identified in clinical samples can affect microRNA processing, level of expression, and silencing activity.

    PubMed

    Han, Soo-Jin; Marshall, Vickie; Barsov, Eugene; Quiñones, Octavio; Ray, Alex; Labo, Nazzarena; Trivett, Matthew; Ott, David; Renne, Rolf; Whitby, Denise

    2013-11-01

    Kaposi's sarcoma-associated herpesvirus (KSHV) encodes 12 pre-microRNAs that can produce 25 KSHV mature microRNAs. We previously reported single-nucleotide polymorphisms (SNPs) in KSHV-encoded pre-microRNA and mature microRNA sequences from clinical samples (V. Marshall et al., J. Infect. Dis., 195:645-659, 2007). To determine whether microRNA SNPs affect pre-microRNA processing and, ultimately, mature microRNA expression levels, we performed a detailed comparative analysis of (i) mature microRNA expression levels, (ii) in vitro Drosha/Dicer processing, and (iii) RNA-induced silencing complex-dependent targeting of wild-type (wt) and variant microRNA genes. Expression of pairs of wt and variant pre-microRNAs from retroviral vectors and measurement of KSHV mature microRNA expression by real-time reverse transcription-PCR (RT-PCR) revealed differential expression levels that correlated with the presence of specific sequence polymorphisms. Measurement of KSHV mature microRNA expression in a panel of primary effusion lymphoma cell lines by real-time RT-PCR recapitulated some observed expression differences but suggested a more complex relationship between sequence differences and expression of mature microRNA. Furthermore, in vitro maturation assays demonstrated significant SNP-associated changes in Drosha/DGCR8 and/or Dicer processing. These data demonstrate that SNPs within KSHV-encoded pre-microRNAs are associated with differential microRNA expression levels. Given the multiple reports on the involvement of microRNAs in cancer, the biological significance of these phenotypic and genotypic variants merits further studies in patients with KSHV-associated malignancies.

  20. TARGET Researchers Identify Mutations in SIX1/2 and microRNA Processing Genes in Favorable Histology Wilms Tumor | Office of Cancer Genomics

    Cancer.gov

    TARGET researchers molecularly characterized favorable histology Wilms tumor (FHWT), a pediatric renal cancer. Comprehensive genome and transcript analyses revealed single-nucleotide substitution/deletion mutations in microRNA processing genes (15% of FHWT patients) and Sine Oculis Homeobox Homolog 1/2 (SIX1/2) genes (7% of FHWT patients). SIX1/2 genes play a critical role in renal development and were not previously associated with FHWT, thus presenting a novel role for SIX1/2 pathway aberrations in this disease.

  1. Robust object tracking in compressed image sequences

    NASA Astrophysics Data System (ADS)

    Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre

    1998-10-01

    Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement in which the computing platform running the tracking algorithm resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed and transmitted to the ground to facilitate real-time downloading of the data over the available bandlimited channels. The tracking algorithm is run on a ground-based computer while tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.
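
    The record describes CWT-based tracking; as a runnable stand-in, the sketch below uses a discrete 2-D wavelet transform from PyWavelets (assumed installed) to suppress compression-like noise before a simple centroid detection. The wavelet, threshold rule, and 3-sigma mask are illustrative choices, not the paper's algorithm.

```python
import numpy as np
import pywt

def track_centroid(frame, wavelet="db2", keep=0.1):
    """Suppress compression artifacts with wavelet thresholding, then locate the target."""
    cA, (cH, cV, cD) = pywt.dwt2(frame, wavelet)
    # Soft-threshold detail coefficients, where blocky compression noise concentrates.
    thr = keep * max(np.abs(cH).max(), np.abs(cV).max(), np.abs(cD).max())
    cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
    clean = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    ys, xs = np.nonzero(clean > clean.mean() + 3 * clean.std())  # bright-target mask
    return (ys.mean(), xs.mean()) if ys.size else None

frame = np.zeros((64, 64)); frame[20:24, 40:44] = 1.0   # synthetic target
frame += 0.05 * np.random.randn(64, 64)                 # compression-like noise
print(track_centroid(frame))
```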

  2. Robust PCA based method for discovering differentially expressed genes.

    PubMed

    Liu, Jin-Xing; Wang, Yu-Tian; Zheng, Chun-Hou; Sha, Wen; Mi, Jian-Xun; Xu, Yong

    2013-01-01

    How to identify a set of genes that are relevant to a key biological process is an important issue in current molecular biology. In this paper, we propose a novel method to discover differentially expressed genes based on robust principal component analysis (RPCA). In our method, we treat the differentially and non-differentially expressed genes as the perturbation signals S and the low-rank matrix A, respectively. The perturbation signals S can be recovered from the gene expression data by using RPCA. To discover the differentially expressed genes associated with specific biological processes or functions, the scheme is as follows. Firstly, the matrix D of expression data is decomposed into the sum of two matrices, A and S, by using RPCA. Secondly, the differentially expressed genes are identified based on the matrix S. Finally, the differentially expressed genes are evaluated with tools based on Gene Ontology. A large number of experiments on hypothetical and real gene expression data are also provided, and the experimental results show that our method is efficient and effective.
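
    The D = A + S split described here is standard principal component pursuit; below is a compact NumPy sketch of the widely used inexact augmented-Lagrangian iteration. The parameter defaults (lambda = 1/sqrt(max(m, n)) and the mu heuristic) follow common practice and are not necessarily the paper's choices.

```python
import numpy as np

def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit: split D into low-rank A plus sparse S."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or (m * n) / (4.0 * np.abs(D).sum())
    A = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(max_iter):
        # Singular-value thresholding for the low-rank part...
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        A = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # ...and elementwise soft-thresholding for the sparse (differential) part.
        S = shrink(D - A + Y / mu, lam / mu)
        R = D - A - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return A, S

# Usage on a gene expression matrix: A, S = rpca(D); large |S| entries flag
# candidate differentially expressed genes.
```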

  3. An approach to identify SNPs in the gene encoding acetyl-CoA acetyltransferase-2 (ACAT-2) and their proposed role in metabolic processes in pig.

    PubMed

    Sodhi, Simrinder Singh; Ghosh, Mrinmoy; Song, Ki Duk; Sharma, Neelesh; Kim, Jeong Hyun; Kim, Nam Eun; Lee, Sung Jin; Kang, Chul Woong; Oh, Sung Jong; Jeong, Dong Kee

    2014-01-01

    The novel liver protein acetyl-CoA acetyltransferase-2 (ACAT2) is involved in beta-oxidation and lipid metabolism. Its comprehensive relative expression, in silico non-synonymous single nucleotide polymorphism (nsSNP) analysis, and annotation in terms of metabolic processes, together with another protein from the same family, acetyl-CoA acyltransferase-2 (ACAA2), were performed in Sus scrofa. This investigation was conducted to understand the most important nsSNPs of ACAT2 in terms of their effects on metabolic activities and protein conformation. The two most deleterious mutations, at residues 122 (I to V) and 281 (R to H), were found in ACAT2. Laboratory validation of gene expression also supported the differential expression of ACAT2 and ACAA2 predicted by the in silico analysis. Analysis of the relative expression of ACAT2 and ACAA2 in liver tissue of the Jeju native pig showed that the former was expressed at a significantly higher level (P<0.05). Overall, the computational prediction, supported by wet laboratory analysis, suggests that ACAT2 might contribute more to metabolic processes than ACAA2 in swine. Further association of SNPs in ACAT2 with production traits might guide efforts to improve growth performance in Jeju native pigs.

  4. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    PubMed

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures.

  5. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014

    PubMed Central

    2016-01-01

    Purpose: The objective of this research is to identify weaknesses of undergraduate programs in terms of personnel, finances, organizational management, and facilities from the perspective of faculty and library staff, and to determine factors that may facilitate program quality improvement. Methods: This descriptive, analytical survey is an applied evaluation study in which undergraduate departments of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences, and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context-input-process-product (CIPP) model in 2014. The study population consisted of three subgroups: department heads (n=10), faculty members (n=61), and library staff (n=10), for a total of 81 people. Data were collected through three researcher-designed questionnaires based on a Likert scale and were analyzed using descriptive and inferential statistics. Results: The results showed a desirable or relatively desirable situation for the factors in the context, input, process, and product fields, except for administration and finance and for research and educational spaces and equipment, which were in an undesirable situation. Conclusion: Based on the results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, the educational curriculum, and administration and finance, and recommended steps regarding finances, organizational management, and communication with graduates in order to improve the quality of this system. PMID:27240892

  6. Identifying overarching excipient properties towards an in-depth understanding of process and product performance for continuous twin-screw wet granulation.

    PubMed

    Willecke, N; Szepes, A; Wunderlich, M; Remon, J P; Vervaet, C; De Beer, T

    2017-02-14

    The overall objective of this work is to understand how excipient characteristics influence the process and product performance for a continuous twin-screw wet granulation process. The knowledge gained through this study is intended to be used for a Quality by Design (QbD)-based formulation design approach and formulation optimization. A total of 9 preferred fillers and 9 preferred binders were selected for this study. The selected fillers and binders were extensively characterized regarding their physico-chemical and solid state properties using 21 material characterization techniques. Subsequently, principal component analysis (PCA) was performed on the data sets of filler and binder characteristics in order to reduce the variety of single characteristics to a limited number of overarching properties. Four principal components (PC) explained 98.4% of the overall variability in the fillers data set, while three principal components explained 93.4% of the overall variability in the data set of binders. Both PCA models allowed in-depth evaluation of similarities and differences in the excipient properties.
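
    The dimensionality reduction step here maps directly onto a standard PCA workflow. A minimal sketch with scikit-learn, using a stand-in data matrix (9 fillers x 21 standardized characteristics) in place of the paper's measured excipient data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical matrix: 9 fillers x 21 material characteristics.
X = np.random.rand(9, 21)

pca = PCA(n_components=4)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
# Cumulative explained variance; the paper reports ~98.4% after 4 PCs for fillers.
print(pca.explained_variance_ratio_.cumsum())
```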

  7. On identified predictive control

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1993-01-01

    Self-tuning control algorithms are potential successors to the manually tuned PID controllers traditionally used in process control applications. A very attractive design method for self-tuning controllers, developed over recent years, is long-range predictive control (LRPC). The success of LRPC is due to its effectiveness with plants of unknown order and dead-time, which may be simultaneously nonminimum phase and unstable or have multiple lightly damped poles (as in the case of flexible structures or flexible robot arms). LRPC is a receding horizon strategy and can be, in general terms, summarized as follows. Using an assumed long-range (or multi-step) cost function, the optimal control law is found in terms of the unknown parameters of the predictor model of the process, the current input-output sequence, and the future reference signal sequence. The common approach is to assume that the input-output process model is known or separately identified and then to find the parameters of the predictor model. Once these are known, the optimal control law determines the control signal at the current time t, which is applied at the process input, and the whole procedure is repeated at the next time instant. Most of the recent research in this field is centered around the LRPC formulation developed by Clarke et al., known as generalized predictive control (GPC). GPC uses an ARIMAX/CARIMA model of the process in its input-output formulation. In this paper, the GPC formulation is used, but the process predictor model is derived from the state space formulation of the ARIMAX model and is directly identified over the receding horizon, i.e., using the current input-output sequence. The underlying technique in the design of the identified predictive control (IPC) algorithm is the identification algorithm for observer/Kalman filter Markov parameters developed by Juang et al. at NASA Langley Research Center and successfully applied to the identification of flexible structures.
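
    For reference, the multi-step cost function minimized at each receding-horizon step in Clarke's GPC is conventionally written as follows, with ŷ the predictor output, w the future reference, Δu the control increments, and λ a control weighting:

```latex
J(N_1, N_2, N_u) = \sum_{j=N_1}^{N_2} \left[\hat{y}(t+j \mid t) - w(t+j)\right]^2
                 + \lambda \sum_{j=1}^{N_u} \left[\Delta u(t+j-1)\right]^2
```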

  8. Utilization of PARAFAC-Modeled Excitation-Emission Matrix (EEM) Fluorescence Spectroscopy to Identify Biogeochemical Processing of Dissolved Organic Matter in a Northern Peatland.

    PubMed

    Tfaily, Malak M; Corbett, Jane E; Wilson, Rachel; Chanton, Jeffrey P; Glaser, Paul H; Cawley, Kaelin M; Jaffé, Rudolf; Cooper, William T

    2015-01-01

    In this study, we contrast the fluorescence properties of dissolved organic matter (DOM) in fens and bogs in a Northern Minnesota peatland using excitation-emission matrix fluorescence spectroscopy with parallel factor analysis (EEM-PARAFAC). EEM-PARAFAC identified four humic-like components and one protein-like component, and the dynamics of each were evaluated based on their distribution with depth as well as across sites differing in hydrology and major biological species. The PARAFAC-EEM experiments were supported by dissolved organic carbon (DOC) measurements, optical spectroscopy (UV-Vis), and compositional characterization by ultrahigh-resolution Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). The FT-ICR MS data indicate that metabolism in peatlands reduces the molecular weights of individual components of DOM, and that oxygen-rich, less aromatic molecules are selectively biodegraded. Our data suggest that different hydrologic and biological conditions within the larger peat ecosystem drive molecular changes in DOM, resulting in distinctly different chemical compositions and unique fluorescence fingerprints. PARAFAC modeling of EEM data coupled with ultrahigh-resolution FT-ICR MS has the potential to provide significant molecular-based information on DOM composition that will support efforts to better understand the composition, sources, and diagenetic status of DOM from different terrestrial and aquatic systems.

  9. A computational approach identifies two regions of Hepatitis C Virus E1 protein as interacting domains involved in viral fusion process

    PubMed Central

    Bruni, Roberto; Costantino, Angela; Tritarelli, Elena; Marcantonio, Cinzia; Ciccozzi, Massimo; Rapicetta, Maria; El Sawaf, Gamal; Giuliani, Alessandro; Ciccaglione, Anna Rita

    2009-01-01

    Background: The E1 protein of Hepatitis C Virus (HCV) can be dissected into two distinct hydrophobic regions: a central domain containing a hypothetical fusion peptide (FP), and a C-terminal domain (CT) comprising two segments, a pre-anchor and a trans-membrane (TM) region. In the currently accepted model of the viral fusion process, the FP and the TM regions are considered to be closely juxtaposed in the post-fusion structure, and their physical interaction cannot be excluded. In the present study, we took advantage of the natural sequence variability present among HCV strains to test, by purely sequence-based computational tools, the hypothesis that in this virus the fusion process involves the physical interaction of the FP and CT regions of E1. Results: Two computational approaches were applied. The first is based on the co-evolution paradigm of interacting peptides and, consequently, on the correlation between the distance matrices generated by applying the sequence alignment method to the FP and CT primary structures, respectively. In spite of the relatively low random genetic drift between genotypes, co-evolution analysis of sequences from five HCV genotypes revealed a greater correlation between the FP and CT domains than with respect to a control HCV sequence from the Core protein, giving clear, albeit still inconclusive, support to the physical interaction hypothesis. The second approach relies on a non-linear signal analysis method widely used in protein science called Recurrence Quantification Analysis (RQA). This method allows a direct comparison of domains for the presence of the common hydrophobicity patterns on which the physical interaction would be based. RQA greatly strengthened the reliability of the hypothesis by scoring a large number of cross-recurrences between the FP and CT hydrophobicity patterns, far outnumbering chance expectations and pointing to putative interaction sites. Intriguingly, mutations in the CT region of E1, reducing the
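
    The first approach (correlating the distance matrices of the two alignments) is essentially a mirror-tree computation. A minimal NumPy/SciPy sketch, where both inputs are hypothetical pairwise-distance matrices over the same set of strains:

```python
import numpy as np
from scipy.stats import pearsonr

def coevolution_score(dist_fp, dist_ct):
    """Mirror-tree style correlation between two inter-strain distance matrices.

    dist_fp / dist_ct: square matrices of pairwise sequence distances computed
    from alignments of the FP and CT regions across HCV strains (hypothetical input).
    """
    iu = np.triu_indices_from(dist_fp, k=1)  # use each strain pair once
    r, p = pearsonr(dist_fp[iu], dist_ct[iu])
    return r, p
```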

  10. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their use increases, so do their number and, with it, the challenge of correctly identifying them. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization, or video games, because even the smallest alteration of a 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods, built to resist legitimate alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security, and runtime performance, as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); a calculation of the hash collision probability is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
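
    FAR and FRR for a distance-based robust hash reduce to threshold sweeps over two empirical score distributions. A sketch with invented Gaussian stand-ins for the genuine (altered copy) and impostor (unrelated model) distances:

```python
import numpy as np

def far_frr(genuine, impostor, thresholds):
    """Empirical error rates for a distance-based hash matcher.

    genuine:  hash distances between a model and its allowed alterations
    impostor: hash distances between unrelated models
    """
    far = np.array([(impostor <= t).mean() for t in thresholds])  # wrong model accepted
    frr = np.array([(genuine > t).mean() for t in thresholds])    # altered copy rejected
    return far, frr

genuine = np.random.normal(5, 2, 1000).clip(0)    # hypothetical score distributions
impostor = np.random.normal(20, 4, 1000).clip(0)
far, frr = far_frr(genuine, impostor, np.linspace(0, 30, 61))
print("EER approx:", float(np.min(np.maximum(far, frr))))  # rough equal-error rate
```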

  11. Identifying phonological processing deficits in Northern Sotho-speaking children: The use of non-word repetition as a language assessment tool in the South African context.

    PubMed

    Wilsenach, Carien

    2016-05-20

    Diagnostic testing of speech/language skills in the African languages spoken in South Africa is a challenging task, as standardised language tests in the official languages of South Africa barely exist. Commercially available language tests are in English, and have been standardised in other parts of the world. Such tests are often translated into African languages, a practice that speech language therapists deem linguistically and culturally inappropriate. In response to the need for developing clinical language assessment instruments that could be used in South Africa, this article reports on data collected with a Northern Sotho non-word repetition task (NRT). Non-word repetition measures various aspects of phonological processing, including phonological working memory (PWM), and is used widely by speech language therapists, linguists, and educational psychologists in the Western world. The design of a novel Northern Sotho NRT is described, and it is argued that the task could be used successfully in the South African context to discriminate between children with weak and strong Northern Sotho phonological processing ability, regardless of the language of learning and teaching. The NRT was piloted with 120 third graders, and showed moderate to strong correlations with other measures of PWM, such as digit span and English non-word repetition. Furthermore, the task was positively associated with both word and fluent reading in Northern Sotho, and it reliably predicted reading outcomes in the tested population. Suggestions are made for improving the current version of the Northern Sotho NRT, whereafter it should be suitable to test learners from various age groups.

  12. Designing Flood Management Systems for Joint Economic and Ecological Robustness

    NASA Astrophysics Data System (ADS)

    Spence, C. M.; Grantham, T.; Brown, C. M.; Poff, N. L.

    2015-12-01

    Freshwater ecosystems across the United States are threatened by hydrologic change caused by water management operations and non-stationary climate trends. Nonstationary hydrology also threatens the performance of flood management systems. Ecosystem managers and flood risk managers need tools to design systems that achieve flood risk reduction objectives while sustaining ecosystem functions and services in an uncertain hydrologic future. Robust optimization is used in water resources engineering to guide system design under climate change uncertainty. Using principles introduced by Eco-Engineering Decision Scaling (EEDS), we extend robust optimization techniques to design flood management systems that meet both economic and ecological goals simultaneously across a broad range of future climate conditions. We use three alternative robustness indices to identify flood risk management solutions that preserve critical ecosystem functions in a case study from the Iowa River, where recent severe flooding has tested the limits of the existing flood management system. We seek design modifications to the system that reduce the expected cost of flood damage while increasing ecologically beneficial inundation of riparian floodplains across a wide range of plausible climate futures. The first robustness index measures robustness as the fraction of potential climate scenarios in which both engineering and ecological performance goals are met, implicitly weighting each climate scenario equally. The second index builds on the first by using climate projections to weight each climate scenario, prioritizing acceptable performance in the climate scenarios most consistent with projections. The last index measures robustness as mean performance across all climate scenarios, but penalizes scenarios with worse performance than average, rewarding consistency. Results stemming from the alternative robustness indices reflect implicit assumptions about attitudes toward risk and reveal the
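
    One way to make the three indices concrete (our reading of the abstract, with invented definitions where it is silent): a satisficing fraction, a projection-weighted satisficing fraction, and a mean penalized by below-average scenarios.

```python
import numpy as np

def robustness_indices(perf, goal, scenario_weights=None):
    """Three alternative robustness indices over per-scenario performance.

    perf: performance score in each climate scenario (higher is better)
    goal: minimum acceptable performance
    scenario_weights: optional weights derived from climate projections
    """
    ok = perf >= goal
    r1 = ok.mean()                                    # equal-weight satisficing
    w = (np.full(perf.shape, 1.0 / perf.size) if scenario_weights is None
         else scenario_weights / scenario_weights.sum())
    r2 = (w * ok).sum()                               # projection-weighted satisficing
    shortfall = np.minimum(perf - perf.mean(), 0.0)
    r3 = perf.mean() + shortfall.mean()               # mean, penalizing bad scenarios
    return r1, r2, r3

perf = np.array([3.1, 2.4, 1.8, 2.9, 0.9])            # hypothetical scores
print(robustness_indices(perf, goal=2.0))
```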

  13. The Development and Robustness of Young Children's Understanding of Aspectuality

    ERIC Educational Resources Information Center

    Waters, Gillian M.; Beck, Sarah R.

    2009-01-01

    We investigated whether 6-year-olds' understanding of perceptual aspectuality was sufficiently robust to deal with the presence of irrelevant information. A total of 32 children chose whether to look or feel to locate a specific object (identifiable by sight or touch) from four objects that were hidden. In half of the trials, the objects were…

  14. A Comparison of Rasch Person Analysis and Robust Estimators.

    ERIC Educational Resources Information Center

    Smith, Richard M.

    1985-01-01

    Standard maximum likelihood estimation was compared with two forms of robust estimation, BIWEIGHT (based on Tukey's biweight) and AMTJACK (AMT-robustified jackknife), and with Rasch model person analysis. The procedures recovered the generating parameters, but Rasch person analysis also helped to identify the nature of a response disturbance. (GDC)

  15. Robust and efficient overset grid assembly for partitioned unstructured meshes

    SciTech Connect

    Roget, Beatrice Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning. Another challenge arises because of the large variation in the type of mesh-block overlap and the resulting large load imbalance on multiple processors. Desirable traits for the grid assembly method are efficiency (requiring only a small fraction of the solver time), robustness (correct identification of all point types), and full automation (no user input required other than the mesh system). Additionally, the method should be scalable, which is an important challenge due to the inherent load imbalance. This paper describes a fully-automated grid assembly method, which can use two different donor search algorithms. One is based on the use of auxiliary grids and Exact Inverse Maps (EIM), and the other is based on the use of Alternating Digital Trees (ADT). The EIM method is demonstrated to be more efficient than the ADT method, while retaining robustness. An adaptive load re-balance algorithm is also designed and implemented, which considerably improves the scalability of the method.
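
    The core donor-search step can be prototyped in a few lines: use a KD-tree over cell centroids to shortlist candidate cells, then apply an exact barycentric containment test. This generic KD-tree stand-in is neither the paper's EIM nor its ADT variant, and the tetrahedral mesh layout below is assumed for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_donors(cell_centroids, cells, nodes, query_points, k=8):
    """Toy donor search: KD-tree candidate cells, then an exact containment test.

    cells: (n_cells, 4) indices into nodes, tetrahedral mesh (hypothetical layout).
    Returns the containing cell id for each query point, or -1 (hole/orphan).
    """
    tree = cKDTree(cell_centroids)
    donors = np.full(len(query_points), -1, dtype=int)
    for i, p in enumerate(query_points):
        _, cand = tree.query(p, k=k)           # nearby cells are the likely donors
        for c in np.atleast_1d(cand):
            v = nodes[cells[c]]
            # Barycentric containment test for a tetrahedron.
            T = np.column_stack((v[1] - v[0], v[2] - v[0], v[3] - v[0]))
            try:
                b = np.linalg.solve(T, p - v[0])
            except np.linalg.LinAlgError:
                continue                       # degenerate cell; skip it
            if b.min() >= -1e-12 and b.sum() <= 1 + 1e-12:
                donors[i] = c
                break
    return donors
```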

  16. Robustness of mission plans for unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Niendorf, Moritz

    , and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft are given.

  17. Understanding interdisciplinary health care teams: using simulation design processes from the Air Carrier Advanced Qualification Program to identify and train critical teamwork skills.

    PubMed

    Hamman, William R; Beaudin-Seiler, Beth M; Beaubien, Jeffrey M

    2010-09-01

    In the report "Five Years After 'To Err is Human' ", it was noted that "the combination of complexity, professional fragmentation, and a tradition of individualism, enhanced by a well-entrenched hierarchical authority structure and diffuse accountability, forms a daunting barrier to creating the habits and beliefs of common purpose, teamwork, and individual accountability for successful interdependence that a safe culture requires". Training physicians, nurses, and other professionals to work in teams is a concept that has been promoted by many patient safety experts. However the model of teamwork in healthcare is diffusely defined, no clear performance metrics have been established, and the use of simulation to train teams has been suboptimal. This paper reports on the first three years of work performed in the Michigan Economic Development Corporation (MEDC) Tri-Corridor life science grant to apply concepts and processes of simulation design that were developed in the air carrier industry to understand and train healthcare teams. This work has been monitored by the American Academy for the Advancement of Science (AAA) and is based on concepts designed in the Advanced Qualification Program (AQP) from the air carrier industry, which trains and assesses teamwork skills in the same manner as technical skills. This grant has formed the foundation for the Center of Excellence for Simulation Education and Research (CESR).

  18. Stochastic simulation and robust design optimization of integrated photonic filters

    NASA Astrophysics Data System (ADS)

    Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca

    2017-01-01

    Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
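
    A scalar toy version of the polynomial-chaos step, assuming a single standard-normal fabrication uncertainty and a made-up bandwidth response in place of the circuit simulator; the sparse, multi-parameter construction in the paper is considerably more involved.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

# Stand-in response: 3 dB bandwidth of a ring filter vs. a standard-normal
# fabrication deviation xi (both the model and its coefficients are invented).
def bandwidth(xi):
    return 10.0 + 0.8 * xi - 0.15 * xi**2

xi = np.random.randn(200)                  # training samples of the uncertainty
V = hermevander(xi, 4)                     # probabilists' Hermite basis He_0..He_4
coef, *_ = np.linalg.lstsq(V, bandwidth(xi), rcond=None)
# Orthogonality of He_k under N(0,1): mean = c_0, variance = sum_k k! c_k^2.
var = sum(factorial(k) * coef[k] ** 2 for k in range(1, 5))
print("mean:", coef[0], "std:", np.sqrt(var))   # ~9.85 and ~0.83 analytically
```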

  20. Robust Registration of Dynamic Facial Sequences.

    PubMed

    Sariyanidi, Evangelos; Gunes, Hatice; Cavallaro, Andrea

    2017-04-01

    Accurate face registration is a key step for several image analysis applications. However, existing registration methods are prone to temporal drift errors or jitter among consecutive frames. In this paper, we propose an iterative rigid registration framework that estimates the misalignment with trained regressors. The input of the regressors is a robust motion representation that encodes the motion between a misaligned frame and the reference frame(s), and enables reliable performance under non-uniform illumination variations. Drift errors are reduced when the motion representation is computed from multiple reference frames. Furthermore, we use the L2 norm of the representation as a cue for performing coarse-to-fine registration efficiently. Importantly, the framework can identify registration failures and correct them. Experiments show that the proposed approach achieves significantly higher registration accuracy than the state-of-the-art techniques in challenging sequences.

  1. Singularity analysis and robust neighborhood statistics

    NASA Astrophysics Data System (ADS)

    Zuo, Renguang

    2015-04-01

    Neighborhood statistics, computed from data within small neighborhoods, have the advantage of revealing more detailed local structures and spatial variations of spatial patterns, and provide less biased information than global statistics. However, the resulting statistics are influenced by the size of the neighborhood. Singularity analysis can be regarded as a type of robust neighborhood statistics: it measures the gradient of relative change within small neighborhoods. The value of the singularity index at a location z depends not on the element concentration at that location itself but on the changes around z, and, from the multifractal theory viewpoint, it is independent of the size of the neighborhood. Singularity analysis is a powerful tool for identifying geochemical and geophysical anomalies in mineral exploration. Recent studies have demonstrated that singularity analysis can detect weak geochemical anomalies related to mineralization despite the decaying and masking effects of overlying cover.
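
    A window-based estimator in the spirit of Cheng's multifractal formulation (exact conventions vary by study; the window sizes and grid here are placeholders): mean concentration in a window of side e scales as C(e) ~ c * e**(alpha - 2) on a 2-D grid, so alpha is recovered from the slope of log C against log e.

```python
import numpy as np

def singularity_index(grid, y, x, half_widths=(1, 2, 4, 8)):
    """Estimate the local singularity exponent alpha at (y, x) on a 2-D grid."""
    sizes, means = [], []
    for h in half_widths:
        win = grid[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
        sizes.append(2 * h + 1)
        means.append(win.mean())
    # Slope of log(mean concentration) vs. log(window size) gives alpha - 2.
    slope = np.polyfit(np.log(sizes), np.log(means), 1)[0]
    return 2.0 + slope
```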

  2. Contextualizing the Genes Altered in Bladder Neoplasms in Pediatric and Teen Patients Allows Identifying Two Main Classes of Biological Processes Involved and New Potential Therapeutic Targets

    PubMed Central

    Porrello, A.; Piergentili, R.

    2016-01-01

    Research on bladder neoplasms in pediatric and teen patients (BNPTP) has described 21 genes, which are variously involved in this disease and are mostly responsible for deregulated cell proliferation. However, due to the limited number of publications on this subject, it is still unclear what type of relationships there are among these genes and which are the chances that, while having different molecular functions, they i) act as downstream effector genes of well-known pro- or anti-proliferative stimuli and/or interplay with biochemical pathways having oncological relevance or ii) are specific and, possibly, early biomarkers of these pathologies. A Gene Ontology (GO)-based analysis showed that these 21 genes are involved in biological processes, which can be split into two main classes: cell regulation-based and differentiation/development-based. In order to understand the involvement/overlapping with main cancer-related pathways, we performed a meta-analysis dependent on the 189 oncogenic signatures of the Molecular Signatures Database (OSMSD) curated by the Broad Institute. We generated a binary matrix with 53 gene signatures having at least one hit; this analysis i) suggests that some genes of the original list show inconsistencies and might need to be experimentally re-assessed or evaluated as biomarkers (in particular, ACTA2) and ii) allows hypothesizing that important (proto)oncogenes (E2F3, ERBB2/HER2, CCND1, WNT1, and YAP1) and (putative) tumor suppressors (BRCA1, RBBP8/CTIP, and RB1-RBL2/p130) may participate in the onset of this disease or worsen the observed phenotype, thus expanding the list of possible molecular targets for the treatment of BNPTP. PMID:27013923

  3. Inductive robust principal component analysis.

    PubMed

    Bao, Bing-Kun; Liu, Guangcan; Xu, Changsheng; Yan, Shuicheng

    2012-08-01

    In this paper, we address the error correction problem, that is, to uncover the low-dimensional subspace structure from high-dimensional observations, which are possibly corrupted by errors. When the errors are of Gaussian distribution, Principal Component Analysis (PCA) can find the optimal (in terms of least-square error) low-rank approximation to high-dimensional data. However, the canonical PCA method is known to be extremely fragile to the presence of gross corruptions. Recently, Wright et al. established the so-called Robust Principal Component Analysis (RPCA) method, which can handle grossly corrupted data well [14]. However, RPCA is a transductive method and does not handle well new samples that are not involved in the training procedure. Given a new datum, RPCA essentially needs to recalculate over all the data, resulting in high computational cost. RPCA is therefore inappropriate for applications that require fast online computation. To overcome this limitation, in this paper we propose an Inductive Robust Principal Component Analysis (IRPCA) method. Given a set of training data, unlike RPCA, which aims at recovering the original data matrix, IRPCA aims at learning the underlying projection matrix, which can be used to efficiently remove possible corruptions from any datum. The learning is done by solving a nuclear-norm regularized minimization problem, which is convex and can be solved in polynomial time. Extensive experiments on a benchmark human face dataset and two video surveillance datasets show that IRPCA is not only robust to gross corruptions but also handles new data well and efficiently.

  4. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and the software that embodies it. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. It facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  5. Robust Soldier Crab Ball Gate

    NASA Astrophysics Data System (ADS)

    Gunji, Yukio-Pegio; Nishiyama, Yuta; Adamatzky, Andrew

    2011-09-01

    Based on field observations of soldier crabs, we previously proposed a model for a swarm of soldier crabs. Here, we describe the interaction of coherent swarms in the simulation model and implement it in a logic gate. Because a swarm is generated by inherent perturbation, it can be generated and maintained under highly perturbed conditions. Thus, the model realizes a robust logic gate rather than merely a stable one. In addition, we show that the swarm-based logic gate can also be implemented with real soldier crabs (Mictyris guinotae).

  6. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  7. Comparative proteome analysis of robust Saccharomyces cerevisiae: insights into industrial continuous and batch fermentation.

    PubMed

    Cheng, Jing-Sheng; Qiao, Bin; Yuan, Ying-Jin

    2008-11-01

    A robust Saccharomyces cerevisiae strain has been widely applied in continuous and batch/fed-batch industrial fermentation. However, little is known about the molecular basis of the fermentative behavior of this strain in the two realistic fermentation processes. In this paper, we present comparative proteomic profiling of the industrial yeast in these industrial fermentation processes. The expression levels of most identified proteins were closely interrelated with the different stages of the fermentation processes. Our results indicate that, among the 47 identified protein spots, 17, belonging to 12 enzymes, were involved in the pentose phosphate, glycolysis, and gluconeogenesis pathways and in the glycerol biosynthetic process, indicating that a number of pathways will need to be inactivated to improve ethanol production. The differential expression of eight oxidative-response and heat-shock proteins was also identified, suggesting that it is necessary to maintain the correct cellular redox or osmotic state in the two industrial fermentation processes. Moreover, there are significant differences in the changes of protein levels between the two industrial fermentation processes, especially for those proteins associated with the glycolysis and gluconeogenesis pathways. These findings provide a molecular understanding of the physiological adaptation of the industrial strain and help optimize the performance of industrial bioethanol fermentation.

  8. Parameter uncertainty-based pattern identification and optimization for robust decision making on watershed load reduction

    NASA Astrophysics Data System (ADS)

    Jiang, Qingsong; Su, Han; Liu, Yong; Zou, Rui; Ye, Rui; Guo, Huaicheng

    2017-04-01

    Nutrient load reduction at the watershed scale is essential for restoring eutrophic lakes. Efficient, optimal decision-making on load reduction is generally based on water quality modeling and the quantitative identification of nutrient sources at the watershed scale. The modeling process is inevitably influenced by inherent uncertainties, especially uncertain parameters due to equifinality. Therefore, the emerging question is: given parameter uncertainty, how can the robustness of the optimal decisions be ensured? Based on simulation-optimization models, an integrated approach combining pattern identification and robustness analysis is proposed in this study, focusing on the impact of parameter uncertainty in water quality modeling. Here, a pattern represents the discernible regularity of load-reduction solutions under multiple parameter sets. Pattern identification is achieved by using a hybrid clustering analysis (i.e., Ward-hierarchical followed by K-means), which proved flexible and efficient in a case study of Lake Bali near the Yangtze River in China. The results demonstrate that the urban domestic nutrient load is the source that should most likely be reduced, and that there are two patterns for Total Nitrogen (TN) reduction and three patterns for Total Phosphorus (TP) reduction. The patterns indicate different total reductions of nutrient loads, reflecting diverse decision preferences. The robust solution was identified as the one with the highest goal accomplishment, under which water quality at the monitoring stations improved uniformly. This process analysis of robust decision-making, based on pattern identification and uncertainty, provides effective support for decision-making with preferences under uncertainty.
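
    One plausible reading of the Ward-then-K-means hybrid, sketched with scikit-learn on an invented solution matrix (rows are load-reduction solutions obtained under different parameter sets):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Hypothetical input: rows = load-reduction solutions from different parameter
# sets, columns = reduction assigned to each nutrient source.
solutions = np.random.rand(200, 6)

# The Ward hierarchical step suggests the grouping into patterns...
ward = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(solutions)
# ...and K-means refines the partition, seeded with the Ward cluster means.
seeds = np.vstack([solutions[ward.labels_ == k].mean(axis=0) for k in range(3)])
patterns = KMeans(n_clusters=3, init=seeds, n_init=1).fit(solutions)
print(np.bincount(patterns.labels_))  # pattern sizes
```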

  9. Statistical analysis of wines using a robust compositional biplot.

    PubMed

    Hron, K; Jelínková, M; Filzmoser, P; Kreuziger, R; Bednář, P; Barták, P

    2012-02-15

    Eight phenolic acids (vanillic, gentisic, protocatechuic, syringic, gallic, coumaric, ferulic and caffeic) were quantitatively determined in 30 commercially available wines from South Moravia by gas chromatography-mass spectrometry. Raw (untransformed) and centered log-ratio transformed data were evaluated by classical and robust versions of principal component analysis (PCA). A robust compositional biplot of the centered log-ratio transformed data gives the best resolution of particular categories of wines. Vanillic, syringic and gallic acids were identified as presumed markers occurring in relatively higher concentrations in red wines. Gentisic and caffeic acids were tentatively suggested as prospective technological markers, presumably reflecting certain technological aspects of wine making.
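
    For reference, the centered log-ratio transform applied before the (robust) PCA and biplot is defined, for a D-part composition x with geometric mean g(x), as:

```latex
\operatorname{clr}(\mathbf{x}) = \left(\ln\frac{x_1}{g(\mathbf{x})}, \ldots, \ln\frac{x_D}{g(\mathbf{x})}\right),
\qquad g(\mathbf{x}) = \left(\prod_{i=1}^{D} x_i\right)^{1/D}
```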

  10. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of the data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extending the stability boundary beyond the flight regime.

  11. Robust Inflation from fibrous strings

    SciTech Connect

    Burgess, C.P.; Cicoli, M.; Alwis, S. de; Quevedo, F.

    2016-05-13

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tuning of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models, including the key (though often ignored) issue of modulus stabilisation, to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index n_s of the form r ∝ (n_s − 1)^2, where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular, we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.

  12. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L, such that L(q) = S(q), where q denotes the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  13. Problems Identifying Independent and Dependent Variables

    ERIC Educational Resources Information Center

    Leatham, Keith R.

    2012-01-01

    This paper discusses one step from the scientific method--that of identifying independent and dependent variables--from both scientific and mathematical perspectives. It begins by analyzing an episode from a middle school mathematics classroom that illustrates the need for students and teachers alike to develop a robust understanding of…

  14. Automated robust registration of grossly misregistered whole-slide images with varying stains

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combining multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and to severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a representation of the whole-slide images that is less dependent on stain color. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost-function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as the metric. The median landmark registration error was around 180 microns, which indicates that performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.
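
    The coarse template-matching stage can be sketched with scikit-image. This stand-in uses plain normalized correlation on optical-density masks and omits the paper's HSD conversion, rotation search, and normalized-mutual-information scoring; the preprocessing is assumed.

```python
import numpy as np
from skimage.feature import match_template

def coarse_align(fixed_od, moving_od):
    """Coarse translation between two optical-density masks via template matching.

    fixed_od / moving_od: downsampled optical-density images of the H&E and
    IHC slides (hypothetical preprocessing of the whole-slide images).
    """
    h, w = moving_od.shape
    template = moving_od[h // 4: 3 * h // 4, w // 4: 3 * w // 4]  # central patch
    score = match_template(fixed_od, template)
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return dy - h // 4, dx - w // 4  # translation of moving relative to fixed
```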

  15. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working on images is that processing is much faster, well proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and on specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and gives more robust results in the complex 3D cases.
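
    As a rough illustration of this pipeline, the Python sketch below grows segments by voxel connectivity and classifies them with a Random Forest. It is a sketch under stated assumptions, not the paper's method: a plain voxel grid stands in for the octree, and the per-segment features are toy stand-ins for the paper's specific descriptors.

      import numpy as np
      from scipy import ndimage

      def segment(points, voxel=0.2):
          # Region-growing surrogate: label 26-connected occupied voxels;
          # each point inherits the label of its voxel.
          idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
          grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
          grid[tuple(idx.T)] = True
          labels, _ = ndimage.label(grid, structure=np.ones((3, 3, 3)))
          return labels[tuple(idx.T)]  # segment id per point

      def descriptors(points, labels):
          # Toy per-segment features: bounding-box extent and point count.
          feats = []
          for s in np.unique(labels):
              seg = points[labels == s]
              feats.append(np.r_[seg.max(axis=0) - seg.min(axis=0), len(seg)])
          return np.array(feats)

      # Classification stage (sketch):
      #   from sklearn.ensemble import RandomForestClassifier
      #   clf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_classes)
      #   pred = clf.predict(descriptors(points, segment(points)))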

  16. Topological properties of robust biological and computational networks.

    PubMed

    Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv

    2014-07-06

    Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields.

  17. Robust optimization of nonlinear impulsive rendezvous with uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, YaZhong; Yang, Zhen; Li, HengNian

    2014-04-01

    Many current optimal rendezvous trajectory designs do not incorporate practical uncertainties into the closed loop of the design. A robust optimization design method for a nonlinear rendezvous trajectory with uncertainty is proposed in this paper. A performance index related to the variances of the terminal state error is termed the robustness performance index, and a two-objective optimization model (minimizing both the characteristic velocity and the robustness performance index) is formulated on the basis of the Lambert algorithm. A multi-objective, non-dominated sorting genetic algorithm is employed to obtain the Pareto optimal solution set. It is shown that the proposed approach can quickly reveal several inherent principles of the rendezvous trajectory by taking practical errors into account. Furthermore, the approach can identify the most preferable design space in which a specific solution for the actual rendezvous control application should be chosen.
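
    The two-objective model yields a Pareto set rather than a single solution. The Python sketch below (with hypothetical candidate data) shows the non-dominated filtering at the heart of such a method: it keeps every candidate that no other candidate beats on both the characteristic velocity and the robustness index.

      import numpy as np

      def pareto_front(objectives):
          # objectives: (n, 2) array; both columns are to be minimized.
          n = len(objectives)
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              # Candidate i is dominated if some candidate is no worse on both
              # objectives and strictly better on at least one.
              dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                            np.any(objectives < objectives[i], axis=1)
              keep[i] = not dominates_i.any()
          return keep

      # Hypothetical candidates: columns are [characteristic velocity, robustness index].
      costs = np.random.rand(200, 2)
      front = costs[pareto_front(costs)]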

  18. Topological properties of robust biological and computational networks

    PubMed Central

    Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv

    2014-01-01

    Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields. PMID:24789562

  19. Robust, multifunctional flood defenses in the Dutch rural riverine area

    NASA Astrophysics Data System (ADS)

    van Loon-Steensma, J. M.; Vellinga, P.

    2014-05-01

    This paper reviews the possible functions as well as the strengths, weaknesses, opportunities, and threats of robust flood defenses in the rural riverine areas of the Netherlands, on the basis of the recent literature and case studies at five locations in the Netherlands where dike reinforcement is planned. For each of the case studies, semi-structured interviews with experts and stakeholders were conducted. At each of the five locations, suitable robust flood defenses could be identified that would contribute to the envisaged functions and ambitions for the respective areas. The primary strengths of a robust, multifunctional dike in comparison to a traditional dike appeared to be more efficient use of space due to the combination of different functions, a longer-term focus, and greater safety.

  20. Robust Identification of Noncoding RNA from Transcriptomes Requires Phylogenetically-Informed Sampling

    PubMed Central

    Lai, Alicia Sook-Wei; Eldai, Hisham; Liu, Wenting; McGimpsey, Stephanie; Wheeler, Nicole E.; Biggs, Patrick J.; Thomson, Nick R.; Barquist, Lars; Poole, Anthony M.; Gardner, Paul P.

    2014-01-01

    Noncoding RNAs are integral to a wide range of biological processes, including translation, gene regulation, host-pathogen interactions and environmental sensing. While genomics is now a mature field, our capacity to identify noncoding RNA elements in bacterial and archaeal genomes is hampered by the difficulty of de novo identification. The emergence of new technologies for characterizing transcriptome outputs, notably RNA-seq, is improving noncoding RNA identification and expression quantification. However, a major challenge is to robustly distinguish functional outputs from transcriptional noise. To establish whether annotation of existing transcriptome data has effectively captured all functional outputs, we analysed over 400 publicly available RNA-seq datasets spanning 37 different Archaea and Bacteria. Using comparative tools, we identify close to a thousand highly expressed candidate noncoding RNAs. However, our analyses reveal that the capacity to identify noncoding RNA outputs is strongly dependent on phylogenetic sampling. Surprisingly, and in stark contrast to protein-coding genes, the phylogenetic window for effective use of comparative methods is perversely narrow: aggregating public datasets produced only one phylogenetic cluster where these tools could be used to robustly separate unannotated noncoding RNAs from a null hypothesis of transcriptional noise. Our results show that for the full potential of transcriptomics data to be realized, a change in experimental design is paramount: effective transcriptomics requires phylogeny-aware sampling. PMID:25357249

  1. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether or not the face is smiling. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.
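
    As an illustration of the kind of shallow architecture investigated here, a minimal PyTorch smile classifier might look like the sketch below. The layer sizes and input resolution are assumptions made for the sketch, not the paper's exact networks.

      import torch
      import torch.nn as nn

      class ShallowSmileCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(  # input: 1 x 32 x 32 aligned face crop
                  nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # -> 16 x 14 x 14
                  nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2),  # -> 32 x 6 x 6
              )
              self.classifier = nn.Linear(32 * 6 * 6, 2)  # smile / non-smile logits

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      logits = ShallowSmileCNN()(torch.randn(1, 1, 32, 32))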

  2. Robust, ultrasmall organosilica nanoparticles without silica shells

    NASA Astrophysics Data System (ADS)

    Murray, Eoin; Born, Philip; Weber, Anika; Kraus, Tobias

    2014-07-01

    Traditionally, organosilica nanoparticles have been prepared inside micelles with an external silica shell for mechanical support. Here, we compare these hybrid core-shell particles with organosilica particles that are robust enough to be produced both inside micelles and alone in a sol-gel process. These particles form from octadecyltrimethoxysilane as the silica source, either in microemulsions, resulting in water-dispersible particles with a hydrophobic core, or by precipitation from an aqueous mixture, forming particles with both a hydrophobic core and a hydrophobic surface. We examine the size and morphology of the particles by dynamic light scattering and transmission electron microscopy, and we show by nuclear magnetic resonance, infrared spectroscopy, and thermogravimetric analysis that the particles consist of Si-O-Si networks pervaded by alkyl chains.

  3. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include the robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  4. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, meaning the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for designing robust observers possessing the above structural features is established and discussed.
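
    Condition (1), a closed-loop system driven by the observer error, is exhibited by the standard Luenberger observer, shown here for reference (this is the textbook structure, not the paper's robust construction):

      \[
        \dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}),
        \qquad
        \dot{e} = (A - LC)\,e, \quad e = x - \hat{x}.
      \]

    The correction term $L(y - C\hat{x})$ is exactly the observer-error drive; robustness to parameter perturbations then imposes the additional redundancy and perturbation-free-model conditions above.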

  5. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common approximation methods and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
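
    For sequential activities with independent Gaussian durations (one parametric choice such an approach permits; the numbers below are hypothetical), projecting the finish-time distribution and a deadline-violation probability is straightforward, as this Python sketch shows:

      from scipy.stats import norm

      activities = [(5.0, 1.0), (3.0, 0.5), (7.0, 2.0)]  # (mean, std) per activity
      mean = sum(m for m, s in activities)
      var = sum(s**2 for m, s in activities)  # variances add for independent normals

      deadline = 18.0
      p_violation = norm(loc=mean, scale=var**0.5).sf(deadline)  # P(finish > deadline)
      print(f"P(miss deadline) = {p_violation:.3f}")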

  6. Advances in robust flight design

    NASA Technical Reports Server (NTRS)

    Wong, Kelvin K.; Dhand, Sanjeev K.

    1991-01-01

    Current launch vehicle trajectory design philosophies, generally based on maximizing payload capability, result in an expensive and time-consuming iteration in trajectory design for each mission. However, for a launch system that is not performance-driven, a flight design that is robust to variations in missions and provides single-engine-out capability can be highly cost-effective. This philosophy has led to the development of two flight design concepts to reduce recurring costs: standard trajectories and command multiplier steering. Preliminary analyses of these two concepts have proven their feasibility and shown encouraging results in applications to an Advanced Launch System vehicle. Recent progress has demonstrated the effective and efficient integration of the two concepts with minimal payload penalty.

  7. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can readily provide the demanded large storage capability. In particular, holographic storage systems that have no rotation mechanism are desirable because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading any contents of the holographic memory; one example is the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes the results of an experimental demonstration.

  8. Towards designing robust coupled networks.

    PubMed

    Schneider, Christian M; Yazdani, Nuri; Araújo, Nuno A M; Havlin, Shlomo; Herrmann, Hans J

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy for selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy.
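
    The selection step itself is simple to state. The Python sketch below (using networkx; a toy random graph stands in for the coupled networks studied in the paper) ranks nodes by betweenness centrality and protects the top k as autonomous nodes:

      import networkx as nx

      # Toy stand-in for one layer of an interdependent network.
      G = nx.erdos_renyi_graph(100, 0.05, seed=1)

      # Rank nodes by betweenness centrality and protect the top k.
      bc = nx.betweenness_centrality(G)
      k = 5
      autonomous = sorted(bc, key=bc.get, reverse=True)[:k]
      print("autonomous nodes:", autonomous)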

  9. Towards designing robust coupled networks

    PubMed Central

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy for selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy. PMID:23752705

  10. Towards designing robust coupled networks

    NASA Astrophysics Data System (ADS)

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-06-01

    Natural and technological interdependent systems have been shown to be highly vulnerable to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy for selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy.

  11. Robust stochastic mine production scheduling

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa

    2010-06-01

    The production scheduling of open pit mines aims to determine the extraction sequence of blocks such that the net present value (NPV) of a mining project is maximized under capacity and access constraints. This sequencing has a significant effect on the profitability of the mining venture. However, given that the values of the coefficients in the optimization procedure are obtained from sparse data and under unknown future events, implementations based on deterministic models may lead to destructive consequences for the company. In this article, a robust stochastic optimization (RSO) approach is used to handle mine production scheduling such that the solution is insensitive to changes in input data. The approach seeks a trade-off between optimality and feasibility. The model is demonstrated on a case study. The findings show that the approach can be used efficiently in mine production scheduling problems.
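
    A common way to write the RSO trade-off between optimality and feasibility (a Mulvey-style formulation, offered here as orientation rather than as this article's exact model) is:

      \[
        \min_{x,\,y_s}\;\; \sum_s p_s\,\xi_s(x, y_s)
        \;+\; \lambda \sum_s p_s \Bigl(\xi_s(x, y_s) - \sum_{s'} p_{s'}\,\xi_{s'}(x, y_{s'})\Bigr)^{2}
        \;+\; \omega \sum_s p_s \lVert z_s \rVert,
      \]

    where $p_s$ are scenario probabilities, $\xi_s$ is the scenario cost (for scheduling, the negative NPV), the quadratic term penalizes variability of the cost across scenarios (solution robustness), $z_s$ collects constraint-violation slacks (model robustness), and $\lambda$, $\omega$ set the trade-off.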

  12. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. If distributed systems are to be used successfully in situations where humans cannot provide the sort of predictable real-time responsiveness of a computer, it is important that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  13. Robust statistical fusion of image labels.

    PubMed

    Landman, Bennett A; Asman, Andrew J; Scoggins, Andrew G; Bogovic, John A; Xing, Fangxu; Prince, Jerry L

    2012-02-01

    Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone as images are corrupted by noise and artifacts. Even expert interpretations are subject to the subjectivity and precision of the individual raters. Hence, all labels must be considered imperfect, with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets containing small or thin objects or with partial or limited datasets. Moreover, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach that improves estimation performance with small anatomical structures, allows for missing data, accounts for repeated label sets, and utilizes training/catch-trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability.
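
    The core idea of jointly estimating labels and rater reliabilities can be sketched compactly. The toy EM below (Python; binary labels, complete data, and a symmetric one-coin rater model, so it is far simpler than the approach proposed here) alternates between scoring raters against the current soft labels and re-weighting their votes by log-odds reliability:

      import numpy as np

      def fuse(votes, iters=20):
          # votes: (n_items, n_raters) array in {0, 1}. Returns P(true label = 1).
          p = votes.mean(axis=1)  # init: majority-vote soft labels
          for _ in range(iters):
              # M-step: reliability = how often each rater agrees with current labels.
              r = (p[:, None] * votes + (1 - p)[:, None] * (1 - votes)).mean(axis=0)
              r = np.clip(r, 1e-3, 1 - 1e-3)
              # E-step: re-weight votes by the log-odds of each rater's reliability.
              w = np.log(r / (1 - r))
              score = ((2 * votes - 1) * w).sum(axis=1)
              p = 1.0 / (1.0 + np.exp(-score))
          return p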

  14. Building Robust Systems with Fallible Construction (Élaboration de systèmes informatiques robustes à l’architecture faillible)

    DTIC Science & Technology

    2008-04-01

    Final report of Task Group IST-047 (RTO-TR-IST-047).

  15. Sensitive Periods for Developing a Robust Trait of Appetitive Aggression
