Science.gov

Sample records for identifying robust process

  1. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In molecular biology, it is increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. Results on simulated data show that our method obtains higher identification accuracies than competing methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the alternatives. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
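
    The paper's exact Schatten p-norm/Lp-norm algorithm is not reproduced here; the sketch below illustrates the p = 1 instance of the same idea (nuclear-norm regularization plus a robust L1 error term), solved by block coordinate descent with closed-form proximal steps. The data, thresholds, and gene counts are invented for illustration.

    ```python
    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: prox of tau*||.||_* (Schatten-1 norm)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def shrink(X, lam):
        """Soft thresholding: prox of lam*||.||_1 (L1 error term)."""
        return np.sign(X) * np.maximum(np.abs(X) - lam, 0)

    def robust_feature_extraction(D, tau=1.0, lam=0.1, iters=200):
        """Split expression matrix D (genes x samples) into a low-rank
        background L and a sparse outlier/differential part S by block
        coordinate descent on 0.5||D-L-S||_F^2 + tau||L||_* + lam||S||_1."""
        L, S = np.zeros_like(D), np.zeros_like(D)
        for _ in range(iters):
            L = svt(D - S, tau)
            S = shrink(D - L, lam)
        return L, S

    rng = np.random.default_rng(0)
    D = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 20))  # rank-3 background
    D[:25, :10] += 4.0                       # 25 "differentially expressed" genes
    L, S = robust_feature_extraction(D)
    scores = np.abs(S).sum(axis=1)           # rank genes by sparse-part energy
    print("top genes:", np.argsort(scores)[::-1][:10])
    ```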

  2. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy with margins in all aspects. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environmental fluctuations), and operational approaches. These must be traded, along with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as the need for, and requirements of, robustness.

  3. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    Conventional prediction of milling stability has been studied extensively under the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process on the shop floor, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking into account the uncertain parameters of the milling system. The time finite element method, a milling stability theory, is adopted as the conventional deterministic model. The uncertain dynamics parameters are handled by a non-probabilistic model, in which the uncertain parameters are assumed to be bounded and no probability density functions are needed. By doing so, an interval stability lobe is obtained instead of a deterministic one, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from variations in the modal parameters of the spindle-tool system and in the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and obtains satisfactory bounds on the lobe diagrams. The proposed method helps practitioners on the shop floor make decisions on machining parameter selection.
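
    A hedged illustration of the interval idea, using a classical single-degree-of-freedom regenerative chatter formula rather than the paper's time finite element model: the uncertain modal parameters and cutting coefficient are treated as bounded intervals, and the lobe envelope is bracketed by enumerating the interval corners (which bounds the response when the limit varies monotonically with each parameter). All numbers are invented.

    ```python
    import itertools
    import numpy as np

    def alim(omega, k, zeta, wn, Kt):
        """Depth-of-cut stability limit of a 1-DOF regenerative chatter model
        (classical analytical formula, not the paper's time-FEM)."""
        r = omega / wn
        G = 1.0 / (k * (1 - r**2 + 2j * zeta * r))   # receptance FRF
        a = -1.0 / (2.0 * Kt * G.real)
        return np.where(G.real < 0, a, np.inf)       # formula valid only for Re G < 0

    omega = np.linspace(500, 1500, 2000)             # rad/s, around the mode
    # interval (non-probabilistic) parameter bounds: (low, high)
    bounds = {"k": (1.8e7, 2.2e7), "zeta": (0.025, 0.035),
              "wn": (940.0, 980.0), "Kt": (5.4e8, 6.6e8)}
    corners = list(itertools.product(*bounds.values()))
    curves = np.array([alim(omega, *c) for c in corners])
    lower, upper = curves.min(axis=0), curves.max(axis=0)
    print("most conservative (guaranteed stable) limit: %.3e m" % lower.min())
    ```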

  4. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Kajiya, J.; Youngberg, J.; Petersen, T. L.; Ravindra, H.; Done, W.; Cox, B. V.; Cohen, E.

    1981-04-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during the reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure, compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the research and results in the areas of noise suppression using dual-input adaptive noise cancellation, articulation rate change techniques, and spectral subtraction, and describes an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded, helicopter speech by 10.6 points. In addition, summaries are included of prior studies in constant-Q signal analysis and synthesis, perceptual modelling, speech activity detection, and pole-zero modelling of noisy signals. Three recent studies in speech modelling using the critical band analysis-synthesis transform and using splines are then presented. Finally, a list of major publications generated under this contract is given.
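
    Spectral subtraction itself is a standard technique; a minimal magnitude-domain version might look like the following, with the over-subtraction factor alpha and spectral floor beta as assumed tuning knobs, not values from the report.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtraction(x, fs, noise_secs=0.25, alpha=2.0, beta=0.02):
        """Basic magnitude spectral subtraction: estimate the noise magnitude
        spectrum from a leading noise-only segment, over-subtract it (alpha),
        and floor the result (beta) to limit musical noise."""
        f, t, X = stft(x, fs=fs, nperseg=512)
        n_frames = max(1, int(noise_secs * fs / 256))        # 50% overlap hop = 256
        noise_mag = np.abs(X[:, :n_frames]).mean(axis=1, keepdims=True)
        mag = np.abs(X)
        cleaned = np.maximum(mag - alpha * noise_mag, beta * mag)
        _, y = istft(cleaned * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
        return y

    fs = 8000
    tt = np.arange(0, 2.0, 1 / fs)
    speechlike = np.sin(2 * np.pi * 300 * tt) * (tt > 0.5)   # "speech" starts at 0.5 s
    noisy = speechlike + 0.3 * np.random.default_rng(1).standard_normal(tt.size)
    enhanced = spectral_subtraction(noisy, fs)
    ```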

  5. Identifying Robust and Sensitive Frequency Bands for Interrogating Neural Oscillations

    PubMed Central

    Shackman, Alexander J.; McMenamin, Brenton W.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2010-01-01

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic (“resting” or “spontaneous”) electroencephalogram (EEG) into five bands: delta (1–5 Hz), alpha-low (6–9 Hz), alpha-high (10–11 Hz), beta (12–19 Hz), and gamma (>21 Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains. PMID
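
    The band definitions above lend themselves to a simple band-power computation; a sketch using Welch's PSD estimate follows. The gamma upper edge (45 Hz) is an assumption made here only to keep the integral finite, since the abstract gives only ">21 Hz".

    ```python
    import numpy as np
    from scipy.signal import welch

    # SFA-derived bands reported in the abstract (Hz); 45 Hz is an assumed
    # upper edge for gamma
    BANDS = {"delta": (1, 5), "alpha-low": (6, 9), "alpha-high": (10, 11),
             "beta": (12, 19), "gamma": (21, 45)}

    def band_power(eeg, fs):
        """Mean power density per frequency band from a Welch PSD."""
        f, psd = welch(eeg, fs=fs, nperseg=4 * fs)     # ~0.25 Hz resolution
        return {name: psd[(f >= lo) & (f <= hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    fs = 256
    rng = np.random.default_rng(2)
    eeg = rng.standard_normal(60 * fs)                 # 60 s of surrogate "EEG"
    print(band_power(eeg, fs))
    ```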

  6. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability
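
    The abstract does not detail how candidate portfolios are scored for robustness across the twelve-scenario ensemble; one common screen in RDM-style studies is minimax regret, sketched below on surrogate performance data (all numbers invented).

    ```python
    import numpy as np

    # performance[i, j]: objective value (lower is better, e.g. cost or
    # supply deficit) of plan i under scenario j -- surrogate data here
    rng = np.random.default_rng(3)
    performance = rng.uniform(0.0, 1.0, size=(4, 12))   # 4 plans x 12 scenarios

    best_per_scenario = performance.min(axis=0)          # ideal outcome per scenario
    regret = performance - best_per_scenario             # shortfall vs. ideal
    minimax_regret_plan = regret.max(axis=1).argmin()    # robust choice
    print("plan with smallest worst-case regret:", minimax_regret_plan)
    ```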

  7. Nonlinear filtering for robust signal processing

    SciTech Connect

    Palmieri, F.

    1987-01-01

    A generalized framework for the description and design of a large class of nonlinear filters is proposed. The family includes, among others, the newly defined Ll-estimators, which generalize the order statistic filters (L-filters) and the nonrecursive linear filters (FIR). Such estimators are particularly efficient at filtering signals that do not follow Gaussian distributions, and they can be designed to restore signals and images corrupted by impulsive noise. These filters are appealing because they can be made robust against perturbations of the assumed model, or insensitive to the presence of spurious outliers in the data. The linear part of the filter characterizes their essential spectral behavior; it can be constrained to a given shape to obtain nonlinear filters that combine given frequency characteristics with noise immunity. The generalized nonlinear filters can also be used adaptively, with the coefficients computed dynamically via LMS or RLS algorithms.
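
    As a concrete instance of the order-statistic (L-) filter family the abstract generalizes, the following sketch applies fixed weights to the sorted samples of each window; a one-hot weight vector recovers the outlier-robust median filter, while uniform weights recover the moving average.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def l_filter(x, weights):
        """Order-statistic (L-) filter: sort each window, then take a fixed
        weighted sum of the sorted samples."""
        w = np.asarray(weights, dtype=float)
        windows = np.sort(sliding_window_view(x, w.size), axis=-1)
        return windows @ w

    rng = np.random.default_rng(4)
    x = np.sin(np.linspace(0, 6 * np.pi, 500))
    x[rng.integers(0, 500, 15)] += 8.0            # impulsive outliers
    med = l_filter(x, [0, 0, 1, 0, 0])            # 5-point median: outlier-robust
    avg = l_filter(x, np.full(5, 0.2))            # 5-point mean: smeared by outliers
    ```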

  8. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  9. Identifying core features of adaptive metabolic mechanisms for chronic heat stress attenuation contributing to systems robustness.

    PubMed

    Gu, Jenny; Weber, Katrin; Klemp, Elisabeth; Winters, Gidon; Franssen, Susanne U; Wienpahl, Isabell; Huylmans, Ann-Kathrin; Zecher, Karsten; Reusch, Thorsten B H; Bornberg-Bauer, Erich; Weber, Andreas P M

    2012-05-01

    The contribution of metabolism to heat stress may play a significant role in defining robustness and recovery of systems, either by providing the energy and metabolites required for cellular homeostasis, or through the generation of protective osmolytes. However, the mechanisms by which heat stress attenuation could be adapted through metabolic processes as a stabilizing strategy against thermal stress are still largely unclear. We address this issue through metabolomic and transcriptomic profiles for populations along a thermal cline where two seagrass species, Zostera marina and Zostera noltii, were found in close proximity. Significant changes were detected in these profile comparisons, with northern populations showing a larger response magnitude to heat stress. Sucrose, fructose, and myo-inositol were identified as the most responsive of the 29 analyzed organic metabolites. Many key enzymes in the Calvin cycle, glycolysis, and pentose phosphate pathways also showed significant differential expression. The reported comparison suggests that adaptive mechanisms are involved through metabolic pathways to dampen the impacts of heat stress, and that interactions between the metabolome and proteome should be further investigated in systems biology to understand robust design features against abiotic stress. PMID:22402787

  10. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
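
    The flight-data algorithms themselves are not given in this abstract; as a generic illustration of nonparametric wavelet denoising of a decaying modal response, a soft-thresholding sketch with PyWavelets (an assumed dependency) follows.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(y, wavelet="db4", level=5):
        """Soft-threshold the detail coefficients (universal threshold) to
        suppress disturbances before further identification steps."""
        coeffs = pywt.wavedec(y, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(y.size))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: y.size]

    t = np.linspace(0, 1, 2048)
    mode = np.exp(-2 * t) * np.sin(2 * np.pi * 40 * t)        # decaying "modal" response
    noisy = mode + 0.2 * np.random.default_rng(5).standard_normal(t.size)
    clean = wavelet_denoise(noisy)
    ```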

  11. Modelling System Processes to Support Uncertainty Analysis and Robustness Evaluation

    NASA Technical Reports Server (NTRS)

    Blackwell, Charles; Cuzzi, Jeffrey (Technical Monitor)

    1996-01-01

    In the use of advanced systems control techniques in the development of a dynamic system, results from effective mathematical modelling are required. Historically, in some cases the use of a model which reflects only the "expected" or "nominal" information about the system's internal processes has resulted in acceptable system performance, but it should be recognized that in those cases success was due to a combination of the remarkable inherent potential of feedback control for robustness and fortuitously wide margins between system performance requirements and system performance capability. In the case of CELSS development, no such fortuitous combinations should be expected, and the uncertainty in the information on the system's processes will have to be taken into account in order to generate a performance-robust design. In this paper, we develop one perspective on the issue of providing robustness as mathematical modelling impacts it, and present some examples of model formats which serve the needed purpose.

  12. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation, the robustness of the method planning, and the springback itself are considered the main factors that influence the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first; they rest on an accurate full-cycle simulation with final validation settings. Robust process design and optimization are then discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy is put forward based on the analysis of springback, and simulation-based springback compensation is introduced in the process design phase. Finally, verification and comparison in tryout and production are given, confirming that the robust springback compensation methodology is effective during die development.

  13. Consistent Robustness Analysis (CRA) Identifies Biologically Relevant Properties of Regulatory Network Models

    PubMed Central

    Saithong, Treenut; Painter, Kevin J.; Millar, Andrew J.

    2010-01-01

    Background A number of studies have previously demonstrated that “goodness of fit” is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Results Here, we propose a novel robustness analysis that aims to determine the “common robustness” of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Conclusions Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model. PMID:21179566
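
    A minimal sketch of the "common robustness" idea: score local robustness for each of several plausible parameter sets and aggregate, instead of reporting a single local result. The toy model and perturbation scheme are placeholders, not the paper's circadian clock models.

    ```python
    import numpy as np

    def robustness(model, params, perturb=0.05, rng=None):
        """Local robustness of one parameter set: mean relative output change
        under small random perturbations (smaller = more robust)."""
        rng = rng or np.random.default_rng()
        base = model(params)
        changes = []
        for _ in range(100):
            p = params * (1 + perturb * rng.standard_normal(params.size))
            changes.append(abs(model(p) - base) / (abs(base) + 1e-12))
        return np.mean(changes)

    def consistent_robustness(model, plausible_sets):
        """CRA-style aggregate: robustness across many plausible parameter
        sets rather than a single nominal one."""
        scores = [robustness(model, p) for p in plausible_sets]
        return np.mean(scores), np.std(scores)

    # toy "model": oscillator period as a function of two parameters
    model = lambda p: 2 * np.pi / np.sqrt(p[0] / p[1])
    plausible = [np.array([k, m]) for k in (0.9, 1.0, 1.1) for m in (0.9, 1.0, 1.1)]
    print(consistent_robustness(model, plausible))
    ```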

  14. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    NASA Astrophysics Data System (ADS)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    The Pearl Harbor aquifer is the most important freshwater resource on Oahu (Hawaii, USA), providing water to nearly half a million people. Recent studies show that current water use is reaching or exceeding sustainable yield. Climate change and increasing resident and tourist populations are predicted to further stress the aquifer. The island has lost huge tracts of freshwater and estuarine wetlands since human settlement; the dependence of many endemic, endangered species on these wetlands, as well as the ecosystem benefits wetlands provide, links humans and wildlife through water management. After the collapse of the sugar industry on Oahu (mid-1990s), the Waiahole ditch, a massive stream diversion bringing water from the island's windward to the leeward side, became a hotly disputed resource. Commercial interests and traditional farmers have clashed over the water, which could also serve to support the Pearl Harbor aquifer. Considering competing interests, impending scarcity, and uncertain future conditions, how can groundwater be managed most effectively? Complex water networks like this are characterized by conflicts between stakeholders, coupled human-natural systems, and future uncertainty. The Water Diplomacy Framework offers a model for analyzing such complex issues by integrating multiple disciplinary perspectives, identifying intervention points, and proposing sustainable solutions. The Water Diplomacy Framework is a theory and practice of implementing adaptive water management for complex problems by shifting the discussion from 'allocation of water' to 'benefit from water resources'. This is accomplished through an interactive process that includes stakeholder input, joint fact finding, collaborative scenario development, and a negotiated approach to value creation. Presented here are the results of the initial steps in a long term project to resolve water limitations on Oahu. We developed a conceptual model of the Pearl Harbor Aquifer system and identified

  15. Processing Robustness for A Phenylethynyl Terminated Polyimide Composite

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2004-01-01

    The processability of a phenylethynyl terminated imide resin matrix (designated PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one made by NASA in-house, and the other from an industrial source. The composite processing robustness is investigated with respect to prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that the PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 deg C to 300 deg C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. A good consolidation is achievable at 371 deg C (700 deg F)/100 psi, which yields a room-temperature OHC strength of 62 ksi. However, processability declines dramatically at temperatures below 350 deg C (662 deg F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.
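
    Response surface methodology here amounts to fitting a quadratic surface to strength measurements over the temperature-pressure window. A sketch with invented (T, P, OHC) values follows; the paper's actual data are not reproduced.

    ```python
    import numpy as np

    # Hypothetical (T, P, strength) observations around the processing window
    T = np.array([350, 350, 371, 371, 360, 371, 385, 385, 360])   # deg C
    P = np.array([50, 100, 50, 100, 75, 75, 50, 100, 75])         # psi
    y = np.array([48, 51, 58, 62, 55, 60, 59, 63, 54])            # OHC, ksi

    # quadratic response surface y ~ b0 + b1*T + b2*P + b3*T^2 + b4*P^2 + b5*T*P
    X = np.column_stack([np.ones_like(T), T, P, T**2, P**2, T * P]).astype(float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(t, p):
        return beta @ np.array([1, t, p, t * t, p * p, t * p], dtype=float)

    print("predicted OHC at 371 C / 100 psi: %.1f ksi" % predict(371, 100))
    ```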

  16. Robust Read Channel System Directly Processing Asynchronous Sampling Data

    NASA Astrophysics Data System (ADS)

    Yamamoto, Akira; Mouri, Hiroki; Yamamoto, Takashi

    2006-02-01

    In this study, we describe a robust read channel employing a novel timing recovery system and a unique Viterbi detector, which together extract channel timing and channel data directly from asynchronously sampled data. The timing recovery system in the proposed read channel has a feed-forward architecture and consists entirely of digital circuits; it thus enables robust, high-speed timing recovery with no performance deterioration caused by variations in analog circuits. The Viterbi detector not only detects maximum-likelihood data using a reference level generator, but also transforms asynchronous data into pseudosynchronous data using two clocks: an asynchronous clock generated by a frequency synthesizer and a pseudosynchronous clock generated by a timing detector. The proposed read channel achieves a constant, fast frequency acquisition time regardless of initial frequency error and improves bit error rate performance. This robust read channel system can be used for high-speed signal processing and LSIs using nanometer-scale semiconductor processes.

  17. Combining structured decision making and value-of-information analyses to identify robust management strategies.

    PubMed

    Moore, Joslin L; Runge, Michael C

    2012-10-01

    Structured decision making and value-of-information analyses can be used to identify robust management strategies even when uncertainty about the response of the system to management is high. We used these methods in a case study of management of the non-native invasive species gray sallow willow (Salix cinerea) in alpine Australia. Establishment of this species is facilitated by wildfire. Managers are charged with developing a management strategy despite extensive uncertainty regarding the frequency of fires, the willow's demography, and the effectiveness of management actions. We worked with managers in Victoria to conduct a formal decision analysis. We used a dynamic model to identify the best management strategy for a range of budgets. We evaluated the robustness of the strategies to uncertainty with value-of-information analyses. Results of the value-of-information analysis indicated that reducing uncertainty would not change which management strategy was identified as the best unless budgets increased substantially. This outcome suggests there would be little value in implementing adaptive management for the problem we analyzed. The value-of-information analyses also highlighted that the main driver of gray sallow willow invasion (i.e., fire frequency) is not necessarily the same factor that is most important for decision making (i.e., willow seed dispersal distance). Value-of-information analysis enables managers to better target monitoring and research efforts toward factors critical to making the decision and to assess the need for adaptive management. PMID:22862796
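
    The value-of-information logic can be shown in a few lines: compare the expected outcome of acting under current uncertainty with the expected outcome of acting after the uncertainty is resolved. Payoffs and priors below are invented, not the study's.

    ```python
    import numpy as np

    # outcome[a, s]: management benefit of action a if hypothesis s is true
    outcome = np.array([[8.0, 2.0],     # action 0: intensive willow control
                        [5.0, 5.0],     # action 1: moderate control
                        [1.0, 7.0]])    # action 2: monitor only
    prior = np.array([0.6, 0.4])        # belief over the two hypotheses

    best_now = (outcome @ prior).max()                   # act under uncertainty
    best_informed = (outcome.max(axis=0) * prior).sum()  # learn s first, then act
    evpi = best_informed - best_now
    print("expected value of perfect information: %.2f" % evpi)
    # small EVPI => resolving uncertainty would not change the preferred
    # strategy, i.e. little benefit from adaptive management
    ```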

  18. Process Architecture for Managing Digital Object Identifiers

    NASA Astrophysics Data System (ADS)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first three years, ESDIS evolved the process, involving the data provider community in developing procedures for creating and assigning DOIs, and guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014; and the other recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually-driven system to one that largely automates the DOI process. The new automated features include: a) reviewing the DOI metadata, b) assigning an opaque DOI name if the data provider chooses, and c) reserving, registering, and updating the DOIs. The flexibility of reserving the DOI allows data providers to embed and test the DOI in the data product metadata before formally registering with EZID. The DOI update process allows changing any DOI metadata except the DOI name, unless the name has not yet been registered. Currently, ESDIS has processed a total of 557 DOIs, of which 379 are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page. This poster will describe in detail the unique automated process and

  19. A robust sinusoidal signal processing method for interferometers

    NASA Astrophysics Data System (ADS)

    Wu, Xiang-long; Zhang, Hui; Tseng, Yang-Yu; Fan, Kuang-Chao

    2013-10-01

    Laser interferometers are widely used as a reference for length measurement. Reliable bidirectional optical fringe counting is normally obtained by using two orthogonal sinusoidal signals derived from the two outputs of an interferometer with a path difference. These signals are subject to disturbance from geometrical errors of the moving target, which cause separation and shift of the two interfering light spots on the detector. This results in the typical Heydemann errors: DC drift, amplitude variation, and out-of-orthogonality of the two sinusoidal signals, which seriously reduce the accuracy of fringe counting. This paper presents a robust sinusoidal signal processing method to correct the distorted waveforms in hardware, and a corresponding circuit board has been designed. A linear stage equipped with a laser displacement interferometer and a height gauge equipped with a linear grating interferometer are used as test beds. Experimental results show that, even with a seriously disturbed input waveform, the output Lissajous circle can always be stabilized after signal correction. This robust method increases the stability and reliability of the sinusoidal signals used by the data acquisition device for pulse counting and phase subdivision.
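
    The paper performs the correction in hardware; a software analogue of the same Heydemann-style correction (offsets, then gains, then non-orthogonality) can be sketched as follows, assuming the fringe phase sweeps many periods roughly uniformly.

    ```python
    import numpy as np

    def correct_quadrature(u, v):
        """Moment-based correction of two quadrature signals: remove DC
        offsets, equalize amplitudes, then remove the non-orthogonality
        (phase) error."""
        u0, v0 = u - u.mean(), v - v.mean()                 # DC drift
        u0 = u0 / (np.sqrt(2) * u0.std())                   # amplitude
        v0 = v0 / (np.sqrt(2) * v0.std())
        sin_a = 2 * np.mean(u0 * v0)       # E[cos(phi)*sin(phi+a)] = sin(a)/2
        v1 = (v0 - u0 * sin_a) / np.sqrt(1 - sin_a**2)      # re-orthogonalize
        return u0, v1

    phi = np.linspace(0, 60 * np.pi, 5000)
    u = 1.00 * np.cos(phi) + 0.05                            # distorted inputs
    v = 0.80 * np.sin(phi + 0.1) - 0.03
    uc, vc = correct_quadrature(u, v)
    phase = np.unwrap(np.arctan2(vc, uc))                    # counting/subdivision
    ```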

  20. Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models

    NASA Astrophysics Data System (ADS)

    Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.

    2016-06-01

    Regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on the combination of the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability across RCMs, the aims of this work are to evaluate the performance of two novel approaches, based on the reliability ensemble averaging (REA) method, for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) and consider the variability at different levels of information: annual and seasonal rainfall on the one hand, and monthly rainfall on the other. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated, and the plausible future rainfall scenario for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved and the uncertainties involved are decreased, providing better forcing for hydrological impact models and reducing the cumulative errors in the modeling chain.
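
    The REA weighting behind such ensembles can be sketched as follows. This is a simplified scalar version (bias and convergence criteria capped at 1 and iterated to self-consistency), not the authors' PDF-based variants, and all values are invented.

    ```python
    import numpy as np

    def rea_weights(model_vals, obs, eps, iters=20):
        """Simplified reliability ensemble averaging (REA): each RCM gets a
        weight from (i) its bias against observations and (ii) its distance
        to the weighted ensemble mean (convergence), both capped at 1."""
        model_vals = np.asarray(model_vals, dtype=float)
        R_bias = np.minimum(1.0, eps / np.maximum(np.abs(model_vals - obs), 1e-9))
        mean = model_vals.mean()
        for _ in range(iters):
            R_conv = np.minimum(1.0, eps / np.maximum(np.abs(model_vals - mean), 1e-9))
            R = R_bias * R_conv
            mean = (R * model_vals).sum() / R.sum()
        return R / R.sum(), mean

    # hypothetical mean annual rainfall (mm) from ten RCMs, vs. observed 640 mm
    rcms = [598, 612, 640, 655, 661, 580, 700, 645, 630, 668]
    w, ensemble = rea_weights(rcms, obs=640.0, eps=25.0)
    print("REA ensemble estimate: %.1f mm" % ensemble)
    ```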

  1. Application of NMR Methods to Identify Detection Reagents for Use in the Development of Robust Nanosensors

    SciTech Connect

    Cosman, M; Krishnan, V V; Balhorn, R

    2004-04-29

    Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins, and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low affinity binders are identified, they can be linked together to produce multidentate synthetic high affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site, or a different site, on the surface of TeNT fragment C (TetC) than a known 'marker' ligand, doxorubicin. Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

  2. Product and Process Improvement Using Mixture-Process Variable Designs and Robust Optimization Techniques

    SciTech Connect

    Sahni, Narinder S.; Piepel, Gregory F.; Naes, Tormod

    2009-04-01

    The quality of an industrial product depends on the raw material proportions and the process variable levels, both of which need to be taken into account in designing a product. This article presents a case study from the food industry in which both kinds of variables were studied by combining a constrained mixture experiment design and a central composite process variable design. Based on the natural structure of the situation, a split-plot experiment was designed and models involving the raw material proportions and process variable levels (separately and combined) were fitted. Combined models were used to study: (i) the robustness of the process to variations in raw material proportions, and (ii) the robustness of the raw material recipes with respect to fluctuations in the process variable levels. Further, the expected variability in the robust settings was studied using the bootstrap.

  3. Robust Density-Based Clustering To Identify Metastable Conformational States of Proteins.

    PubMed

    Sittel, Florian; Stock, Gerhard

    2016-05-10

    A density-based clustering method is proposed that is deterministic, computationally efficient, and self-consistent in its parameter choice. By calculating a geometric coordinate space density for every point of a given data set, a local free energy is defined. On the basis of these free energy estimates, the frames are lumped into local free energy minima, ultimately forming microstates separated by local free energy barriers. The algorithm is embedded into a complete workflow to robustly generate Markov state models from molecular dynamics trajectories. It consists of (i) preprocessing of the data via principal component analysis in order to reduce the dimensionality of the problem, (ii) proposed density-based clustering to generate microstates, and (iii) dynamical clustering via the most probable path algorithm to construct metastable states. To characterize the resulting state-resolved conformational distribution, dihedral angle content color plots are introduced which identify structural differences of protein states in a concise way. To illustrate the performance of the method, three well-established model problems are adopted: conformational transitions of hepta-alanine, folding of villin headpiece, and functional dynamics of bovine pancreatic trypsin inhibitor. PMID:27058020
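
    A compact sketch of the lumping step: estimate a local density per frame, interpret F = -ln(density) as a free energy, and assign every frame to the basin of its nearest denser neighbor within a cutoff; local density maxima seed new microstates. Parameters and data are illustrative, and the paper's PCA preprocessing and most-probable-path steps are omitted.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def density_cluster(X, radius=0.5):
        """Lump frames into basins of local free energy minima."""
        D = cdist(X, X)
        density = (D < radius).sum(axis=1).astype(float)     # neighbor counts
        order = np.argsort(-density)                         # densest first
        labels = -np.ones(len(X), dtype=int)
        n_clusters = 0
        for i in order:
            # strictly denser neighbors within the cutoff
            nbrs = np.where((D[i] < radius) & (density > density[i]))[0]
            if nbrs.size == 0:                               # local density maximum
                labels[i] = n_clusters
                n_clusters += 1
            else:                                            # join the denser basin
                labels[i] = labels[nbrs[np.argmin(D[i, nbrs])]]
        return labels

    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    print(np.bincount(density_cluster(X)))                   # two basins expected
    ```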

  4. Whole-Embryo Modeling of Early Segmentation in Drosophila Identifies Robust and Fragile Expression Domains

    PubMed Central

    Bieler, Jonathan; Pozzorini, Christian; Naef, Felix

    2011-01-01

    Segmentation of the Drosophila melanogaster embryo results from the dynamic establishment of spatial mRNA and protein patterns. Here, we exploit recent temporal mRNA and protein expression measurements on the full surface of the blastoderm to calibrate a dynamical model of the gap gene network on the entire embryo cortex. We model the early mRNA and protein dynamics of the gap genes hunchback, Kruppel, giant, and knirps, taking as regulatory inputs the maternal Bicoid and Caudal gradients, plus the zygotic Tailless and Huckebein proteins. The model captures the expression patterns faithfully, and its predictions are assessed from gap gene mutants. The inferred network shows an architecture based on reciprocal repression between gap genes that can stably pattern the embryo on a realistic geometry but requires complex regulations such as those involving the Hunchback monomer and dimers. Sensitivity analysis identifies the posterior domain of giant as among the most fragile features of an otherwise robust network, and hints at redundant regulations by Bicoid and Hunchback, possibly reflecting recent evolutionary changes in the gap-gene network in insects. PMID:21767480

  5. Stretching the limits of forming processes by robust optimization: A demonstrator

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Atzema, E. H.; van den Boogaard, A. H.

    2013-12-01

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.
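
    A toy version of the metamodel-based robust optimization loop: sample a design of experiments over the design variable and the scattering material parameter, fit a quadratic metamodel, and minimize a mean-plus-three-sigma objective under the material scatter. The "simulation" is a stand-in function, not an FE model, and all names and numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(bhf, m):
        """Stand-in for an FE stretch-drawing run: thinning response as a
        function of blank holder force (design) and a scattering material
        parameter m (noise). Purely illustrative."""
        return (0.12 + 0.002 * (bhf - 60) ** 2 / 100 + 0.03 * (m - 1.0)
                + 0.0005 * (bhf - 60) * (m - 1.0)
                + 0.002 * rng.standard_normal())

    # design of experiments over design variable x noise variable
    pts = np.array([(b, m, simulate(b, m))
                    for b in np.linspace(30, 90, 7)
                    for m in np.linspace(0.9, 1.1, 5)])
    B, M, Y = pts.T

    # quadratic polynomial metamodel over (bhf, m)
    X = np.column_stack([np.ones_like(B), B, M, B * B, M * M, B * M])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    meta = lambda b, m: np.column_stack(
        [np.ones_like(m), b * np.ones_like(m), m,
         b * b * np.ones_like(m), m * m, b * m]) @ coef

    # robust objective: mean + 3*sigma of thinning under material scatter
    m_samples = rng.normal(1.0, 0.03, 5000)
    robust = [(b, meta(b, m_samples).mean() + 3 * meta(b, m_samples).std())
              for b in np.linspace(30, 90, 61)]
    best = min(robust, key=lambda t: t[1])
    print("robust blank holder force: %.1f (mean+3sigma thinning %.4f)" % best)
    ```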

  6. Robust syntaxin-4 immunoreactivity in mammalian horizontal cell processes

    PubMed Central

    Hirano, Arlene A.; Brandstätter, Johann Helmut; Vila, Alejandro; Brecha, Nicholas C.

    2009-01-01

    Horizontal cells mediate inhibitory feed-forward and feedback communication in the outer retina; however, the mechanisms that underlie transmitter release from mammalian horizontal cells are poorly understood. Toward determining whether the molecular machinery for exocytosis is present in horizontal cells, we investigated the localization of syntaxin-4, a SNARE protein involved in targeting vesicles to the plasma membrane, in mouse, rat, and rabbit retinae using immunocytochemistry. We report robust expression of syntaxin-4 in the outer plexiform layer of all three species. Syntaxin-4 occurred in processes and tips of horizontal cells, with regularly spaced, thicker sandwich-like structures along the processes. Double labeling with syntaxin-4 and calbindin antibodies, a horizontal cell marker, demonstrated syntaxin-4 localization to horizontal cell processes, whereas double labeling with PKC antibodies, a rod bipolar cell (RBC) marker, showed a lack of co-localization, with syntaxin-4 immunolabeling occurring just distal to RBC dendritic tips. Syntaxin-4 immunolabeling occurred within VGLUT-1-immunoreactive photoreceptor terminals and underneath synaptic ribbons, labeled by CtBP2/RIBEYE antibodies, consistent with localization in invaginating horizontal cell tips at photoreceptor triad synapses. Vertical sections of retina immunostained for syntaxin-4 and peanut agglutinin (PNA) established that the prominent patches of syntaxin-4 immunoreactivity were adjacent to the base of cone pedicles. Horizontal sections through the OPL indicate a one-to-one co-localization of syntaxin-4 densities at what are likely all cone pedicles, with syntaxin-4 immunoreactivity interdigitating with PNA labeling. Pre-embedding immuno-electron microscopy confirmed the subcellular localization of syntaxin-4 labeling to lateral elements at both rod and cone triad synapses. Finally, co-localization with SNAP-25, a possible binding partner of syntaxin-4, indicated co-expression of these SNARE proteins in

  7. Decisional tool to assess current and future process robustness in an antibody purification facility.

    PubMed

    Stonier, Adam; Simaria, Ana Sofia; Smith, Martin; Farid, Suzanne S

    2012-07-01

    Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database-driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete-event simulation engine so as to model the dynamic features of biopharmaceutical manufacture and impact of resource constraints. For a given titer, the tool performed brute force optimization so as to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch-to-batch titer variability and extended this to assess the long-term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie-lines connecting the position of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations and the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load. PMID:22641562
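
    The brute-force sizing search can be sketched in a few lines: enumerate discrete equipment options, discard configurations that cannot process the harvest or that break the schedule, and keep the cheapest. The cost, capacity, and time models below are placeholders, not the article's database-driven process models.

    ```python
    import itertools

    DIAMETERS = [60, 80, 100]        # hypothetical column diameters, cm
    CYCLES = [1, 2, 3, 4]            # chromatography cycles per batch
    SCHEDULE_LIMIT_H = 24.0

    def batch_cost_and_time(titer_g_per_l, diam, cycles, volume_l=2000):
        mass = titer_g_per_l * volume_l                      # g of antibody
        capacity = 0.5 * diam ** 2 * cycles                  # g per batch (toy model)
        if capacity < mass:
            return None                                      # cannot process harvest
        resin_cost = 0.4 * diam ** 2                         # per cycle, arbitrary units
        buffer_cost = 0.05 * mass
        time_h = 4.0 + 3.5 * cycles
        return resin_cost * cycles + buffer_cost, time_h

    def best_strategy(titer):
        feasible = []
        for d, c in itertools.product(DIAMETERS, CYCLES):
            out = batch_cost_and_time(titer, d, c)
            if out and out[1] <= SCHEDULE_LIMIT_H:
                feasible.append((out[0], d, c))              # (cost, diameter, cycles)
        return min(feasible) if feasible else None

    for titer in (1, 3, 5, 10):                              # g/L titer increase
        print(titer, "g/L ->", best_strategy(titer))
    ```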

  8. Combining Dynamical Decoupling with Robust Optimal Control for Improved Quantum Information Processing

    NASA Astrophysics Data System (ADS)

    Grace, Matthew D.; Witzel, Wayne M.; Carroll, Malcolm S.

    2010-03-01

    Constructing high-fidelity control pulses that are robust to control and system/environment fluctuations is a crucial objective for quantum information processing (QIP). We combine dynamical decoupling (DD) with optimal control (OC) to identify control pulses that achieve this objective numerically. Previous DD work has shown that general errors up to (but not including) third order can be removed from π- and π/2-pulses without concatenation. By systematically integrating DD and OC, we are able to increase pulse fidelity beyond this limit. Our hybrid method of quantum control incorporates a newly-developed algorithm for robust OC, providing a nested DD-OC approach to generate robust controls. Motivated by solid-state QIP, we also incorporate relevant experimental constraints into this DD-OC formalism. To demonstrate the advantage of our approach, the resulting quantum controls are compared to previous DD results in open and uncertain model systems. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  9. Commonsense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    2005-01-01

    This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in…

  10. Identifying Fragilities in Biochemical Networks: Robust Performance Analysis of Fas Signaling-Induced Apoptosis

    PubMed Central

    Shoemaker, Jason E.; Doyle, Francis J.

    2008-01-01

    Proper control of apoptotic signaling is critical to immune response and development in multicellular organisms. Two tools from control engineering are applied to a mathematical model of Fas ligand signaling-induced apoptosis. Structured singular value analysis determines the volume in parameter space within which the system parameters may exist and still maintain efficacious signaling, but is limited to linear behaviors. Sensitivity analysis can be applied to nonlinear systems but is difficult to relate to performance criteria. Thus, structured singular value analysis is used to quantify performance during apoptosis rejection, ensuring that the system remains sensitive, but not overly so, to apoptotic stimuli. Sensitivity analysis is applied when the system has switched to the death-inducing, apoptotic steady state to determine parameters significant to maintaining the bistability. The analyses reveal that the magnitude of the death signal is fragile to perturbations in degradation parameters (failures in the ubiquitin/proteasome mechanism) while the timing of signal expression can be tuned by manipulating local parameters. Simultaneous parameter uncertainty highlights apoptotic fragility to disturbances in the ubiquitin/proteasome system. Sensitivity analysis reveals that the robust signaling characteristics of the apoptotic network are due to network architecture, and the apoptotic signaling threshold is best manipulated by interactions upstream of the apoptosome. PMID:18539637

  11. Advanced process monitoring and feedback control to enhance cell culture process production and robustness.

    PubMed

    Zhang, An; Tsang, Valerie Liu; Moore, Brandon; Shen, Vivian; Huang, Yao-Ming; Kshirsagar, Rashmi; Ryll, Thomas

    2015-12-01

    It is a common practice in biotherapeutic manufacturing to define a fixed-volume feed strategy for nutrient feeds, based on historical cell demand. However, once the feed volumes are defined, they are inflexible to batch-to-batch variations in cell growth and physiology and can lead to inconsistent productivity and product quality. In an effort to control critical quality attributes and to apply process analytical technology (PAT), a fully automated cell culture feedback control system has been explored in three different applications. The first study illustrates that frequent monitoring and automatically controlling the complex feed based on a surrogate (glutamate) level improved protein production. More importantly, the resulting feed strategy was translated into a manufacturing-friendly manual feed strategy without impact on product quality. The second study demonstrates the improved process robustness of an automated feed strategy based on online bio-capacitance measurements for cell growth. In the third study, glucose and lactate concentrations were measured online and were used to automatically control the glucose feed, which in turn changed lactate metabolism. These studies suggest that the auto-feedback control system has the potential to significantly increase productivity and improve robustness in manufacturing, with the goal of ensuring process performance and product quality consistency. PMID:26108810
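
    A minimal sketch of the third study's idea, a glucose feedback feed. The proportional gain, setpoint, and feed concentration are invented, and a real controller would add sampling-interval, dead-band, and safety logic.

    ```python
    # Proportional glucose feedback feed (hypothetical parameter values)
    GLUCOSE_SETPOINT = 4.0     # g/L
    KP = 0.8                   # proportional gain, dimensionless

    def feed_volume(glucose_reading, culture_volume_l, feed_conc_g_per_l=500.0):
        """Feed just enough concentrated glucose stock to close (part of)
        the gap between the online reading and the setpoint."""
        error = GLUCOSE_SETPOINT - glucose_reading
        if error <= 0:                       # at/above setpoint: do not feed
            return 0.0
        grams_needed = KP * error * culture_volume_l
        return grams_needed / feed_conc_g_per_l     # litres of feed stock

    # e.g. 2.5 g/L measured online in a 2000 L bioreactor:
    print("feed %.2f L" % feed_volume(2.5, 2000.0))
    ```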

  12. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  13. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use

  14. Identifying robust communities and multi-community nodes by combining top-down and bottom-up approaches to clustering

    PubMed Central

    Gaiteri, Chris; Chen, Mingming; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Chaibub Neto, Elias; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-01-01

    Biological functions are carried out by groups of interacting molecules, cells or tissues, known as communities. Membership in these communities may overlap when biological components are involved in multiple functions. However, traditional clustering methods detect non-overlapping communities. These detected communities may also be unstable and difficult to replicate, because traditional methods are sensitive to noise and parameter settings. These aspects of traditional clustering methods limit our ability to detect biological communities, and therefore our ability to understand biological functions. To address these limitations and detect robust overlapping biological communities, we propose an unorthodox clustering method called SpeakEasy which identifies communities using top-down and bottom-up approaches simultaneously. Specifically, nodes join communities based on their local connections, as well as global information about the network structure. This method can quantify the stability of each community, automatically identify the number of communities, and quickly cluster networks with hundreds of thousands of nodes. SpeakEasy shows top performance on synthetic clustering benchmarks and accurately identifies meaningful biological communities in a range of datasets, including: gene microarrays, protein interactions, sorted cell populations, electrophysiology and fMRI brain imaging. PMID:26549511

  15. Identifying influential factors of business process performance using dependency analysis

    NASA Astrophysics Data System (ADS)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
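
    In the same spirit, a regression tree over lower-level metrics exposes which factors drive a KPI. The sketch below uses scikit-learn's feature importances on synthetic data with hypothetical metric names; it is an illustration of the dependency-tree idea, not the authors' framework.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(8)
    n = 1000
    # lower-level process/QoS metrics (hypothetical names)
    approval_time = rng.exponential(2.0, n)       # h
    service_latency = rng.exponential(0.3, n)     # s
    retries = rng.poisson(1.0, n)
    # KPI: fulfillment time depends mostly on approval time and retries
    kpi = 4.0 + 1.5 * approval_time + 0.8 * retries + rng.normal(0, 0.5, n)

    X = np.column_stack([approval_time, service_latency, retries])
    tree = DecisionTreeRegressor(max_depth=3).fit(X, kpi)
    for name, imp in zip(["approval_time", "service_latency", "retries"],
                         tree.feature_importances_):
        print(f"{name:16s} importance {imp:.2f}")
    ```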

  16. The Robustness of Pathway Analysis in Identifying Potential Drug Targets in Non-Small Cell Lung Carcinoma

    PubMed Central

    Dalby, Andrew; Bailey, Ian

    2014-01-01

    The identification of genes responsible for causing cancers from gene expression data has had varied success. Often the genes identified depend on the methods used for detecting expression patterns, or on the ways the data have been normalized and filtered. The use of gene set enrichment analysis is one way to introduce biological information to improve the detection of differentially expressed genes and pathways. In this paper we show that the use of network models, while still subject to the problems of normalization, is a more robust method for detecting pathways that are differentially overrepresented in lung cancer data. Such differences may provide opportunities for novel therapeutics. In addition, we present evidence that non-small cell lung carcinoma is not a series of homogeneous diseases; rather, there is heterogeneity within the genotype which defies phenotype classification. This diversity helps to explain the lack of progress in developing therapies against non-small cell carcinoma and suggests that drug development may consider multiple pathways as treatment targets.

  17. Phosphoproteomic profiling of tumor tissues identifies HSP27 Ser82 phosphorylation as a robust marker of early ischemia

    PubMed Central

    Zahari, Muhammad Saddiq; Wu, Xinyan; Pinto, Sneha M.; Nirujogi, Raja Sekhar; Kim, Min-Sik; Fetics, Barry; Philip, Mathew; Barnes, Sheri R.; Godfrey, Beverly; Gabrielson, Edward; Nevo, Erez; Pandey, Akhilesh

    2015-01-01

    Delays between tissue collection and tissue fixation result in ischemia and ischemia-associated changes in protein phosphorylation levels, which can misguide the examination of signaling pathway status. To identify a biomarker that serves as a reliable indicator of ischemic changes that tumor tissues undergo, we subjected harvested xenograft tumors to room temperature for 0, 2, 10 and 30 minutes before freezing in liquid nitrogen. Multiplex TMT-labeling was conducted to achieve precise quantitation, followed by TiO2 phosphopeptide enrichment and high resolution mass spectrometry profiling. LC-MS/MS analyses revealed phosphorylation level changes of a number of phosphosites in the ischemic samples. The phosphorylation of one of these sites, S82 of the 27 kDa heat shock protein (HSP27), was especially abundant and consistently upregulated in tissues with delays in freezing as short as 2 minutes. In order to eliminate effects of ischemia, we employed a novel cryogenic biopsy device which begins freezing tissues in situ before they are excised. Using this device, we showed that the upregulation of phosphorylation of S82 on HSP27 was abrogated. We thus demonstrate that our cryogenic biopsy device can eliminate ischemia-induced phosphoproteome alterations, and measurements of S82 on HSP27 can be used as a robust marker of ischemia in tissues. PMID:26329039

  19. Multiexperiment data processing in identifying model helicopter's yaw dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Haosheng; Chen, Darong

    2003-09-01

    Multi-experiment data are usually needed in identifying a model helicopter's yaw dynamics. In order to strengthen the information content of the dynamics and reduce the effect of noise, a new least-squares method using a weighted criterion is investigated to estimate the model parameters. A neural perceptron is trained to determine the weighting factors of the criterion automatically. The simulated outputs of the model derived by this method fit the measured outputs well, suggesting that this data processing approach is useful both for identifying the yaw dynamics and for processing multi-experiment data in system identification.
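
    The underlying estimator family is weighted least squares; a minimal, self-contained sketch follows. In the paper the weighting factors come from a trained perceptron, whereas here they are simply assumed to be given.

    ```python
    import numpy as np

    def wls(X, y, w):
        """Minimize sum_i w_i * (y_i - x_i^T beta)^2 via the normal equations."""
        W = np.diag(w)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ np.array([0.5, -1.2]) + rng.normal(scale=0.1, size=100)
    w = rng.uniform(0.5, 2.0, size=100)   # e.g. down-weight noisier experiments
    print(wls(X, y, w))                   # estimates close to [0.5, -1.2]
    ```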

  20. Robust Low Cost Aerospike/RLV Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Ellis, David; McKechnie

    1999-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. At the same time, fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of a shrinking NASA budget. In recent years, combustion chambers of equivalent size to the Aerospike chamber have been fabricated at NASA-Marshall Space Flight Center (MSFC) using innovative, relatively low-cost, vacuum-plasma-spray (VPS) techniques. Typically, such combustion chambers are made of the copper alloy NARloy-Z. However, current research and development conducted by NASA-Lewis Research Center (LeRC) has identified a Cu-8Cr-4Nb alloy which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. In fact, researchers at NASA-LeRC have demonstrated that powder metallurgy (P/M) Cu-8Cr-4Nb exhibits better mechanical properties at 1,200 F than NARloy-Z does at 1,000 F. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost, VPS process to deposit Cu-8Cr-4Nb with mechanical properties that match or exceed those of P/M Cu-8Cr-4Nb. In addition, oxidation resistant and thermal barrier coatings can be incorporated as an integral part of the hot wall of the liner during the VPS process. Tensile properties of Cu-8Cr-4Nb material produced by VPS are reviewed and compared to material produced previously by extrusion. VPS formed combustion chamber liners have also been prepared and will be reported on following scheduled hot firing tests at NASA-Lewis.

  1. Optical wafer metrology sensors for process-robust CD and overlay control in semiconductor device manufacturing

    NASA Astrophysics Data System (ADS)

    den Boef, Arie J.

    2016-06-01

    This paper presents three optical wafer metrology sensors that are used in lithography for robustly measuring the shape and position of wafers and device patterns on these wafers. The first two sensors are a level sensor and an alignment sensor that measure, respectively, a wafer height map and a wafer position before a new pattern is printed on the wafer. The third sensor is an optical scatterometer that measures critical dimension variations and overlay after the resist has been exposed and developed. These sensors have different optical concepts, but they share the same challenge: sub-nm precision is required at high throughput on a large variety of processed wafers and in the presence of unknown wafer processing variations. It is the purpose of this paper to explain these challenges in more detail and to give an overview of the various solutions that have been introduced over the years to achieve process-robust optical wafer metrology.

  2. Identifying different types of stochastic processes with the same spectra

    NASA Astrophysics Data System (ADS)

    Kim, Jong U.; Kish, Laszlo B.; Schmera, Gabor

    2005-05-01

    We propose a new pattern recognition method which can distinguish different stochastic processes even if they have the same power density spectrum. Known crosscorrelation techniques recognize only the same realizations of a stochastic process in the two signal channels. However, crosscorrelation techniques do not work for recognizing independent realizations of the same stochastic process, because their crosscorrelation function and cross spectrum are zero. A method able to do that would have the potential to revolutionize identification and pattern recognition techniques, including sensing and security applications. The method we propose is able to identify independent realizations of the same process and, at the same time, does not give false alarms for different processes that are very similar in nature. We demonstrate the method using different realizations of two different types of random telegraph signals, which are indistinguishable with respect to their power density spectra (PDS). We call this method the bispectrum correlation coefficient (BCC) technique.
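
    The premise is easy to reproduce numerically: two independent realizations of the same random telegraph process have matching power spectra yet essentially zero cross-correlation. The sketch below shows only this baseline failure of cross-correlation; the BCC statistic itself is not reproduced.

    ```python
    import numpy as np

    def telegraph(n, switch_prob, rng):
        """+/-1 signal that flips state with probability switch_prob per sample."""
        flips = rng.random(n) < switch_prob
        return np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

    rng = np.random.default_rng(2)
    n = 200_000
    a = telegraph(n, 0.01, rng)
    b = telegraph(n, 0.01, rng)      # independent realization of the same process

    print(np.mean(a * b))            # ~0: cross-correlation cannot link a and b
    # Band-averaged spectra agree up to estimation noise:
    psd_a = np.abs(np.fft.rfft(a)) ** 2
    psd_b = np.abs(np.fft.rfft(b)) ** 2
    print(psd_a[:500].mean() / psd_b[:500].mean())   # ~1: same PDS
    ```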

  3. Hybrid image processing for robust extraction of lean tissue on beef cut surface

    NASA Astrophysics Data System (ADS)

    Hwang, Heon; Park, Bosoon; Nguyen, Minh D.; Chen, Yud-Ren

    1996-02-01

    A hybrid image processing system which automatically separates lean tissue from the beef cut surface image and generates the lean tissue contour has been developed. Because of the inhomogeneous distribution and fuzzy pattern of fat and lean tissue on the beef cut, conventional image segmentation and contour generation algorithms suffer from heavy computation, algorithmic complexity, and poor robustness. The proposed system utilizes an artificial neural network to enhance the robustness of processing. The system is composed of three procedures: pre-network processing, network-based lean tissue segmentation, and post-network processing. At the pre-network stage, gray level images of beef cuts were segmented and resized appropriately for the network inputs. Features such as fat and bone were enhanced, and the enhanced input image was converted to a grid pattern image with each grid cell 4 by 4 pixels in size. At the network stage, the normalized gray value of each grid cell was taken as the network input. The pre-trained network generated the grid image output of the isolated lean tissue. A sequence of post-network processing was followed to obtain the detailed contour of the lean tissue. The network training scheme and the segmentation performance are presented and analyzed. The developed hybrid system shows the feasibility of human-like robust object segmentation and contour generation for complex, fuzzy, and irregular images.
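
    A toy version of the grid-pattern input stage might look as follows: average gray levels over 4 by 4 pixel blocks feed a small neural classifier. The patch statistics, labels, and network size are invented for illustration and do not reproduce the authors' trained network.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)

    def grid_features(img, cell=4):
        """Mean gray value of each cell x cell block, flattened to a vector."""
        h, w = img.shape
        img = img[: h - h % cell, : w - w % cell]
        return img.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3)).ravel()

    # Synthetic patches: "lean" darker on average than "fat" (illustrative only).
    lean = [rng.normal(90, 25, (32, 32)).clip(0, 255) for _ in range(60)]
    fat = [rng.normal(170, 25, (32, 32)).clip(0, 255) for _ in range(60)]
    X = np.array([grid_features(p) / 255.0 for p in lean + fat])
    y = np.array([1] * 60 + [0] * 60)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    print(clf.fit(X, y).score(X, y))
    ```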

  4. Robust control chart for change point detection of process variance in the presence of disturbances

    NASA Astrophysics Data System (ADS)

    Huat, Ng Kooi; Midi, Habshah

    2015-02-01

    A conventional control chart for detecting shifts in the variance of a process is typically developed where, in most circumstances, the nominal value of the variance is unknown, and it rests on the essential assumption that the underlying distribution of the quality characteristic is normal. However, this is not always the case, as the statistical estimates used for these charts are very sensitive to the occurrence of occasional outliers. For this reason, robust control charts have been put forward for situations where the underlying normality assumption is not met, serving as a remedial measure to the problem of contamination in process data. The existing approach, the Biweight A pooled residuals method, appears to be resistant to localized disturbances but lacks efficiency when disturbances are diffuse. To be concrete, diffuse disturbances are those that have an equal chance of perturbing any observation, while a localized disturbance affects every member of a certain subsample or subsamples. The efficiency of estimators in the presence of disturbances can thus rely heavily upon whether the disturbances are distributed throughout the observations or concentrated in a few subsamples. To this end, in this paper we propose a new robust MBAS control chart by means of a subsample-based robust Modified Biweight A scale estimator of the process standard deviation. It has strong resistance to both localized and diffuse disturbances, as well as high efficiency when no disturbances are present. The performance of the proposed robust chart was evaluated against several decision criteria through a Monte Carlo simulation study.
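
    For reference, the classical biweight midvariance, a close relative of the subsample-based Modified Biweight A estimator proposed here (which is not reproduced exactly), can be written compactly:

    ```python
    import numpy as np

    def biweight_scale(x, c=9.0):
        """Tukey biweight midvariance scale estimate, resistant to outliers."""
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        if mad == 0:
            return 0.0
        u = (x - med) / (c * mad)
        m = np.abs(u) < 1                # observations inside the biweight window
        num = np.sum((x[m] - med) ** 2 * (1 - u[m] ** 2) ** 4)
        den = np.sum((1 - u[m] ** 2) * (1 - 5 * u[m] ** 2)) ** 2
        return np.sqrt(len(x) * num / den)

    rng = np.random.default_rng(4)
    clean = rng.normal(0, 1, 200)
    contaminated = np.concatenate([clean, rng.normal(0, 10, 10)])  # diffuse outliers
    print(np.std(contaminated), biweight_scale(contaminated))     # biweight stays ~1
    ```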

  5. Some Results on the Analysis of Stochastic Processes with Uncertain Transition Probabilities and Robust Optimal Control

    SciTech Connect

    Keyong Li; Seong-Cheol Kang; I. Ch. Paschalidis

    2007-09-01

    This paper investigates stochastic processes that are modeled by a finite number of states but whose transition probabilities are uncertain and possibly time-varying. The treatment of uncertain transition probabilities is important because there appears to be a disconnection between the practice and theory of stochastic processes due to the difficulty of assigning exact probabilities to real-world events. Also, when the finite-state process comes as a reduced model of one that is more complicated in nature (possibly in a continuous state space), existing results do not facilitate rigorous analysis. Two approaches are introduced here. The first focuses on processes with one terminal state and the properties that affect their convergence rates. When a process is on a complicated graph, the bound of the convergence rate is not trivially related to that of the probabilities of individual transitions. Discovering the connection between the two led us to define two concepts, which we call 'progressivity' and 'sortedness', and to a new comparison theorem for stochastic processes. An optimality criterion for robust optimal control also derives from this comparison theorem. In addition, this result is applied to the case of mission-oriented autonomous robot control to produce performance estimates within a control framework that we propose. The second approach is in the MDP framework. We introduce our preliminary work on optimistic robust optimization, which aims at finding solutions that guarantee upper bounds on the cumulative discounted cost with prescribed probabilities. The motivation here is to address the issue that the standard robust optimal solution tends to be overly conservative.
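
    In a similar spirit, a robust Bellman iteration for a small MDP whose transition probabilities are only interval-bounded can be sketched as below; the inner adversary pours probability mass onto the worst next states. The example MDP and the max-min criterion are illustrative assumptions, not the paper's optimistic formulation.

    ```python
    import numpy as np

    def worst_case_dist(v, p_lo, p_hi):
        """Distribution within [p_lo, p_hi] (summing to 1) minimizing E[v]."""
        p = p_lo.copy()
        budget = 1.0 - p.sum()
        for s in np.argsort(v):          # fill low-value states up to their caps
            add = min(p_hi[s] - p[s], budget)
            p[s] += add
            budget -= add
        return p

    # 3 states, 2 actions; interval bounds per (action, state, next_state).
    p_lo = np.array([[[0.5, 0.1, 0.1], [0.1, 0.5, 0.1], [0.1, 0.1, 0.5]],
                     [[0.1, 0.4, 0.1], [0.4, 0.1, 0.1], [0.1, 0.4, 0.1]]])
    p_hi = p_lo + 0.3
    r = np.array([0.0, 1.0, 5.0])        # reward collected in each state
    gamma, v = 0.9, np.zeros(3)
    for _ in range(200):                 # robust (max-min) value iteration
        q = np.array([[r[s] + gamma * worst_case_dist(v, p_lo[a, s], p_hi[a, s]) @ v
                       for a in range(2)] for s in range(3)])
        v = q.max(axis=1)
    print(v, q.argmax(axis=1))           # robust values and policy
    ```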

  6. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers

    PubMed Central

    Sehgal, Vasudha; Seviour, Elena G.; Moss, Tyler J.; Mills, Gordon B.; Azencott, Robert; Ram, Prahlad T.

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors that takes into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases. PMID:26505200
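
    The cutoff-plus-resampling idea can be caricatured in a few lines: scan candidate expression cutoffs, score each split with a log-rank statistic, and attach a permutation p-value that accounts for the scan. Censoring is ignored for brevity and all data are synthetic; the authors' pipeline is substantially richer.

    ```python
    import numpy as np

    def logrank_stat(time, group):
        """Two-group log-rank chi-square (all events observed, no censoring)."""
        o_e, var = 0.0, 0.0
        for t in np.unique(time):
            at_risk = time >= t
            n, n1 = at_risk.sum(), (at_risk & group).sum()
            d = (time == t).sum()
            d1 = ((time == t) & group).sum()
            o_e += d1 - d * n1 / n
            if n > 1:
                var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return o_e ** 2 / var

    def best_cutoff(expr, time, n_perm=100, seed=5):
        rng = np.random.default_rng(seed)
        cands = np.quantile(expr, np.linspace(0.2, 0.8, 15))
        stats = [logrank_stat(time, expr > c) for c in cands]
        best = int(np.argmax(stats))
        # Permutation null of the *maximum* statistic corrects for the scan.
        null = [max(logrank_stat(rng.permutation(time), expr > c) for c in cands)
                for _ in range(n_perm)]
        return cands[best], float(np.mean(np.array(null) >= stats[best]))

    rng = np.random.default_rng(6)
    expr = rng.normal(size=120)
    time = np.where(expr > 0.3, 5.0, 2.0) + rng.exponential(1.0, 120)
    print(best_cutoff(expr, time))       # cutoff near 0.3, small p-value
    ```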

  8. Turning process monitoring using a robust and miniaturized non-incremental interferometric distance sensor

    NASA Astrophysics Data System (ADS)

    Günther, P.; Dreier, F.; Pfister, T.; Czarske, J.

    2011-05-01

    In-process shape measurements of rotating objects such as turning parts at a metal working lathe are of great importance for monitoring production processes and for enabling zero-error production. This calls for contactless, compact sensors with high temporal resolution as well as high precision. Furthermore, robust sensors are required that withstand the rough ambient conditions of a production environment. We therefore developed a miniaturized and robust non-incremental fiber-optic distance sensor with dimensions of only 30 × 40 × 90 mm³ which can be attached directly adjacent to the turning tool bit of a metal working lathe and allows precise in-process 3D shape measurements of turning parts. In this contribution we present the results of in-process shape measurements during the turning process at a metal working lathe using this miniaturized interferometric distance sensor. The absolute radius of the turning workpiece can be determined with micron precision. To prove the accuracy of the measurement results, comparative measurements with tactile sensors are to be performed.

  9. The limits of feedforward vision: recurrent processing promotes robust object recognition when objects are degraded.

    PubMed

    Wyatte, Dean; Curran, Tim; O'Reilly, Randall

    2012-11-01

    Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded by strengthening bottom-up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and the mask and contrast, such that the recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit of these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and degradations, indicating that masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs did not require much amplification and could be rapidly resolved, making them less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas. PMID:22905822

  10. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivates the investigation of information processing paradigms. One such paradigm is concept based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  11. Primary Polymer Aging Processes Identified from Weapon Headspace Chemicals

    SciTech Connect

    Chambers, D M; Bazan, J M; Ithaca, J G

    2002-03-25

    A current focus of our weapon headspace sampling work is the interpretation of the volatile chemical signatures that we are collecting. To help validate our interpretation we have been developing a laboratory-based material aging capability to simulate the material decomposition chemistries identified. Key to establishing this capability has been the development of an automated approach to process, analyze, and quantify arrays of material combinations as a function of time and temperature. Our initial approach involves monitoring the formation and migration of volatile compounds produced when a material decomposes. This approach is advantageous in that it is nondestructive and provides a direct comparison with our weapon headspace surveillance initiative. Nevertheless, this approach requires us to identify volatile material residue and decomposition byproducts that are not typically monitored and reported in material aging studies. Similar to our weapon monitoring method, our principal laboratory-based method involves static headspace collection by solid phase microextraction (SPME) followed by gas chromatography/mass spectrometry (GC/MS). SPME is a sorbent collection technique that is ideally suited for preconcentration and delivery of trace gas-phase compounds for analysis by GC. When combined with MS, detection limits are routinely in the low- and sub-ppb ranges, even for semivolatile and polar compounds. To automate this process we incorporated a robotic sample processor configured for SPME collection. The completed system will thermally process, sample, and analyze a material sample. Quantification of the instrument response is another process that has been integrated into the system. The current system screens low-milligram quantities of material for the formation or outgassing of small compounds as initial indicators of chemical decomposition. This emerging capability offers us a new approach to identify and non-intrusively monitor decomposition mechanisms that are

  12. A novel predictive control algorithm and robust stability criteria for integrating processes.

    PubMed

    Zhang, Bin; Yang, Weimin; Zong, Hongyuan; Wu, Zhiyong; Zhang, Weidong

    2011-07-01

    This paper introduces a novel predictive controller for single-input/single-output (SISO) integrating systems, which can be applied directly without pre-stabilizing the process. The control algorithm is designed on the basis of the tested step response model. To produce a bounded system response along the finite predictive horizon, the effect of the integrating mode must be zeroed while unmeasured disturbances exist. Here, a novel predictive feedback error compensation method is proposed to eliminate the permanent offset between the setpoint and the process output when the integrating system is affected by load disturbance. Also, a rotator factor is introduced in the performance index, which contributes to improving the robustness of the closed-loop system. Then, on the basis of Jury's dominant coefficient criterion, a robust stability condition for the resulting closed-loop system is given. The controller has only two tuning parameters, each with a clear physical meaning, which is convenient for implementation of the control algorithm. Lastly, simulations are given to illustrate that the proposed algorithm can provide excellent closed-loop performance compared with some reported methods. PMID:21353217
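
    For orientation only, a generic receding-horizon predictive controller for a sampled pure integrator is sketched below; under an input load disturbance it settles with exactly the kind of permanent offset that the paper's feedback error compensation is designed to remove. This is not the paper's algorithm.

    ```python
    import numpy as np

    T, N, lam = 0.1, 20, 0.05            # sample time, horizon, input weight
    L = np.tril(np.ones((N, N)))         # x(k+j) = x(k) + T * sum_{i<j} u(k+i)

    def mpc_input(x, r):
        rhs = T * L.T @ (np.full(N, r) - x)
        u = np.linalg.solve(T ** 2 * L.T @ L + lam * np.eye(N), rhs)
        return u[0]                      # receding horizon: apply the first move

    x, r, load = 0.0, 1.0, -0.2          # constant load disturbance on the input
    for _ in range(150):
        x = x + T * (mpc_input(x, r) + load)
    print(x)   # settles near, but not at, r: the offset addressed in the paper
    ```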

  13. CORROSION PROCESS IN REINFORCED CONCRETE IDENTIFIED BY ACOUSTIC EMISSION

    NASA Astrophysics Data System (ADS)

    Kawasaki, Yuma; Kitaura, Misuzu; Tomoda, Yuichi; Ohtsu, Masayasu

    Deterioration of Reinforced Concrete (RC) due to salt attack is known to be a serious problem. Thus, development of non-destructive evaluation (NDE) techniques is important to assess the corrosion process. Reinforcement in concrete normally does not corrode because of a passive film on the surface of the reinforcement. When the chloride concentration at the reinforcement exceeds the threshold level, the passive film is destroyed; maintenance is therefore desirable at an early stage. In this study, to identify the onset of corrosion and the nucleation of corrosion-induced cracking in concrete due to expansion of corrosion products, continuous acoustic emission (AE) monitoring is applied. Accelerated corrosion and cyclic wet and dry tests are performed in a laboratory. The SiGMA (Simplified Green's functions for Moment tensor Analysis) procedure is applied to AE waveforms to clarify the source kinematics of micro-cracks: their locations, types and orientations. Results show that the onset of corrosion and the nucleation of corrosion-induced cracking in concrete are successfully identified. Additionally, cross-sections inside the reinforcement are observed by a scanning electron microscope (SEM). From these results, a great promise for AE techniques to monitor salt damage at an early stage in RC structures is demonstrated.

  14. Robustness Tests in Determining the Earthquake Rupture Process: The June 23, 2001 Mw 8.4 Peru Earthquake

    NASA Astrophysics Data System (ADS)

    Das, S.; Robinson, D. P.

    2006-12-01

    The non-uniqueness of the problem of determining the rupture process details from analysis of body-wave seismograms was first discussed by Kostrov in 1974. We discuss how to use robustness tests together with inversion of synthetic data to identify the reliable properties of the rupture process obtained from inversion of broadband body wave data. We apply it to the great 2001 Peru earthquake. Twice in the last 200 years, a great earthquake in this region has been followed by a great earthquake in the immediately adjacent plate boundary to the south within about 10 years, indicating the potential for a major earthquake in this area in the near future. By inverting 19 pure SH-seismograms evenly distributed in azimuth around the fault, we find that the rupture was held up by a barrier and then overcame it, thereby producing the world's third largest earthquake since 1965, and we show that the stalling of the rupture in this earthquake is a robust feature. The rupture propagated for ~70 km, then skirted around a ~6000 km2 area of the fault and continued propagating for another ~200 km, returning to rupture this barrier after a ~30 second delay. The barrier has relatively low rupture speed, slip and aftershock density compared to its surroundings, and the time of the main energy release in the earthquake coincides with its rupture. We identify this barrier as a fracture zone on the subducting oceanic plate. Robinson, D. P., S. Das, A. B. Watts (2006), Earthquake rupture stalled by subducting fracture zone, Science, 312(5777), 1203-1205.

  15. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. An automatic design capability for GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of this computationally intensive measure. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although it generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also sheds light on the relationship among robustness, cooperativity and complexity. This study further shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
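
    A toy rendition of this loop, with simplified sigmoidal dynamics and a Monte Carlo robustness score (sizes, dynamics and the selection scheme are assumptions, not the paper's model), is given below.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def steady_state(W, steps=100):
        x = np.full(W.shape[0], 0.5)
        for _ in range(steps):           # iterate x <- sigmoid(W x) to a fixed point
            x = 1.0 / (1.0 + np.exp(-W @ x))
        return x

    def robustness(W, trials=30, noise=0.2, tol=0.1):
        """Fraction of perturbed networks whose steady state stays within tol."""
        base = steady_state(W)
        hits = sum(np.max(np.abs(steady_state(W + rng.normal(0, noise, W.shape))
                                 - base)) < tol
                   for _ in range(trials))
        return hits / trials

    pop = [rng.normal(0, 1, (3, 3)) for _ in range(20)]
    for _ in range(20):                  # mutation plus truncation selection
        pop.sort(key=robustness, reverse=True)
        pop = pop[:10] + [w + rng.normal(0, 0.1, w.shape) for w in pop[:10]]
    print(robustness(pop[0]))
    ```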

  16. Robust Canonical Coherence for Quasi-Cyclostationary Processes: Geomagnetism and Seismicity in Peru

    NASA Astrophysics Data System (ADS)

    Lepage, K.; Thomson, D. J.

    2007-12-01

    Preliminary results suggesting a connection between long-period geomagnetic fluctuations and long-period seismic fluctuations are presented. Data from the seismic detector NNA, situated in Ñaña, Peru, are compared to geomagnetic data from HUA, located in Huancayo, Peru. The high-pass filtered data from the two stations exhibit quasi-cyclostationary pulsation with daily periodicity, and suggest correspondence. The pulsation contains power predominantly between 2000 μHz and 8000 μHz, with the geomagnetic pulses leading by approximately 4 to 5 hours. A multitaper, robust canonical coherence analysis of the two three-component data sets, using many data sections, is performed. The method, an adaptation, suitable for quasi-cyclostationary processes, of the technique presented in "Robust estimation of power spectra" (Kleiner, Martin and Thomson, Journal of the Royal Statistical Society, Series B Methodological, 1979), is described. Simulations exploring the applicability of the method are presented. Canonical coherence is detected, predominantly between the geomagnetic field and the vertical component of seismic velocity, in the band of frequencies between 1500 μHz and 2500 μHz. Subsequent group delay estimates between the geomagnetic components and vertical seismic velocity at frequencies with large canonical coherence are computed. The estimated group delays are 8 min between geomagnetic east and vertical seismic velocity, 16 min between geomagnetic north and vertical seismic velocity, and 11 min between geomagnetic vertical and vertical seismic velocity. Possible coupling mechanisms are discussed.

  17. Quantifying Community Assembly Processes and Identifying Features that Impose Them

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Chen, Xingyuan; Kennedy, David W.; Murray, Christopher J.; Rockhold, Mark L.; Konopka, Allan

    2013-06-06

    Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m³ of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  18. Directed International Technological Change and Climate Policy: New Methods for Identifying Robust Policies Under Conditions of Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Molina-Perez, Edmundo

    It is widely recognized that international environmental technological change is key to reducing the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members, and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement reached at COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation, as it is now necessary to align large-scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options, and ultimately the presence of deep uncertainty about climate and technological change, climate financing institutions such as the GCF require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that the application of new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003), to the study of international technological change and climate policy provides useful insights that can be used for designing a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM) which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios. These scenarios combine five sources of uncertainty

  19. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system. PMID:25330468
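
    To make the optimizer concrete, a minimal, non-adaptive gravitational search algorithm on an unconstrained test function is sketched below; the paper's adaptive variant, its constraints, and the servo-control objective are not reproduced.

    ```python
    import numpy as np

    def gsa(f, dim=2, agents=30, iters=200, g0=100.0, alpha=20.0, seed=9):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (agents, dim))
        v = np.zeros_like(x)
        for t in range(iters):
            fit = np.apply_along_axis(f, 1, x)
            best, worst = fit.min(), fit.max()
            m = (fit - worst) / (best - worst + 1e-12)   # best -> 1, worst -> 0
            M = m / (m.sum() + 1e-12)                    # normalized masses
            g = g0 * np.exp(-alpha * t / iters)          # decaying constant
            kbest = np.argsort(fit)[: max(1, int(agents * (1 - t / iters)))]
            acc = np.zeros_like(x)
            for j in kbest:                              # pull toward heavy agents
                diff = x[j] - x
                dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
                acc += rng.random((agents, 1)) * g * M[j] * diff / dist
            v = rng.random((agents, 1)) * v + acc
            x = x + v
        fit = np.apply_along_axis(f, 1, x)
        return x[fit.argmin()], fit.min()

    print(gsa(lambda z: np.sum(z ** 2)))   # converges near the origin
    ```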

  20. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

  1. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  3. Processing and Properties of Fiber Reinforced Polymeric Matrix Composites. Part 2; Processing Robustness of IM7/PETI Polyimide Composites

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    1996-01-01

    The processability of a phenylethynyl terminated imide (PETI) resin matrix composite was investigated. Unidirectional prepregs were made by coating an N-methylpyrrolidone solution of the amide acid oligomer onto unsized IM7 fiber. Two batches of prepregs were used: one was made by NASA in-house, and the other was from an industrial source. The processing robustness of the composite was investigated with respect to the effect of B-staging conditions, the prepreg shelf life, and the optimal processing window. Rheological measurements indicated that PETI's processability was only slightly affected over a wide range of B-staging temperatures (from 250 C to 300 C). The open hole compression (OHC) strength values were statistically indistinguishable among specimens consolidated using various B-staging conditions. Prepreg rheology and OHC strengths were also found not to be affected by prolonged (i.e., up to 60 days) ambient storage. An optimal processing window was established using response surface methodology. It was found that the IM7/PETI composite is more sensitive to the consolidation temperature than to the consolidation pressure. Good consolidation was achievable at 371 C/100 psi, which yielded an OHC strength of 62 ksi at room temperature. However, processability declined dramatically at temperatures below 350 C.

  4. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372

  5. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
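
    The sequence of increasingly complex checks maps directly onto standard statistics. The sketch below applies them to one synthetic scatterplot whose dependence is deliberately non-monotonic, so the linear and rank correlations miss what the location test finds; the model for y is an assumption for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    x = rng.uniform(0, 1, 300)
    y = (x - 0.5) ** 2 + rng.normal(0, 0.05, 300)      # U-shaped dependence

    groups = np.array_split(y[np.argsort(x)], 5)        # y binned by increasing x
    chi2, p, _, _ = stats.chi2_contingency(np.histogram2d(x, y, bins=4)[0])

    print("(i)   linear:    ", stats.pearsonr(x, y))    # misses the U-shape
    print("(ii)  monotonic: ", stats.spearmanr(x, y))   # misses it too
    print("(iii) location:  ", stats.kruskal(*groups))  # detects the trend
    print("(iv)  spread:    ", [float(np.subtract(*np.percentile(g, [75, 25])))
                                for g in groups])       # interquartile ranges
    print("(v)   randomness:", chi2, p)                 # chi-square on a 4x4 grid
    ```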

  6. Chatter Stability in Turning and Milling with in Process Identified Process Damping

    NASA Astrophysics Data System (ADS)

    Kurata, Yusuke; Merdol, S. Doruk; Altintas, Yusuf; Suzuki, Norikazu; Shamoto, Eiji

    Process damping in metal cutting is caused by the contact between the flank face of the cutting tool and the wavy surface finish, and it is known to damp chatter vibrations. An analytical model with process damping has already been developed and verified in earlier research, in which the damping coefficient is considered to be proportional to the ratio of the vibration and cutting velocities. This paper presents in-process identification of the process damping force coefficient from cutting tests. Plunge turning is used to create a continuous reduction in cutting speed as the tool reduces the diameter of a cylindrical workpiece. When chatter stops at a critical cutting speed, the process damping coefficient is estimated by inverse solution of the stability law. It is shown that the stability lobes constructed with the identified process damping coefficient agree with experiments conducted in both turning and milling.

  7. Identifying Process Variables in Career Counseling: A Research Agenda.

    ERIC Educational Resources Information Center

    Heppner, Mary J.; Heppner, P. Paul

    2003-01-01

    Outlines areas for career counseling process research: examining the working alliance; reconceptualizing career counseling as learning; investigating process/outcome differences due to client and counselor attributes; examining influential session events; using a common problem resolution metric; examining change longitudinally; examining…

  8. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
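
    The core of the Gold (ratio) method is the nonnegativity-preserving iteration x_{k+1} = x_k * (H^T y) / (H^T H x_k). The sketch below applies it to a synthetic 1-D waveform of two Gaussian echoes blurred by a broad system response; pulse shapes and sizes are illustrative, not NEON data.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def gold_deconvolve(y, h, iters=500):
        conv = lambda a, b: np.convolve(a, b, mode="same")
        ht = h[::-1]                     # correlation = convolution with flipped h
        x = np.full_like(y, y.mean())    # strictly positive initial estimate
        hty = conv(y, ht)
        for _ in range(iters):
            x = x * hty / (conv(conv(x, h), ht) + 1e-12)
        return x

    t = np.arange(200, dtype=float)
    truth = np.exp(-0.5 * ((t - 70) / 2) ** 2) + 0.6 * np.exp(-0.5 * ((t - 90) / 2) ** 2)
    h = np.exp(-0.5 * ((np.arange(41) - 20) / 6) ** 2)  # broad system response
    y = np.convolve(truth, h, mode="same")              # blurred waveform

    x = gold_deconvolve(y, h)
    print(find_peaks(x, height=0.1 * x.max())[0])       # echoes near t = 70, 90
    ```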

  9. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method. PMID:22147299
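
    The multiobjective step can be caricatured by preference-weighted gradient descent on two competing objectives, say energy use versus quality loss; the objectives, the weight, and the decision variables below are stand-ins, not the mill models from the paper.

    ```python
    import numpy as np

    def grad(f, u, eps=1e-6):
        """Central finite-difference gradient of a scalar function f at u."""
        g = np.zeros_like(u)
        for i in range(u.size):
            d = np.zeros_like(u)
            d[i] = eps
            g[i] = (f(u + d) - f(u - d)) / (2 * eps)
        return g

    energy = lambda u: np.sum(u ** 2)            # stand-in: energy consumption
    quality = lambda u: np.sum((u - 1.0) ** 2)   # stand-in: quality loss
    w = 0.7                                      # user preference: favor energy

    u = np.array([0.5, 2.0])                     # e.g. vacuum pressure set-points
    for _ in range(500):
        u -= 0.01 * (w * grad(energy, u) + (1 - w) * grad(quality, u))
    print(u)    # converges to (1 - w) = 0.3 per coordinate for this weighting
    ```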

  10. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes

    SciTech Connect

    Stockel, Jana; Welsh, Eric A.; Liberton, Michelle L.; Kunnavakkam, Rangesh V.; Aurora, Rajeev; Pakrasi, Himadri B.

    2008-04-22

    Cyanobacteria are oxygenic photosynthetic organisms, and the only prokaryotes known to have a circadian cycle. Unicellular diazotrophic cyanobacteria such as Cyanothece 51142 can fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. Thus, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of two processes: photosynthesis during the day, and nitrogen fixation at night. While previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of this temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5000 genes in Cyanothece 51142 during two consecutive diurnal periods. We found that ~30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes in numerous individual pathways, such as glycolysis, the pentose phosphate pathway and glycogen metabolism, were co-regulated and maximally expressed at distinct phases during the diurnal cycle. Our analyses suggest that the demands of nitrogen fixation greatly influence major metabolic activities inside Cyanothece cells and thus drive various cellular activities. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism in a photosynthetic bacterium.
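
    One plausible way to flag such robustly oscillating transcripts, sketched here under an assumed 4 h sampling grid over two cycles (not necessarily the study's statistics), is to score the 24 h Fourier component of each profile:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    hours = np.arange(0, 48, 4.0)                 # two consecutive diurnal periods
    n_genes = 1000
    phase = rng.uniform(0, 2 * np.pi, n_genes)
    amp = np.where(rng.random(n_genes) < 0.3, 2.0, 0.0)   # ~30% truly cycling
    expr = amp[:, None] * np.cos(2 * np.pi * hours / 24 + phase[:, None]) \
        + rng.normal(0, 1.0, (n_genes, hours.size))

    x = expr - expr.mean(axis=1, keepdims=True)
    freqs = np.fft.rfftfreq(hours.size, d=4.0)    # in cycles per hour
    k24 = np.argmin(np.abs(freqs - 1 / 24))       # bin of the 24 h period
    power = np.abs(np.fft.rfft(x, axis=1)) ** 2
    score = power[:, k24] / power.sum(axis=1)     # fraction of power at 24 h
    print(np.mean(score > 0.5), "of genes called oscillating")
    ```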

  11. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated through a 0.18 μm CMOS technology and has a 45 degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has a high sensitivity throughout the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuations and noise has been achieved.
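
    The signal arithmetic described above reduces to a small linear step: derive EGY = W - (R + G + B), then apply a 3x4 matrix to [R, G, B, EGY]. The matrix coefficients below are placeholders, not the authors' calibrated values.

    ```python
    import numpy as np

    def reproduce_color(r, g, b, w, M):
        egy = w - (r + g + b)            # emerald-green/yellow residual signal
        return M @ np.array([r, g, b, egy])

    # Placeholder 3x4 matrix: identity on RGB plus a small EGY correction.
    M = np.array([[1.0, 0.0, 0.0, 0.15],
                  [0.0, 1.0, 0.0, 0.30],
                  [0.0, 0.0, 1.0, 0.05]])

    # Hypothetical sensor readings for one pixel (illustrative values only).
    print(reproduce_color(0.30, 0.45, 0.20, 1.10, M))
    ```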

  12. Correlation analysis for long time series by robustly estimated autoregressive stochastic processes

    NASA Astrophysics Data System (ADS)

    Schuh, Wolf-Dieter; Brockmann, Jan-Martin; Kargoll, Boris

    2015-04-01

    Modern sensors and satellite missions deliver huge data sets and long time series of observations. These data sets have to be handled with care because of changing correlations, conspicuous data and possible outliers. Tailored concepts for data selection and robust techniques to estimate the correlation characteristics allow for a better, optimal exploitation of the information in these measurements. In this presentation we give an overview of standard techniques for estimating correlations occurring in long time series, in the time domain as well as in the frequency domain. We discuss the pros and cons, especially with a focus on the intensified occurrence of conspicuous data and outliers. We present a concept to classify the measurements and isolate conspicuous data. We propose to describe the varying correlation behavior of the measurement series by an autoregressive stochastic process and give some hints on how to construct adaptive filters to decorrelate the measurement series and handle the huge covariance matrices. As study object we use time series of gravity gradient data collected during the GOCE low orbit operation campaign (LOOC). Due to the low orbit, these data from 13-Jun-2014 to 21-Oct-2014 have more or less the same potential to recover the Earth's gravity field as all the data from the rest of the entire mission. These data are therefore extraordinarily valuable but hard to handle, because of conspicuous data due to maneuvers during the orbit lowering phases, an overall increase in drag, saturation of ion thrusters and other (currently) unexplained effects.
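
    One concrete piece of such a pipeline is fitting an AR(p) model and using it as a whitening (decorrelation) filter; the sketch below uses ordinary, non-robust Yule-Walker estimation on a simulated AR(2) series, omitting the robust estimation and classification steps.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def yule_walker(x, p):
        """AR(p) coefficients from the empirical autocovariance sequence."""
        x = x - x.mean()
        r = np.array([x[: len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
        return solve_toeplitz(r[:p], r[1 : p + 1])   # solves R phi = r

    rng = np.random.default_rng(12)
    true_phi, n = np.array([0.8, -0.3]), 20_000
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(2, n):                # simulate a stable AR(2) process
        x[t] = true_phi @ x[t - 2 : t][::-1] + e[t]

    phi = yule_walker(x, 2)
    resid = x[2:] - np.column_stack([x[1:-1], x[:-2]]) @ phi
    print(phi)                                        # close to [0.8, -0.3]
    print(np.corrcoef(resid[1:], resid[:-1])[0, 1])   # ~0: whitened series
    ```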

  13. Fabrication of robust micro-patterned polymeric films via static breath-figure process and vulcanization.

    PubMed

    Li, Lei; Zhong, Yawen; Gong, Jianliang; Li, Jian; Huang, Jin; Ma, Zhi

    2011-02-15

    Here, we present the preparation of thermally stable and solvent resistant micro-patterned polymeric films via static breath-figure process and sequent vulcanization, with a commercially available triblock polymer, polystyrene-b-polyisoprene-b-polystyrene (SIS). The vulcanized honeycomb structured SIS films became self-supported and resistant to a wide range of organic solvents and thermally stable up to 350°C for 2h, an increase of more than 300K as compared to the uncross-linked films. This superior robustness could be attributed to the high degree of polyisoprene cross-linking. The versatility of the methodology was demonstrated by applying to another commercially available triblock polymer, polystyrene-b-polybutadiene-b-polystyrene (SBS). Particularly, hydroxy groups were introduced into SBS by hydroboration. The functionalized two-dimensional micro-patterns feasible for site-directed grafting were created by the hydroxyl-containing polymers. In addition, the fixed microporous structures could be replicated to fabricate textured positive PDMS stamps. This simple technique offers new prospects in the field of micro-patterns, soft lithography and templates. PMID:21168143

  14. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes

    NASA Astrophysics Data System (ADS)

    Sobotta, B.; Söhn, M.; Alber, M.

    2012-12-01

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
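    A toy version of the surrogate workflow can be written with scikit-learn's Gaussian process regressor; the dose metric, the error model and the sample sizes below are invented stand-ins, not the paper's patient model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_dose_metric(shift):
    """Stand-in for a full dose recalculation under a 3D set-up shift (mm)."""
    return 60.0 - 0.8 * np.sum(shift ** 2)   # hypothetical quality metric

# Train the GP surrogate on only a few expensive samples.
X_train = rng.normal(0.0, 3.0, size=(15, 3))
y_train = np.array([expensive_dose_metric(x) for x in X_train])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), normalize_y=True)
gp.fit(X_train, y_train)

# Monte Carlo over the geometric uncertainty using the cheap surrogate.
scenarios = rng.normal(0.0, 3.0, size=(10000, 3))
outcomes = gp.predict(scenarios)
print(outcomes.mean(), outcomes.std())   # statistical moments of interest
```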

  15. Quantitative Morphometry of Electrophysiologically Identified CA3b Interneurons Reveals Robust Local Geometry and Distinct Cell Classes

    PubMed Central

    Ascoli, Giorgio A.; Brown, Kerry M.; Calixto, Eduardo; Card, J. Patrick; Galvan, E. J.; Perez-Rosello, T.; Barrionuevo, Germán

    2010-01-01

    The morphological and electrophysiological diversity of inhibitory cells in hippocampal area CA3 may underlie specific computational roles and is not yet fully elucidated. In particular, interneurons with somata in strata radiatum (R) and lacunosum-moleculare (L-M) receive converging stimulation from the dentate gyrus and entorhinal cortex as well as within CA3. Although these cells express different forms of synaptic plasticity, their axonal trees and connectivity are still largely unknown. We investigated the branching and spatial patterns, plus the membrane and synaptic properties, of rat CA3b R and L-M interneurons digitally reconstructed after intracellular labeling. We found considerable variability within but no difference between the two layers, and no correlation between morphological and biophysical properties. Nevertheless, two cell types were identified based on the number of dendritic bifurcations, with significantly different anatomical and electrophysiological features. Axons generally branched an order of magnitude more than dendrites. However, interneurons on both sides of the R/L-M boundary revealed surprisingly modular axo-dendritic arborizations with consistently uniform local branch geometry. Both axons and dendrites followed a lamellar organization, and axons displayed a spatial preference towards the fissure. Moreover, only a small fraction of the axonal arbor extended to the outer portion of the invaded volume, and tended to return towards the proximal region. In contrast, dendritic trees demonstrated more limited but isotropic volume occupancy. These results suggest a role of predominantly local feedforward and lateral inhibitory control for both R and L-M interneurons. Such role may be essential to balance the extensive recurrent excitation of area CA3 underlying hippocampal autoassociative memory function. PMID:19496174

  16. Method for processing seismic data to identify anomalous absorption zones

    DOEpatents

    Taner, M. Turhan

    2006-01-03

    A method is disclosed for identifying zones anomalously absorptive of seismic energy. The method includes jointly time-frequency decomposing seismic traces, low frequency bandpass filtering the decomposed traces to determine a general trend of the mean frequency and bandwidth of the seismic traces, and high frequency bandpass filtering the decomposed traces to determine local variations in the mean frequency and bandwidth of the seismic traces. Anomalous zones are determined where there is a difference between the general trend and the local variations.
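    A hedged sketch of that workflow: time-frequency decompose a trace, track its mean (centroid) frequency, and flag times where the local variations depart from the smoothed general trend. The window lengths and the threshold k are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import uniform_filter1d

def mean_frequency(trace, fs):
    """Spectral centroid of a trace over time from its STFT."""
    f, t, S = stft(trace, fs=fs, nperseg=128)
    P = np.abs(S) ** 2
    return t, (f[:, None] * P).sum(axis=0) / (P.sum(axis=0) + 1e-12)

def anomalous_zones(trace, fs, smooth_len=51, local_len=5, k=2.0):
    t, mf = mean_frequency(trace, fs)
    trend = uniform_filter1d(mf, smooth_len)   # general (low-frequency) trend
    local = uniform_filter1d(mf, local_len)    # local variations
    diff = local - trend
    return t[np.abs(diff) > k * diff.std()]    # times of anomalous absorption
```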

  17. Identifying and tracking dynamic processes in social networks

    NASA Astrophysics Data System (ADS)

    Chung, Wayne; Savell, Robert; Schütt, Jan-Peter; Cybenko, George

    2006-05-01

    The detection and tracking of embedded malicious subnets in an active social network can be computationally daunting due to the quantity of transactional data generated in the natural interaction of large numbers of actors comprising a network. In addition, detection of illicit behavior may be further complicated by evasive strategies designed to camouflage the activities of the covert subnet. In this work, we move beyond traditional static methods of social network analysis to develop a set of dynamic process models which encode various modes of behavior in active social networks. These models serve as the basis for a new application of the Process Query System (PQS) to the identification and tracking of covert dynamic processes in social networks. We present a preliminary result from the application of our technique to a real-world data stream: the Enron email corpus.

  18. A Robust Power Remote Manipulator for Use in Waste Sorting, Processing, and Packaging - 12158

    SciTech Connect

    Cole, Matt; Martin, Scott

    2012-07-01

    Disposition of radioactive waste is one of the Department of Energy's (DOE's) highest priorities. A critical component of the waste disposition strategy is shipment of Transuranic (TRU) waste from DOE's Oak Ridge Reservation to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This is the mission of the DOE TRU Waste Processing Center (TWPC). The remote-handled TRU waste at the Oak Ridge Reservation is currently in a mixed waste form that must be repackaged in order to meet the WIPP Waste Acceptance Criteria (WAC). Because this remote-handled legacy waste is very diverse, sorting, size reducing, and packaging it will require equipment flexibility and strength that is not possible with standard master-slave manipulators. To perform the wide range of tasks necessary with such diverse, highly contaminated material, TWPC worked with S.A. Technology (SAT) to modify SAT's Power Remote Manipulator (PRM) technology to provide the processing center with an added degree of dexterity and high load handling capability inside its shielded cells. TWPC and SAT incorporated innovative technologies into the PRM design to better suit the operations required at TWPC, and to increase the overall capability of the PRM system. Improving on an already proven PRM system will ensure that TWPC gains the capabilities necessary to efficiently complete its TRU waste disposition mission. The collaborative effort between TWPC and S.A. Technology has yielded an extremely capable and robust solution to perform the wide range of tasks necessary to repackage TRU waste containers at TWPC. Incorporating innovative technologies into a proven manipulator system, these PRMs are expected to be an important addition to the capabilities available to shielded cell operators. The PRMs provide operators with the ability to reach anywhere in the cell, lift heavy objects, and perform size reduction associated with the disposition of noncompliant waste. Factory acceptance testing of the TWPC Powered Remote

  19. Robust Low Cost Liquid Rocket Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Elam, Sandra; Ellis, David L.; McKechnie, Timothy; Hickman, Robert; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. Fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of shrinking budgets. Three technologies have been combined to produce an advanced liquid rocket engine combustion chamber at NASA-Marshall Space Flight Center (MSFC) using relatively low-cost, vacuum-plasma-spray (VPS) techniques. Copper alloy NARloy-Z was replaced with a new high-performance Cu-8Cr-4Nb alloy developed by NASA-Glenn Research Center (GRC), which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. Functional gradient technology, developed in building composite cartridges for space furnaces, was incorporated to add oxidation-resistant and thermal-barrier coatings as an integral part of the hot wall of the liner during the VPS process. NiCrAlY, utilized to produce a durable protective coating for the space shuttle high pressure fuel turbopump (HPFTP) turbine blades, was used as the functional gradient material (FGM) coating. The FGM not only serves as protection from oxidation or blanching, the main cause of engine failure, but also serves as a thermal barrier because of its lower thermal conductivity, reducing the temperature of the combustion liner by 200 °F, from 1000 °F to 800 °F, producing longer life. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost VPS process. VPS-formed combustion chamber test articles have been formed with the FGM hot wall built in and hot fire tested, demonstrating for the first time a coating that will remain intact through the hot firing test, and with

  20. An Excel Workbook for Identifying Redox Processes in Ground Water

    USGS Publications Warehouse

    Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.

    2009-01-01

    The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. The number of samples that can be analyzed
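    The assignment logic lends itself to a compact chain of conditional tests. The sketch below is a hypothetical reimplementation for illustration only; the threshold values are placeholders, and the actual criteria should be taken from McMahon and Chapelle (2008) and the workbook itself.

```python
def assign_redox(o2, no3, mn, fe, so4, sulfide,
                 o2_th=0.5, no3_th=0.5, mn_th=0.05, fe_th=0.1):
    """Assign the predominant redox process from concentrations (mg/L).

    Thresholds are illustrative placeholders, not the published values.
    """
    if o2 >= o2_th:
        return "Oxic (O2 reduction)"
    if no3 >= no3_th:
        return "Anoxic (NO3 reduction)"
    if mn >= mn_th and fe < fe_th:
        return "Anoxic (Mn reduction)"
    if fe >= fe_th:
        # Measured sulfide helps differentiate Fe(III) from SO4 reduction,
        # the refinement attributed above to Chapelle and others (in press).
        return "Anoxic (SO4 reduction)" if sulfide > 0 else "Anoxic (Fe reduction)"
    return "Indeterminate"

print(assign_redox(o2=0.1, no3=0.2, mn=0.02, fe=0.4, so4=12.0, sulfide=0.0))
```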

  1. Identifying User Needs and the Participative Design Process

    NASA Astrophysics Data System (ADS)

    Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan; Bergvall-Kåreborn, Birgitta; Andersson, Anna-Lena

    As the number of persons with dementia increases and also the demands on care and support at home, additional solutions to support persons with dementia are needed. The COGKNOW project aims to develop an integrated, user-driven cognitive prosthetic device to help persons with dementia. The project focuses on support in the areas of memory, social contact, daily living activities and feelings of safety. The design process is user-participatory and consists of iterative cycles at three test sites across Europe. In the first cycle, persons with dementia and their carers (n = 17) actively participated in the developmental process. Based on their priorities of needs and solutions, on their disabilities and after discussion within the team, a top-four list of Information and Communication Technology (ICT) solutions was made and now serves as the basis for development: in the area of remembering - day and time orientation support, a find-mobile service and a reminding service; in the area of social contact - telephone support by picture dialling; in the area of daily activities - media control support through a music playback and radio function; and finally, in the area of safety - a warning service to indicate when the front door is open and an emergency contact service to enhance feelings of safety. The results of this first project phase show that, in general, the people with mild dementia as well as their carers were able to express and prioritize their (unmet) needs and the kind of technological assistance they preferred in the selected areas. In the next phases it will be tested whether the user-participatory design and multidisciplinary approach employed in the COGKNOW project result in a user-friendly, useful device that positively impacts the autonomy and quality of life of persons with dementia and their carers.

  2. Integrated Process Monitoring based on Systems of Sensors for Enhanced Nuclear Safeguards Sensitivity and Robustness

    SciTech Connect

    Humberto E. Garcia

    2014-07-01

    This paper illustrates the safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas, in order to infer the possible existence of proliferation-driven activities, the objective of NMA-based methods is often to statistically evaluate the materials unaccounted for (MUF) computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP), a particular objective for a PM-based approach may be to statistically infer and evaluate the anomalies unaccounted for (AUF) that may have occurred within an MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or less reliably observed than others. The proposed similarity between NMA- and PM-based approaches is important, as performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected, based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered to be problematic for detection using NMA-based methods alone. Results demonstrate benefits of conducting PM under a system
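    The mass-balance bookkeeping behind MUF, and the alarm decision it feeds, reduce to a few lines; the sketch below (Python, with invented quantities) is illustrative, not the paper's implementation.

```python
def muf(begin_inventory, receipts, shipments, end_inventory):
    """Materials unaccounted for over one material balance period (kg)."""
    return begin_inventory + receipts - shipments - end_inventory

def diversion_alarm(mass_estimate_kg, alarm_level_kg):
    """Report an alarm when the estimated diverted mass (from MUF or
    SQ-translated AUF counts) reaches the alarm level AL, which is chosen
    to trade off detection probability against false alarm probability."""
    return mass_estimate_kg >= alarm_level_kg

# Invented numbers: 0.8 kg unaccounted for against a 0.5 kg alarm level.
print(diversion_alarm(muf(100.0, 20.0, 15.0, 104.2), alarm_level_kg=0.5))
```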

  3. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with the available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects. PMID:26191792
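    As a sketch of how shading and background noise can be layered onto a ground-truth image at controlled levels (the functional forms here are illustrative, not the benchmark's exact recipe):

```python
import numpy as np

def distort(image, shading_strength=0.5, noise_sigma=0.05, seed=0):
    """Apply a smooth illumination ramp plus additive background noise.

    image: 2-d float array with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Multiplicative shading falling from 1.0 to (1 - shading_strength).
    shading = 1.0 - shading_strength * (np.arange(w) / (w - 1))
    noisy = image * shading + rng.normal(0.0, noise_sigma, size=(h, w))
    return np.clip(noisy, 0.0, 1.0)
```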

  5. Processing of Perceptual Information Is More Robust than Processing of Conceptual Information in Preschool-Age Children: Evidence from Costs of Switching

    ERIC Educational Resources Information Center

    Fisher, Anna V.

    2011-01-01

    Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…

  6. Deep transcriptome-sequencing and proteome analysis of the hydrothermal vent annelid Alvinella pompejana identifies the CvP-bias as a robust measure of eukaryotic thermostability

    PubMed Central

    2013-01-01

    Background: Alvinella pompejana is an annelid worm that inhabits deep-sea hydrothermal vent sites in the Pacific Ocean. Living at a depth of approximately 2500 meters, these worms experience extreme environmental conditions, including high temperature and pressure as well as high levels of sulfide and heavy metals. A. pompejana is one of the most thermotolerant metazoans, making this animal a subject of great interest for studies of eukaryotic thermoadaptation. Results: In order to complement existing EST resources we performed deep sequencing of the A. pompejana transcriptome. We identified several thousand novel protein-coding transcripts, nearly doubling the sequence data for this annelid. We then performed an extensive survey of previously established prokaryotic thermoadaptation measures to search for global signals of thermoadaptation in A. pompejana in comparison with mesophilic eukaryotes. In an orthologous set of 457 proteins, we found that the best indicator of thermoadaptation was the difference in frequency of charged versus polar residues (CvP-bias), which was highest in A. pompejana. CvP-bias robustly distinguished prokaryotic thermophiles from prokaryotic mesophiles, as well as the thermophilic fungus Chaetomium thermophilum from mesophilic eukaryotes. Experimental values for thermophilic proteins supported higher CvP-bias as a measure of thermal stability when compared to their mesophilic orthologs. Proteome-wide mean CvP-bias also correlated with the body temperatures of homeothermic birds and mammals. Conclusions: Our work extends the transcriptome resources for A. pompejana and identifies the CvP-bias as a robust and widely applicable measure of eukaryotic thermoadaptation. Reviewers: This article was reviewed by Sándor Pongor, L. Aravind and Anthony M. Poole. PMID:23324115
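    The CvP-bias itself is a one-line statistic: the difference between the fractions of charged and polar residues in a sequence. A sketch, assuming the common residue sets charged = {D, E, K, R} and polar = {N, Q, S, T} (conventions vary slightly in the literature):

```python
CHARGED = set("DEKR")
POLAR = set("NQST")

def cvp_bias(seq):
    """CvP-bias in percent: charged minus polar residue fractions."""
    seq = seq.upper()
    charged = sum(aa in CHARGED for aa in seq)
    polar = sum(aa in POLAR for aa in seq)
    return 100.0 * (charged - polar) / len(seq)

print(cvp_bias("MKDERRKSTNQLLAV"))   # toy sequence, ~13.3%
```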

  7. Individualized relapse prediction: personality measures and striatal and insular activity during reward-processing robustly predict relapse*

    PubMed Central

    Gowin, Joshua L.; Ball, Tali M.; Wittmann, Marc; Tapert, Susan F.; Paulus, Martin P.

    2015-01-01

    Background: Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Methods: 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, and a combined model) to generate predictions for each participant regarding their relapse likelihood. Results: 18 individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics, such that a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. Conclusions: These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. PMID:25977206
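    For context, the reported likelihood ratios follow directly from a test's sensitivity and specificity; the values below are invented, chosen only to land near the reported figures.

```python
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)   # positive test: relapse more likely
    lr_neg = (1.0 - sensitivity) / specificity   # negative test: relapse less likely
    return lr_pos, lr_neg

# Hypothetical operating point giving ratios close to 2.63 and 0.48.
print(likelihood_ratios(sensitivity=0.67, specificity=0.75))
```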

  8. A two-step patterning process increases the robustness of periodic patterning in the fly eye.

    PubMed

    Gavish, Avishai; Barkai, Naama

    2016-06-01

    Complex periodic patterns can self-organize through dynamic interactions between diffusible activators and inhibitors. In the biological context, self-organized patterning is challenged by spatial heterogeneities ('noise') inherent to biological systems. How spatial variability impacts the periodic patterning mechanism and how it can be buffered to ensure precise patterning is not well understood. We examine the effect of spatial heterogeneity on the periodic patterning of the fruit fly eye, an organ composed of ∼800 miniature eye units (ommatidia) whose periodic arrangement along a hexagonal lattice self-organizes during early stages of fly development. The patterning follows a two-step process, with an initial formation of evenly spaced clusters of ∼10 cells followed by a subsequent refinement of each cluster into a single selected cell. Using a probabilistic approach, we calculate the rate of patterning errors resulting from spatial heterogeneities in cell size, position and biosynthetic capacity. Notably, error rates were largely independent of the desired cluster size but followed the distributions of signaling speeds. Pre-formation of large clusters therefore greatly increases the reproducibility of the overall periodic arrangement, suggesting that the two-stage patterning process functions to guard the pattern against errors caused by spatial heterogeneities. Our results emphasize the constraints imposed on self-organized patterning mechanisms by the need to buffer stochastic effects. Author summary: Complex periodic patterns are common in nature and are observed in physical, chemical and biological systems. Understanding how these patterns are generated in a precise manner is a key challenge. Biological patterns are especially intriguing, as they are generated in a noisy environment; cell position and cell size, for example, are subject to stochastic variations, as are the strengths of the chemical signals mediating cell-to-cell communication. The need

  9. Differential Allelic Expression in the Human Genome: A Robust Approach To Identify Genetic and Epigenetic Cis-Acting Mechanisms Regulating Gene Expression

    PubMed Central

    Serre, David; Gurd, Scott; Ge, Bing; Sladek, Robert; Sinnett, Donna; Harmsen, Eef; Bibikova, Marina; Chudin, Eugene; Barker, David L.; Dickinson, Todd; Fan, Jian-Bing; Hudson, Thomas J.

    2008-01-01

    The recent development of whole genome association studies has led to the robust identification of several loci involved in different common human diseases. Interestingly, some of the strongest signals of association observed in these studies arise from non-coding regions located in very large introns or far away from any annotated genes, raising the possibility that these regions are involved in the etiology of the disease through some unidentified regulatory mechanisms. These findings highlight the importance of better understanding the mechanisms leading to inter-individual differences in gene expression in humans. Most of the existing approaches developed to identify common regulatory polymorphisms are based on linkage/association mapping of gene expression to genotypes. However, these methods have some limitations, notably their cost and the requirement of extensive genotyping information from all the individuals studied, which limits their application to a specific cohort or tissue. Here we describe a robust and high-throughput method to directly measure differences in allelic expression for a large number of genes using the Illumina Allele-Specific Expression BeadArray platform and quantitative sequencing of RT-PCR products. We show that this approach allows reliable identification of differences in the relative expression of the two alleles larger than 1.5-fold (i.e., deviations of the allelic ratio larger than 60:40) and offers several advantages over the mapping of total gene expression, particularly for studying humans or outbred populations. Our analysis of more than 80 individuals for 2,968 SNPs located in 1,380 genes confirms that differential allelic expression is a widespread phenomenon affecting the expression of 20% of human genes and shows that our method successfully captures expression differences resulting from both genetic and epigenetic cis-acting mechanisms. PMID:18454203

  10. New results on the robust stability of PID controllers with gain and phase margins for UFOPTD processes.

    PubMed

    Jin, Q B; Liu, Q; Huang, B

    2016-03-01

    This paper considers the problem of determining all the robust PID (proportional-integral-derivative) controllers in terms of the gain and phase margins (GPM) for open-loop unstable first order plus time delay (UFOPTD) processes. It is the first time that the feasible ranges of the GPM specifications provided by a PID controller are given for UFOPTD processes. A gain and phase margin tester is used to modify the original model, and the ranges of the margin specifications are derived such that the modified model can be stabilized by a stabilizing PID controller based on the Hermite-Biehler Theorem. Furthermore, we obtain all the controllers satisfying a given margin specification. Simulation studies show how to use the results to design a robust PID controller. PMID:26708658

  11. Identifying robust large-scale flood risk mitigation strategies: A quasi-2D hydraulic model as a tool for the Po river

    NASA Astrophysics Data System (ADS)

    Castellarin, Attilio; Domeneghetti, Alessio; Brath, Armando

    2011-01-01

    This paper focuses on the identification of large-scale flood risk mitigation strategies for the middle-lower reach of the River Po, the longest Italian river and the largest in terms of streamflow. This study develops and tests the applicability of a quasi-2D hydraulic model to aid the identification of large-scale flood risk mitigation strategies, relative to a 500-year flood event, other than levee heightening, which is neither technically viable nor economically conceivable for the case study. Different geometrical configurations of the embankment system are considered and modelled in the study: no overtopping; overtopping and levee breaching; overtopping without levee breaching. The quasi-2D model proved to be a very useful tool for (1) addressing the problem of flood risk mitigation from a global perspective (i.e., the entire middle-lower reach of the River Po), (2) identifying critical reaches, inundation areas and corresponding overflow volumes, and (3) generating reliable boundary conditions for smaller scale studies aimed at further analyzing the hypothesized flood mitigation strategies using more complex modelling tools (e.g., fully 2D approaches). These are crucial tasks for institutions and public bodies in charge of formulating robust flood risk management strategies for large European rivers, in the light of the recent Directive 2007/60/EC on the assessment and management of flood risks (European Parliament, 2007).

  12. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
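    A line-by-line approximation of the leveling step, using the robust (iteratively reweighted) lowess smoother from statsmodels; this is a simplified 1-d stand-in for the authors' 2-d local regression.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def level_image(img, frac=0.5, robust_iters=3):
    """Subtract a robust smooth trend from each scan line of an image."""
    leveled = np.empty_like(img, dtype=float)
    x = np.arange(img.shape[1], dtype=float)
    for i, line in enumerate(img):
        # `it` sets the number of robustifying reweighting passes, which
        # downweight features and outliers that are not part of the trend.
        trend = lowess(line, x, frac=frac, it=robust_iters, return_sorted=False)
        leveled[i] = line - trend
    return leveled
```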

  13. Mechanisms for Robust Cognition.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A

    2015-08-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. PMID:25352094

  14. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice.

    PubMed

    Plint, Simon; Patterson, Fiona

    2010-06-01

    The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, and is perceived to be fair by candidates and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine markable short listing assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control rather than the imposition of change without perceived legitimate authority. PMID:20547597

  15. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited by small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as for MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The usage of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
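    The ingredient credited here for robust convergence, a Newton-Raphson iteration damped by a backtracking line search, can be sketched generically; this is a schematic stand-alone version, not the OGS/PETSc implementation.

```python
import numpy as np

def newton_line_search(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Damped Newton: shrink the step until the residual norm decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(jacobian(x), -r)
        alpha = 1.0
        while (np.linalg.norm(residual(x + alpha * dx)) >= np.linalg.norm(r)
               and alpha > 1e-4):
            alpha *= 0.5                      # backtracking line search
        x = x + alpha * dx
    raise RuntimeError("Newton iteration did not converge")

# Example: solve x0**2 + x1 = 3 and x0 - x1 = 1.
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
print(newton_line_search(f, J, [2.0, 2.0]))
```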

  16. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for the prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.

  17. Robustness and assortativity for diffusion-like processes in scale-free networks

    NASA Astrophysics Data System (ADS)

    D'Agostino, G.; Scala, A.; Zlatić, V.; Caldarelli, G.

    2012-03-01

    By analysing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before the epidemic/failure spreads. Moreover, we study by computer simulations the sandpile cascade model, a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemics/failures, degree-targeted immunization policies increase their resilience to systemic risk.
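    The spectral criterion can be reproduced on a toy network: for SIS-like spreading, the epidemic threshold scales as 1/λ_max of the adjacency matrix, so structural changes that raise λ_max lower the threshold. The graph below is an illustrative stand-in for the networks studied in the paper.

```python
import networkx as nx
import numpy as np

# Scale-free test network (not the paper's data).
G = nx.barabasi_albert_graph(n=500, m=3, seed=1)
A = nx.to_numpy_array(G)
lambda_max = np.linalg.eigvalsh(A).max()

print("degree assortativity:", nx.degree_assortativity_coefficient(G))
print("epidemic threshold ~ 1/lambda_max =", 1.0 / lambda_max)
```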

  18. Robustness and Assortativity for Diffusion-like Processes in Scale- free Networks

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; D'Agostino, Gregorio; Zlatic, Vinko; Caldarelli, Guido

    2012-02-01

    By analyzing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before the epidemic/failure spreads. Moreover, we study by computer simulations a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemics/failures, degree-targeted immunization policies increase their resilience to systemic risk.

  19. Process description language: an experiment in robust programming for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Spooner, Natalie R.; Creak, G. Alan

    1998-10-01

    Maintaining stable, robust, and consistent software is difficult in the face of the increasing rate of change of customers' preferences, materials, manufacturing techniques, computer equipment, and other characteristic features of manufacturing systems. It is argued that software is commonly difficult to keep up to date because many of the implications of these changing features for software details are obscure. A possible solution is to use a software generation system in which the transformation of system properties into system software is made explicit. The proposed generation system stores the system properties, such as machine properties, product properties and information on manufacturing techniques, in databases. As a result this information, on which system control is based, can also be made available to other programs. In particular, artificial intelligence programs such as fault diagnosis programs can benefit from using the same information as the control system, rather than a separate database which must be developed and maintained separately to ensure consistency. Experience in developing a simplified model of such a system is presented.

  20. Robust quantitative scratch assay

    PubMed Central

    Vargas, Andrea; Angeli, Marc; Pastrello, Chiara; McQuaid, Rosanne; Li, Han; Jurisicova, Andrea; Jurisica, Igor

    2016-01-01

    The wound healing assay (or scratch assay) is a technique frequently used to quantify the dependence of cell motility, a central process in tissue repair and the evolution of disease, on various treatment conditions. However, processing the resulting data is a laborious task due to its high throughput and variability across images. The Robust Quantitative Scratch Assay (RQSA) algorithm introduces statistical outputs in which migration rates are estimated, cellular behaviour is distinguished and outliers are identified among groups of unique experimental conditions. Furthermore, the RQSA decreased measurement errors and increased accuracy of the wound boundary at comparable processing times compared to a previously developed method (TScratch). Availability and implementation: The RQSA is freely available at: http://ophid.utoronto.ca/RQSA/RQSA_Scripts.zip. The image sets used for training and validation and the results are available at: (http://ophid.utoronto.ca/RQSA/trainingSet.zip, http://ophid.utoronto.ca/RQSA/validationSet.zip, http://ophid.utoronto.ca/RQSA/ValidationSetResults.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975Results.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip). Supplementary Material is provided for a detailed description of the development of the RQSA. Contact: juris@ai.utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26722119

  1. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  2. Identification, characterization and HPLC quantification of process-related impurities in Trelagliptin succinate bulk drug: Six identified as new compounds.

    PubMed

    Zhang, Hui; Sun, Lili; Zou, Liang; Hui, Wenkai; Liu, Lei; Zou, Qiaogen; Ouyang, Pingkai

    2016-09-01

    A sensitive, selective and stability-indicating reversed-phase LC method was developed for the determination of process-related impurities of Trelagliptin succinate in bulk drug. Six impurities were identified by LC-MS. Further, their structures were characterized and confirmed utilizing LC-MS/MS, IR and NMR spectral data. The most probable mechanisms for the formation of these impurities are also discussed. To the best of our knowledge, six structures among these impurities are new compounds and have not been reported previously. Superior separation was achieved on an InertSustain C18 (250 mm × 4.6 mm, 5 μm) column with a gradient mixture of acetonitrile and 20 mmol potassium dihydrogen phosphate with 0.25% triethylamine (pH adjusted to 3.5 with phosphoric acid). The method was validated as per regulatory guidelines to demonstrate system suitability, specificity, sensitivity, linearity, robustness, and stability. PMID:27209451

  3. Nuclear robustness of the r process in neutron-star mergers

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, Joel de Jesús; Wu, Meng-Ru; Langanke, Karlheinz; Martínez-Pinedo, Gabriel; Bauswein, Andreas; Janka, Hans-Thomas

    2015-11-01

    We have performed r-process calculations for matter ejected dynamically in neutron star mergers based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamic simulation with a total ejected mass of ~1.7 × 10^-3 M⊙. Our calculations consider an extended nuclear network, including spontaneous, β- and neutron-induced fission and adopting fission yield distributions from the abla code. In particular we have studied the sensitivity of the r-process abundances to nuclear masses by using different models. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion allowing for all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundance (the second and third peaks, the rare-earth peak, and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130. In all simulations, we find that the second peak around A ~ 130 is produced by the fission yields of the material that piles up in nuclei with A ≳ 250 due to the substantially longer β-decay half-lives found in this region. The third peak around A ~ 195 is generated in a competition between neutron captures and β decays during r-process freeze-out. The remaining trajectories, which contribute 10% by mass to the total integrated abundances, follow such a fast expansion that the r process does not use all the neutrons. This also leads to a larger variation of abundances among trajectories, as fission does not dominate the r-process dynamics. The resulting abundances are in between those associated to the r and s processes. The total integrated abundances are dominated by

  4. The Robustness of Proofreading to Crowding-Induced Pseudo-Processivity in the MAPK Pathway

    PubMed Central

    Ouldridge, Thomas E.; Rein ten Wolde, Pieter

    2014-01-01

    Double phosphorylation of protein kinases is a common feature of signaling cascades. This motif may reduce cross-talk between signaling pathways because the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be pseudo-processive in the crowded cellular environment, since rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e., when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signaling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif. PMID:25418311

  5. Evaluation of robust wave image processing methods for magnetic resonance elastography.

    PubMed

    Li, Bing Nan; Shan, Xiang; Xiang, Kui; An, Ning; Xu, Jinzhang; Huang, Weimin; Kobayashi, Etsuko

    2014-11-01

    Magnetic resonance elastography (MRE) is a promising modality for in vivo quantification and visualization of soft tissue elasticity. It involves three stages of processing: (1) external excitation, (2) wave imaging and (3) elasticity reconstruction. One of the important issues to be addressed in MRE is wave image processing and enhancement. In this study we approach it in three different ways: phase unwrapping, directional filtering and noise suppression. The relevant solutions are addressed briefly. Some of them were implemented and evaluated on both simulated and experimental MRE datasets. The results confirm that wave image enhancement is indispensable before carrying out MRE elasticity reconstruction. PMID:25222934

  6. Robust carrier formation process in low-band gap organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Yonezawa, Kouhei; Kamioka, Hayato; Yasuda, Takeshi; Han, Liyuan; Moritomo, Yutaka

    2013-10-01

    By means of femtosecond time-resolved spectroscopy, we investigated the carrier formation process against film morphology and temperature (T) in the highly efficient organic photovoltaic polymer poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b']dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  7. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  8. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liners are made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the process parameters of the tape winding process to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The presented work studied the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was the back-wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four different control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back-wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of the back-wall temperature on the performance of the throat back-up liners was verified by a ground firing test.
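    The smaller-the-better criterion used here is a fixed formula, SN = -10 log10(mean(y^2)), computed per run of the orthogonal array; the temperature readings below are invented for illustration.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Repeated back-wall temperature measurements for one L9 run (invented).
print(sn_smaller_the_better([182.0, 190.0, 176.0]))   # higher is better
```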

  9. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors was retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  10. Filling the gaps: A robust description of adhesive birth-death-movement processes

    NASA Astrophysics Data System (ADS)

    Johnston, Stuart T.; Baker, Ruth E.; Simpson, Matthew J.

    2016-04-01

    Existing continuum descriptions of discrete adhesive birth-death-movement processes provide accurate predictions of the average discrete behavior for limited parameter regimes. Here we present an alternative continuum description in terms of the dynamics of groups of contiguous occupied and vacant lattice sites. Our method provides more accurate predictions, is valid in parameter regimes that could not be described by previous continuum descriptions, and provides information about the spatial clustering of occupied sites. Furthermore, we present a simple analytic approximation of the spatial clustering of occupied sites at late time, when the system reaches its steady-state configuration.
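The class of discrete processes being approximated can be illustrated with a few lines of simulation. The sketch below implements an invented minimal 1D adhesive exclusion process (not the paper's 2D birth-death-movement model) and reports the mean size of contiguous occupied clusters, the kind of spatial-clustering quantity the group-based continuum description tracks:

```python
import numpy as np

# Toy 1D adhesive exclusion process: an agent attempts a move to an adjacent
# site, succeeds only if that site is vacant, and each occupied neighbour
# reduces the success probability by a factor (1 - q) (adhesion). Invented
# illustration only; parameters are arbitrary.
rng = np.random.default_rng(1)
N, attempts, q = 200, 20000, 0.5
lattice = rng.random(N) < 0.3                      # ~30% initial occupancy

for _ in range(attempts):
    i = int(rng.integers(N))
    if not lattice[i]:
        continue
    j = (i + rng.choice([-1, 1])) % N              # periodic boundaries
    bonds = int(lattice[(i - 1) % N]) + int(lattice[(i + 1) % N])
    if not lattice[j] and rng.random() < (1.0 - q) ** bonds:
        lattice[i], lattice[j] = False, True

# Spatial clustering: sizes of contiguous runs of occupied sites.
occ = lattice.astype(int)
edges = np.diff(np.r_[0, occ, 0])
sizes = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
print("mean occupied-cluster size:", sizes.mean() if sizes.size else 0.0)
```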

  11. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through cloud coverage (vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E, 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  12. PRIME--A Process for Remediating Identified Marginal Education Candidates Revisited

    ERIC Educational Resources Information Center

    Riley, Gena; Notar, Charles; Owens, Lynetta; Harper, Cynthia

    2011-01-01

The article traces the history of PRIME--A Process for Remediating Identified Marginal Education Candidates since 1996. The philosophy has not changed from its inception. The procedure identifies individuals who may be at risk of not completing their programs successfully or who possess those traits associated with rapid attrition in the teaching…

  13. Uncertainties and robustness of the ignition process in type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Iapichino, L.; Lesaffre, P.

    2010-03-01

Context. It is widely accepted that the onset of explosive carbon burning in the core of a carbon-oxygen white dwarf (CO WD) triggers the ignition of a type Ia supernova (SN Ia). The features of the ignition are among the few free parameters of SN Ia explosion theory. Aims: We explore the role of two different issues in the ignition process: first, the ignition is studied in WD models resulting from different accretion histories; second, we estimate how a different reaction rate for C-burning can affect the ignition. Methods: Two-dimensional hydrodynamical simulations of temperature perturbations in the WD core (“bubbles”) are performed with the FLASH code. In order to evaluate the impact of the C-burning reaction rate on the WD model, the evolution code FLASH_THE_TORTOISE from Lesaffre et al. (2006, MNRAS, 368, 187) is used. Results: In different WD models a key role is played by the different gravitational acceleration in the progenitor's core. As a consequence, ignition is disfavored at a large distance from the WD center in models with a larger central density, resulting from the evolution of initially more massive progenitors. Changes in the C reaction rate at T ≲ 5 × 10^8 K slightly influence the ignition density in the WD core, while the ignition temperature is almost unaffected. Recent measurements of new resonances in the C-burning reaction rate (Spillane et al. 2007, Phys. Rev. Lett., 98, 122501) do not affect the core conditions of the WD significantly. Conclusions: This simple analysis, performed on the features of the temperature perturbations in the WD core, should be extended in the framework of state-of-the-art numerical tools for studying turbulent convection and ignition in the WD core. Future measurements of the C-burning reaction cross section at low energy, though certainly useful, are not expected to affect our current understanding of the ignition process dramatically.

  14. Testing the robustness, to changes in process, of a scaling relationship between soil grading and geomorphology using a pedogenesis model

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Welivitiya, W. D. D. P.; Hancock, G. R.

    2014-12-01

Using the mARM1D pedogenesis model (which simulates armouring and weathering processes on a hillslope), previous work by Cohen found a strong log-log linear relationship between the size distribution of the soil (e.g. d50), the contributing area, and the local slope. However, Cohen performed his simulations using only one set of grading data, one climate, and one geology, and over a relatively limited range of area and slope combinations. A model based on mARM, called SSSPAM, which generalises the modelled processes, has been developed and calibrated against mARM. This calibration was used as the starting point for a parametric study of the robustness of the area-slope-d50 relationship to changes in environmental and weathering conditions, different initial soil gradings, different geology, and a broader range of area and slope combinations. This parametric study assessed the influence of changes in the model parameters on the soil evolution results. These simulations confirmed the robustness of the area-slope-d50 relationship discovered by Cohen using mARM. We also demonstrated that the area-slope-diameter relationship holds not only for d50 but for the entire grading range (e.g. d10, d90). The results strengthen our confidence in the generality of the log-log linear scaling relationship between area, slope and soil grading. The paper presents the results of our parametric study and highlights the potential uses of the relationship for digital soil mapping and better characterization of soils in environmental models.
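To make the form of this scaling concrete, the sketch below fits a relationship of the form d50 = c · A^α · S^β as an ordinary least-squares regression in log space; the area, slope, and grading values are synthetic stand-ins, not output from mARM or SSSPAM:

```python
import numpy as np

# Sketch of fitting the log-log linear scaling d50 = c * A^alpha * S^beta.
# A (contributing area), S (local slope), and d50 are synthetic here.
rng = np.random.default_rng(2)
A = 10 ** rng.uniform(2, 6, 100)                 # hypothetical areas
S = 10 ** rng.uniform(-3, -0.5, 100)             # hypothetical slopes
d50 = 5.0 * A ** -0.1 * S ** 0.3 * 10 ** rng.normal(0, 0.02, 100)

# Linear regression in log space: log d50 = log c + alpha log A + beta log S.
X = np.column_stack([np.ones(100), np.log10(A), np.log10(S)])
coef, *_ = np.linalg.lstsq(X, np.log10(d50), rcond=None)
print("log10(c), alpha, beta =", np.round(coef, 3))
# The same regression applies to any grading percentile (d10, d90), as noted above.
```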

  15. Robust Suppression of HIV Replication by Intracellularly Expressed Reverse Transcriptase Aptamers Is Independent of Ribozyme Processing

    PubMed Central

    Lange, Margaret J; Sharma, Tarun K; Whatley, Angela S; Landon, Linda A; Tempesta, Michael A; Johnson, Marc C; Burke, Donald H

    2012-01-01

    RNA aptamers that bind human immunodeficiency virus 1 (HIV-1) reverse transcriptase (RT) also inhibit viral replication, making them attractive as therapeutic candidates and potential tools for dissecting viral pathogenesis. However, it is not well understood how aptamer-expression context and cellular RNA pathways govern aptamer accumulation and net antiviral bioactivity. Using a previously-described expression cassette in which aptamers were flanked by two “minimal core” hammerhead ribozymes, we observed only weak suppression of pseudotyped HIV. To evaluate the importance of the minimal ribozymes, we replaced them with extended, tertiary-stabilized hammerhead ribozymes with enhanced self-cleavage activity, in addition to noncleaving ribozymes with active site mutations. Both the active and inactive versions of the extended hammerhead ribozymes increased inhibition of pseudotyped virus, indicating that processing is not necessary for bioactivity. Clonal stable cell lines expressing aptamers from these modified constructs strongly suppressed infectious virus, and were more effective than minimal ribozymes at high viral multiplicity of infection (MOI). Tertiary stabilization greatly increased aptamer accumulation in viral and subcellular compartments, again regardless of self-cleavage capability. We therefore propose that the increased accumulation is responsible for increased suppression, that the bioactive form of the aptamer is one of the uncleaved or partially cleaved transcripts, and that tertiary stabilization increases transcript stability by reducing exonuclease degradation. PMID:22948672

  16. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    DOEpatents

    Riza, Nabeel Agha; Perez, Frank

    2012-01-17

Silicon carbide (SiC) probe designs for extreme-temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme-temperature sensing. Measuring the collective spectral spread between wavelength peaks (or nulls) together with the wavelength peak/null shift forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity blackbody radiator, and monitoring the shift in the radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  17. A robust and representative lower bound on object processing speed in humans.

    PubMed

    Bieniek, Magdalena M; Bennett, Patrick J; Sekuler, Allison B; Rousselet, Guillaume A

    2016-07-01

    How early does the brain decode object categories? Addressing this question is critical to constrain the type of neuronal architecture supporting object categorization. In this context, much effort has been devoted to estimating face processing speed. With onsets estimated from 50 to 150 ms, the timing of the first face-sensitive responses in humans remains controversial. This controversy is due partially to the susceptibility of dynamic brain measurements to filtering distortions and analysis issues. Here, using distributions of single-trial event-related potentials (ERPs), causal filtering, statistical analyses at all electrodes and time points, and effective correction for multiple comparisons, we present evidence that the earliest categorical differences start around 90 ms following stimulus presentation. These results were obtained from a representative group of 120 participants, aged 18-81, who categorized images of faces and noise textures. The results were reliable across testing days, as determined by test-retest assessment in 74 of the participants. Furthermore, a control experiment showed similar ERP onsets for contrasts involving images of houses or white noise. Face onsets did not change with age, suggesting that face sensitivity occurs within 100 ms across the adult lifespan. Finally, the simplicity of the face-texture contrast, and the dominant midline distribution of the effects, suggest the face responses were evoked by relatively simple image properties and are not face specific. Our results provide a new lower benchmark for the earliest neuronal responses to complex objects in the human visual system. PMID:26469359

  18. Robust fetal QRS detection from noninvasive abdominal electrocardiogram based on channel selection and simultaneous multichannel processing.

    PubMed

    Ghaffari, Ali; Mollakazemi, Mohammad Javad; Atyabi, Seyyed Abbas; Niknazar, Mohammad

    2015-12-01

The purpose of this study is to provide a new method for detecting fetal QRS complexes from the noninvasive fetal electrocardiogram (fECG) signal. Unlike most current fECG processing methods, which are based on separating the fECG from the maternal ECG (mECG), in this study the fetal heart rate (FHR) is extracted with high accuracy without separating the fECG from the mECG. Furthermore, in this new approach thoracic channels are not necessary. These two aspects reduce the required computational operations; consequently, the proposed approach can be efficiently applied to different real-time healthcare and medical devices. In this work, a new method is presented for selecting the best channel, i.e., the one carrying the strongest fECG. Each channel is scored on two criteria: noise distribution and fetal heartbeat visibility. Another important aspect of this study is the simultaneous and combinatorial use of the available fECG channels according to the priority given by their scores. A combination of geometric features and wavelet-based techniques was adopted to extract the FHR. Based on fetal geometric features, fECG signals were divided into three categories, and different strategies were employed to analyze each category. The method was validated using three datasets: the Noninvasive Fetal ECG Database, DaISy, and the PhysioNet/Computing in Cardiology Challenge 2013. Finally, the obtained results were compared with other studies. The adopted strategies, such as multi-resolution analysis, not separating the fECG and mECG, intelligent channel scoring, and using the channels simultaneously, are the factors behind the promising performance of the method. PMID:26462679

  19. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
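The modeling ingredients named here (a history-dependent point-process intensity and a likelihood ratio test for changes in firing structure) can be sketched compactly. The code below is a simplified stand-in, not the authors' two-state model or state-space filter: it fits a Poisson GLM with spike-history covariates to two simulated epochs and tests whether one parameter set suffices for both:

```python
import numpy as np
from scipy import stats

# Simplified history-dependent point-process comparison on simulated spikes.
rng = np.random.default_rng(3)

def fit_glm(y, H, iters=300, lr=0.1):
    """Poisson GLM, log lambda = H @ w; crude gradient ascent on log-likelihood."""
    w = np.zeros(H.shape[1])
    for _ in range(iters):
        lam = np.exp(H @ w)
        w += lr * H.T @ (y - lam) / len(y)      # gradient of Poisson log-likelihood
    lam = np.exp(H @ w)
    return w, np.sum(y * np.log(lam + 1e-12) - lam)

def design(y, lags=5):
    """Intercept plus the previous `lags` bins of spiking history."""
    return np.column_stack([np.ones(len(y))] +
                           [np.r_[np.zeros(k), y[:-k]] for k in range(1, lags + 1)])

y1 = (rng.random(2000) < 0.05).astype(float)    # epoch 1: hypothetical spikes
y2 = (rng.random(2000) < 0.08).astype(float)    # epoch 2: modulated firing
H1, H2 = design(y1), design(y2)

_, ll1 = fit_glm(y1, H1)
_, ll2 = fit_glm(y2, H2)
_, ll0 = fit_glm(np.r_[y1, y2], np.vstack([H1, H2]))   # pooled (null) model

lr_stat = 2 * (ll1 + ll2 - ll0)                 # df = 6 extra parameters
print("LR statistic:", round(lr_stat, 2), " p =", stats.chi2.sf(lr_stat, df=6))
```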

  20. Discriminating sediment archives and sedimentary processes in the arid endorheic Ejina Basin, NW China using a robust geochemical approach

    NASA Astrophysics Data System (ADS)

    Yu, Kaifeng; Hartmann, Kai; Nottebaum, Veit; Stauch, Georg; Lu, Huayu; Zeeden, Christian; Yi, Shuangwen; Wünnemann, Bernd; Lehmkuhl, Frank

    2016-04-01

Geochemical characteristics have been used intensively to relate sediment properties to paleoclimate and provenance. Nonetheless, particularly in arid contexts, the bulk geochemistry of different sediment archives and the corresponding process interpretations remain elusive. The Ejina Basin, with its suite of different sediment archives, is known as one of the main sources for the loess accumulation on the Chinese Loess Plateau. In order to understand mechanisms along this supra-regional sediment cascade, it is crucial to decipher the archive characteristics and formation processes. To address these issues, five profiles in different geomorphological contexts were selected. Analyses of X-ray fluorescence and diffraction, grain size, optically stimulated luminescence and radiocarbon dating were performed. Robust factor analysis was applied to reduce the attribute space to the process space of sedimentation history. Five sediment archives from three lithologic units exhibit geochemical characteristics as follows: (i) aeolian sands have high contents of Zr and Hf, whereas only Hf can be regarded as a valuable indicator to discriminate the coarse sand proportion; (ii) sandy loess has high Ca and Sr contents, which both exhibit broad correlations with the medium to coarse silt proportions; (iii) lacustrine clays have high contents of felsic, ferromagnesian and mica source elements, e.g., K, Fe, Ti, V, and Ni; (iv) fluvial sands have high contents of Mg, Cl and Na, which may be enriched in evaporite minerals; (v) alluvial gravels have high contents of Cr, which may originate from nearby Cr-rich bedrock. Temporal variations can be illustrated by four robust factors: weathering intensity, silicate-bearing mineral abundance, saline/alkaline magnitude and quasi-constant aeolian input. In summary, the bulk composition of the late Quaternary sediments in this arid context is governed by the nature of the source terrain, weak chemical weathering, and authigenic minerals.

  1. A robust post-processing method to determine skin friction in turbulent boundary layers from the velocity profile

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2015-04-01

The present paper describes a method to extrapolate the mean wall shear stress and the accurate relative position of a velocity probe with respect to the wall from an experimentally measured mean velocity profile in a turbulent boundary layer. Validation is made between experimental and direct numerical simulation data of turbulent boundary layer flows with independent measurement of the shear stress. The set of parameters which minimizes the residual error with respect to the canonical description of the boundary layer profile is taken as the solution. Several methods are compared, testing different descriptions of the canonical mean velocity profile (with and without an overshoot over the logarithmic law) and different definitions of the residual function of the optimization. The von Kármán constant is used as a parameter of the fitting process in order to avoid any hypothesis regarding its value, which may be affected by different initial or boundary conditions of the flow. Results show that the best method provides high accuracy for the estimation of the friction velocity and for the position of the wall. The robustness of the method is tested including unconverged near-wall measurements, pressure gradient, and a reduced number of points; the importance of the location of the first point is also tested, and it is shown that the method remains highly robust even in strongly distorted flows, keeping the aforementioned accuracies provided at least one data point is acquired sufficiently close to the wall. The wake component and the thickness of the boundary layer are also simultaneously extrapolated from the mean velocity profile. This results in the first study, to the knowledge of the authors, where a five-parameter fit is carried out without any assumption on the von Kármán constant or on the limits of the logarithmic layer beyond its mere existence.
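As an illustration of this kind of profile fit, the sketch below jointly recovers the friction velocity, a wall-position offset, and the von Kármán constant by least squares against the log law. The viscosity, probe positions, and log-law constant B are hypothetical stand-ins, and the paper's full five-parameter method also fits the wake component and boundary-layer thickness:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit u_tau, wall offset dy, and kappa so the measured profile matches the
# log law U/u_tau = (1/kappa) ln((y + dy) u_tau / nu) + B. Synthetic data.
nu = 1.5e-5                                    # kinematic viscosity (air), m^2/s
y = np.linspace(1e-3, 3e-2, 25)                # probe positions, m (hypothetical)
u_tau_true, kappa_true, B = 0.45, 0.39, 4.5
U = u_tau_true * (np.log(y * u_tau_true / nu) / kappa_true + B)

def residual(p):
    u_tau, dy, kappa = p
    yplus = (y + dy) * u_tau / nu
    return U / u_tau - (np.log(yplus) / kappa + B)

fit = least_squares(residual, x0=[0.4, 0.0, 0.41],
                    bounds=([0.1, -1e-3, 0.3], [1.0, 1e-3, 0.5]))
print("u_tau, dy, kappa =", np.round(fit.x, 4))   # recovers the true values
```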

  2. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    PubMed Central

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A−1 under 70% strain, is the largest to date, and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  3. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    NASA Astrophysics Data System (ADS)

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-01

Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A-1 under 70% strain, is the largest to date, and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  4. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process.

    PubMed

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency, 70 cd A(-1) under 70% strain, is the largest to date, and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  5. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost-effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  6. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  7. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  8. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  9. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  10. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false What happens when the review process identifies areas for... WATER INDIAN RESERVATION ROADS PROGRAM Planning, Design, and Construction of Indian Reservation Roads Program Facilities Program Reviews and Management Systems § 170.501 What happens when the review...

  11. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false What processes are to be implemented to identify covered persons? 1010.300 Section 1010.300 Employees' Benefits OFFICE OF THE ASSISTANT SECRETARY FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority...

  12. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging.

    PubMed

    Schiller, Bastian; Gianotti, Lorena R R; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-03-01

    Why do people take longer to associate the word "love" with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition. PMID:26903643

  13. On the estimation of robustness and filtering ability of dynamic biochemical networks under process delays, internal parametric perturbations and external disturbances.

    PubMed

    Chen, Bor-Sen; Chen, Po-Wei

    2009-12-01

Inherently, biochemical regulatory networks suffer from process delays, internal parametric perturbations and external disturbances. Robustness is the ability to maintain the functions of intracellular biochemical regulatory networks despite these perturbations. In this study, system and signal processing theories are employed to measure the robust stability and filtering ability of linear and nonlinear time-delay biochemical regulatory networks. First, based on Lyapunov stability theory, the robust stability of a biochemical network is measured by its tolerance of additional process delays and additive internal parameter fluctuations. Then the filtering ability, i.e., the attenuation of additive external disturbances, is estimated for time-delay biochemical regulatory networks. In order to overcome the difficulty of solving the Hamilton-Jacobi inequality (HJI), the global linearization technique is employed to simplify the measurement procedure to a simple linear matrix inequality (LMI) problem. Finally, an in silico example is given to illustrate how to measure the robust stability and filtering ability of a nonlinear time-delay perturbative biochemical network. This robust stability and filtering ability measurement for biochemical networks has potential applications in synthetic biology, gene therapy and drug design. PMID:19788895
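The LMI reformulation admits a compact numerical illustration. The sketch below certifies stability of a hypothetical two-gene linearized network by searching for a Lyapunov matrix P with cvxpy; the matrix A and the tolerance eps are invented for the example, and the paper's actual formulation (with delays and perturbation bounds) is considerably richer:

```python
import cvxpy as cp
import numpy as np

# Lyapunov LMI sketch: after global linearization, stability of x' = A x can
# be certified by finding P > 0 with A^T P + P A < 0. Hypothetical network.
A = np.array([[-1.0, 0.4],
              [0.3, -0.8]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]     # Lyapunov decrease
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible (network certified stable):", prob.status == cp.OPTIMAL)
```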

  14. Pilot-scale investigation of the robustness and efficiency of a copper-based treated wood wastes recycling process.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Gastonguay, Louis; Morris, Paul; Janin, Amélie; Reynier, Nicolas

    2013-10-15

The disposal of metal-bearing treated wood wastes is becoming an environmental challenge. An efficient recycling process based on sulfuric acid leaching has been developed to remove metals from copper-based treated wood chips. The objective of this study was to explore the robustness of this technology in removing metals from copper-based treated wood wastes at a pilot plant scale (130-L reactor tank). After 3 × 2 h leaching steps followed by 3 × 7 min rinsing steps, up to 97.5% of As, 87.9% of Cr, and 96.1% of Cu were removed from CCA-treated wood wastes with different initial metal loadings (>7.3 kg m(-3)), and more than 94.5% of Cu was removed from ACQ-, CA- and MCQ-treated wood. The treatment of effluents by precipitation-coagulation was highly efficient, removing more than 93% of the As, Cr, and Cu contained in the effluent. The economic analysis included operating costs, indirect costs and revenues related to remediated wood sales. It concluded that CCA-treated wood waste remediation can lead to a benefit of 53.7 US$ t(-1) or a cost of 35.5 US$ t(-1), and that ACQ-, CA- and MCQ-treated wood waste recycling led to benefits ranging from 9.3 to 21.2 US$ t(-1). PMID:23954815

  15. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  16. RNA-ID, a highly sensitive and robust method to identify cis-regulatory sequences using superfolder GFP and a fluorescence-based assay

    PubMed Central

    Dean, Kimberly M.; Grayhack, Elizabeth J.

    2012-01-01

We have developed a robust and sensitive method, called RNA-ID, to screen for cis-regulatory sequences in RNA using fluorescence-activated cell sorting (FACS) of yeast cells bearing a reporter in which expression of both superfolder green fluorescent protein (GFP) and yeast codon-optimized mCherry red fluorescent protein (RFP) is driven by the bidirectional GAL1,10 promoter. This method recapitulates previously reported progressive inhibition of translation mediated by increasing numbers of CGA codon pairs, and restoration of expression by introduction of a tRNA with an anticodon that base pairs exactly with the CGA codon. This method also reproduces effects of paromomycin and context on stop codon read-through. Five key features of this method contribute to its effectiveness as a selection for regulatory sequences: the system exhibits a greater than 250-fold dynamic range, a quantitative and dose-dependent response to known inhibitory sequences, exquisite resolution that allows nearly complete physical separation of distinct populations, and a reproducible signal between different cells transformed with the identical reporter, all of which are coupled with simple ligation-independent cloning methods to create large libraries. Moreover, we provide evidence that there are sequences within a 9-nt library that cause reduced GFP fluorescence, suggesting that there are novel cis-regulatory sequences to be found even in this short sequence space. This method is widely applicable to the study of both RNA-mediated and codon-mediated effects on expression. PMID:23097427

  17. idTarget: a web server for identifying protein targets of small chemical molecules with robust scoring functions and a divide-and-conquer docking approach

    PubMed Central

    Wang, Jui-Chih; Chu, Pei-Ying; Chen, Chung-Ming; Lin, Jung-Hsin

    2012-01-01

Identification of possible protein targets of small chemical molecules is an important step for unravelling their underlying causes of action at the molecular level. To this end, we construct a web server, idTarget, which can predict possible binding targets of a small chemical molecule via a divide-and-conquer docking approach, in combination with our recently developed scoring functions based on robust regression analysis and quantum chemical charge models. Affinity profiles of the protein targets are used to provide the confidence levels of prediction. The divide-and-conquer docking approach uses adaptively constructed small overlapping grids to constrain the search space, thereby achieving better docking efficiency. Unlike previous approaches that screen against a specific class of targets or a limited number of targets, idTarget screens against nearly all protein structures deposited in the Protein Data Bank (PDB). We show that idTarget is able to reproduce known off-targets of drugs or drug-like compounds, and the suggested new targets could be prioritized for further investigation. idTarget is freely available as a web-based server at http://idtarget.rcas.sinica.edu.tw. PMID:22649057

18. A hybrid approach identifies metabolic signatures of high-producers for Chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic signatures characteristic of high-producer clones.
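As a toy illustration of the multivariate step, the sketch below projects synthetic clone-by-feature metabolic profiles onto principal components and checks whether the leading component tracks a hypothetical product titer; the data, dimensions, and resulting correlation are all invented:

```python
import numpy as np
from sklearn.decomposition import PCA

# Multivariate-analysis sketch: look for a principal component of the
# metabolic profiles that separates high- from low-producing clones.
rng = np.random.default_rng(7)
titer = rng.uniform(0.5, 3.0, 11)              # hypothetical product titers
X = rng.normal(size=(11, 20))                  # 11 clones x 20 metabolic features
X[:, 0] += 1.5 * titer                         # one feature made to track titer
X = (X - X.mean(0)) / X.std(0)                 # standardize features

scores = PCA(n_components=2).fit_transform(X)
corr = np.corrcoef(scores[:, 0], titer)[0, 1]
print("PC1 vs titer correlation:", round(float(corr), 2))
```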

  19. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier-phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single reference station should be no more than 20 km, and when using a network of reference stations the distance to the nearest station should be no more than about 70 km. This need to set up local reference stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning (PPP) method. In this case, instead of differencing the rover observables with the reference station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-aided inertial software for mobile mapping. Real-world results from over 100 airborne flights, evaluated against a DGNSS network reference, are presented, showing that the post-processed Centerpoint RTX solution agrees with the reference at the centimeter level.

  20. A Novel Mini-DNA Barcoding Assay to Identify Processed Fins from Internationally Protected Shark Species

    PubMed Central

    Fields, Andrew T.; Abercrombie, Debra L.; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D.

    2015-01-01

There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran), in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region, the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA (“processed fins”). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789

  1. A novel mini-DNA barcoding assay to identify processed fins from internationally protected shark species.

    PubMed

    Fields, Andrew T; Abercrombie, Debra L; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D

    2015-01-01

There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran), in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region, the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA ("processed fins"). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789

  2. Determination of all feasible robust PID controllers for open-loop unstable plus time delay processes with gain margin and phase margin specifications.

    PubMed

    Wang, Yuan-Jay

    2014-03-01

This paper proposes a novel alternative method to graphically compute all feasible gain and phase margin specifications-oriented robust PID controllers for open-loop unstable plus time delay (OLUPTD) processes. This method is applicable to general OLUPTD processes without constraint on system order. To retain robustness for OLUPTD processes subject to positive or negative gain variations, the downward gain margin (GM(down)), upward gain margin (GM(up)), and phase margin (PM) are considered. A virtual gain-phase margin tester compensator is incorporated to guarantee that the concerned system satisfies certain robust safety margins. In addition, the stability equation method and the parameter plane method are exploited to portray the stability boundary and the constant gain margin (GM) boundary as well as the constant PM boundary. The overlapping region of these boundaries is graphically determined and denotes the GM and PM specifications-oriented region (GPMSOR). The GPMSOR characterizes all feasible robust PID controllers which achieve the pre-specified safety margins. In particular, to achieve optimal gain tuning, the controller gains are searched within the GPMSOR to minimize the integral of the absolute error (IAE) or the integral of the squared error (ISE) performance criterion. Thus, an optimal PID controller gain set is successfully found within the GPMSOR and guarantees that the OLUPTD process achieves the pre-specified GM and PM as well as a minimum IAE or ISE. Consequently, both robustness and performance can be simultaneously assured. Further, the design procedures are summarized as an algorithm to help rapidly locate the GPMSOR and search for an optimal PID gain set. Finally, three highly cited examples are provided to illustrate the design process and to demonstrate the effectiveness of the proposed method. PMID:24462232
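Frequency-domain margins of a delayed open-loop transfer function can be checked numerically without a special toolbox. The sketch below evaluates gain and phase margins for a hypothetical first-order unstable plant with delay under an invented PID gain set; the crossover search is a simple heuristic, and, as the final comment notes, margins alone do not certify stability for unstable plants:

```python
import numpy as np

# Margin check for a hypothetical OLUPTD loop:
#   G(s) = e^{-0.2 s} / (s - 0.5),  C(s) = Kp + Ki/s + Kd s  (invented gains).
Kp, Ki, Kd, L_delay = 2.0, 0.5, 0.3, 0.2

w = np.logspace(-2, 2, 20000)
s = 1j * w
Lw = (Kp + Ki / s + Kd * s) * np.exp(-L_delay * s) / (s - 0.5)   # loop transfer

mag, phase = np.abs(Lw), np.unwrap(np.angle(Lw))

# Phase margin: 180 deg + phase at the gain-crossover frequency (|L| = 1).
ic = np.argmin(np.abs(mag - 1.0))
pm_deg = 180.0 + np.degrees(phase[ic])

# Gain margin: 1/|L| at the phase-crossover frequency (angle(L) = -180 deg).
ip = np.argmin(np.abs(phase + np.pi))
gm = 1.0 / mag[ip]

print(f"phase margin ~ {pm_deg:.1f} deg, gain margin ~ {gm:.2f}")
# For unstable plants the Nyquist encirclement criterion must also be checked;
# positive margins by themselves do not guarantee closed-loop stability.
```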

  3. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.

  4. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.

  5. A stable isotope approach and its application for identifying nitrate source and transformation process in water.

    PubMed

    Xu, Shiguo; Kang, Pingping; Sun, Ya

    2016-01-01

Nitrate contamination of water is a worldwide environmental problem. Recent studies have demonstrated that the nitrogen (N) and oxygen (O) isotopes of nitrate (NO3(-)) can be used to trace nitrogen dynamics, including identifying nitrate sources and nitrogen transformation processes. This paper analyzes the current state of identifying nitrate sources and nitrogen transformation processes using the N and O isotopes of nitrate. With regard to nitrate sources, δ(15)N-NO3(-) and δ(18)O-NO3(-) values typically vary between sources, allowing the sources to be isotopically fingerprinted. δ(15)N-NO3(-) is often effective at tracing NO3(-) sources from areas with different land use. δ(18)O-NO3(-) is more useful for identifying NO3(-) from atmospheric sources. Isotopic data can be combined with statistical mixing models to quantify the relative contributions of NO3(-) from multiple delineated sources. With regard to N transformation processes, the N and O isotopes of nitrate can be used to decipher the degree of nitrogen transformation by processes such as nitrification, assimilation, and denitrification. In some cases, however, isotopic fractionation may alter the isotopic fingerprint associated with the delineated NO3(-) source(s). This problem may be addressed by combining the N and O isotopic data with other types of data, including the concentrations of selected conservative elements such as chloride (Cl(-)), the boron isotope ratio (δ(11)B), and the sulfur isotope ratio (δ(34)S). Future studies should focus on improving stable isotope mixing models and furthering our understanding of isotopic fractionation by conducting laboratory and field experiments in different environments. PMID:26541149
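For the two-source case, the standard mass-balance mixing model (shown here as a generic illustration rather than the paper's specific formulation) determines the fractional contribution f of source A directly from the measured mixture value:

```latex
\delta^{15}\mathrm{N}_{\mathrm{mix}} = f\,\delta^{15}\mathrm{N}_{A} + (1-f)\,\delta^{15}\mathrm{N}_{B}
\quad\Longrightarrow\quad
f = \frac{\delta^{15}\mathrm{N}_{\mathrm{mix}} - \delta^{15}\mathrm{N}_{B}}{\delta^{15}\mathrm{N}_{A} - \delta^{15}\mathrm{N}_{B}}
```

Denitrification, by contrast, is commonly diagnosed with a Rayleigh fractionation model, δ = δ₀ + ε ln(C/C₀), in which the residual nitrate becomes progressively enriched as the concentration C falls below its initial value C₀.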

  6. A computation using mutually exclusive processing is sufficient to identify specific Hedgehog signaling components

    PubMed Central

    Spratt, Spencer J.

    2013-01-01

    A system of more than one part can be deciphered by observing differences between the parts. A simple way to do this is by recording something absolute displaying a trait in one part and not in another: in other words, mutually exclusive computation. Conditional directed expression in vivo offers processing in more than one part of the system giving increased computation power for biological systems analysis. Here, I report the consideration of these aspects in the development of an in vivo screening assay that appears sufficient to identify components specific to a system. PMID:24391661

  7. A Low Processing Cost Adaptive Algorithm Identifying Nonlinear Unknown System with Piecewise Linear Curve

    NASA Astrophysics Data System (ADS)

    Fujii, Kensaku; Aoki, Ryo; Muneyasu, Mitsuji

This paper proposes an adaptive algorithm for identifying unknown systems containing nonlinear amplitude characteristics. Usually, the nonlinearity is small enough to be negligible. However, in low-cost systems, such as acoustic echo cancellers using small loudspeakers, the nonlinearity deteriorates the performance of the identification. Several methods for preventing this deterioration, based on polynomial or Volterra series approximations, have hence been proposed and studied; however, these conventional methods require high processing cost. In this paper, we propose a method that approximates the nonlinear characteristics with a piecewise linear curve and show, using computer simulations, that the performance can be greatly improved. The proposed method also reduces the processing cost to only about twice that of a linear adaptive filter system.
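To make the piecewise linear idea concrete, here is a minimal sketch, not the paper's algorithm, that adapts the breakpoint values of a piecewise linear curve with an LMS-style update so that it matches an unknown saturating nonlinearity; the knot grid, step size, and tanh target are all invented:

```python
import numpy as np

# Adaptive piecewise linear (PWL) approximation of an unknown memoryless
# nonlinearity. The paper embeds such a curve in an echo-canceller structure
# together with a linear adaptive filter; this sketch adapts the curve alone.
rng = np.random.default_rng(4)

knots = np.linspace(-1.0, 1.0, 9)          # breakpoints of the PWL curve
values = knots.copy()                      # adaptive curve starts as the identity
mu = 0.2                                   # LMS step size

def basis(x):
    """Hat-function (linear interpolation) weights of x over the knots."""
    w = np.maximum(0.0, 1.0 - np.abs(x - knots) / (knots[1] - knots[0]))
    return w / w.sum()

for _ in range(20000):
    x = rng.uniform(-1, 1)
    d = np.tanh(2.0 * x) / np.tanh(2.0)    # unknown saturating system output
    phi = basis(x)
    e = d - phi @ values                   # PWL output is phi @ values
    values += mu * e * phi                 # LMS update of the curve values

test = np.array([-0.9, -0.5, 0.0, 0.5, 0.9])
print(np.round(np.interp(test, knots, values), 3))   # adapted curve
print(np.round(np.tanh(2 * test) / np.tanh(2), 3))   # target nonlinearity
```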

  8. Identifying potential misfit items in cognitive process of learning engineering mathematics based on Rasch model

    NASA Astrophysics Data System (ADS)

    Ataei, Sh; Mahmud, Z.; Khalid, M. N.

    2014-04-01

The students' learning outcomes clarify what students should know and be able to demonstrate after completing their course. So, one of the issues in the process of teaching and learning is how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model in measuring the cognitive process of engineering students' learning of mathematics. The study provides insights into the cognitive ability of 54 engineering students learning Calculus III, based on Bloom's Taxonomy, across 31 items. The results indicate that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields fit statistics that identify departures of the data from the theoretical Rasch model. The study identified some potential misfit items based on ZSTD measures, and misfit items were removed when the outfit MNSQ was above 1.3 or below 0.7. Therefore, it is recommended that these items be reviewed or revised to better match the range of students' ability in the respective course.
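The flagging rule lends itself to a short sketch. Assuming the standard dichotomous Rasch model, the code below simulates hypothetical abilities and difficulties, computes per-item outfit mean-square statistics, and flags items outside the 0.7-1.3 band quoted above:

```python
import numpy as np

# Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))).
# Abilities and difficulties below are hypothetical, not the study's estimates.
rng = np.random.default_rng(5)
theta = rng.normal(0, 1, 54)          # 54 students' abilities (logits)
b = rng.normal(0, 1, 31)              # 31 item difficulties (logits)

P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random(P.shape) < P).astype(float)        # simulated 0/1 responses

Z2 = (X - P) ** 2 / (P * (1 - P))     # squared standardized residuals
outfit = Z2.mean(axis=0)              # outfit MNSQ per item (persons averaged)

flagged = np.flatnonzero((outfit > 1.3) | (outfit < 0.7))
print("potential misfit items:", flagged)
```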

  9. Approaches to robustness

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Heaney, Kevin D.

    2003-04-01

The term robustness in signal processing applications usually refers to approaches that are not degraded significantly when the assumptions that were invoked in defining the processing algorithm are no longer valid. Highly tuned algorithms that fall apart in real-world conditions are useless. The classic example is super-directive arrays of closely spaced elements: the very narrow beams and high directivity that could be predicted under ideal conditions could not be achieved under realistic conditions of amplitude, phase and position errors. A robust design tries to take the real environment into account as part of the optimization problem. This problem led to the introduction of the white noise gain constraint and diagonal loading in adaptive beamforming. Multiple linear constraints have been introduced in pursuit of robustness. Sonar systems such as towed arrays operate in less than ideal conditions, making robustness a concern. A special problem in sonar systems is failed array elements, which leads to severe degradation in beam patterns and bearing response patterns. Another robustness issue arises in matched field processing, which uses an acoustic propagation model in the beamforming; knowledge of the environmental parameters is usually limited. This paper reviews the various approaches to achieving robustness in sonar systems.
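Diagonal loading is simple enough to demonstrate end to end. The sketch below builds a sample covariance from synthetic snapshots of a ten-element line array and shows how increasing the loading level delta raises the white noise gain of an MVDR beamformer toward its conventional-beamformer ceiling; the array geometry, interferer, and loading values are illustrative, not drawn from the abstract:

```python
import numpy as np

# Diagonal loading for robust MVDR beamforming:
#   w = (R + delta I)^{-1} a / (a^H (R + delta I)^{-1} a).
rng = np.random.default_rng(6)
M, snapshots = 10, 200
d = np.arange(M)

def steer(theta):                      # half-wavelength spaced line array
    return np.exp(1j * np.pi * d * np.sin(theta))

# Snapshots: one strong interferer at 30 degrees plus white noise.
s_int = steer(np.radians(30))
X = (np.sqrt(10) * s_int[:, None]
     * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
     + rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
R = X @ X.conj().T / snapshots         # sample covariance

a = steer(np.radians(0))               # look direction (broadside)
for delta in [0.0, 1.0, 10.0]:         # increasing diagonal loading
    Rl = R + delta * np.eye(M)
    w = np.linalg.solve(Rl, a)
    w /= (a.conj() @ w)                # distortionless response: w^H a = 1
    wng = 1.0 / np.real(w.conj() @ w)  # white noise gain (maximum = M)
    print(f"delta={delta:4.1f}  white-noise gain={wng:5.2f}")
```

Heavier loading trades adaptive interference rejection for tolerance to amplitude, phase and position errors, which is exactly the robustness compromise the abstract describes.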

  10. Transposon mutagenesis identifies genes and cellular processes driving epithelial-mesenchymal transition in hepatocellular carcinoma.

    PubMed

    Kodama, Takahiro; Newberg, Justin Y; Kodama, Michiko; Rangel, Roberto; Yoshihara, Kosuke; Tien, Jean C; Parsons, Pamela H; Wu, Hao; Finegold, Milton J; Copeland, Neal G; Jenkins, Nancy A

    2016-06-14

    Epithelial-mesenchymal transition (EMT) is thought to contribute to metastasis and chemoresistance in patients with hepatocellular carcinoma (HCC), leading to their poor prognosis. The genes driving EMT in HCC are not yet fully understood, however. Here, we show that mobilization of Sleeping Beauty (SB) transposons in immortalized mouse hepatoblasts induces mesenchymal liver tumors on transplantation to nude mice. These tumors show significant down-regulation of epithelial markers, along with up-regulation of mesenchymal markers and EMT-related transcription factors (EMT-TFs). Sequencing of transposon insertion sites from tumors identified 233 candidate cancer genes (CCGs) that were enriched for genes and cellular processes driving EMT. Subsequent trunk driver analysis identified 23 CCGs that are predicted to function early in tumorigenesis and whose mutation or alteration in patients with HCC is correlated with poor patient survival. Validation of the top trunk drivers identified in the screen, including MET (MET proto-oncogene, receptor tyrosine kinase), GRB2-associated binding protein 1 (GAB1), HECT, UBA, and WWE domain containing 1 (HUWE1), lysine-specific demethylase 6A (KDM6A), and protein-tyrosine phosphatase, nonreceptor-type 12 (PTPN12), showed that deregulation of these genes activates an EMT program in human HCC cells that enhances tumor cell migration. Finally, deregulation of these genes in human HCC was found to confer sorafenib resistance through apoptotic tolerance and reduced proliferation, consistent with recent studies showing that EMT contributes to the chemoresistance of tumor cells. Our unique cell-based transposon mutagenesis screen appears to be an excellent resource for discovering genes involved in EMT in human HCC and potentially for identifying new drug targets. PMID:27247392

  11. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks.

    PubMed

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  12. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks

    PubMed Central

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  13. Scan-pattern and signal processing for microvasculature visualization with complex SD-OCT: tissue-motion artifacts robustness and decorrelation time - blood vessel characteristics

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Zaitsev, Vladimir Y.; Gelikonov, Grigory V.; Matveyev, Alexandr L.; Moiseev, Alexander A.; Ksenofontov, Sergey Y.; Gelikonov, Valentin M.; Demidov, Valentin; Vitkin, Alex

    2015-03-01

    We propose a modification of the OCT scanning pattern and corresponding signal processing for 3D visualization of blood microcirculation from complex-signal B-scans. We describe scanning-pattern modifications that increase the method's robustness to bulk tissue motion artifacts at speeds up to several cm/s. With these modifications, OCT-based angiography becomes more practical under realistic measurement conditions. For these scan patterns, we apply novel signal processing that separates blood vessels with different decorrelation times by varying the effective temporal diversity of the processed signals.
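
    A minimal sketch of the decorrelation principle the method relies on (synthetic one-dimensional complex signals, not the authors' scan pattern or processing): pixels whose complex speckle decorrelates quickly across repeated acquisitions are flagged as flow.

```python
import numpy as np

# First-lag complex correlation across repeated acquisitions: static
# tissue stays near 0 decorrelation, flowing blood approaches 1.
# Signal model, noise level and region boundaries are assumptions.

rng = np.random.default_rng(9)
n_rep, n_pix = 8, 1000
static = rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix)
frames = np.tile(static, (n_rep, 1)) + 0.05 * (
    rng.normal(size=(n_rep, n_pix)) + 1j * rng.normal(size=(n_rep, n_pix)))
# a "vessel" whose speckle fully decorrelates between repeats
frames[:, 400:500] = (rng.normal(size=(n_rep, 100))
                      + 1j * rng.normal(size=(n_rep, 100)))

num = np.abs((frames[:-1] * frames[1:].conj()).sum(axis=0))
den = np.sqrt((np.abs(frames[:-1]) ** 2).sum(0)
              * (np.abs(frames[1:]) ** 2).sum(0))
decorr = 1.0 - num / den        # 0 = static tissue, -> 1 = flow

print("mean decorrelation, static tissue: %.3f" % decorr[:400].mean())
print("mean decorrelation, vessel region: %.3f" % decorr[400:500].mean())
```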

  14. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. Note that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
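
    The paper's convex rank-minimization formulation is involved; as a simpler stand-in, the sketch below contrasts ordinary least squares with a Huber M-estimator on data corrupted by gross outliers, illustrating the failure mode the authors address. The data and outlier fraction are made up.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

# Robust M-estimation vs. ordinary least squares under gross outliers
# (illustrative; not the paper's rank-minimization method).

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.3, 200)
y[:15] += 25.0                     # gross outliers (e.g., occlusion noise)

ols = LinearRegression().fit(X, y)
rob = HuberRegressor().fit(X, y)

print("OLS slope:   %.2f (true 2.0)" % ols.coef_[0])
print("Huber slope: %.2f (true 2.0)" % rob.coef_[0])
```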

  15. Pharmaceutical screen identifies novel target processes for activation of autophagy with a broad translational potential.

    PubMed

    Chauhan, Santosh; Ahmed, Zahra; Bradfute, Steven B; Arko-Mensah, John; Mandell, Michael A; Won Choi, Seong; Kimura, Tomonori; Blanchet, Fabien; Waller, Anna; Mudd, Michal H; Jiang, Shanya; Sklar, Larry; Timmins, Graham S; Maphis, Nicole; Bhaskar, Kiran; Piguet, Vincent; Deretic, Vojo

    2015-01-01

    Autophagy is a conserved homeostatic process active in all human cells and affecting a spectrum of diseases. Here we use a pharmaceutical screen to discover new mechanisms for activation of autophagy. We identify a subset of pharmaceuticals inducing autophagic flux with effects in diverse cellular systems modelling specific stages of several human diseases such as HIV transmission and hyperphosphorylated tau accumulation in Alzheimer's disease. One drug, flubendazole, is a potent inducer of autophagy initiation and flux by affecting acetylated and dynamic microtubules in a reciprocal way. Disruption of dynamic microtubules by flubendazole results in mTOR deactivation and dissociation from lysosomes leading to TFEB (transcription factor EB) nuclear translocation and activation of autophagy. By inducing microtubule acetylation, flubendazole activates JNK1 leading to Bcl-2 phosphorylation, causing release of Beclin1 from Bcl-2-Beclin1 complexes for autophagy induction, thus uncovering a new approach to inducing autophagic flux that may be applicable in disease treatment. PMID:26503418

  16. Systems and processes for identifying features and determining feature associations in groups of documents

    DOEpatents

    Rose, Stuart J.; Cowley, Wendy E.; Crow, Vernon L.

    2016-01-12

    Systems and computer-implemented processes for identification of features and determination of feature associations in a group of documents can involve providing a plurality of keywords identified among the terms of at least some of the documents. A value measure can be calculated for each keyword. High-value keywords are defined as those keywords having value measures that exceed a threshold. For each high-value keyword, term-document associations (TDA) are accessed. The TDA characterize measures of association between each term and at least some documents in the group. A processor quantifies similarities between unique pairs of high-value keywords based on the TDA for each respective high-value keyword and generates a similarity matrix that indicates one or more sets that each comprise highly associated high-value keywords.
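
    Under simplifying assumptions (random TDA scores and hypothetical keywords), the core computation of the claimed process can be sketched as a cosine-similarity matrix over the keywords' term-document association vectors:

```python
import numpy as np

# Pairwise similarity of high-value keywords from their term-document
# association (TDA) vectors; keywords, scores and the threshold are
# invented for illustration.

rng = np.random.default_rng(4)
keywords = ["sensor", "network", "protocol", "antenna"]   # hypothetical
tda = rng.uniform(size=(len(keywords), 50))               # keyword x document

unit = tda / np.linalg.norm(tda, axis=1, keepdims=True)
sim = unit @ unit.T                                       # cosine similarity

for i in range(len(keywords)):
    for j in range(i + 1, len(keywords)):
        if sim[i, j] > 0.8:        # illustrative association threshold
            print(keywords[i], "<->", keywords[j], ": %.2f" % sim[i, j])
```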

  17. Towards a typology of business process management professionals: identifying patterns of competences through latent semantic analysis

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Schmiedel, Theresa; Gorbacheva, Elena; vom Brocke, Jan

    2016-01-01

    While researchers have analysed the organisational competences that are required for successful Business Process Management (BPM) initiatives, individual BPM competences have not yet been studied in detail. In this study, latent semantic analysis is used to examine a collection of 1507 BPM-related job advertisements in order to develop a typology of BPM professionals. This empirical analysis reveals distinct ideal types and profiles of BPM professionals on several levels of abstraction. A closer look at these ideal types and profiles confirms that BPM is a boundary-spanning field that requires interdisciplinary sets of competence that range from technical competences to business and systems competences. Based on the study's findings, it is posited that individual and organisational alignment with the identified ideal types and profiles is likely to result in high employability and organisational BPM success.
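
    A minimal latent-semantic-analysis pipeline of the kind applied in the study might look as follows; the toy corpus, component count, and cluster count are assumptions, not the study's settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# LSA over job advertisements: TF-IDF term-document matrix, truncated
# SVD into a latent semantic space, then clustering into profiles.

ads = [
    "business process modelling BPMN stakeholder workshops",
    "process automation java integration systems architecture",
    "change management communication process governance strategy",
    "ERP systems analysis requirements engineering SQL",
]

X = TfidfVectorizer().fit_transform(ads)             # term-document matrix
lsa = TruncatedSVD(n_components=2).fit_transform(X)  # latent semantic space
profiles = KMeans(n_clusters=2, n_init=10).fit_predict(lsa)
print("ideal-type assignment per ad:", profiles)
```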

  18. Pharmaceutical screen identifies novel target processes for activation of autophagy with a broad translational potential

    PubMed Central

    Chauhan, Santosh; Ahmed, Zahra; Bradfute, Steven B.; Arko-Mensah, John; Mandell, Michael A.; Won Choi, Seong; Kimura, Tomonori; Blanchet, Fabien; Waller, Anna; Mudd, Michal H.; Jiang, Shanya; Sklar, Larry; Timmins, Graham S.; Maphis, Nicole; Bhaskar, Kiran; Piguet, Vincent; Deretic, Vojo

    2015-01-01

    Autophagy is a conserved homeostatic process active in all human cells and affecting a spectrum of diseases. Here we use a pharmaceutical screen to discover new mechanisms for activation of autophagy. We identify a subset of pharmaceuticals inducing autophagic flux with effects in diverse cellular systems modelling specific stages of several human diseases such as HIV transmission and hyperphosphorylated tau accumulation in Alzheimer's disease. One drug, flubendazole, is a potent inducer of autophagy initiation and flux by affecting acetylated and dynamic microtubules in a reciprocal way. Disruption of dynamic microtubules by flubendazole results in mTOR deactivation and dissociation from lysosomes leading to TFEB (transcription factor EB) nuclear translocation and activation of autophagy. By inducing microtubule acetylation, flubendazole activates JNK1 leading to Bcl-2 phosphorylation, causing release of Beclin1 from Bcl-2-Beclin1 complexes for autophagy induction, thus uncovering a new approach to inducing autophagic flux that may be applicable in disease treatment. PMID:26503418

  19. Finding Jumps in Otherwise Smooth Curves: Identifying Critical Events in Political Processes

    PubMed Central

    Ratkovic, Marc T.

    2010-01-01

    Many social processes are stable and smooth in general, with discrete jumps. We develop a sequential segmentation spline method that can identify both the location and the number of discontinuities in a series of observations with a time component, while fitting a smooth spline between jumps, using a modified Bayesian Information Criterion statistic as a stopping rule. We explore the method in a large-n, unbalanced panel setting with George W. Bush’s approval data, a small-n time series with median DW-NOMINATE scores for each Congress over time, and a series of simulations. We compare the method to several extant smoothers, and the method performs favorably in terms of visual inspection, residual properties, and event detection. Finally, we discuss extensions of the method. PMID:20721311
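
    A toy version of the underlying idea — compare a smooth fit against a fit that allows one discontinuity and accept the jump only if a BIC-style criterion improves — can be sketched as follows (plain polynomials stand in for the paper's splines, and the series is simulated):

```python
import numpy as np

# Detect a single jump in an otherwise smooth series via BIC comparison.

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + (t > 0.6) * 1.5 + rng.normal(0, 0.2, t.size)

def bic(resid, k, n):
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

# smooth fit with no jump
c = np.polyfit(t, y, 5)
bic0 = bic(y - np.polyval(c, t), 6, t.size)

# best single-jump fit: independent smooth pieces on each side
best = min(
    (bic(np.concatenate([
        y[:i] - np.polyval(np.polyfit(t[:i], y[:i], 3), t[:i]),
        y[i:] - np.polyval(np.polyfit(t[i:], y[i:], 3), t[i:])]),
        8, t.size), i)
    for i in range(20, 180))

print("jump detected" if best[0] < bic0 else "no jump",
      "at t = %.2f" % t[best[1]])
```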

  20. Identifying Areas for Improvement in the HIV Screening Process of a High-Prevalence Emergency Department.

    PubMed

    Zucker, Jason; Cennimo, David; Sugalski, Gregory; Swaminthan, Shobha

    2016-06-01

    In 1993, the Centers for Disease Control recommendations for HIV testing were extended to include persons obtaining care in the emergency department (ED). Situated in Newark, New Jersey, the University Hospital (UH) ED serves a community with a greater than 2% HIV prevalence, and a recent study showed a UH ED HIV seroprevalence of 6.5%, of which 33% were previously undiagnosed. Electronic records for patients seen in the UH ED from October 1st, 2014, to February 28th, 2015, were obtained. Information was collected on demographics, ED diagnosis, triage time, and HIV testing. Random sampling of 500 patients was performed to identify those eligible for screening. Univariate and multivariate analyses were done to assess screening characteristics. Only 9% (8.8-9.3%) of patients eligible for screening were screened in the ED. Sixteen percent (15.7-16.6%) of those in the age group 18-25 and 12% (11.6-12.3%) of those in the age group 26-35 were screened, whereas 8% (7.8-8.2%) of those in the age group 35-45 were screened. 19.6% (19-20.1%) of eligible patients in fast track were screened versus 1.7% (1.6-1.8%) in the main ED. Eighty-five percent of patients screened were triaged between 6 a.m. and 8 p.m., with 90% of all screening tests done by the HIV counseling, testing, and referral services. Due to the high prevalence of HIV, urban EDs play an integral public health role in the early identification and linkage to care of patients with HIV. By evaluating our current screening process, we identified opportunities to improve it and reduce missed opportunities for diagnosis. PMID:27286295

  1. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    Atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly documented by previous research. The present study examined whether these face scanning patterns could be useful for identifying children with ASD by adopting a machine learning algorithm for classification. Specifically, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016] to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should focus on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. PMID:27037971
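
    A schematic of the evaluation loop, with synthetic stand-in features rather than the study's eye-movement data, might look like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Cross-validated classification with accuracy/sensitivity/specificity.
# Features and group labels below are synthetic assumptions.

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(0.8, 1, (40, 10))])
y = np.array([0] * 40 + [1] * 40)      # 0 = typical, 1 = ASD (hypothetical)

pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()

print("accuracy:    %.2f" % ((tp + tn) / y.size))
print("sensitivity: %.2f" % (tp / (tp + fn)))
print("specificity: %.2f" % (tn / (tn + fp)))
```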

  2. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes. PMID:26671796
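
    As an illustration of nucleosome calling without a fixed component count, the sketch below fits a variational Dirichlet-process Gaussian mixture (a convenient stand-in for DPNuc's MCMC sampler) to toy read-center positions, including an overlapping pair:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Dirichlet-process-style mixture on simulated read centers; no prior
# knowledge of the nucleosome count is supplied, only an upper bound.

rng = np.random.default_rng(7)
reads = np.concatenate([rng.normal(1000, 40, 300),   # nucleosome 1
                        rng.normal(1160, 40, 250),   # overlapping neighbor
                        rng.normal(1600, 40, 200)])  # nucleosome 3

dpgmm = BayesianGaussianMixture(
    n_components=10,                                 # upper bound only
    weight_concentration_prior_type="dirichlet_process",
).fit(reads.reshape(-1, 1))

active = dpgmm.weights_ > 0.05                       # occupied components
print("detected centers:", np.sort(dpgmm.means_[active].ravel()).round(0))
```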

  3. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael E.

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  4. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record

    PubMed Central

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I. M.

    2012-01-01

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records. PMID:22734056

  5. Beyond Element-Wise Interactions: Identifying Complex Interactions in Biological Processes

    PubMed Central

    Kendrick, Keith; Feng, Jianfeng

    2009-01-01

    Background Biological processes typically involve the interactions of a number of elements (genes, cells) acting on each other. Such processes are often modelled as networks whose nodes are the elements in question and whose edges are pairwise relations between them (transcription, inhibition). But more often than not, elements actually work cooperatively or competitively to achieve a task. Or an element can act on the interaction between two others, as in the case of an enzyme controlling a reaction rate. We call these types of interaction “complex” and propose ways to identify them from time-series observations. Methodology We use Granger Causality, a measure of the interaction between two signals, to characterize the influence of an enzyme on a reaction rate. We extend its traditional formulation to the case of multi-dimensional signals in order to capture group interactions, and not only element interactions. Our method is extensively tested on simulated data and applied to three biological datasets: microarray data of the Saccharomyces cerevisiae yeast, local field potential recordings of two brain areas and a metabolic reaction. Conclusions Our results demonstrate that complex Granger causality can reveal new types of relation between signals and is particularly suited to biological data. Our approach raises some fundamental issues of the systems biology approach since finding all complex causalities (interactions) is an NP-hard problem. PMID:19774090
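
    Standard element-wise Granger causality, the starting point the authors extend to multi-dimensional signals, can be demonstrated in a few lines (the bivariate system below is simulated; the paper's "complex" group extension is not reproduced):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Element-wise Granger causality on a simulated pair where x drives z.

rng = np.random.default_rng(8)
n = 500
x = np.zeros(n)
z = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    z[t] = 0.5 * z[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives z

# convention: tests whether the SECOND column Granger-causes the first
res = grangercausalitytests(np.column_stack([z, x]), maxlag=2, verbose=False)
print("p-value (x -> z, lag 1): %.4f" % res[1][0]["ssr_ftest"][1])
```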

  6. Use of Sulphur and Boron Isotopes to Identify Natural Gas Processing Emissions Sources

    NASA Astrophysics Data System (ADS)

    Bradley, C. E.; Norman, A.; Wieser, M. E.

    2003-12-01

    Natural gas processing results in the emission of large amounts of gaseous pollutants as a result of planned and/or emergency flaring, sulphur incineration, and in the course of normal operation. Since many gas plants often contribute to the same air shed, it is not possible to conclusively determine the sources, amounts, and characteristics of pollution from a particular processing facility using traditional methods. However, sulphur isotopes have proven useful in the apportionment of sources of atmospheric sulphate (Norman et al., 1999), and boron isotopes have been shown to be of use in tracing coal contamination through groundwater (Davidson and Bassett, 1993). In this study, both sulphur and boron isotopes have been measured at source, receptor, and control sites, and, if emissions prove to be sufficiently distinct isotopically, they will be used to identify and apportion emissions downwind. Sulphur is present in natural gas as hydrogen sulphide (H2S), which is combusted to sulphur dioxide (SO2) prior to its release to the atmosphere, while boron is present both in hydrocarbon deposits and in any water used in the process. Little is known about the isotopic abundance variations of boron in hydrocarbon reservoirs, but Krouse (1991) has shown that the sulphur isotope composition of H2S in reservoirs varies according to both the concentration and the method of formation of H2S. As a result, gas plants processing gas from different reservoirs are expected to produce emissions with unique isotopic compositions. Samples were collected using a high-volume air sampler placed directly downwind of several gas plants, as well as at a receptor site and a control site. Aerosol sulphate and boron were collected on quartz fibre filters, while SO2 was collected on potassium hydroxide-impregnated cellulose filters. Solid sulphur samples were taken from those plants that process sulphur in order to compare the isotopic composition with atmospheric measurements. A

  7. Hillslopes to Hollows to Channels: Identifying Process Transitions and Domains using Characteristic Scaling Relations

    NASA Astrophysics Data System (ADS)

    Williams, K.; Locke, W. W.

    2011-12-01

    Headwater catchments are partitioned into hillslopes, unchanneled valleys (hollows), and channels. Low order (less than or equal to two) channels comprise most of the stream length in the drainage network so defining where hillslopes end and hollows begin, and where hollows end and channels begin, is important for calibration and verification of hydrologic runoff and sediment production modeling. We test the use of landscape scaling relations to detect flow regimes characteristic of diffusive, concentrated, and incisive runoff, and use these flow regimes as proxies for hillslope, hollow, and channeled landforms. We use LiDAR-derived digital elevation models (DEMs) of two pairs of headwater catchments in southwest and north-central Montana to develop scaling relations of flowpath length, total stream power, and contributing area. The catchment pairs contrast low versus high drainage density and north versus south aspect. Inflections in scaling relations of contributing area and flowpath length in a single basin (modified Hack's law) and contributing area and total stream power were used to identify hillslope and fluvial process domain transitions. In the modified Hack's law, inflections in the slope of the log-log power law are hypothesized to correspond to changes in flow regime used as proxies for hillslope, hollow, and channeled landforms. Similarly, rate of change of total stream power with contributing area is hypothesized to become constant and then decrease at the hillslope to fluvial domain transition. Power law scaling of frequency-magnitude plots of curvature and an aspect-related parameter were also tested as an indicator of the transition from scale-dependent hillslope length to the scale invariant fluvial domain. Curvature and aspect were calculated at each cell in spectrally filtered DEMs. Spectral filtering by fast Fourier and wavelet transforms enhances detection of fine-scale fluvial features by removing long wavelength topography. Using the

  8. Identifying and processing the gap between perceived and actual agreement in breast pathology interpretation.

    PubMed

    Carney, Patricia A; Allison, Kimberly H; Oster, Natalia V; Frederick, Paul D; Morgan, Thomas R; Geller, Berta M; Weaver, Donald L; Elmore, Joann G

    2016-07-01

    We examined how pathologists process their perceptions of how well their diagnostic interpretations of breast pathology cases agree with a reference standard. To accomplish this, we created an individualized, self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations of a test set compared with a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared with the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts about learning about differences between their perceptions and actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement); overestimation was greatest for atypia and least for invasive breast cancer. Non-academic-affiliated pathologists estimated their performance more accurately than academic-affiliated pathologists (77.6 vs 48%; P=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses (40 vs 6%). Before the continuing medical education program, nearly 55% (54.9%) of participants could not estimate whether they would overinterpret or underinterpret the cases relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would change in their practice to improve. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement with a

  9. Identifying vegetation's influence on multi-scale fluvial processes based on plant trait adaptations

    NASA Astrophysics Data System (ADS)

    Manners, R.; Merritt, D. M.; Wilcox, A. C.; Scott, M.

    2015-12-01

    Riparian vegetation-geomorphic interactions are critical to the physical and biological function of riparian ecosystems, yet we lack a mechanistic understanding of these interactions and predictive ability at the reach to watershed scale. Plant functional groups, or groupings of species that have similar traits, either in terms of a plant's life history strategy (e.g., drought tolerance) or morphology (e.g., growth form), may provide an expression of vegetation-geomorphic interactions. We are developing an approach that 1) identifies where along a river corridor plant functional groups exist and 2) links the traits that define functional groups and their impact on fluvial processes. The Green and Yampa Rivers in Dinosaur National Monument have wide variations in hydrology, hydraulics, and channel morphology, as well as a large dataset of species presence. For these rivers, we build a predictive model of the probable presence of plant functional groups based on site-specific aspects of the flow regime (e.g., inundation probability and duration), hydraulic characteristics (e.g., velocity), and substrate size. Functional group traits are collected from the literature and measured in the field. We found that life-history traits more strongly predicted functional group presence than did morphological traits. However, some life-history traits, important for determining the likelihood of a plant existing along an environmental gradient, are directly related to the morphological properties of the plant, important for the plant's impact on fluvial processes. For example, stem density (i.e., dry mass divided by volume of stem) is positively correlated to drought tolerance and is also related to the modulus of elasticity. Growth form, which is related to the plant's susceptibility to biomass-removing fluvial disturbances, is also related to frontal area. Using this approach, we can identify how plant community composition and distribution shifts with a change to the flow

  10. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process

    PubMed Central

    Erzurumluoglu, A. Mesut; Shihab, Hashem A.; Baird, Denis; Richardson, Tom G.; Day, Ian N. M.; Gaunt, Tom R.

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). This wide range of methods and diversity of file formats used in sequence analysis are a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many researchers possess “just enough” knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data were created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review will also compare the methods used to identify highly penetrant variants when data are obtained from consanguineous individuals as opposed to nonconsanguineous ones, and when Mendelian disorders are analysed as opposed to common-complex disorders. PMID:26106619

  11. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques.
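
    The Kappa coefficient used for the accuracy assessment is computed from the error matrix as follows; the matrix entries here are invented for illustration.

```python
import numpy as np

# Cohen's Kappa from an error (confusion) matrix: agreement beyond chance.

cm = np.array([[850, 40],      # rows: reference damage / no-damage
               [60, 1050]])    # cols: classified damage / no-damage

n = cm.sum()
po = np.trace(cm) / n                          # observed agreement
pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
kappa = (po - pe) / (1 - pe)
print("Kappa coefficient: %.3f" % kappa)
```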

  12. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.

  13. Identifying weathering processes by Si isotopes in two small catchments in the Black Forest (Germany)

    NASA Astrophysics Data System (ADS)

    Steinhoefel, G.; Breuer, J.; von Blanckenburg, F.; Horn, I.; Kaczorek, D.; Sommer, M.

    2013-12-01

    isotopically light Si with Fe-oxides, which shifts surface water to δ30Si values up to 1.1‰. The Si isotope signature of the main stream depends on the variable proportions of inflowing surface water and groundwater. The results from these small catchments demonstrate that Si isotopes are a powerful tool for identifying weathering processes and the sources of dissolved Si, which can now be used to constrain the isotope signature of large river systems.

  14. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    SciTech Connect

    Hurlbert, Allen H.; Stegen, James C.

    2014-12-02

    Many processes have been put forward to explain the latitudinal gradient in species richness. Here, we use a simulation model to examine four of the most common hypotheses and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The productivity, or energetic constraints, hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The tropical stability hypothesis argues that major climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. (4) Finally, the speciation rates hypothesis suggests that the latitudinal richness gradient arises from a parallel gradient in rates of speciation. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all exhibited phylogenies which were more imbalanced than expected, showed a negative relationship between mean root distance and richness, and diversity-dependence of speciation rate estimates through time. Using Bayesian Analysis of Macroevolutionary Mixtures on the simulated phylogenies, we found that the relationship between speciation rates and latitude could distinguish among these three scenarios. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with more than one hypothesis.

  15. Robust conversion of marrow cells to skeletal muscle with formation of marrow-derived muscle cell colonies: A multifactorial process

    SciTech Connect

    Abedi, Mehrdad; Greer, Deborah A.; Colvin, Gerald A.; Demers, Delia A.; Dooner, Mark S.; Harpel, Jasha A.; Weier, Heinz-Ulrich G.; Lambert, Jean-Francois; Quesenberry, Peter J.

    2004-01-10

    Murine marrow cells are capable of repopulating skeletal muscle fibers. A point of concern has been the robustness of such conversions. We have investigated the impact of the type of cell delivery, muscle injury, nature of the delivered cells, and stem cell mobilization on marrow-to-muscle conversion. We transplanted GFP transgenic marrow into irradiated C57BL/6 mice and then injured the anterior tibialis muscle with cardiotoxin. One month after injury, sections were analyzed by standard and deconvolutional microscopy for expression of muscle and hematopoietic markers. Irradiation was essential to conversion, although whether through injury or induction of chimerism is not clear. Cardiotoxin-injected, and to a lesser extent PBS-injected, muscles showed significant numbers of GFP+ muscle fibers, while uninjected muscles showed only rare GFP+ cells. Marrow conversion to muscle was increased by two cycles of G-CSF mobilization and, to a lesser extent, by G-CSF with steel factor or GM-CSF. Transplantation of female GFP marrow to male C57BL/6 mice and GFP marrow to Rosa26 mice showed fusion of donor cells to recipient muscle. High numbers of donor-derived muscle colonies and up to 12 percent GFP-positive muscle cells were seen after mobilization or direct injection. These levels of donor muscle chimerism approach levels which could be clinically significant in developing strategies for the treatment of muscular dystrophies. In summary, the conversion of marrow to skeletal muscle cells is based on cell fusion and is critically dependent on injury. This conversion is also numerically significant and increases with mobilization.

  16. Energy Landscape Reveals That the Budding Yeast Cell Cycle Is a Robust and Adaptive Multi-stage Process

    PubMed Central

    Lv, Cheng; Li, Xiaoguang; Li, Fangting; Li, Tiejun

    2015-01-01

    Quantitatively understanding the robustness, adaptivity and efficiency of cell cycle dynamics under the influence of noise is a fundamental but difficult question to answer for most eukaryotic organisms. Using a simplified budding yeast cell cycle model perturbed by intrinsic noise, we systematically explore these issues from an energy landscape point of view by constructing an energy landscape for the considered system based on large deviation theory. Analysis shows that the cell cycle trajectory is sharply confined by the ambient energy barrier, and the landscape along this trajectory exhibits a generally flat shape. We explain the evolution of the system on this flat path by incorporating its non-gradient nature. Furthermore, we illustrate how this global landscape changes in response to external signals, observing a nice transformation of the landscapes as the excitable system approaches a limit cycle system when nutrients are sufficient, as well as the formation of additional energy wells when the DNA replication checkpoint is activated. By taking into account the finite volume effect, we find additional pits along the flat cycle path in the landscape associated with the checkpoint mechanism of the cell cycle. The difference between the landscapes induced by intrinsic and extrinsic noise is also discussed. In our opinion, this meticulous structure of the energy landscape for our simplified model is of general interest to other cell cycle dynamics, and the proposed methods can be applied to study similar biological systems. PMID:25794282

  17. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  18. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  19. Robust control of accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, W. Joel D.; Abdallah, Chaouki T.

    1991-07-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  20. Reliable and robust entanglement witness

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Mei, Quanxin; Zhou, Shan; Ma, Xiongfeng

    2016-04-01

    Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, entanglement is witnessed by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first one is reliability. Due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second problem relates to robustness. A witness may not be optimal for a target state and fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness scheme, in which the correctness of the conclusion is independent of the implemented measurement devices. In order to overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only data postprocessing needs to be modified compared to the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to maximally draw reliable conclusions.
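
    A textbook two-qubit example (not the paper's optimized measurement-device-independent construction) makes the sign convention concrete: the witness W = I/2 − |Φ⁺⟩⟨Φ⁺| is non-negative on every separable state but negative on the Bell state itself.

```python
import numpy as np

# Textbook entanglement witness for the Bell state |Phi+>:
# W = I/2 - |Phi+><Phi+| satisfies Tr(W rho) >= 0 for all separable rho.

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |Phi+>
W = 0.5 * np.eye(4) - np.outer(phi, phi)

rho_bell = np.outer(phi, phi)                  # maximally entangled state
rho_sep = np.diag([0.0, 1.0, 0.0, 0.0])        # product state |01><01|

print("Tr(W rho_sep)  = %+.2f" % np.trace(W @ rho_sep))   # +0.50 -> no claim
print("Tr(W rho_bell) = %+.2f" % np.trace(W @ rho_bell))  # -0.50 -> entangled
```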

  1. On the robustness of the r-process in neutron-star mergers against variations of nuclear masses

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Wu, M. R.; Martínez-Pinedo, G.; Langanke, K.; Bauswein, A.; Janka, H.-T.; Frank, A.

    2016-07-01

    r-process calculations have been performed for matter ejected dynamically in neutron star mergers (NSM); the calculations are based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamics (SPH) simulation. Our calculations consider an extended nuclear reaction network, including spontaneous, β- and neutron-induced fission, and adopt fission yield distributions from the ABLA code. In this contribution we study the sensitivity of the r-process abundances to nuclear masses by using different mass models for the calculation of neutron capture cross sections via the statistical model. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion that allows all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundances (the second and third peaks, the rare-earth peak and the lead peak) for all mass models, as these features are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130.

  2. Identifying Leadership Potential: The Process of Principals within a Charter School Network

    ERIC Educational Resources Information Center

    Waidelich, Lynn A.

    2012-01-01

    The importance of strong educational leadership for American K-12 schools cannot be overstated. As such, school districts need to actively recruit and develop leaders. One way to do so is for school officials to become more strategic in leadership identification and development. If contemporary leaders are strategic about whom they identify and…

  3. Students' Conceptual Knowledge and Process Skills in Civic Education: Identifying Cognitive Profiles and Classroom Correlates

    ERIC Educational Resources Information Center

    Zhang, Ting; Torney-Purta, Judith; Barber, Carolyn

    2012-01-01

    In 2 related studies framed by social constructivism theory, the authors explored a fine-grained analysis of adolescents' civic conceptual knowledge and skills and investigated them in relation to factors such as teachers' qualifications and students' classroom experiences. In Study 1 (with about 2,800 U.S. students), the authors identified 4…

  4. Identifying the hazard characteristics of powder byproducts generated from semiconductor fabrication processes.

    PubMed

    Choi, Kwang-Min; An, Hee-Chul; Kim, Kwan-Sick

    2015-01-01

    Semiconductor manufacturing processes generate powder particles as byproducts which could potentially affect workers' health. The chemical composition, size, shape, and crystal structure of these powder particles were investigated by scanning electron microscopy equipped with an energy dispersive spectrometer, Fourier transform infrared spectrometry, and X-ray diffractometry. The powders generated in the diffusion and chemical mechanical polishing processes were amorphous silica. The particles from the chemical vapor deposition (CVD) and etch processes were TiO2 and Al2O3 particles, and Al2O3 particles, respectively. As for metallization, WO3, TiO2, and Al2O3 particles were generated from equipment used for tungsten and barrier-metal (TiN) operations. In photolithography, the powder particles were 1-10 μm in size and spherical in shape. In addition, the powders generated from the high-current and medium-current ion implantation processes included arsenic (As), whereas the high-energy process did not. For all samples collected using a personal air sampler during preventive maintenance of process equipment, the total mass of airborne particles was < 1 μg, the detection limit of the microbalance. In addition, the mean mass concentrations of airborne PM10 (particles less than 10 μm in diameter) measured with a direct-reading aerosol monitor by area sampling were between 0.00 and 0.02 μg/m3. Although the exposure concentration of airborne particles during preventive maintenance is extremely low, it is necessary to make continuous improvements to the process and work environment, because the influence of chronic low-level exposure cannot be excluded. PMID:25192369

  5. Identifying temporal and causal contributions of neural processes underlying the Implicit Association Test (IAT)

    PubMed Central

    Forbes, Chad E.; Cameron, Katherine A.; Grafman, Jordan; Barbey, Aron; Solomon, Jeffrey; Ritter, Walter; Ruchkin, Daniel S.

    2012-01-01

    The Implicit Association Test (IAT) is a popular behavioral measure that assesses the associative strength between outgroup members and stereotypical and counterstereotypical traits. Less is known, however, about the degree to which the IAT reflects automatic processing. Two studies examined automatic processing contributions to a gender-IAT using a data-driven, social neuroscience approach. Performance on congruent (e.g., categorizing male names with synonyms of strength) and incongruent (e.g., categorizing female names with synonyms of strength) IAT blocks was separately analyzed using EEG (event-related potentials, or ERPs, and coherence; Study 1) and lesion (Study 2) methodologies. Compared to incongruent blocks, performance on congruent IAT blocks was associated with more positive ERPs that manifested in frontal and occipital regions at automatic processing speeds, occipital regions at more controlled processing speeds and was compromised by volume loss in the anterior temporal lobe (ATL), insula and medial PFC. Performance on incongruent blocks was associated with volume loss in supplementary motor areas, cingulate gyrus and a region in medial PFC similar to that found for congruent blocks. Greater coherence was found between frontal and occipital regions to the extent individuals exhibited more bias. This suggests there are separable neural contributions to congruent and incongruent blocks of the IAT but there is also a surprising amount of overlap. Given the temporal and regional neural distinctions, these results provide converging evidence that stereotypic associative strength assessed by the IAT indexes automatic processing to a degree. PMID:23226123

  6. Assessing spoilage features of osmotolerant yeasts identified from kiwifruit plantation and processing environment in Shaanxi, China.

    PubMed

    Niu, Chen; Yuan, Yahong; Hu, Zhongqiu; Wang, Zhouli; Liu, Bin; Wang, Huxuan; Yue, Tianli

    2016-09-01

    Osmotolerant yeasts originating from the kiwifruit industrial chain can cause spoilage incidents, yet little information is available about their species and spoilage features. This work identified possible spoilage osmotolerant yeasts from orchards and a quick-freeze kiwifruit manufacturer in the main producing areas of Shaanxi, China, and further characterized their spoilage features. A total of 86 osmotolerant isolates spanning 29 species were identified through 26S rDNA sequencing at the D1/D2 domain, among which Hanseniaspora uvarum occurred most frequently and was most closely associated with kiwifruit. RAPD analysis indicated a high variability of this species across sampling regions. The correlation of genotypes with origins was established except for isolates from Zhouzhi orchards, and the movement of H. uvarum from orchard to manufacturer could be inferred, helping to trace the source of spoilage. The manufacturing environment favored the inhabitance of osmotolerant yeasts more than the orchards, showing a higher positive-sample ratio and osmotolerant yeast ratio. Growth curves under various glucose levels were fitted with the grofit R package, and the resulting growth parameters indicated phenotypic diversity within H. uvarum and the remaining species. Wickerhamomyces anomalus (OM14) and Candida glabrata (OZ17) were the most glucose-tolerant species, and the availability of high glucose would assist them in producing more gas. The tested osmotolerant species altered the odor of kiwifruit concentrate juice. 3-Methyl-1-butanol, phenylethyl alcohol, phenylethyl acetate, 5-hydroxymethylfurfural (5-HMF) and ethyl acetate were the most altered compounds identified by GC/MS in the juice. In particular, W. anomalus produced 4-vinylguaiacol and M. guilliermondii produced 4-ethylguaiacol, which would imperil product acceptance. The study determines the target spoilers as well as offering detailed spoilage features, which will be instructive in implementing preventative
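
    The growth-parameter extraction above was done with the grofit R package; the sketch below shows the same idea in Python under stated assumptions: a modified Gompertz model stands in for grofit's parametric fits, and the OD600 readings and starting guesses are hypothetical.

```python
# A minimal sketch of the growth-curve fitting step, assuming hypothetical
# OD600 readings; the modified Gompertz model is a common stand-in for the
# parametric fits performed by the grofit R package.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz: A = max growth, mu = max growth rate, lam = lag time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Hypothetical OD600 readings for one isolate at one glucose level.
t = np.array([0, 4, 8, 12, 16, 20, 24, 32, 40, 48], dtype=float)
od = np.array([0.02, 0.03, 0.08, 0.25, 0.60, 0.95, 1.15, 1.30, 1.32, 1.33])

popt, _ = curve_fit(gompertz, t, od, p0=[1.3, 0.1, 6.0])
A, mu, lam = popt
print(f"max growth A={A:.2f}, max rate mu={mu:.3f}/h, lag lambda={lam:.1f} h")
```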

  7. Stress test: identifying crowding stress-tolerant hybrids in processing sweet corn

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improvement in tolerance to intense competition at high plant populations (i.e. crowding stress) is a major genetic driver of corn yield gain over the last half-century. Recent research found differences in crowding stress tolerance among a few modern processing sweet corn hybrids; however, a larger asse...

  8. Sociometric Effects in Small Classroom Groups Using Curricula Identified as Process-Oriented.

    ERIC Educational Resources Information Center

    Nickse, Ruth S.; Ripple, Richard E.

    This study was an attempt to document aspects of small group work in classrooms engaged in the process education curricula called "Materials and Activities for Teachers and Children" (MATCH). Data on student-student interaction related to small group work were gathered by paper-and-pencil sociometric questionnaires and measures of group…

  9. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... service delivery programs or Web sites in order to provide covered persons with timely and useful... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority of Service § 1010.300 What processes are to be implemented...

  10. Identifying the Neural Correlates Underlying Social Pain: Implications for Developmental Processes

    ERIC Educational Resources Information Center

    Eisenberger, Naomi I.

    2006-01-01

    Although the need for social connection is critical for early social development as well as for psychological well-being throughout the lifespan, relatively little is known about the neural processes involved in maintaining social connections. The following review summarizes what is known regarding the neural correlates underlying feeling of…

  11. Identifying Process Variables for a Low Atmospheric Pressure Stunning/Killing System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Current systems for pre-slaughter gas stunning/killing of broilers use process gases such as carbon dioxide, argon, or a mixture of these gases with air or oxygen. Both carbon dioxide and argon work by displacing oxygen to induce hypoxia in the bird, leading to unconsciousness and ultimately death....

  12. A national effort to identify fry processing clones with low acrylamide-forming potential

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Acrylamide is a suspected human carcinogen. Processed potato products, such as chips and fries, contribute to dietary intake of acrylamide. One of the most promising approaches to reducing acrylamide consumption is to develop and commercialize new potato varieties with low acrylamide-forming potenti...

  13. Identity in a Canadian Urban Community. A Process Report of the Brunskill Subproject. Project Canada West.

    ERIC Educational Resources Information Center

    Burke, M.; And Others

    The purpose of this subproject is to guide students to meet and interact with individuals from the many subcultures in a community (see ED 055 011). This progress report of the second year's activities includes information on the process of curriculum development, the materials developed, evaluation, roles of supporting agencies, behavioral…

  14. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, all of which must work together to create an optimal course of treatment for the patient. We used information gained from focus groups to illustrate the current process of delivering genetic test results to clinicians. We propose a Business Process Model and Notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  15. ROBUSTNESS OF THE CSSX PROCESS TO FEED VARIATION: EFFICIENT CESIUM REMOVAL FROM THE HIGH POTASSIUM WASTES AT HANFORD

    SciTech Connect

    Delmau, Laetitia Helene; Birdwell Jr, Joseph F; McFarlane, Joanna; Moyer, Bruce A

    2010-01-01

    This contribution finds the Caustic-Side Solvent Extraction (CSSX) process to be effective for the removal of cesium from the Hanford tank-waste supernatant solutions. The Hanford waste types are more challenging than those at the Savannah River Site (SRS) in that they contain significantly higher levels of potassium, the chief competing ion in the extraction of cesium. By use of a computerized CSSX thermodynamic model, it was calculated that the higher levels of potassium depress the cesium distribution ratio D(Cs), as validated to within ±11% by the measurement of D(Cs) values on various Hanford waste-simulant compositions. A simple analog model equation that can be readily applied in a spreadsheet for estimating the D(Cs) values for the varying waste compositions was developed and shown to yield nearly identical estimates to the computerized CSSX model. It is concluded from the batch distribution experiments, the physical-property measurements, the equilibrium modeling, the flowsheet calculations, and the contactor sizing that the CSSX process as currently formulated for cesium removal from alkaline salt waste at the SRS is capable of treating similar Hanford tank feeds, albeit with more stages. For the most challenging Hanford waste composition tested, 31 stages would be required to provide a cesium decontamination factor (DF) of 5000 and a concentration factor (CF) of 2. Commercial contacting equipment with rotor diameters of 10 in. for extraction and 5 in. for stripping should have the capacity to meet throughput requirements, but testing will be required to confirm that the needed efficiency and hydraulic performance are actually obtainable. Markedly improved flowsheet performance was calculated based on experimental distribution ratios determined for an improved solvent formulation employing the more soluble cesium extractant BEHBCalixC6 used with alternative scrub and strip solutions, respectively 0.1 M NaOH and 0.010 M boric acid. The

  16. Deep Proteome Analysis Identifies Age-Related Processes in C. elegans.

    PubMed

    Narayan, Vikram; Ly, Tony; Pourkarimi, Ehsan; Murillo, Alejandro Brenes; Gartner, Anton; Lamond, Angus I; Kenyon, Cynthia

    2016-08-01

    Effective network analysis of protein data requires high-quality proteomic datasets. Here, we report a near doubling in coverage of the C. elegans adult proteome, identifying >11,000 proteins in total with ∼9,400 proteins reproducibly detected in three biological replicates. Using quantitative mass spectrometry, we identify proteins whose abundances vary with age, revealing a concerted downregulation of proteins involved in specific metabolic pathways and upregulation of cellular stress responses with advancing age. Among these are ∼30 peroxisomal proteins, including the PRX-5/PEX5 import protein. Functional experiments confirm that protein import into the peroxisome is compromised in vivo in old animals. We also studied the behavior of the set of age-variant proteins in chronologically age-matched, long-lived daf-2 insulin/IGF-1-pathway mutants. Unexpectedly, the levels of many of these age-variant proteins did not scale with extended lifespan. This indicates that, despite their youthful appearance and extended lifespans, not all aspects of aging are reset in these long-lived mutants. PMID:27453442

  17. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics: Preprint

    SciTech Connect

    Suh, C.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.; Biagioni, D.

    2011-07-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.

  18. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics

    SciTech Connect

    Suh, C.; Biagioni, D.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.

    2011-01-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.
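
    A minimal sketch of the diffusion-map-plus-clustering step, assuming a samples-by-measurements matrix standing in for the AZO process/characterization data; the kernel bandwidth and cluster count are illustrative choices, not the authors' settings.

```python
# Diffusion-map embedding followed by hierarchical agglomerative clustering,
# on synthetic data standing in for the real AZO sample matrix.
import numpy as np
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))             # stand-in for AZO process data

D = squareform(pdist(X))
eps = np.median(D) ** 2                  # heuristic kernel bandwidth
K = np.exp(-D**2 / eps)                  # Gaussian affinity kernel
P = K / K.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
# Skip the trivial leading eigenvector; keep two diffusion coordinates.
embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]

labels = AgglomerativeClustering(n_clusters=3).fit_predict(embedding)
print(labels)
```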

  19. Octopaminergic Modulation of Temporal Frequency Coding in an Identified Optic Flow-Processing Interneuron

    PubMed Central

    Longden, Kit D.; Krapp, Holger G.

    2010-01-01

    Flying generates predictably different patterns of optic flow compared with other locomotor states. A sensorimotor system tuned to rapid responses and a high bandwidth of optic flow would help the animal to avoid wasting energy through imprecise motor action. However, neural processing that covers a higher input bandwidth itself comes at higher energetic costs, which would be a poor investment when the animal was not flying. How does the blowfly adjust the dynamic range of its optic flow-processing neurons to the locomotor state? Octopamine (OA) is a biogenic amine central to the initiation and maintenance of flight in insects. We used an OA agonist, chlordimeform (CDM), to simulate the widespread OA release during flight and recorded the effects on the temporal frequency coding of the H2 cell. This cell is a visual interneuron known to be involved in flight stabilization reflexes. The application of CDM resulted in (i) an increase in the cell's spontaneous activity, expanding the inhibitory signaling range; (ii) an initial response gain to moving gratings (20–60 ms post-stimulus) that depended on the temporal frequency of the grating; and (iii) a reduction in the rate and magnitude of motion adaptation that was also temporal frequency-dependent. To our knowledge, this is the first demonstration that the application of a neuromodulator can induce velocity-dependent alterations in the gain of a wide-field optic flow-processing neuron. The observed changes in the cell's response properties resulted in a 33% increase in the cell's information rate when encoding random changes in temporal frequency of the stimulus. The increased signaling range and more rapid, longer-lasting responses employed more spikes to encode each bit, and so consumed a greater amount of energy. It appears that for the fly, investing more energy in sensory processing during flight is more efficient than wasting energy on under-performing motor control. PMID:21152339

  20. Identifying scale-emergent, nonlinear, asynchronous processes of wetland methane exchange

    NASA Astrophysics Data System (ADS)

    Sturtevant, Cove; Ruddell, Benjamin L.; Knox, Sara Helen; Verfaillie, Joseph; Matthes, Jaclyn Hatala; Oikawa, Patricia Y.; Baldocchi, Dennis

    2016-01-01

    Methane (CH4) exchange in wetlands is complex, involving nonlinear asynchronous processes across diverse time scales. These processes and time scales are poorly characterized at the whole-ecosystem level, yet are crucial for accurate representation of CH4 exchange in process models. We used a combination of wavelet analysis and information theory to analyze interactions between whole-ecosystem CH4 flux and biophysical drivers in two restored wetlands of Northern California from hourly to seasonal time scales, explicitly questioning assumptions of linear, synchronous, single-scale analysis. Although seasonal variability in CH4 exchange was dominantly and synchronously controlled by soil temperature, water table fluctuations and plant activity were important synchronous and asynchronous controls at shorter time scales that propagated to the seasonal scale. Intermittent, subsurface water table decline promoted short-term pulses of methane emission but ultimately decreased seasonal CH4 emission through subsequent inhibition after rewetting. Methane efflux also shared information with evapotranspiration from hourly to multiday scales, and the strength and timing of hourly and diel interactions suggested the strong importance of internal gas transport in regulating short-term emission. Traditional linear correlation analysis was generally capable of capturing the major diel and seasonal relationships, but mesoscale, asynchronous interactions and nonlinear, cross-scale effects were unresolved yet important for a deeper understanding of methane flux dynamics. We encourage wider use of these methods to aid interpretation and modeling of long-term continuous measurements of trace gas and energy exchange.
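
    A minimal sketch of the scale-resolved analysis, assuming hourly CH4 flux and water-table series; pairing a discrete wavelet decomposition (PyWavelets) with a mutual-information estimate per scale approximates the wavelet/information-theory combination described above.

```python
# Scale-by-scale shared-information screening between a driver and CH4 flux,
# on synthetic hourly series standing in for the tower measurements.
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 2048                                   # ~85 days of hourly data
water_table = np.cumsum(rng.normal(size=n))
ch4_flux = 0.5 * water_table + rng.normal(size=n)

# Decompose both series and score shared information scale by scale.
for level in range(1, 6):
    wt = pywt.downcoef("a", water_table, "db4", level=level)
    fx = pywt.downcoef("a", ch4_flux, "db4", level=level)
    mi = mutual_info_regression(wt.reshape(-1, 1), fx)[0]
    print(f"level {level} (~{2**level} h): MI = {mi:.2f} nats")
```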

  1. Cholinesterase-Targeting microRNAs Identified in silico Affect Specific Biological Processes

    PubMed Central

    Hanin, Geula; Soreq, Hermona

    2011-01-01

    MicroRNAs (miRs) have emerged as important gene silencers affecting many target mRNAs. Here, we report the identification of 244 miRs that target the 3′-untranslated regions of different cholinesterase transcripts: 116 for butyrylcholinesterase (BChE), 47 for the synaptic acetylcholinesterase (AChE-S) splice variant, and 81 for the normally rare splice variant AChE-R. Of these, 11 and 6 miRs target both AChE-S and AChE-R, and both AChE-R and BChE transcripts, respectively. BChE and AChE-S showed no overlapping miRs, attesting to their distinct modes of miR regulation. Generally, miRs can suppress a number of targets, thereby controlling an entire battery of functions. To evaluate the importance of the cholinesterase-targeted miRs in other specific biological processes, we searched for their other experimentally validated target transcripts and analyzed the gene ontology-enriched biological processes these transcripts are involved in. Interestingly, a number of the resulting categories are also related to cholinesterases. They include, for BChE, response to glucocorticoid stimulus, and for AChE, response to wounding and two child terms of neuron development: regulation of axonogenesis and regulation of dendrite morphogenesis. Importantly, all of the AChE-targeting miRs found to be related to these selected processes were directed against the normally rare AChE-R splice variant, with three of them, including the neurogenesis regulator miR-132, also directed against AChE-S. Our findings point at the AChE-R splice variant as particularly susceptible to miR regulation, highlight those biological functions of cholinesterases that are likely to be subject to miR post-transcriptional control, demonstrate the selectivity of miRs in regulating specific biological processes, and open new avenues for targeted interference with these specific processes. PMID:22007158
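
    The gene-ontology step above relies on standard over-representation statistics; a minimal sketch, with all counts hypothetical, uses the hypergeometric survival function:

```python
# Over-representation p-value for one GO term; all counts are illustrative.
from scipy.stats import hypergeom

M = 20000   # genes in the background
n = 150     # genes annotated to a GO term, e.g. "response to wounding"
N = 300     # validated targets of the cholinesterase-targeting miRs
k = 12      # targets that carry the annotation

# P(X >= k): chance of seeing this much overlap at random.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p = {p_value:.2e}")
```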

  2. Comparative assessment of genomic DNA extraction processes for Plasmodium: Identifying the appropriate method.

    PubMed

    Mann, Riti; Sharma, Supriya; Mishra, Neelima; Valecha, Neena; Anvikar, Anupkumar R

    2015-12-01

    Plasmodium DNA, in addition to being used for the molecular diagnosis of malaria, finds utility in monitoring patient responses to antimalarial drugs, drug resistance studies, genotyping, and sequencing. Over the years, numerous protocols have been proposed for extracting Plasmodium DNA from a variety of sources. Given that DNA isolation is fundamental to successful molecular studies, here we review the most commonly used methods for Plasmodium genomic DNA isolation, emphasizing their pros and cons. A comparison of these existing methods has been made to evaluate their appropriateness for different applications and to identify the method suitable for a particular laboratory-based study. Selection of a suitable and accessible DNA extraction method for Plasmodium requires consideration of many factors, the most important being sensitivity, cost-effectiveness, and the purity and stability of the isolated DNA. The need of the hour is to focus on developing a method that holds up well on all these parameters. PMID:26714505

  3. Identifying Human Disease Genes through Cross-Species Gene Mapping of Evolutionary Conserved Processes

    PubMed Central

    Poot, Martin; Badea, Alexandra; Williams, Robert W.; Kas, Martien J.

    2011-01-01

    Background Understanding complex networks that modulate development in humans is hampered by genetic and phenotypic heterogeneity within and between populations. Here we present a method that exploits natural variation in highly diverse mouse genetic reference panels in which genetic and environmental factors can be tightly controlled. The aim of our study is to test a cross-species genetic mapping strategy, which compares data of gene mapping in human patients with functional data obtained by QTL mapping in recombinant inbred mouse strains in order to prioritize human disease candidate genes. Methodology We exploit evolutionary conservation of developmental phenotypes to discover gene variants that influence brain development in humans. We studied corpus callosum volume in a recombinant inbred mouse panel (C57BL/6J×DBA/2J, BXD strains) using high-field strength MRI technology. We aligned mouse mapping results for this neuro-anatomical phenotype with genetic data from patients with abnormal corpus callosum (ACC) development. Principal Findings From the 61 syndromes which involve an ACC, 51 human candidate genes have been identified. Through interval mapping, we identified a single significant QTL on mouse chromosome 7 for corpus callosum volume with a QTL peak located between 25.5 and 26.7 Mb. Comparing the genes in this mouse QTL region with those associated with human syndromes (involving ACC) and those covered by copy number variations (CNV) yielded a single overlap, namely HNRPU in humans and Hnrpul1 in mice. Further analysis of corpus callosum volume in BXD strains revealed that the corpus callosum was significantly larger in BXD mice with a B genotype at the Hnrpul1 locus than in BXD mice with a D genotype at Hnrpul1 (F = 22.48, p < 9.87 × 10^-5). Conclusion This approach that exploits highly diverse mouse strains provides an efficient and effective translational bridge to study the etiology of human developmental disorders, such as autism and schizophrenia

  4. The June 2014 eruption at Piton de la Fournaise: Robust methods developed for monitoring challenging eruptive processes

    NASA Astrophysics Data System (ADS)

    Villeneuve, N.; Ferrazzini, V.; Di Muro, A.; Peltier, A.; Beauducel, F.; Roult, G. C.; Lecocq, T.; Brenguier, F.; Vlastelic, I.; Gurioli, L.; Guyard, S.; Catry, T.; Froger, J. L.; Coppola, D.; Harris, A. J. L.; Favalli, M.; Aiuppa, A.; Liuzzo, M.; Giudice, G.; Boissier, P.; Brunet, C.; Catherine, P.; Fontaine, F. J.; Henriette, L.; Lauret, F.; Riviere, A.; Kowalski, P.

    2014-12-01

    After almost 3.5 years of quiescence, Piton de la Fournaise (PdF) produced a small summit eruption on 20 June 2014 at 21:35 (GMT). The eruption lasted 20 hours and was preceded by: i) onset of deep eccentric seismicity (15-20 km bsl; 9 km NW of the volcano summit) in March and April 2014; ii) enhanced CO2 soil flux along the NW rift zone; iii) an increase in the number and energy of shallow (<1.5 km asl) VT events. The increase in VT events occurred on 9 June. Their signature, and shallow location, was not characteristic of an eruptive crisis. However, at 20:06 on 20/06 their character changed, 74 minutes before the onset of tremor. Deformation then began at 20:20. Since 2007, PdF has emitted small magma volumes (<3 Mm3) in events preceded by weak and short precursory phases. To respond to this challenging activity style, new monitoring methods were deployed at OVPF. While the JERK and MSNoise methods were developed for processing of seismic data, borehole tiltmeters and permanent monitoring of summit gas emissions, plus CO2 soil flux, were used to track precursory activity. JERK, based on an analysis of the acceleration slope of broad-band seismometer data, gave advance notice of the new eruption by 50 minutes. MSNoise, based on seismic velocity determination, showed a significant decrease 7 days before the eruption. These signals were coupled with changes in summit fumarole composition. Remote sensing allowed the following syn-eruptive observations: - InSAR confirmed measurements made by the OVPF geodetic network, showing that deformation was localized around the eruptive fissures; - A SPOT5 image acquired at 05:41 on 21/06 allowed definition of the flow field area (194 500 m2); - A MODIS image acquired at 06:35 on 21/06 gave a lava discharge rate of 6.9±2.8 m3 s-1, implying an erupted volume of between 0.3 and 0.4 Mm3. - This rate was used with the DOWNFLOW and FLOWGO models, calibrated with the textural data from Piton's 2010 lava, to run lava flow

  5. Establishment of a Cost-Effective and Robust Planning Basis for the Processing of M-91 Waste at the Hanford Site

    SciTech Connect

    Johnson, Wayne L.; Parker, Brian M.

    2004-07-30

    This report identifies and evaluates viable alternatives for the accelerated processing of Hanford Site transuranic (TRU) and mixed low-level wastes (MLLW) that cannot be processed using existing site capabilities. Accelerated processing of these waste streams will lead to earlier reduction of risk and considerable life-cycle cost savings. The processing need is to handle both oversized MLLW and TRU containers as well as containers with surface contact dose rates greater than 200 mrem/hr. This capability is known as the "M-91" processing capability required by Tri-Party Agreement milestone M-91-01. The new, phased approach proposed in this evaluation would use a combination of existing and planned processing capabilities to treat and more easily manage contact-handled waste streams first and would provide for earlier processing of these wastes.

  6. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  7. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A.; Pitman, A.; Decker, M. R.; De Kauwe, M. G.; Abramowitz, G.; Wang, Y.; Kala, J.

    2015-12-01

    Surface fluxes from land surface models (LSM) have traditionally been evaluated against monthly, seasonal or annual mean states. Previous studies have noted the limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions but very few studies have systematically evaluated LSMs during rainfall deficits. We investigate the performance of the Community Atmosphere Biosphere Land Exchange (CABLE) LSM in simulating latent heat fluxes in offline mode. CABLE is evaluated against eddy covariance measurements of latent heat flux across 20 flux tower sites at sub-annual to inter-annual time scales, with a focus on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, soil properties, leaf area index and stomatal conductance. We demonstrate the critical role of hydrological processes for capturing observed declines in latent heat. The effects of soil, LAI and stomatal conductance are shown to be highly site-specific. The default CABLE performs reasonably well at annual scales despite grossly underestimating latent heat during rainfall deficits, highlighting the importance for evaluating models explicitly under water-stressed conditions across multiple vegetation and climate regimes. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining deficiencies point to future research needs.

  8. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A. M.; Pitman, A. J.; Decker, M.; De Kauwe, M. G.; Abramowitz, G.; Kala, J.; Wang, Y.-P.

    2015-10-01

    Surface fluxes from land surface models (LSM) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat flux simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual time scales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux are explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  9. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be reluctant to this idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  10. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGESBeta

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying -Ping

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  11. Unsupervised image processing scheme for transistor photon emission analysis in order to identify defect location

    NASA Astrophysics Data System (ADS)

    Chef, Samuel; Jacquir, Sabir; Sanchez, Kevin; Perdu, Philippe; Binczak, Stéphane

    2015-01-01

    The study of the light emitted by transistors in a highly scaled complementary metal oxide semiconductor (CMOS) integrated circuit (IC) has become a key method with which to analyze faulty devices, track the failure root cause, and obtain candidate locations at which to start the physical analysis. Localizing defective areas in an IC serves as a reliability check and gives the designer information with which to improve the IC design. The scaling of CMOS leads to an increase in the number of active nodes inside the acquisition area and to greater differences between the spots' intensities. In order to improve the identification of all of the photon emission spots, we introduce an unsupervised processing scheme. It is based on iterative thresholding decomposition (ITD) and mathematical morphology operations. It unveils all of the emission spots and removes most of the noise from the database thanks to a succession of image-processing steps. The ITD approach, based on five thresholding methods, is tested on 15 photon emission databases (10 real cases and 5 simulated cases). The localization of photon emission areas is compared to expert identification, and the estimation quality is quantified using the object consistency error.
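
    A minimal sketch of the thresholding-plus-morphology idea on a synthetic image; the real ITD scheme iterates several thresholding methods, whereas this sketch applies a single Otsu pass followed by a morphological opening and connected-component labeling.

```python
# One thresholding/morphology pass for emission-spot detection, on a
# synthetic grayscale image standing in for real photon-emission data.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk
from skimage.measure import label

rng = np.random.default_rng(2)
img = rng.random((128, 128))
img[40:44, 60:64] += 3.0                  # synthetic emission spot
img[90:93, 20:23] += 2.0                  # dimmer synthetic spot

mask = img > threshold_otsu(img)          # global threshold
mask = opening(mask, disk(1))             # morphological noise removal
spots = label(mask)                       # connected emission regions
print(f"{spots.max()} candidate emission spots")
```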

  12. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.
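
    Evaluating a model specifically during rainfall deficits, as above, amounts to conditioning standard metrics on deficit periods; a minimal sketch with synthetic daily data (the deficit definition and all numbers are illustrative):

```python
# Conditional evaluation of simulated vs. observed latent heat (LE),
# restricted to seasonal-scale rainfall-deficit periods.
import numpy as np

rng = np.random.default_rng(3)
precip = rng.gamma(0.8, 2.0, size=365)          # hypothetical daily rain (mm)
obs_le = 80 + 40 * np.sin(np.arange(365) / 58.0) + rng.normal(0, 5, 365)
mod_le = obs_le * 0.7 + rng.normal(0, 5, 365)   # model underestimates LE

# Flag seasonal-scale deficits: 90-day mean rainfall below its 20th percentile.
window = np.convolve(precip, np.ones(90) / 90, mode="same")
deficit = window < np.percentile(window, 20)

for name, mask in [("all days", slice(None)), ("rainfall deficits", deficit)]:
    rmse = np.sqrt(np.mean((mod_le[mask] - obs_le[mask]) ** 2))
    bias = np.mean(mod_le[mask] - obs_le[mask])
    print(f"{name}: RMSE={rmse:.1f} W/m2, bias={bias:.1f} W/m2")
```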

  13. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…
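
    A minimal sketch of the exercise's core computation, with hypothetical home-run rates: control limits are set from a baseline era, and later seasons are flagged when they fall outside them. (A full individuals chart would estimate sigma from the moving range rather than the sample standard deviation.)

```python
# Individuals-style control chart with 3-sigma limits from a baseline era;
# all season rates and years are illustrative, not real MLB statistics.
import numpy as np

rates = np.array([1.44, 1.51, 1.47, 1.56, 1.78, 1.95, 2.08, 2.14, 2.05])
baseline = rates[:4]                       # pre-1993 seasons (hypothetical)
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for year, r in zip(range(1989, 1998), rates):
    flag = "OUT" if (r > ucl or r < lcl) else "in control"
    print(f"{year}: {r:.2f} home runs/game ... {flag}")
```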

  14. ESL Teachers' Perceptions of the Process for Identifying Adolescent Latino English Language Learners with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Ferlis, Emily C.

    2012-01-01

    This dissertation examines the question "how do ESL teachers perceive the prereferral process for identifying adolescent Latino English language learners with specific learning disabilities?" The study fits within the Latino Critical Race Theory framework and employs an interpretive phenomenological qualitative research approach.…

  15. Joint-specific DNA methylation and transcriptome signatures in rheumatoid arthritis identify distinct pathogenic processes.

    PubMed

    Ai, Rizi; Hammaker, Deepa; Boyle, David L; Morgan, Rachel; Walsh, Alice M; Fan, Shicai; Firestein, Gary S; Wang, Wei

    2016-01-01

    Stratifying patients on the basis of molecular signatures could facilitate development of therapeutics that target pathways specific to a particular disease or tissue location. Previous studies suggest that pathogenesis of rheumatoid arthritis (RA) is similar in all affected joints. Here we show that distinct DNA methylation and transcriptome signatures not only discriminate RA fibroblast-like synoviocytes (FLS) from osteoarthritis FLS, but also distinguish RA FLS isolated from knees and hips. Using genome-wide methods, we show differences between RA knee and hip FLS in the methylation of genes encoding biological pathways, such as IL-6 signalling via JAK-STAT pathway. Furthermore, differentially expressed genes are identified between knee and hip FLS using RNA-sequencing. Double-evidenced genes that are both differentially methylated and expressed include multiple HOX genes. Joint-specific DNA signatures suggest that RA disease mechanisms might vary from joint to joint, thus potentially explaining some of the diversity of drug responses in RA patients. PMID:27282753

  16. Joint-specific DNA methylation and transcriptome signatures in rheumatoid arthritis identify distinct pathogenic processes

    PubMed Central

    Ai, Rizi; Hammaker, Deepa; Boyle, David L.; Morgan, Rachel; Walsh, Alice M.; Fan, Shicai; Firestein, Gary S.; Wang, Wei

    2016-01-01

    Stratifying patients on the basis of molecular signatures could facilitate development of therapeutics that target pathways specific to a particular disease or tissue location. Previous studies suggest that pathogenesis of rheumatoid arthritis (RA) is similar in all affected joints. Here we show that distinct DNA methylation and transcriptome signatures not only discriminate RA fibroblast-like synoviocytes (FLS) from osteoarthritis FLS, but also distinguish RA FLS isolated from knees and hips. Using genome-wide methods, we show differences between RA knee and hip FLS in the methylation of genes encoding biological pathways, such as IL-6 signalling via JAK-STAT pathway. Furthermore, differentially expressed genes are identified between knee and hip FLS using RNA-sequencing. Double-evidenced genes that are both differentially methylated and expressed include multiple HOX genes. Joint-specific DNA signatures suggest that RA disease mechanisms might vary from joint to joint, thus potentially explaining some of the diversity of drug responses in RA patients. PMID:27282753
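
    The "double-evidenced" selection above reduces to intersecting two significance screens; a minimal sketch with hypothetical FDR tables (gene names and thresholds are illustrative):

```python
# Keep genes that pass both the differential-methylation and the
# differential-expression screens; all values here are made up.
import pandas as pd

meth = pd.DataFrame({"gene": ["HOXA5", "HOXD10", "IL6R", "STAT3"],
                     "meth_fdr": [0.01, 0.03, 0.20, 0.04]})
expr = pd.DataFrame({"gene": ["HOXA5", "HOXD10", "STAT3", "JAK2"],
                     "expr_fdr": [0.02, 0.01, 0.30, 0.04]})

merged = meth.merge(expr, on="gene")
double = merged[(merged.meth_fdr < 0.05) & (merged.expr_fdr < 0.05)]
print(double.gene.tolist())               # ['HOXA5', 'HOXD10']
```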

  17. Identifying sources and processes influencing nitrogen export to a small stream using dual isotopes of nitrate

    NASA Astrophysics Data System (ADS)

    Lohse, K. A.; Sanderman, J.; Amundson, R.

    2009-12-01

    Interactions between plant and microbial reactions exert strong controls on sources and export of nitrate to headwater streams. Yet quantifying this interaction is challenging due to spatial and temporal changes in these processes. Topography has been hypothesized to play a large role in these processes, yet few studies have coupled measurement of soil nitrogen cycling to hydrologic losses of N. In water-limited environments such as Mediterranean grasslands, we hypothesized that seasonal shifts in runoff mechanisms and flow paths would change stream water sources of nitrate from deep subsoil sources to near-surface sources. In theory, these changes can be quantified using mixing models and dual isotopes of nitrate. We examined the temporal patterns of N stream export using hydrometric methods and dual isotopes of nitrate in a small headwater catchment on the coast of Northern California. A plot of stream water δ15N-nitrate and δ18O-nitrate against the known isotopic values of nitrate in rainwater, fertilizer, and soil N confirmed that the nitrate was primarily microbial nitrate. Plots of δ15N-nitrate against the inverse of nitrate concentration, as well as the log of nitrate concentration, indicated both mixing and fractionation via denitrification. Further analysis of soil water δ15N-nitrate and δ18O-nitrate revealed two denitrification vectors for both surface and subsurface soil waters (slopes of 0.50 ± 0.1) that constrained the stream water δ15N- and δ18O-nitrate values, indicating mixing of two soil water sources. Analysis of mixing models showed shifts in surface and subsurface soil water nitrate sources to stream water, along with progressive denitrification over the course of the season.

  18. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    PubMed Central

    Hurlbert, Allen H.; Stegen, James C.

    2014-01-01

    We use a simulation model to examine four of the most common hypotheses for the latitudinal richness gradient and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The ecological limits hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The speciation rates hypothesis suggests that the latitudinal gradient arises from a gradient in speciation rates. (4) Finally, the tropical stability hypothesis argues that climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance (MRD) and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all produced phylogenies which were more imbalanced than expected, showed a negative relationship between MRD and richness, and diversity-dependence of speciation rate estimates through time. We found that the relationship between speciation rates and latitude could distinguish among these three scenarios, with no relation expected under the ecological limits hypothesis, a negative relationship expected under the speciation rates hypothesis, and a positive relationship expected under the tropical stability hypothesis. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with multiple

  19. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness from the "metabolite" framework by using the novel concept of the "flux-sum," which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of metabolites essential to cell survival, and then "acclimator" metabolites that can control cell growth were discovered. Furthermore, this concept of "metabolite essentiality" should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on the dynamic properties of cellular metabolism.
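
    The flux-sum is commonly formalized as half the total absolute flux through a metabolite at pseudo-steady state; a sketch of the standard definition (with stoichiometric matrix S and flux vector v), not necessarily the speaker's exact formulation:

```latex
% Flux-sum of metabolite i: incoming and outgoing fluxes are equal at
% pseudo-steady state, so half the total absolute flux counts each once.
\Phi_i = \frac{1}{2} \sum_j \lvert S_{ij}\, v_j \rvert
```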

  20. Identifying influential nodes in a wound healing-related network of biological processes using mean first-passage time

    NASA Astrophysics Data System (ADS)

    Arodz, Tomasz; Bonchev, Danail

    2015-02-01

    In this study we offer an approach to network physiology that proceeds from transcriptomic data and uses gene ontology analysis to identify the biological processes most enriched at several critical time points of the wound healing process (days 0, 3 and 7). The top-ranking differentially expressed genes for each process were used to build two networks: one with all proteins regulating the transcription of the selected genes, and a second involving the proteins from the signaling pathways that activate the transcription factors. The information from these networks is used to build a network of the most enriched processes, with undirected links weighted proportionally to the count of shared genes between a pair of processes and directed links weighted by the count of relationships connecting genes from one process to genes from the other. In analyzing the network thus built, we used an approach based on random walks that accounts for the temporal aspects of the spread of a signal in the network (mean first-passage time, MFPT). The MFPT scores allowed identification of the most influential, as well as the most essential, biological processes, which vary with the progress of the healing process. Thus, the most essential process for day 0 was found to be the Wnt-receptor signaling pathway, well known for its crucial role in wound healing, while for day 3 it was the regulation of the NF-kB cascade, essential for matrix remodeling in the wound healing process. The MFPT-based scores correctly reflected the dynamics of the healing process, which is highly concentrated around several processes between day 0 and day 3 and becomes more diffuse at day 7.
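
    A minimal sketch of MFPT scoring for a random walk on a network; the graph here is a hypothetical stand-in for the weighted process network of the study, and nodes are ranked by how quickly the rest of the network reaches them.

```python
# Mean first-passage time (MFPT) via the standard absorbing-chain linear
# system, on a small example graph standing in for the process network.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                 # stand-in for the process network
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)       # random-walk transition matrix

def mfpt_to(target: int) -> np.ndarray:
    """Expected steps to first reach `target` from every other node."""
    idx = [i for i in range(len(P)) if i != target]
    Q = P[np.ix_(idx, idx)]                # chain with the target absorbing
    return np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))

# Rank nodes by how quickly the rest of the network reaches them.
scores = sorted(range(len(P)), key=lambda t: mfpt_to(t).mean())
print("most reachable nodes:", scores[:5])
```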

  1. Semantic Processing to Identify Adverse Drug Event Information from Black Box Warnings

    PubMed Central

    Culbertson, Adam; Fiszman, Marcelo; Shin, Dongwook; Rindflesch, Thomas C.

    2014-01-01

    Adverse drug events account for two million combined injuries, hospitalizations, or deaths each year. Furthermore, there are few comprehensive, up-to-date, and free sources of drug information. Clinical decision support systems may significantly mitigate the number of adverse drug events. However, these systems depend on up-to-date, comprehensive, and codified data to serve as input. The DailyMed website, a resource managed by the FDA and NLM, contains all currently approved drugs. We used a semantic natural language processing approach that successfully extracted information on adverse drug events, at-risk conditions, and susceptible populations from black box warning labels on this site. The precision, recall, and F-score were 94%, 52%, and 0.67 for adverse drug events; 80%, 53%, and 0.64 for conditions; and 95%, 44%, and 0.61 for populations. Overall performance was 90% precision, 51% recall, and 0.65 F-score. The extracted information can be stored in a structured format and may support clinical decision support systems. PMID:25954348
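
    The reported F-scores follow from the balanced F-measure; for the adverse-drug-event figures above, P = 0.94 and R = 0.52 give 2(0.94)(0.52)/(0.94 + 0.52) ≈ 0.67:

```latex
% Balanced F-score combining precision P and recall R.
F_1 = \frac{2 P R}{P + R}
```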

  2. Identifying the processes underpinning anticipation and decision-making in a dynamic time-constrained task.

    PubMed

    Roca, André; Ford, Paul R; McRobert, Allistair P; Mark Williams, A

    2011-08-01

    A novel, representative task was used to examine skill-based differences in the perceptual and cognitive processes underlying performance on a dynamic, externally paced task. Skilled and less skilled soccer players were required to move and interact with life-size action sequences involving 11 versus 11 soccer situations filmed from the perspective of a central defender in soccer. The ability of participants to anticipate the intentions of their opponents and to make decisions about how they should respond was measured across two separate experiments. In Experiment 1, visual search behaviors were examined using an eye-movement registration system. In Experiment 2, retrospective verbal reports of thinking were gathered from a new sample of skilled and less skilled participants. Skilled participants were more accurate than less skilled participants at anticipating the intentions of opponents and in deciding on an appropriate course of action. The skilled players employed a search strategy involving more fixations of shorter duration, in a different sequential order, and toward more disparate and informative locations in the display when compared with their less skilled counterparts. The skilled players generated a greater number of verbal report statements, with a higher proportion of evaluation, prediction, and planning statements than the less skilled players, suggesting that they employed more complex domain-specific memory representations to solve the task. Theoretical, methodological, and practical implications are discussed. PMID:21305386

  3. Isotopic investigations of dissolved organic N in soils identifies N mineralization as a major sink process

    NASA Astrophysics Data System (ADS)

    Wanek, Wolfgang; Prommer, Judith; Hofhansl, Florian

    2016-04-01

    Dissolved organic nitrogen (DON) is a major component of transfer processes in the global nitrogen (N) cycle, contributing to atmospheric N deposition, terrestrial N losses and aquatic N inputs. In terrestrial ecosystems several sources and sinks contribute to belowground DON pools but are as yet hard to quantify. In soils, DON is released by desorption of soil organic N and by microbial lysis. Major losses from the DON pool occur via sorption, hydrological losses and soil N mineralization. Sorption/desorption, lysis and hydrological losses are expected to exhibit no 15N fractionation, therefore allowing different DON sources to be traced. Soil N mineralization of DON has commonly been assumed to have no or only a small isotope effect of between 0-4‰; however, isotope fractionation by N mineralization has rarely been measured and might be larger than anticipated. Depending on the degree of 15N fractionation by soil N mineralization, we would expect DON to become 15N-enriched relative to bulk soil N, and dissolved inorganic N (DIN; ammonium and nitrate) to become 15N-depleted relative to both bulk soil N and DON. Isotopic analyses of soil organic N, DON and DIN might therefore provide insights into the relative contributions of different source and sink processes. This study therefore aimed at a better understanding of the isotopic signatures of DON and its controls in soils. We investigated the concentration and isotopic composition of bulk soil N, DON and DIN in a wide range of sites, covering arable, grassland and forest ecosystems in Austria across an altitudinal transect. The isotopic compositions of ammonium, nitrate and DON were measured in soil extracts after chemical conversion to N2O by purge-and-trap isotope ratio mass spectrometry. We found that δ15N values of DON ranged between -0.4 and 7.6‰, closely tracking the δ15N values of bulk soils. However, DON was 15N-enriched relative to bulk soil N by 1.5±1.3‰ (1 SD), and inorganic N was 15N

  4. Identifying sources and processes influencing nitrogen export to a small stream using dual isotopes of nitrate

    NASA Astrophysics Data System (ADS)

    Lohse, Kathleen A.; Sanderman, Jonathan; Amundson, Ronald

    2013-09-01

    Topography plays a critical role in controlling rates of nitrogen (N) transformation and loss to streams through its effects on reaction and transport, yet few studies have coupled measurements of soil N cycling within a catchment to hydrologic N losses and sources of those losses. We examined the processes controlling temporal patterns of stream N export using hydrometric methods and dual isotopes of nitrate (NO3-) in a small headwater catchment on the coast of Northern California. Soil nitrate pools accumulated in the hollow during the dry summer due to sustained rates of net nitrification and elevated soil moisture, and then contributed to the first flush of NO3- in macropore soil-water and stream water in the winter. Macropore soil-waters had higher concentrations of all forms of N than matrix soil-waters, especially in the hollow. A plot of stream water δ15N versus δ18O values in NO3- indicated that NO3- was primarily derived from nitrification or microbial NO3-. Further analysis revealed a mixing of two microbial NO3- sources combined with seasonal progressive denitrification. Mass balance estimates suggested microbial NO3- was consumed by denitrification when conditions of high NO3-, dissolved organic matter, and soil-water contents converged. Our study is the first to show a mixing of two sources of microbial NO3- and seasonal progressive denitrification using dual isotopes. Our observations suggest that the physical conditions in the convergent hollow are important constraints on stream N chemistry, and that shifts in runoff mechanisms and flow paths control the source and mixing of NO3- from various watershed sources.
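
    The two-end-member mixing underlying the source attribution can be written directly; here A and B label the two microbial NO3- sources (end-member values would come from, e.g., macropore and matrix soil waters), and f is the fraction contributed by source A:

```latex
% Two-end-member isotope mixing and its inversion for the source fraction f.
\delta^{15}\mathrm{N}_{\mathrm{stream}} = f\,\delta^{15}\mathrm{N}_{A} + (1 - f)\,\delta^{15}\mathrm{N}_{B}
\quad\Longrightarrow\quad
f = \frac{\delta^{15}\mathrm{N}_{\mathrm{stream}} - \delta^{15}\mathrm{N}_{B}}{\delta^{15}\mathrm{N}_{A} - \delta^{15}\mathrm{N}_{B}}
```

    Progressive denitrification then appears as residuals from this mixing line climbing along a slope of about 0.5 in δ18O versus δ15N, consistent with the vectors reported in the companion record above.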

  5. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
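
    A minimal sketch of the exact-bound idea on a toy capacitated network; the graph, capacities, and single-arc deletion scope are all illustrative, whereas the study solves a mathematical program per epoch of observed nodal interaction.

```python
# Exact worst-case bound on flow-based robustness under single-arc deletion,
# computed by enumeration on a small hypothetical network.
import networkx as nx

G = nx.DiGraph()
edges = [("s", "a", 4), ("s", "b", 3), ("a", "t", 3), ("b", "t", 4), ("a", "b", 2)]
G.add_weighted_edges_from(edges, weight="capacity")

base = nx.maximum_flow_value(G, "s", "t")
worst = base
for u, v, _ in edges:
    H = G.copy()
    H.remove_edge(u, v)                    # delete one arc and re-solve
    worst = min(worst, nx.maximum_flow_value(H, "s", "t"))
print(f"baseline flow {base}, worst single-arc deletion leaves {worst}")
```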

  6. A Thermodynamic-Based Interpretation of Protein Expression Heterogeneity in Different Glioblastoma Multiforme Tumors Identifies Tumor-Specific Unbalanced Processes.

    PubMed

    Kravchenko-Balasha, Nataly; Johnson, Hannah; White, Forest M; Heath, James R; Levine, R D

    2016-07-01

    We describe a thermodynamically motivated, information-theoretic analysis of proteomic data collected from a series of 8 glioblastoma multiforme (GBM) tumors. GBMs are considered here as prototypes of heterogeneous cancers. That heterogeneity is viewed here as manifesting in different unbalanced biological processes that are associated with thermodynamic-like constraints. The analysis yields a molecular description of a stable steady state that is common across all tumors. It also resolves molecular descriptions of unbalanced processes that are shared by several tumors, such as hyperactivated phosphoprotein signaling networks. Further, it resolves unbalanced processes that provide unique classifiers of tumor subgroups. The results of the theoretical interpretation are compared against those of statistical multivariate methods and are shown to provide a superior level of resolution for identifying unbalanced processes in GBM tumors. The identification of specific constraints for each GBM tumor suggests tumor-specific combination therapies that may reverse this imbalance. PMID:27035264
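
    The analysis referenced here belongs to the surprisal-analysis family, in which the logarithm of the expression matrix is decomposed so that the leading term captures the shared steady state and lower terms capture tumor-specific unbalanced processes. A schematic SVD-based sketch on random data (not the authors' pipeline or data):

        # Surprisal-analysis-style decomposition of ln(expression).
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(1.0, 100.0, size=(50, 8))   # 50 proteins x 8 tumors, random
        lnX = np.log(X)

        U, s, Vt = np.linalg.svd(lnX, full_matrices=False)
        # Term 0 approximates the stable steady state shared by all tumors;
        # terms 1..n are candidate "unbalanced processes", with per-tumor
        # weights lambda_a = s[a] * Vt[a].
        for a in range(3):
            print(f"constraint {a}: tumor weights {np.round(s[a] * Vt[a], 2)}")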

  7. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis and, together with the robust statistics, guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the underlying assumptions of the analysis. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and the order of convergence. Our method produces high-quality results for the well-behaved cases, largely consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances, predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
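
    A toy illustration of the flavor of this approach: estimate the convergence order from every pair of grid levels, admit only estimates satisfying an expert-judgment bound, and summarize with medians rather than means. All numbers are invented; the actual method solves constrained optimization problems over full error models.

        # Median-based convergence-order estimate with an expert bound on p.
        import itertools
        import numpy as np

        h = np.array([0.1, 0.05, 0.025, 0.0125])        # grid spacings
        E = np.array([2.1e-2, 5.3e-3, 1.4e-3, 3.9e-4])  # illustrative error norms

        orders = []
        for i, j in itertools.combinations(range(len(h)), 2):
            p = np.log(E[i] / E[j]) / np.log(h[i] / h[j])
            if 0.5 <= p <= 4.0:      # expert-judgment constraint on admissible p
                orders.append(p)

        print(f"median order = {np.median(orders):.2f} "
              f"(spread {np.percentile(orders, 25):.2f}-{np.percentile(orders, 75):.2f})")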

  8. An Evaluation of a Natural Language Processing Tool for Identifying and Encoding Allergy Information in Emergency Department Clinical Notes

    PubMed Central

    Goss, Foster R.; Plasek, Joseph M.; Lau, Jason J.; Seger, Diane L.; Chang, Frank Y.; Zhou, Li

    2014-01-01

    Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no known allergies, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility of using NLP to extract and encode allergy information from clinical notes. PMID:25954363
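
    For reference, the reported scores are harmonic-mean F-measures; the counts in this sketch are invented, chosen only so the output lands near the 87.6% headline figure:

        # F-measure bookkeeping of the kind behind the reported scores.
        def f_measure(tp, fp, fn):
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        # e.g., allergen-name detection against the 400-note gold standard
        print(f"F = {f_measure(tp=420, fp=55, fn=64):.3f}")   # ~0.876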

  9. Robust automatic target recognition in FLIR imagery

    NASA Astrophysics Data System (ADS)

    Soyman, Yusuf

    2012-05-01

    In this paper, a robust automatic target recognition algorithm for FLIR imagery is proposed. The target is first segmented from the background using a wavelet transform; the segmentation is accomplished by a parametric Gabor wavelet transformation. Invariant features of the segmented target are then extracted via moments. Higher-order moments, while providing better quality for identifying the image, are more sensitive to noise. A trade-off study is therefore performed to select a few moments that provide effective performance. Classification uses a Bayes classifier based on the Mahalanobis distance. Results are assessed based on false alarm rates. The proposed method is shown to be robust against rotations, translations and scale effects. Moreover, it is shown to perform effectively on low-contrast objects in FLIR images. Performance comparisons between GPU and CPU implementations indicate that the GPU offers superior performance.
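
    A minimal sketch of the decision rule named in the abstract, minimum Mahalanobis distance over per-class Gaussian statistics (equivalent to the Bayes rule under equal priors and class-conditional Gaussians); the "moment" features here are synthetic stand-ins, not the paper's features:

        # Minimum-Mahalanobis-distance classification on synthetic features.
        import numpy as np

        rng = np.random.default_rng(1)
        classes = {}
        for label, center in (("target", 2.0), ("clutter", 0.0)):
            feats = rng.normal(loc=center, size=(100, 4))   # stand-in moment features
            mu = feats.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))
            classes[label] = (mu, cov_inv)

        def classify(x):
            def d2(label):
                mu, cov_inv = classes[label]
                diff = x - mu
                return diff @ cov_inv @ diff      # squared Mahalanobis distance
            return min(classes, key=d2)

        print(classify(rng.normal(loc=2.0, size=4)))   # "target" with high probability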

  10. Heterochronic process in hominid evolution: the dental development in 'robust' australopithecines [Hétérochronies dans l'évolution des hominidés. Le développement dentaire des australopithécines «robustes»].

    NASA Astrophysics Data System (ADS)

    Ramirez Rozzi, Fernando V.

    2000-10-01

    Heterochrony is defined as an evolutionary modification in time and in the relative rate of development [6]. Growth (size), development (shape), and age (adult) are the three fundamental factors of ontogeny and must be known to carry out a study of heterochronies. These three factors have been analysed in 24 Plio-Pleistocene hominid molars from Omo, Ethiopia, attributed to A. afarensis and robust australopithecines (A. aethiopicus and A. aff. aethiopicus). Molars were grouped into three chronological periods. The analysis suggests that morphological modifications through time are due to heterochronic processes: a neoteny (A. afarensis - robust australopithecine clade) and a time hypermorphosis (A. aethiopicus - A. aff. aethiopicus).

  11. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers.

    PubMed

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m3/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans. PMID:25910870
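
    A schematic of the trapping-efficiency bookkeeping behind the time-integrated sediment budget; segment IDs, loads, and the 0.1 activity threshold are all hypothetical:

        # Trapping efficiency per analysis segment from a sediment budget.
        def trapping_efficiency(qs_in, qs_out):
            """Fraction of the inflowing sediment load retained in a segment."""
            return (qs_in - qs_out) / qs_in

        segments = [  # (segment id, annual load in, annual load out), t/yr
            ("S1", 1200.0, 1150.0),
            ("S2", 1150.0, 800.0),   # strong trapping: geomorphically active
            ("S3", 800.0, 795.0),
        ]
        for sid, qin, qout in segments:
            te = trapping_efficiency(qin, qout)
            state = "active (aggrading)" if te > 0.1 else "inactive/stable"
            print(f"{sid}: TE={te:.2f} -> {state}")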

  12. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers

    NASA Astrophysics Data System (ADS)

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m3/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans.

  13. Reasoning about anomalies: a study of the analytical process of detecting and identifying anomalous behavior in maritime traffic data

    NASA Astrophysics Data System (ADS)

    Riveiro, Maria; Falkman, Göran; Ziemke, Tom; Kronhamn, Thomas

    2009-05-01

    The goal of visual analytics tools is to support the analytical reasoning process, maximizing human perceptual, understanding and reasoning capabilities in complex and dynamic situations. Visual analytics software must be built upon an understanding of the reasoning process, since it must provide appropriate interactions that allow a true discourse with the information. In order to deepen our understanding of the human analytical process and guide developers in the creation of more efficient anomaly detection systems, this paper investigates the human analytical process of detecting and identifying anomalous behavior in maritime traffic data. The main focus of this work is to capture the entire analysis process that an analyst goes through, from the raw data to the detection and identification of anomalous behavior. Three different sources are used in this study: a literature survey of the science of analytical reasoning, requirements specified by experts from organizations with an interest in port security, and user field studies conducted in different marine surveillance control centers. Furthermore, this study elaborates on how to support the human analytical process using data mining, visualization and interaction methods. The contribution of this paper is twofold: (1) within visual analytics, it contributes to the science of analytical reasoning with a practical understanding of users' tasks in order to develop a taxonomy of interactions that support the analytical reasoning process, and (2) within anomaly detection, it facilitates the design of future anomaly detection systems for cases where fully automatic approaches are not viable and human participation is needed.

  14. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
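
    The two-stage structure, a fast candidate poll followed by a slower exact comparison, can be sketched as follows; the binary fingerprints, prefix hashing, and Hamming matching are placeholder stand-ins, not the authors' maximally-stable-volume or SIFT computations:

        # Two-stage matching: cheap prefix poll, then full-distance verification.
        import numpy as np

        rng = np.random.default_rng(2)
        db = {i: rng.integers(0, 2, 64) for i in range(1000)}  # binary fingerprints

        def coarse_candidates(query, k=10):
            # Stage 1: score only a cheap 16-bit prefix, keep the k closest.
            scores = {i: np.count_nonzero(db[i][:16] != query[:16]) for i in db}
            return sorted(scores, key=scores.get)[:k]

        def best_match(query):
            # Stage 2: full Hamming distance only on the shortlisted candidates.
            cands = coarse_candidates(query)
            return min(cands, key=lambda i: np.count_nonzero(db[i] != query))

        query = db[42].copy(); query[::16] ^= 1   # corrupt a few bits
        print(best_match(query))                  # very likely 42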

  15. Formosa Plastics Corporation: Plant-Wide Assessment of Texas Plant Identifies Opportunities for Improving Process Efficiency and Reducing Energy Costs

    SciTech Connect

    2005-01-01

    At Formosa Plastics Corporation's plant in Point Comfort, Texas, a plant-wide assessment team analyzed process energy requirements, reviewed new technologies for applicability, and found ways to improve the plant's energy efficiency. The assessment team identified the energy requirements of each process and compared actual energy consumption with theoretical process requirements. The team estimated that total annual energy savings would be about 115,000 MBtu for natural gas and nearly 14 million kWh for electricity if the plant makes several improvements, which include upgrading the gas compressor impeller, improving the vent blower system, and recovering steam condensate for reuse. Total annual cost savings could be $1.5 million. The U.S. Department of Energy's Industrial Technologies Program cosponsored this assessment.

  16. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. First, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. A hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was then built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that the IFLNN-PLS can significantly improve the prediction performance. PMID:26685746
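
    A sketch of the IFLNN-PLS recipe as described: expand the inputs with a functional-link basis, keep expanded variables by output correlation, and fit the weights with partial least squares instead of backpropagation. The trigonometric basis, the 0.1 threshold, and the synthetic data are assumptions; scikit-learn's PLSRegression stands in for the paper's PLS step:

        # Functional-link expansion + correlation selection + PLS weights.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        X = rng.uniform(-1, 1, size=(200, 3))
        y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

        # Functional-link expansion (a common choice; the paper's basis may differ)
        Z = np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X), X ** 2])

        # Keep only expanded variables well correlated with the output
        r = np.array([abs(np.corrcoef(Z[:, j], y)[0, 1]) for j in range(Z.shape[1])])
        Z_sel = Z[:, r > 0.1]      # 0.1 is an arbitrary illustrative threshold

        model = PLSRegression(n_components=min(5, Z_sel.shape[1])).fit(Z_sel, y)
        print("R^2 on training data:", round(model.score(Z_sel, y), 3))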

  17. Testing the realism of model structures to identify karst system processes using water quality and quantity signatures

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Wagener, T.; Rimmer, A.; Lange, J.; Brielmann, H.; Weiler, M.

    2012-12-01

    Many hydrological systems exhibit complex subsurface flow and storage behavior. Runoff observations often provide insufficient information for unique process identification of complex hydrologic systems. Quantitative modeling of water and solute fluxes presents a potentially more powerful avenue to explore whether hypotheses about system functioning can be rejected or conditionally accepted. In this study we developed and tested four hydrological model structures, based on different hypotheses about subsurface flow and storage behavior, to identify the functioning of a large Mediterranean karst system. Using eight different system signatures, i.e. indicators of particular hydrodynamic and hydrochemical characteristics of the karst system, we applied a novel model evaluation strategy to identify the best conceptual model representation of the karst system. Our approach consists of three stages: (1) evaluation of model performance with respect to system signatures using automatic calibration, (2) evaluation of parameter identifiability using Sobol's sensitivity analysis, and (3) evaluation of model plausibility by combining the results of stages (1) and (2). These evaluation stages eliminated three model structures and led to a unique hypothesis about the functioning of the studied karst system. We used the estimated parameter values to further quantify subsurface processes. The remaining model is able to simultaneously provide high performances for all eight system signatures. Our approach demonstrates the benefits of interpreting different tracers in a hydrologically meaningful way during model evaluation and identification.

  18. Testing the realism of model structures to identify karst system processes using water quality and quantity signatures

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Wagener, T.; Rimmer, A.; Lange, J.; Brielmann, H.; Weiler, M.

    2013-06-01

    Many hydrological systems exhibit complex subsurface flow and storage behavior. Runoff observations often provide insufficient information for unique process identification. Quantitative modeling of water and solute fluxes presents a potentially more powerful avenue to explore whether hypotheses about system functioning can be rejected or conditionally accepted. In this study we developed and tested four hydrological model structures, based on different hypotheses about subsurface flow and storage behavior, to identify the functioning of a large Mediterranean karst system. Using eight different system signatures, i.e., indicators of particular hydrodynamic and hydrochemical characteristics of the karst system, we applied a novel model evaluation strategy to identify the best conceptual model representation of the karst system within our set of possible system representations. Our approach to testing model realism consists of three stages: (1) evaluation of model performance with respect to system signatures using automatic calibration, (2) evaluation of parameter identifiability using Sobol's sensitivity analysis, and (3) evaluation of model plausibility by combining the results of stages (1) and (2). These evaluation stages eliminated three out of four model structures and led to a unique hypothesis about the functioning of the studied karst system. We used the estimated parameter values to further quantify subsurface processes. The chosen model is able to simultaneously provide high performances for eight system signatures with realistic parameter values. Our approach demonstrates the benefits of interpreting different tracers in a hydrologically meaningful way during model evaluation and identification.
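
    Stage (2) of this strategy is standard variance-based sensitivity analysis; a compact sketch using the third-party SALib package on an invented three-parameter stand-in for a karst model (parameter names, bounds, and the algebraic response are all hypothetical):

        # Sobol sensitivity indices for a toy stand-in model via SALib.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["storage_coeff", "recession_k", "frac_fast_flow"],
            "bounds": [[0.1, 1.0], [0.01, 0.5], [0.0, 1.0]],
        }
        X = saltelli.sample(problem, 256)
        # Stand-in response; a real study would run the karst model here.
        Y = X[:, 0] * np.exp(-X[:, 1]) + 0.5 * X[:, 2]
        Si = sobol.analyze(problem, Y)
        print(dict(zip(problem["names"], np.round(Si["S1"], 2))))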

  19. Robust indexing for automatic data collection

    SciTech Connect

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  20. Using Analytic Hierarchy Process to Identify the Nurses with High Stress-Coping Capability: Model and Application

    PubMed Central

    F. C. PAN, Frank

    2014-01-01

    Abstract Background Nurses have long been relied upon as the major labor force in hospitals. Because their jobs are complicated and highly labor-intensive, multiple pressures from different sources are inevitable. Success in identifying stresses and coping with them accordingly is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capability. Methods A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as experts in the AHP to produce the weights of each criterion. Six nurses from two regional hospitals were then selected to test the model. Results Four factors were identified at the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of this model. Conclusion The study suggests including these key determinants in human-resource management practice, restructuring the hospital's organization, and creating an employee-support system as well as a family-friendly working climate. The research provides evidence that supports the usefulness of AHP in identifying the key factors that help stabilize a nursing team. PMID:25988086
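
    The AHP arithmetic underlying such weights is compact: extract the principal eigenvector of the pairwise-comparison matrix and check Saaty's consistency ratio. In this sketch only "family" and "personal attributes" are named in the abstract; the other two criteria labels and every judgment value are invented:

        # AHP priority weights and consistency ratio for one comparison matrix.
        import numpy as np

        criteria = ["family", "personal attributes", "work", "organization"]
        A = np.array([[1.0, 2.0, 4.0, 4.0],
                      [0.5, 1.0, 3.0, 3.0],
                      [0.25, 1/3, 1.0, 1.0],
                      [0.25, 1/3, 1.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real); w /= w.sum()   # priority weights

        n = len(A)
        ci = (eigvals.real[k] - n) / (n - 1)
        cr = ci / 0.90                 # random index RI = 0.90 for n = 4 (Saaty)
        print(dict(zip(criteria, np.round(w, 3))), f"CR={cr:.3f}")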

  1. Genes Involved in the Osteoarthritis Process Identified through Genome Wide Expression Analysis in Articular Cartilage; the RAAK Study

    PubMed Central

    Bovée, Judith V. M. G.; Bomer, Nils; van der Breggen, Ruud; Lakenberg, Nico; Keurentjes, J. Christiaan; Goeman, Jelle J.; Slagboom, P. Eline; Nelissen, Rob G. H. H.; Bos, Steffan D.; Meulenbelt, Ingrid

    2014-01-01

    Objective To identify gene expression profiles associated with OA processes in articular cartilage and to determine pathways changing during the disease process. Methods Genome-wide gene expression was determined in paired samples of OA-affected and preserved cartilage of the same joint using microarray analysis for 33 patients of the RAAK study. Results were replicated in independent samples by RT-qPCR and immunohistochemistry. Profiles were analyzed with the online analysis tools DAVID and STRING to identify enrichment for specific pathways and protein-protein interactions. Results Among the 1717 genes that were significantly differentially expressed between OA-affected and preserved cartilage, we found significant enrichment for genes involved in skeletal development (e.g. TNFRSF11B and FRZB). Several inflammatory genes such as CD55, PTGES and TNFAIP6, previously identified in within-joint analyses as well as in analyses comparing preserved cartilage from OA-affected joints versus healthy cartilage, were also among the top genes. Of note was the high up-regulation of NGF in OA cartilage. RT-qPCR confirmed differential expression for 18 out of 19 genes with expression changes of 2-fold or higher, and immunohistochemistry of selected genes showed a concordant change in protein expression. Most of these changes were associated with OA severity (Mankin score) but were independent of joint site or sex. Conclusion We provide further insights into the ongoing OA pathophysiological processes in cartilage, in particular into differences between macroscopically intact cartilage and OA-affected cartilage, which seem relatively consistent and independent of sex or joint. We advocate that the development of treatment could benefit from focusing on these similarities in gene expression changes and/or pathways. PMID:25054223

  2. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed, as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is the full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second step is verifying that the process is robust, meaning that it yields repeatable results. In order to compensate effectively, a minimum clamping concept is defined. Once these preconditions are fulfilled, the tools can be compensated.

  3. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  4. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering about 20 residential product plans in total. As of January 2012, there are 115 energy providers in Texas offering over 300 residential product plans to customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision-making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision-making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meet their needs. To gather customer data, a survey tool has been developed for customers to complete the pairwise comparison process. Results for the Incomplete AHP and AHP methods are compared to determine whether the Incomplete AHP method is as accurate as, but more efficient than, the traditional AHP method.

  5. Robustness in multicellular systems

    NASA Astrophysics Data System (ADS)

    Xavier, Joao

    2011-03-01

    Cells and organisms cope with the task of maintaining their phenotypes in the face of numerous challenges. Much attention has recently been paid to questions of how cells control molecular processes to ensure robustness. However, many biological functions are multicellular and depend on interactions, both physical and chemical, between cells. We use a combination of mathematical modeling and molecular biology experiments to investigate the features that convey robustness to multicellular systems. Cell populations must react to external perturbations by sensing environmental cues and acting coordinately in response. At the same time, they face a major challenge: the emergence of conflict from within. Multicellular traits are prone to cells with exploitative phenotypes that do not contribute to shared resources yet benefit from them. This is true in populations of single-cell organisms that have social lifestyles, where conflict can lead to the emergence of social ``cheaters,'' as well as in multicellular organisms, where conflict can lead to the evolution of cancer. I will describe features that diverse multicellular systems can have to eliminate potential conflicts as well as external perturbations.

  6. A cross-sectional study to identify organisational processes associated with nurse-reported quality and patient safety

    PubMed Central

    Tvedt, Christine; Sjetne, Ingeborg Strømseng; Helgeland, Jon; Bukholm, Geir

    2012-01-01

    Objectives The purpose of this study was to identify organisational processes and structures that are associated with nurse-reported patient safety and quality of nursing. Design This is an observational cross-sectional study using survey methods. Setting Respondents from 31 Norwegian hospitals with more than 85 beds were included in the survey. Participants All registered nurses working in direct patient care in a position of 20% or more were invited to answer the survey. In this study, 3618 nurses from surgical and medical wards responded (response rate 58.9%). Nurses' practice environment was defined as organisational processes and measured by the Nursing Work Index Revised and items from the Hospital Survey on Patient Safety Culture. Outcome measures Nurses' assessments of patient safety, quality of nursing, confidence in how their patients manage after discharge and frequency of adverse events were used as outcome measures. Results Quality system, nurse-physician relation, patient safety management and staff adequacy were process measures associated with nurse-reported work-related and patient-related outcomes, but we found no associations with nurse participation, education and career, and ward leadership. Most organisational structures were non-significant in the multilevel model except for nurses' affiliations to medical department and hospital type. Conclusions Organisational structures may have minor impact on how nurses perceive work-related and patient-related outcomes, but the findings in this study indicate that there is a considerable potential to address organisational design in improvement of patient safety and quality of care. PMID:23263021

  7. Exploiting Cloud Radar Doppler Spectra of Mixed-Phase Clouds during ACCEPT Field Experiment to Identify Microphysical Processes

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Myagkov, A.; Seifert, P.; Buehl, J.

    2015-12-01

    Cloud radar Doppler spectra offer much information about cloud processes. By analyzing millimeter-wavelength radar Doppler spectra from cloud top to cloud base in mixed-phase clouds in which supercooled liquid layers are present, we try to tell the microphysical evolution story of the particles present by disentangling the contributions of the solid and liquid particles to the total radar returns. Instead of considering vertical profiles, dynamical effects are taken into account by following the particle population evolution along slanted paths which are caused by horizontal advection of the cloud. The goal is to identify regions in which different microphysical processes such as new particle formation (nucleation), water vapor deposition, aggregation, riming, or sublimation occur. Cloud radar measurements are supplemented by Doppler lidar and Raman lidar observations as well as observations with MWR, wind profiler, and radiosondes. The presence of supercooled liquid layers is identified by positive liquid water paths in MWR measurements; the vertical location of liquid layers (in non-raining systems and below lidar extinction) is derived from regions of high backscatter and low depolarization in Raman lidar observations. In collocated cloud radar measurements, we try to identify cloud phase in the cloud radar Doppler spectrum via the location of the Doppler peak(s), the existence of multimodalities, or the spectral skewness. Additionally, within the supercooled liquid layers, the radar-identified liquid droplets are used as air motion tracers to correct the radar Doppler spectrum for the vertical air motion w. These radar-derived estimates of w are validated against independent estimates of w from collocated Doppler lidar measurements. A 35 GHz vertically pointing cloud Doppler radar (METEK MIRA-35) in linear depolarization (LDR) mode is used. The data are from the deployment of the Leipzig Aerosol and Cloud Remote Observations System (LACROS) during the Analysis of the Composition of Clouds with Extended Polarization Techniques (ACCEPT) field experiment.

  8. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  9. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    ERIC Educational Resources Information Center

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  10. Acetylome study in mouse adipocytes identifies targets of SIRT1 deacetylation in chromatin organization and RNA processing.

    PubMed

    Kim, Sun-Yee; Sim, Choon Kiat; Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2016-05-15

    SIRT1 is a key protein deacetylase that regulates cellular metabolism through lysine deacetylation on both histones and non-histone proteins. Lysine acetylation is a widespread post-translational modification found on many regulatory proteins and plays an essential role in cell signaling, transcription and metabolism. In mice, SIRT1 has known protective functions under a high-fat diet, but the acetylome regulated by SIRT1 in adipocytes is not completely understood. Here we conducted acetylome analyses in murine adipocytes treated with small-molecule modulators that inhibit or activate the deacetylase activity of SIRT1. We identified a total of 302 acetylated peptides from 78 proteins in this study. From the list of potential SIRT1 targets, we selected seven candidates and further verified that six of them can be deacetylated by SIRT1 in vitro. Among them, half of the SIRT1 targets are involved in regulating chromatin structure and the other half are involved in RNA processing. Our results provide a resource for further SIRT1 target validation in fat cells and suggest a potential role of SIRT1 in the regulation of chromatin structure and RNA processing, which may possibly extend to other cell types as well. PMID:27021582