Science.gov

Sample records for identifying robust process

  1. Robust regulation of anaerobic digestion processes.

    PubMed

    Mailleret, L; Bernard, O; Steyer, J P

    2003-01-01

    This paper deals with the problem of controlling anaerobic digestion processes. A two-step (i.e. acidogenesis-methanization) mass balance model is considered for a 1 m3 fixed bed digester treating industrial wine distillery wastewater. The control law aims at regulating the organic pollution level while avoiding washout of biomass. To this end, a simple output feedback controller is considered which regulates a variable strongly related to the Chemical Oxygen Demand (COD). Numerical simulations assuming noisy measurements first illustrate the robustness of this control procedure. Then, the regulating procedure is implemented on the considered anaerobic digestion process in order to validate and demonstrate its efficiency in real-life experiments.
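
    The abstract describes output feedback of a COD-related variable acting on the dilution rate. The sketch below illustrates that idea on a deliberately simplified one-step chemostat model with an integral output-feedback law; the model structure, parameter values, and feedback gain are illustrative assumptions, not the authors' two-step model or exact control law.

      import numpy as np

      def mu(S, mu_max=0.35, Ks=7.1):
          """Monod specific growth rate [1/day] (hypothetical parameters)."""
          return mu_max * S / (Ks + S)

      def simulate(T=80.0, dt=0.01, S_in=40.0, S_ref=5.0, Ki=0.005,
                   k=20.0, alpha=0.5, noise=0.05, seed=1):
          rng = np.random.default_rng(seed)
          S, X, D = 30.0, 0.5, 0.1          # substrate (COD), biomass, dilution rate
          for _ in range(int(T / dt)):
              y = S * (1.0 + noise * rng.standard_normal())      # noisy COD sensor
              D = min(max(D + Ki * (S_ref - y) * dt, 0.0), 1.0)  # output feedback
              dS = D * (S_in - S) - k * mu(S) * X                # substrate balance
              dX = (mu(S) - alpha * D) * X                       # biomass balance
              S, X = S + dS * dt, X + dX * dt
          return S, X, D

      S, X, D = simulate()
      print(f"final COD {S:.2f} g/L, biomass {X:.2f} g/L, dilution {D:.3f} 1/d")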

  2. The 'Robust' roster: exploring the nurse rostering process.

    PubMed

    Drake, Robert G

    2014-09-01

    To identify and explore the relationships between stages of the rostering process and the robustness of the worked roster. Once published, a nurse roster is often subject to many changes. However, post-approval changes and their implications are rarely examined. Consequently, there is little evidence to determine whether a 'worked' roster was safe, efficient or fair. Electronic rostering systems provide greater transparency of the rostering process, allowing post-approval changes to be examined more thoroughly. Using quantitative data, this study compares the outcomes from different stages of the roster process with the shifts breaking roster rules. This study covered the period November 2009-January 2013 and included forty-two roster periods from fifteen wards. For each of the rosters, data specifying the type of shift assignment (request, manual and automatic) and the number of shifts changed after approval (response variables) were captured. Linear regression analysis was then used to identify and explore the relationships between these response variables and the number of shifts breaking rules. Roster robustness is unaffected by the number of staff requests; yet how shifts are assigned before approval and the number of post-approval changes have a marked effect on the robustness of the roster. Roster 'robustness' is determined by the quality of the approved roster and by subsequent post-approval demand- and supply-driven changes. Despite evidence that e-rostering can improve roster robustness, many Ward Managers prefer to roster manually. On some wards, rosters are approved regardless of the number of rule breakages occurring. © 2014 John Wiley & Sons Ltd.

  3. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environmental fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as the need for a robustness requirement.

  4. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    Conventional prediction of milling stability has been studied extensively under the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process on the shop floor, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking the uncertain parameters of the milling system into account. The time finite element method, a milling stability analysis technique, is adopted as the conventional deterministic model. The uncertain dynamics parameters are handled with a non-probabilistic model, in which the uncertain parameters are assumed to be bounded and no probability density functions are needed. By doing so, an interval stability lobe diagram, instead of a deterministic one, is obtained, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from variations in the modal parameters of the spindle-tool system and in the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and obtains satisfactory bounds on the lobe diagrams. The proposed method helps practitioners on the shop floor make decisions on machining parameter selection.
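
    As a concrete illustration of interval (upper/lower) stability bounds, the sketch below computes lobe diagrams for a single-mode regenerative chatter model, a simple stand-in for the paper's time finite element formulation, and takes the pointwise envelope over all corners of bounded (non-probabilistic) intervals on the modal parameters and cutting coefficient. All numerical values are hypothetical.

      import numpy as np
      from itertools import product

      def lobes(n_grid, wn, zeta, k_s, Kf, teeth=4, n_lobes=8):
          """Lower envelope of stability lobes a_lim(n) on a spindle-speed grid."""
          w = np.linspace(wn * 1.001, 3.0 * wn, 4000)        # chatter freq [rad/s]
          r = w / wn
          G = 1.0 / (k_s * (1.0 - r**2 + 2.0j * zeta * r))   # single-mode FRF
          mask = G.real < 0.0                                # chatter-prone region
          w, G = w[mask], G[mask]
          a = -1.0 / (2.0 * Kf * G.real)                     # critical depth [m]
          psi = np.arctan2(G.imag, G.real)                   # FRF phase in (-pi, -pi/2)
          env = np.full_like(n_grid, np.inf)
          for k in range(n_lobes):
              T = (3.0 * np.pi + 2.0 * psi + 2.0 * np.pi * k) / w  # tooth period [s]
              n = 60.0 / (teeth * T)                               # spindle speed [rpm]
              env = np.minimum(env, np.interp(n_grid, n, a))
          return env

      n_grid = np.linspace(2000.0, 12000.0, 500)
      nominal = dict(wn=2 * np.pi * 800.0, zeta=0.03, k_s=2.0e7, Kf=6.0e8)
      lo = np.full_like(n_grid, np.inf)
      hi = np.zeros_like(n_grid)
      for signs in product([-1.0, 1.0], repeat=4):           # 2^4 interval corners
          pert = {key: val * (1.0 + 0.10 * s)                # +/-10% parameter bounds
                  for (key, val), s in zip(nominal.items(), signs)}
          env = lobes(n_grid, **pert)
          lo, hi = np.minimum(lo, env), np.maximum(hi, env)
      print("robust (lower-bound) critical depth at 9000 rpm: %.2f mm"
            % (1e3 * lo[np.argmin(np.abs(n_grid - 9000.0))]))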

  5. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes

    PubMed Central

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In molecular biology it is increasingly important to identify, from gene expression data, differentially expressed genes closely correlated with a key biological process. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. The results on simulation data show that our method obtains higher identification accuracies than competitive methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
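
    A minimal sketch of the idea for the special case p = 1, where the Schatten p-norm becomes the nuclear norm and the Lp error term becomes an L1 norm, i.e. classic robust PCA solved by alternating singular-value and entrywise soft-thresholding in an augmented-Lagrangian scheme. The general Schatten-p/Lp objective of the paper needs a dedicated solver; the data and the gene-ranking rule below are synthetic assumptions.

      import numpy as np

      def soft(x, tau):
          """Soft-thresholding (proximal operator of the L1 norm)."""
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      def rpca(X, lam=None, mu=None, iters=300, tol=1e-7):
          """Decompose X ~ L (low rank) + E (sparse outliers), p = 1 case."""
          m, n = X.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          mu = mu or 0.25 * m * n / np.abs(X).sum()
          L = np.zeros_like(X); E = np.zeros_like(X); Y = np.zeros_like(X)
          for _ in range(iters):
              U, s, Vt = np.linalg.svd(X - E + Y / mu, full_matrices=False)
              L = (U * soft(s, 1.0 / mu)) @ Vt        # singular-value thresholding
              E = soft(X - L + Y / mu, lam / mu)      # entrywise soft threshold
              R = X - L - E
              Y += mu * R                             # dual (multiplier) update
              if np.linalg.norm(R) <= tol * np.linalg.norm(X):
                  break
          return L, E

      rng = np.random.default_rng(0)
      genes, samples = 500, 40
      base = rng.standard_normal((genes, 3)) @ rng.standard_normal((3, samples))
      spikes = (rng.random((genes, samples)) < 0.02) * rng.normal(0, 8, (genes, samples))
      L, E = rpca(base + spikes)
      de_genes = np.argsort(-np.abs(E).sum(axis=1))[:20]  # candidate DE genes
      print("top outlier rows (candidate differentially expressed genes):", de_genes[:5])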

  6. Qualitative Robustness for General Stochastic Processes.

    DTIC Science & Technology

    1982-10-01

    Pointwise and weak pointwise robustness are shown to be equivalent, and a necessary and sufficient condition for strong pointwise robustness is given (Proposition 4.1): the sequence (T_n) is strongly pointwise robust at a distribution if, for every ε > 0, there exists δ > 0 such that the corresponding deviation probability of the statistic S_n is suitably bounded. The condition is then verified for the case in which the underlying process S is stationary and ergodic. [Only these fragments of the OCR-damaged source abstract are recoverable.]

  7. Identifying robust and sensitive frequency bands for interrogating neural oscillations.

    PubMed

    Shackman, Alexander J; McMenamin, Brenton W; Maxwell, Jeffrey S; Greischar, Lawrence L; Davidson, Richard J

    2010-07-15

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic ("resting" or "spontaneous") electroencephalogram (EEG) into five bands: delta (1-5Hz), alpha-low (6-9Hz), alpha-high (10-11Hz), beta (12-19Hz), and gamma (>21Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains.
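
    For reference, mean power density in the five bands reported above can be computed from a Welch PSD as in the sketch below; the signal is synthetic, and the upper gamma edge (45 Hz) is an assumption, since the abstract only gives ">21Hz".

      import numpy as np
      from scipy.signal import welch

      BANDS = {"delta": (1, 5), "alpha-low": (6, 9), "alpha-high": (10, 11),
               "beta": (12, 19), "gamma": (21, 45)}   # upper gamma edge assumed

      def band_power(x, fs):
          """Mean power density per band from a Welch PSD (4 s windows)."""
          f, pxx = welch(x, fs=fs, nperseg=4 * fs)
          return {name: pxx[(f >= lo) & (f <= hi)].mean()
                  for name, (lo, hi) in BANDS.items()}

      fs = 256
      t = np.arange(60 * fs) / fs                      # one minute of "EEG"
      rng = np.random.default_rng(3)
      eeg = (np.sin(2 * np.pi * 10 * t)                # alpha-high oscillation
             + 0.5 * rng.standard_normal(t.size))      # broadband noise
      for name, p in band_power(eeg, fs).items():
          print(f"{name:>10s}: {p:.3e} V^2/Hz")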

  8. Identifying Robust and Sensitive Frequency Bands for Interrogating Neural Oscillations

    PubMed Central

    Shackman, Alexander J.; McMenamin, Brenton W.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2010-01-01

    Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic (“resting” or “spontaneous”) electroencephalogram (EEG) into five bands: delta (1–5Hz), alpha-low (6–9Hz), alpha-high (10–11Hz), beta (12–19Hz), and gamma (>21Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time-frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains.

  9. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., to perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities, resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability analysis.

  10. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  11. Robust Motion Processing in the Visual Cortex

    NASA Astrophysics Data System (ADS)

    Sederberg, Audrey; Liu, Julia; Kaschube, Matthias

    2009-03-01

    Direction selectivity is an important model system for studying cortical processing. The role of inhibition in models of direction selectivity in the visual cortex is not well understood. We probe the selectivity of an integrate-and-fire neuron driven by a deterministic input current, determined by a temporal-lag model of selectivity, plus noisy background input, first with excitatory inputs only and then with both excitatory and inhibitory inputs. In this model, postsynaptic potentials are fully synchronous for the preferred direction and maximally dispersed in time for the null direction. Further, inhibitory inputs lag excitatory inputs, as observed by Priebe and Ferster (2005). At any level of input strength, the selectivity is weak when only excitatory inputs are considered. The inclusion of inhibition significantly strengthens selectivity, and this selectivity is preserved over a wide range of background noise levels and for short stimulus durations. We conclude that inhibition likely plays an essential role in the mechanism underlying direction selectivity.

  12. Identifying a robust design space for glycosylation during monoclonal antibody production.

    PubMed

    St Amand, Melissa M; Hayes, James; Radhakrishnan, Devesh; Fernandez, Janice; Meyer, Bill; Robinson, Anne S; Ogunnaike, Babatunde A

    2016-09-01

    Glycan distribution has been identified as a "critical quality attribute" for many biopharmaceutical products, including monoclonal antibodies. Consequently, determining quantitatively how process variables affect glycan distribution is important during process development to control antibody glycosylation. In this work, we assess the effect of six bioreactor process variables on the glycan distribution of an IgG1 produced in CHO cells. Our analysis established that glucose and glutamine media concentration, temperature, pH, agitation rate, and dissolved oxygen (DO) had small but significant effects on the relative percentage of various glycans. In addition, we assessed glycosylation enzyme transcript levels and intracellular sugar nucleotide concentrations within the CHO cells to provide a biological explanation for the observed effects on glycan distributions. From these results we identified a robust operating region, or design space, in which the IgG1 could be produced with a consistent glycan distribution. Since our results indicate that perturbations to bioreactor process variables will cause only small (even if significant) changes to the relative percentage of various glycans (<±1.5%), changes that are too small to affect the bioactivity and efficacy of this IgG1 significantly, it follows that the glycan distribution obtained will be consistent even with relatively large variations in bioreactor process variables. However, for therapeutic proteins where bioactivity and efficacy are affected by small changes to the relative percentage of glycans, the same analysis would identify the manipulated variables capable of changing glycan distribution, and hence can be used to implement a glycosylation control strategy. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1149-1162, 2016. © 2016 American Institute of Chemical Engineers.
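
    The design-space idea can be illustrated with a toy response-surface calculation: fit a quadratic model of one glycan's relative-percentage shift over two coded process variables and flag the region where the predicted shift stays within the ±1.5% band quoted above. The design, effect sizes, and noise level below are invented for illustration.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(7)
      X = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)  # 3^2 design
      true = lambda x: 0.8 * x[:, 0] + 0.5 * x[:, 1] ** 2             # assumed effect
      y = true(X) + 0.1 * rng.standard_normal(len(X))                 # glycan shift [%]

      def features(X):       # quadratic model: [1, x1, x2, x1^2, x2^2, x1*x2]
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

      beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)          # OLS fit

      g1, g2 = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
      grid = np.column_stack([g1.ravel(), g2.ravel()])
      pred = features(grid) @ beta
      robust = np.abs(pred) <= 1.5                                    # design-space test
      print(f"{100 * robust.mean():.0f}% of the coded region keeps the shift within 1.5%")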

  13. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.

  14. Modelling System Processes to Support Uncertainty Analysis and Robustness Evaluation

    NASA Technical Reports Server (NTRS)

    Blackwell, Charles; Cuzzi, Jeffrey (Technical Monitor)

    1996-01-01

    The use of advanced systems control techniques in the development of a dynamic system requires effective mathematical modelling. Historically, in some cases the use of a model which reflects only the "expected" or "nominal" information about the system's internal processes has resulted in acceptable system performance, but it should be recognized that for those cases success was due to a combination of the remarkable inherent potential of feedback control for robustness and fortuitously wide margins between system performance requirements and system performance capability. In the case of CELSS (Controlled Ecological Life Support System) development, no such fortuitous combinations should be expected, and the uncertainty in the information on the system's processes will have to be taken into account in order to generate a performance-robust design. In this paper, we develop one perspective on the issue of providing robustness as mathematical modelling impacts it, and present some examples of model formats which serve the needed purpose.

  15. Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study.

    PubMed

    Wu, Jia; Gensheimer, Michael F; Dong, Xinzhe; Rubin, Daniel L; Napel, Sandy; Diehn, Maximilian; Loo, Billy W; Li, Ruijiang

    2016-08-01

    To develop an intratumor partitioning framework for identifying high-risk subregions from (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel intratumor partitioning method was developed, based on a 2-stage clustering process: first, at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP in the entire cohort, with a concordance index (CI) of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer.
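
    A minimal sketch of the two-stage clustering described above: per-patient k-means over-segmentation into superpixels, then population-level agglomerative merging of superpixel features into a few subregion classes. Voxel features here are random placeholders; a real pipeline would use co-registered PET SUV and CT intensity (plus any texture features) per voxel.

      import numpy as np
      from sklearn.cluster import KMeans, AgglomerativeClustering

      rng = np.random.default_rng(42)
      # each "patient" is an array of voxel feature vectors (PET, CT), synthetic here
      patients = [rng.standard_normal((rng.integers(800, 1200), 2)) for _ in range(5)]

      centroids, owner = [], []
      for pid, vox in enumerate(patients):                 # stage 1: per patient
          km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(vox)
          centroids.append(km.cluster_centers_)            # superpixel features
          owner += [pid] * 50
      centroids = np.vstack(centroids)

      merge = AgglomerativeClustering(n_clusters=3).fit(centroids)  # stage 2: cohort
      labels = merge.labels_                               # subregion id per superpixel

      for pid in range(len(patients)):
          counts = np.bincount(labels[np.array(owner) == pid], minlength=3)
          print(f"patient {pid}: superpixels per subregion = {counts}")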

  16. Processing Robustness for A Phenylethynyl Terminated Polyimide Composite

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2004-01-01

    The processability of a phenylethynyl terminated imide resin matrix (designated as PETI-5) composite is investigated. Unidirectional prepregs are made by coating an N-methylpyrrolidone solution of the amide acid oligomer (designated as PETAA-5/NMP) onto unsized IM7 fibers. Two batches of prepregs are used: one is made by NASA in-house, and the other is from an industrial source. The composite processing robustness is investigated with respect to the prepreg shelf life, the effect of B-staging conditions, and the optimal processing window. Prepreg rheology and open hole compression (OHC) strengths are found not to be affected by prolonged (i.e., up to 60 days) ambient storage. Rheological measurements indicate that the PETAA-5/NMP processability is only slightly affected over a wide range of B-stage temperatures from 250 deg C to 300 deg C. The OHC strength values are statistically indistinguishable among laminates consolidated using various B-staging conditions. An optimal processing window is established by means of the response surface methodology. IM7/PETAA-5/NMP prepreg is more sensitive to consolidation temperature than to pressure. A good consolidation is achievable at 371 deg C (700 deg F)/100 psi, which yields a room-temperature (RT) OHC strength of 62 ksi. However, processability declines dramatically at temperatures below 350 deg C (662 deg F), as evidenced by the OHC strength values. The processability of the IM7/LARC(TM) PETI-5 prepreg was found to be robust.

  17. Robust fusion-based processing for military polarimetric imaging systems

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin

    2017-05-01

    Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems, where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods are considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.

  18. Robust design of binary countercurrent adsorption separation processes

    SciTech Connect

    Storti, G.; Mazzotti, M.; Morbidelli, M.; Carra, S.

    1993-03-01

    The separation of a binary mixture, using a third component having intermediate adsorptivity as desorbent, in a four-section countercurrent adsorption separation unit is considered. A procedure for the optimal and robust design of the unit is developed in the framework of Equilibrium Theory, using a model where the adsorption equilibria are described through the constant selectivity stoichiometric model, while mass-transfer resistances and axial mixing are neglected. By requiring that the unit achieve complete separation, it is possible to identify a set of implicit constraints on the operating parameters, that is, the flow rate ratios in the four sections of the unit. From these constraints explicit bounds on the operating parameters are obtained, thus yielding a region in the operating parameter space which can be drawn a priori in terms of the adsorption equilibrium constants and the feed composition. This result provides a very convenient tool to determine both optimal and robust operating conditions. The latter issue is addressed by first analyzing the various possible sources of disturbances, as well as their effect on the separation performance. Next, the criteria for the robust design of the unit are discussed. Finally, these theoretical findings are compared with a set of experimental results obtained in a six-port simulated moving bed adsorption separation unit operated in the vapor phase.
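
    For the linear-isotherm case, Equilibrium Theory gives explicit bounds on the four flow-rate ratios in closed form; the sketch below builds a robust operating point by backing away from the complete-separation boundaries with a safety factor. Linear isotherms with Henry constants H_A > H_B are used here as a simple stand-in for the paper's constant-selectivity stoichiometric model.

      # Hedged sketch: "triangle theory" style bounds for a four-section
      # countercurrent (TMB/SMB) unit with linear isotherms, H_A > H_B.

      def robust_operating_point(H_A, H_B, beta=1.05):
          """Flow-rate ratios m1..m4 for complete separation, with margin beta."""
          assert H_A > H_B > 0 and beta >= 1.0
          m1 = beta * H_A                 # section 1: fully regenerate the solid
          m4 = H_B / beta                 # section 4: fully regenerate the desorbent
          # sections 2 and 3: stay strictly inside the (m2, m3) separation region
          m2 = H_B * beta
          m3 = H_A / beta
          if not (H_B <= m2 < m3 <= H_A):
              raise ValueError("safety factor too large for this selectivity")
          return m1, m2, m3, m4

      print(robust_operating_point(H_A=2.5, H_B=1.2, beta=1.08))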

  19. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    NASA Astrophysics Data System (ADS)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    The Pearl Harbor aquifer is the most important freshwater resource on Oahu (Hawaii, U.S.A.), providing water to nearly half a million people. Recent studies show that current water use is reaching or exceeding sustainable yield. Climate change and increasing resident and tourist populations are predicted to further stress the aquifer. The island has lost huge tracts of freshwater and estuarine wetlands since human settlement; the dependence of many endemic, endangered species on these wetlands, as well as ecosystem benefits from wetlands, links humans and wildlife through water management. After the collapse of the sugar industry on Oahu (mid-1990s), the Waiahole ditch, a massive stream diversion bringing water from the island's windward to the leeward side, became a hotly disputed resource. Commercial interests and traditional farmers have clashed over the water, which could also serve to support the Pearl Harbor aquifer. Considering competing interests, impending scarcity, and uncertain future conditions, how can groundwater be managed most effectively? Complex water networks like this are characterized by conflicts between stakeholders, coupled human-natural systems, and future uncertainty. The Water Diplomacy Framework offers a model for analyzing such complex issues by integrating multiple disciplinary perspectives, identifying intervention points, and proposing sustainable solutions. The Water Diplomacy Framework is a theory and practice of implementing adaptive water management for complex problems by shifting the discussion from 'allocation of water' to 'benefit from water resources'. This is accomplished through an interactive process that includes stakeholder input, joint fact finding, collaborative scenario development, and a negotiated approach to value creation. Presented here are the results of the initial steps in a long-term project to resolve water limitations on Oahu. We developed a conceptual model of the Pearl Harbor aquifer system and identified …

  20. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
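
    The buffer-fullness-driven voltage/frequency adjustment can be caricatured in a few lines: each tick, the decoder picks the lowest clock level that keeps playout-buffer occupancy near a target, trading energy (roughly proportional to V^2*f, with V scaling with f) against underrun risk. The frequency levels, workload numbers, and drain pattern below are all hypothetical.

      import random

      LEVELS_MHZ = [200, 400, 600, 800]          # available DVFS operating points
      TARGET, SIZE = 0.5, 32                     # target occupancy, buffer frames

      def pick_level(occupancy):
          """More headroom in the buffer -> lower clock; drain risk -> higher."""
          error = TARGET - occupancy             # positive when buffer too empty
          idx = int(len(LEVELS_MHZ) * (0.5 + error))
          return LEVELS_MHZ[min(max(idx, 0), len(LEVELS_MHZ) - 1)]

      random.seed(0)
      buffered, energy = SIZE // 2, 0.0
      for frame in range(300):
          mhz = pick_level(buffered / SIZE)
          decoded = mhz / 200.0                  # frames decodable this tick (assumed)
          energy += (mhz / 800.0) ** 2           # energy index ~ V^2 f
          buffered = min(SIZE, buffered + decoded)
          buffered = max(0, buffered - random.choice([1, 1, 2]))  # playout drain
      print(f"energy index {energy:.1f}, final occupancy {buffered:.0f}/{SIZE}")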

  1. Exploring critical pathways for urban water management to identify robust strategies under deep uncertainties.

    PubMed

    Urich, Christian; Rauch, Wolfgang

    2014-12-01

    Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections, which, if a projection is not realised, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising it for a handful. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions does not yet exist for urban water management. This paper presents a first step towards a new generation of such strategic planning tools, by combining innovative modelling tools, which co-evolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is evolved, spatially explicitly, 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper show that: (1) such an approach can successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, an important indicator of the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy.
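
    The exploratory-modelling loop the abstract describes, evaluating a fixed strategy over many sampled futures and then locating the driver ranges where a performance criterion fails, can be miniaturized as below; the water-balance model, drivers, and failure criterion are illustrative stand-ins for the coupled urban-development and infrastructure models used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 1000
      growth = rng.uniform(0.0, 0.03, N)        # annual demand growth (sampled)
      dry = rng.uniform(0.7, 1.1, N)            # climate factor on supply (sampled)
      capacity = 120.0                          # strategy under test: new capacity

      demand_2035 = 100.0 * (1 + growth) ** 20  # 20-year demand projection
      supply_2035 = capacity * dry
      fail = demand_2035 > supply_2035          # performance criterion: unmet demand

      print(f"strategy fails in {fail.mean():.0%} of futures")
      for name, x in [("demand growth", growth), ("climate factor", dry)]:
          lo, hi = np.percentile(x[fail], [5, 95])
          print(f"  failures concentrate where {name} is in [{lo:.3f}, {hi:.3f}]")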

  2. A Three-Gene Model to Robustly Identify Breast Cancer Molecular Subtypes

    PubMed Central

    Desmedt, Christine; Loi, Sherene; Culhane, Aedin C.; Bontempi, Gianluca; Quackenbush, John; Sotiriou, Christos

    2012-01-01

    Background Single sample predictors (SSPs) and Subtype classification models (SCMs) are gene expression–based classifiers used to identify the four primary molecular subtypes of breast cancer (basal-like, HER2-enriched, luminal A, and luminal B). SSPs use hierarchical clustering, followed by nearest centroid classification, based on large sets of tumor-intrinsic genes. SCMs use a mixture of Gaussian distributions based on sets of genes with expression specifically correlated with three key breast cancer genes (estrogen receptor [ER], HER2, and aurora kinase A [AURKA]). The aim of this study was to compare the robustness, classification concordance, and prognostic value of these classifiers with those of a simplified three-gene SCM in a large compendium of microarray datasets. Methods Thirty-six publicly available breast cancer datasets (n = 5715) were subjected to molecular subtyping using five published classifiers (three SSPs and two SCMs) and SCMGENE, the new three-gene (ER, HER2, and AURKA) SCM. We used the prediction strength statistic to estimate robustness of the classification models, defined as the capacity of a classifier to assign the same tumors to the same subtypes independently of the dataset used to fit it. We used Cohen κ and Cramer V coefficients to assess concordance between the subtype classifiers and association with clinical variables, respectively. We used Kaplan–Meier survival curves and cross-validated partial likelihood to compare prognostic value of the resulting classifications. All statistical tests were two-sided. Results SCMs were statistically significantly more robust than SSPs, with SCMGENE being the most robust because of its simplicity. SCMGENE was statistically significantly concordant with published SCMs (κ = 0.65–0.70) and SSPs (κ = 0.34–0.59), statistically significantly associated with ER (V = 0.64) and HER2 (V = 0.52) status and histological grade (V = 0.55), and yielded similar strong prognostic value. Conclusion …
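
    A hedged sketch of a three-gene SCM-style classifier: fit a four-component Gaussian mixture to (ER, HER2, AURKA) expression and read the components as the four subtypes. The data below are synthetic, and the published SCMGENE model constrains the mixture structure rather than fitting it freely as done here.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)
      def tumors(n, er, her2, aurka):
          """Synthetic tumors around a (ER, HER2, AURKA) expression centroid."""
          return rng.normal([er, her2, aurka], 0.45, size=(n, 3))

      X = np.vstack([tumors(120, -1.5, -1.0, 1.0),   # basal-like
                     tumors(80,  -0.2,  1.5, 0.5),   # HER2-enriched
                     tumors(200,  1.2, -0.8, -0.8),  # luminal A (low proliferation)
                     tumors(120,  1.0, -0.7, 0.9)])  # luminal B (high proliferation)

      gmm = GaussianMixture(n_components=4, covariance_type="full",
                            random_state=0).fit(X)
      labels = gmm.predict(X)
      for k in range(4):
          er, her2, aurka = gmm.means_[k]
          print(f"component {k}: n={np.sum(labels == k):3d}  "
                f"ER={er:+.2f} HER2={her2:+.2f} AURKA={aurka:+.2f}")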

  3. Robust interval-based regulation for anaerobic digestion processes.

    PubMed

    Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C

    2005-01-01

    A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m3) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) by using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, leading the output COD towards its set-point and keeping it inside a pre-specified interval.

  4. Product and Process Improvement Using Mixture-Process Variable Designs and Robust Optimization Techniques

    SciTech Connect

    Sahni, Narinder S.; Piepel, Gregory F.; Naes, Tormod

    2009-04-01

    The quality of an industrial product depends on the raw material proportions and the process variable levels, both of which need to be taken into account in designing a product. This article presents a case study from the food industry in which both kinds of variables were studied by combining a constrained mixture experiment design and a central composite process variable design. Based on the natural structure of the situation, a split-plot experiment was designed and models involving the raw material proportions and process variable levels (separately and combined) were fitted. Combined models were used to study: (i) the robustness of the process to variations in raw material proportions, and (ii) the robustness of the raw material recipes with respect to fluctuations in the process variable levels. Further, the expected variability in the robust settings was studied using the bootstrap.

  5. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and nonlinear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the models identified by the differential jet space and potential jet space identifiability theories is compared at a realistic noise level of 3%, derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Process Architecture for Managing Digital Object Identifiers

    NASA Astrophysics Data System (ADS)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first 3 years, ESDIS evolved the process, involving the data provider community in the development of procedures for creating and assigning DOIs and of guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014; and the other recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually driven system to one that largely automates the DOI process. The new automated features include: (a) reviewing the DOI metadata, (b) assigning an opaque DOI name if the data provider chooses, and (c) reserving, registering, and updating the DOIs. The flexibility of reserving the DOI allows data providers to embed and test the DOI in the data product metadata before formally registering it with EZID. The DOI update process allows any DOI metadata to be changed except the DOI name, which can only be changed while the DOI is still reserved (i.e., not yet registered). Currently, ESDIS has processed a total of 557 DOIs, of which 379 are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page, which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page. This poster will describe in detail the unique automated process and …

  7. Microphone-Independent Robust Signal Processing Using Probabilistic Optimum Filtering

    DTIC Science & Technology

    1994-01-01

    11. A. Acero, "Acoustical and Environmental Robustness in Automatic Speech Recognition," Ph.D. Thesis, Carnegie-Mellon University, September 1990. 12. R.M. Stern, F.-H. Liu, Y. Ohshima, T.M. Sullivan, and A. Acero, "Multiple Approaches to Robust Speech Recognition," 1992 International …

  8. Application of NMR Methods to Identify Detection Reagents for Use in the Development of Robust Nanosensors

    SciTech Connect

    Cosman, M; Krishnan, V V; Balhorn, R

    2004-04-29

    Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low affinity binders are identified, they can be linked together to produce multidentate synthetic high affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site, or a different site, on the surface of TeNT fragment C (TetC) than a known 'marker' ligand, doxorubicin. Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

  9. Robust syntaxin-4 immunoreactivity in mammalian horizontal cell processes

    PubMed Central

    Hirano, Arlene A.; Brandstätter, Johann Helmut; Vila, Alejandro; Brecha, Nicholas C.

    2009-01-01

    Horizontal cells mediate inhibitory feed-forward and feedback communication in the outer retina; however, mechanisms that underlie transmitter release from mammalian horizontal cells are poorly understood. Toward determining whether the molecular machinery for exocytosis is present in horizontal cells, we investigated the localization of syntaxin-4, a SNARE protein involved in targeting vesicles to the plasma membrane, in mouse, rat, and rabbit retinae using immunocytochemistry. We report robust expression of syntaxin-4 in the outer plexiform layer of all three species. Syntaxin-4 occurred in processes and tips of horizontal cells, with regularly spaced, thicker sandwich-like structures along the processes. Double labeling with syntaxin-4 and calbindin antibodies, a horizontal cell marker, demonstrated syntaxin-4 localization to horizontal cell processes; whereas double labeling with PKC antibodies, a rod bipolar cell (RBC) marker, showed a lack of co-localization, with syntaxin-4 immunolabeling occurring just distal to RBC dendritic tips. Syntaxin-4 immunolabeling occurred within VGLUT-1-immunoreactive photoreceptor terminals and underneath synaptic ribbons, labeled by CtBP2/RIBEYE antibodies, consistent with localization in invaginating horizontal cell tips at photoreceptor triad synapses. Vertical sections of retina immunostained for syntaxin-4 and peanut agglutinin (PNA) established that the prominent patches of syntaxin-4 immunoreactivity were adjacent to the base of cone pedicles. Horizontal sections through the OPL indicate a one-to-one co-localization of syntaxin-4 densities at likely all cone pedicles, with syntaxin-4 immunoreactivity interdigitating with PNA labeling. Pre-embedding immuno-electron microscopy confirmed the subcellular localization of syntaxin-4 labeling to lateral elements at both rod and cone triad synapses. Finally, co-localization with SNAP-25, a possible binding partner of syntaxin-4, indicated co-expression of these SNARE proteins in horizontal cell processes.

  10. Stretching the limits of forming processes by robust optimization: A demonstrator

    SciTech Connect

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    2013-12-16

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.

  11. Robust processing of phase dislocations based on combined unwrapping and inpainting approaches.

    PubMed

    Xia, Haiting; Montresor, Silvio; Guo, Rongxin; Li, Junchang; Olchewsky, François; Desse, Jean-Michel; Picart, Pascal

    2017-01-15

    This Letter proposes a robust method for processing phase dislocations to recover continuous phase maps. The approach is based on combined unwrapping and inpainting methods. Phase dislocations are detected using an estimator based on the second-order phase gradient. The algorithm is validated using a realistic simulation of phase dislocations, and the phase restoration exhibits only weak errors. A comparison with other inpainting algorithms is also provided, demonstrating the suitability of the approach. The approach is applied to experimental data from off-axis digital holographic interferometry. Phase dislocations in phase data from a wake flow at Mach 0.73 are identified and processed. Excellent phase restoration can be appreciated.
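
    The combined unwrap-and-inpaint pipeline can be prototyped with scikit-image as below: flag dislocation candidates with a second-order gradient magnitude on the wrapped phase (a crude stand-in for the paper's estimator), inpaint the flagged pixels, then unwrap. The test phase map, noise level, and threshold are assumptions.

      import numpy as np
      from skimage.restoration import unwrap_phase, inpaint_biharmonic

      def wrap(p):
          """Wrap angles into (-pi, pi]."""
          return np.angle(np.exp(1j * p))

      rng = np.random.default_rng(2)
      yy, xx = np.mgrid[0:256, 0:256]
      phase = 0.0008 * (xx - 128) ** 2 + 0.05 * yy            # smooth ground truth
      wrapped = wrap(phase + 0.05 * rng.standard_normal(phase.shape))
      wrapped[100:110, 120:140] = rng.uniform(-np.pi, np.pi, (10, 20))  # dislocations

      # second-order gradients of wrapped differences, insensitive to 2*pi jumps
      d2x = wrap(np.diff(wrap(np.diff(wrapped, axis=1)), axis=1))
      d2y = wrap(np.diff(wrap(np.diff(wrapped, axis=0)), axis=0))
      score = np.zeros_like(wrapped)
      score[:, 1:-1] += np.abs(d2x)
      score[1:-1, :] += np.abs(d2y)
      mask = score > 2.0                                      # dislocation candidates

      repaired = inpaint_biharmonic(wrapped, mask)            # fill flagged pixels
      continuous = unwrap_phase(repaired)                     # then unwrap
      err = (continuous - continuous.mean()) - (phase - phase.mean())
      print("flagged pixels:", int(mask.sum()),
            " rms error vs truth: %.3f rad" % float(np.sqrt(np.mean(err ** 2))))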

  12. Particle filter-based robust state and parameter estimation for nonlinear process systems with variable parameters

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiliang; Meng, Zhiqiang; Cao, Tingting; Zhang, Zhengjiang; Dai, Yuxing

    2017-06-01

    State and parameter estimation (SPE) plays an important role in process monitoring, online optimization, and process control. States and parameters are generally estimated simultaneously in the SPE problem, where the parameters to be estimated are specified as augmented states. When the state and/or measurement equations are highly nonlinear and the posterior probability of the state is non-Gaussian, a particle filter (PF) is commonly used for SPE. However, when the parameters switch with the operating conditions, the change of parameters cannot be detected and tracked by the conventional SPE method. This paper proposes a PF-based robust SPE method for a nonlinear process system with variable parameters. A measurement test criterion based on the observation error is introduced to indirectly identify whether the parameters have changed. Based on the result of this identification, the variances of the particles are modified adaptively to track the changed parameters. Finally, reliable SPE is obtained through the iteratively updated particles. The proposed PF-based robust SPE method is applied to two nonlinear process systems. The results demonstrate the effectiveness and robustness of the proposed method.
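
    The sketch below illustrates the core mechanism on a scalar toy model: a particle filter estimates the state and an augmented parameter, monitors the observation (innovation) error, and inflates the parameter-particle spread when the error test flags a change, so a switched parameter can be re-acquired. The model, thresholds, and tuning are illustrative, not the paper's formulation.

      import numpy as np

      rng = np.random.default_rng(0)
      T, N = 400, 2000
      theta_true = np.where(np.arange(T) < 200, 0.9, 0.5)  # parameter switches at k=200
      x, ys = 0.0, []
      for k in range(T):                                   # simulate the true system
          x = theta_true[k] * x + rng.normal(0, 0.3)
          ys.append(x + rng.normal(0, 0.2))

      px = rng.normal(0, 1, N)                             # state particles
      pt = rng.uniform(0.2, 1.0, N)                        # parameter particles
      sig_w, sig_v, jitter = 0.3, 0.2, 0.002
      est = []
      for y in ys:
          px = pt * px + rng.normal(0, sig_w, N)           # propagate state
          pt = pt + rng.normal(0, jitter, N)               # slow artificial evolution
          if abs(y - px.mean()) > 3.0 * sig_v:             # observation-error test:
              pt = pt + rng.normal(0, 0.05, N)             # inflate parameter spread
          w = np.exp(-0.5 * ((y - px) / sig_v) ** 2) + 1e-300   # likelihood weights
          w /= w.sum()
          idx = rng.choice(N, size=N, p=w)                 # resample
          px, pt = px[idx], pt[idx]
          est.append(pt.mean())
      print(f"theta estimate before/after switch: {est[190]:.2f} / {est[-1]:.2f}")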

  13. Surrogate models for identifying robust, high yield regions of parameter space for ICF implosion simulations

    NASA Astrophysics Data System (ADS)

    Humbird, Kelli; Peterson, J. Luc; Brandon, Scott; Field, John; Nora, Ryan; Spears, Brian

    2016-10-01

    Next-generation supercomputer architecture and in-transit data analysis have been used to create a large collection of 2-D ICF capsule implosion simulations. The database includes metrics for approximately 60,000 implosions, with x-ray images and detailed physics parameters available for over 20,000 simulations. To map and explore this large database, surrogate models for numerous quantities of interest are built using supervised machine learning algorithms. Response surfaces constructed using the predictive capabilities of the surrogates allow for continuous exploration of parameter space without requiring additional simulations. High performing regions of the input space are identified to guide the design of future experiments. In particular, a model for the yield built using a random forest regression algorithm has a cross validation score of 94.3% and is consistently conservative for high yield predictions. The model is used to search for robust volumes of parameter space where high yields are expected, even given variations in other input parameters. Surrogates for additional quantities of interest relevant to ignition are used to further characterize the high yield regions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697277.
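
    A hedged sketch of the surrogate workflow: train a random-forest regressor on (design parameters, yield) pairs, check it with cross-validation, then scan a dense continuous sample of the input space, without new simulations, for candidates whose predicted yield stays high under input perturbations. The toy yield function, thresholds, and perturbation size stand in for the implosion database.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(11)
      def toy_yield(X):                   # peaked, asymmetric response (assumed)
          return np.exp(-np.sum((X - 0.6) ** 2, axis=1) / 0.05) + 0.05 * X[:, 0]

      X = rng.random((6000, 4))           # 4 normalized design parameters
      y = toy_yield(X) + rng.normal(0, 0.01, len(X))

      rf = RandomForestRegressor(n_estimators=200, random_state=0, n_jobs=-1)
      print("CV R^2: %.3f" % cross_val_score(rf, X, y, cv=5).mean())
      rf.fit(X, y)

      cand = rng.random((20000, 4))       # continuous exploration, no new sims
      pred = rf.predict(cand)
      perturbed = np.clip(cand[:, None, :] +
                          rng.normal(0, 0.05, (len(cand), 8, 4)), 0, 1)
      worst = rf.predict(perturbed.reshape(-1, 4)).reshape(len(cand), 8).min(axis=1)
      robust = (pred > 0.8) & (worst > 0.7)   # high yield even when inputs wiggle
      print("robust high-yield candidates: %d of %d" % (robust.sum(), len(cand)))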

  14. Working Toward Robust Process Monitoring for Safeguards Applications

    SciTech Connect

    Krichinsky, Alan M; Bell, Lisa S; Gilligan, Kimberly V; Laughter, Mark D; Miller, Paul; Pickett, Chris A; Richardson, Dave; Rowe, Nathan C; Younkin, James R

    2010-01-01

    New safeguards technologies allow continuous monitoring of plant processes. Efforts to deploy these technologies, as described in a preponderance of literature, typically have consisted of case studies attempting to prove their efficacy in proof-of-principle installations. While the enhanced safeguards capabilities of continuous monitoring have been established, studies thus far have not addressed such challenges as manipulation of a system by a host nation. To prevent this and other such vulnerabilities, one technology, continuous load cell monitoring, was reviewed. This paper will present vulnerabilities as well as mitigation strategies that were identified.

  15. Integrated, multicohort analysis of systemic sclerosis identifies robust transcriptional signature of disease severity

    PubMed Central

    Lofgren, Shane; Aren, Kathleen; Arroyo, Esperanza; Cheung, Peggie; Kuo, Alex; Valenzuela, Antonia; Haemel, Anna; Wolters, Paul J.; Gordon, Jessica; Spiera, Robert; Assassi, Shervin; Boin, Francesco; Chung, Lorinda; Fiorentino, David; Utz, Paul J.; Whitfield, Michael L.

    2016-01-01

    Systemic sclerosis (SSc) is a rare autoimmune disease with the highest case-fatality rate of all connective tissue diseases. Current efforts to determine patient response to a given treatment using the modified Rodnan skin score (mRSS) are complicated by interclinician variability, confounding, and the time required between sequential mRSS measurements to observe meaningful change. There is an unmet critical need for an objective metric of SSc disease severity. Here, we performed an integrated, multicohort analysis of SSc transcriptome data across 7 datasets from 6 centers composed of 515 samples. Using 158 skin samples from SSc patients and healthy controls recruited at 2 centers as a discovery cohort, we identified a 415-gene expression signature specific for SSc, and validated its ability to distinguish SSc patients from healthy controls in an additional 357 skin samples from 5 independent cohorts. Next, we defined the SSc skin severity score (4S). In every SSc cohort of skin biopsy samples analyzed in our study, 4S correlated significantly with mRSS, allowing objective quantification of SSc disease severity. Using transcriptome data from the largest longitudinal trial of SSc patients to date, we showed that 4S allowed us to objectively monitor individual SSc patients over time, as (a) the change in 4S of a patient is significantly correlated with change in the mRSS, and (b) the change in 4S at 12 months of treatment could predict the change in mRSS at 24 months. Our results suggest that 4S could be used to distinguish treatment responders from nonresponders prior to mRSS change. Our results demonstrate the potential clinical utility of a novel robust molecular signature and a computational approach to SSc disease severity quantification. PMID:28018971

  16. Robust optimization of metal forming processes using a metamodel-based strategy

    SciTech Connect

    Wiebenga, J. H.; Klaseboer, G.; Boogaard, A. H. van den

    2011-05-04

    Robustness, optimization and Finite Element (FE) simulations are of major importance for achieving better products and cost reductions in the metal forming industry. In this paper, a metamodel-based robust optimization strategy is proposed for metal forming processes. The applicability of the strategy is demonstrated by application to an analytical test function and an industrial V-bending process. The results of both applications underline the importance of including uncertainty and robustness explicitly in the optimization procedure.

  17. Impact of genetic background and experimental reproducibility on identifying chemical compounds with robust longevity effects

    PubMed Central

    Lucanic, Mark; Plummer, W. Todd; Chen, Esteban; Harke, Jailynn; Foulger, Anna C.; Onken, Brian; Coleman-Hulbert, Anna L.; Dumas, Kathleen J.; Guo, Suzhen; Johnson, Erik; Bhaumik, Dipa; Xue, Jian; Crist, Anna B.; Presley, Michael P.; Harinath, Girish; Sedore, Christine A.; Chamoli, Manish; Kamat, Shaunak; Chen, Michelle K.; Angeli, Suzanne; Chang, Christina; Willis, John H.; Edgar, Daniel; Royal, Mary Anne; Chao, Elizabeth A.; Patel, Shobhna; Garrett, Theo; Ibanez-Ventoso, Carolina; Hope, June; Kish, Jason L; Guo, Max; Lithgow, Gordon J.; Driscoll, Monica; Phillips, Patrick C.

    2017-01-01

    Limiting the debilitating consequences of ageing is a major medical challenge of our time. Robust pharmacological interventions that promote healthy ageing across diverse genetic backgrounds may engage conserved longevity pathways. Here we report results from the Caenorhabditis Intervention Testing Program in assessing longevity variation across 22 Caenorhabditis strains spanning 3 species, using multiple replicates collected across three independent laboratories. Reproducibility between test sites is high, whereas individual trial reproducibility is relatively low. Of ten pro-longevity chemicals tested, six significantly extend lifespan in at least one strain. Three reported dietary restriction mimetics are mainly effective across C. elegans strains, indicating species and strain-specific responses. In contrast, the amyloid dye ThioflavinT is both potent and robust across the strains. Our results highlight promising pharmacological leads and demonstrate the importance of assessing lifespans of discrete cohorts across repeat studies to capture biological variation in the search for reproducible ageing interventions. PMID:28220799

  18. Commonsense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    2005-01-01

    This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in…

  20. A bottom-up robust optimization framework for identifying river basin development pathways under deep climate uncertainty

    NASA Astrophysics Data System (ADS)

    Taner, M. U.; Ray, P.; Brown, C.

    2016-12-01

    Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While designing strategies that are flexible or adaptive holds intuitive appeal, the development of well-performing strategies requires rigorous quantitative analysis that addresses uncertainty directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, because methods to assign such probabilities reliably are not well developed. We present a method that overcomes this challenge with a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and instead employs them after optimization to aid selection among competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each optimal under its associated set of probabilistic assumptions. In the final phase, this vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and most likely conditional on the available climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.

  1. Robust control of lithographic process in semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Kang, Wei; Mao, John

    2005-05-01

    In this paper, a stability analysis is conducted for several feedback controllers of photolithography processes. We emphasize the stability of process controllers in the presence of model mismatch and other uncertainties such as system drift and unknown noise. Real critical dimension (CD) data from the shallow trench isolation area of an Intel manufacturing fab are used for model analysis. The feedback controllers studied in this paper include a controller based on an adaptive model and several controllers based on existing estimation methods such as EWMA, extended EWMA, and d-EWMA. Both theoretical analysis and computer simulations are presented to show the stability of the controlled process under these feedbacks.
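
    As a concrete illustration of the kind of run-to-run feedback analyzed here, the sketch below implements a basic EWMA controller for a linear process model; the model form, parameter values, and noise level are illustrative assumptions, not details taken from the paper.

    import random

    def ewma_controller(target, gain, lam=0.3, runs=50):
        """EWMA run-to-run control: estimate the process offset with an
        exponentially weighted moving average and retune the recipe input."""
        true_offset, true_gain = 2.0, gain   # offset unknown to the controller
        offset_hat = 0.0                     # EWMA estimate of the offset
        u = (target - offset_hat) / gain
        outputs = []
        for _ in range(runs):
            y = true_offset + true_gain * u + random.gauss(0, 0.1)  # run the process
            offset_hat = lam * (y - gain * u) + (1 - lam) * offset_hat
            u = (target - offset_hat) / gain  # feedback update of the recipe
            outputs.append(y)
        return outputs

    print(ewma_controller(target=45.0, gain=1.5)[-5:])  # CDs settle near target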

  2. Robust Prediction for Stationary Processes. 2D Enriched Version.

    DTIC Science & Technology

    1987-11-24

  3. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
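
    As a rough sketch of the metamodel-based idea (without the sequential improvement step), the code below fits a Kriging (Gaussian process) surrogate over a design variable and a noise variable, then selects the design that minimizes a mean-plus-three-sigma robustness criterion; the test function and all settings are invented for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulate(x, z):                 # stand-in for an expensive FE simulation
        return (x - 0.3) ** 2 + 0.5 * np.sin(5 * x) * z

    rng = np.random.default_rng(0)
    X = rng.uniform([0.0, -1.0], [1.0, 1.0], size=(40, 2))   # (design, noise) DOE
    y = simulate(X[:, 0], X[:, 1])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

    designs = np.linspace(0.0, 1.0, 101)
    z_grid = np.linspace(-1.0, 1.0, 21)             # scenarios of the noise variable
    objective = []
    for x in designs:
        pts = np.column_stack([np.full_like(z_grid, x), z_grid])
        mu = gp.predict(pts)
        objective.append(mu.mean() + 3.0 * mu.std())  # robustness criterion over noise
    print("robust optimum x* ~", designs[int(np.argmin(objective))])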

  4. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  6. Development of a robust reverse tone pattern transfer process

    NASA Astrophysics Data System (ADS)

    Khusnatdinov, Niyaz; Doyle, Gary; Resnick, Douglas J.; Ye, Zhengmao; LaBrake, Dwayne; Milligan, Brennan; Alokozai, Fred; Chen, Jerry

    2017-03-01

    Pattern transfer is critical to any lithographic technology and plays a significant role in defining the critical features in a device layer. As both the memory and logic roadmaps continue to advance, greater importance is placed on the scheme used to do the etching. For many critical layers, a need has developed to define a multilayer stack in order to perform the pattern transfer. There are many cases, however, where this standard approach does not provide the best results in terms of critical dimension (CD) fidelity and CD uniformity. As an example, when defining a contact pattern, it may be advantageous to apply a bright field mask (in order to maximize the normalized inverse log slope, NILS) over the more conventional dark field mask. The result of applying the bright field mask in combination with a positive imaging resist is an array of pillar patterns, which must then be converted back to holes before etching the underlying dielectric material. There have been several publications on tone reversal introduced in the resist process itself, but often an etch transfer process is applied to reverse the pattern tone. The purpose of this paper is to describe the use of a three-layer reverse tone process (RTP) that is capable of reversing the tone of every printed feature type. The process utilizes a resist pattern, a hardmask layer, and an additional protection layer. The three-layer approach overcomes issues encountered when using a single masking layer. Successful tone reversal was demonstrated both on 300 mm wafers and imprint masks, including the largest features in the pattern, with dimensions as great as 60 microns. Initial in-field CD uniformity is promising. CDs shifted by about 2.6 nm, and no change was observed in either LER or LWR. Follow-up work is required to statistically qualify in-field CDU and to understand both across-wafer uniformity and feature linearity.

  7. Robust Signal Processing for Damaged Vehicles with Uncertainty (Preprint)

    DTIC Science & Technology

    2011-02-23

    models are parametric reduced order models (PROMs). Initially, global PROMs have been developed (see, e.g., the work of Balmes (1996) and Balmes et al. (2004)).

  8. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size, with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using cloud computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use…
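
    As a simplified illustration of the tiling step (not LMMP's actual Hadoop code), the sketch below cuts a large image array into fixed-size tiles, the unit of work that a MapReduce job would distribute across mappers.

    import numpy as np

    def tile_image(image, tile_size=256):
        """Cut an image array into tile_size x tile_size pieces keyed by row/column."""
        h, w = image.shape[:2]
        tiles = {}
        for row in range(0, h, tile_size):
            for col in range(0, w, tile_size):
                key = f"tile_{row // tile_size}_{col // tile_size}"
                tiles[key] = image[row:row + tile_size, col:col + tile_size]
        return tiles

    mosaic = np.zeros((1024, 2048), dtype=np.uint8)  # stand-in for a lunar mosaic
    print(len(tile_image(mosaic)))                   # 4 x 8 = 32 tiles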

  9. Robustness of the Process of Nucleoid Exclusion of Protein Aggregates in Escherichia coli

    PubMed Central

    Neeli-Venkata, Ramakanth; Martikainen, Antti; Gupta, Abhishekh; Gonçalves, Nadia; Fonseca, Jose

    2016-01-01

    Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. Combined with cell divisions, this generates heterogeneous aggregate distributions in subsequent cell generations. We studied the robustness of this process with differing medium richness and antibiotic stress, which affect nucleoid size, using multimodal, time-lapse microscopy of live cells expressing both a fluorescently tagged chaperone (IbpA), which identifies in vivo the location of aggregates, and HupA-mCherry, a fluorescent variant of a nucleoid-associated protein. We find that the relative sizes of the nucleoid's major and minor axes change widely, in a positively correlated fashion, with medium richness and antibiotic stress. The aggregate's distribution along the major cell axis also changes between conditions and in agreement with the nucleoid exclusion phenomenon. Consequently, the fraction of aggregates at the midcell region prior to cell division differs between conditions, which will affect the degree of asymmetries in the partitioning of aggregates between cells of future generations. Finally, from the location of the peak of anisotropy in the aggregate displacement distribution, the nucleoid relative size, and the spatiotemporal aggregate distribution, we find that the exclusion of detectable aggregates from midcell is most pronounced in cells with mid-sized nucleoids, which are most common under optimal conditions. We conclude that the aggregate management mechanisms of E. coli are significantly robust but are not immune to stresses due to the tangible effect that these have on nucleoid size. IMPORTANCE Escherichia coli segregates protein aggregates to the poles by nucleoid exclusion. From live single-cell microscopy studies of the robustness of this process to various stresses known to affect nucleoid size, we find that nucleoid size and aggregate preferential locations change concordantly between conditions. Also, the degree of influence of the nucleoid…

  10. A Two-Step Method to Identify Positive Deviant Physician Organizations of Accountable Care Organizations with Robust Performance Management Systems.

    PubMed

    Pimperl, Alexander F; Rodriguez, Hector P; Schmittdiel, Julie A; Shortell, Stephen M

    2017-04-06

    To identify positive deviant (PD) physician organizations of Accountable Care Organizations (ACOs) with robust performance management systems (PMSYS). Third National Survey of Physician Organizations (NSPO3, n = 1,398). Organizational and external factors from NSPO3 were analyzed. Linear regression estimated the association of internal and contextual factors on PMSYS. Two cutpoints (75th/90th percentiles) identified PDs with the largest residuals and highest PMSYS scores. A total of 65 and 41 PDs were identified using 75th and 90th percentiles cutpoints, respectively. The 90th percentile more strongly differentiated PDs from non-PDs. Having a high proportion of vulnerable patients appears to constrain PMSYS development. Our PD identification method increases the likelihood that PD organizations selected for in-depth inquiry are high-performing organizations that exceed expectations. © Health Research and Educational Trust.
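
    A minimal sketch of this two-step selection logic follows: regress the performance-management score on organizational covariates, then flag organizations whose residuals and raw scores both clear a percentile cutpoint. Data and coefficients are simulated placeholders, not NSPO3 fields.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1398, 5))                  # organizational/contextual factors
    pmsys = X @ np.array([2.0, 1.0, 0.0, -1.0, 0.5]) + rng.normal(size=1398)

    resid = pmsys - LinearRegression().fit(X, pmsys).predict(X)
    cut = 90                                        # the stricter 90th-percentile cutpoint
    is_pd = (resid >= np.percentile(resid, cut)) & (pmsys >= np.percentile(pmsys, cut))
    print("positive deviants:", int(is_pd.sum()))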

  11. Probability fold change: a robust computational approach for identifying differentially expressed gene lists.

    PubMed

    Deng, Xutao; Xu, Jun; Hui, James; Wang, Charles

    2009-02-01

    Identifying genes that are differentially expressed under different experimental conditions is a fundamental task in microarray studies. However, different ranking methods generate very different gene lists, and this can profoundly impact follow-up analyses and biological interpretation. Developing improved ranking methods is therefore critical in microarray data analysis. We developed a new algorithm, the probabilistic fold change (PFC), which ranks genes based on a confidence interval estimate of fold change. We performed extensive testing using multiple benchmark data sources, including the MicroArray Quality Control (MAQC) data sets, and corroborated our observations on the MAQC data sets using qRT-PCR data sets and Latin square spike-in data sets. Along with PFC, we tested six other popular ranking algorithms: Mean Fold Change (FC), SAM, t-statistic (T), Bayesian-t (BAYT), Intensity-Conditional Fold Change (CFC), and Rank Product (RP). PFC achieved reproducibility and accuracy that were consistently among the best of the seven ranking algorithms, whereas each of the other algorithms showed weaknesses in some cases. Contrary to common belief, our results demonstrate that statistical accuracy does not necessarily translate into biological reproducibility, and therefore both quality aspects need to be evaluated.
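
    The core idea of ranking by a confidence bound on fold change can be illustrated with a simple bootstrap, as below; this is a simplified stand-in, not the authors' exact PFC derivation.

    import numpy as np

    def lower_cl_log2fc(a, b, n_boot=2000, alpha=0.05, seed=2):
        """Bootstrap lower confidence limit of log2(mean(a) / mean(b))."""
        rng = np.random.default_rng(seed)
        stats = []
        for _ in range(n_boot):
            ra = rng.choice(a, size=len(a))          # resample each condition
            rb = rng.choice(b, size=len(b))
            stats.append(np.log2(ra.mean() / rb.mean()))
        return np.quantile(stats, alpha)

    # Rank genes by how confidently their fold change exceeds zero on the log2 scale.
    treated = np.array([8.0, 9.0, 10.0])
    control = np.array([4.0, 5.0, 4.5])
    print(lower_cl_log2fc(treated, control))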

  12. Robust Linear Transmit/Receive Processing for Correlated MIMO Downlink with Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Li, Hao; Xu, Changqing; Fan, Pingzhi

    In this paper we investigate the design of optimal linear transmit/receive processing filters for multiuser MIMO downlinks with imperfect channel state information (CSI) and spatial fading correlation at the base station (BS) antenna array. A robust scheme is proposed to obtain the optimal linear transmit/receive filters in the sense of minimizing the average sum mean square error (SMSE) conditional on noisy channel estimates under a per-user transmit power constraint. Using an iterative procedure, the proposed scheme extends an existing optimization algorithm for uncorrelated single-user MIMO systems with perfect CSI to the problem of minimizing SMSE in spatially correlated MIMO downlinks with imperfect CSI. Compared with a non-robust scheme, the robust scheme is shown to effectively mitigate the BER loss induced by imperfect CSI. In addition, the impact of fading correlation at the BS on the performance of the proposed robust scheme is analyzed.

  13. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    PubMed

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.

  14. Identifying robustness in the regulation of collective foraging of ant colonies using an interaction-based model with backward bifurcation.

    PubMed

    Udiani, Oyita; Pinter-Wollman, Noa; Kang, Yun

    2015-02-21

    Collective behaviors in social insect societies often emerge from simple local rules. However, little is known about how these behaviors are dynamically regulated in response to environmental changes. Here, we use a compartmental modeling approach to identify factors that allow harvester ant colonies to regulate collective foraging activity in response to their environment. We propose a set of differential equations describing the dynamics of: (1) available foragers inside the nest, (2) active foragers outside the nest, and (3) successful returning foragers, to understand how colony-specific parameters, such as baseline number of foragers, interactions among foragers, food discovery rates, successful forager return rates, and foraging duration might influence collective foraging dynamics, while maintaining functional robustness to perturbations. Our analysis indicates that the model can undergo a forward (transcritical) bifurcation or a backward bifurcation depending on colony-specific parameters. In the former case, foraging activity persists when the average number of recruits per successful returning forager is larger than one. In the latter case, the backward bifurcation creates a region of bistability in which the size and fate of foraging activity depends on the distribution of the foraging workforce among the model's compartments. We validate the model with experimental data from harvester ants (Pogonomyrmex barbatus) and perform sensitivity analysis. Our model provides insights on how simple, local interactions can achieve an emergent and robust regulatory system of collective foraging activity in ant colonies.
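
    The compartmental structure described above can be sketched as a small ODE system; the rate terms and parameter values below are illustrative assumptions for a three-compartment foraging model, not the paper's exact equations.

    import numpy as np
    from scipy.integrate import solve_ivp

    def ant_colony(t, y, b=0.01, c=0.5, f=0.3, r=1.0, mu=0.1):
        N, F, R = y                      # in-nest, active outside, returning foragers
        leave = (b + c * R) * N          # baseline exit plus recruitment by returners
        dN = r * R + mu * F - leave
        dF = leave - (f + mu) * F        # foragers find food (f) or give up (mu)
        dR = f * F - r * R               # successful foragers head back to the nest
        return [dN, dF, dR]

    sol = solve_ivp(ant_colony, (0.0, 200.0), [100.0, 0.0, 0.0])
    print(np.round(sol.y[:, -1], 1))     # steady-state distribution of the workforce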

  15. Methylome-wide association study of whole blood DNA in the Norfolk Island isolate identifies robust loci associated with age.

    PubMed

    Benton, Miles C; Sutherland, Heidi G; Macartney-Coxson, Donia; Haupt, Larisa M; Lea, Rodney A; Griffiths, Lyn R

    2017-02-28

    Epigenetic regulation of various genomic functions, including gene expression, provides mechanisms whereby an organism can dynamically respond to changes in its environment and modify gene expression accordingly. One epigenetic mechanism implicated in human aging and age-related disorders is DNA methylation. Isolated populations such as Norfolk Island (NI) should be advantageous for the identification of epigenetic factors related to aging due to reduced genetic and environmental variation. Here we conducted a methylome-wide association study of age using whole blood DNA in 24 healthy female individuals from the NI genetic isolate (aged 24-47 years). We analysed 450K methylation array data using a machine learning approach (GLMnet) to identify age-associated CpGs. We identified 497 CpG sites, mapping to 422 genes, associated with age, 11 of which had previously been associated with age. The strongest associations identified were for a single CpG site in MYOF and an extended region within the promoter of DDO. These hits were validated in curated public data from 2316 blood samples (MARMAL-AID). This study is the first to report robust age associations for MYOF and DDO, both of which have plausible functional roles in aging. This study also illustrates the value of genetic isolates to reveal new associations with epigenome-level data.
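
    A compact analogue of the GLMnet step, using scikit-learn's elastic net in place of the R package and simulated beta values in place of the 450K array data:

    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    rng = np.random.default_rng(3)
    betas = rng.uniform(0.0, 1.0, size=(24, 2000))   # CpG methylation beta values
    age = 24.0 + 23.0 * rng.random(24)
    age += 10.0 * (betas[:, 0] - 0.5)                # one truly age-related CpG

    model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(betas, age)
    selected = np.flatnonzero(model.coef_)           # CpGs retained by the penalty
    print("age-associated CpGs:", selected[:10])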

  16. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  17. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming; Walker, Anthony P.; Chen, Xingyuan

    2017-04-01

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. For demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
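
    The index admits a compact variance-decomposition form. The following is one plausible formalization consistent with the description above; the notation is illustrative rather than the authors' exact definition:

    \[
      PS_K \;=\; \frac{\operatorname{Var}_{M_K,\theta_K}\!\left[\mathbb{E}\left(\Delta \mid M_K, \theta_K\right)\right]}{\operatorname{Var}(\Delta)},
      \qquad
      \operatorname{Var}(\Delta) \;=\;
      \operatorname{Var}_{M_K,\theta_K}\!\left[\mathbb{E}\left(\Delta \mid M_K, \theta_K\right)\right]
      \;+\;
      \mathbb{E}_{M_K,\theta_K}\!\left[\operatorname{Var}\left(\Delta \mid M_K, \theta_K\right)\right],
    \]

    where \(\Delta\) is the model output, \(M_K\) ranges over the competing models for process \(K\) (weighted by their model-averaging probabilities), and \(\theta_K\) denotes the random parameters of each model.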

  18. Intuitive robust stability metric for PID control of self-regulating processes.

    PubMed

    Arbogast, Jeffrey E; Beauregard, Brett M; Cooper, Douglas J

    2008-10-01

    Published methods establish how plant-model mismatch in the process gain and dead time impacts closed-loop stability. However, these methods assume no plant-model mismatch in the process time constant. The work presented here proposes the robust stability factor metric, RSF, to examine the effect of plant-model mismatch in the process gain, dead time, and time constant. The RSF is presented in two forms: an equation form and a visual form displayed on robustness plots derived from the Bode and Nyquist stability criteria. This understanding of robust stability is reinforced through visual examples of how closed-loop performance changes with various levels of plant-model mismatch. One example shows how plant-model mismatch in the time constant can impact closed-loop stability as much as plant-model mismatch in the gain and/or dead time. Theoretical discussion shows that the impact is greater for small dead time to time constant ratios. As the closed-loop time constant used in Internal Model Control (IMC) tuning decreases, the impact becomes significant for a larger range of dead time to time constant ratios. To complete the presentation, the RSF is used to compare the robust stability of IMC-PI tuning to other PI, PID, and PID with Filter tuning correlations.

  19. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, the design of sheet metal forming processes is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), and optimization methods have been widely applied in sheet metal forming. Proper design methods, based largely on computer-aided procedures, therefore have to be developed to reduce time and costs. At the same time, variations arising during manufacturing may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping process. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling step, an improved criterion is used to indicate where additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of the proposed method for multi-response robust design.

  20. Optical wafer metrology sensors for process-robust CD and overlay control in semiconductor device manufacturing

    NASA Astrophysics Data System (ADS)

    den Boef, Arie J.

    2016-06-01

    This paper presents three optical wafer metrology sensors that are used in lithography for robustly measuring the shape and position of wafers and of device patterns on those wafers. The first two sensors are a level sensor and an alignment sensor that measure, respectively, a wafer height map and a wafer position before a new pattern is printed on the wafer. The third sensor is an optical scatterometer that measures critical dimension (CD) variations and overlay after the resist has been exposed and developed. These sensors have different optical concepts, but they share the same challenge: sub-nm precision is required at high throughput on a large variety of processed wafers and in the presence of unknown wafer processing variations. It is the purpose of this paper to explain these challenges in more detail and to give an overview of the various solutions that have been introduced over the years to achieve process-robust optical wafer metrology.

  1. Evaluation of public cancer datasets and signatures identifies TP53 mutant signatures with robust prognostic and predictive value.

    PubMed

    Lehmann, Brian David; Ding, Yan; Viox, Daniel Joseph; Jiang, Ming; Zheng, Yi; Liao, Wang; Chen, Xi; Xiang, Wei; Yi, Yajun

    2015-03-26

    Systematic analysis of cancer gene-expression patterns using high-throughput transcriptional profiling technologies has led to the discovery and publication of hundreds of gene-expression signatures. However, few public signatures have been cross-validated across multiple studies for the prediction of cancer prognosis and chemosensitivity in the neoadjuvant setting. To analyze the prognostic and predictive values of publicly available signatures, we implemented a systematic method for high-throughput and efficient validation of a large number of datasets and gene-expression signatures. Using this method, we performed a meta-analysis including 351 publicly available signatures, 37,000 random signatures, and 31 breast cancer datasets. Survival analyses and pathologic responses were used to assess prediction of prognosis, chemoresponsiveness, and chemo-drug sensitivity. Among the 31 breast cancer datasets and 351 public signatures, we identified 22 validation datasets, two robust prognostic signatures (BRmet50 and PMID18271932Sig33) in breast cancer, and one signature (PMID20813035Sig137) specific for prognosis prediction in patients with ER-negative tumors. The 22 validation datasets demonstrated enhanced ability to distinguish cancer gene profiles from random gene profiles. Both prognostic signatures are composed of genes associated with TP53 mutations and successfully stratified the good and poor prognostic groups in 82% and 68% of the 22 validation datasets, respectively. We then assessed the abilities of the two signatures to predict treatment responses of breast cancer patients treated with commonly used chemotherapeutic regimens. Both BRmet50 and PMID18271932Sig33 retrospectively identified patients with an insensitive response to neoadjuvant chemotherapy (mean positive predictive values 85%-88%). Among those patients predicted to be treatment sensitive, distant relapse-free survival (DRFS) was improved (negative predictive values 87…

  2. Robust Estimation of Transition Matrices in High Dimensional Heavy-tailed Vector Autoregressive Processes

    PubMed Central

    Qiu, Huitong; Xu, Sheng; Han, Fang; Liu, Han; Caffo, Brian

    2016-01-01

    Gaussian vector autoregressive (VAR) processes have been extensively studied in the literature. However, Gaussian assumptions are stringent for the heavy-tailed time series that frequently arise in finance and economics. In this paper, we develop a unified framework for modeling and estimating heavy-tailed VAR processes. In particular, we generalize the Gaussian VAR model to an elliptical VAR model that naturally accommodates heavy-tailed time series. Under this model, we develop a quantile-based robust estimator for the transition matrix of the VAR process. We show that the proposed estimator achieves parametric rates of convergence in high dimensions. This is the first work analyzing heavy-tailed high dimensional VAR processes. As an application of the proposed framework, we investigate Granger causality in the elliptical VAR process, and show that the robust transition matrix estimator induces sign-consistent estimators of Granger causality. The empirical performance of the proposed methodology is demonstrated on both synthetic and real data. We show that the proposed estimator is robust to heavy tails and exhibits superior performance in stock price prediction. PMID:28133642
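
    In the spirit of the quantile-/rank-based construction (though not its exact form), the sketch below replaces sample covariances with Kendall-tau-based ones via the sine transform and solves the lag-one Yule-Walker relation for the transition matrix.

    import numpy as np
    from scipy.stats import kendalltau

    def robust_corr(u, v):
        return np.sin(np.pi / 2 * kendalltau(u, v)[0])    # tau -> correlation

    def robust_var1(X):
        """X: (T, d) series; returns a robust estimate of the d x d transition matrix."""
        T, d = X.shape
        D = np.diag(X.std(axis=0))         # a robust scale (e.g., MAD) could be used instead
        S0 = np.array([[robust_corr(X[:, i], X[:, j]) for j in range(d)] for i in range(d)])
        S1 = np.array([[robust_corr(X[1:, i], X[:-1, j]) for j in range(d)] for i in range(d)])
        return D @ S1 @ np.linalg.inv(S0) @ np.linalg.inv(D)

    rng = np.random.default_rng(4)
    A = np.array([[0.5, 0.2], [0.0, 0.3]])
    X = np.zeros((500, 2))
    for t in range(1, 500):                # VAR(1) with heavy-tailed (t_3) innovations
        X[t] = A @ X[t - 1] + rng.standard_t(3, size=2)
    print(robust_var1(X).round(2))         # recovers A approximately despite heavy tails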

  3. Robust Low Cost Aerospike/RLV Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Ellis, David; McKechnie

    1999-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. At the same time, fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of a shrinking NASA budget. In recent years, combustion chambers of equivalent size to the Aerospike chamber have been fabricated at NASA-Marshall Space Flight Center (MSFC) using innovative, relatively low-cost, vacuum-plasma-spray (VPS) techniques. Typically, such combustion chambers are made of the copper alloy NARloy-Z. However, current research and development conducted by NASA-Lewis Research Center (LeRC) has identified a Cu-8Cr-4Nb alloy which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. In fact, researchers at NASA-LeRC have demonstrated that powder metallurgy (P/M) Cu-8Cr-4Nb exhibits better mechanical properties at 1,200 F than NARloy-Z does at 1,000 F. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost, VPS process to deposit Cu-8Cr-4Nb with mechanical properties that match or exceed those of P/M Cu-8Cr-4Nb. In addition, oxidation resistant and thermal barrier coatings can be incorporated as an integral part of the hot wall of the liner during the VPS process. Tensile properties of Cu-8Cr-4Nb material produced by VPS are reviewed and compared to material produced previously by extrusion. VPS formed combustion chamber liners have also been prepared and will be reported on following scheduled hot firing tests at NASA-Lewis.

  5. Identifying influential factors of business process performance using dependency analysis

    NASA Astrophysics Data System (ADS)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
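
    A toy version of the dependency-tree construction: fit a regression tree of a KPI on lower-level process and QoS metrics, then read off the influential factors. Metric names and data are illustrative placeholders.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    metrics = rng.normal(size=(1000, 3))       # e.g., service latency, queue length, retries
    kpi = 2.0 * metrics[:, 0] + 0.5 * metrics[:, 1] ** 2 + rng.normal(0, 0.1, 1000)

    tree = DecisionTreeRegressor(max_depth=3).fit(metrics, kpi)
    for name, imp in zip(["latency", "queue", "retries"], tree.feature_importances_):
        print(f"{name}: {imp:.2f}")            # influence of each factor on the KPI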

  6. Processing of novel identifiability and duration in children and adults.

    PubMed

    Wetzel, Nicole; Widmann, Andreas; Schröger, Erich

    2011-01-01

    In a passive auditory oddball paradigm, the identifiability and duration of task-irrelevant novel sounds (novels) were varied in children aged 7-8 and in adults. Event-related potentials (ERPs) elicited by identifiable novels were augmented compared to ERPs elicited by non-identifiable novels around 200 ms after stimulus onset. This identifiability effect occurs in children and adults, showing that identifiable novels are processed differently from non-identifiable novels in both age groups. However, only in children did the identifiability effect continue for short novels after 300 ms. This indicates that children cannot inhibit the processing of meaningful task-irrelevant information as efficiently as adults. Moreover, long novels elicited more positive amplitudes than short novels in a time window of 400-600 ms in children but not in adults, showing children's increased susceptibility to physically rich sounds. Results are discussed in the framework of current models of involuntary attention referring to the ERP components N1/Mismatch Negativity, P2, early P3a, and late P3a. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Phosphoproteomic profiling of tumor tissues identifies HSP27 Ser82 phosphorylation as a robust marker of early ischemia

    PubMed Central

    Zahari, Muhammad Saddiq; Wu, Xinyan; Pinto, Sneha M.; Nirujogi, Raja Sekhar; Kim, Min-Sik; Fetics, Barry; Philip, Mathew; Barnes, Sheri R.; Godfrey, Beverly; Gabrielson, Edward; Nevo, Erez; Pandey, Akhilesh

    2015-01-01

    Delays between tissue collection and tissue fixation result in ischemia and ischemia-associated changes in protein phosphorylation levels, which can misguide the examination of signaling pathway status. To identify a biomarker that serves as a reliable indicator of ischemic changes that tumor tissues undergo, we subjected harvested xenograft tumors to room temperature for 0, 2, 10 and 30 minutes before freezing in liquid nitrogen. Multiplex TMT-labeling was conducted to achieve precise quantitation, followed by TiO2 phosphopeptide enrichment and high resolution mass spectrometry profiling. LC-MS/MS analyses revealed phosphorylation level changes of a number of phosphosites in the ischemic samples. The phosphorylation of one of these sites, S82 of the heat shock protein 27 kDa (HSP27), was especially abundant and consistently upregulated in tissues with delays in freezing as short as 2 minutes. In order to eliminate effects of ischemia, we employed a novel cryogenic biopsy device which begins freezing tissues in situ before they are excised. Using this device, we showed that the upregulation of phosphorylation of S82 on HSP27 was abrogated. We thus demonstrate that our cryogenic biopsy device can eliminate ischemia-induced phosphoproteome alterations, and measurements of S82 on HSP27 can be used as a robust marker of ischemia in tissues. PMID:26329039

  8. A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.

    PubMed

    Zang, Yong; Lee, J Jack

    2017-01-15

    We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Amino acid positions subject to multiple co-evolutionary constraints can be robustly identified by their eigenvector network centrality scores

    PubMed Central

    Parente, Daniel J.; Ray, J. Christian J.; Swint-Kruse, Liskin

    2015-01-01

    As proteins evolve, amino acid positions key to protein structure or function are subject to mutational constraints. These positions can be detected by analyzing sequence families for amino acid conservation or for co-evolution between pairs of positions. Co-evolutionary scores are usually rank-ordered and thresholded to reveal the top pairwise scores, but they also can be treated as weighted networks. Here, we used network analyses to bypass a major complication of co-evolution studies: For a given sequence alignment, alternative algorithms usually identify different, top pairwise scores. We reconciled results from five commonly-used, mathematically divergent algorithms (ELSC, McBASC, OMES, SCA, and ZNMI), using the LacI/GalR and 1,6-bisphosphate aldolase protein families as models. Calculations used unthresholded co-evolution scores from which column-specific properties such as sequence entropy and random noise were subtracted; “central” positions were identified by calculating various network centrality scores. When compared among algorithms, network centrality methods, particularly eigenvector centrality, showed markedly better agreement than comparisons of the top pairwise scores. Positions with large centrality scores occurred at key structural locations and/or were functionally sensitive to mutations. Further, the top central positions often differed from those with top pairwise co-evolution scores: Instead of a few strong scores, central positions often had multiple, moderate scores. We conclude that eigenvector centrality calculations reveal a robust evolutionary pattern of constraints – detectable by divergent algorithms – that occur at key protein locations. Finally, we discuss the fact that multiple patterns co-exist in evolutionary data that, together, give rise to emergent protein functions. PMID:26503808
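
    The centrality step is straightforward to reproduce in outline: treat the (noise-corrected) co-evolution scores as a weighted network and rank positions by eigenvector centrality. The score matrix below is a random placeholder, not output from ELSC, McBASC, OMES, SCA, or ZNMI.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(6)
    S = rng.random((50, 50))
    S = (S + S.T) / 2.0                        # symmetric pairwise co-evolution scores
    np.fill_diagonal(S, 0.0)

    G = nx.from_numpy_array(S)                 # weighted network over sequence positions
    cent = nx.eigenvector_centrality_numpy(G, weight="weight")
    top = sorted(cent, key=cent.get, reverse=True)[:5]
    print("most central positions:", top)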

  10. Robust control chart for change point detection of process variance in the presence of disturbances

    NASA Astrophysics Data System (ADS)

    Huat, Ng Kooi; Midi, Habshah

    2015-02-01

    A conventional control chart for detecting shifts in the variance of a process is typically developed under the assumption that the underlying distribution of the quality characteristic is normal, with the nominal value of the variance in most circumstances unknown. However, this assumption does not always hold, and the statistical estimates used for these charts are very sensitive to the occurrence of occasional outliers. Robust control charts are therefore put forward when the underlying normality assumption is not met, serving as a remedial measure for the problem of contamination in process data. The existing approach, namely the Biweight A pooled-residuals method, appears to be resistant to localized disturbances but lacks efficiency when disturbances are diffuse. To be concrete, diffuse disturbances are those that have an equal chance of perturbing any observation, while a localized disturbance affects every member of a certain subsample or subsamples. The efficiency of estimators in the presence of disturbances can thus depend heavily on whether the disturbances are distributed throughout the observations or concentrated in a few subsamples. To this end, in this paper we propose a new robust MBAS control chart based on a subsample-based robust Modified Biweight A scale estimator of the process standard deviation. It has strong resistance to both localized and diffuse disturbances, as well as high efficiency when no disturbances are present. The performance of the proposed robust chart was evaluated against several decision criteria through a Monte Carlo simulation study.
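
    For flavor, the sketch below implements the classic Tukey biweight midvariance, a robust scale estimate in the same family as (but not identical to) the Modified Biweight A estimator proposed here; note how little a localized disturbance moves it.

    import numpy as np

    def biweight_midvariance(x, c=9.0):
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        u = (x - med) / (c * np.median(np.abs(x - med)))   # MAD-scaled deviations
        w = np.abs(u) < 1                                  # extreme points get zero weight
        num = len(x) * np.sum(((x - med) ** 2 * (1 - u ** 2) ** 4)[w])
        den = np.sum(((1 - u ** 2) * (1 - 5 * u ** 2))[w]) ** 2
        return num / den

    rng = np.random.default_rng(7)
    clean = rng.normal(0.0, 1.0, 50)
    contaminated = np.concatenate([clean, [15.0, -12.0]])  # localized disturbance
    print(np.var(clean), biweight_midvariance(contaminated))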

  11. Some Results on the Analysis of Stochastic Processes with Uncertain Transition Probabilities and Robust Optimal Control

    SciTech Connect

    Keyong Li; Seong-Cheol Kang; I. Ch. Paschalidis

    2007-09-01

    This paper investigates stochastic processes that are modeled by a finite number of states but whose transition probabilities are uncertain and possibly time-varying. The treatment of uncertain transition probabilities is important because there appears to be a disconnect between the practice and theory of stochastic processes, owing to the difficulty of assigning exact probabilities to real-world events. Also, when the finite-state process comes as a reduced model of one that is more complicated in nature (possibly in a continuous state space), existing results do not facilitate rigorous analysis. Two approaches are introduced here. The first focuses on processes with one terminal state and the properties that affect their convergence rates. When a process evolves on a complicated graph, the bound on its convergence rate is not trivially related to bounds on the probabilities of individual transitions. Discovering the connection between the two led us to define two concepts, which we call 'progressivity' and 'sortedness', and to derive a new comparison theorem for stochastic processes. An optimality criterion for robust optimal control also derives from this comparison theorem. In addition, this result is applied to mission-oriented autonomous robot control to produce performance estimates within a control framework that we propose. The second approach is in the MDP framework. We introduce our preliminary work on optimistic robust optimization, which aims at finding solutions that guarantee upper bounds on the accumulated discounted cost with prescribed probabilities. The motivation here is to address the issue that the standard robust optimal solution tends to be overly conservative.

  12. Robust matched-field processing using a coherent broadband white noise constraint processor.

    PubMed

    Debever, Claire; Kuperman, W A

    2007-10-01

    Adaptive matched-field processing (MFP) is not only very sensitive to mismatch, but also requires the received sound levels to exceed a threshold signal-to-noise ratio. Furthermore, acoustic sources and interferers have to move slowly enough across resolution cells so that a full rank cross-spectral density matrix can be constructed. Coherent-broadband MFP takes advantage of the temporal complexity of the signal, and therefore offers an additional gain over narrow-band processing by augmenting the dimension of the data space. However, the sensitivity to mismatch is also increased in the process, since a single constraint is usually not enough to achieve robustness and the snapshot requirement becomes even more problematic. The white noise constraint method, typically used for narrow-band processing, is applied to a previously derived broadband processor to enhance its robustness to environmental mismatch and snapshot deficiency. The broadband white noise constraint theory is presented and validated through simulation and experimental data. The dynamic range bias obtained from the snapshot-deficient processing is shown to be consistent with that previously presented in the literature for a single frequency.
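
    The white-noise-constraint idea can be illustrated with the standard diagonal-loading recipe: increase the loading of the cross-spectral density matrix until the white noise gain of the adaptive weight reaches a chosen floor. This numpy sketch is the generic narrow-band version, not the paper's coherent-broadband processor.

    import numpy as np

    def wnc_weights(R, d, wng_floor=0.5, loadings=np.logspace(-3, 2, 200)):
        """R: cross-spectral density matrix; d: unit-norm replica (steering) vector."""
        for eps in loadings:                    # raise loading until the constraint holds
            w = np.linalg.solve(R + eps * np.eye(len(d)), d)
            w = w / (d.conj() @ w)              # distortionless toward the replica
            wng = np.abs(w.conj() @ d) ** 2 / np.real(w.conj() @ w)
            if wng >= wng_floor:                # white noise gain constraint satisfied
                return w
        return w

    rng = np.random.default_rng(8)
    N = 16
    d = np.exp(1j * np.pi * 0.3 * np.arange(N)) / np.sqrt(N)
    snap = rng.normal(size=(N, 8)) + 1j * rng.normal(size=(N, 8))  # snapshot-deficient
    R = snap @ snap.conj().T / 8
    print(np.round(np.abs(wnc_weights(R, d)), 3))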

  13. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivates the investigation of information processing paradigms. One such paradigm is concept based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  14. Identifying microorganisms responsible for ecologically significant biogeochemical processes.

    PubMed

    Madsen, Eugene L

    2005-05-01

    Throughout evolutionary time, and each day in every habitat throughout the globe, microorganisms have been responsible for maintaining the biosphere. Despite the crucial part that they play in the cycling of nutrients in habitats such as soils, sediments and waters, only rarely have the microorganisms actually responsible for key processes been identified. Obstacles that have traditionally impeded fundamental microbial ecology inquiries are now yielding to technical advancements that have important parallels in medical microbiology. The pace of new discoveries that document ecological processes and their causative agents will no doubt accelerate in the near future, and might assist in ecosystem management.

  15. Meta-analysis on blood transcriptomic studies identifies consistently coexpressed protein–protein interaction modules as robust markers of human aging

    PubMed Central

    van den Akker, Erik B; Passtoors, Willemijn M; Jansen, Rick; van Zwet, Erik W; Goeman, Jelle J; Hulsman, Marc; Emilsson, Valur; Perola, Markus; Willemsen, Gonneke; Penninx, Brenda WJH; Heijmans, Bas T; Maier, Andrea B; Boomsma, Dorret I; Kok, Joost N; Slagboom, Pieternella E; Reinders, Marcel JT; Beekman, Marian

    2014-01-01

    The bodily decline that occurs with advancing age strongly impacts on the prospects for future health and life expectancy. Despite the profound role of age in disease etiology, knowledge about the molecular mechanisms driving the process of aging in humans is limited. Here, we used an integrative network-based approach for combining multiple large-scale expression studies in blood (2539 individuals) with protein–protein Interaction (PPI) data for the detection of consistently coexpressed PPI modules that may reflect key processes that change throughout the course of normative aging. Module detection followed by a meta-analysis on chronological age identified fifteen consistently coexpressed PPI modules associated with chronological age, including a highly significant module (P = 3.5 × 10−38) enriched for ‘T-cell activation’ marking age-associated shifts in lymphocyte blood cell counts (R2 = 0.603; P = 1.9 × 10−10). Adjusting the analysis in the compendium for the ‘T-cell activation’ module showed five consistently coexpressed PPI modules that robustly associated with chronological age and included modules enriched for ‘Translational elongation’, ‘Cytolysis’ and ‘DNA metabolic process’. In an independent study of 3535 individuals, four of five modules consistently associated with chronological age, underpinning the robustness of the approach. We found three of five modules to be significantly enriched with aging-related genes, as defined by the GenAge database, and association with prospective survival at high ages for one of the modules including ASF1A. The hereby-detected age-associated and consistently coexpressed PPI modules therefore may provide a molecular basis for future research into mechanisms underlying human aging. PMID:24119000

  16. Identifying Activation Centers with Spatial Cox Point Processes Using fMRI Data.

    PubMed

    Ray, Meredith; Kang, Jian; Zhang, Hongmei

    2016-01-01

    We developed a Bayesian clustering method to identify significant regions of brain activation. Coordinate-based meta-analysis data originating from functional magnetic resonance imaging (fMRI) studies were of primary interest. Individual fMRI can measure the intensity of blood flow and oxygenation at a location within the brain activated by a given thought or emotion. The proposed method performed clustering on two levels, latent foci centers and study activation centers, with a spatial Cox point process utilizing the Dirichlet process to describe the distribution of foci. Intensity was modeled as a function of the distance between a focus and the center of its cluster of foci using a Gaussian kernel. Simulation studies were conducted to evaluate the sensitivity and robustness of the method with respect to cluster identification and underlying data distributions. We applied the method to a meta-analysis data set to identify emotion foci centers.

  17. SU-D-207B-05: Robust Intra-Tumor Partitioning to Identify High-Risk Subregions for Prognosis in Lung Cancer

    SciTech Connect

    Wu, J; Gensheimer, M; Dong, X; Rubin, D; Napel, S; Diehn, M; Loo, B; Li, R

    2016-06-15

    Purpose: To develop an intra-tumor partitioning framework for identifying high-risk subregions from 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and CT imaging, and to test whether the tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. Methods: In this institutional review board-approved retrospective study, we analyzed the pre-treatment FDG-PET and CT scans of 44 lung cancer patients treated with radiotherapy. A novel intra-tumor partitioning method was developed based on a two-stage clustering process: first, at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Results: Three spatially distinct subregions were identified within each tumor, which were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index (CI) of 0.66–0.67. When restricting the analysis to patients with stage III disease (n = 32), the same subregion achieved an even higher CI = 0.75 (HR = 3.93, logrank p = 0.002) for predicting OS, and a CI = 0.76 (HR = 4.84, logrank p = 0.002) for predicting OFP. In comparison, conventional imaging markers including tumor volume, SUVmax and MTV50 were not predictive of OS or OFP, with CI mostly below 0.60 (p < 0.001). Conclusion: We propose a robust intra-tumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful
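
    A minimal sketch of the two-stage clustering idea, assuming synthetic voxel features: patient-level k-means over-segmentation followed by population-level hierarchical merging. The data, feature choice, and cluster counts are illustrative assumptions.

```python
# Minimal sketch of the two-stage partitioning described above: per-patient
# k-means over-segmentation of joint PET/CT voxel features into superpixels,
# then population-level hierarchical clustering of superpixel centroids.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
patients = [rng.normal(size=(5000, 2)) for _ in range(10)]  # (SUV, HU) per voxel

# Stage 1: over-segment each tumor into many superpixels.
superpixel_centroids = []
for voxels in patients:
    km = KMeans(n_clusters=50, n_init=5, random_state=0).fit(voxels)
    superpixel_centroids.append(km.cluster_centers_)
superpixel_centroids = np.vstack(superpixel_centroids)

# Stage 2: merge superpixels across the population into a few subregions.
merge = AgglomerativeClustering(n_clusters=3).fit(superpixel_centroids)
labels = merge.labels_  # subregion label for every superpixel in the cohort
print(np.bincount(labels))
```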

  18. RECORD processing - A robust pathway to component-resolved HR-PGSE NMR diffusometry

    NASA Astrophysics Data System (ADS)

    Stilbs, Peter

    2010-12-01

    It is demonstrated that very robust spectral component separation can be achieved through global least-squares CORE data analysis of automatically or manually selected spectral regions in complex NMR spectra in a high-resolution situation. This procedure (acronym RECORD) takes only a few seconds and substantially improves the effective signal-to-noise ratio of the experiment compared to individual frequency-channel fitting, as in the generic HR-DOSY approach, or to basic peak-height or integral fitting. Results from RECORD processing can further be used as starting-value estimates for subsequent CORE analysis of spectral data with a higher degree of spectral overlap.
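
    A minimal sketch of the underlying idea, global fitting of a region integral to a single exponential decay, using synthetic data; the actual RECORD/CORE implementation and its region selection are not reproduced here.

```python
# Minimal sketch: the integral of a selected spectral region is fitted to a
# Stejskal-Tanner-style decay I(k) = I0 * exp(-D * k). Pooling a whole region
# rather than fitting channel by channel is what improves the effective S/N.
import numpy as np
from scipy.optimize import curve_fit

def decay(k, i0, d):
    return i0 * np.exp(-d * k)

k = np.linspace(0, 1.0, 16)            # gradient-dependent diffusion weighting
true = decay(k, 100.0, 2.3)            # illustrative "region integral" decay
noisy = true + np.random.default_rng(1).normal(scale=1.0, size=k.size)

(p_i0, p_d), _ = curve_fit(decay, k, noisy, p0=(noisy[0], 1.0))
print(f"I0 = {p_i0:.1f}, D = {p_d:.2f}")
```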

  19. Characteristics of chaotic processes in electrocardiographically identified ventricular arrhythmia.

    PubMed

    Mysiak, Andrzej; Kobusiak-Prokopowicz, Małgorzata; Kaaz, Konrad; Jarczewska, Kamila; Glabisz, Wojciech

    2017-01-01

    Chaos theory demonstrates that many complex processes previously thought to be random in nature arise from deterministic mechanisms, and explains how such complex processes develop. The aim of the study was to test the hypothesis of the chaotic nature of myocardial electrical events during ventricular tachycardia (VT) and ventricular fibrillation (VF). Original hardware and software were developed for digitalization of on-line electrocardiography (ECG) data, with functions for automatic and manual identification as well as categorization of specific ventricular arrhythmias. Patient ECGs were recorded by specially developed measuring equipment (M2TT). The available ECG sampling frequency was 20,000 Hz, and the signal could be analyzed retrospectively. Digital ECGs of sinus rhythm (SR), non-sustained VT, VT and VF were analyzed, and the signals were then subjected to mathematical analysis. Using wavelet analysis, signals carrying frequencies from various ranges were isolated from baseline, and each of these isolated signals was subjected to Fourier transformation to check for differences in the Fourier power spectra of the analyzed VT and VF signals. Ventricular tachycardia identified on ECG fulfills the criteria of a chaotic process, while no such properties were found for SR and VF. The ECG records myocardial electrical signals, but these are not sufficient to differentiate between an advanced chaotic state and the process of linear expansion of electrical activation within the myocardium. Electrophysiological study requires advanced methods for recording the signal of myocardial electrical activity, as ECG is not sufficiently sensitive to identify the features of a chaotic process during VF. (Cardiol J 2017; 24, 2: 151-158).
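
    A minimal sketch of the described pipeline on a synthetic signal: a discrete wavelet decomposition isolates frequency bands, and the Fourier power spectrum of each band is inspected. The wavelet, sampling rate, and signal are illustrative assumptions.

```python
# Minimal sketch: wavelet band isolation followed by Fourier power spectra,
# applied to a synthetic ECG-like signal (the study sampled at up to 20 kHz).
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 4.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

coeffs = pywt.wavedec(ecg, 'db4', level=5)
for band in range(1, len(coeffs)):
    # Keep one detail band, zero the rest, and reconstruct that band.
    kept = [c if i == band else np.zeros_like(c) for i, c in enumerate(coeffs)]
    isolated = pywt.waverec(kept, 'db4')[:ecg.size]
    power = np.abs(np.fft.rfft(isolated)) ** 2
    peak = np.fft.rfftfreq(isolated.size, 1.0 / fs)[np.argmax(power)]
    print(f"band {band}: dominant frequency {peak:.1f} Hz")
```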

  20. Primary Polymer Aging Processes Identified from Weapon Headspace Chemicals

    SciTech Connect

    Chambers, D M; Bazan, J M; Ithaca, J G

    2002-03-25

    A current focus of our weapon headspace sampling work is the interpretation of the volatile chemical signatures that we are collecting. To help validate our interpretation, we have been developing a laboratory-based material aging capability to simulate the material decomposition chemistries identified. Key to establishing this capability has been the development of an automated approach to process, analyze, and quantify arrays of material combinations as a function of time and temperature. Our initial approach involves monitoring the formation and migration of volatile compounds produced when a material decomposes. This approach is advantageous in that it is nondestructive and provides a direct comparison with our weapon headspace surveillance initiative. Nevertheless, it requires us to identify volatile material residues and decomposition byproducts that are not typically monitored and reported in material aging studies. Similar to our weapon monitoring method, our principal laboratory-based method involves static headspace collection by solid phase microextraction (SPME) followed by gas chromatography/mass spectrometry (GC/MS). SPME is a sorbent collection technique that is ideally suited for preconcentration and delivery of trace gas-phase compounds for analysis by GC. When combined with MS, detection limits are routinely in the low- and sub-ppb ranges, even for semivolatile and polar compounds. To automate this process we incorporated a robotic sample processor configured for SPME collection. The completed system thermally processes, samples, and analyzes a material sample, and quantification of the instrument response has also been integrated into the system. The current system screens low-milligram quantities of material for the formation or outgassing of small compounds as initial indicators of chemical decomposition. This emerging capability offers us a new approach to identify and non-intrusively monitor decomposition mechanisms that are

  1. Development of a robust calibration model for nonlinear in-line process data

    PubMed

    Despagne; Massart; Chabot

    2000-04-01

    A comparative study involving a global linear method (partial least squares), a local linear method (locally weighted regression), and a nonlinear method (neural networks) was performed in order to implement a calibration model for an industrial process. The models were designed to predict the water content in a reactor during a distillation process, using in-line measurements from a near-infrared analyzer. Nonlinear effects due to changes in temperature, together with variations between the different batches, make the problem particularly challenging. The influence of spectral range selection and data preprocessing was studied. With each calibration method, specific procedures were applied to promote model robustness. In particular, the use of a monitoring set with neural networks does not always prevent overfitting. Therefore, we developed a model selection criterion based on the median of the monitoring error over replicate trials. The back-propagation neural network models selected in this way were found to outperform the other methods on independent test data.
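
    A minimal sketch of the selection criterion described above, assuming synthetic data and scikit-learn networks: replicate trainings are ranked by the median monitoring-set error, which damps the effect of occasional overfitted runs.

```python
# Minimal sketch: rank network configurations by the *median* monitoring-set
# error over replicate trainings. Data and network sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                 # e.g. NIR spectral features
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)
X_tr, y_tr, X_mon, y_mon = X[:200], y[:200], X[200:], y[200:]

def median_monitoring_error(hidden, n_rep=5):
    errs = []
    for seed in range(n_rep):
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                           random_state=seed).fit(X_tr, y_tr)
        errs.append(np.mean((net.predict(X_mon) - y_mon) ** 2))
    return np.median(errs)

for hidden in (2, 5, 10):
    print(hidden, median_monitoring_error(hidden))
```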

  2. Low Power and Robust Domino Circuit with Process Variations Tolerance for High Speed Digital Signal Processing

    NASA Astrophysics Data System (ADS)

    Wang, Jinhui; Peng, Xiaohong; Li, Xinxin; Hou, Ligang; Wu, Wuchen

    Utilizing the sleep switch transistor technique and the dual threshold voltage technique, a source following evaluation gate (SEFG) based domino circuit is presented in this paper for simultaneously suppressing the leakage current and enhancing noise immunity. Simulation results show that the leakage current of the proposed design can be reduced by 43%, 62%, and 67% while improving the noise margin by 19.7%, 3.4%, and 12.5% as compared to the standard low threshold voltage circuit, the standard dual threshold voltage circuit, and the SEFG structure, respectively. The static-state leakage current, which depends on the combination of input and clock signals, is also analyzed, and the minimum-leakage states of different domino AND gates are obtained. Finally, the leakage power characteristic under process variations is discussed.

  3. Robust Canonical Coherence for Quasi-Cyclostationary Processes: Geomagnetism and Seismicity in Peru

    NASA Astrophysics Data System (ADS)

    Lepage, K.; Thomson, D. J.

    2007-12-01

    Preliminary results suggesting a connection between long-period geomagnetic fluctuations and long-period seismic fluctuations are presented. Data from the seismic detector NNA, situated in Ñaña, Peru, are compared to geomagnetic data from HUA, located in Huancayo, Peru. The high-pass filtered data from the two stations exhibit quasi-cyclostationary pulsation with daily periodicity, and suggest correspondence. The pulsation contains power predominantly between 2000 μHz and 8000 μHz, with the geomagnetic pulses leading by approximately 4 to 5 hours. A multi-section, multitaper, robust canonical coherence analysis of the two three-component data sets is performed. The method, an adaptation suitable for quasi-cyclostationary processes of the technique presented in "Robust estimation of power spectra" (by Kleiner, Martin and Thomson, Journal of the Royal Statistical Society, Series B Methodological, 1979), is described. Simulations are presented exploring the applicability of the method. Canonical coherence is detected, predominantly between the geomagnetic field and the vertical component of seismic velocity, in the band of frequencies between 1500 μHz and 2500 μHz. Group delay estimates between the geomagnetic components and vertical seismic velocity were then computed at frequencies corresponding to large canonical coherence. The estimated group delays are 8 min between geomagnetic east and vertical seismic velocity, 16 min between geomagnetic north and vertical seismic velocity, and 11 min between geomagnetic vertical and vertical seismic velocity. Possible coupling mechanisms are discussed.
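
    A minimal sketch of an ordinary (non-robust, non-canonical) multitaper coherence estimate between two synthetic channels, to illustrate the core spectral machinery such an analysis builds on; the taper parameters and data are illustrative assumptions.

```python
# Minimal sketch: multitaper coherence between two synthetic channels that
# share a common, delayed component (standing in for geomagnetic and seismic
# records). The robust and canonical extensions are beyond this sketch.
import numpy as np
from scipy.signal.windows import dpss

fs = 1.0
n = 4096
rng = np.random.default_rng(2)
common = rng.normal(size=n)
x = common + 0.5 * rng.normal(size=n)              # channel 1 (synthetic)
y = np.roll(common, 8) + 0.5 * rng.normal(size=n)  # channel 2, delayed

tapers = dpss(n, NW=4, Kmax=7)             # Slepian tapers
X = np.fft.rfft(tapers * x, axis=1)        # eigen-spectra of x
Y = np.fft.rfft(tapers * y, axis=1)

sxy = np.mean(X * np.conj(Y), axis=0)      # cross-spectrum, averaged over tapers
sxx = np.mean(np.abs(X) ** 2, axis=0)
syy = np.mean(np.abs(Y) ** 2, axis=0)
coherence = np.abs(sxy) ** 2 / (sxx * syy)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
print(freqs[np.argmax(coherence[1:]) + 1], coherence.max())
```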

  4. High Fidelity Springback Simulation and Compensation with Robust Forming Process Design

    NASA Astrophysics Data System (ADS)

    Lee, Intaek; Carleer, B. D.; Haage, S.

    2011-08-01

    For an efficient virtual try-out loop, geometric changes and bending angles have been compensated for during the last 20 years. This approach was based on restrictive assumptions such as pure bending, a plane-strain state and isotropic material behavior, yet for more complex forming processes it has been applied without reviewing these limitations. Analytical force consideration to reduce the amount of springback is another idea for compensating geometric displacement efficiently. Springback prediction accuracy is also a major topic, driven by various material model developments. All of these topics are of high importance for increasing springback accuracy and compensating effectively, and thereby for reducing try-out effort, but the focus should not be only on these advanced issues, since the basics must be right as well. Based on our long-term experience in simulation and on the outcomes of projects covering 20 different parts during the last two years, we present our experience and investigations in the geometric compensation of forming processes. We emphasize that we did not just simulate the springback: the compensated surfaces were brought into the real tools, so the experience is based not only on numerical analysis but also on a physical try-out performed for every part. The experience is consolidated into a set of principles of robust springback compensation, which are illustrated and explained with an example part, a B-pillar upper reinforcement. Compensation can indeed appear to be a straightforward activity; however, our experience has shown that it is straightforward only if certain boundary conditions are fulfilled. We discuss some of these boundary conditions using the B-pillar upper example. Through this study, the basic requirements for successful springback compensation, the full scope of the simulation range, the set-up of the nominal springback simulation and robustness of forming

  5. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is the core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 microseismic events in total, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
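
    A minimal sketch of the core cross-correlation computation on synthetic waveforms; the correlation threshold and the waveforms are illustrative assumptions, and the PS3-specific optimisation is not shown.

```python
# Minimal sketch: normalized cross-correlation between event waveforms, with a
# pair flagged as a multiplet candidate when the peak correlation is high.
import numpy as np

def max_norm_xcorr(a, b):
    """Peak of the normalized cross-correlation between two waveforms."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode='full').max()

rng = np.random.default_rng(3)
template = rng.normal(size=512)
similar = np.roll(template, 5) + 0.1 * rng.normal(size=512)
unrelated = rng.normal(size=512)

for name, w in (("similar", similar), ("unrelated", unrelated)):
    c = max_norm_xcorr(template, w)
    print(name, round(c, 2), "multiplet candidate" if c > 0.9 else "-")
```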

  6. CORROSION PROCESS IN REINFORCED CONCRETE IDENTIFIED BY ACOUSTIC EMISSION

    NASA Astrophysics Data System (ADS)

    Kawasaki, Yuma; Kitaura, Misuzu; Tomoda, Yuichi; Ohtsu, Masayasu

    Deterioration of reinforced concrete (RC) due to salt attack is known as one of the most serious problems. Thus, the development of non-destructive evaluation (NDE) techniques is important to assess the corrosion process. Reinforcement in concrete normally does not corrode because of a passive film on the surface of the reinforcement. When the chloride concentration at the reinforcement exceeds the threshold level, the passive film is destroyed, so maintenance is desirable at an early stage. In this study, to identify the onset of corrosion and the nucleation of corrosion-induced cracking in concrete due to expansion of corrosion products, continuous acoustic emission (AE) monitoring is applied. Accelerated corrosion and cyclic wet-and-dry tests are performed in a laboratory. The SiGMA (Simplified Green's functions for Moment tensor Analysis) procedure is applied to AE waveforms to clarify the source kinematics of micro-cracks: their locations, types and orientations. Results show that the onset of corrosion and the nucleation of corrosion-induced cracking in concrete are successfully identified. Additionally, cross-sections inside the reinforcement are observed by a scanning electron microscope (SEM). These results demonstrate the great promise of AE techniques for monitoring salt damage at an early stage in RC structures.

  7. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  8. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  9. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions, including accelerometer structural damage, an electrical open circuit, and, most importantly, accelerometer detachment. Recent testing of the SDA system under emulated engine operating conditions showed that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real-time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions, such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, is demonstrated.

  10. Processing and Properties of Fiber Reinforced Polymeric Matrix Composites. Part 2; Processing Robustness of IM7/PETI Polyimide Composites

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    1996-01-01

    The processability of a phenylethynyl terminated imide (PETI) resin matrix composite was investigated. Unidirectional prepregs were made by coating an N-methylpyrrolidone solution of the amide acid oligomer onto unsized IM7. Two batches of prepregs were used: one was made by NASA in-house, and the other was from an industrial source. The composite processing robustness was investigated with respect to the effect of B-staging conditions, the prepreg shelf life, and the optimal processing window. Rheological measurements indicated that PETI's processability was only slightly affected over a wide range of B-staging temperatures (from 250 °C to 300 °C). The open hole compression (OHC) strength values were statistically indistinguishable among specimens consolidated using various B-staging conditions. Prepreg rheology and OHC strengths were also found not to be affected by prolonged (i.e., up to 60 days) ambient storage. An optimal processing window was established using response surface methodology. It was found that the IM7/PETI composite is more sensitive to the consolidation temperature than to the consolidation pressure. Good consolidation was achievable at 371 °C/100 psi, which yielded an OHC strength of 62 ksi at room temperature. However, processability declined dramatically at temperatures below 350 °C.

  11. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes.

    PubMed

    Stöckel, Jana; Welsh, Eric A; Liberton, Michelle; Kunnvakkam, Rangesh; Aurora, Rajeev; Pakrasi, Himadri B

    2008-04-22

    Cyanobacteria are photosynthetic organisms and are the only prokaryotes known to have a circadian lifestyle. Unicellular diazotrophic cyanobacteria such as Cyanothece sp. ATCC 51142 produce oxygen and can also fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. To accommodate such antagonistic processes, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of the two processes: photosynthesis during the day and nitrogen fixation at night. Although previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5,000 genes in Cyanothece 51142 during two consecutive diurnal periods. Our analysis showed that approximately 30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece 51142. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes encoding enzymes in numerous individual biochemical pathways, such as glycolysis, oxidative pentose phosphate pathway, and glycogen metabolism, were coregulated and maximally expressed at distinct phases during the diurnal cycle. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism in a photosynthetic bacterium.
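
    As a rough illustration of how oscillating expression profiles can be flagged, the sketch below fits a 24-hour cosinor model to synthetic time courses and scores the variance explained; this is an illustrative stand-in, not the study's actual analysis.

```python
# Minimal sketch: least-squares cosinor fit with a 24-h period, scored by the
# fraction of variance explained. Time courses below are synthetic.
import numpy as np

hours = np.arange(0, 48, 4.0)                       # two diurnal periods
design = np.column_stack([np.ones_like(hours),
                          np.cos(2 * np.pi * hours / 24),
                          np.sin(2 * np.pi * hours / 24)])

rng = np.random.default_rng(4)
oscillating = 5 + 2 * np.cos(2 * np.pi * (hours - 6) / 24) \
    + rng.normal(0, 0.2, hours.size)
flat = 5 + rng.normal(0, 0.2, hours.size)

for name, expr in (("oscillating", oscillating), ("flat", flat)):
    beta, res, *_ = np.linalg.lstsq(design, expr, rcond=None)
    r2 = 1 - res[0] / np.sum((expr - expr.mean()) ** 2)
    print(name, f"R^2 = {r2:.2f}")
```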

  12. a Robust Parallel Framework for Massive Spatial Data Processing on High Performance Clusters

    NASA Astrophysics Data System (ADS)

    Guan, X.

    2012-07-01

    Massive spatial data require considerable computing power for real-time processing. Thanks to the development of multicore technology and the reduction in computer component costs in recent years, high-performance clusters have become the only economically viable solution for this requirement. Massive spatial data processing, however, demands heavy I/O operations and should be characterized as a data-intensive application. Parallelization strategies for data-intensive applications are incompatible with currently available processing frameworks, which are basically designed for traditional compute-intensive applications. In this paper we introduce a Split-and-Merge paradigm for spatial data processing and also propose a robust parallel framework in a cluster environment to support this paradigm. The Split-and-Merge paradigm efficiently exploits data parallelism for massive data processing. The proposed framework is based on the open-source TORQUE project and hosted on a multicore-enabled Linux cluster. One common LiDAR point cloud algorithm, Delaunay triangulation, was implemented on the proposed framework to evaluate its efficiency and scalability. Experimental results demonstrate that the system provides an efficient performance speedup.
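
    A minimal sketch of the Split-and-Merge paradigm using Python multiprocessing on synthetic points; the tiling rule and the per-tile work (a simple count standing in for, e.g., per-tile triangulation) are illustrative assumptions.

```python
# Minimal sketch of Split-and-Merge: split a large point set into tiles,
# process tiles in parallel, then merge the per-tile results.
import numpy as np
from multiprocessing import Pool

def process_tile(tile_points):
    return len(tile_points)          # placeholder for real per-tile work

def split(points, n_tiles):
    order = np.argsort(points[:, 0])             # split along x
    return np.array_split(points[order], n_tiles)

if __name__ == "__main__":
    points = np.random.default_rng(5).uniform(size=(1_000_000, 2))
    tiles = split(points, n_tiles=8)
    with Pool(processes=4) as pool:
        partial = pool.map(process_tile, tiles)  # parallel "split" phase
    print("merged result:", sum(partial))        # "merge" phase
```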

  13. Inference of Longevity-Related Genes from a Robust Coexpression Network of Seed Maturation Identifies Regulators Linking Seed Storability to Biotic Defense-Related Pathways

    PubMed Central

    Righetti, Karima; Vu, Joseph Ly; Pelletier, Sandra; Vu, Benoit Ly; Glaab, Enrico; Lalanne, David; Pasha, Asher; Patel, Rohan V.; Provart, Nicholas J.; Verdier, Jerome; Leprince, Olivier

    2015-01-01

    Seed longevity, the maintenance of viability during storage, is a crucial factor for preservation of genetic resources and ensuring proper seedling establishment and high crop yield. We used a systems biology approach to identify key genes regulating the acquisition of longevity during seed maturation of Medicago truncatula. Using 104 transcriptomes from seed developmental time courses obtained in five growth environments, we generated a robust, stable coexpression network (MatNet), thereby capturing the conserved backbone of maturation. Using a trait-based gene significance measure, a coexpression module related to the acquisition of longevity was inferred from MatNet. Comparative analysis of the maturation processes in M. truncatula and Arabidopsis thaliana seeds and mining Arabidopsis interaction databases revealed conserved connectivity for 87% of longevity module nodes between both species. Arabidopsis mutant screening for longevity and maturation phenotypes demonstrated high predictive power of the longevity cross-species network. Overrepresentation analysis of the network nodes indicated biological functions related to defense, light, and auxin. Characterization of defense-related wrky3 and nf-x1-like1 (nfxl1) transcription factor mutants demonstrated that these genes regulate some of the network nodes and exhibit impaired acquisition of longevity during maturation. These data suggest that seed longevity evolved by co-opting existing genetic pathways regulating the activation of defense against pathogens. PMID:26410298

  14. Online integrity monitoring in the protein A step of mAb production processes: increasing reliability and process robustness.

    PubMed

    Bork, Christopher; Holdridge, Sarah; Walter, Mark; Fallon, Eric; Pohlscheidt, Michael

    2014-01-01

    The purification of recombinant proteins and antibodies using large packed-bed columns is a key component of most biotechnology purification processes. Because of its efficiency and established practice in the industry, column chromatography is a state-of-the-art technology with a proven capability for removal of impurities, viral clearance, and process efficiency. In general, the validation and monitoring of chromatographic operations, especially of critical process parameters, is required to ensure robust product quality and compliance with health authority expectations. One key aspect of chromatography that needs to be monitored is the integrity of the packed bed, since this is often critical to achieving sufficient separation of protein species. Identification of potential column integrity issues before they occur is important for both product quality and economic efficiency. In this article, we examine how transition analysis techniques can be utilized to monitor column integrity. A case study on the application of this method during a large-scale Protein A capture step in an antibody purification process shows how it can assist with improving process knowledge and increasing the efficiency of manufacturing operations. © 2013 American Institute of Chemical Engineers.
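
    A minimal sketch of one common form of transition analysis, assuming an idealized breakthrough curve: the transition is differentiated and its statistical moments are tracked, so that a broadening variance or drifting mean between cycles can flag packed-bed integrity issues. The signal, column length, and the plate-height-style metric are illustrative assumptions, not the article's specific method.

```python
# Minimal sketch: moments of a differentiated step transition (e.g. the
# conductivity trace during a buffer change) as column-integrity indicators.
import numpy as np

t = np.linspace(0, 100, 2001)                       # time or volume axis
transition = 0.5 * (1 + np.tanh((t - 50) / 4.0))    # idealized breakthrough

dcdt = np.gradient(transition, t)                   # transition -> peak
mean = np.trapz(t * dcdt, t) / np.trapz(dcdt, t)    # first moment
var = np.trapz((t - mean) ** 2 * dcdt, t) / np.trapz(dcdt, t)

column_length_cm = 20.0                             # hypothetical bed length
hetp = column_length_cm * var / mean ** 2           # plate-height-style metric
print(f"mean = {mean:.1f}, variance = {var:.1f}, HETP = {hetp:.3f} cm")
```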

  15. Fast and Robust Inversion of Earthquake Source Rupture Process with Applications to Earthquake Emergency Response

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Zhang, Y.

    2010-12-01

    A fast and robust technique for inversion of the earthquake source rupture process was developed and applied to some of the recent significant earthquakes worldwide. Since May 2008, the source rupture processes of about 20 significant earthquakes worldwide have been inverted using the newly developed technique, and the inverted results were released on the website within 3 to 5 hours after the occurrence of each earthquake. These earthquakes included the MW7.8 Wenchuan, Sichuan, earthquake of 12 May 2008, the MW6.3 L’Aquila, Italy, earthquake of 6 April 2009, the MW7.0 Haiti earthquake of 12 January 2010, the MW8.8 Chile earthquake of 27 February 2010, the MW6.5 Jiaxian, Taiwan, earthquake of 4 March 2010, the MW7.2 Mexico earthquake of 4 April 2010, the MW7.8 Sumatra earthquake of 6 April 2010, and the MW6.9 Yushu, Qinghai, earthquake of 14 April 2010. In addition to the usual earthquake source parameters, the rapidly inverted spatio-temporal rupture processes provided important information such as the likely disaster areas, and the timely release of these results proved very useful for earthquake emergency response and seismic disaster relief efforts.

  16. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for the construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The ability to automatically design GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Building on this, the present work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although it generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified through further analysis and comparison. Analysis of the results also sheds light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence, simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055

  17. Quantifying Community Assembly Processes and Identifying Features that Impose Them

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Chen, Xingyuan; Kennedy, David W.; Murray, Christopher J.; Rockhold, Mark L.; Konopka, Allan

    2013-06-06

    Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m³ of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  18. Directed International Technological Change and Climate Policy: New Methods for Identifying Robust Policies Under Conditions of Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Molina-Perez, Edmundo

    It is widely recognized that international environmental technological change is key to reducing the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement reached at COP 21 reaffirmed the key role that the GCF plays in enabling climate mitigation, as it is now necessary to align large-scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options, and ultimately the presence of deep uncertainty about climate and technological change, climate financing institutions such as the GCF require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that applying new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003), to the study of international technological change and climate policy provides useful insights that can be used to design a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM), which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios. These scenarios combine five sources of uncertainty

  19. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372

  20. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice.

    PubMed

    Plint, Simon; Patterson, Fiona

    2010-06-01

    The UK national recruitment process into general practice training has been developed over several years, with the incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, is perceived to be fair by candidates, and allocates applicants equitably across the country. Best selection practice involves a job analysis that identifies the required competencies, the design of reliable assessment methods to measure them, and, over the long term, ensuring that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine-markable shortlisting assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining the locus of control, rather than the imposition of change without perceived legitimate authority.

  1. Using DRS during breast conserving surgery: identifying robust optical parameters and influence of inter-patient variation

    PubMed Central

    de Boer, Lisanne L.; Hendriks, Benno H. W.; van Duijnhoven, Frederieke; Peeters-Baas, Marie-Jeanne T. F. D. Vrancken; Van de Vijver, Koen; Loo, Claudette E.; Jóźwiak, Katarzyna; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2016-01-01

    Successful breast conserving surgery consists of complete removal of the tumor while sparing healthy surrounding tissue. Despite currently available imaging and margin assessment tools, recognizing tumor tissue at a resection margin during surgery is challenging. Diffuse reflectance spectroscopy (DRS), which uses light for tissue characterization, can potentially guide surgeons to prevent tumor positive margins. However, inter-patient variation and changes in tissue physiology occurring during the resection might hamper this light-based technology. Here we investigate how inter-patient variation and tissue status (in vivo vs ex vivo) affect the performance of the DRS optical parameters. In vivo and ex vivo measurements of 45 breast cancer patients were obtained and quantified with an analytical model to acquire the optical parameters. The optical parameter representing the ratio between fat and water provided the best discrimination between normal and tumor tissue, with an area under the receiver operating characteristic curve of 0.94. There was no substantial influence of other patient factors such as menopausal status on optical measurements. Contrary to expectations, normalization of the optical parameters did not improve the discriminative power. Furthermore, measurements taken in vivo were not significantly different from the measurements taken ex vivo. These findings indicate that DRS is a robust technology for the detection of tumor tissue during breast conserving surgery. PMID:28018735
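
    A minimal sketch of how the discriminative power of a single DRS-derived parameter can be scored with the area under the ROC curve, as reported above; the fat/water values below are synthetic.

```python
# Minimal sketch: ROC AUC of a single discriminant parameter (a hypothetical
# fat/water ratio) for normal vs. tumor tissue. Measurements are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
normal = rng.normal(loc=1.5, scale=0.5, size=60)   # fat/water, normal tissue
tumor = rng.normal(loc=0.4, scale=0.3, size=40)    # fat/water, tumor tissue

values = np.concatenate([normal, tumor])
labels = np.concatenate([np.zeros(60), np.ones(40)])
# Tumor has the *lower* fat/water ratio here, so score the negated parameter.
print("AUC =", round(roc_auc_score(labels, -values), 2))
```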

  2. Global transcriptomic analysis of Cyanothece 51142 reveals robust diurnal oscillation of central metabolic processes

    SciTech Connect

    Stockel, Jana; Welsh, Eric A.; Liberton, Michelle L.; Kunnavakkam, Rangesh V.; Aurora, Rajeev; Pakrasi, Himadri B.

    2008-04-22

    Cyanobacteria are oxygenic photosynthetic organisms, and the only prokaryotes known to have a circadian cycle. Unicellular diazotrophic cyanobacteria such as Cyanothece 51142 can fix atmospheric nitrogen, a process exquisitely sensitive to oxygen. Thus, the intracellular environment of Cyanothece oscillates between aerobic and anaerobic conditions during a day-night cycle. This is accomplished by temporal separation of the two processes: photosynthesis during the day, and nitrogen fixation at night. While previous studies have examined periodic changes in transcript levels for a limited number of genes in Cyanothece and other unicellular diazotrophic cyanobacteria, a comprehensive study of transcriptional activity in a nitrogen-fixing cyanobacterium is necessary to understand the impact of the temporal separation of photosynthesis and nitrogen fixation on global gene regulation and cellular metabolism. We have examined the expression patterns of nearly 5000 genes in Cyanothece 51142 during two consecutive diurnal periods. We found that ~30% of these genes exhibited robust oscillating expression profiles. Interestingly, this set included genes for almost all central metabolic processes in Cyanothece. A transcriptional network of all genes with significantly oscillating transcript levels revealed that the majority of genes in numerous individual pathways, such as glycolysis, the pentose phosphate pathway and glycogen metabolism, were co-regulated and maximally expressed at distinct phases during the diurnal cycle. Our analyses suggest that the demands of nitrogen fixation greatly influence major metabolic activities inside Cyanothece cells and thus drive various cellular activities. These studies provide a comprehensive picture of how a physiologically relevant diurnal light-dark cycle influences the metabolism of a photosynthetic bacterium.

  3. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might compromise other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolations incorporate a risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. A multiobjective gradient descent algorithm is then used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in a MATLAB-based interactive tool, DataExplorer, supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method.

  4. A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.

    PubMed

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi

    2014-01-01

    The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of the rejected artifacts, with information on gradient directions and b values, on parameter estimation was investigated using the mean square error (MSE), with the variance of the noise as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p < 0.05), indicating that LPCC was more sensitive in detecting motion artifacts. The MSEs of all parameters derived from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts on the precision of the derived parameters was less than the influence of noise. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p < 0.05). The proposed post-processing workflow was reliable for improving the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing

  5. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation light detection and ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 × 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions fitted by nonlinear least squares (NLS) in R and derived a digital terrain model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and rejecting false echoes when generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
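
    A minimal sketch of the multiplicative Gold deconvolution iteration on a synthetic 1-D waveform blurred by a known Gaussian response; the kernel, iteration count, and waveform are illustrative assumptions, and the NEON processing chain is not reproduced.

```python
# Minimal sketch of the Gold deconvolution iteration:
#   x <- x * (K^T y) / (K^T K x), with x kept non-negative throughout.
import numpy as np

def gold_deconvolve(y, kernel, n_iter=500):
    conv = lambda v, k: np.convolve(v, k, mode='same')
    k_rev = kernel[::-1]                      # correlation = transpose of K
    x = np.full_like(y, y.mean())
    for _ in range(n_iter):
        denom = conv(conv(x, kernel), k_rev)
        x = x * conv(y, k_rev) / np.maximum(denom, 1e-12)
    return x

truth = np.zeros(200)
truth[[60, 80, 140]] = [5.0, 3.0, 4.0]        # hidden echoes
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 4.0) ** 2)
kernel /= kernel.sum()
waveform = np.convolve(truth, kernel, mode='same')

recovered = gold_deconvolve(waveform, kernel)
# The largest recovered amplitudes should lie near the hidden echoes.
print(sorted(np.argsort(recovered)[-3:]))
```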

  6. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2012-12-07

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
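
    A minimal sketch of the surrogate idea, assuming a toy one-dimensional "dose metric vs. setup shift" model and a scikit-learn Gaussian process: a handful of expensive evaluations train the GP, which then scores thousands of scenarios cheaply.

```python
# Minimal sketch: train a GP surrogate on a few expensive evaluations, then
# query the surrogate for many uncertainty scenarios. All values are toy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_metric(shift_mm):
    """Stand-in for a full dose recalculation under a geometric shift."""
    return np.exp(-0.5 * (shift_mm / 3.0) ** 2)    # e.g. target coverage

train_shifts = np.array([[-6.0], [-3.0], [0.0], [2.0], [5.0]])  # few samples
train_vals = expensive_metric(train_shifts).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0))
gp.fit(train_shifts, train_vals)

# Evaluate thousands of scenarios on the cheap surrogate instead.
scenarios = np.random.default_rng(7).normal(0.0, 3.0, size=(10_000, 1))
pred = gp.predict(scenarios)
print("mean coverage:", pred.mean(), "5th percentile:", np.percentile(pred, 5))
```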

  7. Fabrication of robust micro-patterned polymeric films via static breath-figure process and vulcanization.

    PubMed

    Li, Lei; Zhong, Yawen; Gong, Jianliang; Li, Jian; Huang, Jin; Ma, Zhi

    2011-02-15

    Here, we present the preparation of thermally stable and solvent-resistant micro-patterned polymeric films via a static breath-figure process and subsequent vulcanization, with a commercially available triblock polymer, polystyrene-b-polyisoprene-b-polystyrene (SIS). The vulcanized honeycomb-structured SIS films became self-supported, resistant to a wide range of organic solvents, and thermally stable up to 350 °C for 2 h, an increase of more than 300 K compared to the uncross-linked films. This superior robustness can be attributed to the high degree of polyisoprene cross-linking. The versatility of the methodology was demonstrated by applying it to another commercially available triblock polymer, polystyrene-b-polybutadiene-b-polystyrene (SBS). In particular, hydroxy groups were introduced into SBS by hydroboration, and functionalized two-dimensional micro-patterns feasible for site-directed grafting were created with the hydroxyl-containing polymers. In addition, the fixed microporous structures could be replicated to fabricate textured positive PDMS stamps. This simple technique offers new prospects in the fields of micro-patterning, soft lithography and templating.

  8. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter; it has a high sensitivity throughout the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly comprises emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
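
    A minimal sketch of the signal arithmetic described above: the EGY component is the W signal minus the sum of the RGB signals, and a 3×4 linear matrix maps (R, G, B, EGY) to output RGB. The matrix coefficients below are illustrative placeholders, not the paper's.

```python
# Minimal sketch: form the EGY residual from the W pixel and apply a 3x4
# linear color matrix to (R, G, B, EGY). Coefficients are placeholders.
import numpy as np

def reproduce_color(r, g, b, w, matrix):
    egy = w - (r + g + b)              # emerald-green/yellow residual
    return matrix @ np.array([r, g, b, egy])

matrix = np.array([
    [1.6, -0.4, -0.1, -0.1],           # output R from (R, G, B, EGY)
    [-0.3, 1.5, -0.2, 0.2],            # output G
    [-0.1, -0.5, 1.7, -0.1],           # output B
])
print(reproduce_color(0.4, 0.5, 0.2, 1.3, matrix))
```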

  9. Broad-band waveforms and robust processing for marine CSEM surveys

    NASA Astrophysics Data System (ADS)

    Myer, David; Constable, Steven; Key, Kerry

    2011-02-01

    In the marine controlled-source electromagnetic method, the Earth response varies in frequency; therefore, using a wide range of frequencies may better constrain geological structure than using a single frequency or only a few closely spaced frequencies. Binary waveforms, such as the square wave, provide a number of frequencies, though many are limited in usefulness because of the rapid decline of amplitude with frequency. Binary waveform design can be improved by recognizing that the class of doubly symmetric waveforms has special properties: they are compact, have controlled phase, are never polarizing and can be described by a simple closed-form mathematical solution. Using this solution, we discovered a compact waveform in which the amplitudes of the third and seventh harmonics are maximized and which has a signal-to-noise advantage at higher frequencies over several other common waveforms. Compact waveforms make possible improved methods for time-series processing. Using short time windows and a first-difference pre-whitener lessens spectral contamination from magnetotelluric signal and oceanographic noise; robust stacking reduces bias from time-series noise transients; and accurate variance estimates may be derived from averages of waveform-length Fourier transform windows of the time-series.

  10. On the robustness of prime response retrieval processes: evidence from auditory negative priming without probe interference.

    PubMed

    Mayr, Susanne; Buchner, Axel

    2014-02-01

    Visual negative priming has been shown to depend on the presence of probe distractors, a finding that has been traditionally seen to support the episodic retrieval model of negative priming; however, facilitated prime-to-probe contingency learning might also underlie this effect. In four sound identification experiments, the role of probe distractor interference in auditory negative priming was investigated. In each experiment, a group of participants was exposed to probe distractor interference while another group ran the task in the absence of probe distractors. Experiments 1A, 1B, and 1C varied in the extent to which fast versus accurate responding was required. Between Experiments 1 and 2, the spatial cueing of the to-be-attended ear was varied. Whereas participants switched ears from prime to probe in Experiment 1, they kept a stable attentional focus throughout Experiment 2. For trials with probe distractors, a negative priming effect was present in all experiments. For trials without probe distractors, the only ubiquitous after-effect of ignoring a prime distractor was an increase of prime response errors in ignored repetition compared to control trials, indicating that prime response retrieval processes took place. Whether negative priming beyond this error increase was found depended on the stability of the attentional focus. The findings suggest that several mechanisms underlie auditory negative priming with the only robust one being prime response retrieval.

  11. Robust superhydrophobic transparent coatings fabricated by a low-temperature sol-gel process

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Heng; Lin, Chao-Sung

    2014-06-01

    A coating with robust, superhydrophobic, and transparent properties was fabricated on glass substrates by a sol-gel method at a temperature of 80 °C. The coating was formed in a solution containing silica nanoparticles and silicic acid, in which the ratio of silica nanoparticles to silicic acid was varied to tune the roughness of the coating. Subsequently, the as-deposited coating was dip-coated with a low-surface-energy material, 1H,1H,2H,2H-perfluorooctyltrichlorosilane. The coated glass substrate was characterized in terms of surface morphology, optical transmittance, and water and CH2I2 contact angles, and its chemical as well as mechanical stability was evaluated by ultrasonication in ethanol for 120 min. The results showed that the coating had a water contact angle exceeding 160°, a sliding angle lower than 10°, and a CH2I2 static contact angle of approximately 150°. The transmittance of the coating was reduced by less than 5% compared to that of the bare glass substrate at wavelengths above 500 nm. Moreover, the properties of the coating hardly changed after the ultrasonication test, and the coating retained its superhydrophobicity after water droplet impact. Because the fabrication process is performed at low temperatures, it is feasible for scaled-up production with low energy consumption.

  12. Cancer as the Disintegration of Robustness: Population-Level Variance in Gene Expression Identifies Key Differences Between Tobacco- and HPV-Associated Oropharyngeal Carcinogenesis.

    PubMed

    Ben-Dayan, Miriam M; MacCarthy, Thomas; Schlecht, Nicolas F; Belbin, Thomas J; Childs, Geoffrey; Smith, Richard V; Prystowsky, Michael B; Bergman, Aviv

    2015-11-01

    Oropharyngeal squamous cell carcinoma is associated both with tobacco use and with human papillomavirus (HPV) infection. It is argued that carcinogen-driven tumorigenesis is a distinct disease from its virally driven counterpart. We hypothesized that tumorigenesis is the result of a loss of genotypic robustness resulting in an increase in phenotypic variation in tumors compared with adjacent histologically normal tissues, and that carcinogen-driven tumorigenesis results in greater variation than its virally driven counterpart. Our objective was to examine the loss of robustness in carcinogen-driven and virally driven oropharyngeal squamous cell carcinoma samples, and to identify the pathways potentially involved. We used coefficients of variation for messenger RNA and microRNA expression to measure the loss of robustness in oropharyngeal squamous cell carcinoma samples. Tumors were compared with matched normal tissues, and were further categorized by HPV and patient smoking status. Weighted gene coexpression networks were constructed for genes with highly variable expression among the HPV⁻ tumors from smokers. We observed more genes with variable messenger RNA expression in tumors compared with normal tissues, regardless of HPV and smoking status, and more microRNAs with variable expression in HPV⁻ and HPV⁺ tumors from smoking patients than from nonsmokers. For both the messenger RNA and microRNA data, we observed more variance among HPV⁻ tumors from smokers compared with HPV⁺ tumors from nonsmokers. The gene coexpression network construction highlighted pathways that have lost robustness in carcinogen-induced tumors but appear stable in virally induced tumors. Using coefficients of variation and coexpression networks, we identified multiple altered pathways that may play a role in carcinogen-driven tumorigenesis.
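
    A minimal sketch of the variance measure used here, on synthetic data: the per-gene coefficient of variation (standard deviation over mean) computed across samples, compared between a "normal" and a "tumor" expression matrix. All numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      normal = rng.normal(100, 10, size=(50, 200))   # samples x genes
      tumor = rng.normal(100, 25, size=(50, 200))    # same mean, more spread

      def cv(expr):
          # per-gene coefficient of variation across samples
          return expr.std(axis=0, ddof=1) / expr.mean(axis=0)

      print(f"median CV, normal: {np.median(cv(normal)):.3f}; "
            f"tumor: {np.median(cv(tumor)):.3f}")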

  13. Robust Low Cost Liquid Rocket Combustion Chamber by Advanced Vacuum Plasma Process

    NASA Technical Reports Server (NTRS)

    Holmes, Richard; Elam, Sandra; Ellis, David L.; McKechnie, Timothy; Hickman, Robert; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Next-generation, regeneratively cooled rocket engines will require materials that can withstand high temperatures while retaining high thermal conductivity. Fabrication techniques must be cost efficient so that engine components can be manufactured within the constraints of shrinking budgets. Three technologies have been combined to produce an advanced liquid rocket engine combustion chamber at NASA-Marshall Space Flight Center (MSFC) using relatively low-cost, vacuum-plasma-spray (VPS) techniques. Copper alloy NARloy-Z was replaced with a new high performance Cu-8Cr-4Nb alloy developed by NASA-Glenn Research Center (GRC), which possesses excellent high-temperature strength, creep resistance, and low cycle fatigue behavior combined with exceptional thermal stability. Functional gradient technology, developed in building composite cartridges for space furnaces, was incorporated to add oxidation-resistant and thermal barrier coatings as an integral part of the hot wall of the liner during the VPS process. NiCrAlY, utilized to produce a durable protective coating for the space shuttle high pressure fuel turbopump (HPFTP) turbine blades, was used as the functional gradient material coating (FGM). The FGM not only serves as protection from oxidation or blanching, the main cause of engine failure, but also serves as a thermal barrier because of its lower thermal conductivity, reducing the temperature of the combustion liner by 200 F, from 1000 F to 800 F, producing longer life. The objective of this program was to develop and demonstrate the technology to fabricate high-performance, robust, inexpensive combustion chambers for advanced propulsion systems (such as Lockheed-Martin's VentureStar and NASA's Reusable Launch Vehicle, RLV) using the low-cost VPS process. VPS-formed combustion chamber test articles have been formed with the FGM hot wall built in and hot fire tested, demonstrating for the first time a coating that will remain intact through the hot firing test, and with

  15. A Robust Power Remote Manipulator for Use in Waste Sorting, Processing, and Packaging - 12158

    SciTech Connect

    Cole, Matt; Martin, Scott

    2012-07-01

    Disposition of radioactive waste is one of the Department of Energy's (DOE's) highest priorities. A critical component of the waste disposition strategy is shipment of Transuranic (TRU) waste from DOE's Oak Ridge Reservation to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This is the mission of the DOE TRU Waste Processing Center (TWPC). The remote-handled TRU waste at the Oak Ridge Reservation is currently in a mixed waste form that must be repackaged to meet the WIPP Waste Acceptance Criteria (WAC). Because this remote-handled legacy waste is very diverse, sorting, size reduction, and packaging will require equipment flexibility and strength that is not possible with standard master-slave manipulators. To perform the wide range of tasks necessary with such diverse, highly contaminated material, TWPC worked with S.A. Technology (SAT) to modify SAT's Power Remote Manipulator (PRM) technology to provide the processing center with an added degree of dexterity and high load handling capability inside its shielded cells. TWPC and SAT incorporated innovative technologies into the PRM design to better suit the operations required at TWPC, and to increase the overall capability of the PRM system. Improving on an already proven PRM system will ensure that TWPC gains the capabilities necessary to efficiently complete its TRU waste disposition mission. The collaborative effort between TWPC and S.A. Technology has yielded an extremely capable and robust solution to perform the wide range of tasks necessary to repackage TRU waste containers at TWPC. Incorporating innovative technologies into a proven manipulator system, these PRMs are expected to be an important addition to the capabilities available to shielded cell operators. The PRMs provide operators with the ability to reach anywhere in the cell, lift heavy objects, and perform the size reduction associated with the disposition of noncompliant waste. Factory acceptance testing of the TWPC Powered Remote

  16. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
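
    The five-step screening sequence can be sketched with standard statistics routines, as below; the data are synthetic and the binning choices are arbitrary placeholders.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 1, 300)                          # sampled model input
      y = np.sqrt(x) + rng.normal(0, 0.1 * (1 + x), 300)  # model output

      r, _ = stats.pearsonr(x, y)                   # (i) linear relationship
      rho, _ = stats.spearmanr(x, y)                # (ii) monotonic relationship

      bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
      groups = [y[bins == k] for k in range(5)]
      h_kw, _ = stats.kruskal(*groups)              # (iii) trend in location
      iqrs = [stats.iqr(g) for g in groups]         # (iv) trend in variability

      counts, _, _ = np.histogram2d(x, y, bins=5)   # (v) departure from
      chi2, p, dof, _ = stats.chi2_contingency(counts)  # randomness

      print(r, rho, h_kw, iqrs, chi2)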

  17. Identifying and tracking dynamic processes in social networks

    NASA Astrophysics Data System (ADS)

    Chung, Wayne; Savell, Robert; Schütt, Jan-Peter; Cybenko, George

    2006-05-01

    The detection and tracking of embedded malicious subnets in an active social network can be computationally daunting due to the quantity of transactional data generated in the natural interaction of large numbers of actors comprising a network. In addition, detection of illicit behavior may be further complicated by evasive strategies designed to camouflage the activities of the covert subnet. In this work, we move beyond traditional static methods of social network analysis to develop a set of dynamic process models which encode various modes of behavior in active social networks. These models will serve as the basis for a new application of the Process Query System (PQS) to the identification and tracking of covert dynamic processes in social networks. We present a preliminary result from application of our technique in a real-world data stream: the Enron email corpus.

  18. Method for processing seismic data to identify anomalous absorption zones

    DOEpatents

    Taner, M. Turhan

    2006-01-03

    A method is disclosed for identifying zones anomalously absorptive of seismic energy. The method includes jointly time-frequency decomposing seismic traces, low-frequency bandpass filtering the decomposed traces to determine a general trend of the mean frequency and bandwidth of the seismic traces, and high-frequency bandpass filtering the decomposed traces to determine local variations in the mean frequency and bandwidth of the seismic traces. Anomalous zones are identified where there is a difference between the general trend and the local variations.
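
    A possible reading of this workflow in code, on a synthetic trace: estimate a time-varying mean frequency from a time-frequency decomposition, smooth it to obtain the general trend, and flag zones where the local variation departs from that trend. Window lengths and the two-sigma threshold are arbitrary assumptions, not taken from the patent.

      import numpy as np
      from scipy.signal import stft

      fs = 500.0
      t = np.arange(0, 2.0, 1 / fs)
      trace = np.sin(2 * np.pi * (60 - 10 * t) * t)   # frequency drifts 60->20 Hz

      f, tau, Z = stft(trace, fs=fs, nperseg=64)
      power = np.abs(Z) ** 2
      mean_freq = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)

      k = 9                                           # low-pass for the trend
      trend = np.convolve(mean_freq, np.ones(k) / k, mode="same")
      local = mean_freq - trend                       # local variations

      anomalous = np.abs(local) > 2 * local.std()     # flagged zones
      print(tau[anomalous])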

  19. Integrated Process Monitoring based on Systems of Sensors for Enhanced Nuclear Safeguards Sensitivity and Robustness

    SciTech Connect

    Humberto E. Garcia

    2014-07-01

    This paper illustrates safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas the objective of NMA-based methods is often to statistically evaluate materials unaccounted for (MUF), computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP), in order to infer the possible existence of proliferation-driven activities, a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within an MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or less reliably observed than others. The proposed similarity between NMA- and PM-based approaches is important, as performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected, based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered to be problematic for detection using NMA-based methods alone. Results demonstrate benefits of conducting PM under a system
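
    The alarm logic sketched below follows the description above under stated assumptions: an AUF count estimate is translated into a mass estimate and compared with an alarm level expressed as a fraction of a significant quantity. All numerical values are illustrative placeholders.

      SQ_PU_KG = 8.0          # IAEA significant quantity for plutonium

      def diversion_alarm(auf_count, mass_per_anomaly_kg, alarm_level_kg):
          # translate the AUF count into a diverted-mass estimate and
          # compare it with the alarm level AL
          mass_estimate = auf_count * mass_per_anomaly_kg
          return mass_estimate >= alarm_level_kg, mass_estimate

      alarm, estimate = diversion_alarm(auf_count=12,
                                        mass_per_anomaly_kg=0.05,
                                        alarm_level_kg=0.25 * SQ_PU_KG)
      print(alarm, estimate)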

  20. Mirror, Mirror on the Wall: Identifying Processes of Classroom Assessment.

    ERIC Educational Resources Information Center

    Rea-Dickins, Pauline

    2001-01-01

    Examines classroom assessment in the English-as-an-additional-language (EAL) school context. Using data from teacher interviews, classroom observations, video and audio recordings of learners, and lesson transcripts, traces different stages in the teacher assessment process and presents a working model for analysis of teacher decision making in…

  1. Processing of Perceptual Information Is More Robust than Processing of Conceptual Information in Preschool-Age Children: Evidence from Costs of Switching

    ERIC Educational Resources Information Center

    Fisher, Anna V.

    2011-01-01

    Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…

  3. Use of uncertainty polytope to describe constraint processes with uncertain time-delay for robust model predictive control applications.

    PubMed

    Huang, Gongsheng; Wang, Shengwei

    2009-10-01

    This paper studies the application of robust model predictive control (MPC) to a constrained process subject to time-delay uncertainty. The process is described by a transfer function and sampled into a discrete model for computer control design. A polytope is first developed to describe the uncertain discrete model arising from the process's time-delay uncertainty. Based on the proposed description, a linear matrix inequality (LMI) based MPC algorithm is employed and modified to design a robust controller for such a constrained process. In case studies, the effect of time-delay uncertainty on the control performance of a standard MPC algorithm is investigated, and the proposed description and the modified control algorithm are validated in the temperature control of a typical air-handling unit.

  4. Application of thermoluminescence technique to identify radiation processed foods

    NASA Astrophysics Data System (ADS)

    Kiyak, N.

    1995-02-01

    Research studies reported by various authors have shown that a few methods, one of which is the thermoluminescence technique, may be suitable for the identification of certain irradiated species and foods containing bones. This study is an application of the thermoluminescence technique to identifying irradiated samples. The investigation was carried out on different types of foodstuffs such as onions, potatoes and kiwi. Measurements show that the technique can be applied as a reliable method to distinguish irradiated food products from non-irradiated ones. The results also demonstrate that it is possible to use this method for determining the absorbed dose of irradiated samples from the established dose-effect curve.

  5. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    PubMed

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of the temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
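
    The resolution-dependence that motivates this framework is easy to demonstrate: subsampling a diffusive random walk changes the inferred mean step length by a factor of the square root of the sampling interval, as in this small simulation.

      import numpy as np

      rng = np.random.default_rng(0)
      path = np.cumsum(rng.normal(0, 1.0, size=(100_000, 2)), axis=0)

      base = np.linalg.norm(np.diff(path, axis=0), axis=1).mean()
      for thin in (1, 4, 16):
          sub = path[::thin]                       # coarser sampling rate
          step = np.linalg.norm(np.diff(sub, axis=0), axis=1).mean()
          print(f"interval x{thin}: mean step {step:.2f} "
                f"(sqrt scaling predicts {base * np.sqrt(thin):.2f})")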

  6. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground-truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package and can easily be replaced to emphasize other aspects.
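
    A toy version of the robustness measurement, assuming a single bright object and a fixed-threshold segmenter: distort the image with increasing shading and track the Dice coefficient against ground truth. The benchmark itself is richer; this only illustrates the principle.

      import numpy as np

      yy, xx = np.mgrid[0:128, 0:128]
      truth = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2   # one bright object
      image = np.where(truth, 0.8, 0.2)

      def dice(a, b):
          return 2 * (a & b).sum() / (a.sum() + b.sum())

      for level in (0.0, 0.2, 0.4, 0.6):
          shading = level * xx / 127.0     # left-to-right illumination ramp
          seg = (image + shading) > 0.5    # fixed-threshold segmentation
          print(f"shading={level:.1f}  dice={dice(seg, truth):.3f}")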

  8. MicroRNA profiling of clear cell renal cell cancer identifies a robust signature to define renal malignancy.

    PubMed

    Jung, Monika; Mollenkopf, Hans-Joachim; Grimm, Christina; Wagner, Ina; Albrecht, Marco; Waller, Tobias; Pilarsky, Christian; Johannsen, Manfred; Stephan, Carsten; Lehrach, Hans; Nietfeld, Wilfried; Rudel, Thomas; Jung, Klaus; Kristiansen, Glen

    2009-09-01

    MicroRNAs are short single-stranded RNAs that are associated with gene regulation at the transcriptional and translational level. Changes in their expression were found in a variety of human cancers. Few data are available on microRNAs in clear cell renal cell carcinoma (ccRCC). We performed genome-wide expression profiling of microRNAs using microarray analysis and quantification of specific microRNAs by TaqMan real-time RT-PCR. Matched malignant and non-malignant tissue samples from two independent sets of 12 and 72 ccRCC cases were profiled. The microarray-based experiments identified 13 over-expressed and 20 down-regulated microRNAs in malignant samples. Expression in ccRCC tissue samples compared with matched non-malignant samples, measured by RT-PCR, was increased on average by 2.7- to 23-fold for hsa-miR-16, -452*, -224, -155 and -210, but decreased by 4.8- to 138-fold for hsa-miR-200b, -363, -429, -200c, -514 and -141. No significant associations between these differentially expressed microRNAs and the clinico-pathological factors tumour stage, tumour grade and survival rate were found. Nevertheless, malignant and non-malignant tissue could clearly be differentiated by their microRNA profile. A combination of miR-141 and miR-155 resulted in a 97% overall correct classification of samples. The presented differential microRNA pattern provides a solid basis for further validation, including functional studies.

  9. Quantitative morphometry of electrophysiologically identified CA3b interneurons reveals robust local geometry and distinct cell classes.

    PubMed

    Ascoli, Giorgio A; Brown, Kerry M; Calixto, Eduardo; Card, J Patrick; Galván, E J; Perez-Rosello, T; Barrionuevo, Germán

    2009-08-20

    The morphological and electrophysiological diversity of inhibitory cells in hippocampal area CA3 may underlie specific computational roles and is not yet fully elucidated. In particular, interneurons with somata in strata radiatum (R) and lacunosum-moleculare (L-M) receive converging stimulation from the dentate gyrus and entorhinal cortex as well as within CA3. Although these cells express different forms of synaptic plasticity, their axonal trees and connectivity are still largely unknown. We investigated the branching and spatial patterns, plus the membrane and synaptic properties, of rat CA3b R and L-M interneurons digitally reconstructed after intracellular labeling. We found considerable variability within but no difference between the two layers, and no correlation between morphological and biophysical properties. Nevertheless, two cell types were identified based on the number of dendritic bifurcations, with significantly different anatomical and electrophysiological features. Axons generally branched an order of magnitude more than dendrites. However, interneurons on both sides of the R/L-M boundary revealed surprisingly modular axodendritic arborizations with consistently uniform local branch geometry. Both axons and dendrites followed a lamellar organization, and axons displayed a spatial preference toward the fissure. Moreover, only a small fraction of the axonal arbor extended to the outer portion of the invaded volume, and tended to return toward the proximal region. In contrast, dendritic trees demonstrated more limited but isotropic volume occupancy. These results suggest a role of predominantly local feedforward and lateral inhibitory control for both R and L-M interneurons. Such a role may be essential to balance the extensive recurrent excitation of area CA3 underlying hippocampal autoassociative memory function.

  10. Identifiability and Estimation in Random Translations of Marked Point Processes.

    DTIC Science & Technology

    1982-10-01

    Keywords: inversion of the Laplace transform; Hermite distribution; parametric and non-parametric estimation. Parametric estimation of h(.) and P(.): if h(.) and P(.) belong to a certain known family of functions with some unknown parameters, the expression (3) ... complex type of data is required and nothing can be learned about the arrival process. Non-parametric estimation of P(.): let h(.) be a completely ...

  11. An Excel Workbook for Identifying Redox Processes in Ground Water

    USGS Publications Warehouse

    Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.

    2009-01-01

    The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. The number of samples that can be analyzed
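
    A pure-Python sketch of the kind of rule cascade the workbook's VBA program applies is shown below. The threshold values and the iron/sulfide split are illustrative placeholders only; the actual criteria are those of McMahon and Chapelle (2008) as implemented in the workbook.

      # concentrations in mg/L; thresholds are illustrative placeholders
      def assign_redox(o2, no3, mn, fe, so4, sulfide):
          if o2 >= 0.5:
              return "oxic: O2 reduction"
          if no3 >= 0.5 and mn < 0.05 and fe < 0.1:
              return "anoxic: NO3 reduction"
          if mn >= 0.05 and fe < 0.1:
              return "anoxic: Mn(IV) reduction"
          if fe >= 0.1:
              # measured sulfide separates Fe(III) from SO4 reduction
              if sulfide > 0 and fe / sulfide < 0.3:
                  return "anoxic: SO4 reduction"
              return "anoxic: Fe(III) reduction"
          return "mixed or indeterminate"

      print(assign_redox(o2=0.1, no3=0.2, mn=0.02, fe=0.8, so4=12.0,
                         sulfide=0.02))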

  12. Robust Library Building for Autonomous Classification of Downhole Geophysical Logs Using Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Silversides, Katherine L.; Melkumyan, Arman

    2017-03-01

    Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Each machine learning technique has unique properties that will impact the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is not sufficient information in the library. The impact that these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral or negative output. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.
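
    The mean-reversion behaviour discussed above can be reproduced with a small regression example: far from the training library, a GP prediction falls back towards the prior mean with large uncertainty. The kernel choice and values below are illustrative, not the study's configuration.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      x_train = np.array([[1.0], [2.0], [3.0]])   # library depths (arbitrary)
      y_train = np.array([1.0, 1.0, -1.0])        # +1 marker shale, -1 not

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                    optimizer=None)   # keep kernel fixed
      gp.fit(x_train, y_train)

      x_test = np.array([[2.0], [10.0]])          # near and far from library
      mean, std = gp.predict(x_test, return_std=True)
      print(mean, std)   # the far point reverts towards the zero prior mean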

  14. Highly robust hydrogels via a fast, simple and cytocompatible dual crosslinking-based process.

    PubMed

    Costa, Ana M S; Mano, João F

    2015-11-07

    A highly robust hydrogel device made from a single biopolymer formulation is reported. Owing to the presence of covalent and non-covalent crosslinks, these engineered systems were able to (i) sustain a compressive strength of ca. 20 MPa, (ii) quickly recover upon unloading, and (iii) encapsulate cells with high viability rates.

  15. Identifying User Needs and the Participative Design Process

    NASA Astrophysics Data System (ADS)

    Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan; Bergvall-Kåreborn, Birgitta; Andersson, Anna-Lena

    As the number of persons with dementia increases, and with it the demands on care and support at home, additional solutions to support persons with dementia are needed. The COGKNOW project aims to develop an integrated, user-driven cognitive prosthetic device to help persons with dementia. The project focuses on support in the areas of memory, social contact, daily living activities and feelings of safety. The design process is user-participatory and consists of iterative cycles at three test sites across Europe. In the first cycle, persons with dementia and their carers (n = 17) actively participated in the developmental process. Based on their priorities of needs and solutions, on their disabilities, and after discussion within the team, a top-four list of Information and Communication Technology (ICT) solutions was made and now serves as the basis for development: in the area of remembering, day and time orientation support, a find-mobile service and a reminding service; in the area of social contact, telephone support by picture dialling; in the area of daily activities, media control support through a music playback and radio function; and finally, in the area of safety, a warning service to indicate when the front door is open and an emergency contact service to enhance feelings of safety. The results of this first project phase show that, in general, the people with mild dementia as well as their carers were able to express and prioritize their (unmet) needs and the kind of technological assistance they preferred in the selected areas. In the next phases it will be tested whether the user-participatory design and multidisciplinary approach employed in the COGKNOW project result in a user-friendly, useful device that positively impacts the autonomy and quality of life of persons with dementia and their carers.

  16. Individualized relapse prediction: personality measures and striatal and insular activity during reward-processing robustly predict relapse*

    PubMed Central

    Gowin, Joshua L.; Ball, Tali M.; Wittmann, Marc; Tapert, Susan F.; Paulus, Martin P.

    2015-01-01

    Background: Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Methods: 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We compared brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, and a combined model) to generate predictions for each participant regarding their relapse likelihood. Results: 18 individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics, such that a positive test for relapse gave a likelihood ratio of 2.63, whereas a negative test gave a likelihood ratio of 0.48. Conclusions: These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. PMID:25977206
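
    The reported test characteristics follow from standard definitions; the sketch below computes positive and negative likelihood ratios from a confusion table, using illustrative counts (not the study's data) that give ratios of roughly similar magnitude.

      def likelihood_ratios(tp, fn, fp, tn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          return (sensitivity / (1 - specificity),     # LR+
                  (1 - sensitivity) / specificity)     # LR-

      # illustrative counts, not the study's confusion table
      lr_pos, lr_neg = likelihood_ratios(tp=13, fn=5, fp=14, tn=36)
      print(round(lr_pos, 2), round(lr_neg, 2))        # ~2.6 and ~0.4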

  17. A two-step patterning process increases the robustness of periodic patterning in the fly eye.

    PubMed

    Gavish, Avishai; Barkai, Naama

    2016-06-01

    Complex periodic patterns can self-organize through dynamic interactions between diffusible activators and inhibitors. In the biological context, self-organized patterning is challenged by spatial heterogeneities ('noise') inherent to biological systems. How spatial variability impacts the periodic patterning mechanism and how it can be buffered to ensure precise patterning is not well understood. We examine the effect of spatial heterogeneity on the periodic patterning of the fruit fly eye, an organ composed of ∼800 miniature eye units (ommatidia) whose periodic arrangement along a hexagonal lattice self-organizes during early stages of fly development. The patterning follows a two-step process, with an initial formation of evenly spaced clusters of ∼10 cells followed by a subsequent refinement of each cluster into a single selected cell. Using a probabilistic approach, we calculate the rate of patterning errors resulting from spatial heterogeneities in cell size, position and biosynthetic capacity. Notably, error rates were largely independent of the desired cluster size but followed the distributions of signaling speeds. Pre-formation of large clusters therefore greatly increases the reproducibility of the overall periodic arrangement, suggesting that the two-stage patterning process functions to guard the pattern against errors caused by spatial heterogeneities. Our results emphasize the constraints imposed on self-organized patterning mechanisms by the need to buffer stochastic effects. Author summary: Complex periodic patterns are common in nature and are observed in physical, chemical and biological systems. Understanding how these patterns are generated in a precise manner is a key challenge. Biological patterns are especially intriguing, as they are generated in a noisy environment; cell position and cell size, for example, are subject to stochastic variations, as are the strengths of the chemical signals mediating cell-to-cell communication. The need

  18. New results on the robust stability of PID controllers with gain and phase margins for UFOPTD processes.

    PubMed

    Jin, Q B; Liu, Q; Huang, B

    2016-03-01

    This paper considers the problem of determining all the robust PID (proportional-integral-derivative) controllers in terms of the gain and phase margins (GPM) for open-loop unstable first order plus time delay (UFOPTD) processes. This is the first time that the feasible ranges of the GPM specifications provided by a PID controller are given for UFOPTD processes. A gain and phase margin tester is used to modify the original model, and the ranges of the margin specifications are derived such that the modified model can be stabilized by a stabilizing PID controller based on the Hermite-Biehler Theorem. Furthermore, we obtain all the controllers satisfying a given margin specification. Simulation studies show how to use the results to design a robust PID controller.

  19. Deep transcriptome-sequencing and proteome analysis of the hydrothermal vent annelid Alvinella pompejana identifies the CvP-bias as a robust measure of eukaryotic thermostability

    PubMed Central

    2013-01-01

    Background: Alvinella pompejana is an annelid worm that inhabits deep-sea hydrothermal vent sites in the Pacific Ocean. Living at a depth of approximately 2500 meters, these worms experience extreme environmental conditions, including high temperature and pressure as well as high levels of sulfide and heavy metals. A. pompejana is one of the most thermotolerant metazoans, making this animal a subject of great interest for studies of eukaryotic thermoadaptation. Results: In order to complement existing EST resources we performed deep sequencing of the A. pompejana transcriptome. We identified several thousand novel protein-coding transcripts, nearly doubling the sequence data for this annelid. We then performed an extensive survey of previously established prokaryotic thermoadaptation measures to search for global signals of thermoadaptation in A. pompejana in comparison with mesophilic eukaryotes. In an orthologous set of 457 proteins, we found that the best indicator of thermoadaptation was the difference in frequency of charged versus polar residues (CvP-bias), which was highest in A. pompejana. CvP-bias robustly distinguished prokaryotic thermophiles from prokaryotic mesophiles, as well as the thermophilic fungus Chaetomium thermophilum from mesophilic eukaryotes. Experimental values for thermophilic proteins supported higher CvP-bias as a measure of thermal stability when compared to their mesophilic orthologs. Proteome-wide mean CvP-bias also correlated with the body temperatures of homeothermic birds and mammals. Conclusions: Our work extends the transcriptome resources for A. pompejana and identifies the CvP-bias as a robust and widely applicable measure of eukaryotic thermoadaptation. Reviewers: This article was reviewed by Sándor Pongor, L. Aravind and Anthony M. Poole. PMID:23324115
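
    The CvP-bias itself is simple to compute: the percentage of charged residues (D, E, K, R) minus the percentage of polar residues (N, Q, S, T) in a sequence, as in this sketch (the example sequence is arbitrary).

      CHARGED = set("DEKR")
      POLAR = set("NQST")

      def cvp_bias(seq):
          # percentage of charged minus percentage of polar residues
          seq = seq.upper()
          charged = sum(aa in CHARGED for aa in seq)
          polar = sum(aa in POLAR for aa in seq)
          return 100.0 * (charged - polar) / len(seq)

      print(cvp_bias("MKEELRKAVDESTQNLKR"))   # arbitrary example sequence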

  20. A robust hidden semi-Markov model with application to aCGH data processing.

    PubMed

    Ding, Jiarui; Shah, Sohrab

    2013-01-01

    Hidden semi-Markov models are effective at modelling sequences with a succession of homogeneous zones by choosing appropriate state duration distributions. To compensate for model mis-specification and provide protection against outliers, we design a robust hidden semi-Markov model with Student's t mixture models as the emission distributions. The proposed approach is used to model array-based comparative genomic hybridization data. Experiments conducted on the benchmark data from the Coriell cell lines and glioblastoma multiforme data illustrate the reliability of the technique.

  1. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
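
    As a stand-in for the leveling step (not the authors' implementation), the sketch below fits a planar trend by iteratively reweighted least squares so that features and outliers are downweighted, then subtracts the plane.

      import numpy as np

      rng = np.random.default_rng(3)
      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      image = 0.02 * xx - 0.01 * yy + rng.normal(0, 0.05, (64, 64))
      image[30:34, 30:34] += 2.0                  # a real feature, not trend

      A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(xx.size)])
      z = image.ravel()
      w = np.ones_like(z)
      for _ in range(5):
          sw = np.sqrt(w)
          coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
          resid = z - A @ coef
          scale = 1.4826 * np.median(np.abs(resid)) + 1e-12
          w = 1.0 / (1.0 + (resid / (3 * scale)) ** 2)  # downweight outliers

      leveled = (z - A @ coef).reshape(image.shape)
      print(coef)                                 # tilt ~ [0.02, -0.01, 0]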

  2. Multiple feedback loop design in the tryptophan regulatory network of Escherichia coli suggests a paradigm for robust regulation of processes in series

    PubMed Central

    Bhartiya, Sharad; Chaudhary, Nikhil; Venkatesh, K.V; Doyle, Francis J

    2005-01-01

    Biological networks have evolved through adaptation in uncertain environments. Of the different possible design paradigms, some may offer functional advantages over others. These designs can be quantified by the structure of the network resulting from molecular interactions and the parameter values. One may, therefore, like to identify the design motif present in the evolved network that makes it preferable over other alternatives. In this work, we focus on the regulatory networks characterized by serially arranged processes, which are regulated by multiple feedback loops. Specifically, we consider the tryptophan system present in Escherichia coli, which may be conceptualized as three processes in series, namely transcription, translation and tryptophan synthesis. The multiple feedback loop motif results from three distinct negative feedback loops, namely genetic repression, mRNA attenuation and enzyme inhibition. A framework is introduced to identify the key design components of this network responsible for its physiological performance. We demonstrate that the multiple feedback loop motif, as seen in the tryptophan system, enables robust performance to variations in system parameters while maintaining a rapid response to achieve homeostasis. Superior performance, if arising from a design principle, is intrinsic and, therefore, inherent to any similarly designed system, either natural or engineered. An experimental engineering implementation of the multiple feedback loop design on a two-tank system supports the generality of the robust attributes offered by the design. PMID:16849267
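
    A toy model of three processes in series with end-product feedback on each stage, loosely inspired by the motif described above, can illustrate the idea; all rate constants and the inhibition form below are invented.

      import numpy as np
      from scipy.integrate import solve_ivp

      def trp_system(t, y, feedback):
          m, e, trp = y                     # mRNA, enzyme, tryptophan
          inh = 1.0 / (1.0 + trp) if feedback else 1.0
          dm = 1.0 * inh - 0.5 * m          # transcription with repression
          de = 1.0 * m * inh - 0.2 * e      # translation with attenuation
          dtrp = 1.0 * e * inh - 0.3 * trp  # synthesis with enzyme inhibition
          return [dm, de, dtrp]

      for fb in (True, False):
          sol = solve_ivp(trp_system, (0.0, 50.0), [0.0, 0.0, 0.0], args=(fb,))
          print(f"feedback={fb}: steady tryptophan ~ {sol.y[2, -1]:.2f}")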

  3. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not robustly work for such strongly coupled problems its applicability being limited by small time step sizes (e.g. 5-10 days) whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The usage of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.

  4. Robustness and Assortativity for Diffusion-like Processes in Scale- free Networks

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; D'Agostino, Gregorio; Zlatic, Vinko; Caldarelli, Guido

    2012-02-01

    By analyzing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before the epidemic/failure spreads. Moreover, we study by computer simulations a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemics/failures, degree-targeted immunization policies increase their resilience to systemic risk.

  5. Robustness and assortativity for diffusion-like processes in scale-free networks

    NASA Astrophysics Data System (ADS)

    D'Agostino, G.; Scala, A.; Zlatić, V.; Caldarelli, G.

    2012-03-01

    By analysing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before the epidemic/failure spreads. Moreover, we study by computer simulations the sandpile cascade model, a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemics/failures, degree-targeted immunization policies increase their resilience to systemic risk.
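
    Both quantities discussed in these records are directly computable on a synthetic scale-free graph: the degree assortativity coefficient, and the epidemic threshold estimated as the reciprocal of the adjacency spectral radius.

      import networkx as nx
      import numpy as np

      g = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # scale-free graph

      r = nx.degree_assortativity_coefficient(g)
      lam_max = max(np.linalg.eigvalsh(nx.to_numpy_array(g)))
      print(f"assortativity r = {r:.3f}, "
            f"epidemic threshold ~ {1.0 / lam_max:.3f}")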

  6. Stroke-associated pattern of gene expression previously identified by machine-learning is diagnostically robust in an independent patient population.

    PubMed

    O'Connell, Grant C; Chantler, Paul D; Barr, Taura L

    2017-12-01

    Our group recently employed genome-wide transcriptional profiling in tandem with machine-learning based analysis to identify a ten-gene pattern of differential expression in peripheral blood which may have utility for detection of stroke. The objective of this study was to assess the diagnostic capacity and temporal stability of this stroke-associated transcriptional signature in an independent patient population. Publicly available whole blood microarray data generated from 23 ischemic stroke patients at 3, 5, and 24 h post-symptom onset, as well as from 23 cardiovascular disease controls, were obtained via the National Center for Biotechnology Information Gene Expression Omnibus. Expression levels of the ten candidate genes (ANTXR2, STK3, PDK4, CD163, MAL, GRAP, ID3, CTSZ, KIF1B, and PLXDC2) were extracted, compared between groups, and evaluated for their discriminatory ability at each time point. We observed a largely identical pattern of differential expression between stroke patients and controls across the ten candidate genes as reported in our prior work. Furthermore, the coordinate expression levels of the ten candidate genes were able to discriminate between stroke patients and controls with levels of sensitivity and specificity upwards of 90% across all three time points. These findings confirm the diagnostic robustness of the previously identified pattern of differential expression in an independent patient population, and further suggest that it is temporally stable over the first 24 h of stroke pathology.

  7. Processivity of peptidoglycan synthesis provides a built-in mechanism for the robustness of straight-rod cell morphology

    PubMed Central

    Sliusarenko, Oleksii; Cabeen, Matthew T.; Wolgemuth, Charles W.; Jacobs-Wagner, Christine; Emonet, Thierry

    2010-01-01

    The propagation of cell shape across generations is remarkably robust in most bacteria. Even when deformations are acquired, growing cells progressively recover their original shape once the deforming factors are eliminated. For instance, straight-rod-shaped bacteria grow curved when confined to circular microchambers, but straighten in a growth-dependent fashion when released. Bacterial cell shape is maintained by the peptidoglycan (PG) cell wall, a giant macromolecule of glycan strands that are synthesized by processive enzymes and cross-linked by peptide chains. Changes in cell geometry require modifying the PG and therefore depend directly on the molecular-scale properties of PG structure and synthesis. Using a mathematical model we quantify the straightening of curved Caulobacter crescentus cells after disruption of the cell-curving crescentin structure. We observe that cells straighten at a rate that is about half (57%) the cell growth rate. Next we show that in the absence of other effects there exists a mathematical relationship between the rate of cell straightening and the processivity of PG synthesis—the number of subunits incorporated before termination of synthesis. From the measured rate of cell straightening this relationship predicts processivity values that are in good agreement with our estimates from published data. Finally, we consider the possible role of three other mechanisms in cell straightening. We conclude that regardless of the involvement of other factors, intrinsic properties of PG processivity provide a robust mechanism for cell straightening that is hardwired to the cell wall synthesis machinery. PMID:20479277

  8. The Self in Movement: Being Identified and Identifying Oneself in the Process of Migration and Asylum Seeking.

    PubMed

    Watzlawik, Meike; Brescó de Luna, Ignacio

    2017-03-15

    How migration influences the processes of identity development has been under longstanding scrutiny in the social sciences. Usually, stage models have been suggested, and different strategies for acculturation (e.g., integration, assimilation, separation, and marginalization) have been considered as ways to make sense of the psychological transformations of migrants as a group. On an individual level, however, identity development is a more complex endeavor: Identity does not just develop by itself, but is constructed as an ongoing process. To capture these processes, we will look at different aspects of migration and asylum seeking; for example, the culture-specific values and expectations of the hosting (European) countries (e.g., as identifier), but also of the arriving individuals/groups (e.g., identified as refugees). Since the two may contradict each other, negotiations between identity claims and identity assignments become necessary. Ways to solve these contradictions are discussed, with a special focus on the experienced (and often missing) agency in different settings upon arrival in a new country. In addition, it will be shown how sudden events (e.g., 9/11, the Charlie Hebdo attack) may challenge identity processes in different ways.

  9. Robust Multi-Length Scale Deformation Process Design for the Control of Microstructure-Sensitive Material Properties

    DTIC Science & Technology

    2007-07-18

    die underfill caused by material porosity. This problem studies the effect of random voids in the design of flashless closed die forging processes... provides a robust way to estimate the statistics of the extent of die underfill as a result of a random distribution of voids in the billet. The initial... where f0 = 0.03 is the mean void fraction. A 9x9 grid was used for computing the statistics. The mean underfill was estimated to be...

  10. Identifying the demographic processes relevant for species conservation in human-impacted areas: does the model matter?

    PubMed

    González, Edgar J; Rees, Mark; Martorell, Carlos

    2013-02-01

    The identification of the demographic processes responsible for the decline in population growth rate (λ) in disturbed areas would allow conservation efforts to be efficiently directed. Integral projection models (IPMs) are used for this purpose, but it is unclear whether the conclusions drawn from their analysis are sensitive to how functional structures (the functions that describe how survival, growth and fecundity vary with individual size) are selected. We constructed 12 IPMs that differed in their functional structure by combining two reproduction models and three functional expressions (generalized linear, cubic and additive models), each with and without simplification. Models were parameterized with data from two populations of two endangered cacti subject to different disturbance intensities. For each model, we identified the demographic processes that most affected λ in the presence of disturbance. Simulations were performed on artificial data and analyzed as above to assess the generality of the results. In both empirical and simulated data, the same processes were identified as making the largest contribution to changes in λ regardless of the functional structure. The major differences in the results were due to misspecification of the fecundity functions, whilst functional expression and model simplification had lesser effects. Therefore, as long as the demographic attributes of the species are well known and incorporated into the model, IPMs will robustly identify the processes that most affect the growth of populations subject to disturbance, making them a reliable tool for developing conservation strategies.
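
    For readers unfamiliar with IPMs, the generic structure being varied above can be written in standard notation (this is the textbook IPM form, not the authors' twelve specific parameterizations):

      n(y, t+1) = \int_{\Omega} K(y, x)\, n(x, t)\, dx, \qquad K(y, x) = P(y, x) + F(y, x)

    where n(x, t) is the density of individuals of size x at time t, P(y, x) = s(x) g(y | x) combines survival and growth, F(y, x) is fecundity, and the population growth rate λ is the dominant eigenvalue of the kernel K. The "functional structures" compared in the study are different statistical fits for s, g, and F.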

  11. Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing

    DTIC Science & Technology

    2015-08-27

    frequency estimator based on double sub-segment phase difference, IEEE Signal Processing Letters, vol. 22, no. 8, pp. 1055-1059, Aug. 2015...

  12. Robust factor selection in early cell culture process development for the production of a biosimilar monoclonal antibody.

    PubMed

    Sokolov, Michael; Ritscher, Jonathan; MacKinnon, Nicola; Bielser, Jean-Marc; Brühlmann, David; Rothenhäusler, Dominik; Thanei, Gian; Soos, Miroslav; Stettler, Matthieu; Souquet, Jonathan; Broly, Hervé; Morbidelli, Massimo; Butté, Alessandro

    2017-01-01

    This work presents a multivariate methodology combining principal component analysis, the Mahalanobis distance, and decision trees for the selection of process factors and their levels in early process development of generic molecules. It is applied to a high-throughput study testing more than 200 conditions for the production of a biosimilar monoclonal antibody at microliter scale. The methodology provides the most important selection criteria for the process design in order to improve product quality towards the quality attributes of the originator molecule. Robustness of the selections is ensured by cross-validation of each analysis step. The concluded selections are then successfully validated with an external data set. Finally, the results are compared to those obtained with a widely used software package, revealing similarities and clear advantages of the presented methodology. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 33:181-191, 2017.
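
    A minimal sketch of the analysis chain named above (PCA, then Mahalanobis distance, then a decision tree), assuming hypothetical arrays X (quality attributes per condition), factors (process factor settings per condition), and target (the originator's quality attributes); the published pipeline adds cross-validation at each step.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.tree import DecisionTreeRegressor

      def rank_conditions(X, factors, target, n_pc=3):
          # Project the quality attributes of all tested conditions onto principal components
          pca = PCA(n_components=n_pc).fit(X)
          scores = pca.transform(X)
          t = pca.transform(target.reshape(1, -1))
          # Mahalanobis distance of each condition to the originator in PC space
          cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
          d = scores - t
          dist = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
          # A decision tree maps process factors to distance, exposing influential factors
          tree = DecisionTreeRegressor(max_depth=3).fit(factors, dist)
          return dist, tree  # small distance = closer to the originator's quality profile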

  13. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
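
    In generic sparsity-aware notation, such an l1-regularized estimator typically takes the form below (an assumed canonical form; the paper's model additionally carries the kriged spatial field and temporal dynamics):

      \min_{x,\, o}\ \| y - H x - o \|_2^2 + \lambda \| o \|_1

    Here y collects the measurements, H x is the model prediction of the spatial-temporal process, and o is the outlier vector; the l1 penalty drives most entries of o to zero, so its nonzero entries flag the outlying measurements while the estimate x remains protected from them.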

  14. MIMO model of an interacting series process for Robust MPC via System Identification.

    PubMed

    Wibowo, Tri Chandra S; Saad, Nordin

    2010-07-01

    This paper discusses empirical modeling using system identification techniques, with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant whose dynamics exhibit the typical behavior of an interacting series process. Three practical approaches are investigated and their performances are evaluated. The models developed are also examined in a real-time implementation of a linear model predictive controller. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state error in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state-space model for an interacting series process are discussed.

  15. 14CO2 processing using an improved and robust molecular sieve cartridge

    NASA Astrophysics Data System (ADS)

    Wotte, Anja; Wordell-Dietrich, Patrick; Wacker, Lukas; Don, Axel; Rethemeyer, Janet

    2017-06-01

    Radiocarbon (14C) analysis on CO2 can provide valuable information on the carbon cycle as different carbon pools differ in their 14C signature. While fresh, biogenic carbon shows atmospheric 14C concentrations, fossil carbon is 14C free. As shown in previous studies, CO2 can be collected for 14C analysis using molecular sieve cartridges (MSC). These devices have previously been made of plastic and glass, which can easily be damaged during transport. We thus constructed a robust MSC suitable for field application under tough conditions or in remote areas, which is entirely made of stainless steel. The new MSC should also be tight over several months to allow long sampling campaigns and transport times, which was proven by a one year storage test. The reliability of the 14CO2 results obtained with the MSC was evaluated by detailed tests of different procedures to clean the molecular sieve (zeolite type 13X) and for the adsorption and desorption of CO2 from the zeolite using a vacuum rig. We show that the 14CO2 results are not affected by any contamination of modern or fossil origin, cross contamination from previous samples, and by carbon isotopic fractionation. In addition, we evaluated the direct CO2 transfer from the MSC into the automatic graphitization equipment AGE with the subsequent 14C AMS analysis as graphite. This semi-automatic approach can be fully automated in the future, which would allow a high sample throughput. We obtained very promising, low blank values between 0.0018 and 0.0028 F14C (equivalent to 50,800 and 47,200 yrs BP), which are within the analytical background and lower than results obtained in previous studies.
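
    The quoted equivalence between F14C blank values and ages follows the standard conventional-age relation t = -8033 ln(F14C), with 8033 yr the Libby mean life; a quick check in Python reproduces the numbers above.

      import math

      def f14c_to_age_bp(f14c):
          # Conventional radiocarbon age (Stuiver & Polach convention), Libby mean life 8033 yr
          return -8033.0 * math.log(f14c)

      for f in (0.0018, 0.0028):
          print(f, round(f14c_to_age_bp(f)))   # ~50,800 and ~47,200 yr BP, as quoted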

  16. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
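
    Of the focal-plane techniques listed, the Laplacian pyramid is the easiest to make concrete; below is a minimal NumPy sketch of one common construction (an illustrative variant, not the focal-plane hardware implementation described above).

      import numpy as np
      from scipy import ndimage

      def laplacian_pyramid(img, levels=4, sigma=1.0):
          # Each level keeps the detail removed by blurring; the final entry is the residual
          pyramid, current = [], img.astype(float)
          for _ in range(levels):
              blurred = ndimage.gaussian_filter(current, sigma)
              pyramid.append(current - blurred)   # band-pass detail at this scale
              current = blurred[::2, ::2]         # decimate for the next octave
          pyramid.append(current)                 # low-pass residual
          return pyramid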

  17. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process.

    PubMed

    Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.
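
    In equation form, the quantity minimized by the hybrid BSO is the summed ITSE over the control loops (standard ITSE notation; the horizon T and loop count N are generic symbols, not values from the paper):

      J = \sum_{i=1}^{N} \int_{0}^{T} t\, e_i^2(t)\, dt

    where e_i(t) is the tracking error of loop i; the factor t penalizes errors that persist late in the response, favoring controllers that settle quickly.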

  18. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process

    PubMed Central

    Abdelkarim, Noha; Mohamed, Amr E.; El-Garhy, Ahmed M.; Dorrah, Hassen T.

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller. PMID:27807444

  19. Use of a qualitative methodological scaffolding process to design robust interprofessional studies.

    PubMed

    Wener, Pamela; Woodgate, Roberta L

    2013-07-01

    Increasingly, researchers are using qualitative methodology to study interprofessional collaboration (IPC). With this increase in use, there seems to be an appreciation for how qualitative studies allow us to understand the unique individual or group experience in more detail and form a basis for policy change and innovative interventions. Furthermore, there is an increased understanding of the potential of studying new or emerging phenomena qualitatively to inform further large-scale studies. Although there is a current trend toward greater acceptance of the value of qualitative studies describing the experiences of IPC, these studies are mostly descriptive in nature. Applying a process suggested by Crotty (1998) may encourage researchers to consider the value in situating research questions within a broader theoretical framework that will inform the overall research approach including methodology and methods. This paper describes the application of this process to a research project and then illustrates how the process encouraged iterative cycles of thinking and doing. The authors describe each step of the process, share decision-making points, and suggest an additional step to the process. Applying this approach to selecting data collection methods may serve to guide and support the qualitative researcher in creating a well-designed study approach.

  20. Robust carrier formation process in low-band gap organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Yonezawa, Kouhei; Kamioka, Hayato; Yasuda, Takeshi; Han, Liyuan; Moritomo, Yutaka

    2013-10-01

    By means of femtosecond time-resolved spectroscopy, we investigated the carrier formation process against film morphology and temperature (T) in the highly efficient organic photovoltaic polymer poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b']dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  1. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing, and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the net present value calculated over a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced large NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is weighted as heavily as overall cost (NPV20) in an economic evaluation, the hybrid technology moves up alongside biotrickling filtration as the most preferred technology.
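
    The NPV20 criterion is the ordinary discounted sum of yearly cash flows over the 20-year lifespan; the sketch below uses hypothetical numbers (the discount rate and costs are illustrative, not the paper's).

      def npv(cashflows, rate):
          # cashflows[t] = net cash flow in year t (t = 0 is the initial investment)
          return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

      # Illustrative only: 100 kUSD capital cost, then 12 kUSD/yr operating cost for 20 years
      flows = [-100.0] + [-12.0] * 20
      print(round(npv(flows, 0.05), 1))   # ~ -249.5, a discounted life-cycle cost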

  2. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with 1-2 km or less horizontal resolution. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through cloud coverage (vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present the high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E, 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  3. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2017-08-01

    Throat back up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back up liners. The quality characteristic identified was back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of each of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back wall temperature of the throat back up liners. The enhancement in performance of the throat back up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back up liners was verified by a ground firing test.
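
    The smaller-the-better criterion used for the L9 analysis is S/N = -10 log10(mean(y^2)); a quick sketch with hypothetical replicate back wall temperatures shows the computation. The factor levels that maximize S/N across the nine runs are the ones that minimize the response while being least sensitive to noise.

      import numpy as np

      def sn_smaller_is_better(y):
          # Taguchi signal-to-noise ratio for a "smaller the better" response
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y ** 2))

      # Hypothetical replicate back wall temperatures (deg C) for one L9 run
      print(round(sn_smaller_is_better([118.0, 122.0, 120.0]), 2))   # ~ -41.58 dB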

  4. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back up liners. The quality characteristic identified was back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of each of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back wall temperature of the throat back up liners. The enhancement in performance of the throat back up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back up liners was verified by a ground firing test.

  5. Uncertainties and robustness of the ignition process in type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Iapichino, L.; Lesaffre, P.

    2010-03-01

    Context. It is widely accepted that the onset of explosive carbon burning in the core of a carbon-oxygen white dwarf (CO WD) triggers the ignition of a type Ia supernova (SN Ia). The features of the ignition are among the few free parameters of the SN Ia explosion theory. Aims: We explore the role of two different issues for the ignition process: firstly, the ignition is studied in WD models resulting from different accretion histories; secondly, we estimate how a different reaction rate for C-burning can affect the ignition. Methods: Two-dimensional hydrodynamical simulations of temperature perturbations in the WD core (“bubbles”) are performed with the FLASH code. In order to evaluate the impact of the C-burning reaction rate on the WD model, the evolution code FLASH_THE_TORTOISE from Lesaffre et al. (2006, MNRAS, 368, 187) is used. Results: In different WD models a key role is played by the different gravitational acceleration in the progenitor's core. As a consequence, ignition is disfavored at a large distance from the WD center in models with a larger central density, resulting from the evolution of initially more massive progenitors. Changes in the C reaction rate at T ≲ 5 × 10^8 K slightly influence the ignition density in the WD core, while the ignition temperature is almost unaffected. Recent measurements of new resonances in the C-burning reaction rate (Spillane et al. 2007, Phys. Rev. Lett., 98, 122501) do not affect the core conditions of the WD significantly. Conclusions: This simple analysis, performed on the features of the temperature perturbations in the WD core, should be extended in the framework of state-of-the-art numerical tools for studying turbulent convection and ignition in the WD core. Future measurements of the C-burning cross section at low energy, though certainly useful, are not expected to affect our current understanding of the ignition process dramatically.

  6. Towards a robust assessment of bridge clogging processes in flood risk management

    NASA Astrophysics Data System (ADS)

    Gschnitzer, T.; Gems, B.; Mazzorana, B.; Aufleger, M.

    2017-02-01

    River managers are aware that wood-clogging mechanisms frequently trigger damage-causing processes like structural damage at bridges, sudden channel outbursts, and, occasionally, major displacements of the water course. To successfully mitigate flood risks related to the transport of large wood (LW), river managers need a guideline for an accurate and reliable risk assessment procedure and for the design of river sections and bridges that are at risk of LW clogging. In recent years, comprehensive research dealing with the triggers of wood-clogging mechanisms at bridges and the corresponding impacts on flood risk was accomplished at the University of Innsbruck. A large set of laboratory experiments in a rectangular flume was conducted. In this paper we provide an overall view of these tests and present our findings. By applying a logistic regression analysis, the available knowledge on the influence of geometrical, hydraulic, and wood-related parameters on LW clogging probabilities is processed in a generalized form. Based on the experimental modeling results, a practice-oriented guideline that supports the assessment of flood risk induced by LW clogging is presented. In this context, two specific local structural protection measures at the bridge, aiming for a significant decrease of the entrapment probabilities, are illustrated: (i) a deflecting baffle installed on the upstream face of the bridge and (ii) a channel constriction leading to a change in flow state and a corresponding increase of the flow velocities and the freeboard at the bridge cross section. The presented guideline is based on a three-step approach: estimation of LW potential, entrainment, and transport; clogging scenario at the bridge; and the impact on channel and floodplain hydraulics. For a specific bridge susceptible to potential clogging caused by LW entrapment, it allows for a qualitative evaluation of potential LW entrainment in the upstream river segments, its transport toward the...
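
    A minimal sketch of the kind of logistic-regression clogging model described above, with assumed illustrative predictors (the guideline's actual geometrical, hydraulic, and wood-related parameters and fitted coefficients are in the paper):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Assumed illustrative predictors per flume run: freeboard / log diameter,
      # Froude number, log length / channel width; y = 1 if the log entrapped
      X = np.array([[0.4, 0.9, 1.2], [1.1, 0.6, 0.8],
                    [0.2, 1.0, 1.5], [1.4, 0.5, 0.7]])
      y = np.array([1, 0, 1, 0])

      model = LogisticRegression().fit(X, y)
      p_clog = model.predict_proba([[0.5, 0.8, 1.1]])[0, 1]
      print(p_clog)   # estimated entrapment probability for a new configuration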

  7. Acquisition, processing, and visualization of big data as applied to robust multivariate impact models

    NASA Astrophysics Data System (ADS)

    Romeo, L.; Rose, K.; Bauer, J. R.; Dick, D.; Nelson, J.; Bunn, A.; Buenau, K. E.; Coleman, A. M.

    2016-02-01

    Increased offshore oil exploration and production emphasizes the need for environmental, social, and economic impact models that require big data from disparate sources to conduct thorough multi-scale analyses. The National Energy Technology Laboratory's Cumulative Spatial Impact Layers (CSILs) and Spatially Weighted Impact Model (SWIM) are user-driven, flexible suites of GIS-based tools that can efficiently process, integrate, visualize, and analyze a wide variety of big datasets that are acquired to better understand potential impacts for oil spill prevention and response readiness needs. These tools provide solutions to address a range of stakeholder questions and aid in prioritization decisions needed when responding to oil spills. This is particularly true when highlighting ecologically sensitive areas and spatially analyzing which species may be at risk. Model outputs provide unique geospatial visualizations of potential impacts and informational reports based on user preferences. The spatio-temporal capabilities of these tools can be applied to a range of anthropogenic and natural disasters, enabling decision-makers to be better informed of potential impacts and response needs.

  8. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    DOEpatents

    Riza, Nabeel Agha [Oviedo, FL; Perez, Frank [Tujunga, CA

    2012-01-17

    Silicon Carbide (SiC) probe designs for extreme temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme temperature sensing. Wavelength peak-to-peak (or null-to-null) collective spectrum spread measurement, combined with detection of wavelength peak/null shifts, forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity black-body radiator, and monitoring the shift in the radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  9. Robust Suppression of HIV Replication by Intracellularly Expressed Reverse Transcriptase Aptamers Is Independent of Ribozyme Processing

    PubMed Central

    Lange, Margaret J; Sharma, Tarun K; Whatley, Angela S; Landon, Linda A; Tempesta, Michael A; Johnson, Marc C; Burke, Donald H

    2012-01-01

    RNA aptamers that bind human immunodeficiency virus 1 (HIV-1) reverse transcriptase (RT) also inhibit viral replication, making them attractive as therapeutic candidates and potential tools for dissecting viral pathogenesis. However, it is not well understood how aptamer-expression context and cellular RNA pathways govern aptamer accumulation and net antiviral bioactivity. Using a previously-described expression cassette in which aptamers were flanked by two “minimal core” hammerhead ribozymes, we observed only weak suppression of pseudotyped HIV. To evaluate the importance of the minimal ribozymes, we replaced them with extended, tertiary-stabilized hammerhead ribozymes with enhanced self-cleavage activity, in addition to noncleaving ribozymes with active site mutations. Both the active and inactive versions of the extended hammerhead ribozymes increased inhibition of pseudotyped virus, indicating that processing is not necessary for bioactivity. Clonal stable cell lines expressing aptamers from these modified constructs strongly suppressed infectious virus, and were more effective than minimal ribozymes at high viral multiplicity of infection (MOI). Tertiary stabilization greatly increased aptamer accumulation in viral and subcellular compartments, again regardless of self-cleavage capability. We therefore propose that the increased accumulation is responsible for increased suppression, that the bioactive form of the aptamer is one of the uncleaved or partially cleaved transcripts, and that tertiary stabilization increases transcript stability by reducing exonuclease degradation. PMID:22948672

  10. A robust method for the reconstruction of disparity maps based on multilevel processing of stereo color image pairs

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Sadovnychiy, S. N.

    2017-08-01

    A novel method for the reconstruction of disparity maps (DMs) that is robust to nonideal registration conditions, reflections, and noise in stereo color image pairs is substantiated for the first time. The novel approach proposes a scheme for image DM reconstruction in which the Jaccard distance metric is used as a proximity criterion in stereo image pair matching. A physical interpretation of the method, which allows the quality of the formed DMs to be improved significantly, is given. A processing block diagram has been developed in accordance with the novel approach. Simulations of the novel DM reconstruction method have shown the advantage of the proposed scheme in terms of generally recognized criteria, such as the structural similarity index measure and the percentage of bad matching pixels, as well as in visual comparison of the formed DMs.
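
    A minimal sketch of using the (generalized) Jaccard distance as a window-matching criterion along an epipolar line, assuming rectified single-channel arrays left and right with col >= win; the paper's multilevel color processing is more elaborate.

      import numpy as np

      def jaccard_distance(a, b):
          # Generalized Jaccard distance for nonnegative intensity patches
          a, b = a.ravel().astype(float), b.ravel().astype(float)
          return 1.0 - np.minimum(a, b).sum() / max(np.maximum(a, b).sum(), 1e-12)

      def best_disparity(left, right, row, col, win=3, max_d=32):
          # Slide a window along the epipolar line; the closest match gives the disparity
          ref = left[row - win:row + win + 1, col - win:col + win + 1]
          dmax = min(max_d, col - win)
          dists = [jaccard_distance(ref, right[row - win:row + win + 1,
                                               col - d - win:col - d + win + 1])
                   for d in range(dmax + 1)]
          return int(np.argmin(dists))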

  11. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  12. Efficient and Stable Vacuum-Free-Processed Perovskite Solar Cells Enabled by a Robust Solution-Processed Hole Transport Layer.

    PubMed

    Chang, Chih-Yu; Tsai, Bo-Chou; Hsiao, Yu-Cheng

    2017-05-09

    Here, efficient and stable vacuum-free-processed perovskite solar cells (PSCs) are demonstrated by employing a solution-processed molybdenum tris-[1-(trifluoroethanoyl)-2-(trifluoromethyl)ethane-1,2-dithiolene] (Mo(tfd-COCF3)3)-doped poly(3,4-ethylenedioxythiophene) (PEDOT) film as the hole transport layer (HTL). Our results indicate that the incorporation of the Mo(tfd-COCF3)3 dopant can induce p-doping through charge transfer from the highest occupied molecular orbital (HOMO) level of the PEDOT host to the electron affinity of Mo(tfd-COCF3)3, leading to an increase in conductivity by more than three orders of magnitude. With this newly developed p-doped film as the HTL in planar heterojunction PSCs, a high power conversion efficiency (PCE) of up to 18.47% can be achieved, which exceeds that of the device with the commonly used HTL 2,2',7,7'-tetrakis(N,N-di-p-methoxyphenyl-amine)-9,9'-spirobifluorene (spiro-OMeTAD). Taking advantage of the high conductivity of this doped film, a prominent PCE as high as 15.58% is also demonstrated even when a large HTL thickness of 220 nm is used. Importantly, the high-quality film of this HTL is capable of acting as an effective passivation layer that keeps the underlying perovskite layer intact during solution-processed Ag-nanoparticle layer deposition. The resulting vacuum-free PSCs deliver an impressive PCE of 14.81%, which represents the highest performance ever reported for vacuum-free PSCs. Furthermore, the resulting devices show good ambient stability without encapsulation. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A robust post-processing method to determine skin friction in turbulent boundary layers from the velocity profile

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2015-04-01

    The present paper describes a method to extrapolate the mean wall shear stress and the accurate relative position of a velocity probe with respect to the wall from an experimentally measured mean velocity profile in a turbulent boundary layer. Validation is made between experimental and direct numerical simulation data of turbulent boundary layer flows with independent measurement of the shear stress. The set of parameters which minimizes the residual error with respect to the canonical description of the boundary layer profile is taken as the solution. Several methods are compared, testing different descriptions of the canonical mean velocity profile (with and without overshoot over the logarithmic law) and different definitions of the residual function of the optimization. The von Kármán constant is used as a parameter of the fitting process in order to avoid any hypothesis regarding its value that may be affected by different initial or boundary conditions of the flow. Results show that the best method provides accurate estimates of the friction velocity and of the wall position. The robustness of the method is tested including unconverged near-wall measurements, pressure gradient, and a reduced number of points; the importance of the location of the first point is also tested, and it is shown that the method remains highly robust even in highly distorted flows, keeping the aforementioned accuracies provided at least one data point is acquired in the near-wall region. The wake component and the thickness of the boundary layer are also simultaneously extrapolated from the mean velocity profile. This results in the first study, to the knowledge of the authors, where a five-parameter fitting is carried out without any assumption on the von Kármán constant and the limits of the logarithmic layer beyond its existence.
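
    A minimal sketch of this kind of profile fit, reduced to four parameters (friction velocity, wall offset, von Kármán constant, log-law intercept) and a pure log law without the wake term the authors also fit; the names and the viscosity value are assumptions for illustration.

      import numpy as np
      from scipy.optimize import least_squares

      NU = 1.5e-5  # kinematic viscosity, m^2/s (air at room temperature, assumed)

      def residuals(params, y_meas, u_meas):
          u_tau, dy, kappa, B = params
          y_plus = (y_meas + dy) * u_tau / NU   # wall offset dy corrects the probe position
          u_plus = u_meas / u_tau
          return u_plus - (np.log(y_plus) / kappa + B)   # mismatch with the log law

      def fit_log_law(y_meas, u_meas):
          # y_meas, u_meas: points restricted to the logarithmic region of the profile
          guess = [0.5, 0.0, 0.41, 5.0]   # u_tau (m/s), dy (m), kappa, B
          return least_squares(residuals, guess, args=(y_meas, u_meas)).x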

  14. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    PubMed Central

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-01-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency (70 cd A−1 under 70% strain) is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices. PMID:27187936

  15. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process.

    PubMed

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-17

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency (70 cd A(-1) under 70% strain) is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  16. Efficient and mechanically robust stretchable organic light-emitting devices by a laser-programmable buckling process

    NASA Astrophysics Data System (ADS)

    Yin, Da; Feng, Jing; Ma, Rui; Liu, Yue-Feng; Zhang, Yong-Lai; Zhang, Xu-Lin; Bi, Yan-Gang; Chen, Qi-Dai; Sun, Hong-Bo

    2016-05-01

    Stretchable organic light-emitting devices are becoming increasingly important in the fast-growing fields of wearable displays, biomedical devices and health-monitoring technology. Although highly stretchable devices have been demonstrated, their luminous efficiency and mechanical stability remain impractical for the purposes of real-life applications. This is due to significant challenges arising from the high strain-induced limitations on the structure design of the device, the materials used and the difficulty of controlling the stretch-release process. Here we have developed a laser-programmable buckling process to overcome these obstacles and realize a highly stretchable organic light-emitting diode with unprecedented efficiency and mechanical robustness. The strained device luminous efficiency (70 cd A-1 under 70% strain) is the largest to date and the device can accommodate 100% strain while exhibiting only small fluctuations in performance over 15,000 stretch-release cycles. This work paves the way towards fully stretchable organic light-emitting diodes that can be used in wearable electronic devices.

  17. Observations on the Use of SCAN To Identify Children at Risk for Central Auditory Processing Disorder.

    ERIC Educational Resources Information Center

    Emerson, Maria F.; And Others

    1997-01-01

    The SCAN: A Screening Test for Auditory Processing Disorders was administered to 14 elementary children with a history of otitis media and 14 typical children, to evaluate the validity of the test in identifying children with central auditory processing disorder. Another experiment found that test results differed based on the testing environment…

  19. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
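
    A minimal sketch of a history-dependent point-process (GLM) intensity model of the kind described above, fit to a toy 1 ms-binned spike train; the paper's two-state model, likelihood ratio test, and adaptive filtering are not reproduced here.

      import numpy as np
      from sklearn.linear_model import PoissonRegressor

      def history_design(spikes, lags=60):
          # Row t holds the preceding `lags` bins of spiking history (1 ms bins assumed)
          X = np.stack([spikes[t - lags:t] for t in range(lags, len(spikes))])
          return X, spikes[lags:]

      rng = np.random.default_rng(0)
      spikes = (rng.random(5000) < 0.02).astype(float)   # toy Bernoulli spike train

      X, y = history_design(spikes)
      glm = PoissonRegressor(alpha=1e-3).fit(X, y)
      # glm.coef_ approximates the history filter: how a spike at lag k modulates
      # the log of the conditional intensity lambda(t | past spiking)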

  20. Are all letters really processed equally and in parallel? Further evidence of a robust first letter advantage.

    PubMed

    Scaltritti, Michele; Balota, David A

    2013-10-01

    This present study examined accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of 4 experiments, target strings were briefly (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) displayed and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across different lengths, there was a robust advantage in performance when the alternative response differed in the letter occurring at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first position advantage with legal nonwords and strings of consonants, but did not provide any first position advantage for non-alphabetic symbols. The lack of a first position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together these results suggest that the first position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the commonly held assumption in visual word recognition models that letters are processed equally and in parallel independent of letter position.

  1. Are All Letters Really Processed Equally and in Parallel? Further Evidence of a Robust First Letter Advantage

    PubMed Central

    Scaltritti, Michele; Balota, David A.

    2013-01-01

    This present study examined accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of 4 experiments, target strings were briefly (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) displayed and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across different lengths, there was a robust advantage in performance when the alternative response differed in the letter occurring at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first position advantage with legal nonwords and strings of consonants, but did not provide any first position advantage for non-alphabetic symbols. The lack of a first position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together these results suggest that the first position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the commonly held assumption in visual word recognition models that letters are processed equally and in parallel independent of letter position. PMID:24012723

  2. Pilot-scale investigation of the robustness and efficiency of a copper-based treated wood wastes recycling process.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Gastonguay, Louis; Morris, Paul; Janin, Amélie; Reynier, Nicolas

    2013-10-15

    The disposal of metal-bearing treated wood wastes is becoming an environmental challenge. An efficient recycling process based on sulfuric acid leaching has been developed to remove metals from copper-based treated wood chips. This study evaluated the robustness of this technology in removing metals from copper-based treated wood wastes at a pilot plant scale (130-L reactor tank). After 3 × 2 h leaching steps followed by 3 × 7 min rinsing steps, up to 97.5% of As, 87.9% of Cr, and 96.1% of Cu were removed from CCA-treated wood wastes with different initial metal loadings (>7.3 kg m(-3)), and more than 94.5% of Cu was removed from ACQ-, CA- and MCQ-treated wood. The treatment of effluents by precipitation-coagulation was highly efficient, allowing removal of more than 93% of the As, Cr, and Cu contained in the effluent. The economic analysis included operating costs, indirect costs, and revenues related to remediated wood sales. It concluded that remediation of CCA-treated wood wastes can lead to a benefit of 53.7 US$ t(-1) or a cost of 35.5 US$ t(-1), and that recycling of ACQ-, CA- and MCQ-treated wood wastes led to benefits ranging from 9.3 to 21.2 US$ t(-1).

  3. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  4. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost-effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  5. Magnetoencephalography identifies rapid temporal processing deficit in autism and language impairment.

    PubMed

    Oram Cardy, Janis E; Flagg, Elissa J; Roberts, Wendy; Brian, Jessica; Roberts, Timothy P L

    2005-03-15

    Deficient rapid temporal processing may contribute to impaired language development by interfering with the processing of brief acoustic transitions crucial for speech perception. Using magnetoencephalography, evoked neural activity (M50, M100) to two 40 ms tones passively presented in rapid succession was recorded in 10 neurologically normal adults and 40 8-17-year-olds with autism, specific language impairment, Asperger syndrome or typical development. While 80% of study participants with intact language (Asperger syndrome, typical development, adults) showed identifiable responses to the second tone, which presented rapid temporal processing demands, 65% of study participants with impaired language (autism, specific language impairment) did not, despite having shown identifiable responses to the first tone. Rapid temporal processing impairments may be fundamentally associated with impairments in language rather than autism spectrum disorder.

  6. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging

    PubMed Central

    Schiller, Bastian; Gianotti, Lorena R. R.; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-01-01

    Why do people take longer to associate the word “love” with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition. PMID:26903643

  7. Clocking the social mind by identifying mental processes in the IAT with electrical neuroimaging.

    PubMed

    Schiller, Bastian; Gianotti, Lorena R R; Baumgartner, Thomas; Nash, Kyle; Koenig, Thomas; Knoch, Daria

    2016-03-08

    Why do people take longer to associate the word "love" with outgroup words (incongruent condition) than with ingroup words (congruent condition)? Despite the widespread use of the implicit association test (IAT), it has remained unclear whether this IAT effect is due to additional mental processes in the incongruent condition, or due to longer duration of the same processes. Here, we addressed this previously insoluble issue by assessing the spatiotemporal evolution of brain electrical activity in 83 participants. From stimulus presentation until response production, we identified seven processes. Crucially, all seven processes occurred in the same temporal sequence in both conditions, but participants needed more time to perform one early occurring process (perceptual processing) and one late occurring process (implementing cognitive control to select the motor response) in the incongruent compared with the congruent condition. We also found that the latter process contributed to individual differences in implicit bias. These results advance understanding of the neural mechanics of response time differences in the IAT: They speak against theories that explain the IAT effect as due to additional processes in the incongruent condition and speak in favor of theories that assume a longer duration of specific processes in the incongruent condition. More broadly, our data analysis approach illustrates the potential of electrical neuroimaging to illuminate the temporal organization of mental processes involved in social cognition.

  8. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with
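
    The error-cancelling logic behind DGNSS can be illustrated with a toy between-receiver single difference. The sketch below is not the Trimble Centerpoint RTX algorithm; it only shows, with invented error magnitudes, why observables differenced against a nearby reference station lose the common satellite-clock, orbit and atmospheric errors that PPP must instead model explicitly.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100  # epochs

      true_range_rover = 2.0e7 + rng.normal(0, 0.5, n)   # metres; geometry varies slightly
      true_range_ref   = 2.0e7 + rng.normal(0, 0.5, n)

      common_err = rng.normal(0, 5.0, n)   # satellite clock + orbit + atmosphere (shared)
      noise_rov  = rng.normal(0, 0.02, n)  # receiver noise, rover
      noise_ref  = rng.normal(0, 0.02, n)  # receiver noise, reference station

      obs_rover = true_range_rover + common_err + noise_rov
      obs_ref   = true_range_ref   + common_err + noise_ref

      # Between-receiver single difference cancels the common, spatially correlated errors
      sd = obs_rover - obs_ref
      sd_err = sd - (true_range_rover - true_range_ref)

      print("raw rover error   (RMS, m):", np.sqrt(np.mean((obs_rover - true_range_rover)**2)))
      print("single-diff error (RMS, m):", np.sqrt(np.mean(sd_err**2)))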

  9. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and applied mathematics in computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.
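
    As a loose illustration of that first step, the following minimal sketch (not the authors' algorithm, which is not detailed in the abstract) marks contour pixels in a synthetic slice by thresholding the image-gradient magnitude; real CT contouring pipelines add smoothing, edge linking and anatomical constraints.

      import numpy as np

      def contour_mask(img, thresh):
          """Mark pixels whose intensity-gradient magnitude exceeds a threshold."""
          gy, gx = np.gradient(img.astype(float))
          return np.hypot(gx, gy) > thresh

      # Synthetic "CT slice": a bright disc on a dark background
      y, x = np.mgrid[0:64, 0:64]
      slice_ = ((x - 32)**2 + (y - 32)**2 < 15**2).astype(float)

      edges = contour_mask(slice_, thresh=0.25)
      print("contour pixels found:", int(edges.sum()))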

  10. Identifying Children Who Use a Perseverative Text Processing Strategy. Technical Report #15.

    ERIC Educational Resources Information Center

    Kimmel, Susan; MacGinitie, Walter H.

    To identify children who use a perseverative text processing strategy and to examine the effects of this strategy on recall and comprehension, 255 fifth and sixth graders were screened for large differences between regressed standard scores for inductively (main idea last) and deductively (main idea first) structured paragraphs. Sixteen Ss were…
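
    A rough sketch of the screening arithmetic, under the assumption (hypothetical here, since the report truncates) that the discrepancy is taken as a standardized residual from regressing one paragraph-type score on the other:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 255
      deductive = rng.normal(50, 10, n)                  # scores, main idea first
      inductive = 0.8 * deductive + rng.normal(0, 6, n)  # scores, main idea last
      inductive[:16] -= 15                               # hypothetical perseverative readers

      # Regress inductive on deductive, then flag unusually large negative residuals
      slope, intercept = np.polyfit(deductive, inductive, 1)
      resid = inductive - (slope * deductive + intercept)
      z = (resid - resid.mean()) / resid.std(ddof=1)
      flagged = np.where(z < -1.5)[0]
      print("students flagged for follow-up:", flagged[:20])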

  11. 25 CFR 170.501 - What happens when the review process identifies areas for improvement?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    25 CFR 170.501 (2010 ed.): What happens when the review process identifies areas for improvement? Bureau of Indian Affairs, Department of the Interior; Indian Reservation Roads Program: Planning, Design, and Construction of Indian Reservation Roads ...

  12. Model of areas for identifying risks influencing the compliance of technological processes and products

    NASA Astrophysics Data System (ADS)

    Misztal, A.; Belu, N.

    2016-08-01

    Operation of every company is associated with the risk of interfering with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as the environment in which it operates. From the point of view of ensuring compliance of the course of specific technological processes and, consequently, product conformity with requirements, it is important to identify these threats and eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of areas for identifying risks affecting the compliance of processes and products, which is based on multiregional targeted monitoring of typical places of interference and risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.

  13. Determination of all feasible robust PID controllers for open-loop unstable plus time delay processes with gain margin and phase margin specifications.

    PubMed

    Wang, Yuan-Jay

    2014-03-01

    This paper proposes a novel alternative method to graphically compute all feasible gain and phase margin specifications-oriented robust PID controllers for open-loop unstable plus time delay (OLUPTD) processes. This method is applicable to general OLUPTD processes without constraint on system order. To retain robustness for OLUPTD processes subject to positive or negative gain variations, the downward gain margin (GM(down)), upward gain margin (GM(up)), and phase margin (PM) are considered. A virtual gain-phase margin tester compensator is incorporated to guarantee the concerned system satisfies certain robust safety margins. In addition, the stability equation method and the parameter plane method are exploited to portray the stability boundary and the constant gain margin (GM) boundary as well as the constant PM boundary. The overlapping region of these boundaries is graphically determined and denotes the GM and PM specifications-oriented region (GPMSOR). Alternatively, the GPMSOR characterizes all feasible robust PID controllers which achieve the pre-specified safety margins. In particular, to achieve optimal gain tuning, the controller gains are searched within the GPMSOR to minimize the integral of the absolute error (IAE) or the integral of the squared error (ISE) performance criterion. Thus, an optimal PID controller gain set is successfully found within the GPMSOR and guarantees the OLUPTD processes with a pre-specified GM and PM as well as a minimum IAE or ISE. Consequently, both robustness and performance can be simultaneously assured. Further, the design procedures are summarized as an algorithm to help rapidly locate the GPMSOR and search an optimal PID gain set. Finally, three highly cited examples are provided to illustrate the design process and to demonstrate the effectiveness of the proposed method.
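
    The feasible-region search can be imitated numerically. The sketch below is a simplified stand-in for the paper's stability-equation/parameter-plane construction: it grid-scans (Kp, Ki) for an assumed first-order unstable plant with delay, P(s) = e^(-0.2s)/(s - 1), and keeps gains whose Bode-style margins meet GM >= 2 and PM >= 30 degrees. All plant numbers and thresholds are invented, and for open-loop unstable plants these margins are only indicative; a real design must also verify closed-loop stability via the Nyquist criterion.

      import numpy as np

      def loop_response(w, Kp, Ki, Kd, K=1.0, T=1.0, L=0.2):
          """Open-loop frequency response C(jw)P(jw) for P(s) = K e^{-Ls} / (T s - 1)."""
          s = 1j * w
          C = Kp + Ki / s + Kd * s
          P = K * np.exp(-L * s) / (T * s - 1.0)
          return C * P

      def bode_margins(Kp, Ki, Kd):
          """Indicative GM/PM; for unstable plants, confirm stability separately (Nyquist)."""
          w = np.logspace(-2, 2, 5000)
          H = loop_response(w, Kp, Ki, Kd)
          mag, ph = np.abs(H), np.unwrap(np.angle(H))
          pm = np.degrees(ph[np.argmin(np.abs(mag - 1.0))]) + 180.0  # at |H| = 1
          gm = 1.0 / mag[np.argmin(np.abs(ph + np.pi))]              # at angle = -180 deg
          return gm, pm

      feasible = []
      for Kp in np.linspace(1.5, 4.0, 26):
          for Ki in np.linspace(0.1, 1.0, 19):
              gm, pm = bode_margins(Kp, Ki, Kd=0.5)
              if gm >= 2.0 and pm >= 30.0:
                  feasible.append((round(Kp, 2), round(Ki, 2)))
      print(f"{len(feasible)} (Kp, Ki) pairs meet GM >= 2 and PM >= 30 deg at Kd = 0.5")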

  14. Robust computational techniques for studying Stokes flow within deformable domains: Applications to global scale planetary formation processes

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; May, D.

    2014-12-01

    We develop numerical schemes for solving global scale Stokes flow systems employing the "sticky air" (approximate free surface) boundary condition. Our target application considers the dynamics of planetary growth involving a long time-scale global core formation process, for which the interaction between the surface geometry and interior dynamics plays an important role (e.g. Golabek et al. 2009, Lin et al. 2009). The solution of Stokes flow problems including a free surface is one of the grand challenges of computational geodynamics due to the numerical instability arising at the deformable surface (e.g. Kaus et al. 2010, Duretz et al. 2011, Kramer et al. 2012). Here, we present two strategies for the efficient solution of the Stokes flow system using the "spherical Cartesian" approach (Gerya and Yuen 2007). The first technique addresses the robustness of the Stokes flow solution with respect to the viscosity jump arising between the sticky air and the planetary body (e.g. Furuichi et al. 2009). For this we employ preconditioned iterative solvers utilising mixed precision arithmetic (Furuichi et al. 2011). Our numerical experiment shows that the mixed precision approach using double-double precision arithmetic improves convergence of the Krylov solver with respect to increasing viscosity jump without significantly increasing the calculation time (~20%). The second strategy introduces an implicit advection scheme for stable time integration of the deformable free surface. The Stokes flow system becomes numerically stiff when the dynamical time scale associated with the surface deformation is very short in comparison to the time scale associated with other physical processes, such as thermal convection. In this work, we propose to treat the advection as a coordinate non-linearity coupled to the momentum equation, thereby defining a fully implicit time integration scheme. Such an integrator scheme permits large time step sizes to be used without introducing spurious numerical
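
    The benefit of mixing precisions can be demonstrated with classical iterative refinement, a simpler cousin of the double-double scheme used in the paper: solve cheaply in low precision, then correct the residual in high precision. A minimal numpy sketch with an invented, well-conditioned test matrix:

      import numpy as np

      def mixed_precision_solve(A, b, iters=5):
          """Solve Ax = b in float32, then correct the residual in float64."""
          A32 = A.astype(np.float32)
          x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
          for _ in range(iters):
              r = b - A @ x                      # residual evaluated in double precision
              dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
              x += dx                            # refinement step
          return x

      rng = np.random.default_rng(2)
      n = 200
      A = rng.normal(size=(n, n)) + np.diag(np.full(n, 50.0))  # keep it well conditioned
      x_true = rng.normal(size=n)
      b = A @ x_true

      x = mixed_precision_solve(A, b)
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))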

  15. Identification, characterization and HPLC quantification of process-related impurities in Trelagliptin succinate bulk drug: Six identified as new compounds.

    PubMed

    Zhang, Hui; Sun, Lili; Zou, Liang; Hui, Wenkai; Liu, Lei; Zou, Qiaogen; Ouyang, Pingkai

    2016-09-05

    A sensitive, selective and stability-indicating reversed-phase LC method was developed for the determination of process-related impurities of Trelagliptin succinate in bulk drug. Six impurities were identified by LC-MS. Further, their structures were characterized and confirmed utilizing LC-MS/MS, IR and NMR spectral data. The most probable mechanisms for the formation of these impurities were also discussed. To the best of our knowledge, six structures among these impurities are new compounds and have not been reported previously. Superior separation was achieved on an InertSustain C18 (250 mm × 4.6 mm, 5 μm) column in a gradient mixture of acetonitrile and 20 mmol potassium dihydrogen phosphate with 0.25% triethylamine (pH adjusted to 3.5 with phosphoric acid). The method was validated as per regulatory guidelines to demonstrate system suitability, specificity, sensitivity, linearity, robustness, and stability. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A hybrid approach identifies metabolic signatures of high-producers for chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic
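
    A hedged sketch of the multivariate part of such a workflow: autoscale a (hypothetical) clone-by-metabolite rate matrix and project the clones onto principal components, where high and low producers would be expected to separate. This is generic PCA via SVD, not the authors' pipeline.

      import numpy as np

      rng = np.random.default_rng(3)
      # Hypothetical data: 11 cell lines x 30 metabolite uptake/secretion rates
      rates = rng.normal(size=(11, 30))
      rates[:4, :5] += 2.0        # pretend the first 4 clones share a "high-producer" trait

      X = (rates - rates.mean(0)) / rates.std(0)    # autoscale each metabolite
      U, S, Vt = np.linalg.svd(X, full_matrices=False)
      scores = U * S                                # clone coordinates on the PCs

      explained = S**2 / np.sum(S**2)
      print("PC1/PC2 variance explained:", explained[:2].round(2))
      print("PC1 scores per clone:", scores[:, 0].round(2))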

  17. Soil geomorphology: Identifying relations between the scale of spatial variation and soil processes using the variogram

    NASA Astrophysics Data System (ADS)

    Kerry, Ruth; Oliver, Margaret A.

    2011-07-01

    Practitioners of geostatistics often fail to make associations between the patterns of variation on their maps of kriged predictions and the physical processes that might generate them. Geostatistical approaches that use knowledge of the underlying process have been proposed to reduce the number of soil samples required. Also, if the scale of variation identified by the variogram can be linked to process, it could be incorporated as a scale factor into existing deterministic models. To attempt to relate the scale and pattern of variation in topsoil properties to processes, the topsoil of four field sites in southern England on different parent materials and with a variety of topography was sampled. Relations between variogram range, topography and parent material were examined. The variogram ranges for parent materials that result in heavier soil textures tended to be longer as were those for plateau and valley areas compared with slopes. Possible links between these patterns and soil processes were investigated by examining: 1) changes in soil characteristics with depth, 2) moving correlations, 3) variation in soil properties with topography within fields, 4) directional variograms and 5) derived topographic attributes. The predominance of vertical rather than lateral movements of water through the soil in plateau and valley positions was identified as responsible for the longer range of variograms in these locations. Hydraulic conductivity was suggested as a possible underlying cause of differences in the scale of variation in the topsoil between parent materials.
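
    The central tool here, the empirical variogram, is straightforward to compute: the semivariance gamma(h) is half the mean squared difference between sample pairs separated by lag h. A minimal sketch with synthetic sample locations and a spatially structured soil property (all values invented):

      import numpy as np

      def empirical_variogram(coords, z, lags):
          """Isotropic semivariance per lag bin: gamma(h) = mean squared difference / 2."""
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          sq = (z[:, None] - z[None, :])**2
          iu = np.triu_indices(len(z), k=1)          # each pair once
          d, sq = d[iu], sq[iu]
          return np.array([sq[(d >= lo) & (d < hi)].mean() / 2.0
                           for lo, hi in zip(lags[:-1], lags[1:])])

      rng = np.random.default_rng(4)
      coords = rng.uniform(0, 100, size=(300, 2))                # sample locations (m)
      z = np.sin(coords[:, 0] / 20.0) + rng.normal(0, 0.3, 300)  # structured property

      lags = np.linspace(0, 50, 11)
      print(empirical_variogram(coords, z, lags).round(3))       # rises, then levels off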

  18. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    EPA Science Inventory

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  19. Genome-wide analyses of chitin synthases identify horizontal gene transfers towards bacteria and allow a robust and unifying classification into fungi.

    PubMed

    Gonçalves, Isabelle R; Brouillet, Sophie; Soulié, Marie-Christine; Gribaldo, Simonetta; Sirven, Catherine; Charron, Noémie; Boccara, Martine; Choquer, Mathias

    2016-11-24

    Chitin, the second most abundant biopolymer on earth after cellulose, is found in probably all fungi, many animals (mainly invertebrates), several protists and a few algae, playing an essential role in the development of many of them. This polysaccharide is produced by type 2 glycosyltransferases, called chitin synthases (CHS). There are several contradictory classifications of CHS isoenzymes and, as regards their evolutionary history, their origin and diversity is still a matter of debate. A genome-wide analysis resulted in the detection of more than eight hundred putative chitin synthases in proteomes associated with about 130 genomes. Phylogenetic analyses were performed with special care to avoid any pitfalls associated with the peculiarities of these sequences (e.g. highly variable regions, truncated or recombined sequences, long-branch attraction). This allowed us to revise and unify the fungal CHS classification and to study the evolutionary history of the CHS multigenic family. This update has the advantage of being user-friendly due to the development of a dedicated website ( http://wwwabi.snv.jussieu.fr/public/CHSdb ), and it includes any correspondences with previously published classifications and mutants. Concerning the evolutionary history of CHS, this family has mainly evolved via duplications and losses. However, it is likely that several horizontal gene transfers (HGT) also occurred in eukaryotic microorganisms and, even more surprisingly, in bacteria. This comprehensive multi-species analysis contributes to the classification of fungal CHS, in particular by optimizing its robustness, consensuality and accessibility. It also highlights the importance of HGT in the evolutionary history of CHS and describes bacterial chs genes for the first time. Many of the bacteria that have acquired a chitin synthase are plant pathogens (e.g. Dickeya spp; Pectobacterium spp; Brenneria spp; Agrobacterium vitis and Pseudomonas cichorii). Whether they are able to

  20. A novel mini-DNA barcoding assay to identify processed fins from internationally protected shark species.

    PubMed

    Fields, Andrew T; Abercrombie, Debra L; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran), in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA ("processed fins"). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples).
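
    Conceptually, a mini-barcode assay reduces to matching a short query fragment against reference COI sequences at high identity. The sketch below illustrates only that matching step, with fabricated placeholder sequences; the published assay relies on validated primers and curated reference data.

      def percent_identity(a, b):
          """Percent identity of two aligned, equal-length sequences."""
          return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

      def best_match(query, references, min_ident=98.0):
          """Slide the short query along each reference COI sequence; keep the best hit."""
          best = (None, 0.0)
          for species, ref in references.items():
              for i in range(len(ref) - len(query) + 1):
                  ident = percent_identity(query, ref[i:i + len(query)])
                  if ident > best[1]:
                      best = (species, ident)
          return best if best[1] >= min_ident else (None, best[1])

      # Hypothetical reference snippets; a real assay would use curated COI sequences
      refs = {"Sphyrna lewini": "ACCTGGAGCCTCCGTAGACCTAACCATTTTCTCCCTTCACTTAGCGGG",
              "Lamna nasus":    "ACCAGGGGCATCAGTTGATTTAACCATCTTTTCACTGCATCTTGCAGG"}
      print(best_match("GCCTCCGTAGACCTAACCATT", refs))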

  1. A Novel Mini-DNA Barcoding Assay to Identify Processed Fins from Internationally Protected Shark Species

    PubMed Central

    Fields, Andrew T.; Abercrombie, Debra L.; Eng, Rowena; Feldheim, Kevin; Chapman, Demian D.

    2015-01-01

    There is a growing need to identify shark products in trade, in part due to the recent listing of five commercially important species on the Appendices of the Convention on International Trade in Endangered Species (CITES; porbeagle, Lamna nasus; oceanic whitetip, Carcharhinus longimanus; scalloped hammerhead, Sphyrna lewini; smooth hammerhead, S. zygaena; and great hammerhead, S. mokarran), in addition to three species listed in the early part of this century (whale, Rhincodon typus; basking, Cetorhinus maximus; and white, Carcharodon carcharias). Shark fins are traded internationally to supply the Asian dried seafood market, in which they are used to make the luxury dish shark fin soup. Shark fins usually enter international trade with their skin still intact and can be identified using morphological characters or standard DNA-barcoding approaches. Once they reach Asia and are traded in this region the skin is removed and they are treated with chemicals that eliminate many key diagnostic characters and degrade their DNA (“processed fins”). Here, we present a validated mini-barcode assay based on partial sequences of the cytochrome oxidase I gene that can reliably identify the processed fins of seven of the eight CITES-listed shark species. We also demonstrate that the assay can frequently identify the species or genus of origin of shark fin soup (31 out of 50 samples). PMID:25646789

  2. Robust Active Portfolio Management

    DTIC Science & Technology

    2006-11-27

    the Markowitz mean-variance model led to development of the Capital Asset Pricing Model (CAPM) for asset pricing [35, 29, 23], which remains one of the ... active portfolio management. Our model uses historical returns and equilibrium expected returns predicted by the CAPM to identify assets that are ... we construct robust models for active portfolio management in a market with transaction costs. The goal of these robust models is to control the impact

  3. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.

  4. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.
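
    The exponential gap dependence the patent exploits, I(d) = I0 * exp(-2*kappa*d), makes the inversion from tunneling current back to gap micromotion a one-liner. A small sketch with order-of-magnitude placeholder constants:

      import numpy as np

      kappa = 1.0e10    # tunneling decay constant (1/m), typical order for vacuum gaps
      I0 = 1.0e-9       # current at the reference gap (A); both values are placeholders

      t = np.linspace(0, 1e-3, 1000)
      d = 5e-10 + 2e-11 * np.sin(2 * np.pi * 5e3 * t)  # 0.5 nm gap + 20 pm, 5 kHz motion
      I = I0 * np.exp(-2 * kappa * d)                  # forward model: gap -> current

      d_recovered = -np.log(I / I0) / (2 * kappa)      # invert current -> gap motion
      print("max gap reconstruction error (m):", np.max(np.abs(d_recovered - d)))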

  5. A stable isotope approach and its application for identifying nitrate source and transformation process in water.

    PubMed

    Xu, Shiguo; Kang, Pingping; Sun, Ya

    2016-01-01

    Nitrate contamination of water is a worldwide environmental problem. Recent studies have demonstrated that the nitrogen (N) and oxygen (O) isotopes of nitrate (NO3(-)) can be used to trace nitrogen dynamics, including identifying nitrate sources and nitrogen transformation processes. This paper analyzes the current state of identifying nitrate sources and nitrogen transformation processes using N and O isotopes of nitrate. With regard to nitrate sources, δ(15)N-NO3(-) and δ(18)O-NO3(-) values typically vary between sources, allowing the sources to be isotopically fingerprinted. δ(15)N-NO3(-) is often effective at tracing NO3(-) sources from areas with different land use. δ(18)O-NO3(-) is more useful for identifying NO3(-) from atmospheric sources. Isotopic data can be combined with statistical mixing models to quantify the relative contributions of NO3(-) from multiple delineated sources. With regard to N transformation processes, N and O isotopes of nitrate can be used to decipher the degree of nitrogen transformation by such processes as nitrification, assimilation, and denitrification. In some cases, however, isotopic fractionation may alter the isotopic fingerprint associated with the delineated NO3(-) source(s). This problem may be addressed by combining the N and O isotopic data with other types of data, including the concentration of selected conservative elements such as chloride (Cl(-)), boron isotopes (δ(11)B), and sulfur isotopes (δ(35)S). Future studies should focus on improving stable isotope mixing models and furthering our understanding of isotopic fractionation by conducting laboratory and field experiments in different environments.
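
    When end-member signatures are known, such a mixing model reduces to a small linear system: one mass-balance row per isotope plus a row forcing the source fractions to sum to one. A sketch with invented but plausible δ(15)N/δ(18)O end-member values:

      import numpy as np

      # Columns: fertilizer, manure/septic, atmospheric deposition (hypothetical values)
      A = np.array([[ 0.0, 12.0,  2.0],    # d15N-NO3 (per mil) of each end-member
                    [22.0,  3.0, 70.0],    # d18O-NO3 (per mil) of each end-member
                    [ 1.0,  1.0,  1.0]])   # mixing fractions must sum to 1
      sample = np.array([6.0, 15.0, 1.0])  # measured d15N and d18O of the water sample

      f = np.linalg.solve(A, sample)
      print("estimated source fractions:", f.round(3))  # valid only if all lie in [0, 1]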

  6. Efficiency of operation and robustness of internal image processing for a new type of phosphorplate scanner connected to HIS and PACS

    NASA Astrophysics Data System (ADS)

    Bellon, Erwin; Feron, Michel; Van den Bosch, Bart; Pauwels, Herman; Dhaenens, Frans; Houtput, Wilfried; Vanautgaerden, Mark; Baert, Albert L.; Suetens, Paul; Marchal, Guy

    1996-05-01

    We report on our experience with a recently introduced phosphorplate system (AGFA Diagnostic Center) from the viewpoint of overall operation efficiency. A first factor that determines efficiency is the time it takes to enter patient and examination information. A second factor is robustness of the automated image processing algorithms provided in the CR, as this determines the need for interactive image reprocessing on the workstation or for film retake. Both factors are strongly influenced by the integration of the modality within the HIS, whereby information about the patient and the examination request is automatically transferred to the phosphorplate system. Problems related to wrongly entered patient information are virtually eliminated. In comparison with manual entry of patient demographic data, efficiency has increased significantly. The examination information provided by the HIS helps the CR system to select optimal processing parameters automatically in the majority of situations. Furthermore, the image processing algorithms turn out to be rather robust and independent of pathology. We believe that both the HIS connection and the robustness of internal image processing contribute to making the current percentage of retakes and reprocessing in the order of 1.2% and 0.9% respectively, compared to more than 8% of retakes in the previous analogue systems.

  7. Benchmarking in the process of donation after brain death: a methodology to identify best performer hospitals.

    PubMed

    Matesanz, R; Coll, E; Domínguez-Gil, B; de la Rosa, G; Marazuela, R; Arráez, V; Elorrieta, P; Fernández-García, A; Fernández-Renedo, C; Galán, J; Gómez-Marinero, P; Martín-Delagebasala, C; Martín-Jiménez, S; Masnou, N; Salamero, P; Sánchez-Ibáñez, J; Serna, E; Martínez-Soba, F; Pastor-Rodríguez, A; Bouzas, E; Castro, P

    2012-09-01

    A benchmarking approach was developed in Spain to identify and spread critical success factors in the process of donation after brain death. This paper describes the methodology to identify the best performer hospitals in the period 2003-2007 with 106 hospitals throughout the country participating in the project. The process of donation after brain death was structured into three phases: referral of possible donors after brain death (DBD) to critical care units (CCUs) from outside units, management of possible DBDs within the CCUs and obtaining consent for organ donation. Indicators to assess performance in each phase were constructed and the factors influencing these indicators were studied to ensure that comparable groups of hospitals could be established. Availability of neurosurgery and CCU resources had a positive impact on the referral of possible DBDs to CCUs and those hospitals with fewer annual potential DBDs more frequently achieved 100% consent rates. Hospitals were grouped into each subprocess according to influencing factors. Hospitals with the best results were identified for each phase and hospital group. The subsequent study of their practices will lead to the identification of critical factors for success, which implemented in an adapted way should fortunately lead to increasing organ availability. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.

  8. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks

    PubMed Central

    Razaque, Abdul; Elleithy, Khaled

    2015-01-01

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff, and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768

  9. Modular Energy-Efficient and Robust Paradigms for a Disaster-Recovery Process over Wireless Sensor Networks.

    PubMed

    Razaque, Abdul; Elleithy, Khaled

    2015-07-06

    Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion and increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. The state-of-the-art research in this work is based on improving both the QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff, and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with other known hybrid protocols involving the MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes.

  10. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis

    PubMed Central

    2013-01-01

    Background: Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in-vitro and in-vivo using microscopy-based imaging techniques. The resulting time-series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. Methods: We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. Results and conclusion: As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example for haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods. PMID:24267545
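
    The profile-likelihood idea scales down to a one-parameter toy problem. Assuming, as in the paper, log-normally distributed measurement noise, the sketch below estimates a diffusion coefficient D from a noisy Gaussian profile and reads a 95% confidence interval off the profile likelihood; the model and all numbers are invented for illustration.

      import numpy as np

      def model(D, x, t=1.0):
          """1D point-source diffusion profile at time t."""
          return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

      rng = np.random.default_rng(5)
      x = np.linspace(-3, 3, 60)
      D_true, sigma = 0.5, 0.1
      y = model(D_true, x) * np.exp(rng.normal(0, sigma, x.size))  # log-normal noise

      def profile_loglik(D):
          """Log-normal log-likelihood, maximised analytically over the noise variance."""
          r = np.log(y) - np.log(model(D, x))
          s2 = np.mean(r**2)                   # MLE of the noise variance for this D
          return -0.5 * x.size * (np.log(2 * np.pi * s2) + 1) - np.sum(np.log(y))

      D_grid = np.linspace(0.2, 1.2, 201)
      ll = np.array([profile_loglik(D) for D in D_grid])
      ci = D_grid[2 * (ll.max() - ll) <= 3.84]  # chi-square(1) cutoff -> 95% interval
      print(f"D_hat = {D_grid[ll.argmax()]:.3f}, 95% CI = [{ci.min():.3f}, {ci.max():.3f}]")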

  11. Robustness. [in space systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    1993-01-01

    The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded with functional concepts, materials, and fabrication approach against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the robustness characteristics desired, criteria formulation of required robustness, concept selection, detail design, manufacturing and verification, and operations.

  13. Identifying influential nodes based on graph signal processing in complex networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Yu, Li; Li, Jing-Ru; Zhou, Peng

    2015-05-01

    Identifying influential nodes in complex networks is of both theoretical and practical importance. Existing methods identify influential nodes based on their positions in the network and assume that the nodes are homogeneous. However, node heterogeneity (i.e., different attributes such as interest, energy, age, and so on) ubiquitously exists and needs to be taken into consideration. In this paper, we conduct an investigation into node attributes and propose a graph signal processing based centrality (GSPC) method to identify influential nodes considering both the node attributes and the network topology. We first evaluate our GSPC method using two real-world datasets. The results show that our GSPC method effectively identifies influential nodes, which correspond well with the underlying ground truth. This is consistent with the previous eigenvector centrality and principal component centrality methods under circumstances where the nodes are homogeneous. In addition, spreading analysis shows that the GSPC method has a positive effect on the spreading dynamics. Project supported by the National Natural Science Foundation of China (Grant No. 61231010) and the Fundamental Research Funds for the Central Universities, China (Grant No. HUST No. 2012QN076).
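
    The paper's GSPC operator is not reproduced here, but the core idea of blending node attributes with topology can be caricatured by weighting the adjacency matrix with attribute products and taking its leading eigenvector; with uniform attributes this collapses to ordinary eigenvector centrality. All data below are hypothetical.

      import numpy as np

      def attribute_weighted_centrality(adj, attr, iters=200):
          """Leading-eigenvector centrality of an attribute-weighted adjacency matrix."""
          W = adj * np.sqrt(np.outer(attr, attr))  # couple topology with node attributes
          c = np.ones(len(attr))
          for _ in range(iters):                   # power iteration
              c = W @ c
              c /= np.linalg.norm(c)
          return c

      # Hypothetical 5-node network; attr could encode interest, energy, age, ...
      adj = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 1, 1],
                      [0, 1, 1, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)
      attr = np.array([0.9, 0.2, 0.8, 0.5, 0.4])
      print(attribute_weighted_centrality(adj, attr).round(3))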

  14. Identifying geochemical processes using End Member Mixing Analysis to decouple chemical components for mixing ratio calculations

    NASA Astrophysics Data System (ADS)

    Pelizardi, Flavia; Bea, Sergio A.; Carrera, Jesús; Vives, Luis

    2017-07-01

    Mixing calculations (i.e., the calculation of the proportions in which end-members are mixed in a sample) are essential for hydrological research and water management. However, they typically require the use of conservative species, a condition that may be difficult to meet due to chemical reactions. Mixing calculations also require identifying end-member waters, which is usually achieved through End Member Mixing Analysis (EMMA). We present a methodology to help in the identification of both end-members and such reactions, so as to improve mixing ratio calculations. The proposed approach consists of: (1) identifying the potential chemical reactions with the help of EMMA; (2) defining decoupled conservative chemical components consistent with those reactions; (3) repeating EMMA with the decoupled (i.e., conservative) components, so as to identify end-member waters; and (4) computing mixing ratios using the new set of components and end-members. The approach is illustrated by application to two synthetic mixing examples involving mineral dissolution and cation exchange reactions. Results confirm that the methodology can be successfully used to identify geochemical processes affecting the mixtures, thus improving the accuracy of mixing ratio calculations and relaxing the need for conservative species.
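
    Step (4) is, at bottom, a constrained least-squares problem: find fractions of the decoupled conservative components that reproduce the sample and sum to one. A minimal sketch with invented end-member compositions; the heavily weighted extra row enforces the sum-to-one constraint softly.

      import numpy as np

      def mixing_ratios(end_members, sample, weight=1e3):
          """Least-squares mixing fractions with a soft sum-to-one constraint.

          end_members: (n_components, n_sources); sample: (n_components,)
          """
          n_src = end_members.shape[1]
          A = np.vstack([end_members, weight * np.ones((1, n_src))])
          b = np.append(sample, weight * 1.0)
          f, *_ = np.linalg.lstsq(A, b, rcond=None)
          return f

      # Hypothetical conservative components (e.g., Cl and a decoupled cation sum)
      E = np.array([[10.0, 120.0, 40.0],
                    [ 2.0,  15.0,  9.0]])
      sample = E @ np.array([0.5, 0.2, 0.3])    # synthetic mixture with known fractions
      print(mixing_ratios(E, sample).round(3))  # should recover ~[0.5, 0.2, 0.3]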

  15. Objectively identifying landmark use and predicting flight trajectories of the homing pigeon using Gaussian processes

    PubMed Central

    Mann, Richard; Freeman, Robin; Osborne, Michael; Garnett, Roman; Armstrong, Chris; Meade, Jessica; Biro, Dora; Guilford, Tim; Roberts, Stephen

    2011-01-01

    Pigeons home along idiosyncratic habitual routes from familiar locations. It has been suggested that memorized visual landmarks underpin this route learning. However, the inability to experimentally alter the landscape on large scales has hindered the discovery of the particular features to which birds attend. Here, we present a method for objectively classifying the most informative regions of animal paths. We apply this method to flight trajectories from homing pigeons to identify probable locations of salient visual landmarks. We construct and apply a Gaussian process model of flight trajectory generation for pigeons trained to home from specific release sites. The model shows increasing predictive power as the birds become familiar with the sites, mirroring the animal's learning process. We subsequently find that the most informative elements of the flight trajectories coincide with landscape features that have previously been suggested as important components of the homing task. PMID:20656739
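
    A scaled-down version of the modelling ingredient: Gaussian-process regression of a 2D path against time with an RBF kernel. The toy trajectory, kernel hyperparameters and noise level below are all invented; the paper fits far richer models to real GPS tracks.

      import numpy as np

      def rbf(a, b, ell=2.0, var=1.0):
          """Squared-exponential kernel between two 1D input sets."""
          return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

      rng = np.random.default_rng(6)
      t = np.linspace(0, 10, 25)                    # time along the flight
      path = np.c_[t + np.sin(t), np.cos(t)]        # toy 2D trajectory (x, y)
      obs = path + rng.normal(0, 0.05, path.shape)  # noisy GPS fixes

      t_star = np.linspace(0, 10, 200)
      K = rbf(t, t) + 0.05**2 * np.eye(t.size)      # training covariance + noise
      Ks = rbf(t_star, t)
      mean = Ks @ np.linalg.solve(K, obs)           # posterior mean, one column per axis
      var = rbf(t_star, t_star).diagonal() - np.einsum(
          'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
      var = np.maximum(var, 0.0)                    # guard tiny negative round-off
      print("predictive sd range:", var.min()**0.5, var.max()**0.5)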

  16. Exon-level expression analyses identify MYCN and NTRK1 as major determinants of alternative exon usage and robustly predict primary neuroblastoma outcome

    PubMed Central

    Schramm, A; Schowe, B; Fielitz, K; Heilmann, M; Martin, M; Marschall, T; Köster, J; Vandesompele, J; Vermeulen, J; de Preter, K; Koster, J; Versteeg, R; Noguera, R; Speleman, F; Rahmann, S; Eggert, A; Morik, K; Schulte, J H

    2012-01-01

    Background: Using mRNA expression-derived signatures as predictors of individual patient outcome has been a goal ever since the introduction of microarrays. Here, we addressed whether analyses of tumour mRNA at the exon level can improve on the predictive power and classification accuracy of gene-based expression profiles using neuroblastoma as a model. Methods: In a patient cohort comprising 113 primary neuroblastoma specimens expression profiling using exon-level analyses was performed to define predictive signatures using various machine-learning techniques. Alternative transcript use was calculated from relative exon expression. Validation of alternative transcripts was achieved using qPCR- and cell-based approaches. Results: Both predictors derived from the gene or the exon levels resulted in prediction accuracies >80% for both event-free and overall survival and proved as independent prognostic markers in multivariate analyses. Alternative transcript use was most prominently linked to the amplification status of the MYCN oncogene, expression of the TrkA/NTRK1 neurotrophin receptor and survival. Conclusion: As exon level-based prediction yields comparable, but not significantly better, prediction accuracy than gene expression-based predictors, gene-based assays seem to be sufficiently precise for predicting outcome of neuroblastoma patients. However, exon-level analyses provide added knowledge by identifying alternative transcript use, which should deepen the understanding of neuroblastoma biology. PMID:23047593

  17. Exon-level expression analyses identify MYCN and NTRK1 as major determinants of alternative exon usage and robustly predict primary neuroblastoma outcome.

    PubMed

    Schramm, A; Schowe, B; Fielitz, K; Heilmann, M; Martin, M; Marschall, T; Köster, J; Vandesompele, J; Vermeulen, J; de Preter, K; Koster, J; Versteeg, R; Noguera, R; Speleman, F; Rahmann, S; Eggert, A; Morik, K; Schulte, J H

    2012-10-09

    Using mRNA expression-derived signatures as predictors of individual patient outcome has been a goal ever since the introduction of microarrays. Here, we addressed whether analyses of tumour mRNA at the exon level can improve on the predictive power and classification accuracy of gene-based expression profiles using neuroblastoma as a model. In a patient cohort comprising 113 primary neuroblastoma specimens expression profiling using exon-level analyses was performed to define predictive signatures using various machine-learning techniques. Alternative transcript use was calculated from relative exon expression. Validation of alternative transcripts was achieved using qPCR- and cell-based approaches. Both predictors derived from the gene or the exon levels resulted in prediction accuracies >80% for both event-free and overall survival and proved as independent prognostic markers in multivariate analyses. Alternative transcript use was most prominently linked to the amplification status of the MYCN oncogene, expression of the TrkA/NTRK1 neurotrophin receptor and survival. As exon level-based prediction yields comparable, but not significantly better, prediction accuracy than gene expression-based predictors, gene-based assays seem to be sufficiently precise for predicting outcome of neuroblastoma patients. However, exon-level analyses provide added knowledge by identifying alternative transcript use, which should deepen the understanding of neuroblastoma biology.
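
    Relative exon expression, the quantity behind "alternative transcript use" here, can be sketched as a splicing index: exon signal normalized by gene signal, compared between groups. All data below are simulated placeholders, not the study's cohort.

      import numpy as np

      def delta_splicing_index(exon_expr, gene_expr, group):
          """log2 ratio of mean relative exon usage (exon/gene), group True vs False."""
          rel = exon_expr / gene_expr
          return np.log2(rel[group].mean() / rel[~group].mean())

      rng = np.random.default_rng(7)
      gene = rng.lognormal(5, 0.3, 113)              # overall transcript level per tumour
      mycn_amp = np.arange(113) < 25                 # hypothetical MYCN-amplified subset
      exon = 0.4 * gene * rng.lognormal(0, 0.1, 113)
      exon[mycn_amp] *= 1.8                          # pretend the exon is preferentially used

      print("delta splicing index (log2):",
            round(delta_splicing_index(exon, gene, mycn_amp), 2))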

  18. Methodological systematic review identifies major limitations in prioritization processes for updating.

    PubMed

    Martínez García, Laura; Pardo-Hernandez, Hector; Superchi, Cecilia; Niño de Guzman, Ena; Ballesteros, Monica; Ibargoyen Roteta, Nora; McFarlane, Emma; Posso, Margarita; Roqué I Figuls, Marta; Rotaeche Del Campo, Rafael; Sanabria, Andrea Juliana; Selva, Anna; Solà, Ivan; Vernooij, Robin W M; Alonso-Coello, Pablo

    2017-06-01

    The aim of the study was to identify and describe strategies to prioritize the updating of systematic reviews (SRs), health technology assessments (HTAs), or clinical guidelines (CGs). We conducted an SR of studies describing one or more methods to prioritize SRs, HTAs, or CGs for updating. We searched MEDLINE (PubMed, from 1966 to August 2016) and The Cochrane Methodology Register (The Cochrane Library, Issue 8 2016). We hand searched abstract books, reviewed reference lists, and contacted experts. Two reviewers independently screened the references and extracted data. We included 14 studies. Six studies were classified as descriptive (6 of 14, 42.9%) and eight as implementation studies (8 of 14, 57.1%). Six studies reported an updating strategy (6 of 14, 42.9%), six a prioritization process (6 of 14, 42.9%), and two a prioritization criterion (2 of 14, 14.2%). Eight studies focused on SRs (8 of 14, 57.1%), six studies focused on CGs (6 of 14, 42.9%), and none were about HTAs. We identified 76 prioritization criteria that can be applied when prioritizing documents for updating. The most frequently cited criteria were as follows: available evidence (19 of 76, 25.0%), clinical relevance (10 of 76; 13.2%), and users' interest (10 of 76; 13.2%). There is wide variability and suboptimal reporting of the methods used to develop and implement processes to prioritize updating of SRs, HTAs, and CGs. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Identifying potential misfit items in cognitive process of learning engineering mathematics based on Rasch model

    NASA Astrophysics Data System (ADS)

    Ataei, Sh; Mahmud, Z.; Khalid, M. N.

    2014-04-01

    Students' learning outcomes clarify what students should know and be able to demonstrate after completing their course. One of the key issues in the process of teaching and learning is therefore how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model in measuring the cognitive process of engineering students' learning of mathematics. This study provides insights into the cognitive ability of 54 engineering students learning Calculus III, based on Bloom's Taxonomy, across 31 items. The results indicate that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields fit statistics that identify departures of the data from the Rasch theoretical model. The study identified potential misfit items based on the ZSTD measure, with items flagged for removal when the outfit MNSQ was above 1.3 or below 0.7. Therefore, it is recommended that these items be reviewed or revised to better match the range of students' ability in the respective course.
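
    The outfit MNSQ screen described above is easy to reproduce for dichotomous data once person abilities and item difficulties are estimated. A sketch using simulated (hence well-fitting) responses and the same 0.7-1.3 flagging rule; abilities and difficulties are invented rather than estimated here.

      import numpy as np

      def outfit_mnsq(X, theta, b):
          """Outfit mean-square per item for dichotomous Rasch responses X (persons x items)."""
          P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # model probabilities
          z2 = (X - P)**2 / (P * (1 - P))     # squared standardized residuals
          return z2.mean(axis=0)              # unweighted mean over persons = outfit

      rng = np.random.default_rng(8)
      theta = rng.normal(0, 1, 54)            # person abilities (logits)
      b = rng.normal(0, 1, 31)                # item difficulties (logits)
      P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
      X = (rng.random((54, 31)) < P).astype(float)   # simulated responses

      mnsq = outfit_mnsq(X, theta, b)
      print("potential misfit items:", np.where((mnsq > 1.3) | (mnsq < 0.7))[0])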

  20. Identifying Blood Biomarkers and Physiological Processes That Distinguish Humans with Superior Performance under Psychological Stress

    PubMed Central

    Cooksey, Amanda M.; Momen, Nausheen; Stocker, Russell; Burgess, Shane C.

    2009-01-01

    Background: Attrition of students from aviation training is a serious financial and operational concern for the U.S. Navy. Each late stage navy aviator training failure costs the taxpayer over $1,000,000 and ultimately results in decreased operational readiness of the fleet. Currently, potential aviators are selected based on the Aviation Selection Test Battery (ASTB), which is a series of multiple-choice tests that evaluate basic and aviation-related knowledge and ability. However, the ASTB does not evaluate a person's response to stress. This is important because operating sophisticated aircraft demands exceptional performance and causes high psychological stress. Some people are more resistant to this type of stress, and consequently better able to cope with the demands of naval aviation, than others. Methodology/Principal Findings: Although many psychological studies have examined psychological stress resistance none have taken advantage of the human genome sequence. Here we use high-throughput -omic biology methods and a novel statistical data normalization method to identify plasma proteins associated with human performance under psychological stress. We identified proteins involved in four basic physiological processes: innate immunity, cardiac function, coagulation and plasma lipid physiology. Conclusions/Significance: The proteins identified here further elucidate the physiological response to psychological stress and suggest a hypothesis that stress-susceptible pilots may be more prone to shock. This work also provides potential biomarkers for screening humans for capability of superior performance under stress. PMID:20020041

  1. Identifying blood biomarkers and physiological processes that distinguish humans with superior performance under psychological stress.

    PubMed

    Cooksey, Amanda M; Momen, Nausheen; Stocker, Russell; Burgess, Shane C

    2009-12-18

    Attrition of students from aviation training is a serious financial and operational concern for the U.S. Navy. Each late stage navy aviator training failure costs the taxpayer over $1,000,000 and ultimately results in decreased operational readiness of the fleet. Currently, potential aviators are selected based on the Aviation Selection Test Battery (ASTB), which is a series of multiple-choice tests that evaluate basic and aviation-related knowledge and ability. However, the ASTB does not evaluate a person's response to stress. This is important because operating sophisticated aircraft demands exceptional performance and causes high psychological stress. Some people are more resistant to this type of stress, and consequently better able to cope with the demands of naval aviation, than others. Although many psychological studies have examined psychological stress resistance none have taken advantage of the human genome sequence. Here we use high-throughput -omic biology methods and a novel statistical data normalization method to identify plasma proteins associated with human performance under psychological stress. We identified proteins involved in four basic physiological processes: innate immunity, cardiac function, coagulation and plasma lipid physiology. The proteins identified here further elucidate the physiological response to psychological stress and suggest a hypothesis that stress-susceptible pilots may be more prone to shock. This work also provides potential biomarkers for screening humans for capability of superior performance under stress.

  2. Can Future Academic Surgeons be Identified in the Residency Ranking Process?

    PubMed

    Beninato, Toni; Kleiman, David A; Zarnegar, Rasa; Fahey, Thomas J

    2016-01-01

    The goal of surgical residency training programs is to train competent surgeons. Academic surgical training programs also have as a mission training future academicians: surgical scientists, teachers, and leaders. However, selection of surgical residents is dependent on a relatively unscientific process. Here we sought to determine how well the residency selection process is able to identify future academicians in surgery. Rank lists from an academic surgical residency program from 1992 to 1997 were examined. All ranked candidates' career paths after residency were reviewed to determine whether they stayed in academics, were university affiliated, or in private practice. The study was performed at New York Presbyterian Hospital-Weill Cornell Medical College, New York, NY. A total of 663 applicants for general surgery residency participated in this study. In total 6 rank lists were evaluated, which included 663 candidates. Overall 76% remained in a general surgery subspecialty. Of those who remained in general surgery, 49% were in private practice, 20% were university affiliated, and 31% had academic careers. Approximately 47% of candidates that were ranked in the top 20 had ≥20 publications, with decreasing percentages as rank number increased. There was a strong correlation between the candidates' rank position and pursuing an academic career (p < 0.001, R(2) = 0.89). Graduates of surgical residency who were ranked highly at the time of the residency match were more likely to pursue an academic career. The residency selection process can identify candidates likely to be future academicians. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Transposon mutagenesis identifies genes and cellular processes driving epithelial-mesenchymal transition in hepatocellular carcinoma

    PubMed Central

    Kodama, Takahiro; Newberg, Justin Y.; Kodama, Michiko; Rangel, Roberto; Yoshihara, Kosuke; Tien, Jean C.; Parsons, Pamela H.; Wu, Hao; Finegold, Milton J.; Copeland, Neal G.; Jenkins, Nancy A.

    2016-01-01

    Epithelial-mesenchymal transition (EMT) is thought to contribute to metastasis and chemoresistance in patients with hepatocellular carcinoma (HCC), leading to their poor prognosis. The genes driving EMT in HCC are not yet fully understood, however. Here, we show that mobilization of Sleeping Beauty (SB) transposons in immortalized mouse hepatoblasts induces mesenchymal liver tumors on transplantation to nude mice. These tumors show significant down-regulation of epithelial markers, along with up-regulation of mesenchymal markers and EMT-related transcription factors (EMT-TFs). Sequencing of transposon insertion sites from tumors identified 233 candidate cancer genes (CCGs) that were enriched for genes and cellular processes driving EMT. Subsequent trunk driver analysis identified 23 CCGs that are predicted to function early in tumorigenesis and whose mutation or alteration in patients with HCC is correlated with poor patient survival. Validation of the top trunk drivers identified in the screen, including MET (MET proto-oncogene, receptor tyrosine kinase), GRB2-associated binding protein 1 (GAB1), HECT, UBA, and WWE domain containing 1 (HUWE1), lysine-specific demethylase 6A (KDM6A), and protein-tyrosine phosphatase, nonreceptor-type 12 (PTPN12), showed that deregulation of these genes activates an EMT program in human HCC cells that enhances tumor cell migration. Finally, deregulation of these genes in human HCC was found to confer sorafenib resistance through apoptotic tolerance and reduced proliferation, consistent with recent studies showing that EMT contributes to the chemoresistance of tumor cells. Our unique cell-based transposon mutagenesis screen appears to be an excellent resource for discovering genes involved in EMT in human HCC and potentially for identifying new drug targets. PMID:27247392

  4. Robust Nanoparticles

    DTIC Science & Technology

    2015-01-21

    advanced the chemistry of functional nanoparticles and used these particles in advanced materials assembly for the fabrication of nanoparticle...polymer ligands, and the robustness resulting from ligand cross-linking post-assembly. The project developed a facile evaporative assembly method...used these particles in advanced materials assembly for the fabrication of nanoparticle-based mesostructures. These hybrid materials possess extremely

  5. Process-oriented modelling to identify main drivers of erosion-induced carbon fluxes

    NASA Astrophysics Data System (ADS)

    Wilken, Florian; Sommer, Michael; Van Oost, Kristof; Bens, Oliver; Fiener, Peter

    2017-05-01

    Coupled modelling of soil erosion, carbon redistribution, and turnover has received great attention over the last decades due to large uncertainties regarding erosion-induced carbon fluxes. For a process-oriented representation of event dynamics, coupled soil-carbon erosion models have been developed. However, there are currently few models that represent tillage erosion, preferential water erosion, and transport of different carbon fractions (e.g. mineral bound carbon, carbon encapsulated by soil aggregates). We couple a process-oriented multi-class sediment transport model with a carbon turnover model (MCST-C) to identify relevant redistribution processes for carbon dynamics. The model is applied for two arable catchments (3.7 and 7.8 ha) located in the Tertiary Hills about 40 km north of Munich, Germany. Our findings indicate the following: (i) redistribution by tillage has a large effect on erosion-induced vertical carbon fluxes and has a large carbon sequestration potential; (ii) water erosion has a minor effect on vertical fluxes, but episodic soil organic carbon (SOC) delivery controls the long-term erosion-induced carbon balance; (iii) delivered sediments are highly enriched in SOC compared to the parent soil, and sediment delivery is driven by event size and catchment connectivity; and (iv) soil aggregation enhances SOC deposition due to the transformation of highly mobile carbon-rich fine primary particles into rather immobile soil aggregates.

  6. Main Nuclear Physics requirements for the robustness of r-process nucleosynthesis calculations in slow ejecta from neutron-star mergers (NSM)

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Frank, A.

    2017-07-01

    Here we deal with r-process nucleosynthesis simulations for matter ejected dynamically in NSM; recently, a number of such simulations (see [1, 2, 3, 4]) have displayed a robust pattern in their final yields. The main goal of this contribution is to address the main requirements, from the nuclear physics point of view, needed to guarantee a robust pattern in the final r-process abundances for mass numbers A > 120. Our results suggest that such behaviour can be achieved for slow ejecta from neutron-star mergers as long as fission cycling is involved. In addition, using a representative r-process calculation as a working example, we explore the main stages in the evolution of an r-process. Finally, we conclude that fine tuning of local features in the pattern of final yields (the 2nd, rare-earth, and 3rd r-process peaks) depends on an interplay between the mass surface and other nuclear structure properties involved.

  7. Robustness testing in pharmaceutical freeze-drying: inter-relation of process conditions and product quality attributes studied for a vaccine formulation.

    PubMed

    Schneid, Stefan C; Stärtzel, Peter M; Lettner, Patrick; Gieseler, Henning

    2011-01-01

    The recent US Food and Drug Administration (FDA) legislation has introduced the evaluation of the Design Space of critical process parameters in manufacturing processes. In freeze-drying, a "formulation" is expected to be robust when minor deviations of the product temperature do not negatively affect the final product quality attributes. The aim was to evaluate "formulation" robustness by investigating the effect of elevated product temperature on product quality using a bacterial vaccine solution. The vaccine solution was characterized by freeze-dry microscopy to determine the critical formulation temperature. A conservative cycle was developed using the SMART™ mode of a Lyostar II freeze dryer. Product temperature was elevated to imitate intermediate and aggressive cycle conditions. The final product was analyzed using X-ray powder diffraction (XRPD), scanning electron microscopy (SEM), Karl Fischer titration, and modulated differential scanning calorimetry (MDSC), and the live cell count (LCC) was monitored during accelerated stability testing. The cakes processed at intermediate and aggressive conditions displayed larger pores with microcollapse of walls and a stronger loss in LCC than the conservatively processed product, especially during stability testing. For all process conditions, a loss of the majority of cells was observed during storage. For freeze-drying of live bacterial vaccine solutions, the product temperature profile during primary drying appeared to be inter-related with product quality attributes.

  8. Towards a typology of business process management professionals: identifying patterns of competences through latent semantic analysis

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Schmiedel, Theresa; Gorbacheva, Elena; vom Brocke, Jan

    2016-01-01

    While researchers have analysed the organisational competences that are required for successful Business Process Management (BPM) initiatives, individual BPM competences have not yet been studied in detail. In this study, latent semantic analysis is used to examine a collection of 1507 BPM-related job advertisements in order to develop a typology of BPM professionals. This empirical analysis reveals distinct ideal types and profiles of BPM professionals on several levels of abstraction. A closer look at these ideal types and profiles confirms that BPM is a boundary-spanning field that requires interdisciplinary sets of competences that range from technical competences to business and systems competences. Based on the study's findings, it is posited that individual and organisational alignment with the identified ideal types and profiles is likely to result in high employability and organisational BPM success.
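
    For readers unfamiliar with latent semantic analysis, the sketch below shows the usual pipeline (TF-IDF term-document matrix, truncated SVD, clustering into profiles) on four toy "job ads". The texts, component count, and cluster count are illustrative assumptions; the study's actual 1507-ad analysis will differ in detail.

    ```python
    # Minimal latent-semantic-analysis sketch in the spirit of the BPM study.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    ads = [
        "business process modelling BPMN stakeholder workshops",
        "process mining event logs Python SQL data analysis",
        "ERP integration BPM suite technical architecture",
        "change management process governance strategy alignment",
    ]

    # Term-document matrix -> low-rank semantic space -> cluster into profiles.
    tfidf = TfidfVectorizer().fit_transform(ads)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
    print(profiles)
    ```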

  9. A more robust model of the biodiesel reaction, allowing identification of process conditions for significantly enhanced rate and water tolerance.

    PubMed

    Eze, Valentine C; Phan, Anh N; Harvey, Adam P

    2014-03-01

    A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally, and validated numerically in a model implemented using MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiment and modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion, before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor.
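
    The study's model was implemented in MATLAB and includes the saponification and hydroxide-methoxide equilibrium terms. The Python sketch below integrates only a stripped-down consecutive-reaction scheme (TG → DG → MG → glycerol, each step releasing FAME, plus a lumped FAME saponification loss) with invented rate constants, to show the general shape of such a kinetic model.

    ```python
    # Simplified sketch of a base-catalysed transesterification model; the
    # paper's model is richer. Rate constants here are invented placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2, k3 = 0.05, 0.1, 0.2   # TG->DG, DG->MG, MG->G (L/mol/s, assumed)
    ks = 0.001                    # lumped saponification loss of FAME (assumed)

    def rhs(t, y):
        tg, dg, mg, gly, meoh, fame = y
        r1, r2, r3 = k1*tg*meoh, k2*dg*meoh, k3*mg*meoh
        return [-r1, r1 - r2, r2 - r3, r3,
                -(r1 + r2 + r3),
                r1 + r2 + r3 - ks*fame]

    # 6:1 methanol:oil molar ratio, 120 s horizon (the reaction is fast).
    sol = solve_ivp(rhs, (0, 120), [1.0, 0, 0, 0, 6.0, 0])
    tg_end, fame_end = sol.y[0, -1], sol.y[5, -1]
    print(f"conversion = {1 - tg_end:.3f}, FAME = {fame_end:.3f} mol/mol oil")
    ```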

  10. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  11. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    Atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly documented by previous research. The present study examined whether these face scanning patterns could be useful for identifying children with ASD by adopting a machine learning algorithm for the classification purpose. Specifically, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should shed light on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
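
    The evaluation criteria named above (accuracy, sensitivity, specificity) come straight from the confusion matrix. A minimal sketch follows, with random stand-in features rather than real eye-movement data and an SVM chosen arbitrarily:

    ```python
    # Hedged sketch of the reported evaluation metrics; all data synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))                               # toy features
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # 1 = ASD (toy)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
    pred = SVC().fit(Xtr, ytr).predict(Xte)

    tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
    print(f"accuracy={(tp+tn)/len(yte):.2f}, "
          f"sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")
    ```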

  12. An ENU mutagenesis screen identifies novel and known genes involved in epigenetic processes in the mouse

    PubMed Central

    2013-01-01

    Background We have used a sensitized ENU mutagenesis screen to produce mouse lines that carry mutations in genes required for epigenetic regulation. We call these lines Modifiers of murine metastable epialleles (Mommes). Results We report a basic molecular and phenotypic characterization for twenty of the Momme mouse lines, and in each case we also identify the causative mutation. Three of the lines carry a mutation in a novel epigenetic modifier, Rearranged L-myc fusion (Rlf), and one gene, Rap-interacting factor 1 (Rif1), has not previously been reported to be involved in transcriptional regulation in mammals. Many of the other lines are novel alleles of known epigenetic regulators. For two genes, Rlf and Widely-interspaced zinc finger (Wiz), we describe the first mouse mutants. All of the Momme mutants show some degree of homozygous embryonic lethality, emphasizing the importance of epigenetic processes. The penetrance of lethality is incomplete in a number of cases. Similarly, abnormalities in phenotype seen in the heterozygous individuals of some lines occur with incomplete penetrance. Conclusions Recent advances in sequencing enhance the power of sensitized mutagenesis screens to identify the function of previously uncharacterized factors and to discover additional functions for previously characterized proteins. The observation of incomplete penetrance of phenotypes in these inbred mutant mice, at various stages of development, is of interest. Overall, the Momme collection of mouse mutants provides a valuable resource for researchers across many disciplines. PMID:24025402

  13. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    PubMed

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes.
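
    DPNuc infers the mixture with MCMC; as a lighter-weight illustration of the same idea (letting a Dirichlet process prior choose the number of nucleosome-like clusters rather than fixing it in advance), the sketch below fits a DP Gaussian mixture by variational inference to toy 1-D positions. The positions, component cap, and weight threshold are assumptions.

    ```python
    # Dirichlet-process mixture sketch (variational, not MCMC as in DPNuc).
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(2)
    # Toy 1-D positions from three overlapping nucleosome-like clusters.
    pos = np.concatenate([rng.normal(200, 20, 150),
                          rng.normal(330, 20, 120),
                          rng.normal(420, 25, 100)]).reshape(-1, 1)

    dpgmm = BayesianGaussianMixture(
        n_components=10,  # upper bound; the DP prior prunes unused components
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(pos)

    used = dpgmm.weights_ > 0.05
    print("estimated centers:", np.sort(dpgmm.means_[used].ravel()).round(1))
    ```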

  14. An ENU mutagenesis screen identifies novel and known genes involved in epigenetic processes in the mouse.

    PubMed

    Daxinger, Lucia; Harten, Sarah K; Oey, Harald; Epp, Trevor; Isbel, Luke; Huang, Edward; Whitelaw, Nadia; Apedaile, Anwyn; Sorolla, Anabel; Yong, Joan; Bharti, Vandhana; Sutton, Joanne; Ashe, Alyson; Pang, Zhenyi; Wallace, Nathan; Gerhardt, Daniel J; Blewitt, Marnie E; Jeddeloh, Jeffrey A; Whitelaw, Emma

    2013-01-01

    We have used a sensitized ENU mutagenesis screen to produce mouse lines that carry mutations in genes required for epigenetic regulation. We call these lines Modifiers of murine metastable epialleles (Mommes). We report a basic molecular and phenotypic characterization for twenty of the Momme mouse lines, and in each case we also identify the causative mutation. Three of the lines carry a mutation in a novel epigenetic modifier, Rearranged L-myc fusion (Rlf), and one gene, Rap-interacting factor 1 (Rif1), has not previously been reported to be involved in transcriptional regulation in mammals. Many of the other lines are novel alleles of known epigenetic regulators. For two genes, Rlf and Widely-interspaced zinc finger (Wiz), we describe the first mouse mutants. All of the Momme mutants show some degree of homozygous embryonic lethality, emphasizing the importance of epigenetic processes. The penetrance of lethality is incomplete in a number of cases. Similarly, abnormalities in phenotype seen in the heterozygous individuals of some lines occur with incomplete penetrance. Recent advances in sequencing enhance the power of sensitized mutagenesis screens to identify the function of previously uncharacterized factors and to discover additional functions for previously characterized proteins. The observation of incomplete penetrance of phenotypes in these inbred mutant mice, at various stages of development, is of interest. Overall, the Momme collection of mouse mutants provides a valuable resource for researchers across many disciplines.

  15. An event-specific DNA microarray to identify genetically modified organisms in processed foods.

    PubMed

    Kim, Jae-Hwan; Kim, Su-Youn; Lee, Hyungjae; Kim, Young-Rok; Kim, Hae-Yeong

    2010-05-26

    We developed an event-specific DNA microarray system to identify 19 genetically modified organisms (GMOs), including two GM soybeans (GTS-40-3-2 and A2704-12), thirteen GM maizes (Bt176, Bt11, MON810, MON863, NK603, GA21, T25, TC1507, Bt10, DAS59122-7, TC6275, MIR604, and LY038), three GM canolas (GT73, MS8xRF3, and T45), and one GM cotton (LLcotton25). The microarray included 27 oligonucleotide probes optimized to identify endogenous reference targets, event-specific targets, screening targets (35S promoter and nos terminator), and an internal target (18S rRNA gene). Thirty-seven maize-containing food products purchased from South Korean and US markets were tested for the presence of GM maize using this microarray system. Thirteen GM maize events were simultaneously detected using multiplex PCR coupled with microarray on a single chip, at a limit of detection of approximately 0.5%. Using the system described here, we detected GM maize in 11 of the 37 food samples tested. These results suggest that an event-specific DNA microarray system can reliably detect GMOs in processed foods.

  16. A process to identify military injury prevention priorities based on injury type and limited duty days.

    PubMed

    Ruscio, Bruce A; Jones, Bruce H; Bullock, Steven H; Burnham, Bruce R; Canham-Chervak, Michelle; Rennix, Christopher P; Wells, Timothy S; Smith, Jack W

    2010-01-01

    Injuries, one of the leading public health problems in an otherwise healthy military population, affect operational readiness, increase healthcare costs, and result in disabilities and fatalities. This paper describes a systematic, data-driven, injury prevention-decision making process to rank potential injury prevention targets. Medical surveillance and safety report data on injuries for 2004 were reviewed. Nonfatal injury diagnoses (ICD-9-CM codes) obtained from the Defense Medical Surveillance System were ranked according to incident visit frequency and estimated limited duty days. Data on the top five injury types resulting in the greatest estimated limited duty days were matched with hospitalization and Service Safety Centers' accident investigation data to identify leading causes. Experts scored and ranked the causes using predetermined criteria that considered the importance of the problem, preventability, feasibility, timeliness of intervention establishment/results, and ability to evaluate. Department of Defense (DoD) and Service-specific injury prevention priorities were identified. Unintentional injuries lead all other medical conditions for number of medical encounters, individuals affected, and hospital bed days. The top ten injuries resulted in an estimated 25 million days of limited duty. Injury-related musculoskeletal conditions were a leading contributor to days of limited duty. Sports and physical training were the leading cause, followed by falls. A systematic approach to injury prevention-decision making supports the DoD's goal of ensuring a healthy, fit force. The methodology described here advances this capability. Immediate follow-up efforts should employ both medical and safety data sets to identify and monitor injury prevention priorities. Published by Elsevier Inc.
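
    The core ranking step described above, ordering injury types by estimated limited duty days (incident visits times average days lost per visit), is easy to illustrate; all figures below are invented placeholders, not DoD surveillance data.

    ```python
    # Illustrative ranking step from the decision process described above.
    import pandas as pd

    df = pd.DataFrame({
        "injury_type": ["sprain/strain", "fracture", "dislocation",
                        "contusion", "laceration"],
        "incident_visits": [120000, 30000, 8000, 45000, 25000],
        "mean_limited_days": [10, 60, 45, 4, 6],
    })
    df["limited_duty_days"] = df["incident_visits"] * df["mean_limited_days"]
    print(df.sort_values("limited_duty_days", ascending=False))
    ```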

  17. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record

    PubMed Central

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I. M.

    2012-01-01

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records. PMID:22734056

  18. Hominin cognitive evolution: identifying patterns and processes in the fossil and archaeological record.

    PubMed

    Shultz, Susanne; Nelson, Emma; Dunbar, Robin I M

    2012-08-05

    As only limited insight into behaviour is available from the archaeological record, much of our understanding of historical changes in human cognition is restricted to identifying changes in brain size and architecture. Using both absolute and residual brain size estimates, we show that hominin brain evolution was likely to be the result of a mix of processes; punctuated changes at approximately 100 kya, 1 Mya and 1.8 Mya are supplemented by gradual within-lineage changes in Homo erectus and Homo sapiens sensu lato. While brain size increase in Homo in Africa is a gradual process, migration of hominins into Eurasia is associated with step changes at approximately 400 kya and approximately 100 kya. We then demonstrate that periods of rapid change in hominin brain size are not temporally associated with changes in environmental unpredictability or with long-term palaeoclimate trends. Thus, we argue that commonly used global sea level or Indian Ocean dust palaeoclimate records provide little evidence for either the variability selection or aridity hypotheses explaining changes in hominin brain size. Brain size change at approximately 100 kya is coincident with demographic change and the appearance of fully modern language. However, gaps remain in our understanding of the external pressures driving encephalization, which will only be filled by novel applications of the fossil, palaeoclimatic and archaeological records.

  19. Dual blockade of FAAH and MAGL identifies behavioral processes regulated by endocannabinoid crosstalk in vivo

    PubMed Central

    Long, Jonathan Z.; Nomura, Daniel K.; Vann, Robert E.; Walentiny, D. Matthew; Booker, Lamont; Jin, Xin; Burston, James J.; Sim-Selley, Laura J.; Lichtman, Aron H.; Wiley, Jenny L.; Cravatt, Benjamin F.

    2009-01-01

    Δ9-Tetrahydrocannabinol (THC), the psychoactive component of marijuana, and other direct cannabinoid receptor (CB1) agonists produce a number of neurobehavioral effects in mammals that range from the beneficial (analgesia) to the untoward (abuse potential). Why, however, this full spectrum of activities is not observed upon pharmacological inhibition or genetic deletion of either fatty acid amide hydrolase (FAAH) or monoacylglycerol lipase (MAGL), enzymes that regulate the two major endocannabinoids anandamide (AEA) and 2-arachidonoylglycerol (2-AG), respectively, has remained unclear. Here, we describe a selective and efficacious dual FAAH/MAGL inhibitor, JZL195, and show that this agent exhibits broad activity in the tetrad test for CB1 agonism, causing analgesia, hypomotility, and catalepsy. Comparison of JZL195 to specific FAAH and MAGL inhibitors identified behavioral processes that were regulated by a single endocannabinoid pathway (e.g., hypomotility by the 2-AG/MAGL pathway) and, interestingly, those where disruption of both FAAH and MAGL produced additive effects that were reversed by a CB1 antagonist. Falling into this latter category was drug discrimination behavior, where dual FAAH/MAGL blockade, but not disruption of either FAAH or MAGL alone, produced THC-like responses that were reversed by a CB1 antagonist. These data indicate that AEA and 2-AG signaling pathways interact to regulate specific behavioral processes in vivo, including those relevant to drug abuse, thus providing a potential mechanistic basis for the distinct pharmacological profiles of direct CB1 agonists and inhibitors of individual endocannabinoid degradative enzymes. PMID:19918051

  20. Multicomponent statistical analysis to identify flow and transport processes in a highly-complex environment

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; Radny, Dirk; Borer, Paul; Rothardt, Judith; Auckenthaler, Adrian; Berg, Michael; Schirmer, Mario

    2016-11-01

    A combined approach of multivariate statistical analysis, namely factor analysis (FA) and hierarchical cluster analysis (HCA), interpretation of geochemical processes, stable water isotope data, and organic micropollutants was used to assess spatial patterns of water types in a study area in Switzerland, where drinking water production lies close to several potential input pathways for contamination. To avoid drinking water contamination, artificial groundwater recharge with surface water into an aquifer is used to create a hydraulic barrier between potential intake pathways for contamination and drinking water extraction wells. Inter-aquifer mixing is identified in the subsurface, where a large amount of artificially infiltrated surface water mixes with a smaller amount of water originating from the regional flow path in the vicinity of the drinking water extraction wells. The spatial distribution of different water types can be estimated and a conceptual system understanding is developed. Results of the multivariate statistical analysis are consistent with the information gained from the isotope data and the organic micropollutant analyses. The integrated approach using different kinds of observations can be easily transferred to a variety of hydrological settings to synthesise and evaluate large hydrochemical datasets. The combination with additional data of different information content is straightforward and enables effective interpretation of hydrological processes. The approach leads to a sounder conceptual understanding of the system, which is the basis for developing improved and sustainable water resources management practices.
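
    A minimal sketch of the FA-plus-HCA workflow on a synthetic samples-by-ions matrix, assuming two latent water types; the data, factor count, and Ward linkage are illustrative choices, not the study's configuration.

    ```python
    # FA + hierarchical clustering sketch on a toy hydrochemical matrix.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # Two latent "water types" driving 6 measured ion concentrations.
    latent = rng.normal(size=(60, 2))
    loadings = rng.normal(size=(2, 6))
    X = StandardScaler().fit_transform(
        latent @ loadings + 0.3 * rng.normal(size=(60, 6)))

    # Factor scores summarize the chemistry; Ward clustering groups samples.
    scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
    clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
    print("cluster sizes:", np.bincount(clusters)[1:])
    ```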

  1. Use of Sulphur and Boron Isotopes to Identify Natural Gas Processing Emissions Sources

    NASA Astrophysics Data System (ADS)

    Bradley, C. E.; Norman, A.; Wieser, M. E.

    2003-12-01

    Natural gas processing results in the emission of large amounts of gaseous pollutants as a result of planned and / or emergency flaring, sulphur incineration, and in the course of normal operation. Since many gas plants often contribute to the same air shed, it is not possible to conclusively determine the sources, amounts, and characteristics of pollution from a particular processing facility using traditional methods. However, sulphur isotopes have proven useful in the apportionment of sources of atmospheric sulphate (Norman et al., 1999), and boron isotopes have been shown to be of use in tracing coal contamination through groundwater (Davidson and Bassett, 1993). In this study, both sulphur and boron isotopes have been measured at source, receptor, and control sites, and, if emissions prove to be sufficiently distinct isotopically, they will be used to identify and apportion emissions downwind. Sulphur is present in natural gas as hydrogen sulphide (H2S), which is combusted to sulphur dioxide (SO2) prior to its release to the atmosphere, while boron is present both in hydrocarbon deposits as well as in any water used in the process. Little is known about the isotopic abundance variations of boron in hydrocarbon reservoirs, but Krouse (1991) has shown that the sulphur isotope composition of H2S in reservoirs varies according to both the concentration and the method of formation of H2S. As a result, gas plants processing gas from different reservoirs are expected to produce emissions with unique isotopic compositions. Samples were collected using a high-volume air sampler placed directly downwind of several gas plants, as well as at a receptor site and a control site. Aerosol sulphate and boron were collected on quartz fibre filters, while SO2 was collected on potassium hydroxide-impregnated cellulose filters. Solid sulphur samples were taken from those plants that process sulphur in order to compare the isotopic composition with atmospheric measurements. A
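
    Apportionment of this kind typically reduces to an isotope mass balance between end-members. A minimal two-end-member sketch, with placeholder δ34S values rather than measurements from this study:

    ```python
    # Two-end-member isotope mass-balance sketch: the fraction of sulphate
    # attributable to a gas-plant source from a measured delta-34S value.
    # End-member values below are placeholders, not data from the study.
    def source_fraction(delta_sample, delta_source, delta_background):
        """f such that delta_sample = f*delta_source + (1-f)*delta_background."""
        return (delta_sample - delta_background) / (delta_source - delta_background)

    # e.g. plant SO2 at +20 permil, regional background at +5 permil
    print(f"{source_fraction(12.0, 20.0, 5.0):.2f}")  # -> 0.47
    ```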

  2. Robust computational vision

    NASA Astrophysics Data System (ADS)

    Schunck, Brian G.

    1993-08-01

    This paper presents a paradigm for formulating reliable machine vision algorithms using methods from robust statistics. Machine vision is the process of estimating features from images by fitting a model to visual data. Vision research has produced an understanding of the physics and mathematics of visual processes. The fact that computer graphics programs can produce realistic renderings of artificial scenes indicates that our understanding of vision processes must be quite good. The premise of this paper is that the problem in applying computer vision in realistic scenes is not the fault of the theory of vision. We have good models for visual phenomena, but can do a better job of applying the models to images. Our understanding of vision must be used in computations that are robust to the kinds of errors that occur in visual signals. This paper argues that vision algorithms should be formulated using methods from robust regression. The nature of errors in visual signals is discussed, and a prescription for formulating robust algorithms is described. To illustrate the concepts, robust methods have been applied to several problems: surface reconstruction, dynamic stereo, image flow estimation, and edge detection.
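
    As a toy instance of the paper's prescription, the sketch below fits a line to synthetic data containing gross outliers, comparing ordinary least squares with a robust estimator; RANSAC stands in here for the general class of robust regression methods the paper advocates.

    ```python
    # Robust vs ordinary regression on data with gross outliers (synthetic).
    import numpy as np
    from sklearn.linear_model import LinearRegression, RANSACRegressor

    rng = np.random.default_rng(4)
    x = np.linspace(0, 10, 100).reshape(-1, 1)
    y = 2.0 * x.ravel() + 1.0 + 0.2 * rng.normal(size=100)
    y[-10:] += 30.0  # gross outliers clustered at one end, e.g. occlusion

    ols = LinearRegression().fit(x, y)
    ransac = RANSACRegressor(random_state=0).fit(x, y)
    print(f"OLS slope: {ols.coef_[0]:.2f} (biased), "
          f"RANSAC slope: {ransac.estimator_.coef_[0]:.2f} (close to 2)")
    ```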

  3. Robust conversion of marrow cells to skeletal muscle with formation of marrow-derived muscle cell colonies: A multifactorial process

    SciTech Connect

    Abedi, Mehrdad; Greer, Deborah A.; Colvin, Gerald A.; Demers, Delia A.; Dooner, Mark S.; Harpel, Jasha A.; Weier, Heinz-Ulrich G.; Lambert, Jean-Francois; Quesenberry, Peter J.

    2004-01-10

    Murine marrow cells are capable of repopulating skeletal muscle fibers. A point of concern has been the robustness of such conversions. We have investigated the impact of type of cell delivery, muscle injury, nature of delivered cell, and stem cell mobilizations on marrow to muscle conversion. We transplanted GFP transgenic marrow into irradiated C57BL/6 mice and then injured anterior tibialis muscle by cardiotoxin. One month after injury, sections were analyzed by standard and deconvolutional microscopy for expression of muscle and hematopietic markers. Irradiation was essential to conversion although whether by injury or induction of chimerism is not clear. Cardiotoxin and to a lesser extent PBS injected muscles showed significant number of GFP+ muscle fibers while uninjected muscles showed only rare GFP+ cells. Marrow conversion to muscle was increased by two cycles of G-CSF mobilization and to a lesser extent with G-CSF and steel or GM-CSF. Transplantation of female GFP to male C57 BL/6 and GFP to Rosa26 mice showed fusion of donor cells to recipient muscle. High numbers of donor derived muscle colonies and up to12 percent GFP positive muscle cells were seen after mobilization or direct injection. These levels of donor muscle chimerism approach levels which could be clinically significant in developing strategies for the treatment of muscular dystrophies. In summary, the conversion of marrow to skeletal muscle cells is based on cell fusion and is critically dependent on injury. This conversion is also numerically significant and increases with mobilization.

  4. Energy Landscape Reveals That the Budding Yeast Cell Cycle Is a Robust and Adaptive Multi-stage Process

    PubMed Central

    Lv, Cheng; Li, Xiaoguang; Li, Fangting; Li, Tiejun

    2015-01-01

    Quantitatively understanding the robustness, adaptivity and efficiency of cell cycle dynamics under the influence of noise is a fundamental but difficult question to answer for most eukaryotic organisms. Using a simplified budding yeast cell cycle model perturbed by intrinsic noise, we systematically explore these issues from an energy landscape point of view by constructing an energy landscape for the considered system based on large deviation theory. Analysis shows that the cell cycle trajectory is sharply confined by the ambient energy barrier, and the landscape along this trajectory exhibits a generally flat shape. We explain the evolution of the system on this flat path by incorporating its non-gradient nature. Furthermore, we illustrate how this global landscape changes in response to external signals, observing a nice transformation of the landscapes as the excitable system approaches a limit cycle system when nutrients are sufficient, as well as the formation of additional energy wells when the DNA replication checkpoint is activated. By taking into account the finite volume effect, we find additional pits along the flat cycle path in the landscape associated with the checkpoint mechanism of the cell cycle. The difference between the landscapes induced by intrinsic and extrinsic noise is also discussed. In our opinion, this meticulous structure of the energy landscape for our simplified model is of general interest to other cell cycle dynamics, and the proposed methods can be applied to study similar biological systems. PMID:25794282

  5. Identifying vegetation's influence on multi-scale fluvial processes based on plant trait adaptations

    NASA Astrophysics Data System (ADS)

    Manners, R.; Merritt, D. M.; Wilcox, A. C.; Scott, M.

    2015-12-01

    Riparian vegetation-geomorphic interactions are critical to the physical and biological function of riparian ecosystems, yet we lack a mechanistic understanding of these interactions and predictive ability at the reach to watershed scale. Plant functional groups, or groupings of species that have similar traits, either in terms of a plant's life history strategy (e.g., drought tolerance) or morphology (e.g., growth form), may provide an expression of vegetation-geomorphic interactions. We are developing an approach that 1) identifies where along a river corridor plant functional groups exist and 2) links the traits that define functional groups and their impact on fluvial processes. The Green and Yampa Rivers in Dinosaur National Monument have wide variations in hydrology, hydraulics, and channel morphology, as well as a large dataset of species presence. For these rivers, we build a predictive model of the probable presence of plant functional groups based on site-specific aspects of the flow regime (e.g., inundation probability and duration), hydraulic characteristics (e.g., velocity), and substrate size. Functional group traits are collected from the literature and measured in the field. We found that life-history traits more strongly predicted functional group presence than did morphological traits. However, some life-history traits, important for determining the likelihood of a plant existing along an environmental gradient, are directly related to the morphological properties of the plant, important for the plant's impact on fluvial processes. For example, stem density (i.e., dry mass divided by volume of stem) is positively correlated to drought tolerance and is also related to the modulus of elasticity. Growth form, which is related to the plant's susceptibility to biomass-removing fluvial disturbances, is also related to frontal area. Using this approach, we can identify how plant community composition and distribution shifts with a change to the flow

  6. Identifying sources and processes controlling the sulphur cycle in the Canyon Creek watershed, Alberta, Canada.

    PubMed

    Nightingale, Michael; Mayer, Bernhard

    2012-01-01

    Sources and processes affecting the sulphur cycle in the Canyon Creek watershed in Alberta (Canada) were investigated. The catchment is important for water supply and recreational activities and is also a source of oil and natural gas. Water was collected from 10 locations along an 8 km stretch of Canyon Creek including three so-called sulphur pools, followed by the chemical and isotopic analyses on water and its major dissolved species. The δ(2)H and δ(18)O values of the water plotted near the regional meteoric water line, indicating a meteoric origin of the water and no contribution from deeper formation waters. Calcium, magnesium and bicarbonate were the dominant ions in the upstream portion of the watershed, whereas sulphate was the dominant anion in the water from the three sulphur pools. The isotopic composition of sulphate (δ(34)S and δ(18)O) revealed three major sulphate sources with distinct isotopic compositions throughout the catchment: (1) a combination of sulphate from soils and sulphide oxidation in the bedrock in the upper reaches of Canyon Creek; (2) sulphide oxidation in pyrite-rich shales in the lower reaches of Canyon Creek and (3) dissolution of Devonian anhydrite constituting the major sulphate source for the three sulphur pools in the central portion of the watershed. The presence of H(2)S in the sulphur pools with δ(34)S values ∼30 ‰ lower than those of sulphate further indicated the occurrence of bacterial (dissimilatory) sulphate reduction. This case study reveals that δ(34)S values of surface water systems can vary by more than 20 ‰ over short geographic distances and that isotope analyses are an effective tool to identify sources and processes that govern the sulphur cycle in watersheds.

  7. Disinvestment for re-allocation: a process to identify priorities in healthcare.

    PubMed

    Nuti, Sabina; Vainieri, Milena; Bonini, Anna

    2010-05-01

    Resource scarcity and increasing service demand force health systems to make choices within constrained budgets. The aim of the paper is to describe the study carried out in the Tuscan Health System in Italy on how to set priorities in the disinvestment process for re-allocation. The analysis was based on 2007 benchmarking data for the Tuscan Health System, focusing on indicators with an impact on the level of resources used. For each indicator, the first step was to estimate the gap between the performance of each Health Authority (HA) and the best performance or the regional average. The second step was to measure this gap in terms of financial value. The results of the analysis demonstrated that, at the regional level, 2-7% of the healthcare budget could be re-allocated if all the institutions achieved the regional average or the best practice. The implications of this study can be useful for policy makers and HA top management. In a context of resource scarcity, it allows managers to identify the areas where the institutions can achieve a higher level of efficiency without negative effects on quality of care, and instead re-allocate resources toward services with more value for patients. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  8. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
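
    The Kappa coefficient used for the accuracy assessment can be computed directly from an error matrix, as in this small sketch (the matrix values are made up):

    ```python
    # Cohen's kappa from an error (confusion) matrix.
    import numpy as np

    def kappa(cm):
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                          # observed agreement
        pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
        return (po - pe) / (1 - pe)

    # rows = reference (damage / no damage), cols = classified
    print(f"{kappa([[85, 15], [10, 90]]):.3f}")  # -> 0.750
    ```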

  9. Identifying Highly Penetrant Disease Causal Mutations Using Next Generation Sequencing: Guide to Whole Process

    PubMed Central

    Erzurumluoglu, A. Mesut; Shihab, Hashem A.; Baird, Denis; Richardson, Tom G.; Day, Ian N. M.; Gaunt, Tom R.

    2015-01-01

    Recent technological advances have created challenges for geneticists and a need to adapt to a wide range of new bioinformatics tools and an expanding wealth of publicly available data (e.g., mutation databases and software). This wide range of methods and diversity of file formats used in sequence analysis pose a significant issue, with a considerable amount of time spent before anyone can even attempt to analyse the genetic basis of human disorders. Another point to consider is that although many possess "just enough" knowledge to analyse their data, they do not make full use of the tools and databases that are available and also do not fully understand how their data were created. The primary aim of this review is to document some of the key approaches and provide an analysis schema to make the analysis process more efficient and reliable in the context of discovering highly penetrant causal mutations/genes. This review will also compare the methods used to identify highly penetrant variants when data are obtained from consanguineous individuals as opposed to nonconsanguineous, and when Mendelian disorders are analysed as opposed to common-complex disorders. PMID:26106619

  10. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  11. Identifying Processes of Fish Habitat Development Within a Hierarchical Framework of Spatial Scales

    NASA Astrophysics Data System (ADS)

    Rempel, L. L.; Church, M.; Rice, S. P.

    2004-05-01

    We developed a hierarchical classification of riverine habitats for the gravel reach of Fraser River, British Columbia, which provided a spatial framework to examine the physical processes involved in habitat development. Along the 80-km reach between Hope and Mission, Fraser River has a wandering channel morphology with annual sediment deposition on the order of 300,000 m3/yr. Deposition is spatially variable, with some areas of net erosion and other areas of up to 2.7 m net fill over the past 50 years. The tendency for gravel to be deposited along bar edges and islands creates outstanding habitat for at least 28 species of fish. At the highest level of the classification, we divide the river into 5 sub-reaches (10-20 km scale), each of which varies in morphological expression and sediment gradational tendency. These distinctions are believed to be important in strategic planning for flood and fisheries management. At the intermediate level, we identify major gravel bar units along the river (1 km scale). Nested within gravel bars are habitat units, which represent the finest level of the hierarchical classification and the spatial scale most relevant to fish (10-100 m scale). A basic bar unit consists of a cross-over riffle, gravel bar and adjacent pool. Gravel bars often have an elevated island core that is reflective of bar longevity and relative stability. Processes of sediment deposition and erosion preserve themselves as signature features on the surface of gravel bars. These features generally correspond with sedimentary units of relatively uniform grain texture, which are the building blocks of complex bar morphology and fish habitat. Systematic study of the topography and sedimentology of such features over the past 4 years has enabled us to elucidate seasonal processes of bar development and habitat creation. Changes in stage, as well as large-scale differences between sub-reaches in channel gradient and channel confinement affect sedimentation patterns

  12. Dynamics robustness of cascading systems.

    PubMed

    Young, Jonathan T; Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko

    2017-03-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: The phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: The kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a
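
    A minimal sketch of the qualitative setup, not the paper's exact model: a three-stage linear cascade driven by a stimulus pulse, with the slowest (rate-limiting) module downstream, so that perturbing the upstream rate changes the amplitude far more than the half-maximum response duration. All rate constants are assumed.

    ```python
    # Three-stage linear cascade: pulse -> x1 -> x2 -> x3 (synthetic rates).
    import numpy as np
    from scipy.integrate import solve_ivp

    def run(a1, a2=1.0, a3=0.2):  # a3 smallest -> rate-limiting module
        def rhs(t, y):
            u = 1.0 if t < 5.0 else 0.0  # 5 s stimulus pulse
            return [u - a1*y[0], y[0] - a2*y[1], y[1] - a3*y[2]]
        t = np.linspace(0, 60, 2000)
        sol = solve_ivp(rhs, (0, 60), [0, 0, 0], t_eval=t)
        x3 = sol.y[2]
        above = t[x3 > 0.5 * x3.max()]
        return above[-1] - above[0]  # duration at half-maximum

    # Doubling the upstream decay rate barely changes the response duration.
    print(f"duration a1=2.0: {run(2.0):.1f} s, a1=4.0: {run(4.0):.1f} s")
    ```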

  13. Dynamics robustness of cascading systems

    PubMed Central

    Kaneko, Kunihiko

    2017-01-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade’s kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: The phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: The kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a

  14. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    NASA Astrophysics Data System (ADS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-09-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to -1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determine the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic-, transport- and supply chain management.
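
    The noiseless SSRP is simple enough to simulate directly: from state i, jump uniformly to any j < i until reaching 1. The visit frequencies follow p(i) ∝ 1/i, so a log-log fit recovers a Zipf exponent near 1; the state-space size and run count below are arbitrary choices.

    ```python
    # Noiseless sample-space-reducing process and its Zipf exponent.
    import numpy as np

    rng = np.random.default_rng(5)
    N, runs = 1000, 20000
    visits = np.zeros(N + 1)

    for _ in range(runs):
        i = N
        while i > 1:
            visits[i] += 1
            i = rng.integers(1, i)  # uniform jump to {1, ..., i-1}
        visits[1] += 1

    # Fit log(count) = c - alpha*log(i) over i < N (start state excluded).
    i = np.arange(1, N)
    counts = visits[1:N]
    mask = counts > 0
    alpha = -np.polyfit(np.log(i[mask]), np.log(counts[mask]), 1)[0]
    print(f"estimated Zipf exponent: {alpha:.2f}")  # expect roughly 1
    ```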

  15. Parallel processing in an identified neural circuit: the Aplysia californica gill-withdrawal response model system.

    PubMed

    Leonard, Janet L; Edstrom, John P

    2004-02-01

    /or 'Back-propagation' type. Such models may offer a more biologically realistic representation of nervous system organisation than has been thought. In this model, the six parallel GMNs of the CNS correspond to a hidden layer within one module of the gill-control system. That is, the gill-control system appears to be organised as a distributed system with several parallel modules, some of which are neural networks in their own right. A new model is presented here which predicts that the six GMNs serve as components of a 'push-pull' gain control system, along with known but largely unidentified inhibitory motor neurons from the PVG. This 'push-pull' gain control system sets the responsiveness of the peripheral gill motor system. Neither causal nor correlational links between specific forms of neural plasticity and behavioural plasticity have been demonstrated in the GWR model system. However, the GWR model system does provide an opportunity to observe and describe directly the physiological and biochemical mechanisms of distributed representation and parallel processing in a largely identifiable 'wetware' neural network.

  16. On the processes generating latitudinal richness gradients: identifying diagnostic patterns and predictions

    SciTech Connect

    Hurlbert, Allen H.; Stegen, James C.

    2014-12-02

    Many processes have been put forward to explain the latitudinal gradient in species richness. Here, we use a simulation model to examine four of the most common hypotheses and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The productivity, or energetic constraints, hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The tropical stability hypothesis argues that major climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. (4) Finally, the speciation rates hypothesis suggests that the latitudinal richness gradient arises from a parallel gradient in rates of speciation. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all exhibited phylogenies which were more imbalanced than expected, showed a negative relationship between mean root distance and richness, and diversity-dependence of speciation rate estimates through time. Using Bayesian Analysis of Macroevolutionary Mixtures on the simulated phylogenies, we found that the relationship between speciation rates and latitude could distinguish among these three scenarios. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with more than one hypothesis.

  17. Identifying the influential aquifer heterogeneity factor on nitrate reduction processes by numerical simulation

    NASA Astrophysics Data System (ADS)

    Jang, E.; He, W.; Savoy, H.; Dietrich, P.; Kolditz, O.; Rubin, Y.; Schüth, C.; Kalbacher, T.

    2017-01-01

    Nitrate reduction reactions in groundwater systems are strongly influenced by various aquifer heterogeneity factors that affect the transport of chemical species, the spatial distribution of redox-reactive substances and, as a result, the overall nitrate reduction efficiency. In this study, we investigated the influence of physical and chemical aquifer heterogeneity, with a focus on nitrate transport and redox transformation processes. A numerical modeling study simulating coupled hydrological-geochemical aquifer heterogeneity was conducted in order to improve our understanding of the influence of aquifer heterogeneity on nitrate reduction reactions and to identify the most influential aquifer heterogeneity factors throughout the simulation. Results show that the most influential aquifer heterogeneity factors can change over time. With an abundant presence of electron donors in the high-permeability zones (initial stage), physical aquifer heterogeneity significantly influences nitrate reduction, since it enables the preferential transport of nitrate to these zones and enhances mixing of reactive partners. Chemical aquifer heterogeneity plays a comparatively minor role. Increasing the spatial variability of the hydraulic conductivity also increases the nitrate removal efficiency of the system. However, ignoring chemical aquifer heterogeneity can lead to an underestimation of nitrate removal in the long term. As the spatial variability of the electron donor, i.e. chemical heterogeneity, increases, the number of "hot spots", i.e. zones with comparably higher reactivity, should also increase. Hence, nitrate removal efficiencies will also be spatially variable, but overall removal efficiency will be sustained if longer time scales are considered and nitrate fronts reach these high-reactivity zones.

  18. On the robustness of the r-process in neutron-star mergers against variations of nuclear masses

    NASA Astrophysics Data System (ADS)

    Mendoza-Temis, J. J.; Wu, M. R.; Martínez-Pinedo, G.; Langanke, K.; Bauswein, A.; Janka, H.-T.; Frank, A.

    2016-07-01

    r-process calculations have been performed for matter ejected dynamically in neutron star mergers (NSM). These calculations are based on a complete set of trajectories from a three-dimensional relativistic smoothed particle hydrodynamics (SPH) simulation. Our calculations consider an extended nuclear reaction network, including spontaneous, β-delayed and neutron-induced fission, and adopt fission yield distributions from the ABLA code. In this contribution we have studied the sensitivity of the r-process abundances to nuclear masses by using different mass models for the calculation of neutron capture cross sections via the statistical model. Most of the trajectories, corresponding to 90% of the ejected mass, follow a relatively slow expansion that allows all neutrons to be captured. The resulting abundances are very similar to each other and reproduce the general features of the observed r-process abundance pattern (the second and third peaks, the rare-earth peak and the lead peak) for all mass models, as they are mainly determined by the fission yields. We find distinct differences in the predictions of the mass models at and just above the third peak, which can be traced back to different predictions of neutron separation energies for r-process nuclei around neutron number N = 130.
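
    For reference, the one-neutron separation energy to which the authors trace these differences is the standard quantity defined from binding energies or atomic masses (a textbook definition, not quoted from the paper):

```latex
% Model-to-model differences in S_n around N = 130 drive the abundance
% variations noted above.
\begin{align}
  S_n(Z,N) &= B(Z,N) - B(Z,N-1) \\
           &= \bigl[\, M(Z,N-1) + m_n - M(Z,N) \,\bigr] c^2
\end{align}
```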

  19. Process development for robust removal of aggregates using cation exchange chromatography in monoclonal antibody purification with implementation of quality by design.

    PubMed

    Xu, Zhihao; Li, Jason; Zhou, Joe X

    2012-01-01

    Aggregate removal is one of the most important aspects of monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen for its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and a risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on process performance with respect to aggregate removal. Accordingly, a process design space was established. Aggregate level in the load is the critical parameter: its operating range is set at 0-3% and its acceptable range at 0-5%. Equilibration buffer is the key parameter: its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and its acceptable range at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating and acceptable ranges are identical, set at 250 ± 10 mM acetate, pH 6.0 ± 0.2; 45 ± 10 g/L resin; and 10 ± 20% CV, respectively. Finally, the process was scaled up 80-fold and the impurity removal profiles were characterized. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, proving that the process is both consistent and robust.
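
    The quoted design space lends itself to simple range checks. The sketch below encodes the operating and acceptable ranges from the abstract; parameter names and the classification logic are illustrative, not the authors' software, and the gradient-elution-volume entry is omitted because its units are ambiguous in this record.

```python
# Hedged sketch: the CEX design space quoted above encoded as range checks.

DESIGN_SPACE = {
    # parameter: ((operating lo, hi), (acceptable lo, hi))
    "aggregate_load_pct": ((0.0, 3.0), (0.0, 5.0)),
    "equil_acetate_mM":   ((35.0, 45.0), (30.0, 50.0)),
    "equil_pH":           ((4.9, 5.1), (4.8, 5.2)),
    "elution_acetate_mM": ((240.0, 260.0), (240.0, 260.0)),
    "elution_pH":         ((5.8, 6.2), (5.8, 6.2)),
    "load_mass_g_per_L":  ((35.0, 55.0), (35.0, 55.0)),
}

def classify(params):
    """'in operating range', 'in acceptable range', or an excursion note."""
    status = "in operating range"
    for name, value in params.items():
        (op_lo, op_hi), (ac_lo, ac_hi) = DESIGN_SPACE[name]
        if not ac_lo <= value <= ac_hi:
            return f"excursion: {name} = {value}"
        if not op_lo <= value <= op_hi:
            status = "in acceptable range"
    return status

# Aggregate load above operating but within acceptable range:
print(classify({"aggregate_load_pct": 4.2, "equil_pH": 5.05}))
```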

  20. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  1. Identifying structures, processes, resources and needs of research ethics committees in Egypt

    PubMed Central

    2010-01-01

    Background Concerns have been expressed regarding the adequacy of ethics review systems in developing countries. Limited data are available regarding the structural and functional status of Research Ethics Committees (RECs) in the Middle East. The purpose of this study was to survey the existing RECs in Egypt to better understand their functioning status, perceived resource needs, and challenges. Methods We distributed a self-administered survey tool to Egyptian RECs to collect information on the following domains: general characteristics of the REC, membership composition, ethics training, workload, process of ethics review, perceived challenges to effective functioning, and financial and material resources. We used basic descriptive statistics to evaluate the quantitative data. Results We obtained responses from 67% (12/18) of the identified RECs. Most RECs (10/12) have standard operating procedures and many (7/12) have established policies to manage conflicts of interest. The average membership was 10.3, with a range of 7-19. The predominant member type was physicians (69.5% of all REC members), with little lay representation (13.7%). Most RECs met at least once per month, and the average number of protocols reviewed per meeting was 3.8, with a range of 1-10. Almost three-quarters of the members across the 12 RECs indicated they had received some formal training in ethics. Regarding resources, roughly half of the RECs have dedicated capital equipment (e.g., meeting room, computers, office furniture); none of the RECs have a formal operating budget. Perceived challenges included the absence of national research ethics guidelines and national standards for RECs and the lack of ongoing training of members in research ethics. Conclusion Our study documents several areas of strength and areas for improvement in the operations of Egyptian RECs. Regarding strengths, many of the existing RECs meet frequently and have a majority of members with prior training in research ethics.

  2. A novel scalable, robust downstream process for oncolytic rat parvovirus: isoelectric point-based elimination of empty particles.

    PubMed

    Leuchs, Barbara; Frehtman, Veronika; Riese, Markus; Müller, Marcus; Rommelaere, Jean

    2017-04-01

    The rodent protoparvovirus H-1PV, with its oncolytic and oncosuppressive properties, is a promising anticancer agent currently under testing in clinical trials. This explains the current demand for a scalable, good manufacturing practice-compatible virus purification process that yields high-grade pure infectious particles and overcomes the limitations of the current system based on density gradient centrifugation. We describe here a scalable process offering high purity and recovery. Taking advantage of the isoelectric point difference between full and empty particles, it eliminates most empty particles. Full particles have a significantly higher cationic charge than empty ones, with an isoelectric point of 5.8-6.2 versus 6.3 (as determined by isoelectric focusing and chromatofocusing). Thanks to this difference, infectious full particles can be separated from empty particles and most protein impurities by Convective Interaction Media (CIM®) diethylaminoethyl (DEAE) anion-exchange chromatography: applying unpurified H-1PV to the column in 0.15 M NaCl leaves the former on the column and the latter in the flow-through. The full particles are then recovered by elution with 0.25 M NaCl. The whole large-scale purification process involves filtration, single-step DEAE anion-exchange chromatography, buffer exchange by cross-flow filtration, and final formulation in Visipaque/Ringer solution. It results in 98% removal of contaminating proteins and 96% elimination of empty particles. The final infectious particle concentration reaches 3.5E10 plaque-forming units (PFU)/ml, with a specific activity of 6.8E11 PFU/mg protein. Overall recovery is over 40%. The newly established method is suitable for use in commercial production.
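
    As a quick arithmetic check on the figures quoted above (a reader's back-calculation, not a number reported in the record), the specific activity ties the final titer to an implied protein concentration:

```python
# specific activity = infectious titer / protein concentration, so the
# quoted titer and specific activity imply a protein level in the product.

titer_pfu_per_ml = 3.5e10        # PFU/ml, from the abstract
specific_activity = 6.8e11       # PFU/mg protein, from the abstract

protein_mg_per_ml = titer_pfu_per_ml / specific_activity
print(f"implied protein: {protein_mg_per_ml * 1000:.0f} ug/ml")  # ~51 ug/ml
```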

  3. Development of a Robust and Cost-Effective Friction Stir Welding Process for Use in Advanced Military Vehicles

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Hariharan, A.; Yen, C.-F.; Cheeseman, B. A.

    2011-02-01

    To respond to the advent of more lethal threats, recently designed aluminum-armor-based military-vehicle systems have resorted to an increasing use of higher-strength aluminum alloys (with superior ballistic resistance against armor-piercing (AP) threats and high vehicle light-weighting potential). Unfortunately, these alloys are not very amenable to conventional fusion-based welding technologies, and in order to obtain high-quality welds, solid-state joining technologies such as friction stir welding (FSW) have to be employed. However, since FSW is a relatively new and fairly complex joining technology, its introduction into advanced military vehicle structures is not straightforward and entails a comprehensive multi-step approach. One such (three-step) approach is developed in the present work. Within the first step, experimental and computational techniques are utilized to determine the optimal tool design and the optimal FSW process parameters which result in maximal productivity of the joining process and the highest quality of the weld. Within the second step, techniques are developed for the identification and qualification of the optimal weld joint designs in different sections of a prototypical military vehicle structure. In the third step, problems associated with the fabrication of a sub-scale military vehicle test structure and the blast survivability of the structure are assessed. The results obtained and the lessons learned are used to judge the potential of the current approach in shortening the development time and in enhancing the reliability and blast survivability of military vehicle structures.

  4. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to both problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several failure percentages and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
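
    A minimal numerical sketch of the pipeline as described: several robustness metrics are evaluated over increasing failure fractions and repeated failure realizations, and the leading principal component at each failure level serves as the R*-value. The stand-in "metrics" are random surrogates; only the PCA-per-failure-level structure follows the record.

```python
import numpy as np

rng = np.random.default_rng(1)
fail_fracs = np.linspace(0.05, 0.5, 10)      # fraction of failed elements
n_real, n_metrics = 30, 4                    # realizations, metric count

surface = np.empty((n_real, fail_fracs.size))   # the robustness surface
for j, f in enumerate(fail_fracs):
    # Surrogate metric matrix (realizations x metrics); every metric is
    # normalized to 1 for the intact network and degrades with f.
    metrics = 1.0 - f * rng.uniform(0.5, 1.5, size=(n_real, n_metrics))
    centered = metrics - metrics.mean(axis=0)
    # Leading right singular vector = most informative metric combination.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    surface[:, j] = metrics @ vt[0]          # R*-values at this level

print(surface.shape)   # (30, 10): one R* curve per failure realization
```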

  5. Developing and commercializing sustainable new wood products : a process for identifying viable products.

    Treesearch

    Gordon A. Enk; Stuart L. Hart

    2003-01-01

    A process was designed to evaluate the sustainability and potential marketability of USDA Forest Service patented technologies. The process was designed and tested jointly by the University of North Carolina, the University of Michigan, Partners for Strategic Change, and the USDA Forest Service. Two technologies were evaluated: a fiber-based product and a wood fiber/...

  6. Robustness of Embryonic Patterning

    NASA Astrophysics Data System (ADS)

    Barkai, Naama

    2002-03-01

    Developmental patterning proceeds reliably despite natural fluctuations in the expression levels of genes and changes in gene dosage. Patterning mechanisms that rely on morphogen gradients generally involve a network of feedback loops, which buffer against such perturbations. Although most components of the patterning networks have been identified and characterized, a quantitative and mechanistic understanding of how their functions are integrated to achieve robustness is still missing. Here, we report a quantitative study of a morphogen gradient that determines cell fates in the fruit-fly embryo. We find that differential protein cleavage, coupled with selective diffusion, defines a robust system which can adjust to large variations in the levels of the different protein components. The mechanism underlying robustness relies on the convergence of the signaling profile to a finite distribution, which in most regions is independent of physical boundary conditions such as production rates. An excess of signaling molecules is stored in a restricted spatial domain. This limiting-profile property is exhibited by a class of reaction-diffusion equations and may represent a general mechanism for achieving robustness in morphogen gradient systems. Experimental verification of the model's predictions will be described.
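
    The "limiting profile" behavior can be reproduced with a toy one-dimensional reaction-diffusion model in which degradation is self-enhanced (quadratic in concentration): away from the source, the steady profile becomes nearly independent of the production rate. Parameters are illustrative, not fitted to the embryo.

```python
import numpy as np

# Crude explicit integration of dc/dt = D c'' - k c^2 with morphogen
# injected at one boundary; takes a few seconds to run.

n, dx, dt = 100, 0.05, 1e-4
D, k = 1.0, 1.0                        # diffusion / degradation constants

def steady_profile(production, steps=300_000):
    c = np.zeros(n)
    for _ in range(steps):
        lap = np.roll(c, -1) - 2 * c + np.roll(c, 1)
        lap[0] = c[1] - c[0]           # no-flux boundaries
        lap[-1] = c[-2] - c[-1]
        c += dt * (D * lap / dx**2 - k * c**2)
        c[0] += dt * production        # source at one boundary
    return c

hi, lo = steady_profile(100.0), steady_profile(10.0)
# A 10x change in production barely moves the far-field profile: the
# concentration ratio shrinks toward 1 away from the source.
print(round(hi[0] / lo[0], 2), round(hi[-1] / lo[-1], 2))
```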

  7. Phenotypic Screening Identifies Modulators of Amyloid Precursor Protein Processing in Human Stem Cell Models of Alzheimer's Disease.

    PubMed

    Brownjohn, Philip W; Smith, James; Portelius, Erik; Serneels, Lutgarde; Kvartsberg, Hlin; De Strooper, Bart; Blennow, Kaj; Zetterberg, Henrik; Livesey, Frederick J

    2017-03-06

    Human stem cell models have the potential to provide platforms for phenotypic screens to identify candidate treatments and cellular pathways involved in the pathogenesis of neurodegenerative disorders. Amyloid precursor protein (APP) processing and the accumulation of APP-derived amyloid β (Aβ) peptides are key processes in Alzheimer's disease (AD). We designed a phenotypic small-molecule screen to identify modulators of APP processing in trisomy 21/Down syndrome neurons, a complex genetic model of AD. We identified the avermectins, commonly used as anthelmintics, as compounds that increase the relative production of short Aβ peptides at the expense of longer, potentially more toxic peptides. Further studies demonstrated that this effect is not due to an interaction with the core γ-secretase responsible for Aβ production. This study demonstrates the feasibility of phenotypic drug screening in human stem cell models of Alzheimer-type dementia, and points to possibilities for indirectly modulating APP processing, independently of γ-secretase modulation.

  8. Hypochondriasis Differs From Panic Disorder and Social Phobia: Specific Processes Identified Within Patient Groups.

    PubMed

    Höfling, Volkmar; Weck, Florian

    2017-03-01

    Studies of the comorbidity of hypochondriasis have indicated high rates of co-occurrence with other anxiety disorders. In this study, the contrast among hypochondriasis, panic disorder, and social phobia was investigated using specific processes drawn from cognitive-perceptual models of hypochondriasis. Affective, behavioral, cognitive, and perceptual processes specific to hypochondriasis were assessed in 130 participants diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria (66 with hypochondriasis, 32 with panic disorder, and 32 with social phobia). All processes specific to hypochondriasis were more intense for patients with hypochondriasis than for those with panic disorder or social phobia (0.61 < d < 2.67). No differences were found between patients with hypochondriasis with and without comorbid disorders. Perceptual processes were shown to best discriminate between patients with hypochondriasis and those with panic disorder.

  9. The June 2014 eruption at Piton de la Fournaise: Robust methods developed for monitoring challenging eruptive processes

    NASA Astrophysics Data System (ADS)

    Villeneuve, N.; Ferrazzini, V.; Di Muro, A.; Peltier, A.; Beauducel, F.; Roult, G. C.; Lecocq, T.; Brenguier, F.; Vlastelic, I.; Gurioli, L.; Guyard, S.; Catry, T.; Froger, J. L.; Coppola, D.; Harris, A. J. L.; Favalli, M.; Aiuppa, A.; Liuzzo, M.; Giudice, G.; Boissier, P.; Brunet, C.; Catherine, P.; Fontaine, F. J.; Henriette, L.; Lauret, F.; Riviere, A.; Kowalski, P.

    2014-12-01

    After almost 3.5 years of quiescence, Piton de la Fournaise (PdF) produced a small summit eruption on 20 June 2014 at 21:35 (GMT). The eruption lasted 20 hours and was preceded by: i) the onset of deep eccentric seismicity (15-20 km bsl; 9 km NW of the volcano summit) in March and April 2014; ii) enhanced CO2 soil flux along the NW rift zone; and iii) an increase in the number and energy of shallow (<1.5 km asl) VT events. The increase in VT events began on 9 June. Their signature and shallow location were not characteristic of an eruptive crisis. However, at 20:06 on 20 June their character changed, 74 minutes before the onset of tremor; deformation then began at 20:20. Since 2007, PdF has emitted small magma volumes (<3 Mm3) in events preceded by weak and short precursory phases. To respond to this challenging activity style, new monitoring methods were deployed at OVPF. While the JERK and MSNoise methods were developed for processing of seismic data, borehole tiltmeters and permanent monitoring of summit gas emissions, plus CO2 soil flux, were used to track precursory activity. JERK, based on an analysis of the acceleration slope of broad-band seismometer data, gave advance notice of the new eruption by 50 minutes. MSNoise, based on seismic velocity determination, showed a significant decrease 7 days before the eruption. These signals were coupled with a change in summit fumarole composition. Remote sensing allowed the following syn-eruptive observations: - InSAR confirmed measurements made by the OVPF geodetic network, showing that deformation was localized around the eruptive fissures; - a SPOT5 image acquired at 05:41 on 21 June allowed definition of the flow field area (194,500 m2); - a MODIS image acquired at 06:35 on 21 June gave a lava discharge rate of 6.9±2.8 m3 s-1, implying an erupted volume of between 0.3 and 0.4 Mm3; - this rate was used with the DOWNFLOW and FLOWGO models, calibrated with the textural data from Piton's 2010 lava, to run lava flow…
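
    The JERK method is described here only as "an analysis of the acceleration slope of broad-band seismometer data"; the sketch below is a hedged reconstruction of that idea (sliding-window slope estimate with a threshold), not OVPF's implementation, and all values are hypothetical.

```python
import numpy as np

def jerk_alarm(accel, dt, window, threshold):
    """Indices where the fitted acceleration slope exceeds the threshold."""
    t = np.arange(window) * dt
    alarms = []
    for i in range(len(accel) - window):
        slope = np.polyfit(t, accel[i:i + window], 1)[0]   # jerk estimate
        if abs(slope) > threshold:
            alarms.append(i + window)
    return alarms

# Synthetic record: 600 s of noise, then an accelerating ramp; the first
# alarm appears shortly after the ramp begins.
dt = 1.0
accel = np.concatenate([np.random.default_rng(2).normal(0, 0.1, 600),
                        0.05 * np.arange(200)])
print(jerk_alarm(accel, dt, window=60, threshold=0.02)[:1])
```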

  10. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  11. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE... identify covered persons? (a) Recipients of funds for qualified job training programs must implement... service delivery programs or Web sites in order to provide covered persons with timely and...

  12. Identifying Leadership Potential: The Process of Principals within a Charter School Network

    ERIC Educational Resources Information Center

    Waidelich, Lynn A.

    2012-01-01

    The importance of strong educational leadership for American K-12 schools cannot be overstated. As such, school districts need to actively recruit and develop leaders. One way to do so is for school officials to become more strategic in leadership identification and development. If contemporary leaders are strategic about whom they identify and…

  13. Identifying Gifted Students: Educator Beliefs regarding Various Policies, Processes, and Procedures

    ERIC Educational Resources Information Center

    Schroth, Stephen T.; Helfer, Jason A.

    2008-01-01

    Issues regarding the identification of gifted students have perplexed the field almost since its inception. How one identifies gifted students has tremendous ramifications for a gifted education program's size, curriculum, instructional methods, and administration. Little is known, however, about educator beliefs regarding gifted…

  15. Students' Conceptual Knowledge and Process Skills in Civic Education: Identifying Cognitive Profiles and Classroom Correlates

    ERIC Educational Resources Information Center

    Zhang, Ting; Torney-Purta, Judith; Barber, Carolyn

    2012-01-01

    In 2 related studies framed by social constructivism theory, the authors explored a fine-grained analysis of adolescents' civic conceptual knowledge and skills and investigated them in relation to factors such as teachers' qualifications and students' classroom experiences. In Study 1 (with about 2,800 U.S. students), the authors identified 4…

  16. Identifying the hazard characteristics of powder byproducts generated from semiconductor fabrication processes.

    PubMed

    Choi, Kwang-Min; An, Hee-Chul; Kim, Kwan-Sick

    2015-01-01

    Semiconductor manufacturing processes generate powder particles as byproducts, which could potentially affect workers' health. The chemical composition, size, shape, and crystal structure of these powder particles were investigated by scanning electron microscopy equipped with an energy-dispersive spectrometer, Fourier transform infrared spectrometry, and X-ray diffractometry. The powders generated in the diffusion and chemical mechanical polishing processes were amorphous silica. The particles from the chemical vapor deposition (CVD) and etch processes were TiO2 and Al2O3, and Al2O3, respectively. As for metallization, WO3, TiO2, and Al2O3 particles were generated from equipment used for tungsten and barrier metal (TiN) operations. In photolithography, the powder particles were 1-10 μm in size and spherical in shape. In addition, the powders generated from the high-current and medium-current ion implantation processes included arsenic (As), whereas those from the high-energy process did not. For all samples collected using a personal air sampler during preventive maintenance of process equipment, the collected masses of total airborne particles were < 1 μg, the detection limit of the microbalance. In addition, the mean mass concentrations of airborne PM10 (particles less than 10 μm in diameter), measured with a direct-reading aerosol monitor by area sampling, were between 0.00 and 0.02 μg/m3. Although the exposure concentration of airborne particles during preventive maintenance is extremely low, it is necessary to make continuous improvements to the process and work environment, because the influence of chronic low-level exposure cannot be excluded.

  17. Establishment of a Cost-Effective and Robust Planning Basis for the Processing of M-91 Waste at the Hanford Site

    SciTech Connect

    Johnson, Wayne L.; Parker, Brian M.

    2004-07-30

    This report identifies and evaluates viable alternatives for the accelerated processing of Hanford Site transuranic (TRU) and mixed low-level wastes (MLLW) that cannot be processed using existing site capabilities. Accelerated processing of these waste streams will lead to earlier reduction of risk and considerable life-cycle cost savings. The processing need is to handle both oversized MLLW and TRU containers as well as containers with surface contact dose rates greater than 200 mrem/hr. This capability is known as the "M-91" processing capability required by Tri-Party Agreement milestone M-91-01. The new, phased approach proposed in this evaluation would use a combination of existing and planned processing capabilities to treat and more easily manage contact-handled waste streams first and would provide for earlier processing of these wastes.

  18. Identifying temporal and causal contributions of neural processes underlying the Implicit Association Test (IAT)

    PubMed Central

    Forbes, Chad E.; Cameron, Katherine A.; Grafman, Jordan; Barbey, Aron; Solomon, Jeffrey; Ritter, Walter; Ruchkin, Daniel S.

    2012-01-01

    The Implicit Association Test (IAT) is a popular behavioral measure that assesses the associative strength between outgroup members and stereotypical and counterstereotypical traits. Less is known, however, about the degree to which the IAT reflects automatic processing. Two studies examined automatic processing contributions to a gender IAT using a data-driven, social neuroscience approach. Performance on congruent (e.g., categorizing male names with synonyms of strength) and incongruent (e.g., categorizing female names with synonyms of strength) IAT blocks was analyzed separately using EEG (event-related potentials, or ERPs, and coherence; Study 1) and lesion (Study 2) methodologies. Compared to incongruent blocks, performance on congruent IAT blocks was associated with more positive ERPs that manifested in frontal and occipital regions at automatic processing speeds and in occipital regions at more controlled processing speeds, and was compromised by volume loss in the anterior temporal lobe (ATL), insula, and medial PFC. Performance on incongruent blocks was associated with volume loss in supplementary motor areas, the cingulate gyrus, and a region in medial PFC similar to that found for congruent blocks. Greater coherence was found between frontal and occipital regions to the extent that individuals exhibited more bias. This suggests there are separable neural contributions to congruent and incongruent blocks of the IAT, but there is also a surprising amount of overlap. Given the temporal and regional neural distinctions, these results provide converging evidence that the stereotypic associative strength assessed by the IAT indexes automatic processing to a degree. PMID:23226123

  19. You can't get there from here: identifying process routes to replication.

    PubMed

    Primavera, Judy

    2004-06-01

    All too often, reports of community research and action are presented in an ahistorical and decontextualized fashion, focused more on the content of what was done than on the process of how the work was done and why. The story of the university-community partnership and the family literacy intervention that was developed illustrates the importance of several key process variables in project development and implementation. More specifically, the roles of the social-ecological context, prehistory, personality, self-correction, and unexpected serendipitous events are discussed. If, as community psychologists, we are serious about conducting our work in the most efficient and effective manner possible, if we truly wish to make our work available for replication, and if we seek to develop standards of "best practice" that are meaningful, our communication regarding process must shift from the anecdotal to a position of central importance.

  20. Identifying Cortical Lateralization of Speech Processing in Infants Using Near-Infrared Spectroscopy

    PubMed Central

    Bortfeld, Heather; Fava, Eswen; Boas, David A.

    2010-01-01

    We investigate the utility of near-infrared spectroscopy (NIRS) as an alternative technique for studying infant speech processing. NIRS is an optical imaging technology that uses relative changes in total hemoglobin concentration and oxygenation as an indicator of neural activation. Procedurally, NIRS has the advantage over more common methods (e.g., fMRI) in that it can be used to study the neural responses of behaviorally active infants. Older infants (aged 6–9 months) were allowed to sit on their caretakers’ laps during stimulus presentation to determine relative differences in focal activity in the temporal region of the brain during speech processing. Results revealed a dissociation of sensory-specific processing in two cortical regions, the left and right temporal lobes. These findings are consistent with those obtained using other neurophysiological methods and point to the utility of NIRS as a means of establishing neural correlates of language development in older (and more active) infants. PMID:19142766

  1. A national effort to identify fry processing clones with low acrylamide-forming potential

    USDA-ARS?s Scientific Manuscript database

    Acrylamide is a suspected human carcinogen. Processed potato products, such as chips and fries, contribute to dietary intake of acrylamide. One of the most promising approaches to reducing acrylamide consumption is to develop and commercialize new potato varieties with low acrylamide-forming potential...

  2. Stress test: identifying crowding stress-tolerant hybrids in processing sweet corn

    USDA-ARS?s Scientific Manuscript database

    Improvement in tolerance to intense competition at high plant populations (i.e., crowding stress) is a major genetic driver of corn yield gain over the last half-century. Recent research found differences in crowding stress tolerance among a few modern processing sweet corn hybrids; however, a larger assessment...

  3. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... service delivery programs or Web sites in order to provide covered persons with timely and useful... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority of Service § 1010.300 What processes are to be implemented to...

  4. 20 CFR 1010.300 - What processes are to be implemented to identify covered persons?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... service delivery programs or Web sites in order to provide covered persons with timely and useful... FOR VETERANS' EMPLOYMENT AND TRAINING SERVICE, DEPARTMENT OF LABOR APPLICATION OF PRIORITY OF SERVICE FOR COVERED PERSONS Applying Priority of Service § 1010.300 What processes are to be implemented to...

  5. Identifying the Neural Correlates Underlying Social Pain: Implications for Developmental Processes

    ERIC Educational Resources Information Center

    Eisenberger, Naomi I.

    2006-01-01

    Although the need for social connection is critical for early social development as well as for psychological well-being throughout the lifespan, relatively little is known about the neural processes involved in maintaining social connections. The following review summarizes what is known regarding the neural correlates underlying feeling of…

  6. Sociometric Effects in Small Classroom Groups Using Curricula Identified as Process-Oriented.

    ERIC Educational Resources Information Center

    Nickse, Ruth S.; Ripple, Richard E.

    This study was an attempt fo document aspects of small group work in classrooms engaged in the process education curricula called "Materials and Activities for Teachers and Children" (MATCH). Data on student-student interaction was related to small group work and gathered by paper-and-pencil sociometric questionnaires and measures of group…

  8. Identifying and locating surface defects in wood: Part of an automated lumber processing system

    Treesearch

    Richard W. Conners; Charles W. McMillin; Kingyao Lin; Ramon E. Vasquez-Espinosa

    1983-01-01

    Continued increases in the cost of materials and labor make it imperative for furniture manufacturers to control costs by improving yield and increasing productivity. This paper describes an Automated Lumber Processing System (ALPS) that employs computed tomography, optical scanning technology, the calculation of an optimum cutting strategy, and a computer-driven laser...

  9. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  10. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project.

    PubMed

    Cutting, Elizabeth M; Overby, Casey L; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R; Beitelshees, Amber L

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease.

  11. Accessing spoilage features of osmotolerant yeasts identified from kiwifruit plantation and processing environment in Shaanxi, China.

    PubMed

    Niu, Chen; Yuan, Yahong; Hu, Zhongqiu; Wang, Zhouli; Liu, Bin; Wang, Huxuan; Yue, Tianli

    2016-09-02

    Osmotolerant yeasts originating from the kiwifruit industrial chain can cause spoilage incidents, yet little information is available about their species and spoilage features. This work identified possible spoilage osmotolerant yeasts from orchards and a quick-freeze kiwifruit manufacturer in the main producing areas of Shaanxi, China, and further characterized their spoilage features. A total of 86 osmotolerant isolates spanning 29 species were identified through 26S rDNA sequencing of the D1/D2 domain, among which Hanseniaspora uvarum occurred most frequently and had an intimate relationship with kiwifruit. RAPD analysis indicated high variability in this species across sampling regions. The correlation of genotypes with origins was established, except for isolates from Zhouzhi orchards, and the movement of H. uvarum from orchard to manufacturer can be inferred, which contributed to tracing spoilage sources. The manufacturing environment favored the inhabitance of osmotolerant yeasts more than the orchards, showing a higher positive-sample ratio and osmotolerant-yeast ratio. Growth curves under various glucose levels were fitted with the Grofit R package, and the obtained growth parameters indicated phenotypic diversity in H. uvarum and the remaining species. Wickerhamomyces anomalus (OM14) and Candida glabrata (OZ17) were the most glucose-tolerant species, and the availability of high glucose would assist them in producing more gas. The tested osmotolerant species were odor-altering in kiwifruit concentrate juice. 3-Methyl-1-butanol, phenylethyl alcohol, phenylethyl acetate, 5-hydroxymethylfurfural (5-HMF) and ethyl acetate were the most altered compounds identified by GC/MS in the juice. In particular, W. anomalus produced 4-vinylguaiacol and M. guilliermondii produced 4-ethylguaiacol, which would imperil product acceptance. The study identifies the target spoilers and offers detailed spoilage features, which will be instructive in implementing preventative measures.

  12. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics

    SciTech Connect

    Suh, C.; Biagioni, D.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.

    2011-01-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.

  13. Exploring High-Dimensional Data Space: Identifying Optimal Process Conditions in Photovoltaics: Preprint

    SciTech Connect

    Suh, C.; Glynn, S.; Scharf, J.; Contreras, M. A.; Noufi, R.; Jones, W. B.; Biagioni, D.

    2011-07-01

    We demonstrate how advanced exploratory data analysis coupled to data-mining techniques can be used to scrutinize the high-dimensional data space of photovoltaics in the context of thin films of Al-doped ZnO (AZO), which are essential materials as a transparent conducting oxide (TCO) layer in CuInxGa1-xSe2 (CIGS) solar cells. AZO data space, wherein each sample is synthesized from a different process history and assessed with various characterizations, is transformed, reorganized, and visualized in order to extract optimal process conditions. The data-analysis methods used include parallel coordinates, diffusion maps, and hierarchical agglomerative clustering algorithms combined with diffusion map embedding.
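
    A compact sketch of the two data-mining ingredients named in both records: a diffusion-map embedding (Gaussian kernel, row-normalized transition matrix, trailing eigenvectors as coordinates) followed by a crude grouping step. The "AZO" feature matrix is simulated; the kernel bandwidth and the two-group split are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)),   # two synthetic process
               rng.normal(2.0, 0.3, (20, 5))])  # "recipes"

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise distances
K = np.exp(-d2 / np.median(d2))                        # Gaussian kernel
P = K / K.sum(axis=1, keepdims=True)                   # transition matrix

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
# Skip the trivial first eigenvector; scale coordinates by eigenvalues.
embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]

# Crude two-cluster split along the leading diffusion coordinate; the two
# synthetic recipes land in opposite groups.
labels = (embedding[:, 0] > np.median(embedding[:, 0])).astype(int)
print(labels)
```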

  14. The use of mass spectrometry to identify antigens from proteasome processing.

    PubMed

    Burlet-Schiltz, Odile; Claverol, Stéphane; Gairin, Jean Edouard; Monsarrat, Bernard

    2005-01-01

    Mass spectrometry (MS) is a powerful tool for the characterization of antigenic peptides, which play a major role in the immune system. Most of the major histocompatibility complex (MHC) class I peptides are generated during the degradation of intracellular proteins by the proteasome, a catalytic complex present in all eukaryotic cells. This chapter focuses on the contribution of MS to the understanding of the mechanisms of antigen processing by the proteasome. This knowledge may be valuable for the design of specific inhibitors of the proteasome, which has recently been recognized as a therapeutic target in cancer therapies, and for the development of efficient peptide vaccines in immunotherapies. Examples from the literature have been chosen to illustrate how MS data can contribute, first, to the understanding of the mechanisms of proteasomal processing and, second, to the understanding of the crucial role of the proteasome in cytotoxic T lymphocyte (CTL) activation. The general strategy based on MS analyses used in these studies is also described.

  15. Identifying scale-emergent, nonlinear, asynchronous processes of wetland methane exchange

    NASA Astrophysics Data System (ADS)

    Sturtevant, Cove; Ruddell, Benjamin L.; Knox, Sara Helen; Verfaillie, Joseph; Matthes, Jaclyn Hatala; Oikawa, Patricia Y.; Baldocchi, Dennis

    2016-01-01

    Methane (CH4) exchange in wetlands is complex, involving nonlinear asynchronous processes across diverse time scales. These processes and time scales are poorly characterized at the whole-ecosystem level, yet are crucial for accurate representation of CH4 exchange in process models. We used a combination of wavelet analysis and information theory to analyze interactions between whole-ecosystem CH4 flux and biophysical drivers in two restored wetlands of Northern California from hourly to seasonal time scales, explicitly questioning assumptions of linear, synchronous, single-scale analysis. Although seasonal variability in CH4 exchange was dominantly and synchronously controlled by soil temperature, water table fluctuations, and plant activity were important synchronous and asynchronous controls at shorter time scales that propagated to the seasonal scale. Intermittent, subsurface water table decline promoted short-term pulses of methane emission but ultimately decreased seasonal CH4 emission through subsequent inhibition after rewetting. Methane efflux also shared information with evapotranspiration from hourly to multiday scales and the strength and timing of hourly and diel interactions suggested the strong importance of internal gas transport in regulating short-term emission. Traditional linear correlation analysis was generally capable of capturing the major diel and seasonal relationships, but mesoscale, asynchronous interactions and nonlinear, cross-scale effects were unresolved yet important for a deeper understanding of methane flux dynamics. We encourage wider use of these methods to aid interpretation and modeling of long-term continuous measurements of trace gas and energy exchange.
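
    An illustrative sketch of the information-theory side of the analysis: histogram-based mutual information between a driver (e.g., water table) and CH4 flux, evaluated at different lags to expose asynchronous coupling. The binning, lag, and synthetic series are all assumptions, not the authors' estimator.

```python
import numpy as np

def mutual_info(x, y, bins=12):
    """Histogram estimate of mutual information (bits) between x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(4)
driver = rng.normal(size=2000)
flux = np.roll(driver, 24) + 0.5 * rng.normal(size=2000)  # 24-step lag

# Shared information is low at zero lag and peaks at the true lag.
for lag in (0, 24):
    print(lag, round(mutual_info(driver, np.roll(flux, -lag)), 3))
```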

  16. The engagement of children with disabilities in health-related technology design processes: identifying methodology.

    PubMed

    Allsop, Matthew J; Holt, Raymond J; Levesley, Martin C; Bhakta, Bipinchandra

    2010-01-01

    This review aims to identify research methodology that is suitable for involving children with disabilities in the design of healthcare technology, such as assistive technology and rehabilitation equipment. A review of the literature included the identification of methodology that is available from domains outside of healthcare and suggested a selection of available methods. The need to involve end users within the design of healthcare technology was highlighted, with particular attention to the need for greater levels of participation from children with disabilities within all healthcare research. Issues that may arise when trying to increase such involvement included the need to consider communication via feedback and tailored information, the need to measure levels of participation occurring in current research, and caution regarding the use of proxy information. Additionally, five suitable methods were highlighted that are available for use with children with disabilities in the design of healthcare technology. The methods identified in the review need to be put into practice to establish effective and, if necessary, novel ways of designing healthcare technology when end users are children with disabilities.

  17. Comparative assessment of genomic DNA extraction processes for Plasmodium: Identifying the appropriate method.

    PubMed

    Mann, Riti; Sharma, Supriya; Mishra, Neelima; Valecha, Neena; Anvikar, Anupkumar R

    2015-12-01

    Plasmodium DNA, in addition to being used for the molecular diagnosis of malaria, finds utility in monitoring patient responses to antimalarial drugs, drug resistance studies, genotyping, and sequencing. Over the years, numerous protocols have been proposed for extracting Plasmodium DNA from a variety of sources. Given that DNA isolation is fundamental to successful molecular studies, here we review the most commonly used methods for Plasmodium genomic DNA isolation, emphasizing their pros and cons. A comparison of these existing methods has been made to evaluate their appropriateness for different applications and to identify the method best suited to a particular laboratory-based study. Selection of a suitable and accessible DNA extraction method for Plasmodium requires consideration of many factors, the most important being sensitivity, cost-effectiveness, and the purity and stability of the isolated DNA. There is a pressing need to develop a method that performs well on all of these parameters.

  18. Novel and Robust Transplantation Reveals the Acquisition of Polarized Processes by Cortical Cells Derived from Mouse and Human Pluripotent Stem Cells

    PubMed Central

    Nagashima, Fumiaki; Suzuki, Ikuo K.; Shitamukai, Atsunori; Sakaguchi, Haruko; Iwashita, Misato; Kobayashi, Taeko; Tone, Shigenobu; Toida, Kazunori; Vanderhaeghen, Pierre

    2014-01-01

    Current stem cell technologies have enabled the induction of cortical progenitors and neurons from embryonic stem cells (ESCs) and induced pluripotent stem cells in vitro. To understand the mechanisms underlying the acquisition of apico-basal polarity and the formation of processes associated with the stemness of cortical cells generated in monolayer culture, here, we developed a novel in utero transplantation system based on the moderate dissociation of adherens junctions in neuroepithelial tissue. This method enables (1) the incorporation of remarkably higher numbers of grafted cells and (2) quantitative morphological analyses at single-cell resolution, including time-lapse recording analyses. We then grafted cortical progenitors induced from mouse ESCs into the developing brain. Importantly, we revealed that the mode of process extension depends on the extrinsic apico-basal polarity of the host epithelial tissue, as well as on the intrinsic differentiation state of the grafted cells. Further, we successfully transplanted cortical progenitors induced from human ESCs, showing that our strategy enables investigation of the neurogenesis of human neural progenitors within the developing mouse cortex. Specifically, human cortical cells exhibit multiple features of radial migration. The robust transplantation method established here could be utilized both to uncover the missing gap between neurogenesis from ESCs and the tissue environment and as an in vivo model of normal and pathological human corticogenesis. PMID:24325299

  19. Identifying positioning-based attacks against 3D printed objects and the 3D printing process

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

    Zeltmann et al. demonstrated that damage to an object's structural integrity and other quality attributes can be caused by changing its position on a 3D printer's build plate. On some printers, for example, object surfaces and support members may be stronger when oriented parallel to the X or Y axis. The challenge presented by the need to assure 3D-printed object orientation is that orientation can be altered in numerous places throughout the system. This paper considers attack scenarios and discusses where attacks that change printing orientation can occur in the process. An imaging-based solution to combat this problem is presented.

  20. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A.; Pitman, A.; Decker, M. R.; De Kauwe, M. G.; Abramowitz, G.; Wang, Y.; Kala, J.

    2015-12-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. Previous studies have noted the limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions, but very few studies have systematically evaluated LSMs during rainfall deficits. We investigate the performance of the Community Atmosphere Biosphere Land Exchange (CABLE) LSM in simulating latent heat fluxes in offline mode. CABLE is evaluated against eddy covariance measurements of latent heat flux across 20 flux tower sites at sub-annual to inter-annual time scales, with a focus on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux is explored by employing alternative representations of hydrology, soil properties, leaf area index and stomatal conductance. We demonstrate the critical role of hydrological processes in capturing observed declines in latent heat. The effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. The default CABLE performs reasonably well at annual scales despite grossly underestimating latent heat during rainfall deficits, highlighting the importance of evaluating models explicitly under water-stressed conditions across multiple vegetation and climate regimes. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining deficiencies point to future research needs.
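
    The evaluation strategy advocated in this record, scoring the model specifically within rainfall-deficit periods rather than in long-term means, can be sketched as follows. The series are synthetic stand-ins for flux-tower observations and CABLE output, and the deficit definition (lowest-decile 90-day rainfall totals) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 3 * 365
precip = rng.gamma(0.3, 8.0, size=days)                       # mm/day
obs_le = 80 + 30 * np.sin(np.arange(days) * 2 * np.pi / 365)  # W/m2

totals = np.convolve(precip, np.ones(90), mode="valid")       # 90-day sums
mask = np.zeros(days, dtype=bool)
for i in np.where(totals < np.quantile(totals, 0.1))[0]:
    mask[i:i + 90] = True                                     # deficit days

# Synthetic "model": unbiased overall but dry-period deficient, mimicking
# the behaviour this record reports for the default CABLE.
mod_le = obs_le + rng.normal(0, 5, days)
mod_le[mask] -= 25.0

print("all-period bias    :", round(float((mod_le - obs_le).mean()), 1))
print("deficit-period bias:",
      round(float((mod_le[mask] - obs_le[mask]).mean()), 1))
```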

  1. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-01

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  2. Modelling evapotranspiration during precipitation deficits: identifying critical processes in a land surface model

    NASA Astrophysics Data System (ADS)

    Ukkola, A. M.; Pitman, A. J.; Decker, M.; De Kauwe, M. G.; Abramowitz, G.; Kala, J.; Wang, Y.-P.

    2015-10-01

    Surface fluxes from land surface models (LSM) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat flux simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual time scales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux are explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance are shown to be highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions but remaining biases point to future research needs. Our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  3. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGES

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; ...

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  4. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    SciTech Connect

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; De Kauwe, Martin G.; Abramowitz, Gab; Kala, Jatin; Wang, Ying-Ping

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  5. Repurposing the Clinical Record: Can an Existing Natural Language Processing System De-identify Clinical Notes?

    PubMed Central

    Morrison, Frances P.; Li, Li; Lai, Albert M.; Hripcsak, George

    2009-01-01

    Electronic clinical documentation can be useful for activities such as public health surveillance, quality improvement, and research, but existing methods of de-identification may not provide sufficient protection of patient data. The general-purpose natural language processor MedLEE retains medical concepts while excluding the remaining text so, in addition to processing text into structured data, it may be able to provide a secondary benefit of de-identification. Without modifying the system, the authors tested the ability of MedLEE to remove protected health information (PHI) by comparing 100 outpatient clinical notes with the corresponding XML-tagged output. Of 809 instances of PHI, 26 (3.2%) were detected in output as a result of processing and identification errors. However, PHI in the output was highly transformed, much of it appearing as normalized terms for medical concepts, potentially making re-identification more difficult. The MedLEE processor may be a good enhancement to other de-identification systems, both removing PHI and providing coded data from clinical text. PMID:18952938

  6. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation in an urban context as well, most of the key stakeholders would be resistant to the idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that, in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  7. Repurposing the clinical record: can an existing natural language processing system de-identify clinical notes?

    PubMed

    Morrison, Frances P; Li, Li; Lai, Albert M; Hripcsak, George

    2009-01-01

    Electronic clinical documentation can be useful for activities such as public health surveillance, quality improvement, and research, but existing methods of de-identification may not provide sufficient protection of patient data. The general-purpose natural language processor MedLEE retains medical concepts while excluding the remaining text so, in addition to processing text into structured data, it may be able to provide a secondary benefit of de-identification. Without modifying the system, the authors tested the ability of MedLEE to remove protected health information (PHI) by comparing 100 outpatient clinical notes with the corresponding XML-tagged output. Of 809 instances of PHI, 26 (3.2%) were detected in output as a result of processing and identification errors. However, PHI in the output was highly transformed, much of it appearing as normalized terms for medical concepts, potentially making re-identification more difficult. The MedLEE processor may be a good enhancement to other de-identification systems, both removing PHI and providing coded data from clinical text.

  8. Unsupervised image processing scheme for transistor photon emission analysis in order to identify defect location

    NASA Astrophysics Data System (ADS)

    Chef, Samuel; Jacquir, Sabir; Sanchez, Kevin; Perdu, Philippe; Binczak, Stéphane

    2015-01-01

    The study of the light emitted by transistors in a highly scaled complementary metal oxide semiconductor (CMOS) integrated circuit (IC) has become a key method with which to analyze faulty devices, track the failure root cause, and obtain candidate locations at which to start the physical analysis. The localization of defective areas in ICs serves as a reliability check and gives the designer information for improving the IC design. The scaling of CMOS leads to an increase in the number of active nodes inside the acquisition area; it also increases the differences between spot intensities. In order to improve the identification of all of the photon emission spots, we introduce an unsupervised processing scheme. It is based on iterative thresholding decomposition (ITD) and mathematical morphology operations. It unveils all of the emission spots and removes most of the noise from the database through a succession of image-processing operations. The ITD approach, based on five thresholding methods, is tested on 15 photon emission databases (10 real cases and 5 simulated cases). The localization of photon emission areas is compared to expert identification, and the estimation quality is quantified using the object consistency error.
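
    As a rough illustration of the kind of pipeline the abstract describes (iterative thresholding plus mathematical morphology), the sketch below sweeps a decreasing series of intensity thresholds, cleans each binary slice with a morphological opening, and keeps each spot the first time it appears. It is not the authors' ITD implementation; the threshold schedule, minimum spot size and default structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

def emission_spots(img, n_levels=5, min_size=4):
    """Label photon-emission spots by iterative thresholding + morphology."""
    spots = np.zeros(img.shape, dtype=int)
    # interior thresholds, from brightest to dimmest
    thresholds = np.linspace(img.max(), img.min(), n_levels + 1)[1:-1]
    next_label = 1
    for t in thresholds:                          # iterative thresholding
        mask = ndimage.binary_opening(img > t)    # suppress speckle noise
        labels, n = ndimage.label(mask)
        for i in range(1, n + 1):
            blob = labels == i
            if blob.sum() >= min_size and not spots[blob].any():
                spots[blob] = next_label          # record newly found spot
                next_label += 1
    return spots
```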

  9. Identifying Spatial Patterns and Processes Affecting Mean Annual Runoff in the Alzette River Basin, Luxembourg

    NASA Astrophysics Data System (ADS)

    Smettem, Keith; Klaus, Julian; Dickson, Sam; Pfister, Laurent; Giustarini, Laura

    2017-04-01

    Mean annual runoff can be impacted by changes to climate and anthropogenic activities within a catchment. Differences in mean annual runoff between catchments in a local region can also reflect variations in average catchment properties, particularly average soil water storage over the prevailing plant root depth. We investigate the relative importance of precipitation, potential evapotranspiration and catchment properties using the Budyko framework on sub-catchments of the Alzette river basin in Luxembourg (as represented by the Choudhury model, which uses a single catchment parameter 'n' to encode catchment characteristics). We seek to establish whether the 'Choudhury catchment parameter' can be used as a regionalisation index for mean annual runoff and therefore aid in identifying hydrologic response units. For 51 Luxembourgish sub-catchments ranging in size from 0.45 km2 to 4232 km2, we used average annual precipitation, potential evapotranspiration and runoff over a 12-year period ending in 2012 to identify the 'n' parameter by curve fitting. We then break 'n' down into three component parts: annual mean storm depth, α (mm); mean effective rooting depth, Ze (mm); and relative soil water holding capacity, κ (dimensionless). The n parameter then becomes a function of κZe/α. Information on each of these three components can be obtained independently from GIS mapping of land use, soil texture and spatially distributed rainfall statistics. Results showed that the fitted n parameter is not affected by catchment size and did not increase with an increased percentage of forest cover (potentially increased Ze). The soil water holding capacity exhibits a weak regional trend from north (0.1 to 0.2) to south (0.15 to 0.25), and α also declined from 18 mm in the north-east to 12 mm in the south-west, following a general slight orographic trend in the rainfall. The independent estimate of n suggests a regional trend, with the lowest values in the north-east and the highest values (less
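
    A minimal sketch of the curve-fitting step, assuming the Choudhury (1999) form of the Budyko relation, E = P*PET/(P^n + PET^n)^(1/n), with runoff obtained from the water balance Q = P - E. The sub-catchment values below are invented for illustration; the study's data are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def choudhury_runoff(X, n):
    """Mean annual runoff from the Choudhury form of the Budyko relation."""
    P, PET = X
    E = P * PET / (P**n + PET**n) ** (1.0 / n)   # mean annual evaporation
    return P - E                                  # water balance: Q = P - E

P = np.array([900.0, 850.0, 1000.0])    # precipitation (mm/yr), invented
PET = np.array([650.0, 700.0, 620.0])   # potential ET (mm/yr), invented
Q = np.array([320.0, 240.0, 450.0])     # observed runoff (mm/yr), invented

n_fit, _ = curve_fit(choudhury_runoff, (P, PET), Q, p0=[2.0])
print(f"fitted catchment parameter n = {n_fit[0]:.2f}")
```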

  10. Progression after AKI: Understanding Maladaptive Repair Processes to Predict and Identify Therapeutic Treatments.

    PubMed

    Basile, David P; Bonventre, Joseph V; Mehta, Ravindra; Nangaku, Masaomi; Unwin, Robert; Rosner, Mitchell H; Kellum, John A; Ronco, Claudio

    2016-03-01

    Recent clinical studies indicate a strong link between AKI and progression of CKD. The increasing prevalence of AKI must compel the nephrology community to consider the long-term ramifications of this syndrome. Considerable gaps in knowledge exist regarding the connection between AKI and CKD. The 13th Acute Dialysis Quality Initiative meeting entitled "Therapeutic Targets of Human Acute Kidney Injury: Harmonizing Human and Experimental Animal Acute Kidney Injury" convened in April of 2014 and assigned a working group to focus on issues related to progression after AKI. This article provides a summary of the key conclusions and recommendations of the group, including an emphasis on terminology related to injury and repair processes for both clinical and preclinical studies, elucidation of pathophysiologic alterations of AKI, identification of potential treatment strategies, identification of patients predisposed to progression, and potential management strategies. Copyright © 2016 by the American Society of Nephrology.

  11. Progression after AKI: Understanding Maladaptive Repair Processes to Predict and Identify Therapeutic Treatments

    PubMed Central

    Bonventre, Joseph V.; Mehta, Ravindra; Nangaku, Masaomi; Unwin, Robert; Rosner, Mitchell H.; Kellum, John A.; Ronco, Claudio

    2016-01-01

    Recent clinical studies indicate a strong link between AKI and progression of CKD. The increasing prevalence of AKI must compel the nephrology community to consider the long-term ramifications of this syndrome. Considerable gaps in knowledge exist regarding the connection between AKI and CKD. The 13th Acute Dialysis Quality Initiative meeting entitled “Therapeutic Targets of Human Acute Kidney Injury: Harmonizing Human and Experimental Animal Acute Kidney Injury” convened in April of 2014 and assigned a working group to focus on issues related to progression after AKI. This article provides a summary of the key conclusions and recommendations of the group, including an emphasis on terminology related to injury and repair processes for both clinical and preclinical studies, elucidation of pathophysiologic alterations of AKI, identification of potential treatment strategies, identification of patients predisposed to progression, and potential management strategies. PMID:26519085

  12. Identifying and quantifying geochemical and mixing processes in the Matanza-Riachuelo Aquifer System, Argentina.

    PubMed

    Armengol, S; Manzano, M; Bea, S A; Martínez, S

    2017-12-01

    The Matanza-Riachuelo River Basin, in the northeast of the Buenos Aires Province, is one of the most industrialized and populated regions in Argentina and is known worldwide for its alarming environmental degradation. In order to prevent further damage, the aquifer system, which consists of two overlaid aquifers, has been monitored since 2008 by the river basin authority, Autoridad de la Cuenca Matanza-Riachuelo. The groundwater chemical baseline was established in a previous paper (Zabala et al., 2016), and this paper is devoted to identifying the main physical and hydrogeochemical processes that control groundwater chemistry and its areal distribution. Thirty-five representative groundwater samples from the Upper Aquifer and thirty-four from the deep Puelche Aquifer have been studied with a multi-tool approach to understand the origin of their chemical and isotopic values. The resulting conceptual model has been validated through hydrogeochemical modeling. Most of the aquifer system has fresh groundwater, but some areas have brackish and saline groundwater. Water recharging the Upper Aquifer is of the Ca-HCO3 type as a result of soil CO2 and carbonate dissolution. Evapotranspiration plays a major role in concentrating recharge water. After recharge, groundwater becomes Na-HCO3, mostly due to cation exchange with Na release and Ca uptake, which induces calcite dissolution. Saline groundwaters exist in the lower and upper sectors of the basin as a result of Na-HCO3 water mixing with marine water of different origins. In the upper reaches, besides mixing with connate seawater, other sources of SO4 exist, most probably gypsum and/or sulfides. This work highlights the relevance of performing detailed studies to understand the processes controlling groundwater chemistry at the regional scale. Moreover, it is a step forward in the knowledge of the aquifer system, and provides a sound scientific basis to design effective management programs and recovery plans.

  13. Covariance Association Test (CVAT) Identifies Genetic Markers Associated with Schizophrenia in Functionally Associated Biological Processes.

    PubMed

    Rohde, Palle Duun; Demontis, Ditte; Cuyabano, Beatriz Castro Dias; Børglum, Anders D; Sørensen, Peter

    2016-08-01

    Schizophrenia is a psychiatric disorder with large personal and social costs, and understanding the genetic etiology is important. Such knowledge can be obtained by testing the association between a disease phenotype and individual genetic markers; however, such single-marker methods have limited power to detect genetic markers with small effects. Instead, aggregating genetic markers based on biological information might increase the power to identify sets of genetic markers of etiological significance. Several set test methods have been proposed; here we propose a new set test derived from genomic best linear unbiased prediction (GBLUP), the covariance association test (CVAT). We compared the performance of CVAT to other commonly used set tests. The comparison was conducted using a simulated study population with the same genetic parameters as schizophrenia. We found that CVAT was among the top performers. When extending CVAT to utilize a mixture of SNP effects, we found an increase in power to detect the causal sets. Applying the methods to a Danish schizophrenia case-control data set, we found genomic evidence for association of schizophrenia with vitamin A metabolism and immunological responses, which have previously been implicated in schizophrenia based on experimental and observational studies. Copyright © 2016 by the Genetics Society of America.

  14. Key biological processes driving metastatic spread of pancreatic cancer as identified by multi-omics studies.

    PubMed

    Le Large, T Y S; Bijlsma, M F; Kazemier, G; van Laarhoven, H W M; Giovannetti, E; Jimenez, C R

    2017-06-01

    Pancreatic ductal adenocarcinoma (PDAC) is an extremely aggressive malignancy, characterized by a high metastatic burden already at the time of diagnosis. The metastatic potential of PDAC is one of the main reasons for this poor outcome, alongside the lack of significant improvement in effective treatments over the last decade. Key mutated driver genes, such as activating KRAS mutations, are concordantly expressed in primary and metastatic tumors. However, the biology behind the metastatic potential of PDAC is not fully understood. Recently, large-scale omic approaches have revealed new mechanisms by which PDAC cells gain their metastatic potency. In particular, genomic studies have shown that multiple heterogeneous subclones reside in the primary tumor with different metastatic potential. The development of metastases may be correlated to a more mesenchymal transcriptomic subtype. However, for cancer cells to survive in a distant organ, metastatic sites need to be modulated into pre-metastatic niches. Proteomic studies identified the influence of exosomes on the Kupffer cells in the liver, which could function to prepare this tissue for metastatic colonization. Phosphoproteomics adds an extra layer to the established omic techniques by unravelling key functional signaling. Future studies integrating results from these large-scale omic approaches will hopefully improve PDAC prognosis through identification of new therapeutic targets and patient selection tools. In this article, we will review the current knowledge on the biology of PDAC metastasis unravelled by large-scale multi-omic approaches. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  15. Pharmacy patronage: identifying key factors in the decision making process using the determinant attribute approach.

    PubMed

    Franic, Duska M; Haddock, Sarah M; Tucker, Leslie Tootle; Wooten, Nathan

    2008-01-01

    To use the determinant attribute approach, a research method commonly used in marketing to identify the wants of various consumer groups, to evaluate consumer pharmacy choice when having a prescription order filled in different pharmacy settings. Cross-sectional. Community independent, grocery store, community chain, and discount store pharmacies in Georgia between April 2005 and April 2006. Convenience sample of adult pharmacy consumers (n = 175). Survey measuring consumer preferences on 26 attributes encompassing general pharmacy site features (16 items), pharmacist characteristics (5 items), and pharmacy staff characteristics (5 items). 26 potential determinant attributes for pharmacy selection. 175 consumers were surveyed at community independent (n = 81), grocery store (n = 44), community chain (n = 27), or discount store (n = 23) pharmacy settings. The attributes of pharmacists and staff at all four pharmacy settings were shown to affect pharmacy patronage motives, although consumers frequenting non-community independent pharmacies were also motivated by secondary convenience factors, e.g., hours of operation and prescription coverage. Most consumers do not perceive pharmacies as merely prescription-distribution centers that vary only by convenience. Prescriptions are not just another economic good. Pharmacy personnel influence pharmacy selection; therefore, optimal staff selection and training are likely the greatest asset and most important investment for ensuring pharmacy success.

  16. ESL Teachers' Perceptions of the Process for Identifying Adolescent Latino English Language Learners with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Ferlis, Emily C.

    2012-01-01

    This dissertation examines the question "how do ESL teachers perceive the prereferral process for identifying adolescent Latino English language learners with specific learning disabilities?" The study fits within the Latino Critical Race Theory framework and employs an interpretive phenomenological qualitative research approach.…

  17. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…

  18. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness within the 'metabolite' framework by using the novel concept of the 'flux-sum', which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the metabolites essential to cell survival, and then discovered 'acclimator' metabolites that can control cell growth. Furthermore, this concept of 'metabolite essentiality' should be useful in developing new metabolic engineering strategies for improved production of various bioproducts, and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on the dynamic properties of cellular metabolism.
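
    The flux-sum lends itself to a one-line computation once a stoichiometric matrix S and a steady-state flux vector v are available: because incoming and outgoing fluxes balance at pseudo-steady state, the flux-sum of a metabolite is half the sum of the absolute fluxes through it. The toy two-metabolite pathway below is an assumption for illustration.

```python
import numpy as np

S = np.array([[ 1, -1,  0],     # metabolite A: made by v1, used by v2
              [ 0,  1, -1]])    # metabolite B: made by v2, used by v3
v = np.array([2.0, 2.0, 2.0])   # steady-state fluxes (S @ v = 0)

# flux-sum per metabolite: half the total absolute flux through it
flux_sum = 0.5 * np.abs(S) @ np.abs(v)
print(flux_sum)                 # [2. 2.]: flux-sum of A and B
```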

  19. Identifying influential nodes in a wound healing-related network of biological processes using mean first-passage time

    NASA Astrophysics Data System (ADS)

    Arodz, Tomasz; Bonchev, Danail

    2015-02-01

    In this study we offer an approach to network physiology, which proceeds from transcriptomic data and uses gene ontology analysis to identify the biological processes most enriched at several critical time points of the wound healing process (days 0, 3 and 7). The top-ranking differentially expressed genes for each process were used to build two networks: one with all proteins regulating the transcription of the selected genes, and a second one involving the proteins from the signaling pathways that activate the transcription factors. The information from these networks is used to build a network of the most enriched processes, with undirected links weighted proportionally to the count of shared genes between the pair of processes, and directed links weighted by the count of relationships connecting genes from one process to genes from the other. In analyzing the network thus built we used an approach based on random walks that accounts for the temporal aspects of the spread of a signal in the network (mean first-passage time, MFPT). The MFPT scores allowed us to identify the top influential, as well as the top essential, biological processes, which vary with the progress of the healing process. The most essential process for day 0 was found to be the Wnt-receptor signaling pathway, well known for its crucial role in wound healing, while for day 3 it was the regulation of the NF-kB cascade, essential for matrix remodeling in the wound healing process. The MFPT-based scores correctly reflected the dynamics of the healing process: highly concentrated around several processes between day 0 and day 3, and becoming more diffuse by day 7.
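
    As a sketch of the MFPT machinery (not the authors' code): for a random walk with transition matrix P, the vector of mean first-passage times to a target node t solves the linear system (I - P_t) m = 1, where P_t is P with the target's row and column removed. The 4-node weight matrix below is invented for illustration.

```python
import numpy as np

W = np.array([[0, 2, 1, 0],
              [1, 0, 3, 1],
              [0, 1, 0, 2],
              [1, 0, 1, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)      # row-normalise link weights

def mfpt_to(P, t):
    """Mean first-passage times from every node to target t."""
    idx = [i for i in range(len(P)) if i != t]
    A = np.eye(len(idx)) - P[np.ix_(idx, idx)]
    m = np.linalg.solve(A, np.ones(len(idx)))
    out = np.zeros(len(P))
    out[idx] = m                          # out[t] stays 0 by definition
    return out

print(mfpt_to(P, t=0))                    # mean steps for a signal to reach node 0
```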

  1. Development and optimization of a process for automated recovery of single cells identified by microengraving.

    PubMed

    Choi, Jae Hyeok; Ogunniyi, Adebola O; Du, Mindy; Du, Minna; Kretschmann, Marcel; Eberhardt, Jens; Love, J Christopher

    2010-01-01

    Microfabricated devices are useful tools for manipulating and interrogating large numbers of single cells in a rapid and cost-effective manner, but connecting these systems to the existing platforms used in routine high-throughput screening of libraries of cells remains challenging. Methods to sort individual cells of interest from custom microscale devices to standardized culture dishes in an efficient and automated manner, without affecting the viability of the cells, are critical. Combining a commercially available instrument for colony picking (CellCelector, AVISO GmbH) and a customized software module, we have established an optimized process for the automated retrieval of individual antibody-producing cells secreting desirable antibodies from dense arrays of subnanoliter containers. The selection of cells for retrieval is guided by data obtained from a high-throughput, single-cell screening method called microengraving. Using this system, 100 clones from a mixed population of two cell lines secreting different antibodies (12CA5 and HYB099-01) were sorted with 100% accuracy (50 clones of each) in approximately 2 h, and the cells retained viability.

  2. Isotopic investigations of dissolved organic N in soils identifies N mineralization as a major sink process

    NASA Astrophysics Data System (ADS)

    Wanek, Wolfgang; Prommer, Judith; Hofhansl, Florian

    2016-04-01

    Dissolved organic nitrogen (DON) is a major component of transfer processes in the global nitrogen (N) cycle, contributing to atmospheric N deposition, terrestrial N losses and aquatic N inputs. In terrestrial ecosystems several sources and sinks contribute to belowground DON pools, yet they are hard to quantify. In soils, DON is released by desorption of soil organic N and by microbial lysis. Major losses from the DON pool occur via sorption, hydrological losses and soil N mineralization. Sorption/desorption, lysis and hydrological losses are expected to exhibit no 15N fractionation, therefore allowing different DON sources to be traced. Soil N mineralization of DON has commonly been assumed to have no or only a small isotope effect of between 0 and 4‰; however, isotope fractionation by N mineralization has rarely been measured and might be larger than anticipated. Depending on the degree of 15N fractionation by soil N mineralization, we would expect DON to become 15N-enriched relative to bulk soil N, and dissolved inorganic N (DIN; ammonium and nitrate) to become 15N-depleted relative to both bulk soil N and DON. Isotopic analyses of soil organic N, DON and DIN might therefore provide insights into the relative contributions of different source and sink processes. This study therefore aimed at a better understanding of the isotopic signatures of DON and its controls in soils. We investigated the concentration and isotopic composition of bulk soil N, DON and DIN at a wide range of sites covering arable, grassland and forest ecosystems in Austria across an altitudinal transect. The isotopic compositions of ammonium, nitrate and DON were measured in soil extracts after chemical conversion to N2O by purge-and-trap isotope ratio mass spectrometry. We found that δ15N values of DON ranged between -0.4 and 7.6‰, closely tracking the δ15N values of bulk soils. However, DON was 15N-enriched relative to bulk soil N by 1.5±1.3‰ (1 SD), and inorganic N was 15N

  3. A fast-initiating ionically tagged ruthenium complex: a robust supported pre-catalyst for batch-process and continuous-flow olefin metathesis.

    PubMed

    Borré, Etienne; Rouen, Mathieu; Laurent, Isabelle; Magrez, Magaly; Caijo, Fréderic; Crévisy, Christophe; Solodenko, Wladimir; Toupet, Loic; Frankfurter, René; Vogt, Carla; Kirschning, Andreas; Mauduit, Marc

    2012-12-14

    In this study, a new pyridinium-tagged Ru complex was designed and anchored onto sulfonated silica, thereby forming a robust and highly active supported olefin-metathesis pre-catalyst for applications under batch and continuous-flow conditions. The involvement of an oxazine-benzylidene ligand allowed the reactivity of the formed Ru pre-catalyst to be efficiently controlled through both steric and electronic activation. The oxazine scaffold facilitated the introduction of the pyridinium tag, thereby affording the corresponding cationic pre-catalyst in good yield. Excellent activities in ring-closing (RCM), cross (CM), and enyne metathesis were observed with only 0.5 mol % loading of the pre-catalyst. When this powerful pre-catalyst was immobilized onto a silica-based cationic-exchange resin, a versatile catalytically active material for batch reactions was generated that also served as fixed-bed material for flow reactors. This system could be reused at 1 mol % loading to afford metathesis products in high purity with very low ruthenium contamination under batch conditions (below 5 ppm). Scavenging procedures for both batch and flow processes were conducted, which led to a lowering of the ruthenium content to as little as one tenth of the original values. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Simple process-led algorithms for simulating habitats (SPLASH v.1.0): robust indices of radiation, evapotranspiration and plant-available moisture

    NASA Astrophysics Data System (ADS)

    Davis, Tyler W.; Prentice, I. Colin; Stocker, Benjamin D.; Thomas, Rebecca T.; Whitley, Rhys J.; Wang, Han; Evans, Bradley J.; Gallego-Sala, Angela V.; Sykes, Martin T.; Cramer, Wolfgang

    2017-02-01

    Bioclimatic indices for use in studies of ecosystem function, species distribution, and vegetation dynamics under changing climate scenarios depend on estimates of surface fluxes and other quantities, such as radiation, evapotranspiration and soil moisture, for which direct observations are sparse. These quantities can be derived indirectly from meteorological variables, such as near-surface air temperature, precipitation and cloudiness. Here we present a consolidated set of simple process-led algorithms for simulating habitats (SPLASH) allowing robust approximations of key quantities at ecologically relevant timescales. We specify equations, derivations, simplifications, and assumptions for the estimation of daily and monthly quantities of top-of-the-atmosphere solar radiation, net surface radiation, photosynthetic photon flux density, evapotranspiration (potential, equilibrium, and actual), condensation, soil moisture, and runoff, based on analysis of their relationship to fundamental climatic drivers. The climatic drivers include a minimum of three meteorological inputs: precipitation, air temperature, and fraction of bright sunshine hours. Indices, such as the moisture index, the climatic water deficit, and the Priestley-Taylor coefficient, are also defined. The SPLASH code is transcribed in C++, FORTRAN, Python, and R. A total of 1 year of results are presented at the local and global scales to exemplify the spatiotemporal patterns of daily and monthly model outputs along with comparisons to other model results.
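
    To give a flavour of such process-led indices, the sketch below computes a Priestley-Taylor potential evapotranspiration and the annual moisture index MI = P/PET from textbook constants. This is not the published SPLASH code (which exists in C++, FORTRAN, Python and R); the constants are typical textbook values rather than SPLASH's own, and the forcing is invented.

```python
import math

ALPHA_PT = 1.26        # Priestley-Taylor coefficient (dimensionless)
GAMMA = 0.067          # psychrometric constant (kPa/K)
LAMBDA = 2.45e6        # latent heat of vaporisation (J/kg)

def slope_sat_vp(t_c):
    """Slope of the saturation vapour-pressure curve (kPa/K) at t_c (deg C)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def pet_priestley_taylor(rn, t_c):
    """Daily PET (mm/day) from net radiation rn (J/m2/day) and air temp."""
    s = slope_sat_vp(t_c)
    return ALPHA_PT * (s / (s + GAMMA)) * rn / LAMBDA  # kg/m2/day ~= mm/day

annual_p = 800.0                       # annual precipitation (mm), invented
# constant daily forcing for simplicity: 12 MJ/m2/day net radiation, 15 degC
annual_pet = 365 * pet_priestley_taylor(12e6, 15.0)
print("moisture index MI = P/PET =", round(annual_p / annual_pet, 2))
```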

  5. Hétérochronies dans l'évolution des hominidés. Le développement dentaire des australopithécines «robustes» / Heterochronic process in hominid evolution. The dental development in 'robust' australopithecines.

    NASA Astrophysics Data System (ADS)

    Ramirez Rozzi, Fernando V.

    2000-10-01

    Heterochrony is defined as an evolutionary modification in time and in the relative rate of development [6]. Growth (size), development (shape), and age (adult) are the three fundamental factors of ontogeny and have to be known to carry out a study of heterochrony. These three factors have been analysed in 24 Plio-Pleistocene hominid molars from Omo, Ethiopia, attributed to A. afarensis and robust australopithecines (A. aethiopicus and A. aff. aethiopicus). Molars were grouped into three chronological periods. The analysis suggests that morphological modifications through time are due to heterochronic processes: a neoteny (A. afarensis - robust australopithecine clade) and a time hypermorphosis (A. aethiopicus - A. aff. aethiopicus).

  6. Computational methods using genome-wide association studies to predict radiotherapy complications and to identify correlative molecular processes

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.

    2017-02-01

    The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints.
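
    The pre-conditioning idea can be caricatured in a few lines: replace the noisy binary endpoint with a continuous surrogate from a regularised linear fit, then train a random forest on that surrogate. The sketch below follows that outline only; the published PRFR algorithm differs in detail, and all data and settings here are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(368, 500)).astype(float)   # SNPs coded 0/1/2
# synthetic binary endpoint driven by the first 10 SNPs plus noise
y = (X[:, :10].sum(axis=1) + rng.normal(0, 2, 368) > 10).astype(int)

# step 1: regularised linear fit yields a continuous pre-conditioned outcome
pre = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X, y)
y_pre = pre.predict_proba(X)[:, 1]

# step 2: random forest trained on the surrogate outcome
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_pre)
risk = forest.predict(X)                  # polygenic risk scores for ranking
```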

  7. Computational methods using genome-wide association studies to predict radiotherapy complications and to identify correlative molecular processes

    PubMed Central

    Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.

    2017-01-01

    The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints. PMID:28233873

  8. SU-E-T-452: Identifying Inefficiencies in Radiation Oncology Workflow and Prioritizing Solutions for Process Improvement and Patient Safety

    SciTech Connect

    Bennion, N; Driewer, J; Denniston, K; Zhen, W; Enke, C; Jacobs, K; Poole, M; McMahon, R; Wilson, K; Yager, A

    2015-06-15

    Purpose: Successful radiation therapy requires multi-step processes susceptible to unnecessary delays that can negatively impact clinic workflow, patient satisfaction, and safety. This project applied process improvement tools to assess workflow bottlenecks and identify solutions to barriers for effective implementation. Methods: We utilized the DMAIC (define, measure, analyze, improve, control) methodology, limiting our scope to the treatment planning process. From May through December of 2014, times and dates of each step from simulation to treatment were recorded for 507 cases. A value-stream map created from this dataset directed our selection of outcome measures (Y metrics). Critical goals (X metrics) that would accomplish the Y metrics were identified. Barriers to actions were binned into control-impact matrices, in order to stratify them into four groups: in/out of control and high/low impact. Solutions to each barrier were then categorized into benefit-effort matrices to identify those of high benefit and low effort. Results: For 507 cases, the mean time from simulation to treatment was 235 total hours. The mean process and wait times were 60 and 132 hours, respectively. The Y metric was to increase the ratio of all non-emergent plans completed the business day prior to treatment from 47% to 75%. Project X metrics included increasing the number of IMRT QAs completed at least 24 hours prior to treatment from 19% to 80% and the number of non-IMRT plans approved at least 24 hours prior to treatment from 33% to 80%. Intervals from simulation to target contour and from initial plan completion to plan approval were identified as periods that could benefit from intervention. Barriers to actions were binned into control-impact matrices and solutions by benefit-effort matrices. Conclusion: The DMAIC method can be successfully applied in radiation therapy clinics to identify inefficiencies and prioritize solutions for the highest impact.

  9. Robust verification analysis

    SciTech Connect

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-15

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and the order of convergence. Our method produces high-quality results for the well-behaved cases, relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
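
    A much-simplified sketch of the flavour of this approach: estimate the observed order of convergence from every consecutive triplet of grid solutions, clip each estimate to expert-judgment bounds, and summarise with the median rather than the mean. The real method solves constrained optimisation problems; the refinement ratio, bounds and grid sequence below are invented.

```python
import numpy as np

def robust_order(f, r=2.0, p_bounds=(0.5, 4.0)):
    """Median observed convergence order from grid solutions f (coarsest first),
    refined by factor r, with expert-judgment bounds on the order p."""
    ests = []
    for a, b, c in zip(f, f[1:], f[2:]):
        p = np.log(abs(a - b) / abs(b - c)) / np.log(r)
        ests.append(np.clip(p, *p_bounds))   # enforce expert bounds on p
    return np.median(ests), ests

f = [1.120, 1.031, 1.008, 1.002]             # illustrative grid sequence
print(robust_order(f))                       # median order ~ 1.95
```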

  10. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and the order of convergence. Our method produces high-quality results for the well-behaved cases, relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.

  11. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
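
    A toy illustration of why robustness can be elastic to interaction levels (the paper itself derives exact bounds via mathematical programming): the retained fraction of source-sink max flow after deleting the same arc differs sharply between two hypothetical "epochs" of capacities. The graph and capacities below are assumptions.

```python
import networkx as nx

def flow_after_deletion(capacities, arc):
    """Fraction of s-t max flow retained after deleting one arc."""
    G = nx.DiGraph()
    for (u, v), c in capacities.items():
        G.add_edge(u, v, capacity=c)
    base, _ = nx.maximum_flow(G, "s", "t")
    G.remove_edge(*arc)
    damaged, _ = nx.maximum_flow(G, "s", "t")
    return damaged / base

# two "epochs" of nodal interaction encoded as different capacities
epoch1 = {("s", "a"): 5, ("s", "b"): 5, ("a", "t"): 5, ("b", "t"): 5}
epoch2 = {("s", "a"): 9, ("s", "b"): 1, ("a", "t"): 9, ("b", "t"): 1}
for cap in (epoch1, epoch2):
    print(flow_after_deletion(cap, ("a", "t")))   # 0.5 vs 0.1: same arc, very
                                                  # different robustness
```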

  12. An evaluation of a natural language processing tool for identifying and encoding allergy information in emergency department clinical notes.

    PubMed

    Goss, Foster R; Plasek, Joseph M; Lau, Jason J; Seger, Diane L; Chang, Frank Y; Zhou, Li

    2014-01-01

    Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no known allergies, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility of using NLP to extract and encode allergy information from clinical notes.
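
    For reference, the metric quoted above is the standard F-measure, the harmonic mean of precision and recall against a gold-standard annotation set. The counts in the example below are invented, not the study's data.

```python
def f_measure(tp, fp, fn):
    """F1 score from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical allergen-name counts: 870 hits, 60 spurious, 185 missed
print(round(f_measure(870, 60, 185), 3))   # ~0.877
```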

  13. Reasoning about anomalies: a study of the analytical process of detecting and identifying anomalous behavior in maritime traffic data

    NASA Astrophysics Data System (ADS)

    Riveiro, Maria; Falkman, Göran; Ziemke, Tom; Kronhamn, Thomas

    2009-05-01

    The goal of visual analytical tools is to support the analytical reasoning process, maximizing human perceptual, understanding and reasoning capabilities in complex and dynamic situations. Visual analytics software must be built upon an understanding of the reasoning process, since it must provide appropriate interactions that allow a true discourse with the information. In order to deepen our understanding of the human analytical process and guide developers in the creation of more efficient anomaly detection systems, this paper investigates the human analytical process of detecting and identifying anomalous behavior in maritime traffic data. The main focus of this work is to capture the entire analysis process that an analyst goes through, from the raw data to the detection and identification of anomalous behavior. Three different sources are used in this study: a literature survey of the science of analytical reasoning, requirements specified by experts from organizations with an interest in port security, and user field studies conducted in different marine surveillance control centers. Furthermore, this study elaborates on how to support the human analytical process using data mining, visualization and interaction methods. The contribution of this paper is twofold: (1) within visual analytics, it contributes to the science of analytical reasoning with a practical understanding of users' tasks, in order to develop a taxonomy of interactions that support the analytical reasoning process; and (2) within anomaly detection, it facilitates the design of future anomaly detection systems when fully automatic approaches are not viable and human participation is needed.

  14. Identifying determinants of medication adherence following myocardial infarction using the Theoretical Domains Framework and the Health Action Process Approach.

    PubMed

    Presseau, Justin; Schwalm, J D; Grimshaw, Jeremy M; Witteman, Holly O; Natarajan, Madhu K; Linklater, Stefanie; Sullivan, Katrina; Ivers, Noah M

    2016-12-20

    Despite evidence-based recommendations, adherence to secondary prevention medications post-myocardial infarction (MI) remains low. Taking medication requires behaviour change, and using behavioural theories to identify what factors determine adherence could help to develop novel adherence interventions. Compare the utility of different behaviour theory-based approaches for identifying modifiable determinants of medication adherence post-MI that could be targeted by interventions. Two studies were conducted with patients 0-2, 3-12, 13-24 or 25-36 weeks post-MI. Study 1: 24 patients were interviewed about barriers and facilitators to medication adherence. Interviews were conducted and coded using the Theoretical Domains Framework. Study 2: 201 patients answered a telephone questionnaire assessing Health Action Process Approach constructs to predict intention and medication adherence (MMAS-8). Study 1: the domains identified were Beliefs about Consequences, Memory/Attention/Decision Processes, Behavioural Regulation, Social Influences and Social Identity. Study 2: 64, 59, 42 and 58% reported high adherence at 0-2, 3-12, 13-24 and 25-36 weeks, respectively. Social Support and Action Planning predicted adherence at all time points, though the relationship between Action Planning and adherence decreased over time. Using two behaviour theory-based approaches provided complementary findings and identified modifiable factors that could be targeted to help translate intention into action and improve medication adherence post-MI.

  15. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers

    NASA Astrophysics Data System (ADS)

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m3/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans.

  16. Assessment Approach for Identifying Compatibility of Restoration Projects with Geomorphic and Flooding Processes in Gravel Bed Rivers.

    PubMed

    DeVries, Paul; Aldrich, Robert

    2015-08-01

    A critical requirement for a successful river restoration project in a dynamic gravel bed river is that it be compatible with natural hydraulic and sediment transport processes operating at the reach scale. The potential for failure is greater at locations where the influence of natural processes is inconsistent with intended project function and performance. We present an approach using practical GIS, hydrologic, hydraulic, and sediment transport analyses to identify locations where specific restoration project types have the greatest likelihood of working as intended because their function and design are matched with flooding and morphologic processes. The key premise is to identify whether a specific river analysis segment (length ~1-10 bankfull widths) within a longer reach is geomorphically active or inactive in the context of vertical and lateral stabilities, and hydrologically active for floodplain connectivity. Analyses involve empirical channel geometry relations, aerial photographic time series, LiDAR data, HEC-RAS hydraulic modeling, and a time-integrated sediment transport budget to evaluate trapping efficiency within each segment. The analysis segments are defined by HEC-RAS model cross sections. The results have been used effectively to identify feasible projects in a variety of alluvial gravel bed river reaches with lengths between 11 and 80 km and 2-year flood magnitudes between ~350 and 1330 m3/s. Projects constructed based on the results have all performed as planned. In addition, the results provide key criteria for formulating erosion and flood management plans.

  17. Stable Isotope Composition of Molecular Oxygen in Soil Gas and Groundwater: A Potentially Robust Tracer for Diffusion and Oxygen Consumption Processes

    NASA Astrophysics Data System (ADS)

    Aggarwal, Pradeep K.; Dillon, M. A.

    1998-02-01

    We have measured the concentration and isotopic composition of molecular oxygen in soil gas and groundwater. At a site near Lincoln, Nebraska, USA, soil gas oxygen concentrations ranged from 13.8 to 17.6% at depths of 3-4 m, and the δ18O values ranged mostly from 24.0 to 27.2‰ (SMOW). The concentration of dissolved oxygen in a perched aquifer in the Texas Panhandle (depth to water ~76 m) was about 5 mg/L, and the δ18O values were 21.2-22.9‰. The δ18O values of soil gas oxygen in our study are higher, and those of dissolved oxygen lower, than the δ18O of atmospheric oxygen (23.5‰). A model for the oxygen concentration and isotopic composition in soil gas was developed using molecular diffusion theory. The higher δ18O values in soil gas at the Nebraska site can be explained by the effects of diffusion and soil respiration (plant root and bacterial) on the isotopic composition of molecular oxygen. The lower δ18O of dissolved oxygen at the Texas site indicates that oxygen consumption below the root zone in the relatively thick unsaturated zone here may have occurred with a different fractionation factor (either due to inorganic consumption or due to low respiration rates) than that observed for the dominant pathways of plant root and bacterial respiration. It is concluded that the concentration and isotopic composition of soil gas and dissolved oxygen should provide a robust tool for studying subsurface gaseous diffusion and oxygen consumption processes.
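
    A minimal sketch of the delta notation used throughout this record, plus a simple two-end-member mixing estimate. Linear mixing of δ values assumes equal O2 concentrations in both end-members; the sample ratio and end-member values below are illustrative assumptions.

```python
R_SMOW = 2.0052e-3                       # 18O/16O ratio of the SMOW standard

def delta18O(r_sample, r_std=R_SMOW):
    """Per-mil deviation of a sample 18O/16O ratio from the standard."""
    return (r_sample / r_std - 1.0) * 1000.0

def mixing_delta(f, d_a, d_b):
    """delta18O of a mixture containing fraction f of end-member A."""
    return f * d_a + (1 - f) * d_b

print(delta18O(2.0571e-3))               # ~25.9 permil, a soil-gas-like value
print(mixing_delta(0.4, 23.5, 27.0))     # atmospheric vs respiration-affected
```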

  18. Formosa Plastics Corporation: Plant-Wide Assessment of Texas Plant Identifies Opportunities for Improving Process Efficiency and Reducing Energy Costs

    SciTech Connect

    2005-01-01

    At Formosa Plastics Corporation's plant in Point Comfort, Texas, a plant-wide assessment team analyzed process energy requirements, reviewed new technologies for applicability, and found ways to improve the plant's energy efficiency. The assessment team identified the energy requirements of each process and compared actual energy consumption with theoretical process requirements. The team estimated that total annual energy savings would be about 115,000 MBtu for natural gas and nearly 14 million kWh for electricity if the plant makes several improvements, which include upgrading the gas compressor impeller, improving the vent blower system, and recovering steam condensate for reuse. Total annual cost savings could be $1.5 million. The U.S. Department of Energy's Industrial Technologies Program cosponsored this assessment.

  19. Strategy for identifying dendritic cell-processed CD4+ T cell epitopes from the HIV gag p24 protein.

    PubMed

    Bozzacco, Leonia; Yu, Haiqiang; Dengjel, Jörn; Trumpfheller, Christine; Zebroski, Henry A; Zhang, Nawei; Küttner, Victoria; Ueberheide, Beatrix M; Deng, Haiteng; Chait, Brian T; Steinman, Ralph M; Mojsov, Svetlana; Fenyö, David

    2012-01-01

    Mass Spectrometry (MS) is becoming a preferred method to identify class I and class II peptides presented on major histocompatibility complexes (MHC) on antigen presenting cells (APC). We describe a combined computational and MS approach to identify exogenous MHC II peptides presented on mouse spleen dendritic cells (DCs). This approach enables rapid, effective screening of a large number of possible peptides by a computer-assisted strategy that utilizes the extraordinary human ability for pattern recognition. To test the efficacy of the approach, a mixture of epitope peptide mimics (mimetopes) from the HIV gag p24 sequence was added exogenously to Fms-like tyrosine kinase 3 ligand (Flt3L)-mobilized splenic DCs. We identified the exogenously added peptide, VDRFYKTLRAEQASQ, and a second peptide, DRFYKLTRAEQASQ, derived from the original exogenously added 15-mer peptide. Furthermore, we demonstrated that our strategy works efficiently with HIV gag p24 protein when delivered, as a vaccine protein, to Flt3L-expanded mouse splenic DCs in vitro through the DEC-205 receptor. We found that the same MHC II-bound HIV gag p24 peptides, VDRFYKTLRAEQASQ and DRFYKLTRAEQASQ, were naturally processed from anti-DEC-205 HIV gag p24 protein and presented on DCs. The two identified MHC II-bound HIV gag p24 peptides elicited CD4+ T-cell-mediated responses in vitro. Their presentation by DCs to antigen-specific T cells was inhibited by chloroquine (CQ), indicating that optimal presentation of these exogenously added peptides required uptake and vesicular trafficking in mature DCs. These results support the application of our strategy to identify and characterize peptide epitopes derived from vaccine proteins processed by DCs, and thus it has the potential to greatly accelerate DC-based vaccine development.

  20. An algorithm for processing vital sign monitoring data to remotely identify operating room occupancy in real-time.

    PubMed

    Xiao, Yan; Hu, Peter; Hu, Hao; Ho, Danny; Dexter, Franklin; Mackenzie, Colin F; Seagull, F Jacob; Dutton, Richard P

    2005-09-01

    We developed an algorithm for processing networked vital signs (VS) to remotely identify in real-time when a patient enters and leaves a given operating room (OR). The algorithm addresses two types of mismatches between OR occupancy and VS: a patient is in the OR but no VS are available (e.g., patient is being hooked up), and no patient is in the OR but artifactual VS are present (e.g., because of staff handling of sensors). The algorithm was developed with data from 7 consecutive days (122 cases) in a 6 OR trauma center. The algorithm was then tested on data from another 7 consecutive days (98 cases), against patient in- and out-times captured by OR surveillance videos. When pulse oximetry, electrocardiogram, and temperature readings were used, OR occupancy was correctly identified 96% (95% confidence interval [CI] 95%-97%) and OR vacancy >99% of the time. Identified patient in- and out-times were accurate within 4.9 min (CI 4.2-5.7) and 2.8 min (CI 2.3-3.5), respectively, and were not different in accuracy from times reported by staff on OR records. The algorithm's usefulness was demonstrated partly by its continued operational use. We conclude that VS can be processed to accurately report OR occupancy in real-time.
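
    The two mismatch cases suggest a debounced state machine over the vital-sign streams. The sketch below illustrates that logic under stated assumptions: the validity ranges, the two-of-three channel rule, and the enter/exit windows are hypothetical choices, not the published algorithm's parameters.

```python
# Sketch of an occupancy classifier over networked vital-sign streams.
# The paper combines pulse oximetry, ECG, and temperature; the channel
# validity ranges and debounce windows below are assumptions.
from dataclasses import dataclass

VALID_RANGES = {              # crude physiologic plausibility checks
    "spo2": (50.0, 100.0),    # %
    "ecg_hr": (20.0, 250.0),  # beats/min
    "temp": (30.0, 42.0),     # deg C
}

def valid_channels(sample: dict) -> int:
    """Count channels whose readings look physiologic (not artifact)."""
    n = 0
    for ch, (lo, hi) in VALID_RANGES.items():
        v = sample.get(ch)
        if v is not None and lo <= v <= hi:
            n += 1
    return n

@dataclass
class OccupancyDetector:
    enter_after: int = 12   # consecutive samples (e.g., 12 x 10 s = 2 min)
    exit_after: int = 18    # longer window rides out staff handling of sensors
    occupied: bool = False
    _streak: int = 0

    def update(self, sample: dict) -> bool:
        present = valid_channels(sample) >= 2   # require 2+ concordant channels
        if present != self.occupied:
            self._streak += 1
            needed = self.enter_after if not self.occupied else self.exit_after
            if self._streak >= needed:          # state flips only after debounce
                self.occupied = present
                self._streak = 0
        else:
            self._streak = 0
        return self.occupied

# toy stream: empty room (artifactual temp only), then a monitored patient
det = OccupancyDetector()
stream = [{"spo2": None, "ecg_hr": None, "temp": 20.0}] * 5 \
       + [{"spo2": 97.0, "ecg_hr": 80.0, "temp": 36.4}] * 15
states = [det.update(s) for s in stream]
print(states[-1])   # True: occupancy declared after the debounce window
```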

  1. Robust indexing for automatic data collection

    PubMed Central

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2004-01-01

    Improved methods for indexing diffraction patterns from macromolecular crystals are presented. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation. PMID:20090869

  2. Robust indexing for automatic data collection

    SciTech Connect

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  3. Robust clustering by pruning outliers.

    PubMed

    Zhang, Jiang-She; Leung, Yiu-Wing

    2003-01-01

    In many applications of C-means clustering, the given data set often contains noisy points. These noisy points will affect the resulting clusters, especially if they are far away from the data points. In this paper, we develop a pruning approach for robust C-means clustering. This approach identifies and prunes the outliers based on the sizes and shapes of the clusters so that the resulting clusters are least affected by the outliers. The pruning approach is general, and it can improve the robustness of many existing C-means clustering methods. In particular, we apply the pruning approach to improve the robustness of hard C-means clustering, fuzzy C-means clustering, and deterministic-annealing C-means clustering. As a result, we obtain three clustering algorithms that are the robust versions of the existing ones. In addition, we integrate the pruning approach with the fuzzy approach and the possibilistic approach to design two new algorithms for robust C-means clustering. The numerical results demonstrate that the pruning approach can achieve good robustness.
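
    A minimal sketch of the pruning idea follows: alternate hard C-means updates with removal of points that sit unusually far from their nearest center. The z-score pruning rule here is a stand-in assumption; the paper's criterion instead uses the sizes and shapes of the clusters.

```python
# Minimal sketch of robust hard C-means with outlier pruning: alternate
# standard k-means updates with removal of points whose distance to the
# nearest center is extreme. The z-score rule is a simplification of the
# paper's size/shape-based pruning.
import numpy as np

def robust_cmeans(X, c, n_iter=20, z_cut=2.5, rng=None):
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), c, replace=False)]
    keep = np.ones(len(X), dtype=bool)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        nearest = d.min(axis=1)
        # prune: flag points unusually far from their nearest center
        mu, sd = nearest[keep].mean(), nearest[keep].std() + 1e-12
        keep = nearest <= mu + z_cut * sd
        for j in range(c):                      # update centers from kept members
            members = X[keep & (labels == j)]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels, ~keep               # ~keep marks pruned outliers

# toy usage: two Gaussian blobs plus a few far-away noise points
rng = np.random.default_rng(0)
blob = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
noise = rng.uniform(20, 25, (5, 2))
centers, labels, outliers = robust_cmeans(np.vstack([blob, noise]), c=2)
print(centers.round(2), int(outliers.sum()))
```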

  4. Genome-Wide Functional Profiling Identifies Genes and Processes Important for Zinc-Limited Growth of Saccharomyces cerevisiae

    PubMed Central

    Loguinov, Alex V.; Zimmerman, Ginelle R.; Vulpe, Chris D.; Eide, David J.

    2012-01-01

    Zinc is an essential nutrient because it is a required cofactor for many enzymes and transcription factors. To discover genes and processes in yeast that are required for growth when zinc is limiting, we used genome-wide functional profiling. Mixed pools of ∼4,600 deletion mutants were inoculated into zinc-replete and zinc-limiting media. These cells were grown for several generations, and the prevalence of each mutant in the pool was then determined by microarray analysis. As a result, we identified more than 400 different genes required for optimal growth under zinc-limiting conditions. Among these were several targets of the Zap1 zinc-responsive transcription factor. Their importance is consistent with their up-regulation by Zap1 in low zinc. We also identified genes that implicate Zap1-independent processes as important. These include endoplasmic reticulum function, oxidative stress resistance, vesicular trafficking, peroxisome biogenesis, and chromatin modification. Our studies also indicated the critical role of macroautophagy in low zinc growth. Finally, as a result of our analysis, we discovered a previously unknown role for the ICE2 gene in maintaining ER zinc homeostasis. Thus, functional profiling has provided many new insights into genes and processes that are needed for cells to thrive under the stress of zinc deficiency. PMID:22685415

  5. Using Analytic Hierarchy Process to Identify the Nurses with High Stress-Coping Capability: Model and Application

    PubMed Central

    F. C. PAN, Frank

    2014-01-01

    Background Nurses have long been relied upon as the major labor force in hospitals. Their jobs are complicated and highly labor-intensive, so pressures from multiple sources are inevitable. Success in identifying stresses and coping with them is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capability. Methods A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as experts in the AHP to produce weights for each criterion. Six nurses from two regional hospitals were then selected to test the model. Results Four factors were identified as the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of the model. Conclusion The study suggests including these key determinants in human-resource management practice and restructuring the hospital's organization, creating an employee-support system as well as a family-friendly working climate. The research provides evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team. PMID:25988086

  6. Using analytic hierarchy process to identify the nurses with high stress-coping capability: model and application.

    PubMed

    F C Pan, Frank

    2014-03-01

    Nurses have long been relied upon as the major labor force in hospitals. Their jobs are complicated and highly labor-intensive, so pressures from multiple sources are inevitable. Success in identifying stresses and coping with them is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capability. A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as experts in the AHP to produce weights for each criterion. Six nurses from two regional hospitals were then selected to test the model. Four factors were identified as the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of the model. The study suggests including these key determinants in human-resource management practice and restructuring the hospital's organization, creating an employee-support system as well as a family-friendly working climate. The research provides evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team.
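
    The core AHP computation both versions of this study rely on, deriving priority weights from a pairwise comparison matrix and checking judgment consistency, can be sketched compactly. The 4×4 judgment matrix below is hypothetical, standing in for the experts' actual comparisons of the four factors.

```python
# Sketch of the core AHP step: derive priority weights from a pairwise
# comparison matrix via its principal eigenvector and check consistency.
# The 4x4 matrix below is hypothetical, not the study's judgments.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random index

def ahp_weights(A: np.ndarray):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)          # principal eigenpair
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                      # normalized priority weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)  # consistency index
    cr = ci / RI[n]                    # consistency ratio; CR < 0.1 is acceptable
    return w, cr

# hypothetical reciprocal judgment matrix over four factors
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)
w, cr = ahp_weights(A)
print("weights:", w.round(3), "CR:", round(cr, 3))
```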

  7. Development of a Natural Language Processing System to Identify Timing and Status of Colonoscopy Testing in Electronic Medical Records

    PubMed Central

    Denny, Joshua C.; Peterson, Josh F.; Choma, Neesha N.; Xu, Hua; Miller, Randolph A.; Bastarache, Lisa; Peterson, Neeraja B.

    2009-01-01

    Colorectal cancer (CRC) screening rates are low despite proven benefits. We developed natural language processing (NLP) algorithms to identify temporal expressions and status indicators, such as “patient refused” or “test scheduled.” The authors incorporated the algorithms into the KnowledgeMap Concept Identifier system in order to detect references to completed colonoscopies within electronic text. The modified NLP system was evaluated using 200 randomly selected electronic medical records (EMRs) from a primary care population aged ≥50 years. The system detected completed colonoscopies with recall and precision of 0.93 and 0.92. The system was superior to a query of colonoscopy billing codes to determine screening status. PMID:20351837
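
    A rule-based sketch in the spirit of the description is shown below: pair each colonoscopy mention with nearby status indicators and temporal expressions. The patterns are simplified assumptions, not the actual KnowledgeMap Concept Identifier rules.

```python
# Illustrative rule-based sketch: pair colonoscopy mentions with nearby
# status indicators and temporal expressions. Patterns are simplified
# assumptions, not the published system's rules.
import re

STATUS = {
    "completed": r"\b(was (performed|done|completed)|underwent|s/p)\b",
    "scheduled": r"\b(scheduled|planned|will undergo)\b",
    "refused":   r"\b(refused|declined)\b",
}
TEMPORAL = re.compile(r"\b(in |on )?(19|20)\d{2}\b|\b\d+ (years?|months?) ago\b")

def colonoscopy_status(note: str):
    """Return (status, temporal_expression) for each colonoscopy mention."""
    results = []
    for m in re.finditer(r"colonoscopy", note, flags=re.IGNORECASE):
        window = note[max(0, m.start() - 60): m.end() + 60]  # local context
        status = next((s for s, pat in STATUS.items()
                       if re.search(pat, window, re.IGNORECASE)), "mention-only")
        t = TEMPORAL.search(window)
        results.append((status, t.group(0) if t else None))
    return results

print(colonoscopy_status("Patient underwent colonoscopy in 2006; normal."))
print(colonoscopy_status("Screening colonoscopy was scheduled for next month."))
```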

  8. Application of the Mathar Method to Identify Internal Stress Variation in Steel as a Welding Process Result

    NASA Astrophysics Data System (ADS)

    Kowalski, Dariusz

    2017-06-01

    The paper deals with a method to identify internal stresses in two-dimensional steel members. Steel members were investigated in the delivery stage and after assembly by electric-arc welding. Two methods to identify the stress variation were applied in the member assessment. The first is a non-destructive measurement method employing a local external magnetic field and detecting the induced voltage, including Barkhausen noise; analysis of the latter allows assessment of internal stresses in a surface layer of the material. The second method, essential in the paper, is the semi-trepanation Mathar method: tensometric measurement of strain variation in the course of controlled void-making in the material. The variation of internal stress distribution in the material informed the choice of welding technology used to join the members. The assembly process altered the actual stresses and introduced new post-welding stresses in response to the excessive stress variation.

  9. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering, on average, 20 residential product plans in total. As of January 2012, there are now 115 energy providers in Texas offering over 300 residential product plans for customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meets their needs. To gather customer data, a survey tool has been developed for customers to complete the pairwise comparison process. Results are compared for the Incomplete AHP and AHP method to determine if the Incomplete AHP method is just as accurate, but more efficient, than the traditional AHP method.

  10. Volcanic Centers in the East Africa Rift: Volcanic Processes with Seismic Stresses to Identify Potential Hydrothermal Vents

    NASA Astrophysics Data System (ADS)

    Patlan, E.; Wamalwa, A. M.; Kaip, G.; Velasco, A. A.

    2015-12-01

    The Geothermal Development Company (GDC) in Kenya, which lies within the East African Rift System (EARS), actively seeks to produce geothermal energy. The EARS, an active continental rift zone, appears to be a developing tectonic plate boundary and thus has a number of active as well as dormant volcanoes throughout its extent. These volcanic centers can be used as potential sources for geothermal energy. The University of Texas at El Paso (UTEP) and the GDC deployed seismic sensors to monitor several volcanic centers: Menengai, Silali, Paka, and Korosi. We identify microseismic local events and tilt-like events using automatic detection algorithms and manual review to find potential local earthquakes within our seismic network. We then apply the double-difference location method to events of local magnitude less than two to image the boundary of the magma chamber and the conduit feeding the volcanoes. In the process of locating local seismicity, we also identify long-period, explosion, and tremor signals that we interpret as magma passing through conduits of the magma chamber and/or fluid being transported as a function of magma movement or hydrothermal activity. We use waveform inversion and S-wave shear wave splitting to approximate the orientation of the local stresses from the vent or fissure-like conduit of the volcano. The microseismic and long-period events will help us interpret the activity of the volcanoes. Our goal is to investigate basement structures beneath the volcanoes and identify the extent of magmatic modifications of the crust. Overall, these seismic techniques will help us understand magma movement and volcanic processes in the region.

  11. The Daily Readiness Huddle: a process to rapidly identify issues and foster improvement through problem-solving accountability.

    PubMed

    Donnelly, Lane F; Cherian, Shirley S; Chua, Kimberly B; Thankachan, Sam; Millecker, Laura A; Koroll, Alex G; Bisset, George S

    2017-01-01

    Because of the increasing complexities of providing imaging for pediatric health care services, a more reliable process to manage the daily delivery of care is necessary. We describe our Daily Readiness Huddle and the effects of the process on problem identification and improvement. Our Daily Readiness Huddle has four elements: metrics review, clinical volume review, daily readiness assessment, and problem accountability. It is attended by radiologists, directors, managers, front-line staff with concerns, representatives from support services (information technology [IT] and biomedical engineering [biomed]), and representatives who join the meeting in a virtual format from off-site locations. Data are visually displayed on erasable whiteboards. The daily readiness assessment uses cues to determine whether anyone has concerns or outlier data in regard to S-MESA (Safety, Methods, Equipment, Supplies or Associates). Through this assessment, problems are identified and categorized as quick hits (will be resolved in 24-48 h, not requiring project management) and complex issues. Complex issues are assigned an owner, a quality coach, and a report-back date. Additionally, projects are defined as improvements that are often strategic, are anticipated to take more than 60 days, and do not necessarily arise out of issues identified during the Daily Readiness Huddle. We tracked and calculated the mean, median and range of days to resolution and completion for complex issues and for projects during the first full year of implementing this process. During the first 12 months, 91 complex issues were identified and resolved, 11 projects were in progress and 33 completed, with 23 other projects active or in planning. Time to resolution of complex issues (in days) was mean 37.5, median 34.0, and range 1-105. For projects, time to completion (in days) was mean 86.0, median 84.0, and range 5-280. The Daily Readiness Huddle process has given us a framework to rapidly identify issues and foster improvement through problem-solving accountability.

  12. Elevated intrabolus pressure identifies obstructive processes when integrated relaxation pressure is normal on esophageal high-resolution manometry.

    PubMed

    Quader, Farhan; Reddy, Chanakyaram; Patel, Amit; Gyawali, C Prakash

    2017-07-01

    Elevated integrated relaxation pressure (IRP) on esophageal high-resolution manometry (HRM) identifies obstructive processes at the esophagogastric junction (EGJ). Our aim was to determine whether intrabolus pressure (IBP) can identify structural EGJ processes when IRP is normal. In this observational cohort study, adult patients with dysphagia and undergoing HRM were evaluated for endoscopic evidence of structural EGJ processes (strictures, rings, hiatus hernia) in the setting of normal IRP. HRM metrics [IRP, distal contractile integral (DCI), distal latency (DL), IBP, and EGJ contractile integral (EGJ-CI)] were compared among 74 patients with structural EGJ findings (62.8 ± 1.6 yr, 67.6% women), 27 patients with normal EGD (52.9 ± 3.2 yr, 70.3% women), and 21 healthy controls (27.6 ± 0.6 yr, 52.4% women). Findings were validated in 85 consecutive symptomatic patients to address clinical utility. In the primary cohort, mean IBP (18.4 ± 0.9 mmHg) was higher with structural EGJ findings compared with dysphagia with normal EGD (13.5 ± 1.1 mmHg, P = 0.002) and healthy controls (10.9 ± 0.9 mmHg, P < 0.001). However, mean IRP, DCI, DL, and EGJ-CI were similar across groups (P > 0.05 for each comparison). During multiple rapid swallows, IBP remained higher in the structural findings group compared with controls (P = 0.02). Similar analysis of the prospective validation cohort confirmed IBP elevation in structural EGJ processes, but correlation with dysphagia could not be demonstrated. We conclude that elevated IBP predicts the presence of structural EGJ processes even when IRP is normal, but correlation with dysphagia is suboptimal. NEW & NOTEWORTHY: Integrated relaxation pressure (IRP) above the upper limit of normal defines esophageal outflow obstruction using high-resolution manometry. In patients with normal IRP, elevated intrabolus pressure (IBP) can be a surrogate marker for a structural restrictive or obstructive process at the esophagogastric junction.

  13. Using Natural Language Processing of Free-Text Radiology Reports to Identify Type 1 Modic Endplate Changes.

    PubMed

    Huhdanpaa, Hannu T; Tan, W Katherine; Rundell, Sean D; Suri, Pradeep; Chokshi, Falgun H; Comstock, Bryan A; Heagerty, Patrick J; James, Kathryn T; Avins, Andrew L; Nedeljkovic, Srdjan S; Nerenz, David R; Kallmes, David F; Luetmer, Patrick H; Sherman, Karen J; Organ, Nancy L; Griffith, Brent; Langlotz, Curtis P; Carrell, David; Hassanpour, Saeed; Jarvik, Jeffrey G

    2017-08-14

    Electronic medical record (EMR) systems provide easy access to radiology reports and offer great potential to support quality improvement efforts and clinical research. Harnessing the full potential of the EMR requires scalable approaches such as natural language processing (NLP) to convert text into variables used for evaluation or analysis. Our goal was to determine the feasibility of using NLP to identify patients with Type 1 Modic endplate changes using clinical reports of magnetic resonance (MR) imaging examinations of the spine. Identifying patients with Type 1 Modic change who may be eligible for clinical trials is important as these findings may be important targets for intervention. Four annotators identified all reports that contained Type 1 Modic change, using N = 458 randomly selected lumbar spine MR reports. We then implemented a rule-based NLP algorithm in Java using regular expressions. The prevalence of Type 1 Modic change in the annotated dataset was 10%. Results were recall (sensitivity) 35/50 = 0.70 (95% confidence interval (C.I.) 0.52-0.82), specificity 404/408 = 0.99 (0.97-1.0), precision (positive predictive value) 35/39 = 0.90 (0.75-0.97), negative predictive value 404/419 = 0.96 (0.94-0.98), and F1-score 0.79 (0.43-1.0). Our evaluation shows the efficacy of rule-based NLP approach for identifying patients with Type 1 Modic change if the emphasis is on identifying only relevant cases with low concern regarding false negatives. As expected, our results show that specificity is higher than recall. This is due to the inherent difficulty of eliciting all possible keywords given the enormous variability of lumbar spine reporting, which decreases recall, while availability of good negation algorithms improves specificity.
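
    The reported metrics follow directly from the confusion-matrix counts given in the abstract (TP = 35, FN = 15, TN = 404, FP = 4), as the short check below shows.

```python
# Reproducing the reported evaluation metrics from the confusion-matrix
# counts given in the abstract (TP=35, FN=15, TN=404, FP=4).
tp, fn, tn, fp = 35, 15, 404, 4

recall = tp / (tp + fn)                    # 35/50  = 0.70 (sensitivity)
specificity = tn / (tn + fp)               # 404/408 = 0.99
precision = tp / (tp + fp)                 # 35/39  = 0.90 (PPV)
npv = tn / (tn + fn)                       # 404/419 = 0.96
f1 = 2 * precision * recall / (precision + recall)   # about 0.79

print(f"recall={recall:.2f} specificity={specificity:.2f} "
      f"precision={precision:.2f} npv={npv:.2f} f1={f1:.2f}")
```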

  14. The Narrative-Emotion Process Coding System 2.0: A multi-methodological approach to identifying and assessing narrative-emotion process markers in psychotherapy.

    PubMed

    Angus, Lynne E; Boritz, Tali; Bryntwick, Emily; Carpenter, Naomi; Macaulay, Christianne; Khattra, Jasmine

    2017-05-01

    Recent studies suggest that it is not simply the expression of emotion or emotional arousal in session that is important, but rather the reflective processing of emergent, adaptive emotions, arising in the context of personal storytelling and/or Emotion-Focused Therapy (EFT) interventions, that is associated with change. To enhance narrative-emotion integration specifically in EFT, Angus and Greenberg originally identified a set of eight clinically derived narrative-emotion integration markers for the implementation of process-guiding therapeutic responses. Further evaluation and testing by the Angus Narrative-Emotion Marker Lab resulted in the identification of 10 empirically validated Narrative-Emotion Process (N-EP) markers that are included in the Narrative-Emotion Process Coding System Version 2.0 (NEPCS 2.0). Based on empirical research findings, individual markers are clustered into Problem (e.g., stuckness in repetitive story patterns, over-controlled or dysregulated emotion, lack of reflectivity), Transition (e.g., reflective, access to adaptive emotions and new emotional plotlines, heightened narrative and emotion integration), and Change (e.g., new story outcomes and self-narrative discovery, co-construction and re-conceptualization) subgroups. To date, research using the NEPCS 2.0 has investigated the proportion and pattern of narrative-emotion markers in Emotion-Focused, Client-Centered, and Cognitive Therapy for Major Depression, Motivational Interviewing plus Cognitive Behavioral Therapy for Generalized Anxiety Disorder, and EFT for Complex Trauma. Results have consistently identified significantly higher proportions of N-EP Transition and Change markers, and productive shifts, in mid- and late-phase sessions for clients who achieved recovery by treatment termination. Recovery is consistently associated with client storytelling that is emotionally engaged, reflective, and evidencing new story outcomes and self-narrative discovery.

  15. Synthesis of robust controllers

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1993-01-01

    At the 1990 American Controls Conference a benchmark problem was issued as a challenge for designing robust compensators. Many compensators were presented in response to the problem. In previous work Stochastic Robustness Analysis (SRA) was used to compare these compensators. In this work SRA metrics are used as guides to synthesize robust compensators, using the benchmark problem as an example.

  16. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  17. Robust (semi) nonnegative graph embedding.

    PubMed

    Zhang, Hanwang; Zha, Zheng-Jun; Yang, Yang; Yan, Shuicheng; Chua, Tat-Seng

    2014-07-01

    Nonnegative matrix factorization (NMF) has received considerable attention in image processing, computer vision, and pattern recognition. An important variant of NMF is nonnegative graph embedding (NGE), which encodes the statistical or geometric information of data in the process of matrix factorization. NGE offers a general framework for unsupervised/supervised settings. However, NGE-like algorithms often suffer from noisy data, unreliable graphs, and noisy labels, which are commonly encountered in real-world applications. To address these issues, in this paper, we first propose a robust nonnegative graph embedding (RNGE) framework, where joint sparsity in both graph embedding and data reconstruction endows robustness against undesirable noise. Next, we present a robust semi-nonnegative graph embedding (RsNGE) framework, which only constrains the coefficient matrix to be nonnegative while placing no constraint on the base matrix. This extends the applicable range of RNGE to data which are not nonnegative and endows the learnt base matrix with more discriminative power. RNGE/RsNGE provides a general formulation such that all the algorithms unified within the graph embedding framework can be easily extended to obtain their robust nonnegative/semi-nonnegative solutions. Further, we develop elegant multiplicative updating solutions that can solve RNGE/RsNGE efficiently and offer a rigorous convergence analysis. We conduct extensive experiments on four real-world data sets and compare the proposed RNGE/RsNGE to other representative NMF variants and data factorization methods. The experimental results demonstrate the robustness and effectiveness of the proposed approaches.
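
    For readers unfamiliar with the multiplicative-update machinery the abstract builds on, the classic Lee-Seung updates for plain NMF are sketched below. This is the baseline scheme only; the RNGE/RsNGE updates add graph-embedding and joint-sparsity terms on top of it.

```python
# Baseline NMF via the classic Lee-Seung multiplicative updates for the
# Frobenius objective ||X - WH||^2. The paper's RNGE/RsNGE updates extend
# this core scheme with graph-embedding and joint-sparsity terms.
import numpy as np

def nmf(X, rank, n_iter=200, eps=1e-10, rng=None):
    rng = np.random.default_rng(rng)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# toy usage on a random nonnegative matrix
X = np.abs(np.random.default_rng(1).random((40, 30)))
W, H = nmf(X, rank=5)
print("reconstruction error:", round(float(np.linalg.norm(X - W @ H)), 3))
```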

  18. Transcription factor binding site analysis identifies FOXO transcription factors as regulators of the cutaneous wound healing process.

    PubMed

    Roupé, Karl Markus; Veerla, Srinivas; Olson, Joshua; Stone, Erica L; Sørensen, Ole E; Hedrick, Stephen M; Nizet, Victor

    2014-01-01

    The search for significantly overrepresented and co-occurring transcription factor binding sites in the promoter regions of the most differentially expressed genes in microarray data sets could be a powerful approach for finding key regulators of complex biological processes. To test this concept, two previously published independent data sets on wounded human epidermis were re-analyzed. The presence of co-occurring transcription factor binding sites for FOXO1, FOXO3 and FOXO4 in the majority of the promoter regions of the most significantly differentially expressed genes between non-wounded and wounded epidermis implied an important role for FOXO transcription factors during wound healing. Expression levels of FOXO transcription factors during wound healing in vivo in both human and mouse skin were analyzed and a decrease for all FOXOs in human wounded skin was observed, with FOXO3 having the highest expression level in non wounded skin. Impaired re-epithelialization was found in cultures of primary human keratinocytes expressing a constitutively active variant of FOXO3. Conversely knockdown of FOXO3 in keratinocytes had the opposite effect and in an in vivo mouse model with FOXO3 knockout mice we detected significantly accelerated wound healing. This article illustrates that the proposed approach is a viable method for identifying important regulators of complex biological processes using in vivo samples. FOXO3 has not previously been implicated as an important regulator of wound healing and its exact function in this process calls for further investigation.

  19. A cross-sectional study to identify organisational processes associated with nurse-reported quality and patient safety

    PubMed Central

    Tvedt, Christine; Sjetne, Ingeborg Strømseng; Helgeland, Jon; Bukholm, Geir

    2012-01-01

    Objectives The purpose of this study was to identify organisational processes and structures that are associated with nurse-reported patient safety and quality of nursing. Design This is an observational cross-sectional study using survey methods. Setting Respondents from 31 Norwegian hospitals with more than 85 beds were included in the survey. Participants All registered nurses working in direct patient care in a position of 20% or more were invited to answer the survey. In this study, 3618 nurses from surgical and medical wards responded (response rate 58.9%). Nurses' practice environment was defined as organisational processes and measured by the Nursing Work Index Revised and items from Hospital Survey on Patient Safety Culture. Outcome measures Nurses' assessments of patient safety, quality of nursing, confidence in how their patients manage after discharge and frequency of adverse events were used as outcome measures. Results Quality system, nurse–physician relation, patient safety management and staff adequacy were process measures associated with nurse-reported work-related and patient-related outcomes, but we found no associations with nurse participation, education and career and ward leadership. Most organisational structures were non-significant in the multilevel model except for nurses' affiliations to medical department and hospital type. Conclusions Organisational structures may have minor impact on how nurses perceive work-related and patient-related outcomes, but the findings in this study indicate that there is a considerable potential to address organisational design in improvement of patient safety and quality of care. PMID:23263021

  20. Information-processing alternatives to holistic perception: identifying the mechanisms of secondary-level holism within a categorization paradigm.

    PubMed

    Fifić, Mario; Townsend, James T

    2010-09-01

    Failure to selectively attend to a facial feature, in the part-to-whole paradigm, has been taken as evidence of holistic perception in a large body of face perception literature. In this article, we demonstrate that although failure of selective attention is a necessary property of holistic perception, its presence alone is not sufficient to conclude holistic processing has occurred. One must also consider the cognitive properties that are a natural part of information-processing systems, namely, mental architecture (serial, parallel), a stopping rule (self-terminating, exhaustive), and process dependency. We demonstrate that an analytic model (nonholistic) based on a parallel mental architecture and a self-terminating stopping rule can predict failure of selective attention. The new insights in our approach are based on the systems factorial technology, which provides a rigorous means of identifying the holistic-analytic distinction. Our main goal in the study was to compare potential changes in architecture when 2 second-order relational facial features are manipulated across different face contexts. Supported by simulation data, we suggest that the critical concept for modeling holistic perception is the interactive dependency between features. We argue that without conducting tests for architecture, stopping rule, and dependency, apparent holism could be confounded with analytic perception. This research adds to the list of converging operations for distinguishing between analytic forms and holistic forms of face perception.

  1. Robust information propagation through noisy neural circuits

    PubMed Central

    Pouget, Alexandre

    2017-01-01

    Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. PMID:28419098
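
    The claim about differential correlations can be made concrete with the standard linear Fisher information formulation from the population-coding literature; this is a textbook sketch, not necessarily the paper's exact derivation.

```latex
% Linear Fisher information about stimulus s carried by population
% response r, with tuning-curve slopes f'(s) and noise covariance Sigma:
I(s) = f'(s)^{\top} \Sigma^{-1} f'(s).
% With "differential correlations" of strength epsilon added,
% Sigma_epsilon = Sigma_0 + epsilon f'(s) f'(s)^{\top}, the
% Sherman-Morrison identity gives
I_\epsilon(s) = \frac{I_0(s)}{1 + \epsilon\, I_0(s)},
% where I_0 is the information under Sigma_0 alone. I_epsilon saturates
% at 1/epsilon no matter how many neurons are added, which is why such
% correlations limit encoded information even when they are small.
```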

  2. Transcriptome analysis of ripe and unripe fruit tissue of banana identifies major metabolic networks involved in fruit ripening process.

    PubMed

    Asif, Mehar Hasan; Lakhwani, Deepika; Pathak, Sumya; Gupta, Parul; Bag, Sumit K; Nath, Pravendra; Trivedi, Prabodh Kumar

    2014-12-02

    Banana is one of the most important crop plants grown in the tropics and sub-tropics. It is a climacteric fruit and undergoes ethylene-dependent ripening. Once ripening is initiated, it proceeds at a fast rate, making postharvest life short, which can result in heavy economic losses. During the fruit ripening process a number of physiological and biochemical changes take place and thousands of genes from various metabolic pathways are recruited to produce a ripe and edible fruit. To better understand the underlying mechanism of ripening, we undertook a study to evaluate global changes in the transcriptome of the fruit during the ripening process. We sequenced the transcriptomes of the unripe and ripe stages of banana (Musa acuminata; Dwarf Cavendish) fruit. The transcriptomes were sequenced using a 454 GSFLX-Titanium platform that resulted in more than 700,000 high quality (HQ) reads. The assembly of the reads resulted in 19,410 contigs and 92,823 singletons. A large number of the differentially expressed genes identified were linked to ripening-dependent processes including ethylene biosynthesis, perception and signalling, cell wall degradation and production of aromatic volatiles. In the banana fruit transcriptomes, we found transcripts included in 120 pathways described in the KEGG database for rice. The members of the expansin and xyloglucan transglycosylase/hydrolase (XTH) gene families were highly up-regulated during ripening, which suggests that they might play important roles in the softening of the fruit. Several genes involved in the synthesis of aromatic volatiles and members of transcription factor families previously reported to be involved in ripening were also identified. A large number of differentially regulated genes were identified during banana fruit ripening. Many of these are associated with cell wall degradation and synthesis of aromatic volatiles. A large number of differentially expressed genes did not align with any of the databases and…

  3. Robust Detection, Discrimination, and Remediation of UXO: Statistical Signal Processing Approaches to Address Uncertainties Encountered in Field Test Scenarios SERDP Project MR-1663

    DTIC Science & Technology

    2012-01-03

    Only fragments of this record's text survive: "…excellent classification performance can be achieved. Here, we aim to develop techniques to improve target characterization and reduce classifier…"; a section heading, "Robust Target Classification with Limited Training Data"; and a citation to S. L. Tantum and L. M. Collins, "A comparison of algorithms for subsurface target detection and identification using time domain…" (2001).

  4. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems

    PubMed Central

    Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary

    2011-01-01

    Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201

  5. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin; Schirmer, Mario; Hoehn, Eduard; Vogt, Tobias

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  6. Identifying and prioritizing the preference criteria using analytical hierarchical process for a student-lecturer allocation problem of internship programme

    NASA Astrophysics Data System (ADS)

    Faudzi, Syakinah; Abdul-Rahman, Syariza; Rahman, Rosshairy Abd; Hew, Jafri Hj. Zulkepli

    2016-10-01

    This paper discusses identifying and prioritizing students' preference criteria for supervisors using the Analytical Hierarchical Process (AHP), for the student-lecturer allocation problem of an internship programme. Typically a large number of students undertake internships every semester, and many preference criteria may be involved when assigning students to lecturers for supervision. Thus, identifying and prioritizing the preference criteria for assigning students to lecturers is critically needed, especially when many preferences are involved. The AHP technique is used to prioritize seven criteria: capacity, specialization, academic position, availability, professional support, relationship and gender. Students' preference alternatives are classified based on the lecturer's academic position: lecturer, senior lecturer, associate professor and professor. Criteria are ranked to find the most important preference criteria and the supervisor alternatives that students prefer. The problem is solved using the Expert Choice 11 software. A sample of 30 respondents from semester 6 and above was randomly selected to participate in the study. With a questionnaire as the medium for collecting student data, a consistency index was produced to validate the proposed study. Findings showed that the most important preference criterion is professional support, followed by specialization, availability, relationship, gender, academic position and capacity. The study found that students would like a supportive supervisor, because lack of supervision can lead students to gain lower grades and less knowledge from the internship session.

  7. Robust vessel segmentation

    NASA Astrophysics Data System (ADS)

    Bock, Susanne; Kühnel, Caroline; Boskamp, Tobias; Peitgen, Heinz-Otto

    2008-03-01

    In the context of cardiac applications, the primary goal of coronary vessel analysis often consists in supporting the diagnosis of vessel wall anomalies, such as coronary plaque and stenosis. Therefore, a fast and robust segmentation of the coronary tree is a very important but challenging task. We propose a new approach for coronary artery segmentation. Our method is based on an earlier proposed progressive region growing. A new growth front monitoring technique controls the segmentation and corrects local leakage by retrospective detection and removal of leakage artifacts. While progressively reducing the region growing threshold for the whole image, the growing process is locally analyzed using criteria based on the assumption of tubular, gradually narrowing vessels. If a voxel volume limit or a certain shape constraint is exceeded, the growing process is interrupted. Voxels affected by a failed segmentation are detected and deleted from the result. To avoid further processing at these positions, a large neighborhood is blocked for growing. Compared to a global region growing without local correction, our new local growth control and the adapted correction can deal with contrast decrease even in very small coronary arteries. Furthermore, our algorithm can efficiently handle noise artifacts and partial volume effects near the myocardium. The enhanced segmentation of more distal vessel parts was tested on 150 CT datasets. Furthermore, a comparison between the pure progressive region growing and our new approach was conducted.
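
    A simplified sketch of the progressive growing-with-monitoring idea is given below: lower the threshold stepwise and treat a sudden explosion of newly added voxels as leakage. The volume-ratio test is a stand-in assumption; the paper's growth front monitoring additionally applies tubular-shape criteria and corrects leakage locally rather than stopping globally.

```python
# Simplified sketch of progressive region growing with leakage control:
# the intensity threshold is lowered stepwise, and a step whose volume
# growth explodes is treated as leakage (the paper additionally uses
# tubular-shape criteria and local, not global, correction).
import numpy as np
from scipy import ndimage

def progressive_grow(vol, seed, thresholds, max_step_growth=4.0):
    region = np.zeros(vol.shape, dtype=bool)
    region[seed] = True
    first = True
    for t in thresholds:                     # descending thresholds
        mask = vol >= t
        labels, _ = ndimage.label(mask)
        # keep connected components that contain current region voxels
        candidate = np.isin(labels, np.unique(labels[region & mask])) & mask
        if not first and candidate.sum() > max_step_growth * max(region.sum(), 1):
            break     # sudden volume explosion -> likely leakage; stop here
        if candidate.sum():
            region = candidate
        first = False
    return region

# toy volume: a bright "vessel" line embedded in noise
rng = np.random.default_rng(0)
vol = rng.normal(0, 0.1, (32, 32, 32))
vol[16, 16, 4:28] = 1.0
seg = progressive_grow(vol, (16, 16, 5), thresholds=np.linspace(0.9, 0.3, 7))
print("segmented voxels:", int(seg.sum()))
```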

  8. Spectral identifiers from roasting process of Arabica and Robusta green beans using Laser-Induced Breakdown Spectroscopy (LIBS)

    NASA Astrophysics Data System (ADS)

    Wirani, Ayu Puspa; Nasution, Aulia; Suyanto, Hery

    2016-11-01

    Coffee (Coffea spp.) is one of the most widely consumed beverages in the world. World coffee consumption is around 70% Arabica, 26% Robusta, and the remaining 4% other varieties. The characteristics of coffee beverages are related to the chemical compositions of the roasted beans. Coffee quality is usually assessed subjectively by an experienced coffee taster. This paper reports an objective quantitative technique to analyze the chemical contents of coffee beans using LIBS. Optimum experimental conditions were a laser energy of 120 mJ and a delay time of 1 μs. Elements contained in coffee beans are Ca, W, Sr, Mg, Na, H, K, O, Rb, and Be. Calcium (Ca) is the main element in the coffee beans. The roasting process decreased the emission intensity of Ca by 42.45%. In addition, discriminant analysis was used to distinguish the Arabica and Robusta variants, in both green and roasted coffee beans. The observed identifier elements are Ca, W, Sr, and Mg. The overall chemical composition of roasted coffee beans is affected by many factors, such as the composition of the soil, the location and weather of the plantation, and the post-harvest processing of the green coffee beans (drying, storage, fermentation, and the roasting method used).

  9. Identifying Armed Respondents to Domestic Violence Restraining Orders and Recovering Their Firearms: Process Evaluation of an Initiative in California

    PubMed Central

    Frattaroli, Shannon; Claire, Barbara E.; Vittes, Katherine A.; Webster, Daniel W.

    2014-01-01

    Objectives. We evaluated a law enforcement initiative to screen respondents to domestic violence restraining orders for firearm ownership or possession and recover their firearms. Methods. The initiative was implemented in San Mateo and Butte counties in California from 2007 through 2010. We used descriptive methods to evaluate the screening process and recovery effort in each county, relying on records for individual cases. Results. Screening relied on an archive of firearm transactions, court records, and petitioner interviews; no single source was adequate. Screening linked 525 respondents (17.7%) in San Mateo County to firearms; 405 firearms were recovered from 119 (22.7%) of them. In Butte County, 88 (31.1%) respondents were linked to firearms; 260 firearms were recovered from 45 (51.1%) of them. Nonrecovery occurred most often when orders were never served or respondents denied having firearms. There were no reports of serious violence or injury. Conclusions. Recovering firearms from persons subject to domestic violence restraining orders is possible. We have identified design and implementation changes that may improve the screening process and the yield from recovery efforts. Larger implementation trials are needed. PMID:24328660

  10. Identifying armed respondents to domestic violence restraining orders and recovering their firearms: process evaluation of an initiative in California.

    PubMed

    Wintemute, Garen J; Frattaroli, Shannon; Claire, Barbara E; Vittes, Katherine A; Webster, Daniel W

    2014-02-01

    We evaluated a law enforcement initiative to screen respondents to domestic violence restraining orders for firearm ownership or possession and recover their firearms. The initiative was implemented in San Mateo and Butte counties in California from 2007 through 2010. We used descriptive methods to evaluate the screening process and recovery effort in each county, relying on records for individual cases. Screening relied on an archive of firearm transactions, court records, and petitioner interviews; no single source was adequate. Screening linked 525 respondents (17.7%) in San Mateo County to firearms; 405 firearms were recovered from 119 (22.7%) of them. In Butte County, 88 (31.1%) respondents were linked to firearms; 260 firearms were recovered from 45 (51.1%) of them. Nonrecovery occurred most often when orders were never served or respondents denied having firearms. There were no reports of serious violence or injury. Recovering firearms from persons subject to domestic violence restraining orders is possible. We have identified design and implementation changes that may improve the screening process and the yield from recovery efforts. Larger implementation trials are needed.

  11. A multivariate statistical approach to identify the spatio-temporal variation of geochemical process in a hard rock aquifer.

    PubMed

    Thivya, C; Chidambaram, S; Thilagavathi, R; Prasanna, M V; Singaraja, C; Adithya, V S; Nepolian, M

    2015-09-01

    A study has been carried out in crystalline hard rock aquifers of Madurai district, Tamil Nadu, to identify the spatial and temporal variations and to understand the sources responsible for hydrogeochemical processes in the region. In total, 216 samples were collected over four seasons [premonsoon (PRM), southwest monsoon (SWM), northeast monsoon (NEM), and postmonsoon (POM)]. The Na and K ions are attributed to weathering of feldspars in charnockite and fissile hornblende gneiss. The results also indicate that the monsoon leaches U ions into the groundwater, which is later reflected in the (222)Rn levels. The statistical relationship on the temporal data reflects the fact that Ca, Mg, Na, Cl, HCO3, and SO4 form the spinal species, the chief ions playing a significant role in the geochemistry of the region. The factor loadings of the temporal data reveal that the predominant factor is anthropogenic processes, followed by natural weathering and U dissolution. The spatial analysis of the temporal data reveals that weathering is prominent in the NW part, and the distribution of U and (222)Rn along the NE part of the study area. This is also reflected in the cluster analysis, and it is understood that lithology, land use pattern, lineaments, and groundwater flow direction determine the spatial variation of these ions with respect to season.

  12. Acetylome study in mouse adipocytes identifies targets of SIRT1 deacetylation in chromatin organization and RNA processing.

    PubMed

    Kim, Sun-Yee; Sim, Choon Kiat; Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2016-05-15

    SIRT1 is a key protein deacetylase that regulates cellular metabolism through lysine deacetylation on both histones and non-histone proteins. Lysine acetylation is a widespread post-translational modification found on many regulatory proteins and it plays an essential role in cell signaling, transcription and metabolism. In mice, SIRT1 has known protective functions during high-fat diet but the acetylome regulated by SIRT1 in adipocytes is not completely understood. Here we conducted acetylome analyses in murine adipocytes treated with small-molecule modulators that inhibit or activate the deacetylase activity of SIRT1. We identified a total of 302 acetylated peptides from 78 proteins in this study. From the list of potential SIRT1 targets, we selected seven candidates and further verified that six of them can be deacetylated by SIRT1 in vitro. Among them, half of the SIRT1 targets are involved in regulating chromatin structure and the other half in RNA processing. Our results provide a resource for further SIRT1 target validation in fat cells and suggest a potential role of SIRT1 in the regulation of chromatin structure and RNA processing, which may possibly extend to other cell types as well.

  13. The role of various amino acids in enzymatic browning process in potato tubers, and identifying the browning products.

    PubMed

    Ali, Hussein M; El-Gizawy, Ahmed M; El-Bassiouny, Rawia E I; Saleh, Mahmoud A

    2016-02-01

    The effects of five structurally variant amino acids, glycine, valine, methionine, phenylalanine and cysteine, were examined as inhibitors and/or stimulators of fresh-cut potato browning. The first four amino acids showed conflicting effects; high concentrations (⩾100 mM for glycine and ⩾1.0 M for the other three amino acids) induced potato browning while lower concentrations reduced the browning process. In contrast, increasing cysteine concentration consistently reduced the browning process due to reaction with quinone to give a colorless adduct. In the PPO assay, high concentrations (⩾1.11 mM) of the four amino acids developed more color than control samples. Visible spectra indicated a continuous condensation of quinone and glycine to give colored adducts absorbing at 610-630 nm, which were separated and identified by LC-ESI-MS as a catechol-diglycine adduct that undergoes polymerization with further glycine molecules to form peptide side chains. At lower concentrations, the lower the concentration, the less color developed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can
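
    The variance-based building block that the framework generalizes can be illustrated with the classic Monte Carlo estimator of first-order Sobol' indices. The sketch uses a hypothetical additive test model; the described framework extends this computation to grouped scenario/model/parameter uncertainty components via the Bayesian network.

```python
# Classic Monte Carlo estimator of first-order variance-based (Sobol')
# sensitivity indices -- the building block the described framework
# extends to grouped uncertainty components. The toy model is hypothetical.
import numpy as np

def first_order_sobol(model, n_dim, n=100_000, rng=None):
    rng = np.random.default_rng(rng)
    A = rng.random((n, n_dim))           # two independent sample matrices
    B = rng.random((n, n_dim))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    s1 = []
    for i in range(n_dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only coordinate i
        yABi = model(ABi)
        v_i = np.mean(yB * (yABi - yA))  # Saltelli-type estimator of Var(E[Y|X_i])
        s1.append(v_i / var)
    return np.array(s1)

# hypothetical additive test model: X2 matters most, X3 not at all
f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2]
print(first_order_sobol(f, n_dim=3, rng=0).round(2))  # roughly [0.2, 0.8, 0.0]
```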

  15. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

Several concepts and results in robust adaptive control are discussed; the presentation is organized in three parts. The first part surveys existing algorithms, reviewing different formulations of the problem and the theoretical solutions that have been suggested. The second part contains new results on the role of persistent excitation in robust adaptive systems and on the use of hybrid control to improve robustness. The third part suggests promising new areas for future research that combine different approaches currently known.

  16. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    ERIC Educational Resources Information Center

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  17. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We demonstrate the correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic
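
    To illustrate the multi-resolution idea, the sketch below coarsens a synthetic terrain grid by successive factors and tracks the median and inter-quartile range of slope at each resolution; a real application would substitute the lidar DTM, the topoindex log(area/slope), and the full MRA, which this toy does not attempt.

    ```python
    # Track how a topographic variable's distribution shifts with resolution.
    import numpy as np

    rng = np.random.default_rng(1)
    dem = np.cumsum(rng.normal(size=(512, 512)), axis=0)  # synthetic terrain

    for factor in (1, 2, 4, 8, 16):
        # block-average the grid to a coarser resolution
        coarse = dem.reshape(512 // factor, factor, -1, factor).mean(axis=(1, 3))
        gy, gx = np.gradient(coarse, float(factor))
        slope = np.hypot(gx, gy).ravel()
        q25, q50, q75 = np.percentile(slope, [25, 50, 75])
        print(f"{factor:>2}x: median slope={q50:.3f}, IQR={q75 - q25:.3f}")
    ```

    A break in how these summary statistics scale with resolution is the kind of "characteristic scale break" the abstract describes.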

  18. Mobile Phone Apps to Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps.

    PubMed

    Santo, Karla; Richtering, Sarah S; Chalmers, John; Thiagalingam, Aravinda; Chow, Clara K; Redfern, Julie

    2016-12-02

    There are a growing number of mobile phone apps available to support people in taking their medications and to improve medication adherence. However, little is known about how these apps differ in terms of features, quality, and effectiveness. We aimed to systematically review the medication reminder apps available in the Australian iTunes store and Google Play to assess their features and their quality in order to identify high-quality apps. This review was conducted in a similar manner to a systematic review by using a stepwise approach that included (1) a search strategy; (2) eligibility assessment; (3) app selection process through an initial screening of all retrieved apps and full app review of the included apps; (4) data extraction using a predefined set of features considered important or desirable in medication reminder apps; (5) analysis by classifying the apps as basic and advanced medication reminder apps and scoring and ranking them; and (6) a quality assessment by using the Mobile App Rating Scale (MARS), a reliable tool to assess mobile health apps. We identified 272 medication reminder apps, of which 152 were found only in Google Play, 87 only in iTunes, and 33 in both app stores. Apps found in Google Play had more customer reviews, higher star ratings, and lower cost compared with apps in iTunes. Only 109 apps were available for free and 124 were recently updated in 2015 or 2016. Overall, the median number of features per app was 3.0 (interquartile range 4.0) and only 18 apps had ≥9 of the 17 desirable features. The most common features were flexible scheduling that was present in 56.3% (153/272) of the included apps, medication tracking history in 54.8% (149/272), snooze option in 34.9% (95/272), and visual aids in 32.4% (88/272). We classified 54.8% (149/272) of the included apps as advanced medication reminder apps and 45.2% (123/272) as basic medication reminder apps. The advanced apps had a higher number of features per app compared with the
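
    The scoring and classification step can be pictured with a toy snippet like the one below; the feature names and the advanced/basic cut-off are illustrative assumptions, not the authors' exact rubric.

    ```python
    # Hypothetical sketch of feature-based scoring and ranking of apps.
    DESIRABLE = {"flexible_scheduling", "tracking_history", "snooze",
                 "visual_aids", "refill_reminder", "multi_profile"}

    apps = {
        "AppA": {"flexible_scheduling", "snooze"},
        "AppB": DESIRABLE - {"snooze"},
    }

    scored = sorted(((len(feats & DESIRABLE), name)
                     for name, feats in apps.items()), reverse=True)
    for score, name in scored:
        label = "advanced" if score >= 4 else "basic"   # assumed cut-off
        print(f"{name}: {score}/{len(DESIRABLE)} features ({label})")
    ```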

  19. Mobile Phone Apps to Improve Medication Adherence: A Systematic Stepwise Process to Identify High-Quality Apps

    PubMed Central

    Richtering, Sarah S; Chalmers, John; Thiagalingam, Aravinda; Chow, Clara K; Redfern, Julie

    2016-01-01

    Background There are a growing number of mobile phone apps available to support people in taking their medications and to improve medication adherence. However, little is known about how these apps differ in terms of features, quality, and effectiveness. Objective We aimed to systematically review the medication reminder apps available in the Australian iTunes store and Google Play to assess their features and their quality in order to identify high-quality apps. Methods This review was conducted in a similar manner to a systematic review by using a stepwise approach that included (1) a search strategy; (2) eligibility assessment; (3) app selection process through an initial screening of all retrieved apps and full app review of the included apps; (4) data extraction using a predefined set of features considered important or desirable in medication reminder apps; (5) analysis by classifying the apps as basic and advanced medication reminder apps and scoring and ranking them; and (6) a quality assessment by using the Mobile App Rating Scale (MARS), a reliable tool to assess mobile health apps. Results We identified 272 medication reminder apps, of which 152 were found only in Google Play, 87 only in iTunes, and 33 in both app stores. Apps found in Google Play had more customer reviews, higher star ratings, and lower cost compared with apps in iTunes. Only 109 apps were available for free and 124 were recently updated in 2015 or 2016. Overall, the median number of features per app was 3.0 (interquartile range 4.0) and only 18 apps had ≥9 of the 17 desirable features. The most common features were flexible scheduling that was present in 56.3% (153/272) of the included apps, medication tracking history in 54.8% (149/272), snooze option in 34.9% (95/272), and visual aids in 32.4% (88/272). We classified 54.8% (149/272) of the included apps as advanced medication reminder apps and 45.2% (123/272) as basic medication reminder apps. The advanced apps had a higher number

  20. RAVE J203843.2-002333: The First Highly R-process-enhanced Star Identified in the RAVE Survey

    NASA Astrophysics Data System (ADS)

    Placco, Vinicius M.; Holmbeck, Erika M.; Frebel, Anna; Beers, Timothy C.; Surman, Rebecca A.; Ji, Alexander P.; Ezzeddine, Rana; Points, Sean D.; Kaleida, Catherine C.; Hansen, Terese T.; Sakari, Charli M.; Casey, Andrew R.

    2017-07-01

We report the discovery of RAVE J203843.2-002333, a bright (V = 12.73), very metal-poor ([Fe/H] = -2.91), r-process-enhanced ([Eu/Fe] = +1.64 and [Ba/Eu] = -0.81) star selected from the RAVE survey. This star was identified as a metal-poor candidate based on its medium-resolution (R ~ 1600) spectrum obtained with the KPNO/Mayall Telescope, and followed up with high-resolution (R ~ 66,000) spectroscopy with the Magellan/Clay Telescope, allowing for the determination of elemental abundances for 24 neutron-capture elements, including thorium and uranium. RAVE J2038-0023 is only the fourth metal-poor star with a clearly measured U abundance. The derived chemical abundance pattern exhibits good agreement with those of other known highly r-process-enhanced stars, and evidence suggests that it is not an actinide-boost star. Age estimates were calculated using U/X abundance ratios, yielding a mean age of 13.0 ± 1.1 Gyr. Based on observations gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile; Kitt Peak National Observatory, National Optical Astronomy Observatory (NOAO Prop. ID: 14B-0231; PI: Placco), which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. The authors are honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham.

  1. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

A multitude of actual processes operating on hydrological networks may exhibit binary outcomes, such as clean streams in a river network that may become contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting on a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids, and human societies. However, the current definition of robustness accounts only for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building up the IN. This approach is motivated by concrete applied problems since, for example, if we study the dynamics of contamination in river systems, it is necessary to know the connectivity of both the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
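
    A minimal sketch of the dual-perspective bookkeeping, assuming a random-removal attack on a random graph: at each step the Active Network and Idle Network are materialized and their largest connected components tracked. The paper's efficiency-based metric itself is not reproduced here.

    ```python
    # Track connectivity of both the Active and Idle networks under attack.
    import networkx as nx
    import random

    random.seed(0)
    G = nx.erdos_renyi_graph(200, 0.03)
    order = list(G.nodes)
    random.shuffle(order)                     # a random "attack" sequence

    def lcc(H):
        # size of the largest connected component (0 for an empty graph)
        return max((len(c) for c in nx.connected_components(H)), default=0)

    removed = set()
    for step, node in enumerate(order, 1):
        removed.add(node)
        active = G.subgraph(n for n in G if n not in removed)  # AN
        idle = G.subgraph(removed)                             # IN
        if step % 50 == 0:
            print(f"removed {step}: |AN lcc|={lcc(active)}, |IN lcc|={lcc(idle)}")
    ```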

  2. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

[Scanned-report front matter; only table-of-contents fragments survive. Recoverable section titles: Sampled-Data Performance Analysis; Doyle and Stein Technique in Discrete-Time Systems (Parts 1 and 2); Enhancing Robustness; the Technique Extended to Sampled-Data Controllers; Robustness Enhancement.]

  3. Robust Critical Point Detection

    SciTech Connect

    Bhatia, Harsh

    2016-07-28

Robust Critical Point Detection is a software package for robustly computing critical points in a 2D or 3D vector field. The software was developed as part of the author's work at the lab as a PhD student in the Livermore Scholar Program (now called the Livermore Graduate Scholar Program).

  4. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  6. Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO2 and Climate Change --- The MINK Project

    SciTech Connect

    Easterling, W.E. III; McKenney, M.S.; Rosenberg, N.J.; Lemon, K.M.

    1991-08-01

The second report of the series, Processes for Identifying Regional Influences of and Responses to Increasing Atmospheric CO2 and Climate Change -- The MINK Project, is composed of two parts. This report (IIB) deals with agriculture at the level of farms and Major Land Resource Areas (MLRAs). The Erosion Productivity Impact Calculator (EPIC), a crop growth simulation model developed by scientists at the US Department of Agriculture, is used to study the impacts of the analog climate on yields of the main crops in both the 1984/87 and the 2030 baselines. The results of this work with EPIC are the basis for the analysis of climate change impacts on agriculture at the region-wide level undertaken in this report. Report IIA treats agriculture in MINK in terms of state and region-wide production and resource use for the main crops and animals in the baseline periods of 1984/87 and 2030. The effects of the analog climate on the industry at this level of aggregation are considered in both baseline periods. 41 refs., 40 figs., 46 tabs.

  7. Using stable isotopes to identify the scaling effects of riparian peatlands on runoff generation processes and DOC mobilisation

    NASA Astrophysics Data System (ADS)

    Tunaley, Claire; Tetzlaff, Doerthe; Soulsby, Chris

    2017-04-01

Knowledge of hydrological sources, flow paths, and their connectivity is fundamental to understanding stream flow generation and surface water quality in peatlands. Stable isotopes are proven tools for tracking the sources and flow paths of runoff. However, relatively few studies have used isotopes in peat-dominated catchments. Here, we combined 13 months (June 2014 - July 2015) of daily isotope measurements in stream water with daily DOC and 15 minute FDOM (fluorescent component of dissolved organic matter) data, at three nested scales in NE Scotland, to identify the hydrological processes occurring in riparian peatlands. We investigated how runoff generation processes in a small, riparian peatland dominated headwater catchment (0.65 km2) propagate to larger scales (3.2 km2 and 31 km2) with decreasing percentage of riparian peatland coverage. Isotope damping was most pronounced in the 0.65 km2 catchment due to high water storage in the organic soils, which encouraged tracer mixing and resulted in attenuated runoff peaks. At the largest scale, stream flow and water isotope dynamics showed a more flashy response. Particularly insightful in this study was calculating the deviation of the isotopes from the local meteoric water line, the lc-excess. The lc-excess revealed evaporative fractionation in the peatland dominated catchment, particularly during summer low flows. This implied high hydrological connectivity in the form of constant seepage from the peatlands sustaining high baseflows at the headwater scale. This constant connectivity resulted in high DOC concentrations at the peatland site during baseflow (~5 mg l-1). In contrast, at the larger scales, DOC was minimal during low flows (~2 mg l-1) due to increased groundwater influence and the disconnection between DOC sources and the stream. Insights into event dynamics through the analysis of DOC hysteresis loops showed slight dilution on the rising limb, the strong influence of dry antecedent conditions and a

  8. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system which is to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
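
    The ingredients named here (a Bayesian average over parameter draws, input constraints, PSO) can be combined in a toy robust design loop like the following; the first-order model, the prior, and all constants are assumptions for the sketch, not the AUV dynamics or the authors' algorithm.

    ```python
    # Toy Bayesian robust input design: maximize expected D-optimality
    # (log-det of the Fisher information) over parameter draws via PSO.
    import numpy as np

    rng = np.random.default_rng(0)
    T, draws = 20, 30
    thetas = rng.normal([0.8, 1.0], [0.05, 0.1], size=(draws, 2))  # prior on (a, b)

    def expected_logdet(u):
        total = 0.0
        for a, b in thetas:                       # average over the prior
            y = da = db = 0.0
            M = np.zeros((2, 2))
            for ut in u:                          # y_t = a*y_{t-1} + b*u_t
                da, db = a * da + y, a * db + ut  # parameter sensitivities
                y = a * y + b * ut
                s = np.array([da, db])
                M += np.outer(s, s)               # Fisher information
            total += np.linalg.slogdet(M + 1e-9 * np.eye(2))[1]
        return total / draws

    # plain particle swarm over input sequences constrained to [-1, 1]
    n_p, w, c1, c2 = 40, 0.7, 1.5, 1.5
    x = rng.uniform(-1, 1, (n_p, T)); v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([expected_logdet(p) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(50):
        r1, r2 = rng.random((2, n_p, T))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -1, 1)                 # enforce input constraint
        val = np.array([expected_logdet(p) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmax()].copy()
    print("robust design objective:", round(pval.max(), 2))
    ```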

  9. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  10. Association Study with 77 SNPs Confirms the Robust Role for the rs10830963/G of MTNR1B Variant and Identifies Two Novel Associations in Gestational Diabetes Mellitus Development

    PubMed Central

    Rosta, Klara; Al-Aissa, Zahra; Hadarits, Orsolya; Harreiter, Jürgen; Nádasdi, Ákos; Kelemen, Fanni; Bancher-Todesca, Dagmar; Komlósi, Zsolt; Németh, László; Rigó, János; Sziller, István; Somogyi, Anikó; Kautzky-Willer, Alexandra; Firneisz, Gábor

    2017-01-01

Context Genetic variation in human maternal DNA contributes to the susceptibility for development of gestational diabetes mellitus (GDM). Objective We assessed 77 maternal single nucleotide gene polymorphisms (SNPs) for associations with GDM or plasma glucose levels at OGTT in pregnancy. Methods 960 pregnant women (after dropouts 820; case/control: m'99WHO: 303/517, IADPSG: 287/533) were enrolled in two countries into this case-control study. After genomic DNA isolation the 820 samples were collected in a GDM biobank and assessed using the KASP (LGC Genomics) genotyping assay. Logistic regression risk models were used to calculate ORs according to IADPSG/m'99WHO criteria based on standard OGTT values. Results The most important risk alleles associated with GDM were rs10830963/G of MTNR1B (OR = 1.84/1.64 [IADPSG/m'99WHO], p = 0.0007/0.006), rs7754840/C (OR = 1.51/NS, p = 0.016) of CDKAL1 and rs1799884/T (OR = 1.4/1.56, p = 0.04/0.006) of GCK. The rs13266634/T (SLC30A8, OR = 0.74/0.71, p = 0.05/0.02) and rs7578326/G (LOC646736/IRS1, OR = 0.62/0.60, p = 0.001/0.006) variants were associated with lower risk to develop GDM. Carrying a minor allele of the rs10830963 (MTNR1B), rs7903146 (TCF7L2), or rs1799884 (GCK) SNPs was associated with increased plasma glucose levels at routine OGTT. Conclusions We confirmed the robust association of the MTNR1B rs10830963/G variant with GDM binary and glycemic traits in this Caucasian case-control study. As novel associations we report the minor G allele of the rs7578326 SNP in the LOC646736/IRS1 region as a significant and the rs13266634/T SNP (SLC30A8) as a suggestive protective variant against GDM development. Genetic susceptibility appears to be more preponderant in individuals who meet both the modified '99 WHO and the IADPSG GDM diagnostic criteria. PMID:28072873

  11. A Systematic Approach of Employing Quality by Design Principles: Risk Assessment and Design of Experiments to Demonstrate Process Understanding and Identify the Critical Process Parameters for Coating of the Ethylcellulose Pseudolatex Dispersion Using Non-Conventional Fluid Bed Process.

    PubMed

    Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W

    2017-05-01

The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fishbone diagram and failure mode and effect analysis (FMEA) risk assessment methods. The high-risk attributes identified by the FMEA analysis were first explored using a resolution V fractional factorial design to gain an understanding of the processing parameters. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was then conducted to identify the critical process parameters (CPPs) that impact the critical quality attributes and to understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically develop models of coating process parameters such as process efficiency and the extent of curing during the coating process.
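
    As an illustration of the DoE vocabulary used here, the snippet below constructs a two-level 2^(5-1) fractional factorial design of resolution V from the generator E = ABCD; the factor labels are placeholders for process parameters such as atomization pressure or inlet air volume.

    ```python
    # Build a 2^(5-1) resolution V fractional factorial design.
    from itertools import product

    base = ["A", "B", "C", "D"]            # full factorial in 4 base factors
    runs = []
    for levels in product([-1, 1], repeat=len(base)):
        run = dict(zip(base, levels))
        run["E"] = run["A"] * run["B"] * run["C"] * run["D"]  # generator E = ABCD
        runs.append(run)

    for r in runs:
        print({k: "+" if v > 0 else "-" for k, v in r.items()})
    print(f"{len(runs)} runs instead of {2 ** 5} for the full factorial")
    ```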

  12. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

Method, system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.
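
    The abstract does not spell out the perturbation procedure; one common way to generate such a class of airfoil shapes is to superpose smooth Hicks-Henne bumps on a baseline surface, as in this illustrative sketch (the baseline and all bump parameters are assumptions).

    ```python
    # Generate a family of perturbed airfoil surfaces via Hicks-Henne bumps.
    import numpy as np

    x = np.linspace(0.0, 1.0, 101)                      # chordwise stations
    baseline = 0.6 * (0.2969 * np.sqrt(x) - 0.1260 * x  # NACA0012 half-thickness
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

    def hicks_henne(x, loc, width=3.0):
        # smooth bump peaking at 'loc', vanishing at both ends of the chord
        return np.sin(np.pi * x ** (np.log(0.5) / np.log(loc))) ** width

    rng = np.random.default_rng(0)
    locs = [0.2, 0.4, 0.6, 0.8]
    for k in range(3):                                  # a small family of shapes
        amps = rng.uniform(-0.005, 0.005, size=len(locs))
        surface = baseline + sum(a * hicks_henne(x, l) for a, l in zip(amps, locs))
        print(f"shape {k}: max |perturbation| = {np.abs(surface - baseline).max():.4f}")
    ```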

  13. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

[OCR fragments only. From the report body: a remark that permutation matrices are both orthogonal and doubly stochastic [62], used to further simplify the Robust Coding problem. The remainder is Standard Form 298 reporting boilerplate. Keywords: signal processing, image compression, independent component analysis, sparse

  14. Computational Cognition and Robust Decision Making

    DTIC Science & Technology

    2013-03-06

[Briefing fragments only. Recoverable goals: (1) understanding the processes underlying human performance in complex problem-solving tasks; (2) achieving robust and seamless symbiosis between humans and systems in prediction, planning, scheduling, and decision making. Challenges and strategy: seek computational principles for optimal symbiosis of mixed human

  15. Deep-UV positive resist image by dry etching (DUV PRIME): a robust process for 0.3-μm contact holes

    NASA Astrophysics Data System (ADS)

    Louis, Didier; Laporte, Philippe; Molle, Pascale; Ullmann, H.

    1994-05-01

A classical positive resist process for DUV is not yet available and stabilized. We noted various limiting points such as the delay time for the resist material, the limitation of thickness related to ultimate resolution, and the bulk effect. The P.R.I.M.E. (Positive Resist Image by dry Etching) process technology, using the DUV 248 nm exposure wavelength, improves on each of these process parameters; for example, a well known and stable resist (JSR-UCB PLASMASK 200G) is used with hexamethyldisilazane (HMDS) as the silylating compound. The combination of DUV exposure and the top surface imaging P.R.I.M.E. process can open contact holes down to 0.3 μm with a large process window and good wafer uniformity. This publication shows the improvement of each process parameter, with extended information on process latitude (focus and exposure). We demonstrated and verified the feasibility of the contact hole process by etching 1 μm of oxide (BPSG + USG) through the PRIME process lithography.

  16. Structurally robust biological networks

    PubMed Central

    2011-01-01

    Background The molecular circuitry of living organisms performs remarkably robust regulatory tasks, despite the often intrinsic variability of its components. A large body of research has in fact highlighted that robustness is often a structural property of biological systems. However, there are few systematic methods to mathematically model and describe structural robustness. With a few exceptions, numerical studies are often the preferred approach to this type of investigation. Results In this paper, we propose a framework to analyze robust stability of equilibria in biological networks. We employ Lyapunov and invariant sets theory, focusing on the structure of ordinary differential equation models. Without resorting to extensive numerical simulations, often necessary to explore the behavior of a model in its parameter space, we provide rigorous proofs of robust stability of known bio-molecular networks. Our results are in line with existing literature. Conclusions The impact of our results is twofold: on the one hand, we highlight that classical and simple control theory methods are extremely useful to characterize the behavior of biological networks analytically. On the other hand, we are able to demonstrate that some biological networks are robust thanks to their structure and some qualitative properties of the interactions, regardless of the specific values of their parameters. PMID:21586168
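
    The flavor of a structural robustness claim can be checked numerically: for the toy two-species negative-feedback loop below, the equilibrium is locally stable for any positive parameter values, which random sampling confirms. The network and rate laws are illustrative, not one of the paper's models.

    ```python
    # Structural stability check for dx/dt = -a*x + c/(1+y), dy/dt = -b*y + d*x.
    import numpy as np

    rng = np.random.default_rng(0)

    def stable(params):
        a, b, c, d = params
        K = c * d / (a * b)
        ys = (-1 + np.sqrt(1 + 4 * K)) / 2       # positive root of y*(1+y) = K
        J = np.array([[-a, -c / (1 + ys) ** 2],  # Jacobian at the equilibrium
                      [d, -b]])
        return np.real(np.linalg.eigvals(J)).max() < 0

    trials = [stable(rng.uniform(0.1, 10.0, 4)) for _ in range(1000)]
    print(f"stable equilibria: {sum(trials)}/1000")   # expected: 1000/1000
    ```

    Analytically the same conclusion follows from the sign structure alone: the Jacobian's trace is -(a + b) < 0 and its determinant ab + cd/(1+y*)^2 > 0 for all positive parameters, which is the kind of parameter-free argument the paper formalizes with Lyapunov theory.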

  17. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014].
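
    The two acoustic criteria described (a minimum volume backscattering strength and a 120-43 kHz dB-difference window) reduce to simple masking, sketched below on synthetic data; the threshold values are placeholders, not the paper's derived numbers.

    ```python
    # Classify echogram cells as euphausiid aggregations by Sv threshold
    # plus a multi-frequency dB-difference window.
    import numpy as np

    rng = np.random.default_rng(0)
    sv43 = rng.uniform(-90, -60, (50, 200))        # Sv at 43 kHz, dB re 1 m^-1
    sv120 = sv43 + rng.uniform(2, 16, sv43.shape)  # synthetic 120 kHz field

    SV_MIN = -70.0            # assumed minimum Sv for an aggregation
    DB_LO, DB_HI = 4.0, 14.0  # assumed 120-43 kHz window for euphausiids

    diff = sv120 - sv43
    krill_mask = (sv120 > SV_MIN) & (diff > DB_LO) & (diff < DB_HI)
    print(f"euphausiid-classified cells: {krill_mask.mean():.1%}")
    ```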

  18. Feel No Guilt! Your Statistics Are Probably Robust.

    ERIC Educational Resources Information Center

    Micceri, Theodore

    This paper reports an attempt to identify appropriate and robust location estimators for situations that tend to occur among various types of empirical data. Emphasizing robustness across broad unidentifiable ranges of contamination, an attempt was made to replicate, on a somewhat smaller scale, the definitive Princeton Robustness Study of 1972 to…
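
    The kind of comparison such a study runs is easy to reproduce in miniature: on a contaminated sample, the mean is dragged toward the outliers while robust location estimators stay near the true center.

    ```python
    # Compare location estimators on a contaminated sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    clean = rng.normal(loc=100.0, scale=10.0, size=950)
    outliers = rng.normal(loc=300.0, scale=50.0, size=50)   # 5% contamination
    sample = np.concatenate([clean, outliers])

    print(f"mean:          {sample.mean():.1f}")            # pulled upward
    print(f"median:        {np.median(sample):.1f}")        # near 100
    print(f"20% trim mean: {stats.trim_mean(sample, 0.2):.1f}")  # near 100
    ```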

  19. Robust control of ionic polymer metal composites

    NASA Astrophysics Data System (ADS)

    Kang, Sunhyuk; Shin, Jongho; Kim, Seong Jun; Kim, H. Jin; Hyup Kim, Yong

    2007-12-01

Ionic polymer-metal composites (IPMCs) have been considered for various applications due to their light weight, large bending, and low actuation voltage requirements. However, their response can be slow and vary widely, depending on various factors such as fabrication processes, water content, and contact conditions with the electrodes. In order to utilize their capability in various high-performance microelectromechanical systems, controllers need to address this uncertainty and non-repeatability while improving the response speed. In this work, we identified an empirical model for the dynamic relationship between the applied voltage and the IPMC beam deflection, which includes the uncertainties and variations of the response. Then, four types of controller were designed and their performances compared: a proportional-integral-derivative (PID) controller with gains optimized using a co-evolutionary algorithm, and three types of robust controller based on H∞, H∞ with loop shaping, and μ-synthesis, respectively. Our results show that the robust control techniques can significantly improve the IPMC performance against non-repeatability or parametric uncertainties, in terms of faster response and lower overshoot than the PID control, using a lower actuation voltage.

  20. Socially Shared Metacognitive Regulation during Reciprocal Peer Tutoring: Identifying Its Relationship with Students' Content Processing and Transactive Discussions

    ERIC Educational Resources Information Center

    De Backer, Liesje; Van Keer, Hilde; Valcke, Martin

    2015-01-01

    Although successful collaborative learning requires socially shared metacognitive regulation (SSMR) of the learning process among multiple students, empirical research on SSMR is limited. The present study contributes to the emerging research on SSMR by examining its correlation with both collaborative learners' content processing strategies and…

  2. Identifying Complex Cultural Interactions in the Instructional Design Process: A Case Study of a Cross-Border, Cross-Sector Training for Innovation Program

    ERIC Educational Resources Information Center

    Russell, L. Roxanne; Kinuthia, Wanjira L.; Lokey-Vega, Anissa; Tsang-Kosma, Winnie; Madathany, Reeny

    2013-01-01

    The purpose of this research is to identify complex cultural dynamics in the instructional design process of a cross-sector, cross-border training environment by applying Young's (2009) Culture-Based Model (CBM) as a theoretical framework and taxonomy for description of the instructional design process under the conditions of one case. This…

  4. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions which are designed for robust operations and worst-case situations such as day/night cameras or rain and snow. This ideal model may be compared to various approaches that have been implemented on production vehicles and equipment using Ethernet, CAN bus, and JAUS architectures, and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  5. Content uniformity determination of pharmaceutical tablets using five near-infrared reflectance spectrometers: a process analytical technology (PAT) approach using robust multivariate calibration transfer algorithms.

    PubMed

    Sulub, Yusuf; LoBrutto, Rosario; Vivilecchia, Richard; Wabuyele, Busolo Wa

    2008-03-24

Near-infrared calibration models were developed for the determination of content uniformity of pharmaceutical tablets containing 29.4% drug load for two dosage strengths (X and Y). Both dosage strengths have a circular geometry; the only difference is the size and weight. Strength X samples weigh approximately 425 mg with a diameter of 12 mm, while strength Y samples weigh approximately 1700 mg with a diameter of 20 mm. Data used in this study were acquired from five NIR instruments manufactured by two different vendors. One of these spectrometers is a dispersive-based NIR system while the other four were Fourier transform (FT) based. The transferability of the optimized partial least-squares (PLS) calibration models developed on the primary instrument (A), located in a research facility, was evaluated using spectral data acquired from secondary instruments B, C, D, and E. Instruments B and E were located in the same research facility as spectrometer A, while instruments C and D were located in a production facility 35 miles away. The same set of tablet samples was used to acquire spectral data from all instruments. This scenario mimics the conventional pharmaceutical technology transfer from research and development to production. Direct cross-instrument prediction without standardization was performed between the primary and each secondary instrument to evaluate the robustness of the primary instrument calibration model. For the strength Y samples, this approach was successful for data acquired on instruments B, C, and D, producing root mean square errors of prediction (RMSEP) of 1.05, 1.05, and 1.22%, respectively. However, for instrument E data this approach was not successful, producing an RMSEP value of 3.40%. A similar deterioration was observed for the strength X samples, with RMSEP values of 2.78, 5.54, 3.40, and 5.78% corresponding to spectral data acquired on instruments B, C, D, and E, respectively. To minimize the effect of instrument variability
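
    A schematic of the direct cross-instrument prediction test, assuming synthetic spectra: fit a PLS model on primary-instrument data, predict from shifted secondary-instrument spectra, and report RMSEP. Real work would load the measured NIR spectra for instruments A-E.

    ```python
    # PLS calibration on a "primary" instrument, RMSEP on a "secondary" one.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths, n = 200, 80
    content = rng.uniform(27.0, 32.0, n)          # % drug load (reference values)
    basis = rng.normal(size=wavelengths)
    X_primary = np.outer(content, basis) + rng.normal(0, 0.5, (n, wavelengths))
    # secondary instrument: baseline offset plus extra noise
    X_secondary = X_primary + 0.3 + rng.normal(0, 0.2, (n, wavelengths))

    pls = PLSRegression(n_components=5).fit(X_primary, content)
    pred = pls.predict(X_secondary).ravel()
    rmsep = np.sqrt(np.mean((pred - content) ** 2))
    print(f"cross-instrument RMSEP: {rmsep:.2f}%")
    ```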

  6. The test of both worlds: identifying feature binding and control processes in congruency sequence tasks by means of action dynamics.

    PubMed

    Scherbaum, Stefan; Frisch, Simon; Dshemuchadse, Maja; Rudolf, Matthias; Fischer, Rico

    2016-11-07

Cognitive control processes enable us to act flexibly in a world posing ever-changing demands on our cognitive system. To study cognitive control, conflict tasks and especially congruency sequence effects have been regarded as fruitful tools. However, for the last decade a dispute has arisen over whether congruency sequence effects are indeed a valid measure of cognitive control processes. This debate has led to the development of increasingly complex paradigms involving numerous, intricately designed experimental conditions which are aimed at excluding low-level, associative learning mechanisms like feature binding as an alternative explanation for the emergence of congruency sequence effects. Here, we try to go beyond this all-or-nothing thinking by investigating the assumption that both cognitive control processes and feature binding mechanisms occur within trials of the same task. Based on a theoretical dual-route model of behavior under conflict, we show that both classes of cognitive mechanisms should affect behavior at different points of the decision process. By comparing these predictions to continuous mouse movements from an adapted Simon task, we find evidence that control processes and feature binding mechanisms do indeed coexist within the task but that they follow distinct timing patterns. We argue that this dynamic approach to cognitive processing opens up new ways to investigate the diversity of co-existing processes that contribute to the selection of behavior.

  7. Identifying causal networks linking cancer processes and anti-tumor immunity using Bayesian network inference and metagene constructs

    PubMed Central

    Kaiser, Jacob L.; Bland, Cassidy L.; Klinke, David J.

    2017-01-01

    Cancer arises from a deregulation of both intracellular and intercellular networks that maintain system homeostasis. Identifying the architecture of these networks and how they are changed in cancer is a pre-requisite for designing drugs to restore homeostasis. Since intercellular networks only appear in intact systems, it is difficult to identify how these networks become altered in human cancer using many of the common experimental models. To overcome this, we used the diversity in normal and malignant human tissue samples from the Cancer Genome Atlas (TCGA) database of human breast cancer to identify the topology associated with intercellular networks in vivo. To improve the underlying biological signals, we constructed Bayesian networks using metagene constructs, which represented groups of genes that are concomitantly associated with different immune and cancer states. We also used bootstrap resampling to establish the significance associated with the inferred networks. In short, we found opposing relationships between cell proliferation and epithelial-to-mesenchymal transformation (EMT) with regards to macrophage polarization. These results were consistent across multiple carcinomas in that proliferation was associated with a type 1 cell-mediated anti-tumor immune response and EMT was associated with a pro-tumor anti-inflammatory response. To address the identifiability of these networks from other datasets, we could identify the relationship between EMT and macrophage polarization with fewer samples when the Bayesian network was generated from malignant samples alone. However, the relationship between proliferation and macrophage polarization was identified with fewer samples when the samples were taken from a combination of the normal and malignant samples. PMID:26785356
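
    The bootstrap-resampling logic used to establish edge significance can be sketched independently of the full Bayesian network machinery: resample samples with replacement, relearn a (here, crudely correlation-thresholded) network each time, and keep edges that recur in most replicates. The paper uses proper Bayesian network structure learning; this sketch only illustrates the resampling step.

    ```python
    # Bootstrap edge-confidence for a simple thresholded association network.
    import numpy as np

    rng = np.random.default_rng(0)
    n, genes = 300, 5                         # samples x metagene constructs
    data = rng.normal(size=(n, genes))
    data[:, 1] += 0.8 * data[:, 0]            # plant one real dependency

    counts = np.zeros((genes, genes))
    B = 200
    for _ in range(B):
        idx = rng.integers(0, n, n)           # bootstrap resample of samples
        corr = np.corrcoef(data[idx].T)
        counts += np.abs(corr) > 0.3          # "edge" if |r| above threshold

    confident = (counts / B) > 0.9            # edges in >90% of replicates
    for i in range(genes):
        for j in range(i + 1, genes):
            if confident[i, j]:
                print(f"edge metagene{i} -- metagene{j} "
                      f"(support {counts[i, j] / B:.0%})")
    ```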

  8. Demonstration of the efficiency and robustness of an acid leaching process to remove metals from various CCA-treated wood samples.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis

    2014-01-01

    In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process using chromated copper arsenate (CCA) treated wood samples with different initial metal loading and elapsed time between wood preservation treatment and remediation. The sulfuric acid leaching process resulted in the solubilization of more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA-chips and in the solubilization of more than 96% of the As, 78% of the Cr and 91% of the Cu from CCA-sawdust. The results showed that the performance of this leaching process might be influenced by the initial metal loading of the treated wood wastes and the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge in municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and then from the effluents.

  9. Robust and Comprehensive Analysis of 20 Osteoporosis Candidate Genes by Very High-Density Single-Nucleotide Polymorphism Screen Among 405 White Nuclear Families Identified Significant Association and Gene–Gene Interaction

    PubMed Central

    Xiong, Dong-Hai; Shen, Hui; Zhao, Lan-Juan; Xiao, Peng; Yang, Tie-Lin; Guo, Yan; Wang, Wei; Guo, Yan-Fang; Liu, Yong-Jun; Recker, Robert R; Deng, Hong-Wen

    2007-01-01

    Many “novel” osteoporosis candidate genes have been proposed in recent years. To advance our knowledge of their roles in osteoporosis, we screened 20 such genes using a set of high-density SNPs in a large family-based study. Our efforts led to the prioritization of those osteoporosis genes and the detection of gene–gene interactions. Introduction We performed large-scale family-based association analyses of 20 novel osteoporosis candidate genes using 277 single nucleotide polymorphisms (SNPs) for the quantitative trait BMD variation and the qualitative trait osteoporosis (OP) at three clinically important skeletal sites: spine, hip, and ultradistal radius (UD). Materials and Methods One thousand eight hundred seventy-three subjects from 405 white nuclear families were genotyped and analyzed with an average density of one SNP per 4 kb across the 20 genes. We conducted association analyses by SNP- and haplotype-based family-based association test (FBAT) and performed gene–gene interaction analyses using multianalytic approaches such as multifactor-dimensionality reduction (MDR) and conditional logistic regression. Results and Conclusions We detected four genes (DBP, LRP5, CYP17, and RANK) that showed highly suggestive associations (10,000-permutation derived empirical global p ≤ 0.01) with spine BMD/OP; four genes (CYP19, RANK, RANKL, and CYP17) highly suggestive for hip BMD/OP; and four genes (CYP19, BMP2, RANK, and TNFR2) highly suggestive for UD BMD/OP. The associations between BMP2 with UD BMD and those between RANK with OP at the spine, hip, and UD also met the experiment-wide stringent criterion (empirical global p ≤ 0.0007). Sex-stratified analyses further showed that some of the significant associations in the total sample were driven by either male or female subjects. In addition, we identified and validated a two-locus gene–gene interaction model involving GCR and ESR2, for which prior biological evidence exists. Our results suggested the

  10. SU-C-304-02: Robust and Efficient Process for Acceptance Testing of Varian TrueBeam Linacs Using An Electronic Portal Imaging Device (EPID)

    SciTech Connect

    Yaddanapudi, S; Cai, B; Sun, B; Li, H; Noel, C; Goddu, S; Mutic, S; Harry, T; Pawlicki, T

    2015-06-15

    Purpose: The purpose of this project was to develop a process that utilizes the onboard kV and MV electronic portal imaging devices (EPIDs) to perform rapid acceptance testing (AT) of linacs in order to improve efficiency and standardize AT equipment and processes. Methods: In this study a Varian TrueBeam linac equipped with an amorphous silicon based EPID (aSi1000) was used. The conventional set of AT tests and tolerances was used as a baseline guide, and a novel methodology was developed to perform as many tests as possible using EPID exclusively. The developer mode on Varian TrueBeam linac was used to automate the process. In the current AT process there are about 45 tests that call for customer demos. Many of the geometric tests such as jaw alignment and MLC positioning are performed with highly manual methods, such as using graph paper. The goal of the new methodology was to achieve quantitative testing while reducing variability in data acquisition, analysis and interpretation of the results. The developed process was validated on two machines at two different institutions. Results: At least 25 of the 45 (56%) tests which required customer demo can be streamlined and performed using EPIDs. More than half of the AT tests can be fully automated using the developer mode, while others still require some user interaction. Overall, the preliminary data shows that EPID-based linac AT can be performed in less than a day, compared to 2–3 days using conventional methods. Conclusions: Our preliminary results show that performance of onboard imagers is quite suitable for both geometric and dosimetric testing of TrueBeam systems. A standardized AT process can tremendously improve efficiency, and minimize the variability related to third party quality assurance (QA) equipment and the available onsite expertise. Research funding provided by Varian Medical Systems. Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical

  11. The Origins of Light and Heavy R-process Elements Identified by Chemical Tagging of Metal-poor Stars

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

Growing interest in neutron star (NS) mergers as the origin of r-process elements has sprouted since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers successful in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers from peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we are successful in the interpretation of an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation followed by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, stars in the outlier, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that is inclined to be massive and consist of both light and heavy r-process nuclides.

  12. THE ORIGINS OF LIGHT AND HEAVY R-PROCESS ELEMENTS IDENTIFIED BY CHEMICAL TAGGING OF METAL-POOR STARS

    SciTech Connect

    Tsujimoto, Takuji; Shigeyama, Toshikazu

    2014-11-01

Growing interest in neutron star (NS) mergers as the origin of r-process elements has sprouted since the discovery of evidence for the ejection of these elements from a short-duration γ-ray burst. The hypothesis of a NS merger origin is reinforced by a theoretical update of nucleosynthesis in NS mergers successful in yielding r-process nuclides with A > 130. On the other hand, whether the origin of light r-process elements is associated with nucleosynthesis in NS merger events remains unclear. We find a signature of nucleosynthesis in NS mergers from peculiar chemical abundances of stars belonging to the Galactic globular cluster M15. This finding, combined with the recent nucleosynthesis results, implies a potential diversity of nucleosynthesis in NS mergers. Based on these considerations, we are successful in the interpretation of an observed correlation between [light r-process/Eu] and [Eu/Fe] among Galactic halo stars and accordingly narrow down the role of supernova nucleosynthesis in the r-process production site. We conclude that the tight correlation followed by a large fraction of halo stars is attributable to the fact that core-collapse supernovae produce light r-process elements while heavy r-process elements such as Eu and Ba are produced by NS mergers. On the other hand, stars in the outlier, composed of r-enhanced stars ([Eu/Fe] ≳ +1) such as CS22892-052, were exclusively enriched by matter ejected by a subclass of NS mergers that is inclined to be massive and consist of both light and heavy r-process nuclides.

  13. Robust distribution network reconfiguration

    SciTech Connect

    Lee, Changhyeok; Liu, Cong; Mehrotra, Sanjay; Bie, Zhaohong

    2015-03-01

We propose a two-stage robust optimization model for the distribution network reconfiguration problem with load uncertainty. The first-stage decision is to configure the radial distribution network, and the second-stage decision is to find the optimal AC power flow of the reconfigured network for a given demand realization. We solve the two-stage robust model using a column-and-constraint generation algorithm, where the master problem and subproblem are formulated as mixed-integer second-order cone programs. Computational results for 16-, 33-, 70-, and 94-bus test cases are reported. We find that the configuration from the robust model sacrifices little power loss under the nominal load scenario compared to the configuration from the deterministic model, yet it provides reliability of the distribution system for all scenarios in the uncertainty set.
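
    The min-max structure of the two-stage model can be shown with a toy in which both stages collapse to table lookups; the real paper solves mixed-integer second-order cone programs inside a column-and-constraint generation loop, which this sketch does not attempt.

    ```python
    # Toy min-max choice: pick the configuration with the best worst case.
    import numpy as np

    rng = np.random.default_rng(0)
    n_configs, n_scenarios = 5, 8
    # cost[c, s]: stand-in for the second-stage (power-flow) cost of
    # configuration c under load scenario s
    cost = rng.uniform(10, 30, (n_configs, n_scenarios))

    worst = cost.max(axis=1)                  # inner max over the uncertainty set
    best = int(worst.argmin())                # outer min over configurations
    print(f"robust choice: config {best}, worst-case cost {worst[best]:.1f}")
    print(f"nominal choice: config {int(cost[:, 0].argmin())} "
          f"(taking scenario 0 as the nominal load)")
    ```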

  14. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    PubMed

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.
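
    Pooled correlations like r = .51 are conventionally obtained by Fisher z-transformation with inverse-variance weights; the sketch below shows that standard fixed-effect computation on made-up study values, not the authors' data.

    ```python
    # Fixed-effect meta-analytic pooling of correlations via Fisher's z.
    import numpy as np

    r = np.array([0.48, 0.55, 0.40, 0.60])   # hypothetical per-study correlations
    n = np.array([120, 80, 200, 60])         # hypothetical sample sizes

    z = np.arctanh(r)                        # Fisher r-to-z transform
    w = n - 3                                # inverse-variance weights (var = 1/(n-3))
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1 / np.sqrt(np.sum(w))
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    print(f"pooled r = {np.tanh(z_bar):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
    ```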

  15. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: Meta-analytic findings

    PubMed Central

    Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.

    2014-01-01

    Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469

  16. Examining the Cognitive Processes Used by Adolescent Girls and Women Scientists in Identifying Science Role Models: A Feminist Approach

    ERIC Educational Resources Information Center

    Buck, Gayle A.; Plano Clark, Vicki L.; Leslie-Pelecky, Diandra; Lu, Yun; Cerda-Lizarraga, Particia

    2008-01-01

    Women remain underrepresented in science professions. Studies have shown that students are more likely to select careers when they can identify a role model in that career path. Further research has shown that the success of this strategy is enhanced by the use of gender-matched role models. While prior work provides insights into the value of…

  17. Seventeen Projects Carried out by Students Designing for and with Disabled Children: Identifying Designers' Difficulties during the Whole Design Process

    ERIC Educational Resources Information Center

    Magnier, Cecile; Thomann, Guillaume; Villeneuve, Francois

    2012-01-01

    This article aims to identify the difficulties that may arise when designing assistive devices for disabled children. Seventeen design projects involving disabled children, engineering students, and special schools were analysed. A content analysis of the design reports was performed. For this purpose, a coding scheme was built based on a review…

  18. The Process of Identifying Children's Mental Model of Their Own Learning as Inferred from Learning a Song.

    ERIC Educational Resources Information Center

    Brand, Eva

    1998-01-01

    Identifies and describes children's in-action mental model of their own learning, which was inferred from the observation of children's behaviors when learning a song. Explains that the mental model is made up of musical organizations and learning strategies, each on two levels, large-scale and detailed. (CMK)

  20. Identifying key hydrological and biochemical processes for predicting field scale nitrate and ammonia export in agricultural cold regions

    NASA Astrophysics Data System (ADS)

    Costa, D.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    Nutrient runoff from agricultural cold regions such as the Canadian Prairies is impairing the ecological function of regional lakes and contributing to massive algal blooms such as those found in Lake Winnipeg. Improving catchment model predictions of nutrient export in cold regions requires a better understanding and representation of the main processes controlling nutrient exports at multiple scales. Popular state-of-the-art models often have deficient representation of processes at smaller scales and lack the temporal resolution required to capture important solute transport phenomena, such as preferential elution of ions from the melting snowpack, solute infiltration into frozen soils, and transport during rain-on-snow events. Important processes in the Canadian Prairies that are often neglected are wind-redistributed snowpacks and the impacts of their heterogeneous snowcover depletion on nutrient transport. In this research, physical evidence from high-frequency field measurements was used to develop a process-based nutrient model for field-scale prediction of nitrate-nitrite (NO3-NO2) and ammonia (NH3) concentrations in both spring snowmelt and summer rainfall-driven runoff. The process-based, modular Cold Regions Hydrological Model (CRHM) was used to simulate the main hydrological drivers such as snow redistribution, snowmelt, infiltration into frozen and unfrozen soils, evapotranspiration, and subsurface and surface runoff generation. Field observations and a model application to the South Tobacco Creek sub-basin of the Red River in Manitoba, Canada, suggest that the transport of nutrients can be divided into five phases of dominant transport mechanisms, as the available nutrient sources progressively change from the snowpack to the thawing frozen soil during melt. The vertical distribution of nutrients in the snowpack also varies due to ion exclusion processes at the snow crystal-air interface. Such findings are an important step towards more accurate and…

  1. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    Polymers of bacterial origin, either secreted by cells or left as the degraded product of cell lysis, form isolated mucoidal strands as well as well-developed biofilms on interfaces. Biofilms are structurally and compositionally complex and are readily distinguishable from abiogenic films. These structures range in size from micrometers to decimeters, the latter occurring as the well-known mineralised biofilms called stromatolites. Compositionally, bacterial polymers are greater than 90% water, while the majority of the macromolecules forming their framework are polysaccharides, with some nucleic acids and proteins. These macromolecules carry a vast number of functional groups, such as carboxyls, hydroxyls, and phosphoryls, which are implicated in cation binding. It is this elevated metal-binding capacity that provides the bacterial polymer with structural support and also helps to preserve it for up to 3.5 b.y. in the terrestrial rock record. The macromolecules can thus become rapidly mineralised and trapped in a mineral matrix. Through early and late diagenesis (bacterial degradation, burial, heat, pressure and time) they break down, losing their functional groups and, gradually, their hydrogen atoms; the degraded product is known as "kerogen". With further diagenesis and metamorphism, the remaining hydrogen atoms are lost and the remnant carbonaceous material becomes graphitised, although this is not true for all of the macromolecules. We have traced fossilised polymers and biofilms in rocks throughout Earth's history, the oldest being 3.5 b.y. old. Furthermore, Time of Flight Secondary Ion Mass Spectrometry has been able to identify individual macromolecules of bacterial origin, the identities of which are still being investigated, in all the samples…

  3. Combining ecophysiological modelling and quantitative trait locus analysis to identify key elementary processes underlying tomato fruit sugar concentration

    PubMed Central

    Prudent, Marion; Lecomte, Alain; Bouchet, Jean-Paul; Bertin, Nadia; Causse, Mathilde; Génard, Michel

    2011-01-01

    A mechanistic model predicting the accumulation of tomato fruit sugars was developed in order (i) to dissect the relative influence of three underlying processes: assimilate supply (S), metabolic transformation of sugars into other compounds (M), and dilution by water uptake (D); and (ii) to estimate the genetic variability of S, M, and D. The latter was estimated in a population of 20 introgression lines derived from the introgression of a wild tomato species (Solanum chmielewskii) into S. lycopersicum, grown under two contrasting fruit load conditions. Low load systematically decreased D in the whole population, while S and M were targets of genotype×fruit load interactions. The sugar concentration positively correlated to S and D when the variation was due to genetic introgressions, while it positively correlated to S and M when the variation was due to changes in fruit load. Co-localizations between quantitative trait loci (QTLs) for sugar concentration and QTLs for S, M, and D allowed hypotheses to be proposed on the processes putatively involved at the QTLs. Among the five QTLs for sugar concentration, four co-localized with QTLs for S, M, and D with similar allele effects. Moreover, the processes underlying QTLs for sugar accumulation changed according to the fruit load condition. Finally, for some genotypes, the processes underlying sugar concentration compensated in such a way that they did not modify the sugar concentration. By uncoupling genetic from physiological relationships between processes, these results provide new insights into tomato fruit sugar accumulation. PMID:21036926

  4. Identifying low-dimensional dynamics in type-I edge-localised-mode processes in JET plasmas

    SciTech Connect

    Calderon, F. A.; Chapman, S. C.; Nicol, R. M.; Dendy, R. O.; Webster, A. J.; Alper, B. [EURATOM Collaboration: JET EFDA Contributors

    2013-04-15

    Edge localised mode (ELM) measurements from reproducibly similar plasmas in the Joint European Torus (JET) tokamak, which differ only in their gas puffing rate, are analysed in terms of the pattern in the sequence of inter-ELM time intervals. It is found that the category of ELM defined empirically as type I (typically more regular, less frequent, and larger in amplitude than other ELM types) embraces substantially different ELMing processes. By quantifying the structure in the sequence of inter-ELM time intervals using delay time plots, we reveal transitions between distinct phase space dynamics, implying transitions between distinct underlying physical processes. The control parameter for the transitions between these different ELMing processes is the gas puffing rate.
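
    A delay time plot of the kind used here simply plots each inter-ELM interval against the following one, so distinct dynamical regimes show up as distinct structures in the plane. A minimal sketch with synthetic event times; the real analysis uses measured JET ELM timings:

        import numpy as np
        import matplotlib.pyplot as plt

        # Synthetic ELM event times (seconds); real data would come from JET diagnostics
        rng = np.random.default_rng(0)
        event_times = np.cumsum(0.02 + 0.005 * rng.standard_normal(500))

        dt = np.diff(event_times)          # inter-ELM time intervals
        plt.scatter(dt[:-1], dt[1:], s=5)  # delay plot: interval n vs. interval n+1
        plt.xlabel("dt_n (s)")
        plt.ylabel("dt_n+1 (s)")
        plt.title("Delay-time plot of inter-ELM intervals")
        plt.show()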

  5. Acetylome Analysis Identifies SIRT1 Targets in mRNA-Processing and Chromatin-Remodeling in Mouse Liver

    PubMed Central

    Tang, Hui; Han, Weiping; Zhang, Kangling; Xu, Feng

    2015-01-01

    Lysine acetylation is a post-translational modification found on numerous proteins, a strategy used in cell signaling to change protein activity in response to internal or external cues. Sirtuin 1 (SIRT1) is a central lysine deacetylase involved in a variety of cellular processes including metabolism, apoptosis, and DNA repair. Here we characterize the lysine acetylome in mouse liver and, by using a Sirt1-/- knockout mouse model, show that SIRT1 regulates the deacetylation of 70 proteins in the liver in vivo. Amongst these SIRT1-regulated proteins, we find that four RNA-processing proteins and a chromatin-remodeling protein can be deacetylated by SIRT1 directly in vitro. The discovery that SIRT1 has a potential role in RNA processing suggests a new layer of regulation in the variety of functions performed by SIRT1. PMID:26468954

  6. Cascading failure and robustness in metabolic networks.

    PubMed

    Smart, Ashley G; Amaral, Luis A N; Ottino, Julio M

    2008-09-09

    We investigate the relationship between structure and robustness in the metabolic networks of Escherichia coli, Methanosarcina barkeri, Staphylococcus aureus, and Saccharomyces cerevisiae, using a cascading failure model based on a topological flux balance criterion. We find that, compared to appropriate null models, the metabolic networks are exceptionally robust. Furthermore, by decomposing each network into rigid clusters and branched metabolites, we demonstrate that the enhanced robustness is related to the organization of branched metabolites, as rigid cluster formations in the metabolic networks appear to be consistent with null model behavior. Finally, we show that cascading in the metabolic networks can be described as a percolation process.

  7. Cascading failure and robustness in metabolic networks

    PubMed Central

    Smart, Ashley G.; Amaral, Luis A. N.; Ottino, Julio M.

    2008-01-01

    We investigate the relationship between structure and robustness in the metabolic networks of Escherichia coli, Methanosarcina barkeri, Staphylococcus aureus, and Saccharomyces cerevisiae, using a cascading failure model based on a topological flux balance criterion. We find that, compared to appropriate null models, the metabolic networks are exceptionally robust. Furthermore, by decomposing each network into rigid clusters and branched metabolites, we demonstrate that the enhanced robustness is related to the organization of branched metabolites, as rigid cluster formations in the metabolic networks appear to be consistent with null model behavior. Finally, we show that cascading in the metabolic networks can be described as a percolation process. PMID:18765805
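
    The paper's cascade is driven by a topological flux-balance criterion; as a stand-in, the sketch below propagates failures through a toy directed reaction network, failing any node that loses all of its suppliers. The graph and the failure rule are illustrative simplifications, not a reimplementation of the study's model:

        import networkx as nx

        def cascade(g, seed):
            """Toy cascading-failure model: remove `seed`, then repeatedly fail
            any node that originally had suppliers (incoming edges) but has
            lost all of them. Returns the set of failed nodes."""
            had_suppliers = {n for n in g if g.in_degree(n) > 0}
            g = g.copy()
            g.remove_node(seed)
            failed = {seed}
            while True:
                dead = [n for n in g if n in had_suppliers and g.in_degree(n) == 0]
                if not dead:
                    return failed
                g.remove_nodes_from(dead)
                failed.update(dead)

        # Toy network: metabolite A feeds B and C; B feeds D
        g = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D")])
        print(cascade(g, "A"))  # the failure percolates to all four nodes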

  8. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is on (possible dynamic) relations among the measured variables. Consequently the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust in a sense which includes the major issues of importance in practical failure detection is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
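
    Redundancy relations of this kind are often made concrete as parity relations: any row vector w in the left null space of the measurement matrix C satisfies wC = 0, so the residual wy is zero for all fault-free measurements y = Cx and nonzero when a sensor fault is present. A minimal numerical sketch; the matrices are illustrative, and choosing w to be robust to model uncertainty is the actual subject of the paper:

        import numpy as np
        from scipy.linalg import null_space

        # Three sensors measuring a two-dimensional state: y = C x (+ fault)
        C = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])

        W = null_space(C.T).T          # rows satisfy W @ C == 0: the redundancy relations
        x = np.array([2.0, -1.0])      # true, unmeasured state

        y_ok = C @ x                                 # healthy measurements
        y_bad = y_ok + np.array([0.0, 0.5, 0.0])     # bias fault on sensor 2

        print(W @ y_ok)    # ~0: the redundancy relation holds
        print(W @ y_bad)   # clearly nonzero residual exposes the fault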

  9. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools.

    PubMed

    Verspoor, Karin; Cohen, Kevin Bretonnel; Lanfranchi, Arrick; Warner, Colin; Johnson, Helen L; Roeder, Christophe; Choi, Jinho D; Funk, Christopher; Malenkiy, Yuriy; Eckert, Miriam; Xue, Nianwen; Baumgartner, William A; Bada, Michael; Palmer, Martha; Hunter, Lawrence E

    2012-08-17

    We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications.

  10. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools

    PubMed Central

    2012-01-01

    Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054
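
    Tool comparisons of the kind reported here typically score each system's output spans against the gold annotations with exact-match precision, recall, and F1. A minimal sketch; the spans below are hypothetical, not CRAFT's actual evaluation harness:

        def span_prf(gold, predicted):
            """Exact-match precision, recall and F1 over (start, end, label) spans."""
            gold, predicted = set(gold), set(predicted)
            tp = len(gold & predicted)
            p = tp / len(predicted) if predicted else 0.0
            r = tp / len(gold) if gold else 0.0
            f1 = 2 * p * r / (p + r) if p + r else 0.0
            return p, r, f1

        gold = {(0, 7, "GENE"), (12, 20, "CHEMICAL")}
        pred = {(0, 7, "GENE"), (25, 31, "GENE")}
        print(span_prf(gold, pred))  # (0.5, 0.5, 0.5)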

  11. A systematic study of process windows and MEF for line end shortening under various photo conditions for more effective and robust OPC correction

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Zhu, Jun; Wu, Peng; Jiang, Yuntao

    2006-03-01

    Line end shortening (LES) is a classical phenomenon in photolithography, primarily caused by the finite resolution of the optics at the position of the line ends. The shortening varies from a couple of tens of nanometers for processes with a k1 around 0.5 to as much as 100 nanometers for advanced processes with more aggressive k1 numbers. Besides illumination, effective resist diffusion has been found to worsen the situation. The effective diffusion length for a typical chemically amplified resist, which has been demonstrated to be critical to the performance of the photolithographic process, can be as much as 30 to 60 nm and has been found to generate some extra 30 nm of LES. Experiments have indicated that wider lines suffer less LES. However, under certain CD through-pitch conditions, when the lines or spaces are very wide, the opposing line ends may even merge. Currently, two methods are widely used to improve the situation. One is to extend the line ends on the mask, moving them closer toward each other to compensate for the shortening. For a more conservatively defined minimum external separation rule, however, this method by itself may not fully offset the LES: there is a limit, because when the line ends are too close to each other on the mask, any perturbation of the mask CD may cause the line ends to merge on the wafer. The other method is to add hammerheads, i.e., wider line endings. This is equivalent to an effectively wider line end, which shows less shortening and can also tolerate a rather conservative minimum external separation. In some designs, however, there is no room to implement this, e.g., when the line ends are sandwiched between dense lines at minimum ground rules. Therefore, to best minimize or completely characterize the LES effect, one needs to study both the process window and the mask error factor (MEF) under a variety…
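
    The mask error factor invoked here is the ratio of the change in wafer CD to the corresponding change in mask CD (quoted at wafer scale). A one-line illustration of the definition; the values are hypothetical:

        def mask_error_factor(delta_cd_wafer, delta_cd_mask):
            """MEF: how strongly mask CD perturbations are amplified on the wafer
            (mask CD quoted at wafer scale). MEF > 1 means mask errors are
            magnified, which is what makes closely spaced line ends risky."""
            return delta_cd_wafer / delta_cd_mask

        print(mask_error_factor(6.0, 2.0))  # a 2 nm mask change moving wafer CD by 6 nm gives MEF = 3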

  12. An Exploration of Strategic Planning Perspectives and Processes within Community Colleges Identified as Being Distinctive in Their Strategic Planning Practices

    ERIC Educational Resources Information Center

    Augustyniak, Lisa J.

    2015-01-01

    Community college leaders face unprecedented change, and some have begun reexamining their institutional strategic planning processes. Yet, studies in higher education strategic planning spend little time examining how community colleges formulate their strategic plans. This mixed-method qualitative study used an expert sampling method to identify…

  13. The Role of the Speech-Language Pathologist in Identifying and Treating Children with Auditory Processing Disorder

    ERIC Educational Resources Information Center

    Richard, Gail J.

    2011-01-01

    Purpose: The purpose of this prologue is to provide a historical perspective regarding the controversial issues surrounding auditory processing disorder (APD), as well as a summary of the current issues and perspectives that will be discussed in the articles in this forum. Method: An evidence-based systematic review was conducted to examine…

  14. Identifying thresholds in pattern-process relationships: a new cross-scale interactions experiment at the Jornada Basin LTER

    USDA-ARS?s Scientific Manuscript database

    Interactions among ecological patterns and processes at multiple scales play a significant role in threshold behaviors in arid systems. Black grama grasslands and mesquite shrublands are hypothesized to operate under unique sets of feedbacks: grasslands are maintained by fine-scale biotic feedbacks ...

  15. Identifying the Associated Factors of Mediation and Due Process in Families of Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Burke, Meghan M.; Goldman, Samantha E.

    2015-01-01

    Compared to families of students with other types of disabilities, families of students with autism spectrum disorder (ASD) are significantly more likely to enact their procedural safeguards such as mediation and due process. However, we do not know which school, child, and parent characteristics are associated with the enactment of safeguards.…

  16. The AP Chemistry Course Audit: A Fertile Ground for Identifying and Addressing Misconceptions about the Course and Process

    ERIC Educational Resources Information Center

    Schwenz, Richard W.; Miller, Sheldon

    2014-01-01

    The advanced placement course audit was implemented to standardize the college-level curricular and resource requirements for AP courses. While the process has had this effect, it has brought with it misconceptions about how much the College Board intends to control what happens within the classroom, what information is required to be included in…

  18. Calculation of the Helfferich number to identify the rate-controlling step of ion exchange for a batch process

    SciTech Connect

    Bunzl, K.

    1995-08-01

    The Helfferich number He is used frequently as a valuable criterion to decide whether for an ion exchange process film diffusion or particle diffusion of the ions is the rate-determining step. The corresponding equation given by Helfferich is restricted, however, for the boundary condition of an infinite solution volume. In the present paper, the Helfferich number is calculated also for a finite solution volume, i.e., for a typical batch process. Because the resulting equation can be solved only numerically, the results are presented in graphical form. It is also examined for which batch processes the conventional Helfferich number already yields a conservative and thus a very simple and useful estimate of the rate-determining step. Information on the kinetics of ion exchange reactions is required not only for the economic employment of synthetic ion exchangers in the industry and the laboratory but also for a better understanding of these processes in natural systems, as, e.g., the sorption of nutrient and toxic ions by the soil.
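
    For the infinite-solution-volume case, the criterion is commonly stated (e.g., in Helfferich's Ion Exchange; the exact form is assumed here, and the paper's finite-volume results are graphical) as He = (C_bar*D_bar*delta)/(C*D*r0)*(5 + 2*alpha), with He >> 1 indicating film-diffusion control and He << 1 particle-diffusion control. A sketch with illustrative values:

        def helfferich_number(c_bar, d_bar, c, d, delta, r0, alpha):
            """Classical Helfferich criterion (infinite-solution-volume form):
            concentration and diffusivity inside the exchanger (c_bar, d_bar)
            versus in solution (c, d), film thickness delta, bead radius r0,
            and separation factor alpha of the exchanging counter-ions."""
            return (c_bar * d_bar * delta) / (c * d * r0) * (5 + 2 * alpha)

        he = helfferich_number(c_bar=2.0, d_bar=1e-7, c=0.01, d=1e-5,
                               delta=1e-3, r0=0.05, alpha=1.5)  # illustrative values only
        regime = ("film diffusion controls" if he > 10
                  else "particle diffusion controls" if he < 0.1
                  else "mixed control; neither step clearly dominates")
        print(f"He = {he:.2f}: {regime}")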

  20. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.

  1. Wetting: Intrinsically robust hydrophobicity

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Jiang, Lei

    2013-04-01

    Ceramic surfaces can be rendered hydrophobic by using polymeric modifiers, but these are not robust to harsh environments. A known family of rare-earth oxide ceramics is now found to exhibit intrinsic hydrophobicity, even after exposure to high temperatures and abrasive wear.

  2. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
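
    The percolation model described here can be sketched directly: take a small spatial graph, fail each link independently with probability proportional to its length, and estimate the probability that the network remains connected. The graph construction and proportionality constant below are illustrative:

        import random
        import networkx as nx

        def survival_probability(g, pos, beta=0.5, trials=2000):
            """Fraction of trials in which the network stays connected when each
            link fails independently with probability proportional to its
            Euclidean length (capped at 1)."""
            def length(u, v):
                (x1, y1), (x2, y2) = pos[u], pos[v]
                return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

            alive = 0
            for _ in range(trials):
                h = g.copy()
                h.remove_edges_from([e for e in g.edges
                                     if random.random() < min(1.0, beta * length(*e))])
                alive += nx.is_connected(h)
            return alive / trials

        g = nx.random_geometric_graph(20, radius=0.5, seed=1)
        g = g.subgraph(max(nx.connected_components(g), key=len)).copy()  # start connected
        pos = nx.get_node_attributes(g, "pos")
        print(survival_probability(g, pos))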

  3. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2005-09-06

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer occurs by low-pass filtering of the windowed reflections from the target layers, leaving frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectrum. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, viscosity of fluid, and the fluid saturation of the porous or fractured layers.

  4. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2006-11-14

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer occurs by low-pass filtering of the windowed reflections from the target layers, leaving frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectrum. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, viscosity of fluid, and the fluid saturation of the porous or fractured layers.
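
    The low-pass step at the heart of the method, keeping only the energy below the low corner of the recorded spectrum in a windowed reflection, can be sketched with a standard zero-phase Butterworth filter. The synthetic trace, sampling rate, and 20 Hz corner below are placeholders, not the patent's parameters:

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 500.0                          # sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        trace = (np.sin(2 * np.pi * 12 * t)            # low-frequency reflection energy
                 + 0.5 * np.sin(2 * np.pi * 60 * t))   # higher-frequency content

        b, a = butter(4, 20.0 / (fs / 2), btype="low")  # keep energy below ~20 Hz
        low_passed = filtfilt(b, a, trace)              # zero-phase low-pass filtering
        print(low_passed[:5])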

  5. A Robust and Effective Multivariate Post-processing approach: Application on North American Multi-Model Ensemble Climate Forecast over the CONUS

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2017-04-01

    The North American Multi-model Ensemble (NMME) forecasting system has been providing valuable information using a large number of contributing models, each consisting of several ensemble members. Despite all the potential benefits that the NMME offers, the forecasts are prone to bias in many regions. In this study, monthly precipitation from 11 contributing models totaling 128 ensemble members in the NMME is assessed and bias corrected. All the models are regridded to 0.5 degree spatial resolution for a more detailed assessment. The goals of this study are as follows: 1. Evaluating the performance of the NMME models over the Contiguous United States using probabilistic and deterministic measures. 2. Introducing the Copula-based ensemble post-processing (COP-EPP) method, rooted in Bayesian methods, for conditioning the forecast on the observations to improve the performance of NMME predictions. 3. Comparing the forecast skill of the NMME at four different lead times (lead-0 to lead-3) across the western US, and assessing the effectiveness of COP-EPP in post-processing of precipitation forecasts. Results revealed that NMME models are highly biased in the central and western US, while they provide acceptable performance in the eastern regions. The new approach demonstrates substantial improvement over the raw NMME forecasts, and regional assessment indicates that COP-EPP is superior to the commonly used Quantile Matching (QM) approach. This method also shows considerable improvement in the seasonal NMME forecasts at all lead times.
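
    The copula-based post-processor itself is more involved than a short example allows, but the Quantile Matching baseline it is compared against is compact: each forecast value is mapped through the observed climatology at the quantile it occupies in the forecast climatology. A minimal sketch with empirical distributions; all data below are synthetic:

        import numpy as np

        def quantile_matching(forecast, obs_clim, fcst_clim):
            """Classic quantile-mapping bias correction: find each forecast's
            quantile in the forecast climatology, then read off the observed
            climatology value at that same quantile."""
            fcst_sorted = np.sort(fcst_clim)
            quantiles = np.searchsorted(fcst_sorted, forecast) / len(fcst_sorted)
            return np.quantile(obs_clim, np.clip(quantiles, 0, 1))

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 30.0, 1000)    # observed monthly precipitation (mm)
        fcst = rng.gamma(2.0, 45.0, 1000)   # wet-biased model climatology
        print(quantile_matching(np.array([60.0, 120.0]), obs, fcst))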

  6. Identifying component-processes of executive functioning that serve as risk factors for the alcohol-aggression relation.

    PubMed

    Giancola, Peter R; Godlaski, Aaron J; Roth, Robert M

    2012-06-01

    The present investigation determined how different component-processes of executive functioning (EF) served as risk factors for intoxicated aggression. Participants were 512 (246 males and 266 females) healthy social drinkers between 21 and 35 years of age. EF was measured using the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A) that assesses nine EF components. After the consumption of either an alcohol or a placebo beverage, participants were tested on a modified version of the Taylor Aggression Paradigm in which mild electric shocks were received from, and administered to, a fictitious opponent. Aggressive behavior was operationalized as the shock intensities and durations administered to the opponent. Although a general BRIEF-A EF construct consisting of all nine components predicted intoxicated aggression, the best predictor involved one termed the Behavioral Regulation Index that comprises component processes such as inhibition, emotional control, flexible thinking, and self-monitoring.

  7. Identifying Component-Processes of Executive Functioning that Serve as Risk Factors for the Alcohol-Aggression Relation

    PubMed Central

    Giancola, Peter R.; Godlaski, Aaron J.; Roth, Robert M.

    2011-01-01

    The present investigation determined how different component-processes of executive functioning (EF) served as risk factors for intoxicated aggression. Participants were 512 (246 men and 266 women) healthy social drinkers between 21 and 35 years of age. EF was measured using the Behavior Rating Inventory of Executive Functioning – Adult Version (BRIEF-A; Roth, Isquith, & Gioia, 2005) that assesses nine EF components. Following the consumption of either an alcohol or a placebo beverage, participants were tested on a modified version of the Taylor Aggression Paradigm (Taylor, 1967) in which mild electric shocks were received from, and administered to, a fictitious opponent. Aggressive behavior was operationalized as the shock intensities and durations administered to the opponent. Although a general BRIEF-A EF construct consisting of all nine components predicted intoxicated aggression, the best predictor involved one termed the Behavioral Regulation Index which comprises component processes such as inhibition, emotional control, flexible thinking, and self-monitoring. PMID:21875167

  8. Identifying the sources and processes of mercury in subtropical estuarine and ocean sediments using Hg isotopic composition.

    PubMed

    Yin, Runsheng; Feng, Xinbin; Chen, Baowei; Zhang, Junjun; Wang, Wenxiong; Li, Xiangdong

    2015-02-03

    The concentrations and isotopic compositions of mercury (Hg) in surface sediments of the Pearl River Estuary (PRE) and the South China Sea (SCS) were analyzed. The data revealed significant differences between the total Hg (THg) in fine-grained sediments collected from the PRE (8-251 μg kg(-1)) and those collected from the SCS (12-83 μg kg(-1)). Large spatial variations in Hg isotopic compositions were observed in the SCS (δ(202)Hg, from -2.82 to -2.10‰; Δ(199)Hg, from +0.21 to +0.45‰) and PRE (δ(202)Hg, from -2.80 to -0.68‰; Δ(199)Hg, from -0.15 to +0.16‰). The large positive Δ(199)Hg in the SCS indicated that a fraction of Hg has undergone Hg(2+) photoreduction processes prior to incorporation into the sediments. The relatively negative Δ(199)Hg values in the PRE indicated that photoreduction of Hg is not the primary route for the removal of Hg from the water column. The riverine input of fine particles played an important role in transporting Hg to the PRE sediments. In the deep ocean bed of the SCS, source-related signatures of Hg isotopes may have been altered by natural geochemical processes (e.g., Hg(2+) photoreduction and preferential adsorption processes). Using Hg isotope compositions, we estimate that river deliveries of Hg from industrial and urban sources and natural soils could be the main inputs of Hg to the PRE. However, the use of Hg isotopes as tracers in source attribution could be limited because of the isotope fractionation by natural processes in the SCS.
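
    Source apportionment with stable Hg isotopes, as in the attribution of PRE sediment Hg to industrial/urban inputs versus natural soils, often reduces to an endmember mixing calculation. A two-endmember sketch; the endmember delta-202Hg values below are invented, not the paper's calibrated signatures:

        def source_fraction(delta_sample, delta_a, delta_b):
            """Two-endmember isotope mixing: fraction of source A in the sample,
            f_A = (delta_sample - delta_B) / (delta_A - delta_B)."""
            return (delta_sample - delta_b) / (delta_a - delta_b)

        # Illustrative d202Hg endmembers (permil)
        industrial, natural_soil = -0.9, -2.6
        print(source_fraction(-1.8, industrial, natural_soil))  # ~0.47 industrial fraction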

  9. Geostatistical analysis to identify hydrogeochemical processes in complex aquifers: a case study (Aguadulce unit, Almeria, SE Spain).

    PubMed

    Daniele, Linda; Pulido Bosch, Antonio; Vallejos, Angela; Molina, Luis

    2008-06-01

    The Aguadulce aquifer unit in southeastern Spain is a complex hydrogeological system because of the varied lithology of the aquifer strata and the variability of the processes that can take place within the unit. Factor analysis of the data allowed the number of variables to be reduced to three factors, which were found to be related to such physico-chemical processes as marine intrusion and leaching of saline deposits. Variographic analysis was applied to these factors, culminating in a study of spatial distribution using ordinary kriging. Mapping of the factors allowed rapid differentiation of some of the processes that affect the waters of the Gador carbonate aquifer within the Aguadulce unit, without the need to resort to purely hydrogeochemical techniques. The results indicate the existence of several factors related to salinity: marine intrusion, paleowaters, and/or leaching of marls and evaporitic deposits. The techniques employed are effective, and the results conform to those obtained using hydrogeochemical methods (vertical records of conductivity and temperature, ion ratios, and others). The findings of this study confirm that the application of such analytical methods can provide a useful assessment of factors affecting groundwater composition.
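
    The two-stage pipeline described (factor analysis to condense the hydrochemical variables, then ordinary kriging of the factor scores) can be sketched with standard libraries. This assumes scikit-learn and PyKrige are available; the data, grid, and spherical variogram choice are illustrative:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from pykrige.ok import OrdinaryKriging

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 10, 80), rng.uniform(0, 10, 80)   # well locations (km)
        chem = rng.normal(size=(80, 8))                          # 8 hydrochemical variables

        scores = FactorAnalysis(n_components=3).fit_transform(chem)  # reduce to 3 factors

        ok = OrdinaryKriging(x, y, scores[:, 0], variogram_model="spherical")
        grid = np.linspace(0, 10, 25)
        z, ss = ok.execute("grid", grid, grid)   # kriged map of factor 1 + kriging variance
        print(z.shape)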

  10. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.

  11. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation

    PubMed Central

    Yang, Huan; Meijer, Hil G. E.; Buitenweg, Jan R.; van Gils, Stephan A.

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system. PMID:27994563
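
    The model-selection step used here is the Bayesian Information Criterion, BIC = k*ln(n) - 2*ln(L_hat), where k is the number of parameters, n the number of observations, and L_hat the maximized likelihood; the lower value is preferred. A minimal sketch; the log-likelihood values below are placeholders, not the paper's fits:

        import math

        def bic(log_likelihood, n_params, n_obs):
            """Bayesian Information Criterion: lower is better."""
            return n_params * math.log(n_obs) - 2.0 * log_likelihood

        # Hypothetical fits to 300 yes-no detection responses
        logL_computational, k_computational = -150.2, 6   # six-parameter nociception model
        logL_logistic, k_logistic = -168.9, 2             # conventional logistic model

        print(bic(logL_computational, k_computational, 300))  # smaller: better balance
        print(bic(logL_logistic, k_logistic, 300))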

  12. LipidHunter Identifies Phospholipids by High-Throughput Processing of LC-MS and Shotgun Lipidomics Datasets.

    PubMed

    Ni, Zhixu; Angelidou, Georgia; Lange, Mike; Hoffmann, Ralf; Fedorova, Maria

    2017-09-05

    Lipids are dynamic constituents of biological systems, rapidly responding to any changes in physiological conditions. Thus, there is a large interest in lipid-derived markers for diagnostic and prognostic applications, especially in translational and systems medicine research. As lipid identification remains a bottleneck of modern untargeted lipidomics, we developed LipidHunter, a new open-source software for the high-throughput identification of phospholipids in data acquired by LC-MS and shotgun experiments. LipidHunter mirrors the workflow of manual spectra annotation. Lipid identification is based on MS/MS data analysis in accordance with defined fragmentation rules for each phospholipid (PL) class. The software tool matches product and neutral loss signals obtained by collision-induced dissociation to a user-defined white list of fatty acid residues and PL class-specific fragments. The identified signals are tested against elemental composition and bulk identification provided via LIPID MAPS search. Furthermore, LipidHunter provides information-rich tabular and graphical reports, allowing users to trace back key identification steps and perform data quality control. Using this approach, 202 discrete lipid species were identified in lipid extracts from rat primary cardiomyocytes treated with a peroxynitrite donor. Their relative quantification allowed the monitoring of dynamic reconfiguration of the cellular lipidome in response to mild nitroxidative stress. LipidHunter is available free for download at https://bitbucket.org/SysMedOs/lipidhunter .
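
    The central matching step described above, comparing observed CID fragment m/z values against a white list within a mass tolerance, can be sketched as follows. The fragment names, masses, and 20 ppm tolerance are illustrative, not LipidHunter's actual rule set:

        def match_fragments(observed_mz, candidates, tol_ppm=20.0):
            """Match observed MS/MS peaks to candidate fragment m/z values
            within a ppm tolerance. Returns {fragment_name: observed_mz}."""
            hits = {}
            for name, theo in candidates.items():
                tol = theo * tol_ppm * 1e-6
                for mz in observed_mz:
                    if abs(mz - theo) <= tol:
                        hits[name] = mz
                        break
            return hits

        # Illustrative white list of fatty-acid fragment ions ([FA-H]- m/z)
        white_list = {"FA 16:0": 255.2330, "FA 18:1": 281.2486}
        peaks = [255.2333, 480.31, 281.2490]
        print(match_fragments(peaks, white_list))  # both fatty acids matched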

  13. Novel Dendritic Kinesin Sorting Identified by Different Process Targeting of Two Related Kinesins: KIF21A and KIF21B

    PubMed Central

    Marszalek, Joseph R.; Weiner, Joshua A.; Farlow, Samuel J.; Chun, Jerold; Goldstein, Lawrence S.B.

    1999-01-01

    Neurons use kinesin and dynein microtubule-dependent motor proteins to transport essential cellular components along axonal and dendritic microtubules. In a search for new kinesin-like proteins, we identified two neuronally enriched mouse kinesins that provide insight into a unique intracellular kinesin targeting mechanism in neurons. KIF21A and KIF21B share colinear amino acid similarity to each other, but not to any previously identified kinesins outside of the motor domain. Each protein also contains a domain of seven WD-40 repeats, which may be involved in binding to cargoes. Despite the amino acid sequence similarity between KIF21A and KIF21B, these proteins localize differently to dendrites and axons. KIF21A protein is localized throughout neurons, while KIF21B protein is highly enriched in dendrites. The plus end-directed motor activity of KIF21B and its enrichment in dendrites indicate that models suggesting that minus end-directed motor activity is sufficient for dendrite specific motor localization are inadequate. We suggest that a novel kinesin sorting mechanism is used by neurons to localize KIF21B protein to dendrites since its mRNA is restricted to the cell body. PMID:10225949

  14. Identifying consumer preferences for specific beef flavor characteristics in relation to cattle production and postmortem processing parameters.

    PubMed

    O'Quinn, T G; Woerner, D R; Engle, T E; Chapman, P L; Legako, J F; Brooks, J C; Belk, K E; Tatum, J D

    2016-02-01

    Sensory analysis of ground LL samples representing 12 beef product categories was conducted in 3 different regions of the U.S. to identify flavor preferences of beef consumers. Treatments characterized production-related flavor differences associated with USDA grade, cattle type, finishing diet, growth enhancement, and postmortem aging method. Consumers (N=307) rated cooked samples for 12 flavors and overall flavor desirability. Samples were analyzed to determine fatty acid content. Volatile compounds produced by cooking were extracted and quantified. Overall, consumers preferred beef that rated high for beefy/brothy, buttery/beef fat, and sweet flavors and disliked beef with fishy, livery, gamey, and sour flavors. Flavor attributes of samples higher in intramuscular fat with greater amounts of monounsaturated fatty acids and lesser proportions of saturated, odd-chain, omega-3, and trans fatty acids were preferred by consumers. Of the volatiles identified, diacetyl and acetoin were most closely correlated with desirable ratings for overall flavor and dimethyl sulfide was associated with an undesirable sour flavor. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Process development of a New Haemophilus influenzae type b conjugate vaccine and the use of mathematical modeling to identify process optimization possibilities.

    PubMed

    Hamidi, Ahd; Kreeftenberg, Hans; V D Pol, Leo; Ghimire, Saroj; V D Wielen, Luuk A M; Ottens, Marcel

    2016-05-01

    Vaccination is one of the most successful public health interventions, being a cost-effective tool in preventing deaths among young children. The earliest vaccines were developed following empirical methods, creating vaccines by trial and error. New process development tools, for example mathematical modeling, as well as new regulatory initiatives requiring better understanding of both the product and the process, are being applied to well-characterized biopharmaceuticals (for example recombinant proteins). The vaccine industry still lags behind these industries. A production process for a new Haemophilus influenzae type b (Hib) conjugate vaccine, including related quality control (QC) tests, was developed and transferred to a number of emerging vaccine manufacturers. This contributed to a sustainable global supply of affordable Hib conjugate vaccines, as illustrated by the market launch of the first Hib vaccine based on this technology in 2007 and the concomitant price reduction of Hib vaccines. This paper describes the development approach followed for this Hib conjugate vaccine as well as the mathematical modeling tool applied recently to indicate options for further improvement of the initial Hib process. The strategy followed during the process development of this Hib conjugate vaccine was a targeted and integrated approach based on prior knowledge and experience with similar products, using multi-disciplinary expertise. Mathematical modeling was used to develop a predictive model for the initial Hib process (the 'baseline' model) as well as an 'optimized' model, by proposing a number of process changes which could lead to further reduction in price. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:568-580, 2016.

  16. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process.

    PubMed

    Springelkamp, Henriët; Höhn, René; Mishra, Aniket; Hysi, Pirro G; Khor, Chiea-Chuen; Loomis, Stephanie J; Bailey, Jessica N Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F; Luo, Xiaoyan; Ramdas, Wishal D; Vithana, Eranga; Nongpiur, Monisha E; Montgomery, Grant W; Xu, Liang; Mountain, Jenny E; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C; Sim, Kar-Seng; van Leeuwen, Elisabeth M; Iglesias, Adriana I; Verhoeven, Virginie J M; Hauser, Michael A; Loon, Seng-Chee; Despriet, Dominiek D G; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G; Schillert, Arne; Kang, Jae H; Landers, John; Jonasson, Fridbert; Cree, Angela J; van Koolwijk, Leonieke M E; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Weinreb, Robert N; de Jong, Paulus T V M; Oostra, Ben A; Uitterlinden, André G; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P; Spector, Timothy D; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R; Teo, Yik-Ying; Haines, Jonathan L; Wolfs, Roger C W; Lemij, Hans G; Tai, E-Shyong; Jansonius, Nomdo M; Jonas, Jost B; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C; Klaver, Caroline C W; Craig, Jamie E; Macgregor, Stuart; Mackey, David A; Lotery, Andrew J; Stefansson, Kari; Bergen, Arthur A B; Young, Terri L; Wiggs, Janey L; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R; Hewitt, Alex W; van Duijn, Cornelia M; Hammond, Christopher J

    2014-09-22

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition.

  17. Meta-analysis of genome-wide association studies identifies novel loci that influence cupping and the glaucomatous process

    PubMed Central

    Springelkamp, Henriët.; Höhn, René; Mishra, Aniket; Hysi, Pirro G.; Khor, Chiea-Chuen; Loomis, Stephanie J.; Bailey, Jessica N. Cooke; Gibson, Jane; Thorleifsson, Gudmar; Janssen, Sarah F.; Luo, Xiaoyan; Ramdas, Wishal D.; Vithana, Eranga; Nongpiur, Monisha E.; Montgomery, Grant W.; Xu, Liang; Mountain, Jenny E.; Gharahkhani, Puya; Lu, Yi; Amin, Najaf; Karssen, Lennart C.; Sim, Kar-Seng; van Leeuwen, Elisabeth M.; Iglesias, Adriana I.; Verhoeven, Virginie J. M.; Hauser, Michael A.; Loon, Seng-Chee; Despriet, Dominiek D. G.; Nag, Abhishek; Venturini, Cristina; Sanfilippo, Paul G.; Schillert, Arne; Kang, Jae H.; Landers, John; Jonasson, Fridbert; Cree, Angela J.; van Koolwijk, Leonieke M. E.; Rivadeneira, Fernando; Souzeau, Emmanuelle; Jonsson, Vesteinn; Menon, Geeta; Mitchell, Paul; Wang, Jie Jin; Rochtchina, Elena; Attia, John; Scott, Rodney; Holliday, Elizabeth G.; Wong, Tien-Yin; Baird, Paul N.; Xie, Jing; Inouye, Michael; Viswanathan, Ananth; Sim, Xueling; Weinreb, Robert N.; de Jong, Paulus T. V. M.; Oostra, Ben A.; Uitterlinden, André G.; Hofman, Albert; Ennis, Sarah; Thorsteinsdottir, Unnur; Burdon, Kathryn P.; Allingham, R. Rand; Brilliant, Murray H.; Budenz, Donald L.; Cooke Bailey, Jessica N.; Christen, William G.; Fingert, John; Friedman, David S.; Gaasterland, Douglas; Gaasterland, Terry; Haines, Jonathan L.; Hauser, Michael A.; Kang, Jae Hee; Kraft, Peter; Lee, Richard K.; Lichter, Paul R.; Liu, Yutao; Loomis, Stephanie J.; Moroi, Sayoko E.; Pasquale, Louis R.; Pericak-Vance, Margaret A.; Realini, Anthony; Richards, Julia E.; Schuman, Joel S.; Scott, William K.; Singh, Kuldev; Sit, Arthur J.; Vollrath, Douglas; Weinreb, Robert N.; Wiggs, Janey L.; Wollstein, Gadi; Zack, Donald J.; Zhang, Kang; Donnelly (Chair), Peter; Barroso (Deputy Chair), Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N. A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C. A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew (Chair), Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C. A.; Spector, Timothy D.; Mirshahi, Alireza; Saw, Seang-Mei; Vingerling, Johannes R.; Teo, Yik-Ying; Haines, Jonathan L.; Wolfs, Roger C. W.; Lemij, Hans G.; Tai, E-Shyong; Jansonius, Nomdo M.; Jonas, Jost B.; Cheng, Ching-Yu; Aung, Tin; Viswanathan, Ananth C.; Klaver, Caroline C. W.; Craig, Jamie E.; Macgregor, Stuart; Mackey, David A.; Lotery, Andrew J.; Stefansson, Kari; Bergen, Arthur A. B.; Young, Terri L.; Wiggs, Janey L.; Pfeiffer, Norbert; Wong, Tien-Yin; Pasquale, Louis R.; Hewitt, Alex W.; van Duijn, Cornelia M.; Hammond, Christopher J.

    2014-01-01

    Glaucoma is characterized by irreversible optic nerve degeneration and is the most frequent cause of irreversible blindness worldwide. Here, the International Glaucoma Genetics Consortium conducts a meta-analysis of genome-wide association studies of vertical cup-disc ratio (VCDR), an important disease-related optic nerve parameter. In 21,094 individuals of European ancestry and 6,784 individuals of Asian ancestry, we identify 10 new loci associated with variation in VCDR. In a separate risk-score analysis of five case-control studies, Caucasians in the highest quintile have a 2.5-fold increased risk of primary open-angle glaucoma as compared with those in the lowest quintile. This study has more than doubled the known loci associated with optic disc cupping and will allow greater understanding of mechanisms involved in this common blinding condition. PMID:25241763
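
    The risk-score analysis reported here is, in outline, a weighted sum of risk-allele counts followed by a quintile comparison of disease odds. A schematic sketch with random genotypes and invented effect sizes, not the consortium's data or exact method:

        import numpy as np

        rng = np.random.default_rng(0)
        n_people, n_snps = 5000, 10
        dosages = rng.integers(0, 3, size=(n_people, n_snps))   # 0/1/2 risk alleles per SNP
        betas = rng.normal(0.1, 0.05, n_snps)                   # invented per-SNP effects

        score = dosages @ betas                                 # polygenic risk score
        risk = 1 / (1 + np.exp(-(score - score.mean())))        # toy disease model
        case = rng.random(n_people) < risk

        q = np.quantile(score, [0.2, 0.8])
        low, high = score <= q[0], score >= q[1]
        odds = lambda mask: case[mask].mean() / (1 - case[mask].mean())
        print("top/bottom quintile odds ratio:", odds(high) / odds(low))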

  18. Mapping, Monitoring, and Modeling Geomorphic Processes to Identify Sources of Anthropogenic Sediment Pollution in West Maui, Hawai'i

    NASA Astrophysics Data System (ADS)

    Cerovski-Darriau, C.; Stock, J. D.; Winans, W. R.

    2016-12-01

    Episodic storm runoff in West Maui (Hawai'i) brings plumes of terrestrially-sourced fine sediment to the nearshore ocean environment, degrading coral reef ecosystems. The sediment pollution sources were largely unknown, though suspected to be due to modern human disturbance of the landscape, and initially assumed to be from visibly obvious exposed soil on agricultural fields and unimproved roads. To determine the sediment sources and estimate a sediment budget for the West Maui watersheds, we mapped the geomorphic processes in the field and from DEMs and orthoimagery, monitored erosion rates in the field, and modeled the sediment flux using the mapped processes and corresponding rates. We found the primary source of fine sands, silts and clays to be previously unidentified fill terraces along the stream bed. These terraces, formed during legacy agricultural activity, are the banks along 40-70% of the streams where the channels intersect human-modified landscapes. Monitoring over the last year shows that a few storms erode the fill terraces 10-20 mm annually, contributing up to 100s of tonnes of sediment per catchment. Compared to the average long-term, geologic erosion rate of 0.03 mm/yr, these fill terraces alone increase the suspended sediment flux to the coral reefs by 50-90%. Stakeholders can use our resulting geomorphic process map and sediment budget to inform the location and type of mitigation effort needed to limit terrestrial sediment pollution. We compare our mapping, monitoring, and modeling (M3) approach to NOAA's OpenNSPECT model. OpenNSPECT uses empirical hydrologic and soil erosion models paired with land cover data to compare the spatially distributed sediment yield from different land-use scenarios. We determine the relative effectiveness of calculating a baseline watershed sediment yield from each approach, and the utility of calibrating OpenNSPECT with M3 results to better forecast future sediment yields from land-use or climate change scenarios.

  19. Process Evaluation of Serial Screening Criteria to Identify Injured Patients That Benefit From Brief Intervention: Practical Implications

    PubMed Central

    Field, Craig; Caetano, Raul; Pezzia, Carla

    2010-01-01

    Background The aim of the current study is to evaluate the effectiveness of serial screening methods for the identification of injured patients who are at risk for alcohol problems and most likely to benefit from brief interventions. We hypothesize that blood alcohol concentration (BAC) alone is not sufficient to effectively identify at-risk drinkers in the trauma care setting. Methods During a 2-year period, patients admitted to an urban Level I trauma center for treatment of an injury were screened for alcohol problems. Screening consisted of four serial criteria: (1) clinical indication of acute intoxication, including a positive BAC; (2) self-reported drinking 6 hours before injury; (3) at-risk drinking as defined by the National Institute on Alcohol Abuse and Alcoholism; or (4) responding yes to one or more items on the CAGE within the last year. Results In all, 11,028 patients were seen. Fifty-eight percent were eligible for screening and 90% of eligible patients were screened. Of screened patients, 41% screened positive for an alcohol-related injury. Of patients who did not have a BAC drawn, 39% (n = 935) went on to screen positive using serial screening procedures. Additionally, 36% (n = 339) of patients with a negative BAC went on to screen positive using serial screening procedures. Conclusions This evaluation clearly suggests that BAC alone is not sufficient to identify patients who are most likely to benefit from brief alcohol interventions. Self-reported drinking in conjunction with BAC facilitates the identification of, and intervention with, injured patients with alcohol problems. PMID:18797416
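
    A minimal sketch of the serial screening cascade described above; the field names are hypothetical, and the NIAAA limits and CAGE scoring are reduced to a boolean and a count for brevity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraumaPatient:
    bac_g_dl: Optional[float]       # blood alcohol concentration; None if not drawn
    clinically_intoxicated: bool    # clinical indication of acute intoxication
    drank_within_6h: bool           # self-reported drinking 6 h before injury
    niaaa_at_risk: bool             # exceeds NIAAA at-risk drinking limits
    cage_items: int                 # CAGE items endorsed within the last year

def screens_positive(p: TraumaPatient) -> bool:
    """Apply the four criteria serially; any positive criterion ends the cascade."""
    if p.clinically_intoxicated or (p.bac_g_dl is not None and p.bac_g_dl > 0):
        return True                 # (1) acute intoxication, incl. positive BAC
    if p.drank_within_6h:
        return True                 # (2) drinking shortly before the injury
    if p.niaaa_at_risk:
        return True                 # (3) at-risk drinking per NIAAA limits
    return p.cage_items >= 1        # (4) one or more positive CAGE items

# A patient with a negative BAC can still screen positive downstream.
print(screens_positive(TraumaPatient(0.0, False, False, False, cage_items=2)))
```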

  20. Enhancing the performance of a robust sol-gel-processed p-type delafossite CuFeO2 photocathode for solar water reduction.

    PubMed

    Prévot, Mathieu S; Guijarro, Néstor; Sivula, Kevin

    2015-04-24

    Delafossite CuFeO2 is a promising material for solar hydrogen production, but is limited by poor photocurrent. Strategies are demonstrated herein to improve the performance of CuFeO2 electrodes prepared directly on transparent conductive substrates by using a simple sol-gel technique. Optimizing the delafossite layer thickness and increasing the majority carrier concentration (through the thermal intercalation of oxygen) give insights into the limitations of photogenerated charge extraction and enable performance improvements. In oxygen-saturated electrolyte, (sacrificial) photocurrents (1 sun illumination) up to 1.51 mA cm⁻² at +0.35 V versus a reversible hydrogen electrode (RHE) are observed. Water photoreduction with bare delafossite is limited by poor hydrogen evolution catalysis, but employing methyl viologen as an electron acceptor verifies that photogenerated electrons can be extracted from the conduction band before recombination into mid-gap trap states identified by electrochemical impedance spectroscopy. Through the use of suitable oxide overlayers and a platinum catalyst, sustained solar hydrogen production photocurrents of 0.4 mA cm⁻² at 0 V versus RHE (0.8 mA cm⁻² at -0.2 V) are demonstrated. Importantly, bare CuFeO2 is highly stable at potentials at which photocurrent is generated. No degradation is observed after 40 h under operating conditions in oxygen-saturated electrolyte. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Robustness properties of circadian clock architectures

    PubMed Central

    Stelling, Jörg; Gilles, Ernst Dieter; Doyle, Francis J.

    2004-01-01

    Robustness, a relative insensitivity to perturbations, is a key characteristic of living cells. However, the specific structural characteristics that are responsible for robust performance are not clear, even in genetic circuits of moderate complexity. Formal sensitivity analysis allows the investigation of robustness and fragility properties of mathematical models representing regulatory networks, but it yields only local properties with respect to a particular choice of parameter values. Here, we show that by systematically investigating the parameter space, more global properties linked to network structure can be derived. Our analysis focuses on the genetic oscillator responsible for generating circadian rhythms in Drosophila as a prototypic dynamical cellular system. Analysis of two mathematical models of moderate complexity shows that the tradeoff between robustness and fragility is largely determined by the regulatory structure. Rank-ordered sensitivities, for instance, allow the correct identification of protein phosphorylation as an influential process determining the oscillator's period. Furthermore, sensitivity analysis confirms the theoretical insight that hierarchical control might be important for achieving robustness. The complex feedback structures encountered in vivo, however, do not seem to enhance robustness per se but confer robust precision and adjustability of the clock while avoiding catastrophic failure. PMID:15340155
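
    The rank-ordered sensitivity idea can be illustrated on a generic negative-feedback (Goodwin-type) oscillator rather than the Drosophila models analyzed in the paper. The sketch below perturbs each parameter by 1% and ranks the relative change in oscillation period; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

# Generic three-variable Goodwin oscillator (illustrative parameter values).
p0 = dict(k1=1.0, K=1.0, n=10.0, k2=0.2, k3=1.0, k4=0.2, k5=1.0, k6=0.2)

def rhs(t, s, p):
    x, y, z = s
    dx = p["k1"] / (1.0 + (z / p["K"]) ** p["n"]) - p["k2"] * x
    dy = p["k3"] * x - p["k4"] * y
    dz = p["k5"] * y - p["k6"] * z
    return [dx, dy, dz]

def period(p):
    """Estimate the limit-cycle period from peak spacing after the transient."""
    t = np.linspace(0, 400, 8000)
    sol = solve_ivp(rhs, (0, 400), [0.1, 0.1, 0.1], t_eval=t, args=(p,), rtol=1e-8)
    keep = t > 150
    peaks, _ = find_peaks(sol.y[0][keep])
    return np.mean(np.diff(t[keep][peaks]))

# Local sensitivities: relative period change per 1% parameter change.
T0 = period(p0)
sens = {}
for name in p0:
    q = dict(p0); q[name] *= 1.01
    sens[name] = ((period(q) - T0) / T0) / 0.01
for name, s in sorted(sens.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>2}: dT/T per dp/p = {s:+.2f}")
```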

  2. Evolution, robustness, and the cost of complexity

    NASA Astrophysics Data System (ADS)

    Leclerc, Robert D.

    Evolutionary systems biology is the study of how regulatory networks evolve under the influence of natural selection, mutation, and the environment. It attempts to explain the dynamics, architecture, and variational properties of regulatory networks and how these relate to the origins, evolution, and maintenance of complex and diverse functions. Key questions in evolutionary systems biology ask how robustness evolves, what factors drive its evolution, and what underlying mechanisms give rise to it. In this dissertation, I investigate the evolution of robustness in artificial gene regulatory networks. I show how different conceptions of robustness fit together as pieces of a general notion of robustness, and I show how this relationship implies potential tradeoffs in how robustness can be implemented. I present results which suggest that inherent logistical problems with genetic recombination may help drive the evolution of modularity in the genotype-phenotype map. Finally, I show that robustness implies a parsimonious network structure, one which is sparsely connected and not unnecessarily complex. These results challenge conclusions drawn from many high-profile studies and may offer a broad new perspective on biological systems. Because life must orchestrate its existence on random, nonlinear thermodynamic processes, it will be designed and implemented in the most probable way. Life turns the law of entropy back on itself to root out every inefficiency, every waste, and every surprise.

  3. Biological robustness: paradigms, mechanisms, and systems principles.

    PubMed

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution, with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity, with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness, such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated, yet might be suggestive of reusable building blocks for generating robust behavior.

  5. Developmental study identifies the ages at which the processes involved in the perception of verticality and in postural stability occur.

    PubMed

    Tringali, Margherita; Wiener-Vacher, Sylvette; Bucci, Maria Pia

    2017-01-01

    The aim of this study was to understand the role played by visual information in the development of verticality perception and postural stability in healthy children. The study comprised 66 healthy children from 4.0 to 15.7 years of age. Postural performance was recorded with a TechnoConcept platform. At the same time, the children's perception of subjective visual vertical (SVV) was recorded while they adjusted a vertical fluorescent line, either in the dark or in the presence of perturbing visual stimuli. Two control conditions without an SVV task were also performed by all of the children: static posturographic recording with eyes open and with eyes closed. Postural measurements showed that performance on the tasks correlated with the children's age. Postural stability improved with age until 8-9 years, and SVV performance improved after 10-11 years. After these ages, postural and SVV capabilities did not change until at least 15 years of age. Our findings suggest that the maturation of the cortical and central processes involved in both the perception of verticality and postural stability takes place during childhood. However, maturation occurred later for vertical perception, which could imply delayed maturation of sensory integration processes. ©2016 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  6. Application of surface area measurement for identifying the source of batch-to-batch variation in processability.

    PubMed

    Vippagunta, Radha R; Pan, Changkang; Vakil, Ronak; Meda, Vindhya; Vivilecchia, Richard; Motto, Michael

    2009-01-01

    The primary goal of this study was to evaluate the use of specific surface area as a measurable physical property of materials for understanding batch-to-batch variation in flow behavior. Specific surface area measurements provide information about the nature of the surface making up the solid, which may include defects or void space on the surface. These void spaces are often present in crystalline material due to varying degrees of disorder and can be considered amorphous regions. In the present work, the specific surface area for 10 batches of the same active pharmaceutical ingredient (compound 1) with varying quantities of amorphous content was investigated. Some of these batches showed different flow behavior when processed using roller compaction. The surface area value was found to increase in the presence of low amorphous content, and decrease with high amorphous content, as compared to crystalline material. To complement the information obtained from the above study, physical blends of another crystalline active pharmaceutical ingredient (compound 2) and its amorphous form were prepared in known proportions. A similar trend in specific surface area was found. Tablets prepared from a known formulation with varying amorphous content of the active ingredient (compound 3) also exhibited the same trend. A hypothesis to explain the correlation between amorphous content and specific surface area is proposed. The results strongly support the use of specific surface area as a measurable tool for investigating the source of batch-to-batch variation in processability.

  7. Metabolic engineering of industrial platform microorganisms for biorefinery applications: optimization of substrate spectrum and process robustness by rational and evolutive strategies.

    PubMed

    Buschke, Nele; Schäfer, Rudolf; Becker, Judith; Wittmann, Christoph

    2013-05-01

    Bio-based production promises a sustainable route to myriad chemicals, materials and fuels. With regard to eco-efficiency, its future success strongly depends on a next level of bio-processes using raw materials beyond glucose. Such renewables, i.e., polymers, complex substrate mixtures and diluted waste streams, often cannot be metabolized naturally by the producing organisms. This particularly holds for well-known microorganisms from the traditional sugar-based biotechnology, including Escherichia coli, Corynebacterium glutamicum and Saccharomyces cerevisiae, which have been engineered successfully to produce a broad range of products from glucose. In order to make full use of their production potential within the bio-refinery value chain, they have to be adapted to the various feedstocks of interest. This review focuses on the strategies to be applied for this purpose, which combine rational and evolutive approaches. Here, the three industrial platform microorganisms E. coli, C. glutamicum and S. cerevisiae are highlighted due to their particular importance. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Robust calibration of a global aerosol model

    NASA Astrophysics Data System (ADS)

    Lee, L.; Carslaw, K. S.; Pringle, K. J.; Reddington, C.

    2013-12-01

    Comparison of models and observations is vital for evaluating how well computer models can simulate real-world processes. However, many current methods are lacking in their assessment of model uncertainty, which raises questions about the robustness of the observationally constrained model. In most cases, models are evaluated against observations using a single baseline simulation considered to represent the model's best estimate. The model is then improved in some way so that its comparison to observations is improved. Continuous adjustments of this kind may result in a model that compares better to observations, but there may be many compensating features which make prediction with the newly calibrated model difficult to justify. There may also be some model outputs whose comparison to observations becomes worse in some regions/seasons as others improve. In such cases calibration cannot be considered robust. We present details of the calibration of a global aerosol model, GLOMAP, in which we consider not just a single model setup but a perturbed physics ensemble with 28 uncertain parameters. We first quantify the uncertainty in various model outputs (cloud condensation nuclei, CCN; condensation nuclei, CN) for the year 2008 and use statistical emulation to identify which of the 28 parameters contribute most to this uncertainty. We then compare the emulated model simulations in the entire parametric uncertainty space to observations. Regions where the entire ensemble lies outside the error of the observations indicate structural model error or gaps in current knowledge, which allows us to target future research areas. Where there is some agreement with the observations, we use the information on the sources of the model uncertainty to identify geographical regions in which the important parameters are similar. Identification of regional calibration clusters helps us to use information from observation-rich regions to calibrate regions with sparse observations and allow us to make recommendations for
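
    A toy sketch of the emulation step: train a Gaussian-process emulator on a modest design of "model runs", then probe it cheaply across the parameter space. The four-parameter function below stands in for an expensive GLOMAP run, and the one-at-a-time variance probe is a crude stand-in for the formal variance-based sensitivity measures a study like this would use.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# Stand-in for an expensive aerosol-model output (e.g. CCN at one location),
# with 4 uncertain parameters scaled to [0, 1].
def expensive_model(theta):
    return 3.0 * theta[..., 0] + np.sin(4.0 * theta[..., 1]) + 0.1 * theta[..., 2]

X = rng.random((60, 4))              # design: 60 training runs
y = expensive_model(X)

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Crude one-at-a-time probe: vary each parameter across its range while
# holding the others at mid-range, and compare emulated output variances.
base = np.full((200, 4), 0.5)
for j in range(4):
    Z = base.copy()
    Z[:, j] = np.linspace(0.0, 1.0, 200)
    print(f"parameter {j}: emulated output variance {emulator.predict(Z).var():.3f}")
```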

  9. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence.

    PubMed

    McLaughlin, Katie A; Garrad, Megan C; Somerville, Leah H

    2015-12-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds.

  10. Atom Tunneling in the Hydroxylation Process of Taurine/α-Ketoglutarate Dioxygenase Identified by Quantum Mechanics/Molecular Mechanics Simulations.

    PubMed

    Álvarez-Barcia, Sonia; Kästner, Johannes

    2017-06-01

    Taurine/α-ketoglutarate dioxygenase is one of the most studied α-ketoglutarate-dependent dioxygenases (αKGDs), involved in several biotechnological applications. We investigated the key step in the catalytic cycle of the αKGDs, the hydrogen transfer process, by a quantum mechanics/molecular mechanics approach (B3LYP/CHARMM22). Analysis of the charge and spin densities during the reaction demonstrates that a concerted mechanism takes place, where the H atom transfer happens simultaneously with the electron transfer from taurine to the Fe=O cofactor. We found that quantum tunneling of the hydrogen atom increases the rate constant by a factor of 40 at 5 °C. As a consequence, a very high kinetic isotope effect, close to 60, is obtained, which is consistent with the experimental value.
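
    The role of the tunneling factor can be seen in a transition-state-theory estimate: the transmission coefficient κ simply multiplies the classical Eyring rate. The factor of 40 is the paper's value; the activation free energy below is an assumed illustrative number, not one from the study.

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
h  = 6.62607015e-34   # Planck constant, J s
R  = 8.314462618      # gas constant, J/(mol K)

T = 278.15            # 5 degrees C
dG_act = 70e3         # assumed activation free energy, J/mol (illustrative)
kappa = 40.0          # tunneling transmission factor reported in the paper

k_classical = (kB * T / h) * math.exp(-dG_act / (R * T))  # Eyring TST rate
print(f"classical TST rate:       {k_classical:.3e} 1/s")
print(f"tunneling-corrected rate: {kappa * k_classical:.3e} 1/s")
```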

  12. Comparing dependent robust correlations.

    PubMed

    Wilcox, Rand R

    2016-11-01

    Let r1 and r2 be two dependent estimates of Pearson's correlation. There is a substantial literature on testing H0: ρ1 = ρ2, the hypothesis that the population correlation coefficients are equal. However, it is well known that Pearson's correlation is not robust. Even a single outlier can have a substantial impact on Pearson's correlation, resulting in a misleading understanding of the strength of the association among the bulk of the points. A way of mitigating this concern is to use a correlation coefficient that guards against outliers; many such coefficients have been proposed. But apparently there are no results on how to compare dependent robust correlation coefficients when there is heteroscedasticity. Extant results suggest that a basic percentile bootstrap will perform reasonably well. This paper reports simulation results indicating the extent to which this is true when using Spearman's rho, a Winsorized correlation, or a skipped correlation.
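
    A minimal sketch of the percentile-bootstrap comparison for overlapping dependent correlations, here with Spearman's rho; the variable names and simulated data are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def compare_dependent_corrs(x, y, z, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap for H0: rho(x, y) == rho(x, z) with Spearman's rho.
    Rows are resampled jointly, which preserves the dependence between the
    two correlations and any heteroscedasticity in the data."""
    rng = np.random.default_rng(seed)
    n = len(x)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[b] = spearmanr(x[idx], y[idx])[0] - spearmanr(x[idx], z[idx])[0]
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi, not (lo <= 0.0 <= hi)   # reject H0 if 0 lies outside the CI

# Example with a genuine difference in association strength.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(size=200)
z = 0.2 * x + rng.normal(size=200)
print(compare_dependent_corrs(x, y, z))
```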

  13. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right-censored outcomes using inverse-probability-of-censoring weighted (IPCW) estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd.
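
    As background for the doubly robust construction, the sketch below implements the IPCW building block the paper extends: only uncensored subjects contribute to the loss, upweighted by the inverse of a Kaplan-Meier estimate of the censoring survival function. It is simplified (ties ignored, G evaluated at T_i rather than just before it), and the squared-error loss and all data are illustrative.

```python
import numpy as np

def censoring_km(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t);
    a censoring event is event == 0."""
    order = np.argsort(time)
    t, cens = time[order], (event[order] == 0)
    at_risk = len(t) - np.arange(len(t))              # n, n-1, ..., 1
    factors = np.where(cens, (at_risk - 1) / at_risk, 1.0)
    G = np.cumprod(factors)
    def Gfun(s):
        i = np.searchsorted(t, s, side="right") - 1
        return G[i] if i >= 0 else 1.0
    return Gfun

def ipcw_squared_error(pred, time, event, Gfun):
    """IPCW loss: only uncensored subjects contribute, weighted by 1/G(T_i)."""
    w = np.array([e / max(Gfun(s), 1e-8) for s, e in zip(time, event)])
    return np.mean(w * (time - pred) ** 2)

# Illustrative right-censored data: event times T, censoring times C.
rng = np.random.default_rng(7)
T, C = rng.exponential(5.0, 300), rng.exponential(8.0, 300)
time, event = np.minimum(T, C), (T <= C).astype(int)
G = censoring_km(time, event)
print(ipcw_squared_error(np.full(300, 5.0), time, event, G))
```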

  14. Using empirical models of species colonization under multiple threatening processes to identify complementary threat-mitigation strategies.

    PubMed

    Tulloch, Ayesha I T; Mortelliti, Alessio; Kay, Geoffrey M; Florance, Daniel; Lindenmayer, David

    2016-08-01

    Approaches to prioritize conservation actions are gaining popularity. However, limited empirical evidence exists on which species might benefit most from threat mitigation and on what combination of threats, if mitigated simultaneously, would result in the best outcomes for biodiversity. We devised a way to prioritize threat mitigation at a regional scale with empirical evidence based on predicted changes to population dynamics, information that is lacking in most threat-management prioritization frameworks that rely on expert elicitation. We used dynamic occupancy models to investigate the effects of multiple threats (tree cover, grazing, and the presence of a hyperaggressive competitor, the Noisy Miner (Manorina melanocephala)) on bird-population dynamics in an endangered woodland community in southeastern Australia. The 3 threatening processes had different effects on different species. We used predicted patch-colonization probabilities to estimate the benefit to each species of removing one or more threats. We then determined the complementary set of threat-mitigation strategies that maximized colonization of all species while ensuring that redundant actions with little benefit were avoided. The single action that resulted in the highest colonization was increasing tree cover, which increased patch colonization by 5% and 11% on average across all species and for declining species, respectively. Combining Noisy Miner control with increasing tree cover increased species colonization by 10% and 19% on average for all species and for declining species, respectively, and was a higher priority than changing grazing regimes. Guidance for prioritizing threat mitigation is critical in the face of cumulative threatening processes. By incorporating population dynamics in the prioritization of threat management, our approach helps ensure funding is not wasted on ineffective management programs that target the wrong threats or species.
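
    The complementarity logic can be sketched as a greedy selection in which each species is credited with the best action chosen so far, so an action is added only if it delivers a real marginal gain. The action names echo the abstract, but all the numbers are illustrative, not estimates from the study.

```python
import numpy as np

# Hypothetical per-species colonization-probability gains under each single
# mitigation action (values are illustrative, not estimates from the study).
actions = {
    "increase_tree_cover": np.array([0.11, 0.02, 0.05, 0.07]),
    "control_noisy_miner": np.array([0.01, 0.09, 0.06, 0.00]),
    "change_grazing":      np.array([0.02, 0.01, 0.01, 0.02]),
}

def complementary_set(actions, min_marginal=0.005):
    """Greedy complementarity: each species is credited with the best action
    chosen so far; an action is added only if it raises the across-species
    mean benefit by more than min_marginal, so redundant actions are skipped."""
    n_species = len(next(iter(actions.values())))
    achieved = np.zeros(n_species)
    chosen, pool = [], dict(actions)
    while pool:
        marginals = {name: np.maximum(achieved, gain).mean() - achieved.mean()
                     for name, gain in pool.items()}
        best = max(marginals, key=marginals.get)
        if marginals[best] <= min_marginal:
            break
        chosen.append(best)
        achieved = np.maximum(achieved, pool.pop(best))
    return chosen, achieved.mean()

# With these numbers, tree cover is chosen first, Noisy Miner control adds a
# complementary gain, and grazing change is dropped as redundant.
print(complementary_set(actions))
```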

  15. Comparative Transcriptional Analysis of Loquat Fruit Identifies Major Signal Networks Involved in Fruit Development and Ripening Process.

    PubMed

    Song, Huwei; Zhao, Xiangxiang; Hu, Weicheng; Wang, Xinfeng; Shen, Ting; Yang, Liming

    2016-11-04

    Loquat (Eriobotrya japonica Lindl.) is an important non-climacteric fruit rich in essential nutrients such as minerals and carotenoids. During fruit development and ripening, thousands of differentially expressed genes (DEGs) from various metabolic pathways cause a series of physiological and biochemical changes. To better understand the underlying mechanism of fruit development, Solexa/Illumina RNA-seq high-throughput sequencing was used to evaluate the global changes in gene transcription levels. A total of 51,610,234 high-quality reads from ten runs of fruit development were sequenced and assembled into 48,838 unigenes. Among 3256 DEGs, 2304 unigenes could be annotated to the Gene Ontology database. These DEGs were distributed across 119 pathways described in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. A large number of DEGs were involved in carbohydrate metabolism, hormone signaling, and cell-wall degradation. Real-time reverse transcription PCR (qRT-PCR) analyses revealed that several genes related to cell expansion, auxin signaling, and ethylene response were differentially expressed during fruit development. Members of several transcription factor families were also identified. There were 952 DEGs considered novel genes, with no annotation in any database. These unigenes will serve as an invaluable genetic resource for loquat molecular breeding and postharvest storage.
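
    A minimal sketch of the kind of DEG call that underlies counts like the 3256 reported here: a fold-change cutoff combined with Benjamini-Hochberg FDR control. The thresholds and synthetic p-values are illustrative; the study's actual pipeline is not specified at this level of detail.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure: boolean mask of rejected hypotheses."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True            # reject the k smallest p-values
    return mask

# Toy DEG call: |log2 fold change| >= 1 and BH-adjusted significance.
rng = np.random.default_rng(3)
log2fc = rng.normal(0.0, 1.5, 1000)
pvals = rng.uniform(0.0, 1.0, 1000) ** 2   # synthetic enrichment of small p-values
deg = benjamini_hochberg(pvals) & (np.abs(log2fc) >= 1.0)
print(f"{deg.sum()} putative DEGs")
```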
