Nonenzymatic Wearable Sensor for Electrochemical Analysis of Perspiration Glucose.
Zhu, Xiaofei; Ju, Yinhui; Chen, Jian; Liu, Deye; Liu, Hong
2018-05-25
We report a nonenzymatic wearable sensor for electrochemical analysis of perspiration glucose. Multipotential steps are applied to a Au electrode: a strongly negative pretreatment potential step that reduces protons and thereby produces a localized alkaline condition, a moderate potential step for electrocatalytic oxidation of glucose under this alkaline condition, and a positive potential step that cleans and reactivates the electrode surface for the next detection. Fluorocarbon-based materials were coated on the Au electrode to improve the selectivity and robustness of the sensor. A fully integrated wristband was developed for continuous real-time monitoring of perspiration glucose during physical activities, uploading the test results to a smartphone app via Bluetooth.
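To make the detection scheme concrete, the following is a minimal sketch of such a three-step potential program; all potential values and durations are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

def potential_program(e_pretreat=-1.2, e_detect=0.2, e_clean=1.0,
                      t_pretreat=2.0, t_detect=5.0, t_clean=1.0, dt=0.01):
    """Return (time, potential) arrays for one detection cycle:
    negative pretreatment -> moderate detection -> positive cleaning.
    All levels/durations here are placeholders, not the published values."""
    segments = [(e_pretreat, t_pretreat),   # proton reduction, local alkalinity
                (e_detect, t_detect),       # glucose electro-oxidation
                (e_clean, t_clean)]         # surface cleaning/reactivation
    t, e, t0 = [], [], 0.0
    for level, dur in segments:
        n = int(dur / dt)
        t.extend(t0 + dt * np.arange(n))
        e.extend([level] * n)
        t0 += dur
    return np.array(t), np.array(e)

time, potential = potential_program()
```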
Quantization of charged fields in the presence of critical potential steps
NASA Astrophysics Data System (ADS)
Gavrilov, S. P.; Gitman, D. M.
2016-02-01
QED with strong external backgrounds that can create particles from the vacuum is well developed for so-called t-electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at time infinity. Examples include time-independent nonuniform electric fields that are concentrated in restricted space areas. The latter backgrounds represent a kind of spatial x-electric potential step for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in t-electric potential steps are not directly applicable to x-electric potential steps, and their generalization to x-electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with an x-electric potential step and have found in- and out-creation and annihilation operators that allow one to have a particle interpretation of the physical system under consideration. To identify the in- and out-operators we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x-electric potential step, with a subsequent QFT analysis of the correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as scattering, reflection, and electron-positron pair creation, without radiative corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in- and out-particles and photons. These diagrams formally have the usual form but contain special propagators. Expressions for these propagators in terms of in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x-electric potential steps, namely, the Sauter potential and the Klein step.
Moran, Tim P; Schroder, Hans S; Kneip, Chelsea; Moser, Jason S
2017-01-01
Meta-analyses are regularly used to quantitatively integrate the findings of a field, assess the consistency of an effect, and make decisions based on extant research. The current article presents an overview and step-by-step tutorial of meta-analysis aimed at psychophysiological researchers. We also describe best practices and steps that researchers can take to facilitate future meta-analysis in their sub-discipline. Lastly, we illustrate each of the steps by presenting a novel meta-analysis on the relationship between depression and action-monitoring event-related potentials: the error-related negativity (ERN) and the feedback negativity (FN). This meta-analysis found that the literature on depression and the ERN is contaminated by publication bias. With respect to the FN, the meta-analysis found that depression does predict the magnitude of the FN; however, this effect depended on the type of task used in each study. Copyright © 2016 Elsevier B.V. All rights reserved.
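For readers who want to reproduce the core computation of a meta-analysis (pooling per-study effects), here is a minimal random-effects summary using the DerSimonian-Laird estimator; the effect sizes and variances are invented for illustration, and the article's own modeling choices may differ.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects summary of per-study effect sizes."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                               # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)         # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = 1.0 / (variances + tau2)
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se, tau2

# e.g., hypothetical Fisher-z correlations between depression and FN amplitude
print(dersimonian_laird([0.20, 0.35, 0.10], [0.010, 0.020, 0.015]))
```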
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability; in general, it demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in this step, as GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be carried out several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps can therefore determine the extent of the evaluation: low feasibility could lead to choosing not to evaluate uncertainty or to limiting the number of scenario comparisons. Several computer models have been developed over time to evaluate flood risk, and GIS software is widely used to carry out flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" repeating the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which can be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient in GIS software and in flood damage analysis (itself a multidisciplinary field). Great effort is necessary to correctly evaluate flood damages, and updating and improving the evaluation over time becomes a difficult task. Automating this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automating the second and third steps of flood damage evaluations; and (2) analyse the resulting potential gains in terms of time and expertise needed in the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily carried out.
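As a toy illustration of the second and third steps (combining hazard with vulnerability, then applying a damage function), the following sketch uses plain NumPy arrays in place of GIS layers; the grids and the depth-damage curve are hypothetical.

```python
import numpy as np

# hypothetical 3x3 water-depth grid (m) and exposed-value grid (EUR)
depth = np.array([[0.0, 0.5, 1.2],
                  [0.3, 0.8, 2.0],
                  [0.0, 0.0, 0.4]])
value = np.full_like(depth, 100_000.0)

def damage_fraction(d):
    """Piecewise-linear depth-damage function (illustrative shape only)."""
    return np.clip(d / 3.0, 0.0, 1.0)   # 100% damage at depths >= 3 m

# step 2: overlay hazard and vulnerability; step 3: apply damage function
potential_damage = damage_fraction(depth) * value
print(f"total potential damage: {potential_damage.sum():,.0f} EUR")
```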
NASA Astrophysics Data System (ADS)
Sokolović, I.; Mali, P.; Odavić, J.; Radošević, S.; Medvedeva, S. Yu.; Botha, A. E.; Shukrinov, Yu. M.; Tekić, J.
2017-08-01
The devil's staircase structure arising from the complete mode locking of an entirely nonchaotic system, the overdamped dc+ac driven Frenkel-Kontorova model with deformable substrate potential, was observed. Even though no chaos was found, a hierarchical ordering of the Shapiro steps was made possible through the use of a previously introduced continued fraction formula. The absence of chaos, deduced here from Lyapunov exponent analyses, can be attributed to the overdamped character and the Middleton no-passing rule. A comparative analysis of a one-dimensional stack of Josephson junctions confirmed the disappearance of chaos with increasing dissipation. Other common dynamic features were also identified through this comparison. A detailed analysis of the amplitude dependence of the Shapiro steps revealed that only for the case of a purely sinusoidal substrate potential did the relative sizes of the steps follow a Farey sequence. For nonsinusoidal (deformed) potentials, the symmetry of the Stern-Brocot tree, depicting all members of a particular Farey sequence, was increasingly broken, with certain steps being more prominent and their relative sizes not following the Farey rule.
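For reference, the Farey sequence against which the step sizes are checked can be generated with the standard neighbour recurrence; the mediant remark in the comment reflects the usual Farey-rule reading and is not taken from the paper.

```python
from fractions import Fraction

def farey(n):
    """All fractions p/q in [0, 1] with q <= n, in increasing order."""
    seq, (a, b, c, d) = [Fraction(0, 1)], (0, 1, 1, n)
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

# Under the Farey rule for a sinusoidal potential, the largest step between
# neighbouring steps p1/q1 and p2/q2 sits at the mediant (p1+p2)/(q1+q2).
print(farey(5))   # 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
```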
A New Approach to Aircraft Robust Performance Analysis
NASA Technical Reports Server (NTRS)
Gregory, Irene M.; Tierno, Jorge E.
2004-01-01
A recently developed algorithm for nonlinear system performance analysis has been applied to an F-16 aircraft to begin evaluating the suitability of the method for aerospace problems. The algorithm has the potential to be much more efficient than current methods of performance analysis for aircraft. This paper is the initial step in evaluating this potential.
A methodology for event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous one; the methodology is not strictly linear, however, but a cyclic, iterative progression toward knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Araki, Yoshihiko; Nonaka, Daisuke; Hamamura, Kensuke; Yanagida, Mitsuaki; Ishikawa, Hitoshi; Banzai, Michio; Maruyama, Mayuko; Endo, Shuichiro; Tajima, Atsushi; Lee, Lyang-Ja; Nojima, Michio; Takamori, Kenji; Yoshida, Koyo; Takeda, Satoru; Tanaka, Kenji
2013-10-01
To date, numerous studies have searched for candidate molecules or clinical examination methods as potential biomarkers for monitoring intractable diseases, such as carcinomas. Evidence accumulated over the past decade shows that many proteolytic peptides appear in human humoral fluids, including peripheral blood, in association with an individual's health condition. Although analysis of the whole peptide complement (the 'peptidome') using mass spectrometry is thought to be one of the most powerful and promising experimental approaches, it has failed to identify biomarkers in clinical blood samples, presumably due to methodological limitations. In general, commonly used techniques for proteomic analysis of blood require the removal of large amounts of serum/plasma proteins prior to mass spectrometry, and this step appears to result in important biomarkers being overlooked during the analytical process. Here, we provide a brief overview of a new quantitative peptidomic analysis based on a one-step direct-transfer technology that does not deplete major blood proteins. Using this technology, we report experimental data on serum peptidomic analysis for patients with pregnancy-induced hypertension as a clinical model. In addition, we discuss the potential utility of this approach for monitoring pathophysiological status in female reproductive system disorders in general. © 2013 The Authors. Journal of Obstetrics and Gynaecology Research © 2013 Japan Society of Obstetrics and Gynecology.
Fluorescence analysis of ubiquinone and its application in quality control of medical supplies
NASA Astrophysics Data System (ADS)
Timofeeva, Elvira O.; Gorbunova, Elena V.; Chertov, Aleksandr N.
2017-02-01
The presence of antioxidant imbalances, such as a disturbed redox potential, in the human body is an important question for modern clinical diagnostics. Applying fluorescence analysis to the optical diagnostics of ubiquinone, an antioxidant widely distributed in the human body, is one step toward developing a device for clinical diagnostics of redox potential. Fluorescence was recorded with a spectrometer using a UV irradiation source with narrow bands (maxima at 287 and 330 nm) as the excitation radiation. Ubiquinone concentrations from 0.25 to 2.5 mmol/l were used in the samples investigated. The recorded data were processed using correlation analysis and a differential analytical technique. The fourth derivative of the fluorescence spectrum provided the basis for a multicomponent analysis of the solutions. As a clinical diagnostic technique, fluorescence analysis with a processing method based on differential spectrophotometry is a step toward redox potential calculation, and toward quality control in pharmacy for better health care.
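A minimal sketch of the fourth-derivative processing step, using Savitzky-Golay smoothing-differentiation on a synthetic two-band spectrum; the window length and polynomial order are assumptions that would need tuning to real data.

```python
import numpy as np
from scipy.signal import savgol_filter

# hypothetical fluorescence spectrum: two overlapping Gaussian bands
wavelength = np.linspace(300, 500, 1000)
spectrum = (np.exp(-((wavelength - 380) / 15) ** 2)
            + 0.6 * np.exp(-((wavelength - 400) / 12) ** 2))

# fourth derivative via Savitzky-Golay smoothing-differentiation;
# the sharpened negative lobes help resolve the overlapping components
d4 = savgol_filter(spectrum, window_length=51, polyorder=6, deriv=4,
                   delta=wavelength[1] - wavelength[0])
```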
Shale characterization on Barito field, Southeast Kalimantan for shale hydrocarbon exploration
NASA Astrophysics Data System (ADS)
Sumotarto, T. A.; Haris, A.; Riyanto, A.; Usman, A.
2017-07-01
Exploration and exploitation in Indonesia are still focused on conventional hydrocarbon resources rather than unconventional resources such as shale gas. The Tanjung Formation is a source rock of the Barito Basin, located in South Kalimantan, with potential as a shale hydrocarbon play. In this research, integrated methods using geochemical analysis, mineralogy, petrophysical analysis and seismic interpretation have been applied to explore the shale hydrocarbon potential of the Tanjung Formation in the Barito Field. The first step is geochemical and mineralogical analysis of the shale rock samples. Our analysis shows that the organic richness ranges from 1.26 to 5.98 wt.% (good to excellent), with an early-mature window at a depth of 2170 m. The brittleness index averages 0.44-0.56 (less brittle), and the kerogen is classified as type II/III, which potentially produces oil and gas. The second step is a petrophysical analysis, which includes continuous calculation of Total Organic Carbon (TOC) and brittleness index; the results were validated against laboratory measurements, with good correlation. In addition, seismic interpretation based on inverted acoustic impedance is applied to map the distribution of shale hydrocarbon potential. Our interpretation shows that the shale hydrocarbon potential is localized in the eastern and southeastern parts of the study area.
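A mineralogy-based brittleness index of the kind reported above is often computed as a quartz-fraction ratio (Jarvie-style); this sketch assumes that convention and hypothetical XRD fractions, since the paper's exact formula is not given here.

```python
def brittleness_index(quartz, carbonate, clay, toc=0.0):
    """Mineralogy-based brittleness as a weight-fraction ratio.
    This is one common (Jarvie-style) convention, not necessarily
    the formula used in the study above."""
    return quartz / (quartz + carbonate + clay + toc)

# hypothetical XRD weight fractions for a Tanjung Formation shale sample
print(round(brittleness_index(quartz=0.45, carbonate=0.10,
                              clay=0.42, toc=0.03), 2))
```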
Epicenter location by analysis of interictal spikes
NASA Technical Reports Server (NTRS)
Hand, C.
2001-01-01
The MEG recording is a quick and painless process that requires no surgery. This approach has the potential to save time, reduce patient discomfort, and eliminate a painful and potentially dangerous surgical step in the treatment procedure.
NASA Astrophysics Data System (ADS)
Yusufaly, Tahir; Olson, Wilma; Li, Yun
2014-03-01
Van der Waals density functional theory is integrated with analysis of a non-redundant set of protein-DNA crystal structures from the Nucleic Acid Database to study the stacking energetics of CG:CG base-pair steps, specifically the role of cytosine 5-methylation. Principal component analysis of the steps reveals the dominant collective motions to correspond to a tensile ``opening'' mode and two shear ``sliding'' and ``tearing'' modes in the orthogonal plane. The stacking interactions of the methyl groups are observed to globally inhibit CG:CG step overtwisting while simultaneously softening the modes locally via potential energy modulations that create metastable states. The results have implications for the epigenetic control of DNA mechanics.
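A principal component analysis of the kind described, applied to the six standard base-pair step parameters, reduces to an eigendecomposition of their covariance matrix; the sketch below substitutes random data for the curated crystal-structure set.

```python
import numpy as np

# rows: CG:CG steps pooled from crystal structures; columns: the six
# standard base-pair step parameters (shift, slide, rise, tilt, roll, twist).
# Random numbers stand in for the curated structural dataset.
rng = np.random.default_rng(0)
steps = rng.normal(size=(200, 6))

centered = steps - steps.mean(axis=0)         # center each parameter
cov = np.cov(centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
order = np.argsort(eigval)[::-1]
explained = eigval[order] / eigval.sum()      # variance per collective mode
modes = eigvec[:, order]                      # columns = principal modes
print(np.round(explained[:3], 3))             # cf. opening/sliding/tearing modes
```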
Handsaker, J C; Brown, S J; Bowling, F L; Marple-Horvat, D E; Boulton, A J M; Reeves, N D
2016-05-01
To examine the stepping accuracy of people with diabetes and diabetic peripheral neuropathy. Participants were 14 patients with diabetic peripheral neuropathy (DPN), 12 patients with diabetes but no neuropathy (D) and 10 healthy non-diabetic control participants (C). Accuracy of stepping was measured whilst the participants walked along a walkway consisting of 18 stepping targets. Preliminary data on visual gaze characteristics were also captured in a subset of participants (diabetic peripheral neuropathy group: n = 4; diabetes-alone group: n = 4; and control group: n = 4) during the same task. Patients in the diabetic peripheral neuropathy group and patients in the diabetes-alone group were significantly less accurate at stepping on targets than were control subjects (P < 0.05). Preliminary visual gaze analysis identified that patients with diabetic peripheral neuropathy were slower to look between targets, resulting in less time being spent looking at a target before foot-target contact. Impaired motor control is theorized to be a major factor underlying the changes in stepping accuracy, and potentially altered visual gaze behaviour may also play a role. Reduced stepping accuracy may indicate a decreased ability to control the placement of the lower limbs, leading to patients with neuropathy potentially being less able to avoid observed obstacles during walking. © 2015 Diabetes UK.
Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.
Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng
2018-04-15
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
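To make the 2-step structure concrete, here is a minimal sketch of ratio-of-mediator-probability weighting with a binary mediator on simulated data; the models, variables, and weight formula shown are a simplified textbook version, not the authors' exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: fit mediator models given treatment arm and covariates x.
# Step 2: weight treated units by P(M|T=0,x)/P(M|T=1,x) to estimate the
# counterfactual outcome E[Y(1, M(0))]. Simulated data, binary mediator.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))
t = rng.integers(0, 2, size=n)
m = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * t))))

m_given_t1 = LogisticRegression().fit(x[t == 1], m[t == 1])
m_given_t0 = LogisticRegression().fit(x[t == 0], m[t == 0])

xt, mt = x[t == 1], m[t == 1]
p1 = m_given_t1.predict_proba(xt)[:, 1]
p0 = m_given_t0.predict_proba(xt)[:, 1]
w_rmpw = np.where(mt == 1, p0 / p1, (1 - p0) / (1 - p1))
# Naive SEs of weighted means ignore the Step-1 uncertainty baked into
# w_rmpw; correcting for it is the point of the stacked-score approach.
```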
Multidisciplinary Analysis and Optimization Generation 1 and Next Steps
NASA Technical Reports Server (NTRS)
Naiman, Cynthia Gutierrez
2008-01-01
The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: the "Requirements Definition" milestone (1/31/08); the "GEN 1 Integrated Multi-disciplinary Toolset" Annual Performance Goal (6/30/08); and the "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" milestone (9/30/08). Details of all three milestones are explained, including available documentation, potential partner collaborations, and next steps in FY09.
Topographic ERP analyses: a step-by-step tutorial review.
Murray, Micah M; Brunet, Denis; Michel, Christoph M
2008-06-01
In this tutorial review, we detail both the rationale for as well as the implementation of a set of analyses of surface-recorded event-related potentials (ERPs) that uses the reference-free spatial (i.e. topographic) information available from high-density electrode montages to render statistical information concerning modulations in response strength, latency, and topography both between and within experimental conditions. In these and other ways these topographic analysis methods allow the experimenter to glean additional information and neurophysiologic interpretability beyond what is available from canonical waveform analyses. In this tutorial we present the example of somatosensory evoked potentials (SEPs) in response to stimulation of each hand to illustrate these points. For each step of these analyses, we provide the reader with both a conceptual and mathematical description of how the analysis is carried out, what it yields, and how to interpret its statistical outcome. We show that these topographic analysis methods are intuitive and easy-to-use approaches that can remove much of the guesswork often confronting ERP researchers and also assist in identifying the information contained within high-density ERP datasets.
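Two of the reference-free topographic measures this family of analyses relies on, global field power and topographic dissimilarity, can be computed in a few lines; this sketch assumes average-referenced data arranged as time x electrodes.

```python
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation across electrodes
    at each time point; v has shape (n_timepoints, n_electrodes)."""
    v = v - v.mean(axis=1, keepdims=True)      # re-reference to average
    return np.sqrt((v ** 2).mean(axis=1))

def dissimilarity(map1, map2):
    """Topographic dissimilarity between two strength-normalized maps;
    0 = identical topographies, independent of response strength."""
    def norm(v):
        v = v - v.mean()
        return v / np.sqrt((v ** 2).mean())
    return np.sqrt(((norm(map1) - norm(map2)) ** 2).mean())
```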
NASA Technical Reports Server (NTRS)
Liu, A. F.
1974-01-01
A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.
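As an illustration of the fracture mechanics analysis methods mentioned (safe-crack-growth evaluation), the following sketch integrates a Paris-law crack growth rate; the material constants and geometry factor are hypothetical, not the shuttle values.

```python
import numpy as np

def safe_life_cycles(a0, ac, c, m, delta_sigma, y=1.12):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C * (dK)^m, with dK = Y * dSigma * sqrt(pi * a).
    Units: crack sizes in m, stress range in MPa, C consistent with both."""
    a = np.linspace(a0, ac, 10_000)
    dk = y * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity range
    dn_da = 1.0 / (c * dk ** m)                 # cycles per unit crack growth
    # trapezoidal integration of dN/da over the crack-size interval
    return float(np.sum(0.5 * (dn_da[1:] + dn_da[:-1]) * np.diff(a)))

# hypothetical values for a thin aluminum skin panel
print(f"{safe_life_cycles(1e-3, 10e-3, 5e-11, 3.0, 80.0):,.0f} cycles")
```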
Computational Prediction and Functional Analysis of Prokaryotic Propionylation.
Wang, Li-Na; Shi, Shao-Ping; Wen, Ping-Ping; Zhou, Zhi-You; Qiu, Jian-Ding
2017-11-27
Identification and systematic analysis of candidates for protein propionylation are crucial steps toward understanding its molecular mechanisms and biological functions. Although several proteome-scale studies have been performed to delineate potential propionylated proteins, the majority of lysine-propionylated substrates and their roles in pathological physiology remain largely unknown. Experimental prokaryotic propionylation data were collated from various databases and the literature and used to train a support vector machine, with features chosen via a three-step feature selection method. A novel online tool for seeking potential lysine-propionylated sites (PropSeek) ( http://bioinfo.ncu.edu.cn/PropSeek.aspx ) was built. Independent test results from leave-one-out and n-fold cross-validation were similar to each other, showing that PropSeek is a stable and robust predictor with satisfactory performance. Meanwhile, analyses of Gene Ontology, Kyoto Encyclopedia of Genes and Genomes pathways, and protein-protein interactions implied a potential role of prokaryotic propionylation in protein synthesis and metabolism.
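A generic version of the train-and-validate workflow described (feature selection feeding a support vector machine with n-fold cross-validation) can be sketched as follows; the random features, the univariate selection step, and all parameters are stand-ins for PropSeek's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# random features stand in for sequence-derived encodings of candidate
# lysine sites; labels: 1 = propionylated, 0 = not
rng = np.random.default_rng(0)
x = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)

# one-stage univariate selection here; the paper uses a three-step method
model = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf"))
print(cross_val_score(model, x, y, cv=5).mean())   # n-fold cross-validation
```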
Accuracy of professional sports drafts in predicting career potential.
Koz, D; Fraser-Thomas, J; Baker, J
2012-08-01
The forecasting of talented players is a crucial aspect of building a successful sports franchise, and professional sports invest significant resources in making player choices in drafts. The current study examined the relationship between career performance (i.e. games played) and draft round for the National Football League, National Hockey League, National Basketball Association, and Major League Baseball for players drafted from 1980 to 1989 (n = 4874), against the assumption of a linear relationship between performance and draft round (i.e. that players with the most potential will be selected before players of lower potential). A two-step analysis revealed significant differences in games played across draft rounds (step 1) and a significant negative relationship between draft round and games played (step 2); however, the amount of variance accounted for was relatively low (less than 17%). Results highlight the challenges of accurately evaluating amateur talent. © 2011 John Wiley & Sons A/S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cermelli, Paolo; Jabbour, Michel E.; Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506-0027
A thermodynamically consistent continuum theory for single-species, step-flow epitaxy that extends the classical Burton-Cabrera-Frank (BCF) framework is derived from basic considerations. In particular, an expression for the step chemical potential is obtained that contains two energetic contributions: one from the adjacent terraces, in the form of the jump in the adatom grand canonical potential, and the other from the monolayer of crystallized adatoms that underlies the upper terrace, in the form of the nominal bulk chemical potential. This generalizes the classical Gibbs-Thomson relation to the dynamic, dissipative setting of step-flow growth. The linear stability analysis of the resulting quasistatic free-boundary problem for an infinite train of equidistant rectilinear steps yields explicit (i.e., analytical) criteria for the onset of step bunching in terms of the basic physical and geometric parameters of the theory. It is found that, in contrast with the predictions of the classical BCF model, both in the absence and in the presence of desorption, a growth regime exists for which step bunching occurs, except possibly in the dilute limit where the train is always stable to step bunching. In the present framework, the onset of one-dimensional instabilities is directly attributed to the energetic influence of the adjacent terraces on the migrating steps. Hence the theory provides a "minimalist" alternative to existing theories of step bunching and should be relevant to, e.g., molecular beam epitaxy of GaAs, where the equilibrium adatom density is shown by Tersoff, Johnson, and Orr [Phys. Rev. Lett. 78, 282 (1997)] to be extremely high.
Two-step evolution of endosymbiosis between hydra and algae.
Ishikawa, Masakazu; Shimizu, Hiroshi; Nozawa, Masafumi; Ikeo, Kazuho; Gojobori, Takashi
2016-10-01
In the Hydra vulgaris group, only 2 of the 25 strains in the collection of the National Institute of Genetics in Japan currently show endosymbiosis with green algae. However, whether the other non-symbiotic strains also have the potential to harbor algae remains unknown. The endosymbiotic potential of non-symbiotic strains that can harbor algae may have been acquired before or during divergence of the strains. With the aim of understanding the evolutionary process of endosymbiosis in the H. vulgaris group, we examined the endosymbiotic potential of non-symbiotic strains by artificially introducing endosymbiotic algae. We found that 12 of the 23 non-symbiotic strains were able to harbor the algae through asexual reproduction by budding, up to the grand-offspring generation. Moreover, a phylogenetic analysis of mitochondrial genome sequences showed that all the strains with endosymbiotic potential grouped into a single cluster (cluster γ). This cluster contained two strains (J7 and J10) that currently harbor algae; however, these strains were not the closest relatives. These results suggest that endosymbiosis evolved in two steps: first, endosymbiotic potential was gained once in the ancestor of the cluster γ lineage; second, strains J7 and J10 obtained algae independently after the divergence of the strains. By demonstrating the evolution of endosymbiotic potential in non-symbiotic H. vulgaris group strains, we have clearly distinguished two evolutionary steps. This step-by-step evolutionary process provides significant insight into the evolution of endosymbiosis in cnidarians. Copyright © 2016 Elsevier Inc. All rights reserved.
A two-step FEM-SEM approach for wave propagation analysis in cable structures
NASA Astrophysics Data System (ADS)
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe the mission; define the system; identify human-machine interfaces; list human actions; identify potential errors; identify factors that affect error; determine the likelihood of error; determine the potential effects of errors; evaluate risk; and generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
FOOTPRINTS FOR SUSTAINABILITY: THE NEXT STEPS
This paper discusses the strengths and weaknesses of the ecological footprint as an ecological accounting method, points out research needs for improvement of the analysis, and suggests potential new applications.
Step voltage analysis for the catenoid lightning protection system
NASA Technical Reports Server (NTRS)
Chai, J. C.; Briet, R.; Barker, D. L.; Eley, H. E.
1991-01-01
The main objective of the proposed overhead Catenoid Lightning Protection System (CLPS) is personnel safety. To ensure the safety of working personnel in lightning situations, the potential difference developed across a distance equal to a person's pace (the step voltage) must not exceed a separately established safe voltage, in order to avoid electrocution (ventricular fibrillation). Therefore, the first stage of the analytical effort is to calculate the open-circuit step voltage. An impedance model is developed for this purpose. It takes into consideration the earth's complex impedance behavior and the transient nature of the lightning phenomenon. In the low-frequency limit, this impedance model is shown to reduce to results similar to those predicted by the conventional resistor model in a DC analysis.
Study of the physical properties of highly ordered TiO2 nanotubes prepared by two-step anodization
NASA Astrophysics Data System (ADS)
Pishkar, Negin; Ghoranneviss, Mahmood; Ghorannevis, Zohreh; Akbari, Hossein
2018-06-01
Highly ordered, hexagonally close-packed titanium dioxide nanotubes (TiO2 NTs) were successfully grown by a two-step anodization process. The TiO2 NTs were synthesized by electrochemical anodization of titanium foils in an ethylene glycol based electrolyte solution containing 0.3 wt% NH4F and 2 vol% deionized (DI) water at constant potential (50 V) for 1 h at room temperature. The physical properties of TiO2 NTs prepared via one- and two-step anodization were compared. Atomic Force Microscopy (AFM) analysis revealed that anodization and subsequent peeling off of the TiO2 NTs left a periodic pattern on the Ti surface. Field Emission Scanning Electron Microscopy (FESEM) of the nanotube morphology revealed that the two-step anodization produced highly ordered hexagonal TiO2 NTs. The crystal structure of the TiO2 NTs was mainly anatase, as determined by X-ray diffraction analysis. Optical studies performed by Diffuse Reflectance Spectroscopy (DRS) and Photoluminescence (PL) analysis showed that the band gap of TiO2 NTs prepared via two-step anodization was lower than that of samples prepared by the one-step process.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas achieving comparable accuracy with a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
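A minimal sketch of the modified second-order scheme described above (RK2 stepping, linear interpolation of the spike time, and post-spike recalibration) for a single leaky integrate-and-fire neuron; parameters are illustrative.

```python
import numpy as np

def rk2_if(v0, i_ext, dt, t_end, v_th=1.0, v_reset=0.0, tau=0.02):
    """Leaky integrate-and-fire dV/dt = (-V + I)/tau with RK2 stepping,
    linear interpolation of the spike time, and recalibration of the
    postspike potential from the interpolated spike time."""
    v, t, spikes = v0, 0.0, []
    while t < t_end:
        k1 = (-v + i_ext) / tau
        k2 = (-(v + dt * k1) + i_ext) / tau
        v_new = v + 0.5 * dt * (k1 + k2)
        if v_new >= v_th:
            ts = t + dt * (v_th - v) / (v_new - v)   # interpolated spike time
            spikes.append(ts)
            rem = t + dt - ts                        # time left in this step
            k1r = (-v_reset + i_ext) / tau           # restart from reset at ts
            k2r = (-(v_reset + rem * k1r) + i_ext) / tau
            v_new = v_reset + 0.5 * rem * (k1r + k2r)
        v, t = v_new, t + dt
    return spikes

print(len(rk2_if(0.0, 1.5, 1e-4, 1.0)))   # spike count over 1 s
```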
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with large numbers of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
NASA Astrophysics Data System (ADS)
Kawamura, M.; Umeda, K.; Ohi, T.; Ishimaru, T.; Niizato, T.; Yasue, K.; Makino, H.
2007-12-01
We have developed a formal evaluation method to assess the potential impact of natural phenomena (earthquakes and faulting; volcanism; uplift, subsidence, denudation and sedimentation; climatic and sea-level changes) on a High Level Radioactive Waste (HLW) disposal system. In 2000, we developed perturbation scenarios in a generic and conservative sense and illustrated the potential impact on a HLW disposal system. From that work, two points were highlighted for consideration in subsequent work: improvement of the scenarios from the viewpoints of reality, transparency, traceability and consistency, and avoidance of extreme conservatism. We have since developed a new procedure for describing such perturbation scenarios based on further studies of the characteristics of these natural perturbation phenomena in Japan. The perturbation scenario is developed in five steps. Step 1: Description of the potential processes of the phenomena and their impacts on the geological environment. Step 2: Characterization of potential changes of the geological environment in terms of T-H-M-C (Thermal - Hydrological - Mechanical - Chemical) processes; the focus is on specific T-H-M-C parameters that influence geological barrier performance, utilizing the input from Step 1. Step 3: Classification of potential influences, based on similarity of T-H-M-C perturbations; this leads to development of perturbation scenarios to serve as a basis for consequence analysis. Step 4: Establishing models and parameters for performance assessment. Step 5: Calculation and assessment. This study focuses on identifying the key T-H-M-C processes associated with perturbations in Step 2. This framework has two advantages. The first is that it assures traceability during the scenario construction process, facilitating the production and structuring of suitable records. The second is that it provides effective elicitation and organization of information from a wide range of earth-science investigations within a performance assessment context. In this framework, scenario development proceeds in a stepwise manner, to ensure clear identification of the impact of processes associated with these phenomena on a HLW disposal system. Output is organized to create credible scenarios with the required transparency, consistency, traceability and adequate conservatism. In this presentation, the potential impact of natural phenomena from the viewpoint of performance assessment for HLW disposal will be discussed and modeled using this approach.
2016-01-01
This review aimed to organize the process of a systematic review of genome-wide association studies in order to conduct and apply a genome-wide meta-analysis (GWMA). The process has a series of five steps: searching and selection, extraction of related information, evaluation of validity, meta-analysis by type of genetic model, and evaluation of heterogeneity. In contrast to intervention meta-analyses, a GWMA has to evaluate the Hardy–Weinberg equilibrium (HWE) in the third step and conduct meta-analyses under five potential genetic models, including dominant, recessive, homozygote contrast, heterozygote contrast, and allelic contrast, in the fourth step. The 'genhwcci' and 'metan' commands of STATA software evaluate the HWE and calculate a summary effect size, respectively. A meta-regression using the 'metareg' command of STATA should be conducted to evaluate related factors of heterogeneity. PMID:28092928
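The third-step HWE check that 'genhwcci' performs in STATA can be reproduced from genotype counts with a simple chi-square test; the counts below are invented for illustration.

```python
from scipy.stats import chi2

def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square Hardy-Weinberg test from genotype counts (1 df)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)               # allele-A frequency
    expected = [n * p ** 2, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    stat = sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))
    return stat, chi2.sf(stat, df=1)              # statistic, p-value

print(hwe_chisq(490, 420, 90))                    # illustrative counts
```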
NASA Astrophysics Data System (ADS)
Gómez, José J. Arroyo; Zubieta, Carolina; Ferullo, Ricardo M.; García, Silvana G.
2016-02-01
The electrochemical formation of Au nanoparticles on a highly ordered pyrolytic graphite (HOPG) substrate, studied using conventional electrochemical techniques and ex-situ AFM, is reported. From the potentiostatic current transient studies, the Au electrodeposition process on HOPG surfaces was described, within the potential range considered, by a model involving instantaneous nucleation and diffusion-controlled 3D growth, which was corroborated by the microscopic analysis. Initially, three-dimensional (3D) hemispherical nanoparticles distributed on surface defects (step edges) of the substrate were observed, with particle size increasing at more negative potentials. The double potential pulse technique allowed the formation of rounded deposits at low deposition potentials, which tend to form lines of nuclei aligned in defined directions, leading to 3D ordered structures. By choosing suitable nucleation and growth pulses, one-dimensional (1D) deposits were possible, preferentially located on step edges of the HOPG substrate. Quantum-mechanical calculations confirmed the tendency of Au atoms to attach selectively at surface defects, such as the HOPG step edges, in the early stages of Au electrodeposition.
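The instantaneous-nucleation diagnosis from the current transients is conventionally made against the dimensionless Scharifker-Hills curve; this sketch assumes that standard model, which may differ in detail from the authors' analysis.

```python
import numpy as np

def sh_instantaneous(t_over_tm):
    """Dimensionless Scharifker-Hills transient for instantaneous
    nucleation with diffusion-limited 3D growth: (I/Im)^2 vs t/tm."""
    x = np.asarray(t_over_tm, dtype=float)
    return (1.9542 / x) * (1.0 - np.exp(-1.2564 * x)) ** 2

# a measured transient, normalized by its current maximum (Im, tm), is
# compared against this curve to classify nucleation as instantaneous
x = np.linspace(0.2, 3.0, 50)
theory = sh_instantaneous(x)
```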
Establishing Evidence for Internal Structure Using Exploratory Factor Analysis
ERIC Educational Resources Information Center
Watson, Joshua C.
2017-01-01
Exploratory factor analysis (EFA) is a data reduction technique used to condense data into smaller sets of summary variables by identifying underlying factors potentially accounting for patterns of collinearity among said variables. Using an illustrative example, the 5 general steps of EFA are described with best practices for decision making…
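A minimal EFA-style sketch on simulated items, using a factor-analysis fit to obtain a loading matrix; real EFA practice adds the decision steps the article describes (factor retention, rotation), which are omitted here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# random data stands in for survey items generated by two latent factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
items = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(500, 10))

efa = FactorAnalysis(n_components=2).fit(items)
loadings = efa.components_.T        # item x factor loading matrix
print(np.round(loadings, 2))        # inspect which items load on which factor
```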
Using Geographic Information Systems for Exposure Assessment in Environmental Epidemiology Studies
Nuckols, John R.; Ward, Mary H.; Jarup, Lars
2004-01-01
Geographic information systems (GIS) are being used with increasing frequency in environmental epidemiology studies. Reported applications include locating the study population by geocoding addresses (assigning mapping coordinates), using proximity analysis of contaminant source as a surrogate for exposure, and integrating environmental monitoring data into the analysis of the health outcomes. Although most of these studies have been ecologic in design, some have used GIS in estimating environmental levels of a contaminant at the individual level and to design exposure metrics for use in epidemiologic studies. In this article we discuss fundamentals of three scientific disciplines instrumental to using GIS in exposure assessment for epidemiologic studies: geospatial science, environmental science, and epidemiology. We also explore how a GIS can be used to accomplish several steps in the exposure assessment process. These steps include defining the study population, identifying source and potential routes of exposure, estimating environmental levels of target contaminants, and estimating personal exposures. We present and discuss examples for the first three steps. We discuss potential use of GIS and global positioning systems (GPS) in the last step. On the basis of our findings, we conclude that the use of GIS in exposure assessment for environmental epidemiology studies is not only feasible but can enhance the understanding of the association between contaminants in our environment and disease. PMID:15198921
System Safety and the Unintended Consequence
NASA Technical Reports Server (NTRS)
Watson, Clifford
2012-01-01
The analysis and identification of risks often result in design changes or modification of operational steps. This paper identifies the potential for unintended consequences as an overlooked result of these changes. Examples of societal changes such as prohibition, regulatory changes including mandating lifeboats on passenger ships, and engineering proposals or design changes to automobiles and spaceflight hardware are used to demonstrate that the System Safety Engineer must be cognizant of the potential for unintended consequences as a result of an analysis. The conclusions indicate the need for additional foresight and consideration of the potential effects of analysis-driven design, processing changes, and/or operational modifications.
An Optimization-Driven Analysis Pipeline to Uncover Biomarkers and Signaling Paths: Cervix Cancer.
Lorenzo, Enery; Camacho-Caceres, Katia; Ropelewski, Alexander J; Rosas, Juan; Ortiz-Mojer, Michael; Perez-Marty, Lynn; Irizarry, Juan; Gonzalez, Valerie; Rodríguez, Jesús A; Cabrera-Rios, Mauricio; Isaza, Clara
2015-06-01
Establishing how a series of potentially important genes might relate to each other is relevant to understanding the origin and evolution of illnesses such as cancer. High-throughput biological experiments have played a critical role in providing information in this regard. A special challenge, however, is that of trying to reconcile information from separate microarray experiments to build a potential genetic signaling path. This work proposes a two-step analysis pipeline, based on optimization, to approach meta-analysis aiming to build a proxy for a genetic signaling path.
van Limburg, Maarten; Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-08-13
It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders to make technology reflect their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step in developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. This paper demonstrates the potential of several stakeholder-oriented analysis methods, illustrating their practical application with Infectionmanager as an example case. In this paper, we aim to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation. We divided business modeling into 4 main research steps. For stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analysis, we performed "basic stakeholder analysis," stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. The step-by-step guide enables eHealth researchers to apply a systematic, multidisciplinary, co-creative approach to implementing eHealth. Business modeling becomes an active part of the entire eHealth development process and brings an early focus on implementation, in which stakeholders help co-create the basis necessary for satisfactory success and uptake of the eHealth technology.
Financial planning for major initiatives: a framework for success.
Harris, John M
2007-11-01
A solid framework for assessing a major strategic initiative consists of four broad steps: Initial considerations, including level of analysis required and resources that will be brought to bear. Preliminary financial estimates for board approval to further assess the initiative. Assessment of potential partners' interest in the project. Feasibility analysis for board green light.
Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot
2015-01-01
Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.
Practical, transparent prospective risk analysis for the clinical laboratory.
Janssens, Pim Mw
2014-11-01
Prospective risk analysis (PRA) is an essential element of quality assurance for clinical laboratories, yet practical approaches to conducting PRA in laboratories are scarce. On the basis of the classical Failure Mode and Effect Analysis method, an approach to PRA was developed for application to key laboratory processes. First, the separate, major steps of the process under investigation are identified. Scores are then given for the Probability (P) and Consequence (C) of predefined types of failures and the chance of Detecting (D) these failures. Based on the P and C scores (on a 10-point scale), an overall Risk score (R) is calculated. The scores for each process were recorded in a matrix table. Based on predetermined criteria for R and D, it was determined whether a more detailed analysis was required for potential failures and, ultimately, where risk-reducing measures were necessary, if any. As an illustration, this paper presents the results of applying PRA to our pre-analytical and analytical activities. The highest R scores were obtained in the stat processes, the most common failure type across the process steps was 'delayed processing or analysis', the failure type with the highest mean R score was 'inappropriate analysis', and the failure type most frequently rated as suboptimal was 'identification error'. The PRA we designed is a useful, semi-objective tool for identifying process steps with potential failures rated as risky. Its systematic design and convenient output in matrix tables make it easy to perform, practical and transparent. © The Author(s) 2014.
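A minimal sketch of the scoring scheme described: P and C on 10-point scales combine into R, and (R, D) criteria flag steps for closer review. The failure types echo those named above, but the scores and thresholds are invented.

```python
# (process step, failure type, P, C, D); scores are illustrative
steps = [
    ("sample reception", "identification error", 3, 9, 4),
    ("centrifugation", "delayed processing or analysis", 5, 4, 8),
    ("stat analysis", "inappropriate analysis", 2, 10, 3),
]

for name, failure, p, c, d in steps:
    r = p * c                                   # overall risk score from P, C
    # flag for detailed analysis when risk is high or detection is poor;
    # these thresholds are placeholders, not the article's criteria
    flag = "REVIEW" if (r >= 30 or d <= 4) else "ok"
    print(f"{name:18s} {failure:32s} R={r:3d} D={d} {flag}")
```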
Thermokinetic analysis and product characterization of Medium Density Fiberboard pyrolysis.
Aslan, Dilan Irmak; Özoğul, Buğçe; Ceylan, Selim; Geyikçi, Feza
2018-06-01
This study investigates the pyrolysis of Medium Density Fiberboard (MDF) as a potential waste management solution. The thermal behaviour of MDF was analysed via TG/DSC. The primary decomposition step occurred between 190 °C and 425 °C. The gaseous products evolved over this step were evaluated by an FTIR spectrometer coupled with the TGA. Peaks for phenolics, alcohols and aldehydes were detected at the maximum decomposition temperature. Py-GC/MS analysis revealed phenols, ketones and cyclic compounds as the primary non-condensable pyrolysis products. The kinetics of pyrolysis were investigated by the widely applied Distributed Activation Energy Model, resulting in an average activation energy of 127.40 kJ mol^-1 and a pre-exponential factor of 8.4 × 10^11. The results of this study suggest that pyrolyzing MDF could provide renewable fuels and prevent the environmental problems associated with MDF disposal. Copyright © 2018 Elsevier Ltd. All rights reserved.
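Using the reported average activation energy and pre-exponential factor, a Gaussian-distribution DAEM can be evaluated numerically as below; the spread of the activation-energy distribution is an assumed value, since it is not quoted above.

```python
import numpy as np
from scipy.integrate import trapezoid

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def daem_residual_mass(t_k, beta, e0=127.40, sigma=15.0, k0=8.4e11):
    """Unreacted mass fraction at temperature t_k (K) for a Gaussian DAEM
    with mean activation energy e0 (kJ/mol) and heating rate beta (K/s);
    sigma (kJ/mol) is an assumed spread, not a reported value."""
    e = np.linspace(e0 - 4 * sigma, e0 + 4 * sigma, 200)
    f_e = (np.exp(-0.5 * ((e - e0) / sigma) ** 2)
           / (sigma * np.sqrt(2 * np.pi)))
    temps = np.linspace(300.0, t_k, 500)
    # psi(E) = (k0/beta) * integral of exp(-E/RT) dT, for every E at once
    psi = trapezoid(k0 * np.exp(-e[None, :] / (R * temps[:, None])),
                    temps, axis=0) / beta
    return trapezoid(f_e * np.exp(-psi), e)

print(daem_residual_mass(600.0, beta=10 / 60))   # at 600 K, 10 K/min
```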
[Discourse analysis: research potentialities to gender violence].
de Azambuja, Mariana Porto Ruwer; Nogueira, Conceição
2009-01-01
In recent years, the terms 'discourse' and 'discourse analysis' have been used increasingly in academic and research contexts, frequently without a precise definition, which opens space for criticism and mistakes. The aim of this paper is to provide a brief contextualization of discursive studies, as well as the tasks/steps of the Discourse Analysis process from the Social Constructionist perspective. As examples, we use fragments of an interview with a family doctor about gender violence. In the results, we highlight the potential of Discourse Analysis to deconstruct existing discourses and subsequently (re)construct them, moving toward a more holistic view of the problem of gender violence.
Ehrensberger, Mark T; Gilbert, Jeremy L
2010-05-01
The measurement of electrochemical impedance is a valuable tool to assess the electrochemical environment that exists at the surface of metallic biomaterials. This article describes the development and validation of a new technique, potential step impedance analysis (PSIA), to assess the electrochemical impedance of materials whose interface with solution can be modeled as a simplified Randles circuit that is modified with a constant phase element. PSIA is based upon applying a step change in voltage to a working electrode and analyzing the subsequent current transient response in a combined time and frequency domain technique. The solution resistance, polarization resistance, and interfacial capacitance are found directly in the time domain. The experimental current transient is numerically transformed to the frequency domain to determine the constant phase exponent, alpha. This combined time and frequency approach was tested using current transients generated from computer simulations, from resistor-capacitor breadboard circuits, and from commercially pure titanium samples immersed in phosphate buffered saline and polarized at -800 mV or +1000 mV versus Ag/AgCl. It was shown that PSIA calculates equivalent admittance and impedance behavior over this range of potentials when compared to standard electrochemical impedance spectroscopy. This current transient approach characterizes the frequency response of the system without the need for expensive frequency response analyzers or software. Copyright 2009 Wiley Periodicals, Inc.
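The time-domain part of PSIA can be illustrated on an ideal simplified Randles circuit (alpha = 1, i.e., without the constant phase element), whose step response is a single exponential; the circuit values and fitting setup below are illustrative, not the article's.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_current(t, r_s, r_p, c, dv=0.1):
    """Current after a potential step dv across Rs in series with (Rp || C):
    i(0+) = dv/Rs, i(inf) = dv/(Rs+Rp), tau = C*Rs*Rp/(Rs+Rp)."""
    tau = c * r_s * r_p / (r_s + r_p)
    i_inf = dv / (r_s + r_p)
    return i_inf + (dv / r_s - i_inf) * np.exp(-t / tau)

# synthetic noisy transient; the fit recovers Rs, Rp and C in the time
# domain, as PSIA does before its numerical transform to the frequency domain
t = np.linspace(1e-5, 0.02, 400)
rng = np.random.default_rng(0)
i_meas = step_current(t, 100.0, 5000.0, 1e-5) + 1e-6 * rng.standard_normal(t.size)
(r_s, r_p, c), _ = curve_fit(step_current, t, i_meas, p0=(50.0, 1000.0, 1e-6))
print(r_s, r_p, c)
```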
Monitoring the Recovery of c-Si Modules from Potential-Induced Degradation Using Suns-Voc Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilterdink, Harrison; Sinton, Ronald; Hacke, Peter
2016-11-21
Potential-induced degradation (PID) has recently been shown as an important failure mode in c-Si modules. We demonstrate the utility of Suns-Voc analysis for measuring shunt effects caused by PID at the module level. Our results show module shunt resistance increasing in step with module power during recovery from the degraded state.
NASA Astrophysics Data System (ADS)
Guzman, David G.
An electrical substation is composed of various subsystems that allow for the effective and safe operation of the power grid. One of these subsystems is the ground grid system, which supports the operation of the grid and all electrical equipment connected to it by providing a ground potential reference, commonly known as the system ground. In addition, the ground grid system protects workers and members of the public passing through or living near a substation by reducing the step and touch potential (or voltage) levels present during a system fault. In today's utility industry there is an increasing trend toward pad-mounted electrical equipment for substation applications, in an effort to construct new or upgrade existing electrical facilities within limited property spaces. This thesis presents an analysis of touch and step voltages at existing distribution substations where 23.9 kV to 4.16 kV and 13.8 kV to 4.16 kV pad-mounted transformers and other pad-mounted switchgear were installed to replace traditional station-class equipment. The study also describes modeling techniques, implemented in WinIGS, used to determine the effects of floating grounds and other exposed metal bodies inside or surrounding these substations, in order to identify any risk of electric shock associated with this type of installation. The results are intended to verify the requirements for ground grid analysis and design of 4.16 kV distribution substations with pad-mounted equipment, so as to prevent dangerous step and touch voltage levels from appearing at these sites during system faults and, ultimately, to avoid exposing individuals to the risk of electric shock.
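The tolerable-limit side of such a step/touch study is commonly taken from IEEE Std 80; the sketch below uses the textbook 50 kg body criteria (body resistance 1000 Ω, Dalziel current 0.116/√ts) as an assumption about the method, since the thesis itself relies on WinIGS modeling rather than hand calculation.

```python
# Textbook IEEE Std 80 tolerable step/touch voltages for a 50 kg body,
# given fault duration ts (s), surface-layer resistivity rho_s (ohm-m),
# and surface-layer derating factor Cs. Values below are illustrative.
from math import sqrt

def tolerable_touch_50kg(rho_s, Cs, ts):
    return (1000 + 1.5 * Cs * rho_s) * 0.116 / sqrt(ts)

def tolerable_step_50kg(rho_s, Cs, ts):
    return (1000 + 6.0 * Cs * rho_s) * 0.116 / sqrt(ts)

# Example: 3000 ohm-m crushed-rock layer, Cs = 0.7, 0.5 s fault clearing
print(tolerable_touch_50kg(3000, 0.7, 0.5), "V")
print(tolerable_step_50kg(3000, 0.7, 0.5), "V")
```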
Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner
2018-03-15
Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H 2 O 2 in alkaline solution.
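A generic, hedged sketch of a DRT extraction of the kind used here: the impedance is modeled as Z(ω) = R∞ + Σj gj/(1 + jωτj) on a fixed log-spaced τ grid, and the weights gj are obtained by Tikhonov-regularized least squares over the stacked real and imaginary parts. This is a textbook formulation, not the authors' exact algorithm.

```python
# Regularized least-squares DRT fit on a fixed relaxation-time grid.
import numpy as np

def drt_fit(omega, Z, n_tau=60, lam=1e-2):
    tau = np.logspace(np.log10(1 / omega.max()), np.log10(1 / omega.min()), n_tau)
    K = 1.0 / (1.0 + 1j * np.outer(omega, tau))          # kernel matrix
    A = np.vstack([np.hstack([np.ones((len(omega), 1)), K.real]),
                   np.hstack([np.zeros((len(omega), 1)), K.imag])])
    b = np.concatenate([Z.real, Z.imag])
    A_reg = np.vstack([A, lam * np.eye(A.shape[1])])     # Tikhonov term
    b_reg = np.concatenate([b, np.zeros(A.shape[1])])
    coef, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)
    return tau, coef[0], coef[1:]    # tau grid, R_inf, distribution weights

w = np.logspace(-1, 5, 60)
Z = 10 + 50 / (1 + 1j * w * 1e-2) + 20 / (1 + 1j * w * 1e-4)  # two relaxations
tau, R_inf, g = drt_fit(w, Z)
print(R_inf, tau[g.argmax()])        # dominant relaxation time recovered
```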
A novel frequency analysis method for assessing K(ir)2.1 and Na(v)1.5 currents.
Rigby, J R; Poelzing, S
2012-04-01
Voltage clamping is an important tool for measuring individual currents from an electrically active cell. However, it is difficult to isolate individual currents without pharmacological or voltage inhibition. Herein, we present a technique that involves inserting a noise function into a standard voltage step protocol, which allows one to characterize the unique frequency response of an ion channel at different step potentials. Specifically, we compute the fast Fourier transform for a family of current traces at different step potentials for the inward rectifying potassium channel, K(ir)2.1, and the channel encoding the cardiac fast sodium current, Na(v)1.5. Each individual frequency magnitude, as a function of voltage step, is correlated to the peak current produced by each channel. The correlation coefficient vs. frequency relationship reveals that these two channels are associated with some unique frequencies with high absolute correlation. The individual IV relationship can then be recreated using only the unique frequencies with magnitudes of high absolute correlation. Thus, this study demonstrates that ion channels may exhibit unique frequency responses.
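The core of the method lends itself to a compact numerical sketch: take the FFT magnitude of each current trace recorded at a different step potential, correlate each frequency bin (across potentials) with the peak current, and retain bins with high absolute correlation. The synthetic traces below stand in for real voltage-clamp recordings.

```python
# Correlate each FFT frequency bin (across step potentials) with peak current.
import numpy as np

def frequency_correlates(traces, peak_currents):
    """traces: (n_steps, n_samples) current traces; returns r per frequency bin."""
    mags = np.abs(np.fft.rfft(traces, axis=1))           # (n_steps, n_freqs)
    peaks = np.asarray(peak_currents)
    return np.array([np.corrcoef(mags[:, k], peaks)[0, 1]
                     for k in range(mags.shape[1])])

rng = np.random.default_rng(0)
traces = rng.normal(size=(8, 1024))        # placeholder family of traces
peaks = traces.min(axis=1)                 # e.g. peak inward current
r = frequency_correlates(traces, peaks)
selected = np.where(np.abs(r) > 0.8)[0]    # "unique frequencies" to keep
print(selected[:10])
```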
NASA Astrophysics Data System (ADS)
Gadala, Ibrahim M.; Alfantazi, Akram
2015-12-01
The key steps involved in X100 pipeline steel passivation in bicarbonate-based simulated soil solutions from the pre-passive to transpassive potential regions have been analyzed here using a step-wise anodizing-electrochemical impedance spectroscopy (EIS) routine. Pre-passive steps involve parallel dissolution-adsorption in early stages followed by clear diffusion-adsorption control shortly before iron hydroxide formation. Aggressive NS4 chlorides/sulfate promote steel dissolution whilst inhibiting diffusion in pre-passive steps. Diffusive and adsorptive effects remain during iron hydroxide formation, but withdraw shortly thereafter during its removal and the development of the stable iron carbonate passive layer. Passive layer protectiveness is evaluated using EIS fitting, current density analysis, and correlations with semiconductive parameters, consistently revealing improved robustness in colder, bicarbonate-rich, chloride/sulfate-free conditions. Ferrous oxide formation at higher potentials results in markedly lower impedances with disordered behavior, and the involvement of the iron(III) valence state is observed in Mott-Schottky tests exclusively for 75 °C conditions.
Gedeon, Patrick C; Thomas, James R; Madura, Jeffry D
2015-01-01
Molecular dynamics simulation provides a powerful and accurate method to model protein conformational change, yet timescale limitations often prevent direct assessment of the kinetic properties of interest. A large number of molecular dynamics steps are necessary for rare events to occur, which allow a system to overcome energy barriers and conformationally transition from one potential energy minimum to another. For many proteins, the energy landscape is further complicated by a multitude of potential energy wells, each separated by high free-energy barriers and each potentially representative of a functionally important protein conformation. To overcome these obstacles, accelerated molecular dynamics utilizes a robust bias potential function to simulate the transition between different potential energy minima. This straightforward approach more efficiently samples conformational space in comparison to classical molecular dynamics simulation, does not require advanced knowledge of the potential energy landscape and converges to the proper canonical distribution. Here, we review the theory behind accelerated molecular dynamics and discuss the approach in the context of modeling protein conformational change. As a practical example, we provide a detailed, step-by-step explanation of how to perform an accelerated molecular dynamics simulation using a model neurotransmitter transporter embedded in a lipid cell membrane. Changes in protein conformation of relevance to the substrate transport cycle are then examined using principal component analysis.
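The bias potential at the heart of accelerated MD is compact enough to state directly. Assuming the standard formulation of Hamelberg, Mongan and McCammon (2004), frames with potential energy V below a threshold E receive a boost ΔV = (E − V)²/(α + E − V), and canonical averages are recovered by reweighting with exp(ΔV/kT). The parameter values below are illustrative.

```python
# Standard aMD boost: basins below E are raised, barriers above E untouched.
import numpy as np

def amd_boost(V, E, alpha):
    """Boost energy added to frames with V < E (zero otherwise)."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)

def reweight_factors(dV, kT):
    """exp(dV/kT) weights used to recover canonical averages."""
    return np.exp(np.asarray(dV) / kT)

V = np.array([-120.0, -95.0, -60.0])   # kcal/mol, illustrative frame energies
dV = amd_boost(V, E=-80.0, alpha=15.0)
w = reweight_factors(dV, kT=0.5961)    # kT at 300 K in kcal/mol
print(dV, w)
```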
Remotely Sensed Quantitative Drought Risk Assessment in Vulnerable Agroecosystems
NASA Astrophysics Data System (ADS)
Dalezios, N. R.; Blanta, A.; Spyropoulos, N. V.
2012-04-01
Hazard may be defined as a potential threat to humans and their welfare, and risk (or consequence) as the probability of a hazard occurring and creating loss. Drought is considered one of the major natural hazards, with significant impacts on agriculture, the environment, the economy and society. This paper deals with drought risk assessment, the first step of which is designed to find out what the problems are; it comprises three distinct steps, namely risk identification, risk estimation and risk evaluation. Risk management is not covered in this paper, and a fourth step should address the need for feedback and post-audits of all risk assessment exercises. In particular, quantitative drought risk assessment is attempted using statistical methods. For the quantification of drought, the Reconnaissance Drought Index (RDI) is employed, a new index based on hydrometeorological parameters such as precipitation and potential evapotranspiration. The remotely sensed estimation of RDI is based on NOAA-AVHRR satellite data for a period of 20 years (1981-2001). The study area is Thessaly, central Greece, a drought-prone agricultural region characterized by vulnerable agriculture. Specifically, the drought risk assessment processes undertaken are as follows: 1. Risk identification: this step involves drought quantification and monitoring based on remotely sensed RDI and the extraction of several features such as severity, duration, areal extent, and onset and end times; it also involves a drought early warning system based on these parameters. 2. Risk estimation: this step includes an analysis of drought severity, frequency and their relationships. 3. Risk evaluation: this step covers drought evaluation based on analysis of RDI images before and after each drought episode, which usually lasts one hydrological year (12 months). The results of this three-step drought assessment process are considered quite satisfactory for a drought-prone region such as Thessaly in central Greece. Moreover, remote sensing has proven very effective in delineating spatial variability and features in drought monitoring and assessment.
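The RDI itself is simple to compute. Assuming the standard Tsakiris-style formulation (the abstract does not spell it out), α is the ratio of cumulative precipitation to cumulative potential evapotranspiration, and the standardized RDI is the z-score of ln(α) across years:

```python
# Standardized Reconnaissance Drought Index across years (assumed formulation).
import numpy as np

def rdi_standardized(P, PET):
    """P, PET: arrays of shape (n_years, n_periods) for one hydrological year."""
    alpha = P.sum(axis=1) / PET.sum(axis=1)     # one alpha per year
    y = np.log(alpha)
    return (y - y.mean()) / y.std(ddof=1)       # RDI_st; negative = drought

rng = np.random.default_rng(4)
P = rng.uniform(20, 80, size=(20, 12))          # monthly precipitation (mm)
PET = rng.uniform(60, 120, size=(20, 12))       # monthly PET (mm)
print(rdi_standardized(P, PET))
```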
Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-01-01
Background It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders to make technology reflect their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step in developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. Objective This paper aims to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation, illustrating the potential of several stakeholder-oriented analysis methods and their practical application with Infectionmanager as an example case. Methods We divided business modeling into 4 main research steps. For stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analyses, we performed "basic stakeholder analysis," stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Results Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. Conclusions The step-by-step guide for business modeling with stakeholder involvement enables eHealth researchers to apply a systematic and multidisciplinary, co-creative approach to implementing eHealth. Business modeling becomes an active part of the entire development process of eHealth and establishes an early focus on implementation, in which stakeholders help to co-create the basis necessary for satisfactory success and uptake of the eHealth technology. PMID:26272510
NASA Technical Reports Server (NTRS)
1974-01-01
The BRAVO User's Manual is presented which describes the BRAVO methodology in terms of step-by-step procedures, so that it may be used as a tool for a team of analysts performing cost effectiveness analyses on potential future space applications. BRAVO requires a relatively general set of input information and a relatively small expenditure of resources. For Vol. 1, see N74-12493; for Vol. 2, see N74-14530.
Oxygen Reduction Reaction on Platinum-Terminated “Onion-structured” Alloy Catalysts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herron, Jeffrey A.; Jiao, Jiao; Hahn, Konstanze
Using periodic, self-consistent density functional theory (GGA-PW91) calculations, a series of onion-structured metal alloys have been investigated for their catalytic performance towards the oxygen reduction reaction (ORR). The onion structures consist of a varying number of atomic layers of one or two metals each, pseudomorphically deposited on top of one another to form the overall structure. All catalysts studied feature a Pt overlayer, and often consist of at least one Pd layer below the surface. Three distinct ORR mechanisms were analyzed on the close-packed facets of all the structures considered. These mechanisms include a direct route of O2 dissociation and two hydrogen-assisted routes of O–O bond-breaking in peroxyl (OOH) and in hydrogen peroxide (HOOH) intermediates. A thermochemical analysis of the elementary steps provides information on the operating potential, and thereby the energy efficiency, of each electrocatalyst. A Sabatier analysis of catalytic activity based on the thermochemistry of proton/electron transfer steps and the activation energy barrier for O–O bond-breaking steps leads to a "volcano" relation between the surfaces' activity and the binding energy of O. Several of the onion-structured alloys studied here show promise for achieving energy efficiency higher than that of Pt, by being active at potentials higher than the operating potential of Pt. Furthermore, some have at least as good activity as pure Pt at that operating potential. Thus, a number of the onion-structured alloys studied here are promising as cathode electrocatalysts in proton exchange membrane fuel cells.
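The thermochemical analysis mentioned here is typically carried out in the computational-hydrogen-electrode framework, under which the limiting potential is set by the least exergonic proton/electron transfer step. A generic sketch, with made-up step energies rather than the paper's values:

```python
# Each reduction step has dG_i at U = 0 V, shifted by +e*U at potential U;
# step i stays downhill while dG_i + e*U <= 0, so U_L = min_i(-dG_i).

def orr_limiting_potential(dG_steps_eV):
    """dG_i (eV) of the four ORR proton/electron transfers at U = 0 V."""
    return min(-dG for dG in dG_steps_eV)

dG = [-1.6, -1.1, -0.9, -1.32]     # illustrative; sums to -4 * 1.23 eV
print(orr_limiting_potential(dG))  # -> 0.9 V: the weakest step sets U_L
```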
A step-by-step guide to systematically identify all relevant animal studies.
Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel
2012-01-01
Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future.
Direct optical detection of protein-ligand interactions.
Gesellchen, Frank; Zimmermann, Bastian; Herberg, Friedrich W
2005-01-01
Direct optical detection provides an excellent means to investigate interactions of molecules in biological systems. The dynamic equilibria inherent to these systems can be described in greater detail by recording the kinetics of a biomolecular interaction. Optical biosensors allow direct detection of interaction patterns without the need for labeling. An overview covering several commercially available biosensors is given, with a focus on instruments based on surface plasmon resonance (SPR) and reflectometric interference spectroscopy (RIFS). Potential assay formats and experimental design, appropriate controls, and calibration procedures, especially when handling low molecular weight substances, are discussed. The individual steps of an interaction analysis, combined with practical tips for evaluation, data processing, and interpretation of kinetic data, are described in detail. In a practical example, a step-by-step procedure for the analysis of the interaction of a low molecular weight compound with a serum protein, determined on a commercial SPR sensor, is presented.
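For the kinetic-evaluation step, the workhorse model is 1:1 Langmuir binding, for which the SPR response R follows dR/dt = ka·C·(Rmax − R) − kd·R during association; fitting a sensorgram then yields ka, kd, and KD = kd/ka. A hedged sketch with illustrative rate constants:

```python
# Simulate a 1:1 Langmuir sensorgram (association then dissociation).
import numpy as np
from scipy.integrate import odeint

def sensorgram(t, C, ka, kd, Rmax, t_off):
    """Association up to t_off (analyte concentration C), dissociation after."""
    def rhs(R, t):
        conc = C if t < t_off else 0.0
        return ka * conc * (Rmax - R) - kd * R
    return odeint(rhs, 0.0, t)[:, 0]

t = np.linspace(0, 600, 601)                       # s
R = sensorgram(t, C=100e-9, ka=1e5, kd=1e-3, Rmax=120.0, t_off=300)
print("KD =", 1e-3 / 1e5, "M")                     # kd/ka = 10 nM
```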
de Lima-Pardini, Andrea Cristina; Zimeo Morais, Guilherme A; Balardin, Joana Bisol; Coelho, Daniel Boari; Azzi, Nametala Maia; Teixeira, Luis Augusto; Sato, João Ricardo
2017-07-01
Walkers are commonly prescribed worldwide for individuals unable to walk independently. Walker usage leads to improved postural control and voluntary movement during stepping. In the present study, we aimed to provide proof of concept for the feasibility of an event-related protocol integrating the analysis of biomechanical variables of step initiation and functional near-infrared spectroscopy (fNIRS) to measure activation of the supplementary motor area (SMA) while using a walker. Healthy young participants were tested while stepping with versus without the use of the walker. Behavioral analysis showed that anticipatory postural adjustments (APA) decreased when supporting the body weight on the walker. The delta (without minus with) of the activation magnitude of the tibialis anterior muscle was positively correlated with the delta of deoxyhemoglobin concentration changes in the SMA. The novelty of this study is the development of a protocol to assess brain function together with biomechanical analysis during the use of a walker. The method sheds light on the potential utility of combining fNIRS and biomechanical assessment during assisted step initiation, which can represent a new opportunity to study populations with mobility deficits. Copyright © 2017 Elsevier B.V. All rights reserved.
Fractured reservoir characterization through injection, falloff, and flowback tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, C.P.; Singh, P.K.; Halvorsen, H.
1992-09-01
This paper presents the development of a multiphase pressure-transient-analysis technique for naturally fractured reservoirs and the analysis of a series of field tests performed to evaluate the water injection potential and the reservoir characteristics of a naturally fractured reservoir. These included step-rate, water-injectivity, pressure-falloff, and flowback tests. Through these tests, a description of the reservoir was obtained.
14 CFR 33.70 - Engine life-limited parts.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...
14 CFR 33.70 - Engine life-limited parts.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...
Failure modes and effects analysis for ocular brachytherapy.
Lee, Yongsook C; Kim, Yongbok; Huynh, Jason Wei-Yeong; Hamilton, Russell J
The aim of the study was to identify potential failure modes (FMs) having a high risk and to improve our current quality management (QM) program in Collaborative Ocular Melanoma Study (COMS) ocular brachytherapy by undertaking a failure modes and effects analysis (FMEA) and a fault tree analysis (FTA). Process mapping and FMEA were performed for COMS ocular brachytherapy. For all FMs identified in FMEA, risk priority numbers (RPNs) were determined by assigning and multiplying occurrence, severity, and lack of detectability values, each ranging from 1 to 10. FTA was performed for the major process that had the highest ranked FM. Twelve major processes, 121 sub-process steps, 188 potential FMs, and 209 possible causes were identified. For 188 FMs, RPN scores ranged from 1.0 to 236.1. The plaque assembly process had the highest ranked FM. The majority of FMs were attributable to human failure (85.6%), and medical physicist-related failures were the most numerous (58.9% of all causes). After FMEA, additional QM methods were included for the top 10 FMs and 6 FMs with severity values > 9.0. As a result, for these 16 FMs and the 5 major processes involved, quality control steps were increased from 8 (50%) to 15 (93.8%), and major processes having quality assurance steps were increased from 2 to 4. To reduce high risk in current clinical practice, we proposed QM methods. They mainly include a check or verification of procedures/steps and the use of checklists for both ophthalmology and radiation oncology staff, and intraoperative ultrasound-guided plaque positioning for ophthalmology staff. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy use system in a production process, in this case a heat exchanger network (HEN), plays a key role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally or in combination between process units; however, determining a suitable heat integration technique requires long calculations and considerable time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating six hypothetical units using a Pinch Analysis approach, with the energy target and the total annual cost target as objective functions. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from seven steps to just three; the preferred heat integration technique is then determined by calculating the heat integration potential (ΔH') between the hypothetical process units. Calculations were implemented in the MATLAB programming language.
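The abstract does not reproduce the ΔH' formula, so as background, here is the standard problem-table algorithm that Pinch Analysis uses to obtain the energy targets such shortcuts are benchmarked against: shift stream temperatures by ΔTmin/2, balance heat in each temperature interval, and cascade the surpluses. Stream data are illustrative (a Python sketch rather than the authors' MATLAB code):

```python
# Problem-table algorithm for minimum hot/cold utility targets.

def energy_targets(streams, dTmin=10.0):
    """streams: list of (Tsupply, Ttarget, CP); hot if Tsupply > Ttarget."""
    shifted = []
    for Ts, Tt, cp in streams:
        shift = -dTmin / 2 if Ts > Tt else dTmin / 2
        shifted.append((Ts + shift, Tt + shift, cp, Ts > Tt))
    bounds = sorted({T for s in shifted for T in s[:2]}, reverse=True)
    cascade, min_hot = 0.0, 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        cp_net = sum(cp if hot else -cp
                     for Ts, Tt, cp, hot in shifted
                     if min(Ts, Tt) <= lo and max(Ts, Tt) >= hi)
        cascade += cp_net * (hi - lo)        # interval surplus (+) / deficit (-)
        min_hot = max(min_hot, -cascade)     # largest deficit sets Q_hot_min
    return min_hot, cascade + min_hot        # (Q_hot_min, Q_cold_min)

streams = [(180, 60, 3.0), (150, 30, 1.5),   # hot streams (T in C, CP in kW/K)
           (20, 135, 2.0), (80, 140, 4.0)]   # cold streams
print(energy_targets(streams, dTmin=10))
```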
Xu, Guojie; Cai, Wei; Gao, Wei; Liu, Chunsheng
2016-10-01
Glycyrrhizin is an important bioactive compound that is used clinically to treat chronic hepatitis and is also used as a sweetener world-wide. However, the key UDP-dependent glucuronosyltransferases (UGATs) involved in the biosynthesis of glycyrrhizin remain unknown. To discover unknown UGATs, we fully annotated potential UGATs from Glycyrrhiza uralensis using deep transcriptome sequencing. The catalytic functions of candidate UGATs were determined by an in vitro enzyme assay. Systematically screening 434 potential UGATs, we unexpectedly found one unique GuUGAT that was able to catalyse the glucuronosylation of glycyrrhetinic acid to directly yield glycyrrhizin via continuous two-step glucuronosylation. Expression analysis further confirmed the key role of GuUGAT in the biosynthesis of glycyrrhizin. Site-directed mutagenesis revealed that Gln-352 may be important for the initial step of glucuronosylation, and His-22, Trp-370, Glu-375 and Gln-392 may be important residues for the second step of glucuronosylation. Notably, the ability of GuUGAT to catalyse a continuous two-step glucuronosylation reaction was determined to be unprecedented among known glycosyltransferases of bioactive plant natural products. Our findings increase the understanding of traditional glycosyltransferases and pave the way for the complete biosynthesis of glycyrrhizin. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Kim, In-Ah; den-Hollander, Elyn; Lee, Hye-Seong
2018-03-01
Descriptive analysis with a trained sensory panel has thus far been the most well defined methodology to characterize various products. However, in practical terms, intensive training in descriptive analysis has been recognized as a serious defect. To overcome this limitation, various novel rapid sensory profiling methodologies have been suggested in the literature. Among these, attribute-based methodologies such as check-all-that-apply (CATA) questions showed results comparable to those of conventional sensory descriptive analysis. Kim, Hopkinson, van Hout, and Lee (2017a, 2017b) have proposed a novel attribute-based methodology termed the two-step rating-based 'double-faced applicability' test with a novel output measure of applicability magnitude (d'A) for measuring consumers' product usage experience throughout various product usage stages. In this paper, the potential of the two-step rating-based 'double-faced applicability' test with d'A was investigated as an alternative to conventional sensory descriptive analysis in terms of sensory characterization and product discrimination. Twelve commercial spread products were evaluated using both conventional sensory descriptive analysis with a trained sensory panel and the two-step rating-based 'double-faced applicability' test with an untrained sensory panel. The results demonstrated that the 'double-faced applicability' test can be used to provide a direct measure of the applicability magnitude of sensory attributes of the samples tested in terms of d'A for sensory characterization of individual samples and multiple sample comparisons. This suggests that when the appropriate list of attributes to be used in the questionnaire is already available, the two-step rating-based 'double-faced applicability' test with d'A can be used as a more efficient alternative to conventional descriptive analysis, without requiring any intensive training process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Limao; Wu, Xianguo; Qin, Yawei; Skibniewski, Miroslaw J; Liu, Wenli
2016-02-01
Tunneling excavation is bound to produce significant disturbances to surrounding environments, and the tunnel-induced damage to adjacent underground buried pipelines is of considerable importance for geotechnical practice. A fuzzy Bayesian networks (FBNs) based approach for safety risk analysis is developed in this article with detailed step-by-step procedures, consisting of risk mechanism analysis, the FBN model establishment, fuzzification, FBN-based inference, defuzzification, and decision making. In accordance with the failure mechanism analysis, a tunnel-induced pipeline damage model is proposed to reveal the cause-effect relationships between the pipeline damage and its influential variables. In terms of the fuzzification process, an expert confidence indicator is proposed to reveal the reliability of the data when determining the fuzzy probability of occurrence of basic events, with both the judgment ability level and the subjectivity reliability level taken into account. By means of the fuzzy Bayesian inference, the approach proposed in this article is capable of calculating the probability distribution of potential safety risks and identifying the most likely potential causes of accidents under both prior knowledge and given evidence circumstances. A case concerning the safety analysis of underground buried pipelines adjacent to the construction of the Wuhan Yangtze River Tunnel is presented. The results demonstrate the feasibility of the proposed FBN approach and its application potential. The proposed approach can be used as a decision tool to provide support for safety assurance and management in tunnel construction, and thus increase the likelihood of a successful project in a complex project environment. © 2015 Society for Risk Analysis.
Nakhi, Ali; Adepu, Raju; Rambabu, D; Kishore, Ravada; Vanaja, G R; Kalle, Arunasree M; Pal, Manojit
2012-07-01
Novel thieno[3,2-c]pyran-4-one based small molecules were designed as potential anticancer agents. Expeditious synthesis of these compounds was carried out via a multi-step sequence consisting of a few steps such as the Gewald reaction, Sandmeyer-type iodination, and Sonogashira-type coupling followed by iodocyclization and then various Pd-mediated C-C bond forming reactions. The overall strategy involved the construction of the thiophene ring followed by the fused pyranone moiety and then functionalization at the C-7 position of the resultant thieno[3,2-c]pyran-4-one framework. Some of the compounds synthesized showed selective growth inhibition of cancer cells in vitro, among which two compounds, for example 5d and 6c, showed IC50 values in the range of 2.0-2.5 μM. The crystal structure analysis of an active compound, along with the hydrogen bonding patterns and molecular arrangement present within the molecule, is described. Copyright © 2012 Elsevier Ltd. All rights reserved.
Commercialising genetically engineered animal biomedical products.
Sullivan, Eddie J; Pommer, Jerry; Robl, James M
2008-01-01
Research over the past two decades has increased the quality and quantity of tools available to produce genetically engineered animals. The number of potentially viable biomedical products from genetically engineered animals is increasing. However, moving from cutting-edge research to development and commercialisation of a biomedical product that is useful and wanted by the public involves significant challenges. Even early stage development of genetically engineered animal applications requires consideration of many steps, including quality assurance and quality control, risk management, gap analysis, founder animal establishment, cell banking, sourcing of animals and animal-derived material, animal facilities, product collection facilities and processing facilities. These steps are complicated and expensive. Biomedical applications of genetically engineered animals have had some recent successes and many applications are well into development. As researchers consider applications for their findings, having a realistic understanding of the steps involved in the development and commercialisation of a product produced in genetically engineered animals is useful in weighing the risk of genetic modification to the animal versus the potential public benefit of the application.
NASA Astrophysics Data System (ADS)
Zaraska, Leszek; Stępniowski, Wojciech J.; Sulka, Grzegorz D.; Ciepiela, Eryk; Jaskuła, Marian
2014-02-01
Anodic porous alumina layers were fabricated by two-step self-organized anodization in 0.3 M oxalic acid under various anodizing potentials ranging from 30 to 60 V at two different temperatures (10 and 17 °C). The effect of anodizing conditions on the structural features and pore arrangement of AAO was investigated in detail using a dedicated executable publication combined with ImageJ software. With increasing anodizing potential, a linear increase of the average pore diameter, interpore distance, wall thickness and barrier layer thickness, as well as a decrease of the pore density, were observed. In addition, higher pore diameter and porosity values were obtained for samples anodized at the elevated temperature, independently of the anodizing potential. The degree of pore order was investigated on the basis of Delaunay triangulations (defect maps) and the calculation of pair distribution and angle distribution functions (PDF and ADF), respectively. All methods confirmed that, in order to obtain nanoporous alumina with the best, hexagonal pore arrangement, a potential of 40 V should be applied during anodization. It was confirmed that the dedicated executable publication can be used for fast and comprehensive analysis of the nanopore arrangement and structural features of nanoporous oxide layers.
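The defect-map analysis described here reduces to a small computation: triangulate the pore centers and flag pores whose neighbor count differs from the 6 expected for a hexagonal lattice. A sketch with random coordinates standing in for ImageJ-extracted pore centers (note that boundary pores inflate the apparent defect fraction):

```python
# Delaunay-based defect map: count nearest neighbors per pore center.
import numpy as np
from scipy.spatial import Delaunay

def defect_map(centers):
    """centers: (n, 2) pore-center coordinates; returns counts and defect mask."""
    tri = Delaunay(centers)
    neighbors = [set() for _ in range(len(centers))]
    for simplex in tri.simplices:            # each simplex is a triangle
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    counts = np.array([len(n) for n in neighbors])
    return counts, counts != 6               # True marks a lattice defect

centers = np.random.default_rng(1).uniform(0, 1000, size=(500, 2))  # nm
counts, defects = defect_map(centers)
print(defects.mean())    # fraction of 'defective' pores (boundary-inflated)
```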
ERPLAB: an open-source toolbox for the analysis of event-related potentials
Lopez-Calderon, Javier; Luck, Steven J.
2014-01-01
ERPLAB toolbox is a freely available, open-source toolbox for processing and analyzing event-related potential (ERP) data in the MATLAB environment. ERPLAB is closely integrated with EEGLAB, a popular open-source toolbox that provides many EEG preprocessing steps and an excellent user interface design. ERPLAB adds to EEGLAB’s EEG processing functions, providing additional tools for filtering, artifact detection, re-referencing, and sorting of events, among others. ERPLAB also provides robust tools for averaging EEG segments together to create averaged ERPs, for creating difference waves and other recombinations of ERP waveforms through algebraic expressions, for filtering and re-referencing the averaged ERPs, for plotting ERP waveforms and scalp maps, and for quantifying several types of amplitudes and latencies. ERPLAB’s tools can be accessed either from an easy-to-learn graphical user interface or from MATLAB scripts, and a command history function makes it easy for users with no programming experience to write scripts. Consequently, ERPLAB provides both ease of use and virtually unlimited power and flexibility, making it appropriate for the analysis of both simple and complex ERP experiments. Several forms of documentation are available, including a detailed user’s guide, a step-by-step tutorial, a scripting guide, and a set of video-based demonstrations. PMID:24782741
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark Hilary; Mowry, Curtis Dale; Kotula, Paul Gabriel
Thermal decomposition of poly(dimethylsiloxane) compounds, Sylgard® 184 and 186, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a method of producing multiway data using a stepped thermal desorption. The technique involves sequentially heating a sample of the material of interest with subsequent analysis in a commercial GC/MS system. The decomposition chromatograms were analyzed using multivariate analysis tools including principal component analysis (PCA), factor rotation employing the varimax criterion, and multivariate curve resolution. The results of the analysis show seven components related to offgassing of various fractions of siloxanes that vary as a function of temperature. TD/GC-MS is a powerful analytical technique for analyzing chemical mixtures, with great potential in numerous analytic areas including materials analysis; sports medicine, in the detection of designer drugs; and biological research, for metabolomics. Data analysis is complicated, far from automated, and can result in high false positive or false negative rates. We have demonstrated a step-wise TD/GC-MS technique that removes more volatile compounds from a sample before extracting the less volatile compounds. This creates an additional dimension of separation before the GC column, while simultaneously generating three-way data. Sandia's proven multivariate analysis methods, when applied to these data, have several advantages over current commercial options. The approach has also demonstrated potential for success in finding and enabling identification of trace compounds. Several challenges remain, however, including understanding the sources of noise in the data, outlier detection, improving the data pretreatment and analysis methods, developing a software tool for ease of use by the chemist, and demonstrating our belief that this multivariate analysis will enable superior differentiation capabilities. In addition, noise and system artifacts challenge the analysis of GC-MS data collected on lower cost equipment, ubiquitous in commercial laboratories. This research has the potential to affect many areas of analytical chemistry including materials analysis, medical testing, and environmental surveillance. It could also provide a method to measure adsorption parameters for chemical interactions on various surfaces by measuring desorption as a function of temperature for mixtures. We have presented results of a novel method for examining offgas products of a common PDMS material. Our method utilizes a stepped TD/GC-MS data acquisition scheme that may be almost totally automated, coupled with multivariate analysis schemes. This method of data generation and analysis can be applied to a number of materials aging and thermal degradation studies.
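The multivariate step can be sketched generically: unfold the stepped-desorption data (one chromatogram per temperature step) into a matrix and run PCA, so that scores track step-by-step offgassing trends and loadings approximate component profiles. sklearn stands in for the authors' in-house tools, and the array shapes are illustrative:

```python
# PCA on unfolded stepped TD/GC-MS data (temperature steps x channels).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(7, 1200))       # 7 desorption steps, flattened RT x m/z

pca = PCA(n_components=3)
scores = pca.fit_transform(X)        # per-step offgassing trends
loadings = pca.components_           # spectral/chromatographic profiles
print(pca.explained_variance_ratio_)
```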
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario is analyzed in which multiple core samples would be acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and makes the analysis tractable by breaking the process down into small analyzable steps.
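The Markov-chain bookkeeping described here can be made concrete with a toy example: a vector n holds the expected VEM count on each component, a release probability applies at each time step, and a transport matrix T redistributes released organisms (including into a 'sample' state). All numbers below are invented:

```python
# Expected-value propagation of VEM counts over discrete SAH time steps.
import numpy as np

components = ["bit", "arm", "sample"]
n = np.array([100.0, 10.0, 0.0])          # expected VEMs at step 0 (invented)
p_rel = np.array([1e-3, 1e-4, 0.0])       # release probability per step

T = np.array([[0.0, 0.2, 0.0],            # T[i, j]: released VEM moves j -> i
              [0.1, 0.0, 0.0],            # (column sums < 1: the rest is lost)
              [0.05, 0.01, 1.0]])

for step in range(20):                    # 20 discrete SAH time steps
    released = p_rel * n
    n = n - released + T @ released
print("expected VEMs in sample:", n[components.index("sample")])
```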
NASA Astrophysics Data System (ADS)
Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes
2002-05-01
If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and in a PACS environment. This included direct observation of work processes, interviews with involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting, with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care', from the ordering of an image to the delivery of the final product (image plus report). Interference of electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved very helpful for the evaluation of complex work processes linking radiology and the ICU.
Puttini, Stefania; Ouvrard-Pascaud, Antoine; Palais, Gael; Beggah, Ahmed T; Gascard, Philippe; Cohen-Tannoudji, Michel; Babinet, Charles; Blot-Chabaud, Marcel; Jaisser, Frederic
2005-03-16
Functional genomic analysis is a challenging step in the so-called post-genomic field. Identification of potential targets using large-scale gene expression analysis requires functional validation to identify those that are physiologically relevant. Genetically modified cell models are often used for this purpose allowing up- or down-expression of selected targets in a well-defined and if possible highly differentiated cell type. However, the generation of such models remains time-consuming and expensive. In order to alleviate this step, we developed a strategy aimed at the rapid and efficient generation of genetically modified cell lines with conditional, inducible expression of various target genes. Efficient knock-in of various constructs, called targeted transgenesis, in a locus selected for its permissibility to the tet inducible system, was obtained through the stimulation of site-specific homologous recombination by the meganuclease I-SceI. Our results demonstrate that targeted transgenesis in a reference inducible locus greatly facilitated the functional analysis of the selected recombinant cells. The efficient screening strategy we have designed makes possible automation of the transfection and selection steps. Furthermore, this strategy could be applied to a variety of highly differentiated cells.
A framework provided an outline toward the proper evaluation of potential screening strategies.
Adriaensen, Wim J; Matheï, Cathy; Buntinx, Frank J; Arbyn, Marc
2013-06-01
Screening tests are often introduced into clinical practice without proper evaluation, despite the increasing awareness that screening is a double-edged sword that can lead to either net benefits or harms. Our objective was to develop a comprehensive framework for the evaluation of new screening strategies. Elaborating on the existing concepts proposed by experts, a stepwise framework is proposed to evaluate whether a potential screening test can be introduced as a screening strategy into clinical practice. The principle of screening strategy evaluation is illustrated for cervical cancer, which is a template for screening because of the existence of an easily detectable and treatable precursor lesion. The evaluation procedure consists of six consecutive steps. In steps 1-4, the technical accuracy, place of the test in the screening pathway, diagnostic accuracy, and longitudinal sensitivity and specificity of the screening test are assessed. In steps 5 and 6, the impact of the screening strategy on the patient and population levels, respectively, is evaluated. The framework incorporates a harm and benefit trade-off and cost-effectiveness analysis. Our framework provides an outline toward the proper evaluation of potential screening strategies before considering implementation. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1974-01-01
The purpose of the BRAVO User's Manual is to describe the BRAVO methodology in terms of step-by-step procedures. The BRAVO methodology then becomes a tool which a team of analysts can utilize to perform cost effectiveness analyses on potential future space applications with a relatively general set of input information and a relatively small expenditure of resources. An overview of the BRAVO procedure is given by describing the complete procedure in a general form.
NASA Astrophysics Data System (ADS)
Kokorina, Alina A.; Goryacheva, Irina Y.; Sapelkin, Andrei V.; Sukhorukov, Gleb B.
2018-04-01
Photoluminescent (PL) carbon nanoparticles (CNPs) have been synthesized by one-step microwave irradiation from a water solution of sodium dextran sulfate (DSS) as the sole carbon source. The microwave (MW) method is very simple and cheap, and it provides fast synthesis of CNPs. We varied the synthesis time to obtain highly luminescent CNPs. The synthesized CNPs exhibit excitation-dependent photoluminescence, and the final CNP water solution shows blue-green luminescence. The CNPs have low cytotoxicity and good photostability and are potentially suitable candidates for bioimaging, analysis or analytical tests.
Mental health service acceptability for the armed forces veteran community.
Farrand, P; Jeffs, A; Bloomfield, T; Greenberg, N; Watkins, E; Mullan, E
2018-06-15
Despite developments in mental health services for armed forces veterans and family members, barriers to access associated with poor levels of acceptability regarding service provision remain. Adapting a Step 2 mental health service based on low-intensity cognitive behavioural therapy (CBT) interventions to represent a familiar context and meet the needs of the armed forces veteran community may serve to enhance acceptability and reduce help-seeking barriers. To examine the acceptability of a Step 2 low-intensity CBT mental health service adapted for armed forces veterans and family members, provided by a UK Armed Forces charity. Qualitative study using individual semi-structured interviews with armed forces veterans and family members of those injured or becoming unwell while serving in the British Armed Forces. Data analysis was undertaken using thematic analysis alongside disconfirming case analysis. Adapting a Step 2 mental health service for armed forces veterans and family members enhanced acceptability and promoted help-seeking. Wider delivery characteristics associated with Step 2 mental health services within the Improving Access to Psychological Therapies (IAPT) programme also contributed to service acceptability. However, limitations of Step 2 mental health service provision were also identified. A Step 2 mental health service adapted for armed forces veterans and family members enhances acceptability and may potentially overcome help-seeking barriers. However, concerns remain regarding ways to accommodate the treatment of post-traumatic stress disorder and provide support for family members.
Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.
ERIC Educational Resources Information Center
Gray, Peter J.
This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…
Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D
2008-01-01
Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
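The conditional probabilities underlying a CSA are straightforward to compute from sequential records: P(consequence | behavior) versus P(consequence | no behavior), with points far from the diagonal of that space suggesting a contingent relation. A toy sketch:

```python
# One point in contingency space from sequential behavior/consequence records.
import numpy as np

def csa_point(behavior, consequence):
    """behavior, consequence: boolean arrays over observation intervals."""
    b, c = np.asarray(behavior, bool), np.asarray(consequence, bool)
    p_c_given_b = c[b].mean() if b.any() else np.nan
    p_c_given_not_b = c[~b].mean() if (~b).any() else np.nan
    return p_c_given_b, p_c_given_not_b

behavior    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # invented interval records
consequence = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
print(csa_point(behavior, consequence))        # -> (0.8, 0.2)
```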
Influence of fault steps on rupture termination of strike-slip earthquake faults
NASA Astrophysics Data System (ADS)
Li, Zhengfang; Zhou, Bengang
2018-03-01
A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps and the seismic rupture, and a poor correlation between the step number and seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. The dilational fault step is easier to rupture through than the compressional fault step. The smallest width of the fault step for rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as for estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
Co-authorship network analysis in health research: method and potential use.
Fonseca, Bruna de Paula Fonseca E; Sampaio, Ricardo Barros; Fonseca, Marcus Vinicius de Araújo; Zicker, Fabio
2016-04-30
Scientific collaboration networks are a hallmark of contemporary academic research. Researchers are no longer independent players, but members of teams that bring together complementary skills and multidisciplinary approaches around common goals. Social network analysis and co-authorship networks are increasingly used as powerful tools to assess collaboration trends and to identify leading scientists and organizations. The analysis reveals the social structure of the networks by identifying actors and their connections. This article reviews the method and potential applications of co-authorship network analysis in health. The basic steps for conducting co-authorship studies in health research are described and common network metrics are presented. The application of the method is exemplified by an overview of the global research network for Chikungunya virus vaccines.
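The basic steps reduce to a few lines with networkx: one node per author, an edge (weighted by shared papers) per co-author pair, and centrality metrics to surface leading or bridging researchers. The paper lists below are invented:

```python
# Build a weighted co-authorship graph and rank authors by centrality.
from itertools import combinations
import networkx as nx

papers = [["Silva", "Costa", "Souza"],
          ["Costa", "Pereira"],
          ["Silva", "Pereira", "Lima"]]

G = nx.Graph()
for authors in papers:
    for a, b in combinations(authors, 2):     # every co-author pair
        w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

print(sorted(G.degree, key=lambda kv: -kv[1]))   # connectivity ranking
print(nx.betweenness_centrality(G))              # brokerage between groups
```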
Herrero, Enrique; Chen, Qing-Song; Hernández, Javier; Sun, Shi-Gang; Feliu, Juan M
2011-10-06
The oxidation of adsorbed CO on Pt single crystal electrodes has been studied in alkaline media. The surfaces used in this study were the Pt(111) electrode and vicinal stepped and kinked surfaces with (111) terraces. The kinked surfaces have either (110) steps broken by (100) kinks or (100) steps broken by (110) kinks and different kink densities. The voltammetric profiles for the CO stripping on those electrodes show peaks corresponding to the oxidation of CO on the (111) terraces, on the (100) steps/kinks and on the (110) steps/kinks at very distinctive potentials. Additionally, the stripping voltammograms always present a prewave. The analysis of the results with the different stepped and kinked surfaces indicates that the presence of the prewave is not associated with defects or kinks in the electrode surface. Also, the clear separation of the CO stripping process in different peak contributions indicates that the mobility of CO on the surface is very low. Using partial CO stripping experiments and studies at different pH, it has been proposed that the low mobility is a consequence of the negative absolute potential at which the adlayers are formed in alkaline media. Also, the surface diffusion coefficient for CO in these media has been estimated from the dependence of the stripping charge of the peaks with the scan rate of the voltammetry.
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
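The evaluation step ultimately rests on nondominated sorting: from candidate designs scored on several objectives (all minimized here, e.g. potential environmental impacts and negative profit), keep only the Pareto-optimal set. A minimal filter:

```python
# Keep only nondominated (Pareto-optimal) designs; all objectives minimized.
import numpy as np

def pareto_front(F):
    """F: (n_designs, n_objectives); returns boolean mask of kept designs."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominated.any()
    return keep

F = np.array([[1.0, 9.0], [2.0, 4.0], [3.0, 3.0], [4.0, 5.0], [8.0, 1.0]])
print(pareto_front(F))    # -> [ True  True  True False  True]
```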
Mycotoxin analysis: an update.
Krska, Rudolf; Schubert-Ullrich, Patricia; Molinelli, Alexandra; Sulyok, Michael; MacDonald, Susan; Crews, Colin
2008-02-01
Mycotoxin contamination of cereals and related products used for feed can cause intoxication, especially in farm animals. Therefore, efficient analytical tools for the qualitative and quantitative analysis of toxic fungal metabolites in feed are required. Current methods usually include an extraction step, a clean-up step to reduce or eliminate unwanted co-extracted matrix components and a separation step with suitably specific detection ability. Quantitative methods of analysis for most mycotoxins use immunoaffinity clean-up with high-performance liquid chromatography (HPLC) separation in combination with UV and/or fluorescence detection. Screening of samples contaminated with mycotoxins is frequently performed by thin layer chromatography (TLC), which yields qualitative or semi-quantitative results. Nowadays, enzyme-linked immunosorbent assays (ELISA) are often used for rapid screening. A number of promising methods, such as fluorescence polarization immunoassays, dipsticks, and even newer methods such as biosensors and non-invasive techniques based on infrared spectroscopy, have shown great potential for mycotoxin analysis. Currently, there is a strong trend towards the use of multi-mycotoxin methods for the simultaneous analysis of several of the important Fusarium mycotoxins, which is best achieved by LC-MS/MS (liquid chromatography with tandem mass spectrometry). This review focuses on recent developments in the determination of mycotoxins with a special emphasis on LC-MS/MS and emerging rapid methods.
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. This paper divides a complete machine learning process into three steps: data pre-treatment; feature extraction and dimension reduction; and system modeling. It then reviews the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies.
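The three-step decomposition maps naturally onto a pipeline. A minimal sketch with scikit-learn, picking one common method per step (the choices here are illustrative; the review surveys many alternatives):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# step 1: data pre-treatment (scaling), step 2: feature extraction /
# dimension reduction (PCA), step 3: system modeling (an RBF-kernel SVM)
model = Pipeline([
    ("pretreat", StandardScaler()),
    ("reduce", PCA(n_components=10)),
    ("model", SVC(kernel="rbf")),
])
# usage: model.fit(X_train, y_train); model.predict(X_test)
# where X is a (samples x channels) matrix of sensor responses
```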
Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance.
Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca
2016-01-01
Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus on the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occur earlier and show faster rise time as compared to the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, responsible for action potential firing in response to rapid step-wise depolarizing currents.
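A toy leaky integrate-and-fire simulation illustrates the enabling role of noise for a sub-threshold step. This is a sketch, not a model of DRG neurons: all parameters are invented, and the stochastic-resonance peak reported in the abstract is measured with a normalized power norm, which this raw spike count does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_count(i_step, noise_sd, t_ms=500.0, dt=0.1, tau=10.0, v_th=1.0):
    """Spikes of a leaky integrate-and-fire cell driven by a sub-threshold
    current step plus additive Gaussian noise (Euler-Maruyama update)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_ms / dt)):
        v += (dt / tau) * (i_step - v) + noise_sd * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:
            spikes += 1
            v = 0.0                    # reset after each spike
    return spikes

# The step alone saturates at v = 0.9 < threshold 1.0 and never fires;
# superimposed noise recruits spikes, as in the step + noise trials.
for sd in (0.0, 0.05, 0.15, 0.5):
    print(f"noise sd {sd}: {lif_spike_count(0.9, sd)} spikes")
```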
Protein mass spectra data analysis for clinical biomarker discovery: a global review.
Roy, Pascal; Truntzer, Caroline; Maucort-Boulch, Delphine; Jouve, Thomas; Molinari, Nicolas
2011-03-01
The identification of new diagnostic or prognostic biomarkers is one of the main aims of clinical cancer research. In recent years there has been a growing interest in using high-throughput technologies for the detection of such biomarkers. In particular, mass spectrometry appears as an exciting tool with great potential. However, to extract any benefit from the massive potential of clinical proteomic studies, appropriate methods, their improvement, and validation are required. To better understand the key statistical points involved in such studies, this review presents the main steps of protein mass spectra data analysis, from the pre-processing of the data to the identification and validation of biomarkers.
Communicative Interaction Processes Involving Non-Vocal Physically Handicapped Children.
ERIC Educational Resources Information Center
Harris, Deberah
1982-01-01
Communication prostheses are critical components of the nonvocal child's communication process, but are only one component. This article focuses on the steps involved in communicative interaction processes and the potential barriers to the development of effective interaction and analysis of nonvocal communicative interactions. A discussion of the…
Is parenting style a predictor of suicide attempts in a representative sample of adolescents?
Donath, Carolin; Graessel, Elmar; Baier, Dirk; Bleich, Stefan; Hillemacher, Thomas
2014-04-26
Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated, as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As a basic model, the association between parenting and suicide attempts was explored via binary logistic regression, controlled for age and sex. II) The predictive value of 13 additional potential risk/protective factors was analyzed with single binary logistic regression analyses for each predictor alone; non-significant predictors were excluded from Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Three parental variables, all protective, showed a relevant association with suicide attempts in adolescents: mother's warmth and father's warmth in childhood, and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective; OR 0.79) and Rejecting-Neglecting parenting (risk; OR 1.63) were identified as significant predictors (p < .001) of suicide attempts. Seven further variables were statistically significant and clinically relevant: ADHD, female sex, smoking, binge drinking, absenteeism/truancy, migration background, and parental separation events. Parenting style does matter: while children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk, as we were able to show for suicide attempts in adolescence.
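The three-step screen can be sketched with any logistic-regression routine. A hedged sketch on synthetic data with statsmodels: the variable names and effect sizes are invented, and the single-predictor screen stands in for Steps I-II.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
warmth = rng.normal(size=n)              # protective parenting dimension
risk = rng.normal(size=n)                # some additional risk factor
p = 1.0 / (1.0 + np.exp(-(-3.0 - 0.4 * warmth + 0.5 * risk)))
y = (rng.random(n) < p).astype(float)    # binary outcome (0/1)

# Steps I-II: each predictor alone (plus intercept); screen by p-value
for name, x in [("warmth", warmth), ("risk", risk)]:
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    print(f"{name}: OR = {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.4f}")

# Step III: multivariate model with the predictors that survived screening
X = sm.add_constant(np.column_stack([warmth, risk]))
print(sm.Logit(y, X).fit(disp=0).summary())
```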
Carvalho, Luis Felipe C S; Nogueira, Marcelo Saito; Neto, Lázaro P M; Bhattacharjee, Tanmoy T; Martin, Airton A
2017-11-01
Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings.
Direct analysis of terpenes from biological buffer systems using SESI and IR-MALDESI.
Nazari, Milad; Malico, Alexandra A; Ekelöf, Måns; Lund, Sean; Williams, Gavin J; Muddiman, David C
2018-01-01
Terpenes are the largest class of natural products with a wide range of applications including use as pharmaceuticals, fragrances, flavorings, and agricultural products. Terpenes are biosynthesized by the condensation of a variable number of isoprene units resulting in linear polyisoprene diphosphate units, which can then be cyclized by terpene synthases into a range of complex structures. While these cyclic structures have immense diversity and potential in different applications, their direct analysis in biological buffer systems requires intensive sample preparation steps such as salt cleanup, extraction with organic solvents, and chromatographic separations. Electrospray post-ionization can be used to circumvent many sample cleanup and desalting steps. SESI and IR-MALDESI are two examples of ionization methods that employ electrospray post-ionization at atmospheric pressure and temperature. By coupling the two techniques and doping the electrospray solvent with silver ions, olefinic terpenes of different classes and varying degrees of volatility were directly analyzed from a biological buffer system with no sample workup steps.
Surveyor 3: Bacterium isolated from lunar retrieved television camera
NASA Technical Reports Server (NTRS)
Mitchell, F. J.; Ellis, W. L.
1972-01-01
Microbial analysis was the first of several studies of the retrieved camera and was performed immediately after the camera was opened. The emphasis of the analysis was placed upon isolating microorganisms that could be potentially pathogenic for man. Every step in the retrieval of the Surveyor 3 television camera was analyzed for possible contamination sources, including camera contact by the astronauts, ingassing in the lunar module and command module during the mission or at splashdown, and handling during quarantine, disassembly, and analysis at the Lunar Receiving Laboratory.
Charged particle mobility refrigerant analyzer
Allman, S.L.; Chunghsuan Chen; Chen, F.C.
1993-02-02
A method for analyzing a gaseous electronegative species comprises the steps of providing an analysis chamber; providing an electric field of known potential within the analysis chamber; admitting into the analysis chamber a gaseous sample containing the gaseous electronegative species; providing a pulse of free electrons within the electric field so that the pulse of free electrons interacts with the gaseous electronegative species so that a swarm of electrically charged particles is produced within the electric field; and, measuring the mobility of the electrically charged particles within the electric field.
On-orbit assembly of a team of flexible spacecraft using potential field based method
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping
2017-04-01
In this paper, a novel control strategy is developed based on an artificial potential field for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven to a pre-assembly configuration first and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a circular virtual leader is introduced. The potential field depends mainly on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that there are no local minima for the potential function and that the global minimum is zero. If the function equals zero, the solution is not a single state but a set, and all states in the set correspond to desired configurations. A Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, a second potential field is included to avoid inter-member collision. In the control design of the second step, only a small modification is made to the controller of the first step. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
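The mechanics of such controllers amount to gradient descent on a designed potential. A generic point-mass sketch with quadratic attraction and short-range repulsion; it omits the paper's ellipse outlines, attitude terms, ring topology and Lyapunov design, and all gains are invented:

```python
import numpy as np

def apf_step(pos, goal, others, k_att=1.0, k_rep=0.5, d_safe=2.0, gain=0.1):
    """One descent step on U = 0.5*k_att*|pos-goal|^2 plus the classical
    short-range repulsive term 0.5*k_rep*(1/d - 1/d_safe)^2 for d < d_safe."""
    grad = k_att * (pos - goal)                       # attractive gradient
    for q in others:
        d = np.linalg.norm(pos - q)
        if 0.0 < d < d_safe:                          # repel only when close
            grad += k_rep * (1.0 / d_safe - 1.0 / d) / d**3 * (pos - q)
    return pos - gain * grad

# usage: iterate apf_step for each agent until all positions settle
```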
A Tutorial in Bayesian Potential Outcomes Mediation Analysis.
Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P
2018-01-01
Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator to outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in the application to an empirical example.
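One Monte Carlo flavor of the indirect-effect computation can be sketched in a few lines. This is a generic sketch under flat priors, normal errors and the usual no-unmeasured-confounding assumption, not the tutorial's own code; the data and effect sizes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)                       # randomized treatment
m = 0.5 * x + rng.normal(size=n)             # mediator model, true a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome model, true b = 0.4

def slope_and_se(X, t):
    """OLS coefficients and standard errors."""
    beta, rss, _, _ = np.linalg.lstsq(X, t, rcond=None)
    cov = rss[0] / (len(t) - X.shape[1]) * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

a, a_se = slope_and_se(np.column_stack([np.ones(n), x]), m)
b, b_se = slope_and_se(np.column_stack([np.ones(n), m, x]), y)

# approximate posterior of the indirect effect a*b under flat priors
draws = rng.normal(a[1], a_se[1], 20000) * rng.normal(b[1], b_se[1], 20000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"indirect effect: mean {draws.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```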
Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules
Panzeri, Francesco
2017-01-01
We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser excitation design, as well as analysis challenges and their solutions.
Crystal step edges can trap electrons on the surfaces of n-type organic semiconductors.
He, Tao; Wu, Yanfei; D'Avino, Gabriele; Schmidt, Elliot; Stolte, Matthias; Cornil, Jérôme; Beljonne, David; Ruden, P Paul; Würthner, Frank; Frisbie, C Daniel
2018-05-30
Understanding relationships between microstructure and electrical transport is an important goal for the materials science of organic semiconductors. Combining high-resolution surface potential mapping by scanning Kelvin probe microscopy (SKPM) with systematic field effect transport measurements, we show that step edges can trap electrons on the surfaces of single crystal organic semiconductors. n-type organic semiconductor crystals exhibiting positive step edge surface potentials display threshold voltages that increase and carrier mobilities that decrease with increasing step density, characteristic of trapping, whereas crystals that do not have positive step edge surface potentials do not have strongly step density dependent transport. A device model and microelectrostatics calculations suggest that trapping can be intrinsic to step edges for crystals of molecules with polar substituents. The results provide a unique example of a specific microstructure-charge trapping relationship and highlight the utility of surface potential imaging in combination with transport measurements as a productive strategy for uncovering microscopic structure-property relationships in organic semiconductors.
Heuett, William J; Beard, Daniel A; Qian, Hong
2008-05-15
Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T BS and S^T BS respectively where matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^{At} plays a central role in CMC. One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).
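For reference, the matrix relations named in the abstract can be collected in one display (B = A^{-1}; the control-coefficient labels C^J and C^mu are added here for readability and are not the paper's notation):

```latex
A = S R^{\mathsf{T}}, \qquad B = A^{-1}, \qquad
C^{J} \sim R^{\mathsf{T}} B S, \qquad
C^{\mu} \sim S^{\mathsf{T}} B S,
```

with e^{At} playing the central role in CMC.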
Lubin, Arnaud; Sheng, Sheng; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip
2017-11-17
A lack of knowledge of the expected concentration range, or an insufficient linear dynamic range of the analytical method applied, are common challenges for the analytical scientist. Samples that are above the upper limit of quantification are typically diluted and reanalyzed. The analysis of undiluted highly concentrated samples can cause contamination of the system, while the dilution step is time consuming and, as is the case for any sample preparation step, also potentially leads to precipitation, adsorption or degradation of the analytes. Copyright © 2017 Elsevier B.V. All rights reserved.
The touro 12-step: a systematic guide to optimizing survey research with online discussion boards.
Ip, Eric J; Barnett, Mitchell J; Tenerowicz, Michael J; Perry, Paul J
2010-05-27
The Internet, in particular discussion boards, can provide a unique opportunity for recruiting participants in online research surveys. Despite its outreach potential, there are significant barriers which can limit its success. Trust, participation, and visibility issues can all hinder the recruitment process; the Touro 12-Step was developed to address these potential hurdles. By following this step-by-step approach, researchers will be able to minimize these pitfalls and maximize their recruitment potential via online discussion boards.
COMPENDIUM OF METHODS FOR THE DETERMINATION ...
This Second Edition of the Compendium has been prepared to provide regional, state and local environmental regulatory agencies with step-by-step sampling and analysis procedures for the determination of selected toxic organic pollutants in ambient air. It is designed to assist those persons responsible for sampling and analysis of toxic organic pollutants in complying with the requirements of Title III of the Clean Air Act. This revised Compendium presents a set of 17 methods in a standardized format with a variety of applicable sampling methods, as well as several analytical techniques, for specific classes of organic pollutants, as appropriate to the specific pollutant compound, its level, and potential interferences. Consequently, this treatment allows the user flexibility in selecting alternatives to complement his or her background and laboratory capability. Information
Cochlear Processes: A Research Report.
ERIC Educational Resources Information Center
Zwislocki, Jozef J.
This paper summarizes recent research on functions of the cochlea of the inner ear. The cochlea is described as the seat of the first step in the auditory sound analysis and transduction of mechanical vibration into electrochemical processes leading to the generation of neural action potentials. The cochlea is also described as a frequent seat of…
Internal PR for Education Associations. PR Bookshelf No. 4.
ERIC Educational Resources Information Center
National Education Association, Washington, DC.
This booklet contains discussion of internal public relations for a local education association with suggestions for enhancing the association's image with its members and potential members. The five sections are (1) "Start with Analysis and Evaluation"--a listing of steps in planning an internal public relations program; (2) "Orientation: A Key…
Point clouds segmentation as base for as-built BIM creation
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2015-08-01
In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
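The first step, floor segmentation from the point distribution along Z, can be sketched as a histogram-peak analysis. A minimal version; bin_size and min_frac are hypothetical parameters, not values from the paper:

```python
import numpy as np
from scipy.signal import find_peaks

def segment_floors(points, bin_size=0.05, min_frac=0.02):
    """Label each TLS point with a floor index from peaks of the Z histogram.

    Horizontal slabs (floors/ceilings) appear as strong peaks of the point
    distribution along Z; points are binned between consecutive peak heights.
    """
    z = points[:, 2]
    hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    peaks, _ = find_peaks(hist, height=min_frac * len(z))
    slab_z = edges[peaks]              # candidate floor/ceiling heights
    return np.digitize(z, slab_z), slab_z
```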
Economou, Anastasios; Voulgaropoulos, Anastasios
2003-01-01
The development of a dedicated automated sequential-injection analysis apparatus for anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV) is reported. The instrument comprised a peristaltic pump, a multiposition selector valve and a home-made potentiostat and used a mercury-film electrode as the working electrode in a thin-layer electrochemical detector. Programming of the experimental sequence was performed in LabVIEW 5.1. The sequence of operations included formation of the mercury film, electrolytic or adsorptive accumulation of the analyte on the electrode surface, recording of the voltammetric current-potential response, and cleaning of the electrode. The stripping step was carried out by applying a square-wave (SW) potential-time excitation signal to the working electrode. The instrument allowed unattended operation since multiple-step sequences could be readily implemented through the purpose-built software. The utility of the analyser was tested for the determination of copper(II), cadmium(II), lead(II) and zinc(II) by SWASV and of nickel(II), cobalt(II) and uranium(VI) by SWAdSV.
The use of administrative and other records for the analysis of internal migration.
1983-01-01
There are 5 main types of administrative records that are of potential use in the analysis of internal migration in Africa: 1) population registers, 2) electoral rolls, 3) school records, 4) labor or employment records, and 5) social security records. The population register provides legal identification for the individual and records his movements from 1 civil subdivision to another. The process of establishing a population register is not a simple one. All 5 of these records are incomplete, defective, and in most cases decentralized; yet, in spite of these limitations, administrative records are potential sources of migration data. Because of their incompleteness, major biases are likely to arise in their use. The 1st step is for National Statistical Services to assist in improving the coverage of events expected to be registered in any of these records. The 2nd step is to try to use the data through some form of ratio or regression estimation. If use is not made of the records for migration data, it is unlikely that the quality of the migration data in the records will ever improve.
Duggan, P S; Siegel, A W; Blass, D M; Bok, H; Coyle, J T; Faden, R; Finkel, J; Gearhart, J D; Greely, H T; Hillis, A; Hoke, A; Johnson, R; Johnston, M; Kahn, J; Kerr, D; King, P; Kurtzberg, J; Liao, S M; McDonald, J W; McKhann, G; Nelson, K B; Rao, M; Regenberg, A; Smith, K; Solter, D; Song, H; Sugarman, J; Traystman, R J; Vescovi, A; Yanofski, J; Young, W; Mathews, D J H
2009-05-01
The prospect of using cell-based interventions (CBIs) to treat neurological conditions raises several important ethical and policy questions. In this target article, we focus on issues related to the unique constellation of traits that characterize CBIs targeted at the central nervous system. In particular, there is at least a theoretical prospect that these cells will alter the recipients' cognition, mood, and behavior: brain functions that are central to our concept of the self. The potential for such changes, although perhaps remote, is cause for concern and careful ethical analysis. Both to enable better informed consent in the future and as an end in itself, we argue that early human trials of CBIs for neurological conditions must monitor subjects for changes in cognition, mood, and behavior; further, we recommend concrete steps for that monitoring. Such steps will help better characterize the potential risks and benefits of CBIs as they are tested and potentially used for treatment.
NASA Astrophysics Data System (ADS)
Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing
2017-02-01
Ischemic stroke is strongly correlated with carotid atherosclerosis and is mostly caused by vulnerable plaques. Analysis of plaque components is therefore particularly important for the detection of vulnerable plaques. Recently, plaque analysis based on multi-contrast magnetic resonance imaging has attracted great attention. Though multi-contrast MR imaging has potential for enhanced demonstration of the carotid wall, its performance is hampered by the misalignment of different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve the problem. It includes two steps: a rigid step using the iterative closest point algorithm to register the centerlines of the carotid artery extracted from multi-contrast MR images, and a non-rigid step using the thin plate spline to register the lumen boundaries of the carotid artery. In the rigid step, the centerline is extracted by tracking the cross-sectional images along the vessel direction calculated from the Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points of two similar boundaries. In addition, the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated on newly developed three-dimensional, fast and high-resolution multi-contrast black-blood MR imaging. Quantitative validation indicated that, after registration, the overlap of the two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm has effectively improved the accuracy of registration for further component analysis of carotid plaques.
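The rigid step can be sketched with the standard SVD-based (Kabsch) alignment inside an ICP loop. A minimal version for 3D centerline points; the shape context matching, thin plate spline warp and deterministic annealing of the non-rigid step are omitted:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping matched src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Iterative closest point: nearest-neighbour matching + rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # brute force
        R, t = rigid_align(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```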
Certification-Based Process Analysis
NASA Technical Reports Server (NTRS)
Knight, Russell L.
2013-01-01
Space mission architects are often challenged with knowing which investment in technology infusion will have the highest return. Certification-based analysis (CBA) gives architects and technologists a means to communicate the risks and advantages of infusing technologies at various points in a process. Various alternatives can be compared, and requirements based on supporting streamlining or automation can be derived and levied on candidate technologies. CBA is a technique for analyzing a process and identifying potential areas of improvement. The process and analysis products are used to communicate between technologists and architects. Process means any of the standard representations of a production flow; in this case, any individual steps leading to products, which feed into other steps, until the final product is produced at the end. This sort of process is common for space mission operations, where a set of goals is reduced eventually to a fully vetted command sequence to be sent to the spacecraft. Fully vetting a product is synonymous with certification. For some types of products, this is referred to as verification and validation, and for others it is referred to as checking. Fundamentally, certification is the step in the process where one ensures that a product works as intended, and contains no flaws.
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and it needs further investigation to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
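The step-selection rule can be sketched as the positive root of a quadratic bound on the predicted potential change. A hedged sketch of the idea only; dv_max and the clamps are illustrative, and the published el and tsr refinements are not reproduced:

```python
import numpy as np

def adaptive_dt(dvdt, d2vdt2, dv_max=0.5, dt_min=1e-3, dt_max=0.5):
    """Largest dt with |v'|*dt + 0.5*|v''|*dt**2 <= dv_max (second-order
    Taylor bound), i.e. the positive root of a quadratic in dt."""
    a, b = 0.5 * abs(d2vdt2), abs(dvdt)
    if a < 1e-12:                        # negligible curvature: linear bound
        dt = dv_max / b if b > 0 else dt_max
    else:                                # root of a*dt**2 + b*dt - dv_max = 0
        dt = (-b + np.sqrt(b * b + 4.0 * a * dv_max)) / (2.0 * a)
    return float(np.clip(dt, dt_min, dt_max))

# usage in an explicit integrator: dt = adaptive_dt(dv, d2v); v += dv * dt,
# with dv, d2v supplied by the ionic model at the current state
```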
Gu, Di; Gao, Simeng; Jiang, TingTing; Wang, Baohui
2017-03-15
To match the relentless pursuit of three research hot spots, namely efficient solar utilization, green and sustainable remediation of wastewater, and advanced oxidation processes, solar-mediated thermo-electrochemical oxidation of surfactant was proposed and developed for the green remediation of surfactant wastewater. The solar thermal electrochemical process (STEP), fully driven by solar energy converted to electric energy and heat, without an input of other energy, sustainably provides efficient thermo-electrochemical oxidation of a surfactant, exemplified by SDBS, in wastewater with the synergistic production of hydrogen. The electrooxidation-resistant surfactant is thermo-electrochemically oxidized to CO2 while hydrogen gas is generated, by lowering the effective oxidation potential and suppressing the oxidation activation energy, an effect originating from the combination of thermochemical and electrochemical effects. A clear mechanism of SDBS degradation can be proposed and discussed based on theoretical analysis of the electrochemical potential by a quantum chemical method and experimental analysis of the CV, TG, GC, FT-IR, UV-vis and fluorescence spectra and TOC. The degradation data provide a pilot for the treatment of SDBS wastewater; degradation appears to occur via desulfonation followed by aromatic-ring opening. The solar thermal utilization that initiates the desulfonation and activation of SDBS is one key step in the degradation process.
NASA Technical Reports Server (NTRS)
Allen, Carlton C.; Beaty, David W.
2010-01-01
Sample return from Mars has been advocated by numerous scientific advisory panels for over 30 years, most prominently beginning with the National Research Council's [1] strategy for the exploration of the inner solar system, and most recently by the Mars Exploration Program Analysis Group's (MEPAG) Next Decade Science Analysis Group [2]. Analysis of samples here on Earth would have enormous advantages over in situ analyses in producing the data quality needed to address many of the complex scientific questions the community has posed about Mars. Instead of a small, predetermined set of analytical techniques, state of the art preparative and instrumental resources of the entire scientific community could be applied to the samples. The analytical emphasis could shift as the meaning of each result becomes better appreciated. These arguments apply both to igneous rocks and to layered sedimentary materials, either of which could contain water and other volatile constituents. In 2009 MEPAG formed the Mid-Range Rover Science Analysis Group (MRR-SAG) to formulate a mission concept that would address two general objectives: (1) conduct high-priority in situ science and (2) make concrete steps towards the potential return of samples to Earth. This analysis resulted in a mission concept named the Mars Astrobiology Explorer-Cacher (MAX-C), which was envisioned for launch in the 2018 opportunity. After extensive discussion, this group concluded that by far the most definitive contribution to sample return by this mission would be to collect and cache, in an accessible location, a suite of compelling samples that could potentially be recovered and returned by a subsequent mission. This would have the effect of separating two of the essential functions of MSR, the acquisition of the sample collection and its delivery to martian orbit, into two missions.
Force sum rules for stepped surfaces of jellium
NASA Astrophysics Data System (ADS)
Farjam, Mani
2007-03-01
The Budd-Vannimenus theorem for the jellium surface is generalized to stepped surfaces of jellium. Our sum rules show that the average value of the electrostatic potential over the stepped jellium surface equals the value of the potential at the corresponding flat jellium surface. Several sum rules are tested with numerical results obtained within the Thomas-Fermi model of stepped surfaces.
Premedical special master’s programs increase USMLE STEP1 scores and improve residency placements
Khuder, Sadik
2017-01-01
The effectiveness of Special Master’s Programs (SMPs) in benefiting a potential medical student’s career beyond admission into an MD-program is largely unknown. This study aims to evaluate the role of SMPs, if any, in affecting the performance and outcomes of students during their medical school career. This study analyzed anonymous surveys of students and residents from the University of Toledo. The data analysis is used to evaluate a student’s academic performance before, during and after medical school. Measured metrics included: MCAT scores, undergraduate GPA, USMLE STEP 1 scores, participation in research, number of research publications, and residency placement. Of 500 people surveyed, 164 medical students or residents responded. Based on their responses, the respondents were divided into traditional (non-SMP) and SMP groups. As anticipated, MCAT scores (SMP: 29.82 vs. traditional: 31.10) are significantly (p<0.05) different between the two groups. Interestingly, there is no significant difference in USMLE STEP 1 scores (SMP: 232.7 vs. traditional: 233.8), and when normalized relative to MCAT scores, USMLE STEP 1 scores for SMP-students are significantly (p<0.05) higher than those of their traditional counterparts. Additionally, SMP-students did not outperform the traditional students with regard to research publications, but they did demonstrate a significant (p<0.05) proclivity towards surgical residencies when compared to the traditional students. Overall, our results highlight that SMPs potentiate USMLE STEP 1 performance and competitive residency placements for their students.
Nelson, Rohan; Howden, Mark; Hayman, Peter
2013-07-30
This paper explores heuristic methods with potential to place the analytical power of real options analysis into the hands of natural resource managers. The complexity of real options analysis has led to patchy or ephemeral adoption even by corporate managers familiar with the financial-market origins of valuation methods. Intuitively accessible methods for estimating the value of real options have begun to evolve, but their evaluation has mostly been limited to researcher-driven applications. In this paper we work closely with Bush Heritage Australia to evaluate the potential of real options analysis to support the intuitive judgement of conservation estate managers in covenanting land with uncertain future conservation value due to climate change. The results show that modified decision trees have potential to estimate the option value of covenanting individual properties while time and ongoing research resolve their future conservation value. Complementing this, Luehrman's option space has potential to assist managers with limited budgets to increase the portfolio value of multiple properties with different conservation attributes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Executive Overview of SEI MOSAIC: Managing for Success Using a Risk-Based Approach
2007-03-01
and provides the lens through which all potential outcomes are viewed and interpreted. Defining the context is thus an essential first step when...Success Analysis and Improvement Criteria (SEI MOSAIC)—a suite of advanced analysis methods for assessing complex, distributed programs, processes...achieve that set of objectives, four activities must be executed in the order shown, while also adhering to any cost and schedule constraints. Process
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Stepwise discriminant analysis has demonstrated the feasibility of feature identification using linear discriminant functions of ERTS-1 MSS band densities and their ratios. The analysis indicated that features such as small streams can be detected even when they are in dark mountain shadow. The potential utility of this and similar analytic techniques appears considerable, but the limits within which they can be applied to the analysis of ERTS-1 imagery are not yet fully known.
Failure mode and effects analysis of witnessing protocols for ensuring traceability during IVF.
Rienzi, Laura; Bariani, Fiorenza; Dalla Zorza, Michela; Romano, Stefania; Scarica, Catello; Maggiulli, Roberta; Nanni Costa, Alessandro; Ubaldi, Filippo Maria
2015-10-01
Traceability of cells during IVF is a fundamental aspect of treatment, and involves witnessing protocols. Failure mode and effects analysis (FMEA) is a method of identifying real or potential breakdowns in processes, and allows strategies to mitigate risks to be developed. To examine the risks associated with witnessing protocols, an FMEA was carried out in a busy IVF centre, before and after implementation of an electronic witnessing system (EWS). A multidisciplinary team was formed and moderated by human factors specialists. Possible causes of failures, and their potential effects, were identified and risk priority number (RPN) for each failure calculated. A second FMEA analysis was carried out after implementation of an EWS. The IVF team identified seven main process phases, 19 associated process steps and 32 possible failure modes. The highest RPN was 30, confirming the relatively low risk that mismatches may occur in IVF when a manual witnessing system is used. The introduction of the EWS allowed a reduction in the moderate-risk failure mode by two-thirds (highest RPN = 10). In our experience, FMEA is effective in supporting multidisciplinary IVF groups to understand the witnessing process, identifying critical steps and planning changes in practice to enable safety to be enhanced. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Ashley, Laura; Armitage, Gerry; Taylor, Julie
2017-03-01
Failure Modes and Effects Analysis (FMEA) is a prospective quality assurance methodology increasingly used in healthcare, which identifies potential vulnerabilities in complex, high-risk processes and generates remedial actions. We aimed, for the first time, to apply FMEA in a social care context to evaluate the process for recognising and referring children exposed to domestic abuse within one Midlands city safeguarding area in England. A multidisciplinary, multi-agency team of 10 front-line professionals undertook the FMEA, using a modified methodology, over seven group meetings. The FMEA included mapping out the process under evaluation to identify its component steps, identifying failure modes (potential errors) and possible causes for each step and generating corrective actions. In this article, we report the output from the FMEA, including illustrative examples of the failure modes and corrective actions generated. We also present an analysis of feedback from the FMEA team and provide future recommendations for the use of FMEA in appraising social care processes and practice. Although challenging, the FMEA was unequivocally valuable for team members and generated a significant number of corrective actions locally for the safeguarding board to consider in its response to children exposed to domestic abuse. © 2016 John Wiley & Sons Ltd.
Application of failure mode and effect analysis in an assisted reproduction technology laboratory.
Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola
2016-08-01
Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in the RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
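Across these FMEA studies, the bookkeeping behind the RPN is the conventional product of severity, occurrence and detectability scores. A minimal sketch with invented process steps and scores, not the worksheets of either study:

```python
# RPN = severity (S) * occurrence (O) * detectability (D), each on a small
# ordinal scale; higher RPN = higher priority for corrective action.
failure_modes = [
    {"step": "sample labelling", "S": 5, "O": 5, "D": 1},
    {"step": "gamete transfer",  "S": 5, "O": 1, "D": 2},
    {"step": "witnessing check", "S": 2, "O": 2, "D": 1},
]
for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f"{fm['step']:<16} RPN = {fm['RPN']}")
```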
Embodied Cognition is Not What you Think it is
Wilson, Andrew D.; Golonka, Sabrina
2013-01-01
The most exciting hypothesis in cognitive science right now is the theory that cognition is embodied. Like all good ideas in cognitive science, however, embodiment immediately came to mean six different things. The most common definitions involve the straightforward claim that “states of the body modify states of the mind.” However, the implications of embodiment are actually much more radical than this. If cognition can span the brain, body, and the environment, then the “states of mind” of disembodied cognitive science won’t exist to be modified. Cognition will instead be an extended system assembled from a broad array of resources. Taking embodiment seriously therefore requires both new methods and theory. Here we outline four key steps that research programs should follow in order to fully engage with the implications of embodiment. The first step is to conduct a task analysis, which characterizes from a first person perspective the specific task that a perceiving-acting cognitive agent is faced with. The second step is to identify the task-relevant resources the agent has access to in order to solve the task. These resources can span brain, body, and environment. The third step is to identify how the agent can assemble these resources into a system capable of solving the problem at hand. The last step is to test the agent’s performance to confirm that the agent is actually using the solution identified in step 3. We explore these steps in more detail with reference to two useful examples (the outfielder problem and the A-not-B error), and introduce how to apply this analysis to the thorny question of language use. Embodied cognition is more than we think it is, and we have the tools we need to realize its full potential.
USDA-ARS?s Scientific Manuscript database
Glycidyl fatty acid esters (GEs), one of the main contaminants in processed oil, are mainly formed during the deodorization step in the oil refining process of edible oils and therefore occur in almost all refined edible oils. GEs are potential carcinogens, due to the fact that they hydrolyze into t...
ERIC Educational Resources Information Center
Galbraith, Craig S.; Merrill, Gregory B.
2015-01-01
We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…
Safe Preparation of HCl and DCl for IR Spectroscopy
ERIC Educational Resources Information Center
Furlong, William R.; Grubbs, W. Tandy
2005-01-01
The widely used method of synthesizing HCl and DCl gases for infrared analysis by hydrolysis of benzoyl chloride includes a potentially dangerous final step whereby the frozen product is allowed to heat and expand into an infrared gas cell. The subsequent rapid rise in vapor pressure can "pop" open glass joints in the vacuum line and vent the…
ERIC Educational Resources Information Center
Van Meter, Jerry R.
This booklet is a general guide to park site planning. The four basic steps involved in developing a park site are a) determination of the uses of the site, b) analysis of the site potential for these uses, c) identification of the functional relationship among the uses, and d) coordination of the uses to the park sites. Uses of park sites are…
NASA Astrophysics Data System (ADS)
Scharfenberg, Franz-Josef; Bogner, Franz X.
2011-08-01
Emphasis on improving higher level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach using a quasi-experimental design with the conventional one-step mode. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with well-fitting intra- and inter-observer scores with respect to reliability). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled as 'all-rounders' and as 'passive students', and two clusters specific to each approach: 'observers' as well as 'high-experimenters' were identified only within the one-step approach whereas under the two-step conditions 'managers' and 'scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.
Quantum Transmission Conditions for Diffusive Transport in Graphene with Steep Potentials
NASA Astrophysics Data System (ADS)
Barletti, Luigi; Negulescu, Claudia
2018-05-01
We present a formal derivation of a drift-diffusion model for stationary electron transport in graphene, in the presence of sharp potential profiles, such as barriers and steps. Assuming the electric potential to have steep variations within a strip of vanishing width on a macroscopic scale, this strip is viewed as a quantum interface that couples the classical regions at its left and right sides. In the two classical regions, where the potential is assumed to be smooth, electron and hole transport is described in terms of semiclassical kinetic equations. The diffusive limit of the kinetic model is derived by means of a Hilbert expansion and a boundary layer analysis, and consists of drift-diffusion equations in the classical regions, coupled by quantum diffusive transmission conditions through the interface. The boundary layer analysis leads to the discussion of a four-fold Milne (half-space, half-range) transport problem.
Regional comparisons of on-site solar potential in the residential and industrial sectors
NASA Astrophysics Data System (ADS)
Gatzke, A. E.; Skewes-Cox, A. O.
1980-10-01
Regional and subregional differences in the potential development of decentralized solar technologies are studied. Two sectors of the economy were selected for intensive analysis: the residential and industrial sectors. The sequence of analysis follows the same general steps: (1) selection of appropriate prototypes within each land use sector disaggregated by census region; (2) characterization of the end-use energy demand of each prototype in order to match an appropriate decentralized solar technology to the energy demand; (3) assessment of the energy conservation potential within each prototype limited by land use patterns, technology efficiency, and variation in solar insolation; and (4) evaluation of the regional and subregional differences in the land use implications of decentralized energy supply technologies that result from the combination of energy demand, energy supply potential, and the subsequent addition of increasingly more restrictive policies to increase the percent contribution of on-site solar energy.
Knowledge Discovery from Vibration Measurements
Li, Jian; Wang, Daoyao
2014-01-01
The framework and particular algorithms of the pattern recognition process are widely adopted in structural health monitoring (SHM). However, as part of the overall process of knowledge discovery in databases (KDD), the results of pattern recognition are only changes and patterns of changes in data features. In this paper, based on the similarity between KDD and SHM and considering the particularity of SHM problems, a four-step framework of SHM is proposed which extends the final goal of SHM from detecting damage to extracting knowledge that facilitates decision making. The purposes and proper methods of each step of this framework are discussed. To demonstrate the proposed framework, a specific SHM method composed of second-order structural parameter identification, statistical control chart analysis, and system reliability analysis is then presented. To examine the performance of this method, real sensor data measured from a lab-size steel bridge model structure are used. The developed four-step framework has the potential to clarify the process of SHM and to facilitate the further development of SHM techniques. PMID:24574933
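A statistical control chart is one of the three components of the presented SHM method. The following sketch, with simulated data, shows the basic idea of Shewhart-style limits on an identified structural parameter; the baseline values, limits and damage scenario are assumptions for illustration only.

```python
import numpy as np

def shewhart_limits(baseline, k=3.0):
    """Center line and k-sigma control limits from a healthy-state baseline."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    return mu, mu - k * sigma, mu + k * sigma

rng = np.random.default_rng(0)
baseline = rng.normal(1.00, 0.02, 200)   # identified stiffness parameter, healthy state
monitored = rng.normal(0.95, 0.02, 50)   # later measurements (simulated 5% stiffness loss)

mu, lcl, ucl = shewhart_limits(baseline)
out_of_control = (monitored < lcl) | (monitored > ucl)
print(f"center={mu:.3f}, limits=({lcl:.3f}, {ucl:.3f}), alarms={out_of_control.sum()}/50")
```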
Heuett, William J; Beard, Daniel A; Qian, Hong
2008-01-01
Background Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^TBS and S^TBS respectively where matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^{At} plays a central role in CMC. Conclusion One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA). PMID:18482450
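The matrix relations stated in the Results can be checked numerically. The sketch below builds a small hypothetical network, forms A = SR^T, inverts it, and evaluates the control-coefficient expressions R^TBS and S^TBS; the stoichiometry and elasticity values are invented for illustration and are not tuned for dynamic stability.

```python
import numpy as np

# Hypothetical 2-species, 3-flux network near its steady state
S = np.array([[1.0, -1.0,  0.0],    # stoichiometric matrix (the S of FBA)
              [0.0,  1.0, -1.0]])
R = np.array([[0.8,  0.5,  0.0],    # related to the elasticity coefficients
              [0.0,  0.3,  0.9]])

A = S @ R.T            # stability matrix of the linearized dynamics: A = SR^T
B = np.linalg.inv(A)   # B = A^{-1}

# Control-coefficient matrices expressed through R^T B S and S^T B S
C_flux = R.T @ B @ S
C_potential = S.T @ B @ S
print(np.round(C_flux, 3))
print(np.round(C_potential, 3))
```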
40 CFR 60.1120 - What steps must I complete for my siting analysis?
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Protection of Environment, Vol. 6 (2010-07-01). Requirements: Siting Analysis, § 60.1120 What steps must I complete for my siting analysis? (a) For your siting analysis, you must complete five steps: (1) Prepare an analysis. (2) Make your analysis available to the...
Gullo, Charles A
2016-01-01
Biomedical programs have a potential treasure trove of data they can mine to assist admissions committees in identifying students who are likely to do well, and to help educational committees identify students who are likely to do poorly on standardized national exams and who may need remediation. In this article, we provide a step-by-step approach that schools can utilize to generate data that are useful when predicting the future performance of current students in any given program. We discuss the use of linear regression analysis as the means of generating those data and highlight some of its limitations. Finally, we lament that the combination of these institution-specific data sets is not being fully utilized at the national level, where these data could greatly assist programs at large.
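A minimal sketch of the kind of regression-based prediction described above, using simulated admissions data (the predictors, coefficients and remediation cutoff are assumptions, not institutional data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 300
# Hypothetical admissions predictors: entrance-exam score, GPA, interview rating
X = np.column_stack([rng.normal(60, 10, n), rng.normal(3.2, 0.4, n), rng.integers(1, 6, n)])
# Hypothetical national-exam outcome with noise
y = 0.8 * X[:, 0] + 15 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 8, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2 on held-out students:", round(r2_score(y_te, pred), 2))

# Flag current students predicted to score in the bottom decile for remediation
cutoff = np.quantile(pred, 0.10)
print("remediation flags:", (pred <= cutoff).sum())
```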
Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion
NASA Astrophysics Data System (ADS)
Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.
2002-05-01
This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6] and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variation equations by the implicit one-step Everhart's algorithm [1] was used. When calculating coordinates and velocities on the integration step (at given instants of time), the approximate Everhart formula was employed.
Sullivan, Maura E; Yates, Kenneth A; Inaba, Kenji; Lam, Lydia; Clark, Richard E
2014-05-01
Because of the automated nature of knowledge, experts tend to omit information when describing a task. A potential solution is cognitive task analysis (CTA). The authors investigated the percentage of knowledge experts omitted when teaching a cricothyrotomy to determine the percentage of additional knowledge gained during a CTA interview. Three experts were videotaped teaching a cricothyrotomy in 2010 at the University of Southern California. After transcription, they participated in CTA interviews for the same procedure. Three additional surgeons were recruited to perform a CTA for the procedure, and a "gold standard" task list was created. Transcriptions from the teaching sessions were compared with the task list to identify omitted steps (both "what" and "how" to do). Transcripts from the CTA interviews were compared against the task list to determine the percentage of knowledge articulated by each expert during the initial "free recall" (unprompted) phase of the CTA interview versus the amount of knowledge gained by using CTA elicitation techniques (prompted). Experts omitted an average of 71% (10/14) of clinical knowledge steps, 51% (14/27) of action steps, and 73% (3.6/5) of decision steps. For action steps, experts described "how to do it" only 13% (3.6/27) of the time. The average number of steps that were described increased from 44% (20/46) when unprompted to 66% (31/46) when prompted. This study supports previous research that experts unintentionally omit knowledge when describing a procedure. CTA is a useful method to extract automated knowledge and augment expert knowledge recall during teaching.
A method for scenario-based risk assessment for robust aerospace systems
NASA Astrophysics Data System (ADS)
Thomas, Victoria Katherine
In years past, aircraft conceptual design centered around creating a feasible aircraft that could be built and could fly the required missions. More recently, aircraft viability entered into conceptual design, allowing that the product's potential to be profitable should also be examined early in the design process. While examining an aerospace system's feasibility and viability early in the design process is extremely important, it is also important to examine system risk. In traditional aerospace systems risk analysis, risk is examined from the perspective of performance, schedule, and cost. Recently, safety and reliability analysis have been brought forward in the design process to also be examined during late conceptual and early preliminary design. While these analyses work as designed, existing risk analysis methods and techniques are not designed to examine an aerospace system's external operating environment and the risks present there. A new method has been developed here to examine, during the early part of concept design, the risk associated with not meeting assumptions about the system's external operating environment. The risks are examined in five categories: employment, culture, government and politics, economics, and technology. The risks are examined over a long time-period, up to the system's entire life cycle. The method consists of eight steps over three focus areas. The first focus area is Problem Setup. During problem setup, the problem is defined and understood to the best of the decision maker's ability. There are four steps in this area, in the following order: Establish the Need, Scenario Development, Identify Solution Alternatives, and Uncertainty and Risk Identification. There is significant iteration between steps two through four. Focus area two is Modeling and Simulation. In this area the solution alternatives and risks are modeled, and a numerical value for risk is calculated. A risk mitigation model is also created. The four steps involved in completing the modeling and simulation are: Alternative Solution Modeling, Uncertainty Quantification, Risk Assessment, and Risk Mitigation. Focus area three consists of Decision Support. In this area a decision support interface is created that allows for game playing between solution alternatives and risk mitigation. A multi-attribute decision making process is also implemented to aid in decision making. A demonstration problem inspired by Airbus' mid 1980s decision to break into the widebody long-range market was developed to illustrate the use of this method. The results showed that the method is able to capture additional types of risk than previous analysis methods, particularly at the early stages of aircraft design. It was also shown that the method can be used to help create a system that is robust to external environmental factors. The addition of an external environment risk analysis in the early stages of conceptual design can add another dimension to the analysis of feasibility and viability. The ability to take risk into account during the early stages of the design process can allow for the elimination of potentially feasible and viable but too-risky alternatives. The addition of a scenario-based analysis instead of a traditional probabilistic analysis enabled uncertainty to be effectively bound and examined over a variety of potential futures instead of only a single future. 
There is also potential for a product to be groomed for a specific future that one believes is likely to happen, or for a product to be steered during design as the future unfolds.
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es
We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller is the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.
Manhat, Beth A.; Brown, Anna L.; Black, Labe A.; Ross, J.B. Alexander; Fichter, Katye; Vu, Tania; Richman, Erik
2012-01-01
We have developed a versatile, one-step melt synthesis of water-soluble, highly emissive silicon nanoparticles using bi-functional, low-melting solids (such as glutaric acid) as reaction media. Characterization through transmission electron microscopy, selected area electron diffraction, X-ray photoelectron spectroscopy, and Raman spectroscopy shows that the one-step melt synthesis produces nanoscale Si cores surrounded by a silicon oxide shell. Analysis of the nanoparticle surface using FT-IR, zeta potential, and gel electrophoresis indicates that the bi-functional ligand used in the one-step synthesis is grafted onto the nanoparticle, which allows for tuning of the particle surface charge, solubility, and functionality. Photoluminescence spectra of the as-prepared glutaric acid-synthesized silicon nanoparticles show an intense blue-green emission with a short (ns) lifetime suitable for biological imaging. These nanoparticles are found to be stable in biological media and have been used to examine cellular uptake and distribution in live N2a cells. PMID:23139440
Dissociative Ionization of Benzene by Electron Impact
NASA Technical Reports Server (NTRS)
Huo, Winifred; Dateo, Christopher; Kwak, Dochan (Technical Monitor)
2002-01-01
We report a theoretical study of the dissociative ionization (DI) of benzene from the low-lying ionization channels. Our approach makes use of the fact that electron motion is much faster than nuclear motion, and DI is treated as a two-step process. The first step is electron-impact ionization resulting in an ion with the same nuclear geometry as the neutral molecule. In the second step the nuclei relax from the initial geometry and undergo unimolecular dissociation. For the ionization process we use the improved binary-encounter dipole (iBED) model. For the unimolecular dissociation step, we study the steepest descent reaction path to the minimum of the ion potential energy surface. The path is used to analyze the probability of unimolecular dissociation and to determine the product distributions. Our analysis of the dissociation products and the thresholds for their production is compared with the results of dissociative photoionization measurements by Feng et al. The partial oscillator strengths from Feng et al. are then used in the iBED cross section calculations.
Pittmann, Timo; Steinmetz, Heidrun
2017-01-01
This work describes the production of polyhydroxyalkanoates (PHA) as a side-stream process at a municipal waste water treatment plant (WWTP) and a subsequent analysis of the production potential in Germany and the European Union (EU). Tests with different types of sludge from a WWTP were conducted to assess their volatile fatty acid (VFA) production potential. Afterwards, primary sludge was used as substrate to test a series of operating conditions (temperature, pH, retention time (RT) and withdrawal (WD)) in order to find suitable settings for a high and stable VFA production. In a second step, various tests were conducted to determine the influence of substrate concentration, temperature, pH and the cycle time of an installed feast/famine regime on achieving a high PHA production and a stable PHA composition. Experiments with a semi-continuous reactor operation showed that a short RT of 4 days and a small WD of 25% at pH = 6 and around 30 °C are preferable for a high VFA production rate (PR) of 1913 mgVFA/(L×d) and a stable VFA composition. A high PHA production of up to 28.4% of cell dry weight (CDW) was reached at lower substrate concentration, 20 °C, neutral pH and a 24 h cycle time. In a final step, a potential analysis based on these results and detailed data from German waste water treatment plants showed that the theoretically possible production of biopolymers in Germany amounts to more than 19% of the 2016 worldwide biopolymer production. In addition, a profound estimation regarding the EU showed that in theory about 120% of the worldwide biopolymer production (in 2016) could be produced at European waste water treatment plants. PMID:28952533
NASA Astrophysics Data System (ADS)
Durdureanu-Angheluta, A.; Dascalu, A.; Fifere, A.; Coroaba, A.; Pricop, L.; Chiriac, H.; Tura, V.; Pinteala, M.; Simionescu, B. C.
2012-05-01
This manuscript deals with the synthesis of new hydrophilic magnetite particles by a two-step method: in the first step, magnetite particles with a hydrophobic shell were obtained in the presence of an oleic acid-oleylamine complex through a solvent-free bulk synthesis in a mortar with pestle; in the second step, the hydrophobic shell was exchanged with an aminosilane monomer. The influence of the Fe2+/Fe3+ molar ratio on particle size, which is of high importance for potential applications, was carefully investigated. This paper also presents an alternative method of synthesis of new core-shell magnetite particles and a complete study of their structure and morphology by FT-IR, XPS, TGA, ESEM and TEM techniques. The rheological properties and magnetization, of high importance for magnetic particles, were also investigated.
Pérez-Esteve, Edgar; Bernardos, Andrea; Martínez-Máñez, Ramón; Barat, José M
2013-04-01
In recent years nanotechnology has become a significant component of the food industry. It is present in all steps of the food chain, from the design of new ingredients or additives to the most modern food quality and packaging systems, demonstrating the great potential of this new technology in a sector as traditional as food. However, as industry interest in nanotechnology increases, so does rejection by consumers concerned about its potential risks. The aim of this review is to evaluate the development of food nanotechnology by means of a patent analysis, highlighting current applications of nanotechnology along the whole food chain and contextualizing this evolution in the social scene.
Is parenting style a predictor of suicide attempts in a representative sample of adolescents?
2014-01-01
Background Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents’ suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. Methods In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive values of 13 additional potential risk/protective factors were analyzed with single binary logistic regression analyses for each predictor alone. Non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Results Three parental variables showed a relevant association with suicide attempts in adolescents – (all protective): mother’s warmth and father’s warmth in childhood and mother’s control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective: OR: .79) and Rejecting-Neglecting parenting (risk: OR: 1.63) were identified as significant predictors (p < .001) for suicidal attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, Binge Drinking, absenteeism/truancy, migration background, and parental separation events. Conclusions Parenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk – as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD. PMID:24766881
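The multivariate step (Step III) amounts to a binary logistic regression whose exponentiated coefficients are the reported odds ratios. A minimal sketch with simulated data follows; the coefficients are chosen only to echo the reported ORs (0.79 protective, 1.63 risk), and the covariate coding is hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
# Hypothetical binary predictors mirroring the study's design
authoritative = rng.integers(0, 2, n)
rejecting_neglecting = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
binge_drinking = rng.integers(0, 2, n)

# Simulate outcomes; -0.24 = ln(0.79), 0.49 = ln(1.63)
logit = (-3.5 - 0.24 * authoritative + 0.49 * rejecting_neglecting
         + 0.4 * female + 0.5 * binge_drinking)
attempt = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([authoritative, rejecting_neglecting,
                                     female, binge_drinking]))
fit = sm.Logit(attempt.astype(int), X).fit(disp=False)
print(np.round(np.exp(fit.params), 2))  # odds ratios; OR < 1 is protective
```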
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for ⟨110⟩ and ⟨112⟩ steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
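For reference, step stiffness is typically extracted from such spectra via the standard equipartition relation for capillary-wave amplitudes A(k) of a step of length L (notation assumed here; the paper's exact conventions may differ):

```latex
% Equipartition of step capillary-wave modes: each Fourier amplitude A(k)
% of the step profile carries k_B T / 2 of energy, so the stiffness
% \tilde{\beta} follows from the measured mean-squared amplitudes.
\langle |A(k)|^{2} \rangle = \frac{k_{B} T}{L\,\tilde{\beta}\,k^{2}},
\qquad
\tilde{\beta} = \beta + \frac{\partial^{2}\beta}{\partial\theta^{2}}
```

Here β is the orientation-dependent step free energy per unit length and β̃ the step stiffness.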
Improved kinect-based spatiotemporal and kinematic treadmill gait assessment.
Eltoukhy, Moataz; Oh, Jeonghoon; Kuenze, Christopher; Signorile, Joseph
2017-01-01
A cost-effective, clinician-friendly gait assessment tool that can automatically track patients' anatomical landmarks can provide practitioners with important information that is useful in prescribing rehabilitative and preventive therapies. This study investigated the validity and reliability of the Microsoft Kinect v2 as a potential inexpensive gait analysis tool. Ten healthy subjects walked on a treadmill at 1.3 and 1.6 m·s⁻¹, as spatiotemporal parameters and kinematics were extracted concurrently using the Kinect and three-dimensional motion analysis. Spatiotemporal measures included step length and width, step and stride times, vertical and mediolateral pelvis motion, and foot swing velocity. Kinematic outcomes included hip, knee, and ankle joint angles in the sagittal plane. The absolute agreement and relative consistency between the two systems were assessed using intraclass correlation coefficients (ICC(2,1)), while reproducibility between systems was established using Lin's Concordance Correlation Coefficient (rc). Comparisons of ensemble curves and associated 90% confidence intervals (CI90) of the hip, knee, and ankle joint angles were performed to investigate whether the Kinect sensor could consistently and accurately assess lower extremity joint motion throughout the gait cycle. Results showed that the Kinect v2 sensor has the potential to be an effective clinical assessment tool for sagittal plane knee and hip joint kinematics, as well as some spatiotemporal variables including pelvis displacement and step characteristics during the gait cycle. Copyright © 2016 Elsevier B.V. All rights reserved.
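ICC(2,1), the two-way random effects, absolute agreement, single-measure coefficient, can be computed directly from the subject-by-system measurement matrix via the Shrout-Fleiss formula. A minimal sketch with simulated step-length data (all values illustrative):

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    Y is an (n subjects) x (k systems) matrix of measurements."""
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between systems
    SSE = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    MSE = SSE / ((n - 1) * (k - 1))                              # residual
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

rng = np.random.default_rng(3)
mocap = rng.normal(0.70, 0.05, 10)            # step length (m), 10 subjects
kinect = mocap + rng.normal(0.01, 0.02, 10)   # Kinect with small bias and noise
print(round(icc_2_1(np.column_stack([mocap, kinect])), 3))
```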
Karakülah, Gökhan
2017-06-28
Novel transcript discovery through RNA sequencing has substantially improved our understanding of the transcriptome dynamics of biological systems. Endogenous target mimicry (eTM) transcripts, a novel class of regulatory molecules, bind to their target microRNAs (miRNAs) by base pairing and block their biological activity. The objective of this study was to provide a computational analysis framework for the prediction of putative eTM sequences in plants and, as an example, to discover previously un-annotated eTMs in the Prunus persica (peach) transcriptome. Therefore, two public peach transcriptome libraries downloaded from the Sequence Read Archive (SRA) and a previously published set of long non-coding RNAs (lncRNAs) were investigated with a multi-step analysis pipeline, and 44 putative eTMs were found. Additionally, an eTM-miRNA-mRNA regulatory network module associated with peach fruit organ development was built via integration of the miRNA target information and predicted eTM-miRNA interactions. My findings suggest that one of the most widely expressed miRNA families among diverse plant species, miR156, might be potentially sponged by seven putative eTMs. In addition, the study indicates that eTMs potentially play roles in the regulation of developmental processes in peach fruit by targeting specific miRNAs. In conclusion, by following the step-by-step instructions provided in this study, novel eTMs can be identified and annotated effectively in public plant transcriptome libraries.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in either the external coefficient or the internal coefficient has a negative influence on the sampling level, the changing rate of the potential market has no significant influence, and repeat purchase has a positive one. Using logistic and regression analysis, a global sensitivity analysis examines the interaction of all parameters, providing a two-stage method to estimate the impact of the relevant parameters when parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and a novel way to estimate the sampling level. PMID:25821847
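The external and internal coefficients referenced above are the classic parameters of a Bass-type diffusion model; assuming that framing, the sketch below shows how seeding adoption with free samples changes the diffusion trajectory (all parameter values are hypothetical).

```python
import numpy as np

def bass_adopters(p, q, M, N0=0.0, T=40):
    """Cumulative adopters under the Bass model,
    dN/dt = (p + q*N/M) * (M - N), integrated with unit time steps.
    p: external (innovation) coefficient, q: internal (imitation) coefficient."""
    N = np.zeros(T)
    N[0] = N0
    for t in range(1, T):
        N[t] = N[t - 1] + (p + q * N[t - 1] / M) * (M - N[t - 1])
    return N

M = 100_000                        # potential market size (hypothetical)
for samples in (0, 2_000, 5_000):  # free samples seed the initial adopter pool
    N = bass_adopters(p=0.03, q=0.38, M=M, N0=samples)
    print(f"samples={samples:5d}  adopters after 10 periods: {N[10]:,.0f}")
```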
2015-03-01
Conference Planning and Food and Beverage Costs, Audit Report 11-43 (October 2011). White House, Executive Order 13589, Promoting Efficient Spending, 76... and conducted a content analysis of these interviews. Based on this analysis, we enumerated challenges and mitigation strategies as well as benefits... of officials, asking respondents to rate the effect of each potential mitigation strategy and prioritize steps for implementing the strategies. We...
Assessment of Managed Aquifer Recharge Site Suitability Using a GIS and Modeling.
Russo, Tess A; Fisher, Andrew T; Lockwood, Brian S
2015-01-01
We completed a two-step regional analysis of a coastal groundwater basin to (1) assess regional suitability for managed aquifer recharge (MAR), and (2) quantify the relative impact of MAR activities on groundwater levels and sea water intrusion. The first step comprised an analysis of surface and subsurface hydrologic properties and conditions, using a geographic information system (GIS). Surface and subsurface data coverages were compiled, georeferenced, reclassified, and integrated (including novel approaches for combining related datasets) to derive a spatial distribution of MAR suitability values. In the second step, results from the GIS analysis were used with a regional groundwater model to assess the hydrologic impact of potential MAR placement and operating scenarios. For the region evaluated in this study, the Pajaro Valley Groundwater Basin, California, GIS results suggest that about 7% (15 km2) of the basin may be highly suitable for MAR. Modeling suggests that simulated MAR projects placed near the coast help to reduce sea water intrusion more rapidly, but these projects also result in increased groundwater flows to the ocean. In contrast, projects placed farther inland result in more long-term reduction in sea water intrusion and less groundwater flowing to the ocean. This work shows how combined GIS analysis and modeling can assist with regional water supply planning, including evaluation of options for enhancing groundwater resources. © 2014, National Ground Water Association.
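The first (GIS) step reduces, conceptually, to a weighted overlay of reclassified raster layers. A minimal sketch under that assumption follows; the layers, weights and the suitability threshold are invented for illustration and are not the study's criteria.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (100, 100)  # hypothetical raster grid over the basin

# Reclassified layers on a common 1 (poor) .. 5 (excellent) suitability scale
infiltration = rng.integers(1, 6, shape)      # surface soil / infiltration class
aquifer_storage = rng.integers(1, 6, shape)   # subsurface storage class
depth_to_water = rng.integers(1, 6, shape)    # depth-to-water-table class

# Weighted linear combination of the reclassified layers
weights = {"infiltration": 0.40, "storage": 0.35, "depth": 0.25}
suitability = (weights["infiltration"] * infiltration
               + weights["storage"] * aquifer_storage
               + weights["depth"] * depth_to_water)

highly_suitable = suitability >= 4.0
print(f"highly suitable: {100 * highly_suitable.mean():.1f}% of basin area")
```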
Cai, Rui; Tao, Gang; He, Huawei; Song, Kai; Zuo, Hua; Jiang, Wenchao; Wang, Yejing
2017-04-30
Silk sericin has great potential as a biomaterial for biomedical applications due to its good hydrophilicity, reactivity, and biodegradability. To develop multifunctional sericin materials for potential antibacterial application, a one-step synthesis method for preparing silver nanoparticles (AgNPs) modified on polydopamine-coated sericin/polyvinyl alcohol (PVA) composite films was developed. Polydopamine (PDA) acted as both metal ion chelating and reducing agent to synthesize AgNPs in situ on the sericin/PVA composite film. Scanning electron microscopy and energy dispersive spectroscopy analysis revealed that polydopamine could effectively facilitate the high-density growth of AgNPs as a 3-D matrix. X-ray diffractometry studies suggested the synthesized AgNPs formed good face-centered cubic crystalline structures. Contact angle measurement and mechanical test indicated AgNPs modified PDA-sericin/PVA composite film had good hydrophilicity and mechanical property. The bacterial growth curve and inhibition zone assays showed the AgNPs modified PDA-sericin/PVA composite film had long-term antibacterial activities. This work develops a new method for the preparation of AgNPs modified PDA-sericin/PVA film with good hydrophilicity, mechanical performance and antibacterial activities for the potential antimicrobial application in biomedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Causal inference in economics and marketing.
Varian, Hal R
2016-07-05
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual, a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
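A minimal sketch of this idea: fit an outcome model on untreated units, predict the counterfactual for the treated units, and average the difference (simulated data; the gradient-boosting choice is just one option):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 4000
X = rng.normal(size=(n, 3))        # customer covariates (hypothetical)
treated = rng.random(n) < 0.5      # e.g., saw an ad campaign
true_effect = 2.0
y = X @ np.array([1.0, -0.5, 0.3]) + true_effect * treated + rng.normal(0, 1, n)

# Fit an outcome model on the *untreated* units only, then predict what the
# treated units would have done without treatment (the counterfactual).
model = GradientBoostingRegressor().fit(X[~treated], y[~treated])
counterfactual = model.predict(X[treated])
att = (y[treated] - counterfactual).mean()
print(f"estimated effect on the treated: {att:.2f} (truth: {true_effect})")
```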
Geospatial Representation, Analysis and Computing Using Bandlimited Functions
2010-02-19
navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many... efficient on today's computers. Under this grant new, computationally efficient, localized representations of gravity have been developed and tested. As a... step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed.
Biophysical Interactions within Step-Pool Mountain Streams Following Wildfire
NASA Astrophysics Data System (ADS)
Parker, A.; Chin, A.; O'Dowd, A. P.
2014-12-01
Recovery of riverine ecosystems following disturbance is driven by a variety of interacting processes. Wildfires pose increasing disturbances to riverine landscapes, with rising frequencies and magnitudes owing to warming climates and increased fuel loads. The effects of wildfire include loss of vegetation, elevated runoff and flash floods, erosion and deposition, and changing biological habitats and communities. Understanding process interactions in post-fire landscapes is increasingly urgent for successful management and restoration of affected ecosystems. In steep channels, steps and pools provide prominent habitats for organisms and structural integrity in high energy environments. Step-pools are typically stable, responding to extreme events with recurrence intervals often exceeding 50 years. Once wildfire occurs, however, intensification of post-fire flood events can potentially overpower the inherent stability of these systems, with significant consequences for aquatic life and human well-being downstream. This study examined the short-term response of step-pool streams following the 2012 Waldo Canyon Fire in Colorado. We explored interacting feedbacks among geomorphology, hydrology, and ecology in the post-fire environment. At selected sites with varying burn severity, we established baseline conditions immediately after the fire with channel surveys, biological assessment using benthic macroinvertebrates, sediment analysis including pebble counts, and precipitation gauging. Repeat measurements after major storm events over several years enabled analysis of the interacting feedbacks among post-fire processes. We found that channels able to retain the step-pool structure changed less and facilitated recovery more readily. Step habitats maintained higher percentages of sensitive macroinvertebrate taxa compared to pools through post-fire floods. Sites burned with high severity experienced greater reduction in the percentage of sensitive taxa. The decimation of macroinvertebrates closely coincides with the physical destruction of the step-pool morphology. The role that step-pools play in enhancing the ecological quality of fluvial systems, therefore, provides a key focus for effective management and restoration of aquatic resources following wildfires.
Liang, Zhenxing; Ahn, Hyun S; Bard, Allen J
2017-04-05
The hydrogen evolution reaction (HER) on Ni in alkaline media was investigated by scanning electrochemical microscopy under two operating modes. First, the substrate generation/tip collection mode was employed to extract the "true" cathodic current associated with the HER from the total current in the polarization curve. Compared to metallic Ni, the electrocatalytic activity of the HER is improved in the presence of the low-valence-state oxide of Ni. This result is in agreement with a previous claim that the dissociative adsorption of water can be enhanced at the Ni/Ni oxide interface. Second, the surface-interrogation scanning electrochemical microscopy (SI-SECM) mode was used to directly measure the coverage of the adsorbed hydrogen on Ni at given potentials. Simulation indicates that the hydrogen coverage follows a Frumkin isotherm with respect to the applied potential. On the basis of the combined analysis of the Tafel slope and surface hydrogen coverage, the rate-determining step is suggested to be the adsorption of hydrogen (Volmer step) in the investigated potential window.
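For reference, one common electrochemical form of the Frumkin isotherm for potential-dependent coverage is shown below (notation assumed; g is the adsorbate interaction parameter, and g = 0 recovers the Langmuir limit):

```latex
% Frumkin isotherm for the hydrogen coverage \theta at electrode potential E:
% lateral adsorbate interactions enter through the parameter g.
\frac{\theta}{1-\theta}\, e^{g\theta}
  = K \exp\!\left(-\frac{F\left(E - E^{0}\right)}{RT}\right)
```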
Altan, Irem; Charbonneau, Patrick; Snell, Edward H.
2016-01-01
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. PMID:26792536
NASA Technical Reports Server (NTRS)
Martini, W. R.
1978-01-01
This manual is intended to serve both as an introduction to Stirling engine analysis methods and as a key to the open literature on Stirling engines. Over 800 references are listed and these are cross referenced by date of publication, author and subject. Engine analysis is treated starting from elementary principles and working through cycles analysis. Analysis methodologies are classified as first, second or third order depending upon degree of complexity and probable application; first order for preliminary engine studies, second order for performance prediction and engine optimization, and third order for detailed hardware evaluation and engine research. A few comparisons between theory and experiment are made. A second order design procedure is documented step by step with calculation sheets and a worked out example to follow. Current high power engines are briefly described and a directory of companies and individuals who are active in Stirling engine development is included. Much remains to be done. Some of the more complicated and potentially very useful design procedures are now only referred to. Future support will enable a more thorough job of comparing all available design procedures against experimental data which should soon be available.
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains as a time and labor-intensive task particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART requires firstly a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor was also employed for comparison, and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying to simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach are expected to find its wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
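Once per-CE responses have been extracted from the serial MS(All) scans, selecting the optimal collision energy for each ion pair is a simple argmax over the stepped CE values. A sketch with simulated CE profiles (the transitions and profile shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
ce_steps = np.arange(10, 55, 5)   # stepped collision energies (eV), 10..50

def best_ce(profile):
    """Pick the CE whose extracted ion chromatogram gives the largest response."""
    return int(ce_steps[np.argmax(profile)])

# Hypothetical Q1 -> Q3 transitions, each with a bell-shaped CE response
transitions = {"285.1->154.0": 30, "417.2->269.1": 25, "609.3->301.1": 40}
for pair, true_optimum in transitions.items():
    profile = (np.exp(-((ce_steps - true_optimum) / 8.0) ** 2)
               + rng.normal(0, 0.02, ce_steps.size))
    print(pair, "optimal CE =", best_ce(profile), "eV")
```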
[Procedural analysis of acid-base balance disorders: a case series of 4 patients].
Ma, Chunyuan; Wang, Guijie
2017-05-01
To establish a standardized process for acid-base balance analysis, cases of acid-base balance disorder were analyzed with the aid of an acid-base balance coordinate graph. Recent research progress in acid-base balance theory was reviewed systematically, and the important concepts, definitions, formulas, parameters, regularities and inferences used in the analysis of acid-base balance were studied. The processes and steps for analyzing acid-base balance disorders were charted, and the application of the acid-base balance coordinate graph to the cases was introduced. A "four parameters-four steps" method was put forward for the complete analysis of acid-base balance disorders. The four parameters are pH, arterial partial pressure of carbon dioxide (PaCO2), HCO3- and the anion gap (AG). The four steps are as follows: (1) according to pH, PaCO2 and HCO3-, the primary or main type of acid-base balance disorder is determined; (2) the primary or main type is used to choose the appropriate compensation formula and to determine the presence of a double mixed acid-base balance disorder; (3) for primary respiratory acidosis or respiratory alkalosis, the potential HCO3- is calculated and substituted for the measured HCO3- to determine whether a triple mixed acid-base disorder is present; (4) data judged by the above analysis to represent simple AG-increased metabolic acidosis are analyzed further, and the ratio ΔAG↑/ΔHCO3-↓ is calculated to determine whether a normal-AG metabolic acidosis or a metabolic alkalosis is also present. In clinical practice, PaCO2 (as the abscissa) and HCO3- (as the ordinate) are used to establish a rectangular coordinate system; the straight line through the origin (0, 0) and the point (40, 24) contains all points with pH equal to 7.40. The acid-base balance coordinate graph can be divided into seven areas by three straight lines [namely the pH = 7.40 isoline, the PaCO2 = 40 mmHg (1 mmHg = 0.133 kPa) line and the HCO3- = 24 mmol/L line]: main respiratory alkalosis, main metabolic alkalosis, respiratory + metabolic alkalosis, main respiratory acidosis, main metabolic acidosis, respiratory + metabolic acidosis and a normal area. The type of acid-base balance disorder is easily determined by locating the (PaCO2, HCO3-) or (PaCO2, potential HCO3-) point on the coordinate graph. The "four parameters-four steps" method is systematic and comprehensive, and with the acid-base balance coordinate graph it is simpler to determine the types of acid-base balance disorders; the method is worth popularizing and generalizing.
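The graphical step of the method can be sketched as a simple classifier over the seven areas of the coordinate graph (compensation formulas and the potential-HCO3- correction are omitted; thresholds are the reference values 40 mmHg and 24 mmol/L):

```python
def classify_acid_base(ph, paco2, hco3):
    """Locate a blood-gas point among the seven areas of the coordinate graph.
    Reference point: PaCO2 = 40 mmHg, HCO3- = 24 mmol/L, pH = 7.40.
    Simplified sketch of the graphical step only."""
    if abs(paco2 - 40) < 2 and abs(hco3 - 24) < 2:
        return "normal area"
    if ph < 7.40:                                  # acidemia
        if paco2 > 40 and hco3 < 24:
            return "respiratory + metabolic acidosis"
        return "main respiratory acidosis" if paco2 > 40 else "main metabolic acidosis"
    if ph > 7.40:                                  # alkalemia
        if paco2 < 40 and hco3 > 24:
            return "respiratory + metabolic alkalosis"
        return "main respiratory alkalosis" if paco2 < 40 else "main metabolic alkalosis"
    return "normal area"

print(classify_acid_base(7.25, 60, 26))   # -> main respiratory acidosis
print(classify_acid_base(7.30, 30, 14))   # -> main metabolic acidosis
```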
Novel applications of the dispersive optical model
NASA Astrophysics Data System (ADS)
Dickhoff, W. H.; Charity, R. J.; Mahzoon, M. H.
2017-03-01
A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d, p) transfer reaction using the adiabatic distorted wave approximation. We review these calculations which suggest that physically meaningful results are easier to obtain by employing DOM ingredients as compared to the traditional approach which relies on a phenomenologically-adjusted bound-state wave function combined with a global (nondispersive) optical-model potential. Application to the exotic 132Sn nucleus also shows great promise for the extrapolation of DOM potentials towards the drip line with attendant relevance for the physics of FRIB. We note that the DOM method combines structure and reaction information on the same footing providing a unique approach to the analysis of exotic nuclei. We illustrate the importance of abandoning the custom of representing the non-local Hartree-Fock (HF) potential in the DOM by an energy-dependent local potential as it impedes the proper normalization of the solution of the Dyson equation. This important step allows for the interpretation of the DOM potential as representing the nucleon self-energy permitting the calculations of spectral amplitudes and spectral functions above and below the Fermi energy. The latter feature provides access to quantities like the momentum distribution, charge density, and particle number which were not available in the original work of Mahaux and Sartor. When employing a non-local HF potential, but local dispersive contributions (as originally proposed by Mahaux and Sartor), we illustrate that it is impossible to reproduce the particle number and the measured charge density. Indeed, the use of local absorptive potentials leads to a substantial overestimate of particle number. However, from detailed comparisons with self-energies calculated with ab initio many-body methods that include both short- and long-range correlations, we demonstrate that it is essential to introduce non-local absorptive potentials in order to remediate these deficiencies. We review the fully non-local DOM potential fitted to 40Ca where elastic-scattering data, level information, particle number, charge density and high-momentum-removal (e,e′p) cross sections obtained at Jefferson Lab were included in the analysis. All these quantities are accurately described by assuming more or less traditional functional forms for the potentials but allowing for non-locality and the abandonment of complete symmetry around the Fermi energy for surface absorption which is suggested by ab initio theory.
An important consequence of this new analysis is the finding that the spectroscopic factor for the removal of valence protons in this nucleus comes out larger by about 0.15 than the results obtained from the NIKHEF analysis of their (e,e′p) data. This issue is discussed in detail and its implications clarified. Another important consequence of this analysis is that it can shed light on the relative importance of two-body and three-body interactions as far as their contribution to the energy of the ground state is concerned through application of the energy sum rule.
Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.
Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro
2018-06-13
The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.
Design principles and optimal performance for molecular motors under realistic constraints
NASA Astrophysics Data System (ADS)
Tu, Yuhai; Cao, Yuansheng
2018-02-01
The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.
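A minimal numerical sketch of the power-efficiency tradeoff described above, for a toy two-state motor in which every step (forward or backward) consumes one ATP worth of free energy and the load tilts the stepping rates through Boltzmann factors. All parameter values and the load-sharing factor are illustrative assumptions, not taken from the paper:

```python
import numpy as np

kT = 4.1e-21               # J, thermal energy at room temperature
d = 8e-9                   # m, step size (kinesin-like, illustrative)
dG = 20 * kT               # J, free energy per ATP (illustrative)
k0_f, k0_b = 100.0, 1.0    # 1/s, unloaded forward/backward stepping rates
theta = 0.5                # load-distribution factor (assumption)

def motor(F):
    """Net velocity, output power, and efficiency of the toy motor at load F (N)."""
    kf = k0_f * np.exp(-theta * F * d / kT)        # load slows forward steps
    kb = k0_b * np.exp((1 - theta) * F * d / kT)   # ...and promotes back steps
    v = d * (kf - kb)                  # net velocity
    p_out = F * v                      # mechanical power delivered against the load
    p_in = dG * (kf + kb)              # loose coupling: every step burns one ATP
    return v, p_out, p_out / p_in

for F in np.linspace(0.0, 2.2e-12, 5):             # loads up to just below stall
    v, p, eta = motor(F)
    print(f"F={F*1e12:4.2f} pN  v={v*1e9:8.1f} nm/s  efficiency={eta:5.2f}")
```

In this toy model the wasteful back steps cap the efficiency well below 1, mirroring the tradeoff the authors balance by optimizing the motor-track interaction.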
Classification of pulmonary airway disease based on mucosal color analysis
NASA Astrophysics Data System (ADS)
Suter, Melissa; Reinhardt, Joseph M.; Riker, David; Ferguson, John Scott; McLennan, Geoffrey
2005-04-01
Airway mucosal color changes occur in response to the development of bronchial diseases including lung cancer, cystic fibrosis, chronic bronchitis, emphysema, and asthma. These associated changes are often visualized using standard macro-optical bronchoscopy techniques. A limitation of this form of assessment is that the subtle changes indicating early stages of disease development may often be missed as a result of this highly subjective assessment, especially by inexperienced bronchoscopists. Tri-chromatic CCD chip bronchoscopes allow for digital color analysis of the pulmonary airway mucosa. This form of analysis may facilitate a greater understanding of airway disease response. A 2-step image classification approach is employed: the first step is to distinguish between healthy and diseased bronchoscope images and the second is to classify the detected abnormal images into 1 of 4 possible disease categories. A database of airway mucosal color constructed from healthy human volunteers is used as a standard against which statistical comparisons are made for mucosa with known apparent airway abnormalities. This approach demonstrates great promise as an effective detection and diagnosis tool, highlighting potentially abnormal airway mucosa and identifying regions possibly suited to further analysis via airway forceps biopsy or newly developed micro-optical biopsy strategies. Following the identification of abnormal airway images, a neural network is used to distinguish between the different disease classes. We have shown that classification of potentially diseased airway mucosa is possible through comparative color analysis of digital bronchoscope images. The combination of the two strategies appears to increase the classification accuracy in addition to greatly decreasing the computational time.
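A compact sketch of the 2-step idea, assuming the first step flags frames whose mean mucosal color falls far (in Mahalanobis distance) from a healthy reference distribution and the second step feeds flagged frames to a small neural network; all colors, thresholds, and class centers below are fabricated for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Step 1: healthy reference database of mean mucosal colors (illustrative R,G,B data)
healthy = rng.normal([180.0, 90.0, 80.0], 8.0, size=(200, 3))
mu, cov_inv = healthy.mean(axis=0), np.linalg.inv(np.cov(healthy, rowvar=False))

def is_abnormal(mean_rgb, threshold=3.0):
    """Flag a frame whose mean color sits outside the healthy color cloud."""
    diff = np.asarray(mean_rgb, float) - mu
    return np.sqrt(diff @ cov_inv @ diff) > threshold

# Step 2: small neural network separating 4 disease categories (toy training data)
centers = [[210, 80, 70], [160, 110, 90], [190, 70, 100], [150, 80, 70]]
X = np.vstack([rng.normal(c, 10.0, size=(50, 3)) for c in centers])
y = np.repeat(np.arange(4), 50)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

frame = [205.0, 82.0, 72.0]                 # mean color of a new bronchoscope frame
if is_abnormal(frame):
    print("abnormal frame, disease class:", clf.predict([frame])[0])
```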
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape does at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast, small-amplitude fluctuations of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
Li, Hui; Li, Kang-shuai; Su, Jing; Chen, Lai-Zhong; Xu, Yun-Fei; Wang, Hong-Mei; Gong, Zheng; Cui, Guo-Ying; Yu, Xiao; Wang, Kai; Yao, Wei; Xin, Tao; Li, Min-Yong; Xiao, Kun-Hong; An, Xiao-fei; Huo, Yuqing; Xu, Zhi-gang; Sun, Jin-Peng; Pang, Qi
2013-01-01
Striatal-enriched tyrosine phosphatase (STEP) is an important regulator of neuronal synaptic plasticity, and its abnormal level or activity contributes to cognitive disorders. One crucial downstream effector and direct substrate of STEP is extracellular signal-regulated protein kinase (ERK), which has important functions in spine stabilisation and action potential transmission. The inhibition of STEP activity toward phospho-ERK has the potential to treat neuronal diseases, but the detailed mechanism underlying the dephosphorylation of phospho-ERK by STEP is not known. Therefore, we examined STEP activity toward pNPP, phospho-tyrosine-containing peptides, and the full-length phospho-ERK protein using STEP mutants with different structural features. STEP was found to be a highly efficient ERK tyrosine phosphatase that required both its N-terminal regulatory region and key residues in its active site. Specifically, both the KIM and KIS of STEP were required for ERK interaction. In addition to the N-terminal KIS region, S245, the hydrophobic residues L249/L251, and the basic residues R242/R243 located in the KIM region were important in controlling STEP activity toward phospho-ERK. Further kinetic experiments revealed subtle structural differences between STEP and HePTP that affected the interactions of their KIMs with ERK. Moreover, STEP recognised specific positions of a phospho-ERK peptide sequence through its active site, and the contacts of STEP F311 with phospho-ERK V205 and T207 were crucial interactions. Taken together, our results not only provide information on the interactions between ERK and STEP, but will also help in the development of specific strategies to target STEP-ERK recognition, which could serve as a potential therapy for neurological disorders. PMID:24117863
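The kinetic comparisons described (e.g., with pNPP and peptide substrates) rest on standard steady-state enzyme kinetics; a hedged sketch of extracting Km and kcat/Km from initial-rate data by nonlinear fitting, with all rates and concentrations invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])   # substrate (mM), illustrative
v = np.array([0.9, 1.6, 3.1, 4.4, 5.6, 6.7, 7.1])     # initial rates (uM/min), illustrative

(Vmax, Km), pcov = curve_fit(michaelis_menten, S, v, p0=(8.0, 0.5))
E_total = 0.01                                        # enzyme concentration (uM), assumption
kcat = Vmax / E_total
print(f"Km = {Km:.2f} mM, kcat/Km = {kcat / Km:.0f} (arbitrary units)")
```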
Personal computer study of finite-difference methods for the transonic small disturbance equation
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1989-01-01
Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time-accurate solutions of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for the effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
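A hedged miniature of the kind of model study described: a one-dimensional linear advection equation on a stretched mesh with first-order upwind differencing and a CFL-limited time step. This is a stand-in for, not a reproduction of, the report's scheme:

```python
import numpy as np

# Stretched mesh: points clustered near x = 0, spacing growing toward the far field
N = 201
s = np.linspace(0.0, 1.0, N)
x = 10.0 * np.sinh(2.5 * s) / np.sinh(2.5)
dx = np.diff(x)

c = 1.0                            # wave speed
dt = 0.8 * dx.min() / c            # CFL-limited time step on the finest cell
u = np.exp(-((x - 2.0) ** 2))      # initial pulse

for _ in range(200):
    u[1:] -= c * dt * (u[1:] - u[:-1]) / dx   # first-order upwind update
    u[0] = 0.0                                # simple inflow condition
print("pulse peak now near x =", round(float(x[np.argmax(u)]), 2))
```

Enlarging dt beyond dx.min()/c violates the CFL condition and the scheme blows up, which is exactly the mesh/time-step interplay such a study maps out.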
SU-E-T-420: Failure Effects Mode Analysis for Trigeminal Neuralgia Frameless Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, J
2015-06-15
Purpose: Functional radiosurgery has been used successfully in the treatment of trigeminal neuralgia but presents significant challenges to ensuring the high prescription dose is delivered accurately. A review of existing practice should help direct the focus of quality improvement for this treatment regime. Method: Failure modes and effects analysis was used to identify the processes in preparing radiosurgery treatment for TN. The map was developed by a multidisciplinary team including a neurosurgeon, radiation oncologist, physicist, and therapist. Potential failure modes were identified for each step in the process map, as well as potential causes and end effects. A risk priority number was assigned to each cause. Results: The process map identified 66 individual steps (see attached supporting document). Corrective actions were developed for areas of high risk priority number. Wrong-site treatment is at higher risk for trigeminal neuralgia treatment due to the lack of site-specific pathologic imaging on MR and CT, so additional site-specific checks were implemented to minimize the risk of wrong-site treatment. Failed collision checks resulted from an insufficient collision model in the treatment planning system, and a plan template was developed to address this problem. Conclusion: Failure modes and effects analysis is an effective tool for developing quality improvement in high-risk radiotherapy procedures such as functional radiosurgery.
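The RPN bookkeeping behind such an analysis is simple enough to show concretely; a minimal sketch in which each potential failure cause is scored 1-10 for occurrence (O), severity (S), and detectability (D), and corrective actions are prioritized by RPN = O x S x D. The steps and scores below are invented for illustration:

```python
# Minimal FMEA ranking: RPN = occurrence x severity x detectability
failure_modes = [
    # (process step, failure mode, O, S, D) -- 1..10 scores, illustrative
    ("imaging",  "wrong site identified on MR/CT", 2, 10, 7),
    ("planning", "collision not detected",         4,  6, 5),
    ("delivery", "incorrect isocenter shift",      2,  9, 4),
]
ranked = sorted(failure_modes, key=lambda m: m[2] * m[3] * m[4], reverse=True)
for step, mode, o, s, d in ranked:
    print(f"RPN={o * s * d:4d}  {step:9s} {mode}")
```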
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vile, D; Zhang, L; Cuttino, L
2016-06-15
Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, different potential failure modes were determined, as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode's likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with the failure modes associated with the highest risk being addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Improving HIV outcomes in resource-limited countries: the importance of quality indicators.
Ahonkhai, Aima A; Bassett, Ingrid V; Ferris, Timothy G; Freedberg, Kenneth A
2012-11-24
Resource-limited countries increasingly depend on quality indicators to improve outcomes within HIV treatment programs, but indicators of program performance suitable for use at the local program level remain underdeveloped. Using the existing literature as a guide, we applied standard quality improvement (QI) concepts to the continuum of HIV care from HIV diagnosis, to enrollment and retention in care, and highlighted critical service delivery process steps to identify opportunities for performance indicator development. We then identified existing indicators to measure program performance, citing examples used by pivotal donor agencies, and assessed their feasibility for use in surveying local program performance. Clinical delivery steps without existing performance measures were identified as opportunities for measure development. Using National Quality Forum (NQF) criteria as a guide, we developed measurement concepts suitable for use at the local program level that address existing gaps in program performance assessment. This analysis of the HIV continuum of care identified seven critical process steps providing numerous opportunities for performance measurement. Analysis of care delivery process steps and the application of NQF criteria identified 24 new measure concepts that are potentially useful for improving operational performance in HIV care at the local level. An evidence-based set of program-level quality indicators is critical for the improvement of HIV care in resource-limited settings. These performance indicators should be utilized as treatment programs continue to grow.
Six-sigma application in tire-manufacturing company: a case study
NASA Astrophysics Data System (ADS)
Gupta, Vikash; Jain, Rahul; Meena, M. L.; Dangayach, G. S.
2017-09-01
Globalization, advancement of technologies, and increasing customer demands have changed the way companies do business. To cope with these pressures, the Six Sigma define-measure-analyze-improve-control (DMAIC) method is among the most popular and useful approaches. It helps to trim down waste and to generate potential ways of improvement in process as well as service industries. In the current research, the DMAIC method was used to decrease the process variation of the bead splice that was causing wastage of material. This Six Sigma DMAIC study was initiated by problem identification through the voice of the customer in the define step. The subsequent step consisted of gathering the specification data of the existing tire bead. This was followed by the analysis and improvement steps, where Six Sigma quality tools such as the cause-effect diagram, statistical process control, and substantial analysis of the existing system were applied for root-cause identification and reduction of process variation. Process control charts were used for systematic observation and control of the process. Utilizing the DMAIC methodology, the standard deviation was decreased from 2.17 to 1.69. The process capability index (Cp) value was enhanced from 1.65 to 2.95 and the process performance capability index (Cpk) value was enhanced from 0.94 to 2.66. A DMAIC methodology was established that can play a key role in reducing defects in the tire-manufacturing process in India.
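The capability indices quoted follow the standard definitions Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ. A small check with illustrative specification limits (the paper's actual bead-splice limits are not restated here):

```python
def process_capability(mu, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative centered process using the paper's before/after standard deviations
print(process_capability(mu=100.0, sigma=2.17, lsl=90.0, usl=110.0))  # before DMAIC
print(process_capability(mu=100.0, sigma=1.69, lsl=90.0, usl=110.0))  # after DMAIC
```

With these invented limits the indices do not reproduce the paper's reported values, which also reflect a shift in the process mean relative to the specification window.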
Advancing our thinking in presence-only and used-available analysis.
Warton, David; Aarts, Geert
2013-11-01
1. The problems of analysing used-available data and presence-only data are equivalent, and this paper uses this equivalence as a platform for exploring opportunities for advancing analysis methodology. 2. We suggest some potential methodological advances in used-available analysis, made possible via lessons learnt in the presence-only literature, for example, using modern methods to improve predictive performance. We also consider the converse - potential advances in presence-only analysis inspired by used-available methodology. 3. Notwithstanding these potential advances in methodology, perhaps a greater opportunity is in advancing our thinking about how to apply a given method to a particular data set. 4. It is shown by example that strikingly different results can be achieved for a single data set by applying a given method of analysis in different ways - hence having chosen a method of analysis, the next step of working out how to apply it is critical to performance. 5. We review some key issues to consider in deciding how to apply an analysis method: apply the method in a manner that reflects the study design; consider data properties; and use diagnostic tools to assess how reasonable a given analysis is for the data at hand. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
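One concrete bridge between the two literatures (point 2 above) is that presence-only and used-available data can both be fit as a weighted logistic regression of used/presence points against background points, which approximates a spatial point-process fit when the background is heavily up-weighted. A hedged sketch with simulated covariates (not the paper's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_used = rng.normal(1.0, 1.0, size=(100, 2))      # covariates at used/presence points
X_avail = rng.normal(0.0, 1.0, size=(10_000, 2))  # covariates at available/background points
X = np.vstack([X_used, X_avail])
y = np.r_[np.ones(100), np.zeros(10_000)]

# Heavily up-weighting the background approximates a Poisson point-process fit
w = np.r_[np.ones(100), np.full(10_000, 1_000.0)]
fit = LogisticRegression(C=1e6).fit(X, y, sample_weight=w)
print("selection coefficients:", fit.coef_.ravel())  # the intercept is not interpretable here
```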
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Wang, R.; Hurysz, B.; Luo, Z.
1991-01-01
The goal is to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) develop analytical tools (models and computer programs); (2) conduct parameterization studies; (3) predict the global space station EMI environment; and (4) provide a basis for modification of EMI standards.
Hospital acquisition or management contract: a theory of strategic choice.
Morrisey, M A; Alexander, J A
1987-01-01
Differences in the mission of the hospital and the multihospital system are key elements underlying the development of a management contract. Preliminary analysis suggests that the number of potential new acquisitions is severely limited, that contract management is not a stepping stone to acquisition, and that many recent management contracts appear to be attempts to overcome problems beyond the hospital's and the contractor's direct control.
A statistical model of expansion in a colony of black-tailed prairie dogs
R. P. Cincotta; Daniel W. Uresk; R. M. Hansen
1988-01-01
To predict prairie dog establishment in areas adjacent to a colony we sampled: (1) VISIBILITY through the vegetation using a target, (2) POPULATION DENSITY at the colony edge, (3) DISTANCE from the edge to the potential site of settlement, and (4) % FORB COVER. Step-wise regression analysis indicated that establishment of prairie dogs in adjacent prairie was most likely...
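A hedged sketch of the forward step-wise selection such an analysis performs, using hypothetical variable names matching the four predictors above and simulated data (selection by p-value, entry threshold 0.05):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
data = {
    "visibility": rng.uniform(0, 1, n),
    "density":    rng.uniform(0, 30, n),
    "distance":   rng.uniform(0, 500, n),
    "forb_cover": rng.uniform(0, 40, n),
}
names = list(data)
X = np.column_stack([data[k] for k in names])
y = 0.8 * data["visibility"] - 0.002 * data["distance"] + rng.normal(0, 0.2, n)

selected, remaining = [], list(range(4))
while remaining:
    pvals = {}
    for j in remaining:          # p-value of each candidate, given current selections
        fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
        pvals[j] = fit.pvalues.iloc[-1]
    best = min(pvals, key=pvals.get)
    if pvals[best] > 0.05:
        break
    selected.append(best)
    remaining.remove(best)
print("entered the model, in order:", [names[j] for j in selected])
```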
ERIC Educational Resources Information Center
Coyne, Thomas J.; Nordone, Ronald; Donovan, Joseph W.; Thygeson, William
This paper describes the initial analyses needed to help institutions of higher education plan majors in nursing and allied health as institutions look for new markets based on demographic and employment factors. Twelve variables were identified and weighted to describe an ideal recruitment market. Using a three-phase process, potential U.S.…
Rahaman, Obaidur; Estrada, Trilce P.; Doren, Douglas J.; Taufer, Michela; Brooks, Charles L.; Armen, Roger S.
2011-01-01
The performance of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for “step 2 discrimination” were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only “interacting” ligand atoms as the “effective size” of the ligand, and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and five-fold cross validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new dataset (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ dataset where the number of ligand heavy atoms ranged from 17 to 35. This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts. PMID:21644546
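The LIE scoring referred to above is linear in averaged interaction energies, ΔG ≈ α·⟨ΔE_vdW⟩ + β·⟨ΔE_elec⟩ + γ, so its parameters can be fit by ordinary least squares over a training set. A hedged sketch with simulated energies standing in for the LPDB values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 259                                  # size mimicking the LPDB training set
E_vdw = rng.normal(-30.0, 8.0, n)        # <dE_vdW> (kcal/mol), simulated
E_el = rng.normal(-15.0, 6.0, n)         # <dE_elec> (kcal/mol), simulated
dG = 0.18 * E_vdw + 0.33 * E_el - 2.0 + rng.normal(0, 1.0, n)  # "experimental" dG

A = np.column_stack([E_vdw, E_el, np.ones(n)])
(alpha, beta, gamma), *_ = np.linalg.lstsq(A, dG, rcond=None)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, gamma = {gamma:.2f} kcal/mol")
```

Cross-validation (the paper's five-fold scheme) would then reuse this fit on held-out complexes to test transferability.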
Calculation of muscle loading and joint contact forces during the rock step in Irish dance.
Shippen, James M; May, Barbara
2010-01-01
A biomechanical model for the analysis of dancers and their movements is described. The model consisted of 31 segments, 35 joints, and 539 muscles, and was animated using movement data obtained from a three-dimensional optical tracking system that recorded the motion of dancers. The model was used to calculate forces within the muscles and contact forces at the joints of the dancers in this study. Ground reaction forces were measured using force plates mounted in a sprung floor. The analysis procedure is generic and can be applied to any dance form. As an exemplar of the application process, an Irish dance step, the rock, was analyzed. The maximum ground reaction force found was 4.5 times the dancer's body weight. The muscles connected to the Achilles tendon experienced a maximum force comparable to their maximal isometric strength. The contact force at the ankle joint was 14 times body weight, of which the majority was due to muscle contraction. It is suggested that, as the rock step produces high forces and therefore the potential to cause injury, its use should be carefully monitored.
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+))-based property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.), taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox, are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic) have been modeled and analyzed. The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes, giving useful insights into the quality and reliability of the design of sustainable processes.
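A hedged sketch of the two-step methodology (parameter estimation, then uncertainty analysis) for a linear group-contribution model: least-squares parameters, their covariance from the residual variance, and a 95% confidence interval on a predicted property. Group counts and the property values are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_data, n_groups = 200, 5
X = rng.integers(0, 4, size=(n_data, n_groups)).astype(float)  # group occurrence counts
theta_true = np.array([1.2, -0.4, 0.8, 0.1, 2.0])
y = X @ theta_true + rng.normal(0, 0.3, n_data)                # "measured" property

theta, *_ = np.linalg.lstsq(X, y, rcond=None)                  # step 1: parameter estimation
dof = n_data - n_groups
s2 = np.sum((y - X @ theta) ** 2) / dof                        # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                              # step 2: parameter covariance

x_new = np.array([1, 0, 2, 1, 0], float)                       # groups in a new molecule
pred = x_new @ theta
se = np.sqrt(x_new @ cov @ x_new + s2)                         # prediction standard error
t = stats.t.ppf(0.975, dof)
print(f"predicted property = {pred:.2f} +/- {t * se:.2f} (95% CI)")
```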
Self-constrained inversion of potential fields
NASA Astrophysics Data System (ADS)
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
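A hedged sketch of how such constraints enter a linear inversion: a depth-weighting function of the commonly used form w(z) = (z + z0)^(-beta/2) is placed in the model-norm term of a Tikhonov solution, with the exponent tied (in the self-constrained scheme) to the estimated structural index. The operator and data below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_cells = 40, 100
G = rng.normal(size=(n_data, n_cells))   # stand-in sensitivity (forward) matrix
d = rng.normal(size=n_data)              # stand-in observed anomaly

z = np.linspace(10.0, 1000.0, n_cells)   # cell depths (m)
beta, z0 = 3.0, 50.0                     # beta tied to the structural index (assumption)
W = np.diag((z + z0) ** (-beta / 2.0))   # penalizes shallow cells more, pushing structure deeper

mu = 1e-2                                # regularization parameter
m = np.linalg.solve(G.T @ G + mu * W.T @ W, G.T @ d)   # min ||Gm - d||^2 + mu ||Wm||^2
print("largest-amplitude cell at depth:", float(z[np.argmax(np.abs(m))]), "m")
```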
Military Base Off-Taker Opportunities for Tribal Renewable Energy Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nangle, J.
This white paper surveys DOD installations that could have an increased potential interest in the purchase of energy from renewable energy projects on tribal lands. Identification of likely purchasers of renewable energy is a first step in the energy project development process, and this paper aims to identify likely electricity customers that tribal commercial-scale projects could serve. This white paper builds on a geospatial analysis completed in November 2012 identifying 53 reservations within 10 miles of military bases (DOE 2012). This analysis builds on those findings by further refining the list of potential opportunity sites to 15 reservations (Table ES-1), based on five additional factors: 1) the potential renewable resources required to meet the installation energy loads; 2) proximity to transmission lines; 3) military installation energy demand; 4) state electricity prices; 5) the local policy and regulatory environment.
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
An evaluation of a reagentless method for the determination of total mercury in aquatic life
Haynes, Sekeenia; Gragg, Richard D.; Johnson, Elijah; Robinson, Larry; Orazio, Carl E.
2006-01-01
Multiple treatment (i.e., drying, chemical digestion, and oxidation) steps are often required during preparation of biological matrices for quantitative analysis of mercury; these multiple steps could potentially lead to systematic errors and poor recovery of the analyte. In this study, the Direct Mercury Analyzer (Milestone Inc., Monroe, CT) was utilized to measure total mercury in fish tissue by integrating steps of drying, sample combustion, and gold sequestration with subsequent detection using atomic absorption spectrometry. We also evaluated the differences between the mercury concentrations found in samples that were homogenized and samples with no preparation. These results were confirmed with cold vapor atomic absorbance and fluorescence spectrometric methods of analysis. Finally, total mercury in wild-captured largemouth bass (n = 20) was assessed using the Direct Mercury Analyzer to examine internal variability between mercury concentrations in muscle, liver, and brain organs. Direct analysis of total mercury measured in muscle tissue was strongly correlated with muscle tissue that was homogenized before analysis (r = 0.81, p < 0.0001). Additionally, results using this integrated method compared favorably (p < 0.05) with conventional cold vapor spectrometry with atomic absorbance and fluorescence detection methods. Mercury concentrations in brain were significantly lower than concentrations in muscle (p < 0.001) and liver (p < 0.05) tissues. This integrated method can measure a wide range of mercury concentrations (0-500 μg) using small sample sizes. Total mercury measurements in this study are comparable to the methods (cold vapor) commonly used for total mercury analysis and are devoid of laborious sample preparation and expensive hazardous waste. © Springer 2006.
Three-Step Test System for the Identification of Novel GABAA Receptor Modulating Food Plants.
Sahin, Sümeyye; Eulenburg, Volker; Kreis, Wolfgang; Villmann, Carmen; Pischetsrieder, Monika
2016-12-01
Potentiation of γ-aminobutyric acid (GABA)-induced GABA(A) receptor (GABA(A)R) activation is a common pathway to achieve sedative, sleep-enhancing, anxiolytic, and antidepressant effects. Here, a three-component test system was established for the identification of novel GABA(A)R-modulating food plants. In the first step, potentiation of the GABA-induced response of the GABA(A)R was analysed by two-electrode voltage clamp (TEVC) for activity on human α1β2-GABA(A)R expressed in Xenopus laevis oocytes. Positively tested food plants were then subjected to quantification of GABA content by high-performance liquid chromatography with fluorescence detection (HPLC-FLD) to exclude test foods that evoke a TEVC response through endogenous GABA. In the third step, the specificity of the GABA(A)R-modulating activity was assessed by TEVC analysis of Xenopus laevis oocytes expressing the homologous glycine receptor (GlyR). The three-component test was then applied to screen 10 aqueous extracts of food plants for their GABA(A)R activity. Thus, hop cones (Humulus lupulus) and Sideritis sipylea were identified as the most potent specific GABA(A)R modulators, eliciting significant potentiation of the current by 182 ± 27 and 172 ± 19 %, respectively, at the lowest concentration of 0.5 μg/mL. The extracts can now be further evaluated by in vivo studies and by structural evaluation of the active components.
Chemical facility vulnerability assessment project.
Jaeger, Calvin D
2003-11-14
Sandia National Laboratories, under the direction of the Office of Science and Technology, National Institute of Justice, conducted the chemical facility vulnerability assessment (CFVA) project. The primary objective of this project was to develop, test, and validate a vulnerability assessment methodology (VAM) for determining the security of chemical facilities against terrorist or criminal attacks (VAM-CF). The project also included a report to the Department of Justice for Congress that, in addition to describing the VAM-CF, addressed general observations related to security practices, threats, and risks at chemical facilities and in chemical transport. In the development of the VAM-CF, Sandia leveraged the experience gained from the use and development of VAs in other areas and input from the chemical industry and Federal agencies. The VAM-CF is a systematic, risk-based approach in which risk is a function of S, L(A), and L(AS), where S is the severity of consequence of an undesired event, L(A) is the attack potential, and L(AS) is the likelihood of adversary success in causing the undesired (catastrophic) event. The VAM-CF consists of 13 basic steps. It involves an initial screening step, which helps to identify and prioritize facilities for further analysis. This step is similar to the prioritization approach developed by the American Chemistry Council (ACC). Other steps help to determine the components of the risk equation and ultimately the risk. The VAM-CF process involves identifying the hazardous chemicals and processes at a chemical facility. It helps chemical facilities to focus their attention on the most critical areas. The VAM-CF is not a quantitative analysis but, rather, compares relative security risks. If the risks are deemed too high, recommendations are developed for measures to reduce the risk. This paper will briefly discuss the CFVA project and the VAM-CF process.
Klein tunneling in the α-T3 model
NASA Astrophysics Data System (ADS)
Illes, E.; Nicol, E. J.
2017-06-01
We investigate Klein tunneling for the α-T3 model, which interpolates between graphene and the dice lattice via the parameter α. We study transmission across two types of electrostatic interface: sharp potential steps and sharp potential barriers. We find both types of interface to be perfectly transparent at normal incidence for the full range of the parameter α. For other angles of incidence, we find that transmission is enhanced with increasing α. For the dice lattice, we find perfect, all-angle transmission across a potential step for incoming electrons with energy equal to half of the height of the potential step. This is analogous to the "super", all-angle transmission reported for the dice lattice for Klein tunneling across a potential barrier.
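As a reference point for the step geometry (hedged, since conventions differ between papers): conservation of the transverse momentum k_y across a sharp step of height V_0 gives a Snell-like refraction law for the massless carriers,

```latex
|E|\sin\theta \;=\; |E - V_{0}|\,\sin\theta' ,
```

so at E = V_0/2 the incident and refracted momentum magnitudes coincide and θ' = θ at every angle, consistent with the all-angle step transparency reported above for the dice lattice; at normal incidence (θ = 0) the matching condition becomes angle-independent and the interface is transparent for the full range of α.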
Belousov-Zhabotinskii reaction systems in the presence of Ag+
NASA Astrophysics Data System (ADS)
Rastogi, R. P.; Mani, Kiran; Misra, G. P.
1991-03-01
Threshold values of [Ag+] for quenching (i) Br- potential oscillations and (ii) redox potential oscillations + Br- oscillations have been determined under aerobic and anaerobic conditions for (a) the malonic acid + Ce4+ + BrO3- + H2SO4 system and (b) the oxalic acid + acetone + Ce4+ + BrO3- + H2SO4 system. Similar experiments on the lower limit of BrO3- under aerobic/anaerobic conditions are reported. The results show that the mechanism of quenching, so far as the key steps are concerned, is expected to be the same in the two cases. Numerical simulations have been made on the basis of the Field-Körös-Noyes (FKN) mechanism with the additional step Ag+ + Br- ⇌ AgBr. The analysis shows that the experimental results can be explained quite well if the corresponding forward rate constant, k6, is assumed to lie between 10^4 and 10^5 M^-1 s^-1, as has been suggested by Ruoff and others.
Cooke, Alexandra B; Pace, Romina; Chan, Deborah; Rosenberg, Ellen; Dasgupta, Kaberi; Daskalopoulou, Stella S
2018-05-01
The integration of pedometers into clinical practice has the potential to enhance physical activity levels in patients with chronic disease. Our SMARTER randomized controlled trial demonstrated that a physician-delivered step count prescription strategy has measurable effects on daily steps, glycemic control, and insulin resistance in patients with type 2 diabetes and/or hypertension. In this study, we aimed to understand perceived barriers and facilitators influencing successful uptake and sustainability of the strategy, from patient and physician perspectives. Qualitative in-depth interviews were conducted in a purposive sample of physicians (n = 10) and participants (n = 20), including successful and less successful cases in terms of pedometer-assessed step count improvements. Themes that achieved saturation in either group through thematic analysis are presented. All participants appreciated the pedometer-based monitoring combined with step count prescriptions. Accountability to physicians and support offered by the trial coordinator influenced participant motivation. Those who increased step counts adopted strategies to integrate more steps into their routines and were able to overcome weather-related barriers by finding indoor alternative options to outdoor steps. Those who decreased step counts reported difficulty in overcoming weather-related challenges, health limitations and work constraints. Physicians indicated the strategy provided a framework for discussing physical activity and motivating patients, but emphasized the need for support from allied professionals to help deliver the strategy in busy clinical settings. A physician-delivered step count prescription strategy was feasibly integrated into clinical practice and successful in engaging most patients; however, continual support is needed for maximal engagement and sustained use. Copyright © 2018 Elsevier B.V. All rights reserved.
Analyzing the costs to deliver medication therapy management services.
Rupp, Michael T
2011-01-01
To provide pharmacy managers and consultant pharmacists with a step-by-step approach for analyzing the costs of delivering medication therapy management (MTM) services and to describe the use of a free online software application for determining the costs of delivering MTM. The process described is applicable to community pharmacies and to consultant pharmacists who provide MTM services from nonpharmacy settings. The PharmAccount Service Cost Calculator is an Internet-based software application that uses a guided online interview to collect the information needed to conduct a comprehensive cost analysis of any specialized pharmacy service. In addition to direct variable and fixed costs, the software automatically allocates indirect and overhead costs to the service and generates an itemized report that details the components of service delivery costs. The service cost calculator is sufficiently flexible to support the analysis of virtually any specialized pharmacy service, irrespective of whether the service is being delivered from a physical pharmacy. The software application allows users to perform sensitivity analysis to quickly determine the potential impact that alternate scenarios would have on service delivery cost. It is therefore particularly well suited to assist in the design and planning of a new pharmacy service. Good management requires that the cost implications of service delivery decisions are known and considered. Analyzing the cost of an MTM service is an important step in developing a sustainable business model.
Photovoltaic central station step and touch potential considerations in grounding system design
NASA Technical Reports Server (NTRS)
Engmann, G.
1983-01-01
The probability of hazardous step and touch potentials is an important consideration in central station grounding system design. Steam turbine generating station grounding system design is based on accepted industry practices and there is extensive in-service experience with these grounding systems. A photovoltaic (PV) central station is a relatively new concept and there is limited experience with PV station grounding systems. The operation and physical configuration of a PV central station is very different from a steam electric station. A PV station bears some similarity to a substation and the PV station step and touch potentials might be addressed as they are in substation design. However, the PV central station is a generating station and it is appropriate to examine the effect that the differences and similarities of the two types of generating stations have on step and touch potential considerations.
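For comparison with substation practice, computed surface potentials are commonly checked against tolerable limits of the kind given in IEEE Std 80; a hedged sketch of those textbook formulas for a 50 kg body, where rho_s is the surface-layer resistivity, C_s its derating factor, and t_s the fault duration (all inputs illustrative):

```python
import math

def tolerable_voltages_50kg(rho_s, C_s, t_s):
    """IEEE Std 80-style tolerable touch and step voltages, 50 kg body (sketch)."""
    body_current_term = 0.116 / math.sqrt(t_s)             # allowable body current (A)
    e_touch = (1000 + 1.5 * C_s * rho_s) * body_current_term
    e_step = (1000 + 6.0 * C_s * rho_s) * body_current_term
    return e_touch, e_step

# Illustrative: 3000 ohm-m crushed-rock surface layer, Cs = 0.8, 0.5 s fault clearing
touch, step = tolerable_voltages_50kg(rho_s=3000.0, C_s=0.8, t_s=0.5)
print(f"tolerable touch: {touch:.0f} V, tolerable step: {step:.0f} V")
```

The step limit is the more forgiving of the two because the foot-to-foot path places the two foot resistances in series, which is part of why a spread-out PV field and a compact substation can differ in which limit governs the design.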
Gauld, Cassandra S; Lewis, Ioni M; White, Katherine M; Watson, Barry
2016-11-01
The current study forms part of a larger study based on the Step Approach to Message Design and Testing (SatMDT), a new and innovative framework designed to guide the development and evaluation of health communication messages, including road safety messages. This four-step framework is based on several theories, including the Theory of Planned Behaviour. The current study followed steps one and two of the SatMDT framework and utilised a quantitative survey to validate salient beliefs (behavioural, normative, and control) about initiating, monitoring/reading, and responding to social interactive technology on smartphones among N = 114 young drivers (88 female, 26 male) aged 17-25 years. These beliefs had been elicited in a prior in-depth qualitative study. A subsequent critical beliefs analysis identified seven beliefs as potential targets for public education messages, including 'slow-moving traffic' (control belief - facilitator) for both monitoring/reading and responding behaviours; 'feeling at ease that you had received an expected communication' (behavioural belief - advantage) for monitoring/reading behaviour; and 'friends/peers more likely to approve' (normative belief) for responding behaviour. Potential message content targeting these seven critical beliefs is discussed in accordance with the SatMDT. Copyright © 2016 Elsevier Ltd. All rights reserved.
Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Dey, Bishwajyoti
2017-04-01
We study impurity-mediated vortex lattice melting in a rotating two-dimensional Bose-Einstein condensate (BEC). Impurities are introduced either through a protocol in which the vortex lattice is produced in an impurity potential, or by first creating the vortex lattice in the absence of random pinning and then cranking up the impurity potential. These two protocols are the obvious analogues of the two commonly known protocols for creating a vortex lattice in a type-II superconductor: the zero-field-cooling protocol and the field-cooling protocol, respectively. The time-splitting Crank-Nicolson method has been used to numerically simulate the vortex lattice dynamics. It is shown that the vortex lattice follows a two-step melting via loss of positional and orientational order. This vortex lattice melting process in the BEC closely mimics the recently observed two-step melting of vortex matter in the weakly pinned type-II superconductor Co-intercalated NbSe2. Also, using numerical perturbation analysis, we compare the states obtained in the two protocols and show that the vortex lattice states are metastable and more disordered when impurities are introduced after the formation of an ordered vortex lattice. The author would like to thank SERB, Govt. of India and BCUD-SPPU for financial support through research grants.
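A hedged miniature of the time-splitting idea behind such simulations (here a split-step Fourier variant rather than Crank-Nicolson): alternate half-steps in the potential plus mean-field term with a full kinetic step in Fourier space. One-dimensional, dimensionless units, no rotation term, and all parameters illustrative:

```python
import numpy as np

N, L, dt = 256, 20.0, 1e-3
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
rng = np.random.default_rng(0)
V = 0.5 * x**2 + 0.3 * rng.normal(size=N)   # harmonic trap + random impurity potential
g = 100.0                                   # interaction strength (illustrative)

psi = np.exp(-(x**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(2000):                       # Strang splitting of the GPE propagator
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))           # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))   # full kinetic step
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))           # half potential kick

print("norm after propagation:", round(float(np.sum(np.abs(psi) ** 2) * dx), 6))
```

Rotation (and hence vortices) would enter through an angular-momentum term that this 1-D sketch omits.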
Managing landscape connectivity for a fragmented area using spatial analysis model at town scale
NASA Astrophysics Data System (ADS)
Liu, Shiliang; Dong, Yuhong; Fu, Wei; Zhang, Zhaoling
2009-10-01
Urban growth strongly affects land use in the surrounding suburbs. Habitat loss and fragmentation in these areas are a major threat to the conservation of biodiversity. Enhancing landscape functional connectivity is usually an effective way to maintain a high level of biodiversity in disturbed areas. Taking a small town in Beijing as an example, we designed potential landscape corridors based on identification of landscape element quality and "least-cost" path analysis. We describe a general approach to establishing a corridor network in such fragmented areas at the town scale. The results showed that the position of landscape elements has various effects on landscape suitability. Small forest patches and other green lands such as meadows, shrubs, and even farmland can serve as potential stepping-stones or corridors for animal movement. The analysis also reveals critical areas that should be managed to facilitate the movement of dispersers among habitat patches.
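A hedged sketch of the "least-cost" path step on a resistance raster, using scikit-image's route_through_array; the raster values (low resistance through green land, high through built-up area) are illustrative:

```python
import numpy as np
from skimage.graph import route_through_array

# Resistance raster: low cost through green land (1), high through built-up area (50)
cost = np.full((100, 100), 50.0)
cost[40:60, :] = 1.0                     # a band of forest/meadow patches

# Least-cost corridor between two habitat patches on opposite sides of the town
path, total_cost = route_through_array(cost, start=(50, 0), end=(50, 99),
                                       fully_connected=True, geometric=True)
print(f"corridor of {len(path)} cells, accumulated cost {total_cost:.1f}")
```

Stepping-stone patches show up naturally in this framework as low-cost islands that bend the optimal path toward them.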
Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua
2010-08-01
In this paper, we present a new method that combines independent component analysis (ICA) and a wavelet de-noising algorithm to extract event-related potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs); then, the WaveShrink (WS) method is applied to the demixed ICs as an intermediate step, and the EEG data are rebuilt using the inverse ICA based on the new ICs; finally, the ERPs are extracted from the de-noised EEG data by averaging over several trials. The experimental results showed that both the combined method and ICA alone could remove eye artifacts and muscle artifacts mixed in the ERPs, while the combined method could retain the brain neural activity mixed in the noisy ICs and could efficiently extract weak ERPs from strong background artifacts.
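A hedged sketch of the combined pipeline, with FastICA standing in for extended Infomax and a universal soft threshold standing in for WaveShrink; the multichannel data are random placeholders:

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samp, n_ch = 1024, 8
eeg = rng.normal(size=(n_samp, n_ch))            # placeholder multichannel EEG

ica = FastICA(n_components=n_ch, random_state=0)
ics = ica.fit_transform(eeg)                     # unmix into independent components

den = np.empty_like(ics)
for j in range(n_ch):                            # wavelet-shrink each component
    coeffs = pywt.wavedec(ics[:, j], "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level estimate
    thr = sigma * np.sqrt(2 * np.log(n_samp))               # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    den[:, j] = pywt.waverec(coeffs, "db4")[:n_samp]

eeg_clean = ica.inverse_transform(den)           # remix; ERPs follow by trial averaging
```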
Antenna analysis using properties of metamaterials
NASA Astrophysics Data System (ADS)
Mitra, Atindra K.; Hu, Colin; Maxwell, Kasandra
2010-04-01
As part of the Student Internship Programs at Wright-Patterson Air Force Base, including the AFRL Wright Scholar Program for High School Students and the AFRL STEP Program, sample results from a preliminary investigation and analysis of integrated antenna structures are reported. Investigation of these novel integrated antenna geometries can be interpreted as a continuation of systems analysis under the general topic area of potential integrated apertures for future software radar/radio solutions [1], [2]. Specifically, the categories of novel integrated aperture geometries investigated in this paper include slotted-fractal structures on microstrip rectangular patch antenna models, in tandem with the analysis of exotic substrate materials composed of a type of synthesized electromagnetic structure known as metamaterials [8]-[10].
Novitski, David; Holdcroft, Steven
2015-12-16
Oxygen mass transport resistance through the ionomer component in the cathode catalyst layer is considered to contribute to overpotential losses in polymer electrolyte membrane fuel cells. Whereas it is known that water uptake, water transport, and proton conductivity are reduced upon reducing relative humidity, the effect on oxygen mass transport remains unknown. We report a two-electrode approach to determine mass transport coefficients for the oxygen reduction reaction in air at the Pt/perfluorosulfonic acid ionomer membrane interface between 90 and 30% RH at 70 °C, using a Pt microdisk in a solid-state electrochemical cell. Potential-step chronoamperometry was performed at specific mass-transport-limiting potentials to allow for the elucidation of the oxygen diffusion coefficient (D(bO2)) and oxygen concentration (c(bO2)). Novel approaches in data acquisition, as well as analysis, were examined because of the dynamic nature of the membrane under lowered hydration conditions. Linear regression analysis reveals a decrease in oxygen permeability (D(bO2)c(bO2)) by a factor of 1.7 and 3.4 from 90 to 30% RH for Nafion 211 membrane and membranes cast from Nafion DE2020 ionomer solutions, respectively. Additionally, nonlinear curve fitting by way of the Shoup-Szabo equation is employed to analyze the entire current transient during potential-step-controlled ORR. We also report on the presence of an RH dependence of our previously reported time-dependency measurements of O2 mass transport coefficients.
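The Shoup-Szabo expression referred to above approximates the chronoamperometric current at a microdisk of radius r as I(t) = 4nFDcr·f(τ) with τ = 4Dt/r² and f(τ) = 0.7854 + 0.8862·τ^(-1/2) + 0.2146·exp(-0.7823·τ^(-1/2)). A hedged fitting sketch on synthetic data (four-electron ORR and all values assumed for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

F, n_e, r = 96485.0, 4, 25e-6       # C/mol; electrons per O2 (assumption); disk radius (m)

def shoup_szabo(t, D, c):
    """Shoup-Szabo chronoamperometric current at a microdisk (A)."""
    tau = 4.0 * D * t / r**2
    f = 0.7854 + 0.8862 * tau**-0.5 + 0.2146 * np.exp(-0.7823 * tau**-0.5)
    return 4.0 * n_e * F * D * c * r * f

t = np.linspace(0.05, 10.0, 200)                               # s
rng = np.random.default_rng(0)
i_obs = shoup_szabo(t, 5e-10, 2.0) * (1 + 0.02 * rng.normal(size=t.size))

(D_fit, c_fit), _ = curve_fit(shoup_szabo, t, i_obs, p0=(1e-10, 1.0))
print(f"D = {D_fit:.2e} m^2/s, c = {c_fit:.2f} mol/m^3, D*c = {D_fit * c_fit:.2e}")
```

The permeability product D·c is the quantity the abstract tracks against relative humidity.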
Wait, Isaac W; Johnston, Cliff T; Blatchley, Ernest R
2007-06-01
Ultraviolet (UV) disinfection systems are incorporated into drinking water production facilities because of their broad-spectrum antimicrobial capabilities, and the minimal disinfection by-product formation that generally accompanies their use. Selection of an optimal location for a UV system within a drinking water treatment facility depends on many factors; a potentially important consideration is the effect of system location on operation and maintenance issues, including the potential for fouling of quartz surfaces. To examine the effect of system location on fouling, experiments were conducted at a groundwater treatment facility, wherein aeration, chlorination, and sand filtration were applied sequentially for treatment. In this facility, access to the water stream was available prior to and following each of the treatment steps. Therefore, it was possible to examine the effects of each of these unit operations on fouling dynamics within a UV system. Results indicated zero-order formation kinetics for the fouling reactions at all locations. Increases in oxidation reduction potential, caused by water treatment steps such as aeration and chlorination, increased the rate of sleeve fouling and the rate of irradiance loss within the reactor. Analysis of metals in the sleeve foulant showed that calcium and iron predominate, and relative comparisons of foulant composition to water chemistry highlighted a high affinity for incorporation into the foulant matrix for both iron and manganese, particularly after oxidizing treatment steps. Fouling behavior was observed to be in qualitative agreement with representations of the degree of saturation, relative to the metal:ligand combinations that are believed to comprise a large fraction of the foulants that accumulate on the surfaces of quartz jackets in UV systems used to treat water.
[Sample preparation and bioanalysis in mass spectrometry].
Bourgogne, Emmanuel; Wagner, Michel
2015-01-01
The quantitative analysis of compounds of clinical interest of low molecular weight (<1000 Da) in biological fluids is currently, in most cases, performed by liquid chromatography-mass spectrometry (LC-MS). Analysis of these compounds in biological fluids (plasma, urine, saliva, hair...) is a difficult task requiring sample preparation. Sample preparation is a crucial part of chemical/biological analysis and is, in a sense, considered the bottleneck of the whole analytical process. The main objectives of sample preparation are the removal of potential interferences, analyte preconcentration, and converting (if needed) the analyte into a form more suitable for detection or separation. Without sufficient chromatographic separation, endogenous compounds and co-eluting products may affect the performance of a quantitative mass spectrometry method. This work focuses on three distinct parts. First, quantitative bioanalysis will be defined, along with the different matrices and sample preparation techniques currently used in mass spectrometry bioanalysis of small molecules of clinical interest in biological fluids. In a second step, the goals of sample preparation will be described. Finally, in a third step, sample preparation strategies will be discussed, whether the sample is introduced directly ("dilute and shoot") or after a precipitation step.
Surrogate Analysis and Index Developer (SAID) tool
Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N.
2015-10-01
The regression models created in SAID can be used in utilities that have been developed to work with the USGS National Water Information System (NWIS) and the USGS National Real-Time Water Quality (NRTWQ) Web site. The real-time dissemination of predicted suspended-sediment concentration (SSC) and prediction intervals for each time step has substantial potential to improve understanding of sediment-related water quality and associated engineering and ecological management decisions.
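The core of such a surrogate tool is an ordinary least-squares model relating log-transformed SSC to a surrogate signal, reported with a prediction interval at each time step. The sketch below illustrates that idea only; the turbidity surrogate, coefficients, and data are invented, and this is not the SAID code.

```python
# Illustrative surrogate regression: log10(SSC) vs log10(turbidity), with a
# 95% prediction interval for a new time step. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
turbidity = rng.uniform(5, 500, 100)                       # FNU (synthetic)
log_ssc = 0.9 * np.log10(turbidity) + 0.5 + rng.normal(0, 0.1, 100)

fit = sm.OLS(log_ssc, sm.add_constant(np.log10(turbidity))).fit()

x_new = sm.add_constant(np.log10([250.0]), has_constant="add")
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
ssc, lo, hi = 10 ** frame[["mean", "obs_ci_lower", "obs_ci_upper"]].iloc[0]
print(f"predicted SSC ~ {ssc:.0f} mg/L (95% PI {lo:.0f}-{hi:.0f})")
```

Note that the back-transformation bias correction applied by tools of this kind is omitted here for brevity.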
Preconcentration for Improved Long-term Monitoring of Contaminants in Groundwater
2014-04-10
Johnson of the US Army Corps of Engineers, Tulsa District (recently retired) provided sites in northeastern Oklahoma for field trials as well as...neighboring wildlife is also a concern. Long-term monitoring of sites undergoing remediation as well as sites that may eventually require cleanup is...Activated charcoal and peroxide cleanup steps offer potential avenues for addressing this problem. The materials may be of value in isotopic analysis of
EM61-MK2 Response of Three Munitions Surrogates
2009-03-12
time-domain electromagnetic induction sensors, it produces a pulsed magnetic field (primary field) that induces a secondary field in metallic objects...selected and marked as potential metal targets. This initial list of anomalies is used as input to an analysis step that selects anomalies for digging...response of a metallic object to an Electromagnetic Induction sensor is most simply modeled as an induced dipole moment represented by a magnetic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Colburn, Heather A.; Fox, Alvin
2008-08-01
Detection of small quantities of agar associated with spores of Bacillus anthracis could provide key information regarding its source or growth characteristics. Agar, widely used for growth of bacteria on solid surfaces, consists primarily of repeating polysaccharide units of 3,6-anhydro-L-galactose (AGal) and galactose (Gal), with sulfated and O-methylated galactoses present as minor constituents. Two variants of the alditol acetate procedure were evaluated for detection of potential agar markers associated with spores. The first method employed a reductive hydrolysis step to stabilize labile anhydrogalactose by converting it to anhydrogalactitol. The second eliminated the reductive hydrolysis step, simplifying the procedure. Anhydrogalactitol derived from agar was detected using both derivatization methods followed by gas chromatography-mass spectrometry (GC-MS) analysis. However, challenges with artefactual background (reductive hydrolysis) or marker destruction (hydrolysis) led to the search for alternative sugar markers. A minor agar component, 6-O-methyl galactose (6-O-M Gal), was readily detected in agar-grown but not broth-grown bacteria. Detection was optimized by the use of gas chromatography-tandem mass spectrometry (GC-MS-MS). With appropriate choice of sugar marker and analytical procedure, detection of sugar markers for agar has considerable potential in microbial forensics.
Maternity Nurses' Perceptions of Implementation of the Ten Steps to Successful Breastfeeding.
Cunningham, Emilie M; Doyle, Eva I; Bowden, Rodney G
The purpose of this study was to determine maternity nurses' perceptions of implementing the Ten Steps to Successful Breastfeeding. An online survey and a focus group were used to evaluate maternity nurses' perceptions of implementing the Ten Steps to Successful Breastfeeding in an urban Texas hospital at the onset of the project. Responses were transcribed and coded using NVivo software. Thematic analysis was conducted, and consensus was reached among the research team to validate themes. Twenty-eight maternity nurses participated. Nurses perceived a number of barriers to implementing the Ten Steps to Successful Breastfeeding, including nurse staffing shortages, variations in practice among nurses, different levels of nurse education and knowledge about breastfeeding, lack of parental awareness and knowledge about breastfeeding, culture, and postpartum issues such as maternal fatigue, visitors, and routine required procedures during recovery care that interfered with skin-to-skin positioning. Maternity nurses desired more education about breastfeeding; specifically, a hands-on approach, rather than formal classroom instruction, to be able to promote successful implementation of the Ten Steps. More education on breastfeeding for new mothers, their families, and healthcare providers was recommended. Nurse staffing should be adequate to support nurses in their efforts to promote breastfeeding. Skin-to-skin positioning should be integrated into the recovery period. Hospital leadership support for full implementation and policy adherence is essential. Challenges in implementing the Ten Steps were identified along with potential solutions.
Communication as a Strategic Activity (Invited)
NASA Astrophysics Data System (ADS)
Fischhoff, B.
2010-12-01
Effective communication requires preparation. The first step is explicit analysis of the decisions faced by audience members, in order to identify the facts essential to their choices. The second step is assessing their current beliefs, in order to identify the gaps in their understanding, as well as their natural ways of thinking. The third step is drafting communications potentially capable of closing those gaps, taking advantage of the relevant behavioral science. The fourth step is empirically evaluating those communications, refining them as necessary. The final step is communicating through trusted channels, capable of getting the message out and receiving needed feedback. Executing these steps requires a team involving subject matter experts (for ensuring that the science is right), decision analysts (for identifying the decision-critical facts), behavioral scientists (for designing and evaluating messages), and communication specialists (for creating credible channels). Larger organizations should be able to assemble those teams and anticipate their communication needs. However, even small organizations, individuals, or large organizations that have been caught flat-footed can benefit from quickly assembling informal teams, before communicating in ways that might undermine their credibility. The work is not expensive, but does require viewing communication as a strategic activity, rather than an afterthought. The talk will illustrate the science base, with a few core research results; note the risks of miscommunication, with a few bad examples; and suggest the opportunities for communication leadership, focusing on the US Food and Drug Administration.
Systematically evaluating interfaces for RNA-seq analysis from a life scientist perspective.
Poplawski, Alicia; Marini, Federico; Hess, Moritz; Zeller, Tanja; Mazur, Johanna; Binder, Harald
2016-03-01
RNA-sequencing (RNA-seq) has become an established way for measuring gene expression in model organisms and humans. While methods development for refining the corresponding data processing and analysis pipeline is ongoing, protocols for typical steps have been proposed and are widely used. Several user interfaces have been developed for making such analysis steps accessible to life scientists without extensive knowledge of command line tools. We performed a systematic search and evaluation of such interfaces to investigate to what extent these can indeed facilitate RNA-seq data analysis. We found a total of 29 open source interfaces, and six of the more widely used interfaces were evaluated in detail. Central criteria for evaluation were ease of configuration, documentation, usability, computational demand and reporting. No interface scored best in all of these criteria, indicating that the final choice will depend on the specific perspective of users and the corresponding weighting of criteria. Considerable technical hurdles had to be overcome in our evaluation. For many users, this will diminish potential benefits compared with command line tools, leaving room for future improvement of interfaces. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Dlugonski, Deirdre; Motl, Robert W
2012-02-01
Persons with multiple sclerosis (MS) have consistently reported lower levels of self-esteem compared with the general population. Despite this, very little is known about the antecedents and consequences of self-esteem in persons with MS. The purpose of this study was to examine (1) physical activity and social support as potentially modifiable correlates (i.e., antecedents) of self-esteem and (2) physical and psychological health-related quality of life as possible consequences of self-esteem in persons with MS. Participants (N = 46) wore an Actigraph accelerometer for 7 days and then completed a battery of questionnaires, including the Rosenberg Self-Esteem Scale (RSES), Multiple Sclerosis Impact Scale (MSIS-29), and Social Provisions Scale (SPS). The data were analyzed using PASW Statistics 18. Bivariate correlation analysis indicated that average daily step counts (r = .298, p = .026) and social support (r = .366, p = .007) were significantly correlated with self-esteem. Multiple linear regression analysis indicated that only social support was a significant predictor of self-esteem scores (β = .411, p = .004); pedometer steps approached significance as a predictor of self-esteem (β = .178, p = .112). Bivariate correlation analysis further indicated significant negative associations between self-esteem and the physical (r = -.391, p = .004) and psychological (r = -.540, p = .0001) domains of health-related quality of life (HRQOL), indicating that higher self-esteem was associated with more positive HRQOL. Social support is a potentially modifiable variable that may be important to target when designing interventions to improve self-esteem, and this might have implications for improving physical and psychological HRQOL in persons with MS.
Pavlova, Milena; Tsiachristas, Apostolos; Vermaeten, Gerhard; Groot, Wim
2009-01-01
Portfolio analysis is a business management tool that can assist health care managers to develop new organizational strategies. The application of portfolio analysis to US hospital settings has been frequently reported. In Europe however, the application of this technique has received little attention, especially concerning public hospitals. Therefore, this paper examines the peculiarities of portfolio analysis and its applicability to the strategic management of European public hospitals. The analysis is based on a pilot application of a multi-factor portfolio analysis in a Dutch university hospital. The nature of portfolio analysis and the steps in a multi-factor portfolio analysis are reviewed along with the characteristics of the research setting. Based on these data, a multi-factor portfolio model is developed and operationalized. The portfolio model is applied in a pilot investigation to analyze the market attractiveness and hospital strengths with regard to the provision of three orthopedic services: knee surgery, hip surgery, and arthroscopy. The pilot portfolio analysis is discussed to draw conclusions about potential barriers to the overall adoption of portfolio analysis in the management of a public hospital. Copyright (c) 2008 John Wiley & Sons, Ltd.
Eutrophication of lakes and reservoirs: A framework for making management decisions
Rast, W.; Holland, M.
1988-01-01
The development of management strategies for the protection of environmental quality usually involves consideration of both technical and nontechnical issues. A logical, step-by-step framework for the development of such strategies is provided; its application to the control of cultural eutrophication of lakes and reservoirs illustrates its potential usefulness. From the perspective of the policymaker, the main consideration is that the eutrophication-related water quality of a lake or reservoir can be managed for given water uses. The approach presented here allows the rational assessment of relevant water-quality parameters, the establishment of water-quality goals, the consideration of social and other nontechnical issues, public involvement in the decision-making process, and a reasonable economic analysis, all within a management framework.
Computational crystallization.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H
2016-07-15
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
The semantic pathfinder: using an authoring metaphor for generic multimedia indexing.
Snoek, Cees G M; Worring, Marcel; Geusebroek, Jan-Mark; Koelma, Dennis C; Seinstra, Frank J; Smeulders, Arnold W M
2006-10-01
This paper presents the semantic pathfinder architecture for generic indexing of multimedia archives. The semantic pathfinder extracts semantic concepts from video by exploring different paths through three consecutive analysis steps, which we derive from the observation that produced video is the result of an authoring-driven process. We exploit this authoring metaphor for machine-driven understanding. The pathfinder starts with the content analysis step, which follows a data-driven approach to indexing semantics. The second step, style analysis, tackles the indexing problem by viewing a video from the perspective of production. Finally, the context analysis step views semantics in context. The virtue of the semantic pathfinder is its ability to learn the best path of analysis steps on a per-concept basis. To show the generality of this novel indexing approach, we develop detectors for a lexicon of 32 concepts and evaluate the semantic pathfinder against the 2004 NIST TRECVID video retrieval benchmark, using a news archive of 64 hours. Top-ranking performance in the semantic concept detection task indicates the merit of the semantic pathfinder for generic indexing of multimedia archives.
Single-step electrodeposition of CIS thin films with the complexing agent triethanolamine
NASA Astrophysics Data System (ADS)
Chiu, Yu-Shuen; Hsieh, Mu-Tao; Chang, Chih-Min; Chen, Chun-Shuo; Whang, Thou-Jen
2014-04-01
Some difficulties have long been encountered in single-step electrodeposition, such as the optimization of electrolyte composition, deposition potential, deposition time, and pH. Introducing ternary components into single-step electrodeposition is particularly challenging because of the different equilibrium potentials of the constituents. Complexing agents play an important role in single-step electrodeposition of CuInSe2 (CIS), since they bring the equilibrium potentials of the constituents closer to each other. In this work, single-step electrodeposition of CIS was enhanced by adding triethanolamine (TEA) to the deposition bath; the resulting CIS thin films showed improved polycrystalline, cauliflower-like structures, as shown by SEM images and XRD patterns. The optimum solution composition for single-step electrodeposition of CIS was found to be 5 mM CuCl2, 22 mM InCl3, and 22 mM SeO2 at pH 1.5 with 0.1 M TEA. The structures, compositions, and morphologies of as-deposited and annealed films were investigated.
Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis
NASA Astrophysics Data System (ADS)
Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.
2016-09-01
Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.
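As a rough illustration of the demodulate-filter-recover portion of this pipeline (steps 3-5), the sketch below strips one linear-FM signal of interest out of a two-signal mixture. It is a simplified stand-in for the paper's PTFA-based procedure: the chirp parameters are assumed to be already known (in practice they would come from the parameterized time-frequency stage), and all waveforms are synthetic.

```python
# Demodulate the signal of interest (SOI) to baseband with a conjugate chirp,
# low-pass away the other radar, then remodulate to recover the SOI.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
phase = 2 * np.pi * (1e5 * t + 0.5 * 2e8 * t**2)    # LFM: 100 kHz start, 200 MHz/s
soi = np.exp(1j * phase)
other = np.exp(1j * 2 * np.pi * 4.5e5 * t)          # overlapping second radar
x = soi + other

base = x * np.exp(-1j * phase)                      # SOI collapses to ~DC
b, a = butter(4, 5e4 / (fs / 2))                    # 50 kHz low-pass
recovered = filtfilt(b, a, base) * np.exp(1j * phase)
print(f"mean recovery error: {np.abs(recovered - soi).mean():.3f}")
```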
Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes
Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin
2012-01-01
Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
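Of the analyses listed, action potential duration mapping is the easiest to show compactly. The sketch below computes APD80 for a single pixel's optical action potential, taking activation at the maximum upstroke derivative and repolarization at 80% recovery; the trace, sampling rate, and thresholds are invented for illustration and do not come from the review.

```python
# APD80 from one optical action potential: activation = steepest upstroke,
# repolarization = first return to 20% of peak amplitude.
import numpy as np

fs = 1000.0                                        # frames per second (assumption)
t = np.arange(0, 0.5, 1 / fs)
f = np.exp(-t / 0.15) * (1 - np.exp(-t / 0.005))   # toy optical AP, a.u.

act = np.argmax(np.gradient(f))                    # activation index
peak = np.argmax(f)
repol_level = f[0] + 0.2 * (f[peak] - f[0])        # 80% repolarized
repol = peak + np.argmax(f[peak:] < repol_level)
print(f"APD80 = {(repol - act) / fs * 1000:.0f} ms")
```

In a mapping context the same computation runs per pixel after the segmentation and filtering steps described above.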
Volgushev, Maxim; Malyshev, Aleksey; Balaban, Pavel; Chistiakova, Marina; Volgushev, Stanislav; Wolf, Fred
2008-04-09
The generation of action potentials (APs) is a key process in the operation of nerve cells and the communication between neurons. Action potentials in mammalian central neurons are characterized by an exceptionally fast onset dynamics, which differs from the typically slow and gradual onset dynamics seen in identified snail neurons. Here we describe a novel method of analysis which provides a quantitative measure of the onset dynamics of action potentials. This method captures the difference between the fast, step-like onset of APs in rat neocortical neurons and the gradual, exponential-like AP onset in identified snail neurons. The quantitative measure of the AP onset dynamics, provided by the method, allows us to perform quantitative analyses of factors influencing the dynamics.
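Without reproducing the authors' exact measure, the flavor of such an analysis can be sketched via the phase plot (dV/dt versus V): "onset rapidness" is often taken as the slope of dV/dt against V in the onset region of the upstroke. The synthetic action potential, thresholds, and units below are assumptions for illustration.

```python
# Phase-plane estimate of AP onset rapidness (1/ms) on the rising phase.
import numpy as np

dt = 0.01                                             # ms per sample (assumption)
t = np.arange(0, 5, dt)
v = -65 + 100 / (1 + np.exp(-(t - 2.5) / 0.08))       # toy AP upstroke, mV

dvdt = np.gradient(v, dt)                             # mV/ms
rising = np.arange(v.size) <= np.argmax(dvdt)         # upstroke only
sel = rising & (dvdt > 10) & (dvdt < 50)              # onset region of phase plot
slope = np.polyfit(v[sel], dvdt[sel], 1)[0]
print(f"onset rapidness ~ {slope:.1f} 1/ms")
```

A fast, step-like onset yields a steep slope in this region, while a gradual, exponential-like onset yields a shallow one, which is the contrast the abstract describes between neocortical and snail neurons.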
Step by Step: Avoiding Spiritual Bypass in 12-Step Work
ERIC Educational Resources Information Center
Cashwell, Craig S.; Clarke, Philip B.; Graves, Elizabeth G.
2009-01-01
With spirituality as a cornerstone, 12-step groups serve a vital role in the recovery community. It is important for counselors to be mindful, however, of the potential for clients to be in spiritual bypass, which likely will undermine the recovery process.
The STEP database through the end-users eyes--USABILITY STUDY.
Salunke, Smita; Tuleu, Catherine
2015-08-15
The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the shared need of the drug development community to access relevant information on excipients effortlessly. Usability testing was performed to validate whether the database satisfies the needs of end-users. An evaluation framework was developed to assess usability. Participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode and Effects Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting the results, running complex queries, locating data, and registering to access the database were challenging. The three critical attributes identified as having an impact on the usability of the STEP database were (1) content and presentation, (2) navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation towards the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.
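FMEA prioritization reduces to ranking problems by a risk priority number (RPN), the product of severity, occurrence, and detection scores. The sketch below shows only that generic arithmetic; the problem names and scores are placeholders, not the study's actual ratings.

```python
# Generic FMEA ranking: RPN = severity * occurrence * detection (1-10 scales).
problems = [
    ("limiting search results", 8, 6, 4),
    ("running complex queries", 7, 5, 5),
    ("registration flow", 5, 7, 3),
]
for name, s, o, d in sorted(problems, key=lambda p: p[1] * p[2] * p[3], reverse=True):
    print(f"{name}: RPN = {s * o * d}")
```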
Metaphase II oocytes from human unilaminar follicles grown in a multi-step culture system.
McLaughlin, M; Albertini, D F; Wallace, W H B; Anderson, R A; Telfer, E E
2018-03-01
Can complete oocyte development be achieved from human ovarian tissue containing primordial/unilaminar follicles, grown in vitro in a multi-step culture to meiotic maturation, as demonstrated by the formation of polar bodies and a Metaphase II spindle? Development of human oocytes from primordial/unilaminar stages to resumption of meiosis (Metaphase II) and emission of a polar body was achieved within a serum-free multi-step culture system. Complete development of oocytes in vitro has been achieved in the mouse, where in vitro grown (IVG) oocytes from primordial follicles have resulted in the production of live offspring. Human oocytes have been grown in vitro from the secondary/multi-laminar stage to obtain fully grown oocytes capable of meiotic maturation. However, there are no reports of a culture system supporting complete growth from the earliest stages of human follicle development through to Metaphase II. Ovarian cortical biopsies were obtained with informed consent from women undergoing elective caesarean section (mean age: 30.7 ± 1.7 years; range: 25-39 years; n = 10). The study was conducted in a laboratory setting. Ovarian biopsies were dissected into thin strips and, after removal of growing follicles, were cultured in serum-free medium for 8 days (Step 1). At the end of this period, secondary/multi-laminar follicles were dissected from the strips, and intact follicles 100-150 μm in diameter were selected for further culture. Isolated follicles were cultured individually in serum-free medium in the presence of 100 ng/ml of human recombinant Activin A (Step 2). Individual follicles were monitored, and after 8 days cumulus oocyte complexes (COCs) were retrieved by gentle pressure on the cultured follicles. Complexes with complete cumulus and adherent mural granulosa cells were selected and cultured in the presence of Activin A and FSH on membranes for a further 4 days (Step 3). At the end of Step 3, complexes containing oocytes >100 μm in diameter were selected for IVM in SAGE medium (Step 4) and then fixed for analysis. Pieces of human ovarian cortex cultured in serum-free medium for 8 days (Step 1) supported early follicle growth, and 87 secondary follicles of diameter 120 ± 6 μm (mean ± SEM) could be dissected for further culture. After a further 8 days, 54 of the 87 follicles had reached the antral stage of development. COCs were retrieved by gentle pressure from the cultured follicles, and those with adherent mural granulosa cells (n = 48) were selected and cultured for a further 4 days (Step 3). At the end of Step 3, 32 complexes containing oocytes >100 μm in diameter were selected for IVM (Step 4). Nine of these complexes contained polar bodies within 24 h, and all polar bodies were abnormally large. Confocal immunohistochemical analysis showed the presence of a Metaphase II spindle, confirming that these IVG oocytes had resumed meiosis, although their developmental potential is unknown. This is a small number of samples, but it provides proof of concept that complete development of human oocytes can occur in vitro. Further optimization, with morphological evaluation and assessment of the fertilization potential of IVG oocytes, is required to determine whether they are normal. The ability to develop human oocytes from the earliest follicular stages in vitro through to maturation and fertilization would benefit fertility preservation practice. Funded by MRC Grants (G0901839 and MR/L00299X/1). No competing interests.
Baghdady, Mariam T; Carnahan, Heather; Lam, Ernest W N; Woods, Nicole N
2014-09-01
There has been much debate surrounding diagnostic strategies and the most appropriate training models for novices in oral radiology. It has been argued that an analytic approach, using a step-by-step analysis of the radiographic features of an abnormality, is ideal. Alternative research suggests that novices can successfully employ non-analytic reasoning. Many of these studies do not take instructional methodology into account. This study evaluated the effectiveness of non-analytic and analytic strategies in radiographic interpretation and explored the relationship between instructional methodology and diagnostic strategy. Second-year dental and dental hygiene students were taught four radiographic abnormalities using basic science instruction or a step-by-step algorithm. The students were tested on diagnostic accuracy and memory immediately after learning and one week later. A total of seventy-three students completed both the immediate and delayed sessions and were included in the analysis. Students were randomly divided into two conditions: one group provided a diagnostic hypothesis for the image and then identified specific features to support it, while the other group first identified features and then provided a diagnosis. Participants in the diagnosis-first condition (non-analytic reasoning) had higher diagnostic accuracy than those in the features-first condition (analytic reasoning), regardless of their learning condition. No main effect of learning condition or interaction with diagnostic strategy was observed. Educators should be mindful of the potential influence of analytic and non-analytic approaches on the effectiveness of the instructional method.
Quantum Control of Graphene Plasmon Excitation and Propagation at Heaviside Potential Steps.
Wang, Dongli; Fan, Xiaodong; Li, Xiaoguang; Dai, Siyuan; Wei, Laiming; Qin, Wei; Wu, Fei; Zhang, Huayang; Qi, Zeming; Zeng, Changgan; Zhang, Zhenyu; Hou, Jianguo
2018-02-14
Quantum mechanical effects of single particles can substantially affect collective plasmon behaviors. In this work, the quantum control of plasmon excitation and propagation in graphene is demonstrated by adopting the variable quantum transmission of carriers at Heaviside potential steps as a tuning knob. First, the plasmon reflection is revealed to be tunable within a broad range by varying the ratio γ between the carrier energy and the potential height, which originates from the quantum mechanical behavior of carriers propagating across potential steps. Moreover, the plasmon excitation by free-space photons in graphene potential wells can be regulated from fully suppressed to fully launched, also by adjusting γ, which defines the degree of carrier confinement in the wells. These discovered quantum plasmon effects offer a unified quantum-mechanical route toward ultimate control of both plasmon launching and propagation, which are indispensable processes in building plasmon circuitry.
Lombroso, Paul J.; Ogren, Marilee; Kurup, Pradeep; Nairn, Angus C.
2016-01-01
This commentary focuses on potential molecular mechanisms related to the dysfunctional synaptic plasticity that is associated with neurodegenerative disorders such as Alzheimer’s disease and Parkinson’s disease. Specifically, we focus on the role of striatal-enriched protein tyrosine phosphatase (STEP) in modulating synaptic function in these illnesses. STEP affects neuronal communication by opposing synaptic strengthening and does so by dephosphorylating several key substrates known to control synaptic signaling and plasticity. STEP levels are elevated in brains from patients with Alzheimer’s and Parkinson’s disease. Studies in model systems have found that high levels of STEP result in internalization of glutamate receptors as well as inactivation of ERK1/2, Fyn, Pyk2, and other STEP substrates necessary for the development of synaptic strengthening. We discuss the search for inhibitors of STEP activity that may offer potential treatments for neurocognitive disorders that are characterized by increased STEP activity. Future studies are needed to examine the mechanisms of differential and region-specific changes in STEP expression pattern, as such knowledge could lead to targeted therapies for disorders involving disrupted STEP activity. PMID:29098072
USE OF THE SDO POINTING CONTROLLERS FOR INSTRUMENT CALIBRATION MANEUVERS
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.; Morgenstern, Wendy M.
2005-01-01
During the science phase of the Solar Dynamics Observatory mission, the three science instruments require periodic instrument calibration maneuvers with a frequency of up to once per month. The command sequences for these maneuvers vary in length from a handful of steps to over 200 steps, and individual steps vary in size from 5 arcsec per step to 22.5 degrees per step. Early in the calibration maneuver development, it was determined that the original attitude sensor complement could not meet the knowledge requirements for the instrument calibration maneuvers in the event of a sensor failure. Because the mission must be single fault tolerant, an attitude determination trade study was undertaken to determine the impact of adding an additional attitude sensor versus developing alternative, potentially complex, methods of performing the maneuvers in the event of a sensor failure. To limit the impact to the science data capture budget, these instrument calibration maneuvers must be performed as quickly as possible while maintaining the tight pointing and knowledge required to obtain valid data during the calibration. To this end, the decision was made to adapt a linear pointing controller by adjusting gains and adding an attitude limiter so that it would be able to slew quickly and still achieve steady pointing once on target. During the analysis of this controller, questions arose about the stability of the controller during slewing maneuvers due to the combination of the integral gain, attitude limit, and actuator saturation. Analysis was performed and a method for disabling the integral action while slewing was incorporated to ensure stability. A high fidelity simulation is used to simulate the various instrument calibration maneuvers.
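The stability issue described, integral gain combined with an attitude limit and actuator saturation, is the classic integrator-windup problem; disabling integral action during slews is one standard remedy. The toy single-axis loop below illustrates that remedy only; the gains, torque limit, and unit-inertia model are invented placeholders, not SDO flight parameters.

```python
# Single-axis pointing loop with conditional integration: the integral term is
# frozen whenever the commanded torque is saturated (i.e., while slewing).
import numpy as np

kp, ki, kd = 2.0, 0.4, 1.5                      # placeholder gains
torque_limit = 0.1                               # N*m (placeholder)
dt = 0.1                                         # s
integ, att, rate = 0.0, np.deg2rad(10.0), 0.0    # 10 deg initial error

for _ in range(2000):
    err = -att
    u = kp * err + ki * integ - kd * rate
    u_sat = float(np.clip(u, -torque_limit, torque_limit))
    if u == u_sat:                   # accumulate integral only when unsaturated
        integ += err * dt
    rate += u_sat * dt               # unit inertia, Euler integration
    att += rate * dt
print(f"final pointing error: {np.rad2deg(att):.4f} deg")
```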
One-Step Electrochemical Preparation of Multilayer Graphene Functionalized with Nitrogen
NASA Astrophysics Data System (ADS)
Ustavytska, Olena; Kurys, Yaroslav; Koshechko, Vyacheslav; Pokhodenko, Vitaly
2017-03-01
A new, environmentally friendly, one-step method for producing multilayer (predominantly 7-9 layers) nitrogen-doped graphene (N-MLG) with a small amount of oxygen-containing defects was developed. The approach is based on the electrochemical exfoliation of a graphite electrode in the presence of azide ions under electrolysis with pulsed changing of the electrode polarization potential. It was found that the use of azide anions leads not only to the exfoliation of graphite but also to the simultaneous functionalization of graphene sheets by nitrogen atoms (as a result of electrochemical decomposition of azide anions with ammonia evolution). The composition, morphology, structure, and electrochemical properties of N-MLG were characterized by C,H,N analysis, transmission electron microscopy, atomic force microscopy, FTIR, UV-Vis, and Raman spectroscopy, as well as cyclic voltammetry. The prospect of using N-MLG as an oxygen reduction reaction electrocatalyst and for the electrochemical analysis of biomarkers (dopamine, ascorbic acid, and uric acid) in their mixtures was demonstrated.
[Application of virtual instrumentation technique in toxicological studies].
Moczko, Jerzy A
2005-01-01
Research investigations frequently require a direct connection of measuring equipment to a computer. Virtual instrumentation techniques considerably facilitate the programming of sophisticated acquisition-and-analysis procedures. In the standard approach, these two steps are performed sequentially with separate software tools: the acquired data are transferred via the export/import procedures of one program to another, which executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987, National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem being solved and to omit syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA/RISC workstations. This flexibility warrants that programs prepared for one particular platform are also appropriate for another.
Viral peptides-MHC interaction: Binding probability and distance from human peptides.
Santoni, Daniele
2018-05-23
Identification of peptides binding to the MHC class I complex can play a crucial role in retrieving potential targets able to trigger an immune response. The binding affinity of viral peptides can be estimated through effective computational methods that, in most cases, are based on a machine learning approach. Achieving a better insight into the peptide features that impact the binding affinity is a challenging issue. In the present work we focused on 9-mer peptides of Human immunodeficiency virus type 1 and Human herpes simplex virus 1, studying their binding to MHC class I. Viral 9-mers were partitioned into classes, each characterized by how far, in terms of mutation steps, its peptides are from the human 9-mers. We showed that the overall binding probability differs significantly among classes, and it typically increases with the distance, computed as the number of mutation steps from the human set of 9-mers. The binding probability is particularly high for viral 9-mers that are more than three mutation steps away from all human 9-mers. Further evidence of the significance of these special viral peptides, and a suggestion of the potential role they may play, comes from the analysis of their distribution along viral genomes: they are not randomly located but preferentially occur in specific genes. Copyright © 2018 Elsevier B.V. All rights reserved.
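The class construction reduces to a minimum Hamming distance between each viral 9-mer and the human 9-mer set. A toy sketch follows, with placeholder peptide sets far smaller than the genome-wide sets used in the study.

```python
# Class of a viral 9-mer = minimum number of substitutions (mutation steps)
# separating it from any human 9-mer.
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

human = {"SLYNTVATL", "GILGFVFTL"}               # placeholder human 9-mer set
viral = ["SLYNTVATL", "SLFNTVATL", "KLMNQWERT"]  # placeholder viral 9-mers

for pep in viral:
    d = min(hamming(pep, h) for h in human)
    print(f"{pep}: {d} mutation step(s) from the human set")
```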
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sörme, L., E-mail: louise.sorme@scb.se; Palm, V.; KTH Royal Institute of Technology, Division of Environmental Strategies Research, SE-100 44 Stockholm
2016-01-15
There is a great need for indicators to monitor the use and potential impacts of hazardous chemicals. Today there is a huge lack of data, methods and results, and method development and studies should be given urgent priority. The aim of this paper was to develop and test an approach to calculate the potential environmental impacts of chemicals for a whole country, using the E-PRTR (European Pollutant Release and Transfer Register) as a database and Sweden as an example. Swedish data from 2008 on emissions to air and water for 54 substances from point sources were retrieved from an open database. The data were transformed and aggregated using USEtox, a life-cycle impact assessment (LCIA) method for calculating potential human toxicity and ecotoxicity, both from industrial emissions directly and after input-output analysis (IO analysis) to reallocate emissions to product categories. Zinc to air and water contributed most to human toxicity, followed by mercury to air. The largest contribution by industry to potential human toxicity came from the metal industry, followed by the paper and paper product industry. For potential ecotoxicity, zinc, fluoranthene and copper contributed the most. The largest contributions by industry came from the paper and paper products manufacturing sector, followed by the basic metals manufacturing sector. The approach used here can be seen as a first step towards a chemical footprint for nations. By adding data from other countries and other sources, a more complete picture can be gained, in line with other footprint calculations. Furthermore, diffuse emissions from, for example, transport, or emissions of pesticides, could be added for a more holistic assessment. Since the area of chemicals is complicated, it is probably necessary to develop and use several indicators that complement each other. It is suggested that the approach outlined here could be useful in developing a method for establishing a national chemical footprint. Highlights: • The European Pollutant Release and Transfer Register (E-PRTR) was used to develop indicators. • The study combined emissions to air and water from E-PRTR with USEtox and IO analysis. • Metals, and especially zinc, contributed most to potential human toxicity and ecotoxicity. • The paper and metal industries contribute most to potential human toxicity and ecotoxicity. • This new assessment could be used by many countries and can be developed further.
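The aggregation step itself is simple arithmetic: each substance-compartment emission is multiplied by a characterization factor and the products are summed into a single potential-impact score. The sketch below shows only that structure; the substances and factor values are placeholders, not real USEtox characterization factors or Swedish E-PRTR data.

```python
# Characterization-factor aggregation in the USEtox style:
# impact = sum over substances of (emitted mass * CF).
emissions_kg = {"zinc_to_air": 1.2e5, "mercury_to_air": 8.0e2, "copper_to_water": 3.5e4}
cf_ecotox = {"zinc_to_air": 1.1e3, "mercury_to_air": 2.0e4, "copper_to_water": 5.3e3}

score = sum(mass * cf_ecotox[s] for s, mass in emissions_kg.items())
print(f"potential ecotoxicity: {score:.3e} CTUe")  # CTUe = comparative toxic units
```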
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph; Badescu, Mircea; Bao, Xiaoqi; Lee, Hyeong Jae; Sherrit, Stewart; Freeman, David; Campos, Sergio
2017-04-01
The potential return of samples back to Earth in a future NASA mission would require protection of our planet from the risk of bringing uncontrolled biological materials back with the samples. In order to ensure this does not happen, it would be necessary to "break the chain of contact (BTC)", where any material reaching Earth would have to be inside a container that is sealed with extremely high confidence. Therefore, it would be necessary to contain the acquired samples and destroy any potential biological materials that may contaminate the external surface of their container while protecting the sample itself for further analysis. A novel synchronous separation, seaming, sealing and sterilization (S4) process for sample containerization and planetary protection has been conceived and demonstrated. A prototype double wall container with inner and outer shells and Earth clean interstitial space was used for this demonstration. In a potential future mission, the double wall container would be split into two halves and prepared on Earth, while the potential on-orbit execution would consist of inserting the sample into one of the halves and then mating to the other half and brazing. The use of brazing material that melts at temperatures higher than 500°C would assure sterilization of the exposed areas since all carbon bonds are broken at this temperature. The process would be executed in two-steps, Step-1: the double wall container halves would be fabricated and brazed on Earth; and Step-2: the containerization and sterilization process would be executed on-orbit. To prevent potential jamming during the process of mating the two halves of the double wall container and the extraction of the brazed inner container, a cone-within-cone approach has been conceived and demonstrated. The results of this study will be described and discussed.
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
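Schematically, for a generic unknown u with a temperature-dependent coefficient A(T), the three discretizations named above can be written as follows; the notation is illustrative and not copied from the paper.

```latex
\begin{aligned}
\text{SI:}&\quad \frac{u^{n+1}-u^{n}}{\Delta t} = A\!\left(T^{n}\right)u^{n+1},\\
\text{FI:}&\quad \frac{u^{n+1}-u^{n}}{\Delta t} = A\!\left(T^{n+1}\right)u^{n+1},\\
\text{LI:}&\quad \frac{u^{n+1}-u^{n}}{\Delta t} =
  \left[A\!\left(T^{n}\right)
  + \left.\frac{\partial A}{\partial T}\right|_{T^{n}}\!\left(T^{n+1}-T^{n}\right)\right]u^{n+1}.
\end{aligned}
```

The analysis result quoted above then says that the FI and LI forms damp monotonically for any Δt, while the lagged coefficient A(T^n) in the SI form can produce oscillations once Δt is large enough.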
A semi-Lagrangian approach to the shallow water equation
NASA Technical Reports Server (NTRS)
Bates, J. R.; Mccormick, Stephen F.; Ruge, John; Sholl, David S.; Yavneh, Irad
1993-01-01
We present a formulation of the shallow water equations that emphasizes the conservation of potential vorticity. A locally conservative semi-Lagrangian time-stepping scheme is developed, which leads to a system of three coupled PDE's to be solved at each time level. We describe a smoothing analysis of these equations, on which an effective multigrid solver is constructed. Some results from applying this solver to the static version of these equations are presented.
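As a point of orientation, the essence of any semi-Lagrangian step is to trace each grid point back along the flow and interpolate the advected quantity at the departure point. The 1-D periodic sketch below shows that idea only; it is not the authors' coupled shallow-water scheme and ignores the potential-vorticity conservation and multigrid aspects.

```python
# Generic 1-D semi-Lagrangian advection step on a periodic grid.
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """Advance q one time step under velocity u via departure-point interpolation."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)               # departure points
    return np.interp(x_dep, x, q, period=n * dx)  # linear interpolation

# usage: advect a Gaussian bump at unit speed
n, dx, dt = 200, 1.0, 0.5
q = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
for _ in range(100):
    q = semi_lagrangian_step(q, np.full(n, 1.0), dt, dx)
```

The attraction of this class of scheme is stability at Courant numbers above one, which is what makes solving the implicit coupled system at each time level worthwhile.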
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Hurysz, B.; Luo, Z.; Denny, Hugh W.; Millard, David P.; Herkert, R.; Wang, R.
1992-01-01
The goal of this research project was to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) developing analytical tools (models and computer programs); (2) conducting parameterization (what if?) studies; (3) predicting the global space station EMI environment; and (4) providing a basis for modification of EMI standards.
Determination of Silicon in Hydrazine
NASA Technical Reports Server (NTRS)
McClure, Mark B.; Mast, Dion; Greene, Ben; Maes, Miguel J.
2006-01-01
Inductively coupled plasma-mass spectrometry (ICP-MS) is a highly sensitive technique sometimes used for the trace determination of silicon at a mass-to-charge (m/z) ratio of 28, the most abundant natural isotope of silicon. Unfortunately, ICP-MS is unable to differentiate between other sources of m/z 28, and false positive results for silicon will occur when other sources of m/z 28 are present. Nitrogen is a major source of m/z 28 and contributes to the m/z 28 signal when hydrazine sample or nitric acid preservative is introduced into the plasma. Accordingly, this work was performed to develop a sample preparation step, coupled with an ICP-MS analysis, that minimized non-silicon sources of m/z 28. In the preparatory step of this method, the hydrazine sample was first decomposed, predominantly to nitrogen gas and water, with copper-catalyzed hydrogen peroxide. In the analysis step, ICP-MS was used without nitric acid preservative in samples or standards. Glass, a potential source of silicon contamination, was also avoided where possible. The method was sensitive, accurate, and reliable for the determination of silicon in monopropellant grade hydrazine (MPH) in AF-E-332 elastomer leaching tests. Results for silicon in MPH were comparable to those reported in the literature for other studies.
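For reference, the copper-catalyzed decomposition in the preparatory step is consistent with the standard overall stoichiometry below (a textbook balance, not a reaction equation quoted from the paper):

```latex
\mathrm{N_2H_4} + 2\,\mathrm{H_2O_2}
  \xrightarrow{\;\mathrm{Cu}^{2+}\;}
  \mathrm{N_2}\uparrow + 4\,\mathrm{H_2O}
```

Driving the nitrogen off as gas before nebulization is what removes the dominant non-silicon contribution at m/z 28.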
Helsloot, Kaat; Walraevens, Mieke; Besauw, Saskia Van; Van Parys, An-Sofie; Devos, Hanne; Holsbeeck, Ann Van; Roelens, Kristien
2017-05-01
Objective: to develop a set of quality indicators for postnatal care after discharge from the hospital, using a systematic approach. Methods: key elements of qualitative postnatal care were defined by performing a systematic review, and the literature was searched for potential indicators (step 1). The potential indicators were evaluated against five criteria (validity, reliability, sensitivity, feasibility and acceptability) and by making use of the Appraisal of Indicators through Research and Evaluation (AIRE) instrument (step 2). In a modified Delphi survey, the quality indicators were presented to a panel of experts in the field of postnatal care using an online tool (step 3). The final results led to a Flemish model of postnatal care (step 4). Setting: Flanders, Belgium. Participants: health care professionals, representatives of health care organisations and policy makers with expertise in the field of postnatal care. Findings: after analysis, 57 research articles, 10 reviews, one book and eight other documents resulted in 150 potential quality indicators in seven critical care domains. Quality assessment of the indicators resulted in 58 concept quality indicators, which were presented to an expert panel of health care professionals. After two Delphi rounds, 30 quality indicators (six structure, 17 process, and seven outcome indicators) were found appropriate to monitor and improve the quality of postnatal care after discharge from the hospital. Key conclusions and implications for clinical practice: the quality indicators resulted in a Flemish model of qualitative postnatal care that was implemented by health authorities as a minimum standard in the context of shortened length of stay. Postnatal care should be adjusted to a flexible length of stay and start in pregnancy, with an individualised care plan that follows mother and new-born throughout pregnancy, childbirth and the postnatal period. Criteria for discharge and local protocols about the organisation and content of care are essential to facilitate continuity of care. Copyright © 2017 Elsevier Ltd. All rights reserved.
Behavioral preference in sequential decision-making and its association with anxiety.
Zhang, Dandan; Gu, Ruolei
2018-06-01
In daily life, people often make consecutive decisions before the ultimate goal is reached (i.e., sequential decision-making). However, this kind of decision-making has been largely overlooked in the literature. The current study investigated whether behavioral preference changes during sequential decisions, and the neural processes underlying the potential changes. For this purpose, we revised the classic balloon analogue risk task and recorded the electroencephalogram (EEG) signals associated with each step of decision-making. Independent component analysis performed on the EEG data revealed that four EEG components elicited by periodic feedback in the current step predicted participants' decisions (gamble vs. no gamble) in the next step. In order of time sequence, these components were: bilateral occipital alpha rhythm, bilateral frontal theta rhythm, middle frontal theta rhythm, and bilateral sensorimotor mu rhythm. According to the information flows between these EEG oscillations, we propose a brain model that describes the temporal dynamics of sequential decision-making. Finally, we found that the tendency to gamble (as well as the power intensity of bilateral frontal theta rhythms) was sensitive to the individual level of trait anxiety in certain steps, which may help clarify the role of emotion in decision-making. © 2018 Wiley Periodicals, Inc.
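The analysis style described, unmixing multichannel EEG into components and reading out band-limited power, can be sketched compactly. The code below uses scikit-learn's FastICA on two synthetic channels and measures theta-band (4-8 Hz) power of one component; it is a generic illustration, not the study's pipeline.

```python
# ICA unmixing followed by theta-band power of a component.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 250.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)          # 6 Hz "frontal theta" source
alpha = np.sin(2 * np.pi * 10 * t)         # 10 Hz "occipital alpha" source
mixing = np.array([[1.0, 0.4], [0.3, 1.0]])
eeg = mixing @ np.vstack([theta, alpha]) + 0.05 * np.random.randn(2, t.size)

sources = FastICA(n_components=2, random_state=0).fit_transform(eeg.T).T
freqs, psd = welch(sources[0], fs=fs, nperseg=512)
theta_power = psd[(freqs >= 4) & (freqs <= 8)].sum()
print(f"theta-band power of component 0: {theta_power:.4f}")
```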
Rowe, Sylvia; Alexander, Nick; Kretser, Alison; Steele, Robert; Kretsch, Molly; Applebaum, Rhona; Clydesdale, Fergus; Cummins, Deborah; Hentges, Eric; Navia, Juan; Jarvis, Ashley; Falci, Ken
2013-01-01
The present article articulates principles for effective public-private partnerships (PPPs) in scientific research. Recognizing that PPPs represent one approach for creating research collaborations and that there are other methods outside the scope of this article, PPPs can be useful in leveraging diverse expertise among government, academic, and industry researchers to address public health needs and questions concerned with nutrition, health, food science, and food and ingredient safety. A three-step process was used to identify the principles proposed herein: step 1) review of existing PPP guidelines, both in the peer-reviewed literature and at 16 disparate non-industry organizations; step 2) analysis of relevant successful or promising PPPs; and step 3) formal background interviews of 27 experienced, senior-level individuals from academia, government, industry, foundations, and non-governmental organizations. This process resulted in the articulation of 12 potential principles for establishing and managing successful research PPPs. The review of existing guidelines showed that guidelines for research partnerships currently reside largely within institutions rather than in the peer-reviewed literature. This article aims to introduce these principles into the literature to serve as a framework for dialogue and for future PPPs. PMID:24117791
J. M. Canik; Lore, J. D.; Ahn, J. -W.; ...
2013-01-12
Here, the pulsed application of n = 3 magnetic perturbation fields with amplitudes below that which triggers ELMs results in distinct, transient responses observable on several edge and divertor diagnostics in NSTX. We refer to these responses as Sub-Threshold Edge Perturbations (STEPs). An analysis of edge measurements suggests that STEPs result in increased transport in the plasma edge and scrape-off layer, which leads to augmentation of the intrinsic strike point splitting due to error fields, i.e., an intensification of the helical divertor footprint flux pattern. These effects are much smaller in magnitude than those of triggered ELMs, and are observed for the duration of the field perturbation measured internal to the vacuum vessel. In addition, STEPs are correlated with changes to the MHD activity, along with transient reductions in the neutron production rate. Ideally the STEPs could be used to provide density control and prevent impurity accumulation, in the same manner that on-demand ELM triggering is used on NSTX, without the impulsive divertor fluxes and potential for damage to plasma-facing components associated with ELMs.
Workflow Management for Complex HEP Analyses
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Rieger, M.; von Cube, R. F.
2017-10-01
We present the novel Analysis Workflow Management (AWM) that provides users with the tools and competences of professional large-scale workflow systems, e.g. Apache's Airavata[1]. The approach presents a paradigm shift from executing parts of the analysis to defining the analysis. Within AWM an analysis consists of steps. For example, a step may define running a certain executable for multiple files of an input data collection. Each call to the executable for one of those input files can be submitted to the desired run location, which could be the local computer or a remote batch system. An integrated software manager enables automated user installation of dependencies in the working directory at the run location. Each execution of a step item creates one report for bookkeeping purposes, containing error codes and output data or file references. Required files, e.g. those created by previous steps, are retrieved automatically. Since data storage and run locations are exchangeable from a step's perspective, computing resources can be used opportunistically. A visualization of the workflow as a graph of the steps in the web browser provides a high-level view of the analysis. The workflow system is developed and tested alongside a ttbb cross-section measurement where, for instance, the event selection is represented by one step and a Bayesian statistical inference is performed by another. The clear interface and dependencies between steps enable a make-like execution of the whole analysis.
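The make-like semantics can be pictured with a few lines of code: a step declares what it requires, dependencies run first, and completed steps are skipped on re-execution. The class and function names below are illustrative, not the actual AWM API.

```python
# Minimal make-like step executor: run dependencies, then the step itself,
# skipping anything already done (the bookkeeping role of AWM's reports).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[], None]
    requires: List["Step"] = field(default_factory=list)
    done: bool = False

def execute(step: Step) -> None:
    if step.done:
        return
    for dep in step.requires:
        execute(dep)
    print(f"running {step.name}")
    step.run()
    step.done = True

selection = Step("event_selection", lambda: None)
inference = Step("bayesian_inference", lambda: None, requires=[selection])
execute(inference)   # runs event_selection first, then bayesian_inference
```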
NASA Astrophysics Data System (ADS)
Weidlich, O.; Bernecker, M.
2004-04-01
Measurements of laminations from marine and limnic sediments are commonly a time-consuming procedure. However, the resulting quantitative proxies are important for the interpretation of both climate changes and paleo-seismic activities. Digital image analysis accelerates the generation and interpretation of large data sets from laminated sediments based on the contrasting grey values of dark and light laminae. Statistical transformation and correlation of the grey-value signals reflect high-frequency cycles due to changing mean laminae thicknesses, and thus provide data for monitoring climate change. Perturbations of the commonly continuous laminae (e.g., slumping structures, seismites, and tsunamites) record seismic activities and provide proxies for paleo-earthquake frequency. Using outcrop data from (i) the Pleistocene Lisan Formation of Jordan (Dead Sea Basin) and (ii) the Carboniferous-Permian Copacabana Formation of Bolivia (Lake Titicaca), we present a two-step approach to gain high-resolution time series from both unconsolidated and lithified outcrops. Step 1 concerns the construction of a continuous digital phototransect, and step 2 covers the creation of a grey-density curve along a line transect using image analysis. The applied automated image analysis technique provides a continuous digital record of the studied sections and therefore serves as a useful tool for the evaluation of further proxy data. Analysing the grey signal of the light and dark laminae of varves using phototransects, we discuss the potential and limitations of the proposed technique.
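Step 2 of the workflow, turning a photo into a grey-density curve and lamina statistics, reduces to sampling grey values along a transect and detecting alternating light/dark peaks. The sketch below substitutes a synthetic profile for an image loaded with, e.g., PIL; the smoothing width and peak prominence are placeholder choices.

```python
# Grey-value transect -> light-lamina positions and thickness statistics.
import numpy as np
from scipy.signal import find_peaks

depth = np.arange(2000)                              # pixels down-section
profile = (128 + 60 * np.sin(2 * np.pi * depth / 40.0)
           + 5 * np.random.randn(depth.size))        # synthetic laminae

smooth = np.convolve(profile, np.ones(5) / 5, mode="same")
peaks, _ = find_peaks(smooth, prominence=20)         # light laminae
spacing = np.diff(peaks)                             # lamina spacing, px
print(f"{peaks.size} light laminae, mean spacing {spacing.mean():.1f} px")
```

Time series of spacing (after calibration from pixels to mm or to varve years) are then what feed the spectral and correlation analyses mentioned above.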
Analysis, design, fabrication, and performance of three-dimensional braided composites
NASA Astrophysics Data System (ADS)
Kostar, Timothy D.
1998-11-01
Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.
NASA Astrophysics Data System (ADS)
Foresti, Loris; Reyniers, Maarten; Delobbe, Laurent
2014-05-01
The Short-Term Ensemble Prediction System (STEPS) is a probabilistic precipitation nowcasting scheme developed at the Australian Bureau of Meteorology in collaboration with the UK Met Office. In order to account for the multiscaling nature of rainfall structures, the radar field is decomposed into an eight-level multiplicative cascade using a Fast Fourier Transform. The cascade is advected using the velocity field estimated with optical flow and evolves stochastically according to a hierarchy of auto-regressive processes. This reproduces the empirical observation that the rate of temporal evolution of the small scales is faster than that of the large scales. The uncertainty in radar rainfall measurement and the unknown future development of the velocity field are also considered by stochastic modelling in order to reflect their typical spatial and temporal variability. Recently, a four-year national research program has been initiated by the University of Leuven, the Royal Meteorological Institute (RMI) of Belgium and three other partners: PLURISK ("forecasting and management of extreme rainfall induced risks in the urban environment"). The project deals with the nowcasting of rainfall and subsequent urban inundations, as well as socio-economic risk quantification, communication, warning and prevention. At the urban scale it is widely recognized that the uncertainty of hydrological and hydraulic models is largely driven by the input rainfall estimation and forecast uncertainty. In support of the PLURISK project, the RMI aims at integrating STEPS into the current operational deterministic precipitation nowcasting system INCA-BE (Integrated Nowcasting through Comprehensive Analysis). This contribution will illustrate examples of STEPS ensemble and probabilistic nowcasts for a few selected case studies of stratiform and convective rain in Belgium. The paper focuses on the development of STEPS products for potential hydrological users and a preliminary verification of the nowcasts, especially to analyze the spatial distribution of forecast errors. The analysis of nowcast biases reveals the locations where the convective initiation, rainfall growth and decay processes significantly reduce the forecast accuracy, but also points out the need for improving the radar-based quantitative precipitation estimation product that is used both to generate and verify the nowcasts. The collection of fields of verification statistics is implemented using an online update strategy, which potentially enables the system to learn from forecast errors as the archive of nowcasts grows. The study of the spatial or temporal distribution of nowcast errors is a key step to convey to the users an overall estimation of the nowcast accuracy and to drive future model developments.
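As an illustration of the cascade idea described above (not the operational STEPS code), the following sketch splits a synthetic field into octave-spaced spectral levels with FFT band-pass masks; the field, level count, and band edges are invented for demonstration.

```python
# Illustrative multiplicative-cascade decomposition via FFT band-pass masks.
# The rain field here is synthetic; STEPS' operational filters differ.
import numpy as np

rng = np.random.default_rng(0)
field = rng.gamma(2.0, 1.0, size=(256, 256))     # stand-in radar field
n_levels = 8

F = np.fft.fftshift(np.fft.fft2(field))
ny, nx = field.shape
y, x = np.ogrid[:ny, :nx]
radius = np.hypot(y - ny // 2, x - nx // 2)      # radial spatial frequency

# Octave-spaced band edges from the largest scale down to the pixel scale.
edges = np.concatenate(([0.0], np.geomspace(2.0, radius.max() + 1.0, n_levels)))
levels = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (radius >= lo) & (radius < hi)
    levels.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))

# Each level would then evolve with its own AR process, the small scales
# decorrelating fastest; the masks partition the spectrum, so the sum
# reconstructs the original field.
recon = np.sum(levels, axis=0)
print(len(levels), np.allclose(recon, field))
```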
New method for stock-tank oil compositional analysis.
McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut
2009-01-01
A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.
Adhesive-bonded scarf and stepped-lap joints
NASA Technical Reports Server (NTRS)
Hart-Smith, L. J.
1973-01-01
Continuum mechanics solutions are derived for the static load-carrying capacity of scarf and stepped-lap adhesive-bonded joints. The analyses account for adhesive plasticity, adherend stiffness imbalance, and thermal mismatch. The scarf joint solutions include a simple algebraic formula which serves as a close lower bound, within a small fraction of a per cent of the true answer for most practical geometries and materials. Digital computer programs were developed and, for the stepped-lap joints, the critical adherend and adhesive stresses are computed for each step. The scarf joint solutions exhibit grossly different behavior from that of double-lap joints for long overlaps, inasmuch as the potential bond shear strength continues to increase with indefinitely long overlaps on the scarf joints. The stepped-lap joint solutions exhibit some characteristics of both the scarf and double-lap joints. The stepped-lap computer program handles arbitrary (different) step lengths and thicknesses, and the solutions obtained have clarified potentially weak design details and their remedies. The program has been used effectively to optimize the joint proportions.
Continuous track paths reveal additive evidence integration in multistep decision making.
Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom
2017-10-03
Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
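The flavor of the HLCA account can be conveyed with a toy leaky competing accumulator; all parameters and reward values below are illustrative choices, not the authors' fitted model.

```python
# Toy leaky competing accumulator in the spirit of HLCA. Option B has the
# larger reward sum (13) but option A holds the maximal single reward (10),
# so activations first favor B, then redirect to A: the "initial dip".
import numpy as np

rng = np.random.default_rng(1)
dt, tau, leak, noise = 0.01, 0.1, 1.0, 0.05      # invented dynamics parameters
rewards = {"A": (10.0, 2.0), "B": (7.0, 6.0)}

x = np.zeros(2)                                  # activations for A and B
for step in range(300):
    t = step * dt
    w = min(t / 2.0, 1.0)                        # gradual shift from sum- to max-based drive
    drive = np.array([(1 - w) * sum(rewards["A"]) + w * max(rewards["A"]),
                      (1 - w) * sum(rewards["B"]) + w * max(rewards["B"])])
    x += dt / tau * (drive / 10.0 - leak * x) + noise * np.sqrt(dt) * rng.standard_normal(2)
    x = np.maximum(x, 0.0)                       # activations bounded below at zero
print("final activations (A, B):", x.round(2))
```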
Rooftop solar photovoltaic potential in cities: how scalable are assessment approaches?
NASA Astrophysics Data System (ADS)
Castellanos, Sergio; Sunter, Deborah A.; Kammen, Daniel M.
2017-12-01
Distributed photovoltaics (PV) have played a critical role in the deployment of solar energy, currently making up roughly half of the global PV installed capacity. However, there remains significant unused, economically beneficial potential. Estimates of the total technical potential for rooftop PV systems in the United States suggest a generation comparable to approximately 40% of 2016 total national electric-sector sales. To best take advantage of the rooftop PV potential, effective analytic tools that support deployment strategies, and aggressive local, state, and national policies to reduce the soft costs of solar energy, are vital. A key step is the low-cost automation of data analysis and business-case presentation for structure-integrated solar energy. In this paper, the scalability and resolution of various methods to assess urban rooftop PV potential are compared, concluding with suggestions for future work in bridging methodologies to better assist policy makers.
Energy dispersive-EXAFS of Pd nucleation at a liquid/liquid interface
NASA Astrophysics Data System (ADS)
Chang, S.-Y.; Booth, S. G.; Uehara, A.; Mosselmans, J. F. W.; Cibin, G.; Pham, V.-T.; Nataf, L.; Dryfe, R. A. W.; Schroeder, S. L. M.
2016-05-01
Energy dispersive extended X-ray absorption fine structure (EDE) has been applied to Pd nanoparticle nucleation at a liquid/liquid interface with control over the interfacial potential, and thereby over the driving force for nucleation. Preliminary analysis focusing on Pd K edge-step height determination shows that under supersaturated conditions the concentration of Pd near the interface fluctuates over a period of several hours, likely due to the continuous formation and dissolution of sub-critical nuclei. Open circuit potential measurements conducted ex situ in a liquid/liquid electrochemical cell support this view, showing that the fluctuations in Pd concentration are also visible as variations in potential across the liquid/liquid interface. By decreasing the interfacial potential through inclusion of a common ion (tetraethylammonium, TEA+), the Pd nanoparticle growth rate could be slowed down, resulting in a smooth nucleation process. Eventually, when the TEA+ ions reached an equilibrium potential, Pd nucleation and particle growth were inhibited.
Dynamic analysis environment for nuclear forensic analyses
NASA Astrophysics Data System (ADS)
Stork, C. L.; Ummel, C. C.; Stuart, D. S.; Bodily, S.; Goldblum, B. L.
2017-01-01
A Dynamic Analysis Environment (DAE) software package is introduced to facilitate group inclusion/exclusion method testing, evaluation and comparison for pre-detonation nuclear forensics applications. Employing DAE, the multivariate signatures of a questioned material can be compared to the signatures for different, known groups, enabling the linking of the questioned material to its potential process, location, or fabrication facility. Advantages of using DAE for group inclusion/exclusion include built-in query tools for retrieving data of interest from a database, the recording and documentation of all analysis steps, a clear visualization of the analysis steps intelligible to a non-expert, and the ability to integrate analysis tools developed in different programming languages. Two group inclusion/exclusion methods are implemented in DAE: principal component analysis, a parametric feature extraction method, and k nearest neighbors, a nonparametric pattern recognition method. Spent Fuel Isotopic Composition (SFCOMPO), an open source international database of isotopic compositions for spent nuclear fuels (SNF) from 14 reactors, is used to construct PCA and KNN models for known reactor groups, and 20 simulated SNF samples are utilized in evaluating the performance of these group inclusion/exclusion models. For all 20 simulated samples, PCA in conjunction with the Q statistic correctly excludes a large percentage of reactor groups and correctly includes the true reactor of origination. Employing KNN, 14 of the 20 simulated samples are classified to their true reactor of origination.
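A hedged sketch of the two inclusion/exclusion methods named above, run on synthetic isotopic-composition vectors since SFCOMPO data are not reproduced here; the exclusion threshold, dimensions, and group labels are placeholders.

```python
# Sketch of PCA + Q statistic (exclusion) and kNN (inclusion) with scikit-learn.
# All data, thresholds, and dimensions are invented stand-ins for SFCOMPO.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X_known = rng.normal(0, 1, (200, 10))            # stand-in isotopic signatures
y_known = rng.integers(0, 14, 200)               # 14 known reactor groups
x_q = rng.normal(0, 1, (1, 10))                  # questioned sample

# PCA + Q statistic: exclude a group if the questioned sample's residual
# distance from the group's principal subspace exceeds a threshold.
group = X_known[y_known == 0]
pca = PCA(n_components=3).fit(group)
resid = x_q - pca.inverse_transform(pca.transform(x_q))
Q = float((resid ** 2).sum())
print("exclude group 0" if Q > 10.0 else "cannot exclude group 0")

# k nearest neighbors: nonparametric inclusion by majority vote.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_known, y_known)
print("kNN assigns group", knn.predict(x_q)[0])
```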
Using Click Chemistry to Identify Potential Drug Targets in Plasmodium
2015-04-01
AWARD NUMBER: W81XWH-13-1-0429. PRINCIPAL INVESTIGATOR: Dr. Purnima. [Recoverable abstract fragment] … step of the Plasmodium mammalian cycle. Inhibiting this step can block malaria at an early stage; however, few anti-malarials target liver infection …
Designing divertor targets for uniform power load
NASA Astrophysics Data System (ADS)
Dekeyser, W.; Reiter, D.; Baelmans, M.
2015-08-01
Divertor design for next-step fusion reactors heavily relies on 2D edge plasma modeling with codes such as B2-EIRENE. While these codes are typically used in a design-by-analysis approach, in previous work we have shown that divertor design can alternatively be posed as a mathematical optimization problem and solved very efficiently using adjoint methods adapted from computational aerodynamics. This approach has been applied successfully to divertor target shape design for a more uniform power load. In this paper, the concept is further extended to include all contributions to the target power load, with particular focus on radiation. In a simplified test problem, we show the potential benefits of fully including the radiation load in the design cycle as compared to only assessing this load in a post-processing step.
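A toy stand-in for the "design as optimization" idea: choose a target-shape parameter to flatten a modeled load profile by minimizing its variance. The load model below is invented for illustration, not B2-EIRENE physics, and a scalar optimizer replaces the adjoint machinery.

```python
# Toy shape optimization: minimize the variance of a hypothetical power-load
# profile over one tilt parameter. The load model is a placeholder.
import numpy as np
from scipy.optimize import minimize_scalar

s = np.linspace(0, 1, 200)                       # coordinate along the target

def load(theta):
    # Hypothetical trade-off: larger tilt spreads the flux over a wider footprint.
    width = 0.1 + 0.3 * theta
    return np.exp(-((s - 0.5) / width) ** 2) / width

J = lambda theta: np.var(load(theta))            # uniformity objective
res = minimize_scalar(J, bounds=(0.0, 1.0), method="bounded")
print(f"optimal tilt parameter: {res.x:.3f}")
```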
Synthesis of Novel Double-Layer Nanostructures of SiC–WOx by a Two-Step Thermal Evaporation Process
2009-01-01
A novel double-layer nanostructure of silicon carbide and tungsten oxide is synthesized by a two-step thermal evaporation process using NiO as the catalyst. First, SiC nanowires are grown on a Si substrate and then high-density W18O49 nanorods are grown on these SiC nanowires to form a double-layer nanostructure. XRD and TEM analysis revealed that the synthesized nanostructures are well crystalline. The growth of W18O49 nanorods on SiC nanowires is explained on the basis of the vapor–solid (VS) mechanism. The favorable turn-on field (5.4 V/μm) obtained from the field emission measurements suggests that the synthesized nanostructures could be used as potential field emitters. PMID:20596292
Shang, Fengjun; Muimhneacháin, Eoin Ó; Jerry Reen, F; Buzid, Alyah; O'Gara, Fergal; Luong, John H T; Glennon, Jeremy D; McGlacken, Gerard P
2014-10-01
Pseudomonas aeruginosa uses a hierarchical cell-cell communication system consisting of a number of regulatory elements to coordinate the expression of bacterial virulence genes. Sensitive detection of quorum sensing (QS) molecules has the potential for early identification of P. aeruginosa, facilitating early medical intervention. A recently isolated cell-cell communication molecule, a thiazole termed IQS, can bypass the las QS system of P. aeruginosa in times of stress, activating a subset of QS-controlled genes. This compound offers a new target for pathogen detection and has been prepared in a one-step protocol. A simple electrochemical strategy was employed for its sensitive detection using boron-doped diamond and glassy carbon electrodes by cyclic voltammetry and amperometry. Copyright © 2014 Elsevier Ltd. All rights reserved.
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction, and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Eric; Snowden-Swan, Lesley J.; Talmadge, Michael
This paper presents a comparative techno-economic analysis of five conversion pathways from biomass to gasoline-, jet-, and diesel-range hydrocarbons via indirect liquefaction with specific focus on pathways utilizing oxygenated intermediates (derived either via thermochemical or biochemical conversion steps). The four emerging pathways of interest are compared with one conventional pathway (Fischer-Tropsch) for the production of the hydrocarbon blendstocks. The processing steps of the four emerging pathways include: biomass-to-syngas via indirect gasification, gas cleanup, conversion of syngas to alcohols/oxygenates, followed by conversion of alcohols/oxygenates to hydrocarbon blendstocks via dehydration, oligomerization, and hydrogenation. We show that the emerging pathways via oxygenated intermediates have the potential to be cost competitive with the conventional Fischer-Tropsch process. The evaluated pathways and the benchmark process generally exhibit similar fuel yields and carbon conversion efficiencies. The resulting minimum fuel selling prices are comparable to the benchmark at approximately $3.60 per gallon-gasoline equivalent, with potential for two new pathways to be more economically competitive. Additionally, the coproduct values can play an important role in the economics of the processes with oxygenated intermediates derived via syngas fermentation. Major cost drivers for the integrated processes are tied to achievable fuel yields and conversion efficiency of the intermediate steps, i.e., the production of oxygenates/alcohols from syngas and the conversion of oxygenates/alcohols to hydrocarbon fuels.
Crenshaw, Jeremy R; Kaufman, Kenton R; Grabiner, Mark D
2013-07-01
The purpose of this study was to evaluate the effects of compensatory-step training of healthy, mobile, young-to-middle-aged people with unilateral transfemoral or knee-disarticulation amputations. Outcomes of interest included recovery success, reliance on the prosthesis, and the kinematic variables relevant to trip recovery. Over the course of six training sessions, five subjects responded to postural disturbances that necessitated forward compensatory steps to avoid falling. Subjects improved their ability to recover from these postural disturbances without falling or hopping on the non-prosthetic limb. Subjects improved their compensatory stepping response by decreasing trunk flexion and increasing the sagittal-plane distance between the body center of mass and the stepping foot. In response to more challenging disturbances, these training-related improvements were not observed for the initial step with the non-prosthetic limb. Regardless of the stepping limb, step length and the change in pelvic height were not responsive to training. This study demonstrates the potential benefits of a compensatory-step training program for amputees and informs future improvements to the protocol. Copyright © 2013 Elsevier B.V. All rights reserved.
Improved enteral tolerance following step procedure: systematic literature review and meta-analysis.
Fernandes, Melissa A; Usatin, Danielle; Allen, Isabel E; Rhee, Sue; Vu, Lan
2016-10-01
Surgical management of children with short bowel syndrome (SBS) changed with the introduction of the serial transverse enteroplasty procedure (STEP). We conducted a systematic review and meta-analysis using MEDLINE and SCOPUS to determine if children with SBS had improved enteral tolerance following STEP. Studies were included if information about a child's pre- and post-STEP enteral tolerance was provided. A random effects meta-analysis provided a summary estimate of the proportion of children with enteral tolerance increase following STEP. From 766 abstracts, seven case series involving 86 children were included. Mean percent tolerance of enteral nutrition improved from 35.1% to 69.5%. Sixteen children had no enteral improvement following STEP. A summary estimate showed that 87% (95% CI 77-95%) of children who underwent STEP had an increase in enteral tolerance. Compilation of the literature supports the belief that SBS subjects' enteral tolerance improves following STEP. Enteral nutritional tolerance is a measure of efficacy of STEP and should be presented as a primary or secondary outcome. By standardizing data collection on children undergoing the STEP procedure, better determination of nutritional benefit from STEP can be ascertained.
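The pooling approach named here, a random-effects meta-analysis of proportions, can be sketched with a DerSimonian-Laird estimator; the seven case-series counts below are placeholders, not the published data.

```python
# DerSimonian-Laird random-effects pooling of proportions on the logit scale.
# The event/total counts are invented stand-ins for the seven case series.
import numpy as np

events = np.array([10, 8, 12, 9, 15, 11, 7])     # children improving (hypothetical)
totals = np.array([12, 10, 14, 11, 17, 13, 9])

p = events / totals
y = np.log(p / (1 - p))                          # logit stabilizes proportions near 0/1
v = 1 / events + 1 / (totals - events)           # within-study variance on logit scale

w = 1 / v                                        # fixed-effect weights
y_fe = (w * y).sum() / w.sum()
Qstat = (w * (y - y_fe) ** 2).sum()
tau2 = max(0.0, (Qstat - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1 / (v + tau2)                            # random-effects weights
y_re = (w_re * y).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
back = lambda z: 1 / (1 + np.exp(-z))            # back-transform to proportions
print(f"pooled proportion {back(y_re):.2f} "
      f"(95% CI {back(y_re - 1.96 * se):.2f}-{back(y_re + 1.96 * se):.2f})")
```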
Ohnishi, T; King, T E; Salerno, J C; Blum, H; Bowyer, J R; Maida, T
1981-06-10
Thermodynamic parameters of succinate dehydrogenase flavin were determined potentiometrically from the analysis of free radical signal levels as a function of the oxidation-reduction potential. Midpoint redox potentials of consecutive 1-electron transfer steps are -127 and -31 mV at pH 7.0. This corresponds to a stability constant of intermediate stability, 2.5 × 10⁻², which suggests flavin itself may be a converter from n = 2 to n = 1 electron transfer steps. The pK values of the free radical (FlH• ⇌ Fl•⁻) and the fully reduced form (FlH₂ ⇌ FlH⁻) were estimated as 8.0 ± 0.2 and 7.7 ± 0.2, respectively. Succinate dehydrogenase flavosemiquinone elicits an EPR spectrum at g = 2.00 with a peak-to-peak width of 1.2 mT even in the protonated form, suggesting delocalization of the unpaired electron density. A close proximity of succinate dehydrogenase flavin and iron-sulfur cluster S-1 was demonstrated based on the enhancement of flavin spin relaxation by Center S-1.
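The reported numbers are internally consistent: for consecutive one-electron steps with midpoints E1 and E2, the semiquinone stability constant is K = exp[F(E1 - E2)/RT]. A minimal check, taking E1 and E2 in the order given by the abstract:

```python
# Consistency check of the reported stability constant against the two
# midpoint potentials; constants only, no assumptions beyond the abstract.
import math

F, R, T = 96485.0, 8.314, 298.15                 # C/mol, J/(mol K), K
E1, E2 = -0.127, -0.031                          # V: ox/semiquinone and semiquinone/reduced steps
K = math.exp(F * (E1 - E2) / (R * T))
print(f"K = {K:.1e}")                            # ~2.4e-2, matching the reported 2.5e-2
```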
Guihéneuf, Freddy; Schmid, Matthias; Stengel, Dagmar B
2015-01-01
Despite the number of biochemical studies exploring algal lipids and fatty acid biosynthesis pathways and profiles, the analytical methods used by phycologists for this purpose are often diverse and incompletely described. Confusion and variability between studies can therefore arise from differences in protocols for lipid extraction and fractionation, as well as in the preparation of fatty acid methyl esters (FAME) before gas chromatography (GC) analyses. Here, we describe a step-by-step procedure for the profiling of neutral and polar lipids using techniques such as solid-liquid extraction (SLE), thin-layer chromatography (TLC), and gas chromatography coupled with a flame ionization detector (GC-FID). As an example, in this protocol chapter, analyses of neutral and polar lipids from the marine microalga Pavlova lutheri (an EPA/DHA-rich haptophyte) are outlined to describe the distribution of fatty acid residues within its major lipid classes. This method has proven to be a reliable technique for assessing changes in lipid and fatty acid profiles in several other microalgal species and seaweeds.
Multi-modal two-step floating catchment area analysis of primary health care accessibility.
Langford, Mitchel; Higgs, Gary; Fry, Richard
2016-03-01
Two-step floating catchment area (2SFCA) techniques are popular for measuring potential geographical accessibility to health care services. This paper proposes methodological enhancements to increase the sophistication of the 2SFCA methodology by incorporating both public and private transport modes using dedicated network datasets. The proposed model yields separate accessibility scores for each modal group at each demand point to better reflect the differential accessibility levels experienced by each cohort. An empirical study of primary health care facilities in South Wales, UK, is used to illustrate the approach. Outcomes suggest the bus-riding cohort of each census tract experience much lower accessibility levels than those estimated by an undifferentiated (car-only) model. Car drivers' accessibility may also be misrepresented in an undifferentiated model because they potentially profit from the lower demand placed upon service provision points by bus riders. The ability to specify independent catchment sizes for each cohort in the multi-modal model allows aspects of preparedness to travel to be investigated. Copyright © 2016. Published by Elsevier Ltd.
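For readers unfamiliar with the 2SFCA mechanics, a minimal single-mode sketch follows; travel times, supply, and demand are synthetic, and the multi-modal variant described above would repeat the calculation per cohort with a mode-specific travel-time matrix and catchment size.

```python
# Minimal single-mode 2SFCA. Step 1: provider-to-population ratio per facility
# over its catchment; step 2: each tract sums the ratios of reachable facilities.
import numpy as np

rng = np.random.default_rng(3)
t = rng.uniform(0, 60, (50, 5))                  # travel time: 50 tracts x 5 surgeries (synthetic)
demand = rng.integers(500, 5000, 50)             # population per tract
supply = rng.integers(1, 10, 5)                  # GPs per surgery
t0 = 30.0                                        # catchment threshold (minutes), illustrative

within = t <= t0
# Step 1: ratio of supply to the population falling inside each catchment.
R = supply / np.maximum((within * demand[:, None]).sum(axis=0), 1)
# Step 2: accessibility score of each tract.
A = (within * R[None, :]).sum(axis=1)
print(A.round(5))
```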
Uranium Pyrophoricity Phenomena and Prediction (FAI/00-39)
DOE Office of Scientific and Technical Information (OSTI.GOV)
PLYS, M.G.
2000-10-10
The purpose of this report is to provide a topical reference on the phenomena and prediction of uranium pyrophoricity for the Hanford Spent Nuclear Fuel (SNF) Project with specific applications to SNF Project processes and situations. Spent metallic uranium nuclear fuel is currently stored underwater at the K basins in the Hanford 100 area, and planned processing steps include: (1) At the basins, cleaning and placing fuel elements and scrap into stainless steel multi-canister overpacks (MCOs) holding about 6 MT of fuel apiece; (2) At nearby cold vacuum drying (CVD) stations, draining, vacuum drying, and mechanically sealing the MCOs; (3) Shipping the MCOs to the Canister Storage Building (CSB) on the 200 Area plateau; and (4) Welding shut and placing the MCOs for interim (40 year) dry storage in closed CSB storage tubes cooled by natural air circulation through the surrounding vault. Damaged fuel elements have exposed and corroded fuel surfaces, which can exothermically react with water vapor and oxygen during normal process steps and in off-normal situations. A key process safety concern is the rate of reaction of damaged fuel and the potential for self-sustaining or runaway reactions, also known as uranium fires or fuel ignition. Uranium metal and one of its corrosion products, uranium hydride, are potentially pyrophoric materials. Dangers of pyrophoricity of uranium and its hydride have long been known in the U.S. Department of Energy (Atomic Energy Commission/DOE) complex and will be discussed more below; it is sufficient here to note that there are numerous documented instances of uranium fires during normal operations. The motivation for this work is to place the safety of the present process in proper perspective given past operational experience. Steps in development of such a perspective are: (1) Description of underlying physical causes for runaway reactions, (2) Modeling physical processes to explain runaway reactions, (3) Validation of the method against experimental data, (4) Application of the method to plausibly explain operational experience, and (5) Application of the method to present process steps to demonstrate process safety and margin. Essentially, the logic above is used to demonstrate that runaway reactions cannot occur during normal SNF Project process steps, and to illustrate the depth of the technical basis for such a conclusion. Some off-normal conditions are identified here that could potentially lead to runaway reactions. However, this document is not intended to provide an exhaustive analysis of such cases. In summary, this report provides a "toolkit" of models and approaches for analysis of pyrophoricity safety issues at Hanford, and the technical basis for the recommended approaches. A summary of recommended methods appears in Section 9.0.
2014-01-01
objectives. This report reviews the approaches used in the private sector and in government organizations for tackling Steps 2a and 2b of the workforce… unemployment rate). Separation rates may also reflect early retirement incentive packages offered by the company because of reduced staffing needs… depending on their objectives. Purpose: This report provides a review of analytical approaches used in the private sector and in government organizations
Tsai, Charlie; Lee, Kyoungjin; Yoo, Jong Suk; ...
2016-02-16
Density functional theory calculations are used to investigate thermal water decomposition over the close-packed (111), stepped (211), and open (100) facets of transition metal surfaces. A descriptor-based approach is used to determine that the (211) facet leads to the highest possible rates. A range of 96 binary alloys was then screened for potential activity, and a rate-control analysis was performed to assess how the overall rate could be improved.
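The descriptor-based screening logic can be caricatured with a one-descriptor volcano; the candidate names, descriptor values, and volcano parameters below are invented, not the paper's DFT results.

```python
# Schematic descriptor screen: rank candidate surfaces by a one-descriptor
# volcano whose rate peaks at an optimal binding energy. All numbers invented.
candidates = {"M1": -0.3, "M2": 0.1, "M3": 0.45, "M4": -0.7}   # hypothetical descriptor values (eV)
opt, width = 0.0, 0.35                            # volcano apex and width, illustrative

log_rate = {m: -((dE - opt) / width) ** 2 for m, dE in candidates.items()}
for m, r in sorted(log_rate.items(), key=lambda kv: -kv[1]):
    print(f"{m}: relative log-rate {r:.2f}")
```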
Malliou, P; Rokka, S; Beneka, A; Gioftsidou, A; Mavromoustakos, S; Godolias, G
2014-01-01
There is limited information on injury patterns in Step Aerobic Instructors (SAI) who exclusively teach "step" aerobic classes. The aims were to record the type and anatomical position, in relation to diagnosis, of musculoskeletal injuries in step aerobic instructors, and to analyse the days of absence due to chronic injury in relation to weekly working hours, height of the step platform, working experience, and working surface and footwear during the step class. The Step Aerobic Instructors Injuries Questionnaire was developed, and validity and reliability indices were calculated. Sixty-three SAI completed the questionnaire. For the statistical analysis of the data, frequency analysis and the non-parametric χ² test were used.
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
Criticality of the reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of burnup steps and cycle operation steps on the evaluated criticality of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is varied on a day-step basis from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, calculation efficiency as a function of the number of computer processors used to run the analysis (time efficiency of the calculation) has been investigated. The optimization method for reactor design analysis, which used a large fast breeder reactor type as a reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality is higher for smaller burnup steps (days), whereas the breeding ratio is lower for smaller burnup steps. Some nuclides contribute to better criticality at smaller burnup steps owing to their individual half-lives. The calculation time for different burnup steps correlates with the time required for more detailed step calculations, although the computation time is not directly proportional to the number of divisions of the burnup time step.
Rehabilitation Associate Training for Employed Staff. Task Analysis (RA-2).
ERIC Educational Resources Information Center
Davis, Michael J.; Jensen, Mary
This learning module, which is intended for use in in-service training for vocational rehabilitation counselors, deals with writing a task analysis. Step-by-step guidelines are provided for breaking down a task into small teachable steps by analyzing the task in terms of the way in which it will be performed once learned (method), the steps to be…
Fully integrated lab-on-a-disc for nucleic acid analysis of food-borne pathogens.
Kim, Tae-Hyeong; Park, Juhee; Kim, Chi-Ju; Cho, Yoon-Kyoung
2014-04-15
This paper describes a micro total analysis system for molecular analysis of Salmonella, a major food-borne pathogen. We developed a centrifugal microfluidic device which integrated the three main steps of pathogen detection (DNA extraction, isothermal recombinase polymerase amplification (RPA), and detection) onto a single disc. A single laser diode was utilized for wireless control of valve actuation, cell lysis, and noncontact heating in the isothermal amplification step, thereby yielding a compact and miniaturized system. To achieve high detection sensitivity, rare cells in large volumes of phosphate-buffered saline (PBS) and milk samples were enriched before loading onto the disc by using antibody-coated magnetic beads. The entire procedure, from DNA extraction through to detection, was completed within 30 min in a fully automated fashion. The final detection was carried out using lateral flow strips by direct visual observation; the detection limit was 10 cfu/mL in PBS and 10² cfu/mL in milk. Our device allows rapid molecular diagnostic analysis and does not require specially trained personnel or expensive equipment. Thus, we expect that it would have an array of potential applications, including the detection of food-borne pathogens, environmental monitoring, and molecular diagnostics in resource-limited settings.
Low Carbon Technology Options for the Natural Gas ...
The ultimate goal of this task is to perform environmental and economic analysis of natural gas based power production technologies (different routes) to investigate and evaluate strategies for reducing emissions from the power sector. It is a broad research area. Initially, the research will be focused on preliminary analyses of hydrogen-based power production technologies utilizing hydrogen fuel in large, heavy-duty gas turbines in integrated reformer combined cycle (IRCC) and integrated gasification combined cycle (IGCC) configurations for electric power generation. The research will be expanded step-by-step to include other advanced pre-combustion and post-combustion technologies (e.g., Net Power, a potentially transformative technology utilizing a high-efficiency CO2 conversion cycle (Allam cycle), and chemical looping) applied to natural gas, other fossil fuels (coal and heavy oil), and biomass/biofuel, based on findings. Screening analysis is already under development and data for the analysis are being processed. The immediate actions on this task include preliminary economic and environmental analysis of power production technologies applied to natural gas. Data for catalytic reforming technology to produce hydrogen from natural gas are being collected and compiled in Microsoft Excel. The model will be expanded for exploring and comparing various technology scenarios to meet our goal. The primary focus of this study is to: 1) understand the chemic
Approach to determine the diversity of Legionella species by nested PCR-DGGE in aquatic environments
Huang, Wen-Chien; Tsai, Hsin-Chi; Tao, Chi-Wei; Chen, Jung-Sheng; Shih, Yi-Jia; Kao, Po-Min; Huang, Tung-Yi; Hsu, Bing-Mu
2017-01-01
In this study, we describe a nested PCR-DGGE strategy to detect Legionella communities in river water samples. The nearly full-length 16S rRNA gene was amplified using a bacterial primer in the first step. The amplicons were then employed as DNA templates in a second PCR using a Legionella-specific primer. A third round of gene amplification was conducted to obtain PCR fragments suitable for DGGE analysis. The amplified genes were then resolved as DGGE bands of products obtained with primers specific for the diversity of Legionella species. The DGGE patterns thus offer potential for high-throughput preliminary determination of aquatic environmental Legionella species before sequencing. Comparative DNA sequence analysis of excised DGGE unique band patterns revealed the identity of the Legionella community members, including a reference profile with two pathogenic species of Legionella strains. In addition, only members of Legionella pneumophila and uncultured Legionella sp. were detected. The three-step nested PCR-DGGE approach is a useful method for studying the diversity of the Legionella community. The method is rapid and provides sequence information for phylogenetic analysis. PMID:28166249
Frew, Paula M; Macias, Wendy; Chan, Kayshin; Harding, Ashley C
2009-01-01
During the past two decades of the HIV/AIDS pandemic, several recruitment campaigns were designed to generate community involvement in preventive HIV vaccine clinical trials. These efforts utilized a blend of advertising and marketing strategies mixed with public relations and community education approaches to attract potential study participants to clinical trials (integrated marketing communications). Although more than 30,000 persons worldwide have participated in preventive HIV vaccine studies, no systematic analysis of recruitment campaigns exists. This content analysis study was conducted to examine several United States and Canadian recruitment campaigns for one of the largest-scale HIV vaccine trials to date (the "Step Study"). This study examined persuasive features consistent with the Elaboration Likelihood Model (ELM) including message content, personal relevance of HIV/AIDS and vaccine research, intended audiences, information sources, and other contextual features. The results indicated variation in messages and communication approaches with gay men more exclusively targeted in these regions. Racial/ethnic representations also differed by campaign. Most of the materials promote affective evaluation of the information through heuristic cueing. Implications for subsequent campaigns and research directions are discussed.
Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S
2015-08-07
A method is proposed for the rapid development of a short, analytical cation-exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include pH, shape of the elution gradient, and length of the column. The most important parameter for development of a shorter method is found to be the choice of the shape of the elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear sigmoidal-shape gradient for analysis of charge heterogeneity in two different monoclonal antibody products. The use of this gradient not only decreases the run time of the method to 4 min, compared with more than 40 min for the conventional method, but also retains the resolution. Superiority of the phosphate gradient over the sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool is suggested. Copyright © 2015 Elsevier B.V. All rights reserved.
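The gradient-shape idea can be sketched by contrasting a logistic (sigmoidal) %B profile with a linear ramp over a 4-min window; the midpoint and steepness below are hypothetical tuning knobs, not the published method settings.

```python
# Sigmoidal versus linear elution gradient over a short 4-min window.
# Midpoint and steepness are illustrative parameters.
import numpy as np

t = np.linspace(0, 4, 81)                        # minutes
linear = 100 * t / t[-1]                         # conventional linear ramp (%B)
midpoint, steepness = 2.0, 3.0
sigmoid = 100 / (1 + np.exp(-steepness * (t - midpoint)))

# The sigmoid keeps a shallow slope where the charge variants elute, which
# preserves resolution, while moving quickly through the rest of the window.
print(np.column_stack([t, linear, sigmoid])[::20].round(1))
```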
Exploring patient satisfaction predictors in relation to a theoretical model.
Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil
2013-01-01
The aim is to describe patients' care-quality perceptions and satisfaction and to explore potential patient satisfaction predictors, namely person-related conditions, external objective care conditions, and patients' perception of actual care received ("PR"), in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from Patients' Perspective; Sense of Coherence; Big Five personality traits; and the Emotional Stress Reaction Questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistical significance to the model. Patients rated both quality-of-care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
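The three-step modelling strategy corresponds to a standard sequential (hierarchical) regression; a sketch with statsmodels follows, with variable names that mirror the blocks described above but entirely synthetic data.

```python
# Sequential (hierarchical) regression: add predictor blocks step by step and
# track the incremental R-squared. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 373
df = pd.DataFrame({
    "esrq": rng.normal(0, 1, n),                 # emotional stress reaction (outcome)
    "person": rng.normal(0, 1, n),               # person-related block (e.g., SOC, traits)
    "external": rng.normal(0, 1, n),             # external objective care conditions
    "pr": rng.normal(0, 1, n),                   # perception of actual care received
})

blocks = ["esrq ~ person",
          "esrq ~ person + external",
          "esrq ~ person + external + pr"]
r2_prev = 0.0
for i, formula in enumerate(blocks, 1):
    fit = smf.ols(formula, data=df).fit()
    print(f"step {i}: R2 = {fit.rsquared:.3f} (delta = {fit.rsquared - r2_prev:.3f})")
    r2_prev = fit.rsquared
```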
Are Hong Kong and Taiwan stepping-stones for invasive species to the mainland of China?
Lu, Jianbo; Li, Shao-Peng; Wu, Yujia; Jiang, Lin
2018-02-01
Understanding the origins and introduction pathways of invasive species is a fundamental issue for invasion biology, which is necessary for predicting and preventing future invasion. Once an invasive species is established in a new location, this location could serve as a stepping-stone for further invasions. However, such a "stepping-stone" effect has not been widely investigated. Using the published literature and records, we compiled the first recorded locations of 127 top invasive species in China. Our study showed that the most common landing spots of these invasive species were Hong Kong (22 species) and Taiwan (20 species), which together accounted for one-third of the invasive species in China. Our analysis revealed that invasive species in mainland China were more likely to have been transported from Hong Kong than from Macau, a neighboring region with a similar area and colonial history. Similarly, more invasive species first landed on Taiwan than on Hainan, a nearby island sharing similar climate conditions. Together, our findings indicate that Hong Kong and Taiwan are the most important stepping-stones for invasive species to the mainland of China, and suggest that the increasing trade through China's coastal ports constitutes a potential risk for the spread of more invasive species. We expect these coastal port regions to remain stepping-stones for invasive species to the mainland of China, and improved biosecurity is needed there now.
Adolescent pedometer protocols: examining reactivity, tampering and participants' perceptions.
Scott, Joseph John; Morgan, Philip James; Plotnikoff, Ronald Cyril; Trost, Stewart Graeme; Lubans, David Revalds
2014-01-01
The aim of this study was to investigate adolescents' potential reactivity and tampering while wearing pedometers by comparing different monitoring protocols to accelerometer output. The sample included adolescents (N = 123, age range = 14-15 years) from three secondary schools in New South Wales, Australia. Schools were randomised to one of the three pedometer monitoring protocols: (i) daily sealed (DS) pedometer group, (ii) unsealed (US) pedometer group or (iii) weekly sealed (WS) pedometer group. Participants wore pedometers (Yamax Digi-Walker CW700, Yamax Corporation, Kumamoto City, Japan) and accelerometers (Actigraph GT3X+, Pensacola, USA) simultaneously for seven days. Repeated measures analysis of variance was used to examine potential reactivity. Bivariate correlations between step counts and accelerometer output were calculated to explore potential tampering. The correlation between accelerometer output and pedometer steps/day was strongest among participants in the WS group (r = 0.82, P ≤ 0.001), compared to the US (r = 0.63, P ≤ 0.001) and DS (r = 0.16, P = 0.324) groups. The DS (P ≤ 0.001) and US (P = 0.003), but not the WS (P = 0.891), groups showed evidence of reactivity. The results suggest that reactivity and tampering does occur in adolescents and contrary to existing research, pedometer monitoring protocols may influence participant behaviour.
Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.
2016-01-01
Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
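A minimal sketch of the bootstrap SNR-CI idea follows, assuming a trials-by-time matrix, an a-priori component window, and a synthetic inclusion criterion; none of these settings come from the published method.

```python
# Bootstrap SNR confidence interval for subject exclusion: resample trials
# with replacement, compute an SNR per bootstrap average, and exclude the
# subject if the lower CI bound misses a criterion. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
trials = rng.normal(0, 1, (120, 500))            # 120 trials x 500 time points
trials[:, 200:260] += 1.5                        # injected "component" window

def snr(avg):
    sig, base = slice(200, 260), slice(0, 100)   # component and baseline windows
    return np.ptp(avg[sig]) / avg[base].std()

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(trials), len(trials))
    boot.append(snr(trials[idx].mean(axis=0)))

snr_lb = np.percentile(boot, 2.5)                # lower bound of the SNR-CI
print("include subject" if snr_lb >= 2.0 else "exclude subject",
      f"(SNR_LB = {snr_lb:.2f})")
```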
Analysis of smear in high-resolution remote sensing satellites
NASA Astrophysics Data System (ADS)
Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud
2016-10-01
High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking and velocity-mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object requires the image motion velocity to be strictly synchronized with the velocity of the charge packet transfer (line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, resulting in a mismatch between the image velocity and the CCD's line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch combined with clocking smear on the modulation transfer function (MTF) is investigated using Matlab simulation. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the MTFsmear by 32% at the Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to achieve the requirement of MTFsmear ≥ 0.95, for TDI steps of 16 and 64, the allowable roll angles are 13.7° and 6.85°, and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
Physical activity cut-offs and risk factors for preventing child obesity in Japan.
Minematsu, Kazuo; Kawabuchi, Ryosuke; Okazaki, Hiromi; Tomita, Hiroyuki; Tobina, Takuro; Tanigawa, Takeshi; Tsunawake, Noriaki
2015-01-01
There are no official recommendations on physical activity levels or step counts for preventing and improving child obesity in Japan. Three hundred and two Japanese children aged 9-12 years were recruited and wore 3-D speed sensors. Subjects were divided into two groups using the criteria for child obesity in Japan. Body composition was measured by bioelectrical impedance analysis. A physical fitness test was done to evaluate physical strength. Twenty-four-hour total steps, energy expenditure, and metabolic equivalents (MET) from Monday to Sunday were consecutively measured. The cut-offs for steps and physical activity level for preventing child obesity were evaluated on receiver operating characteristic curves. Daily-life-related risk factors for child obesity were assessed by logistic regression analysis. In both sexes, body size measures (bodyweight, body mass index, fat mass, and percentage body fat) in the obese group were significantly higher than in the normal group, but age and height were not different (P < 0.001). Aerobic power, running speed, and explosive strength in the obese group were inferior to those in the normal group (P < 0.001). More than 40 min of 4-MET exercise, defined as moderate-vigorous exercise, and 11,000 steps per day are essential to prevent child obesity. Additionally, >2 h TV viewing per day is a significant risk factor for child obesity (OR, 3.43; 95%CI: 1.27-9.31). Cut-offs for physical activity and potential risk factors for child obesity have been identified. Recommendations for changes to daily lifestyle for school-aged Japanese children are given. © 2014 Japan Pediatric Society.
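Deriving a step-count cut-off from an ROC curve is commonly done with the Youden index; the sketch below uses simulated data, not the study's measurements, and scikit-learn's roc_curve.

```python
# ROC-based cut-off selection via the Youden index on simulated step counts.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(6)
steps = np.r_[rng.normal(13000, 3000, 240),      # normal-weight children (synthetic)
              rng.normal(9500, 3000, 62)]        # obese children (synthetic)
obese = np.r_[np.zeros(240), np.ones(62)]

# Fewer steps imply higher obesity risk, so score with the negated count.
fpr, tpr, thr = roc_curve(obese, -steps)
best = np.argmax(tpr - fpr)                      # Youden's J = sensitivity + specificity - 1
print(f"cut-off ~ {-thr[best]:.0f} steps/day")
```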
Tri-Texts: A Potential Next Step for Paired Texts
ERIC Educational Resources Information Center
Ciecierski, Lisa M.; Bintz, William P.
2018-01-01
This article presents the concept of tri-texts as a potential next step from paired texts following a collaborative inquiry with fifth-grade students. Paired texts are two texts intertextually connected, whereas tri-texts are three texts connected this way. The authors begin the article with a short literature review highlighting some of the…
Water pollution risk associated with natural gas extraction from the Marcellus Shale.
Rozell, Daniel J; Reaven, Sheldon J
2012-08-01
In recent years, shale gas formations have become economically viable through the use of horizontal drilling and hydraulic fracturing. These techniques carry potential environmental risk due to their high water use and substantial risk for water pollution. Using probability bounds analysis, we assessed the likelihood of water contamination from natural gas extraction in the Marcellus Shale. Probability bounds analysis is well suited when data are sparse and parameters highly uncertain. The study model identified five pathways of water contamination: transportation spills, well casing leaks, leaks through fractured rock, drilling site discharge, and wastewater disposal. Probability boxes were generated for each pathway. The potential contamination risk and epistemic uncertainty associated with hydraulic fracturing wastewater disposal was several orders of magnitude larger than the other pathways. Even in a best-case scenario, it was very likely that an individual well would release at least 200 m³ of contaminated fluids. Because the total number of wells in the Marcellus Shale region could range into the tens of thousands, this substantial potential risk suggested that additional steps be taken to reduce the potential for contaminated fluid leaks. To reduce the considerable epistemic uncertainty, more data should be collected on the ability of industrial and municipal wastewater treatment facilities to remove contaminants from used hydraulic fracturing fluid. © 2012 Society for Risk Analysis.
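Probability bounds analysis proper requires dedicated p-box machinery; the toy interval sketch below conveys only its core idea, that poorly known pathway quantities can be carried as bounds rather than point values. All numbers are invented for illustration.

```python
# Interval core of the bounding idea: when each pathway's leak volume is
# known only within bounds, the total is bounded by summing the endpoints.
pathways = {  # lower/upper bounds on contaminated fluid release (m3), hypothetical
    "transport_spill": (0.0, 30.0),
    "casing_leak": (0.0, 20.0),
    "fracture_leak": (0.0, 5.0),
    "site_discharge": (1.0, 50.0),
    "wastewater_disposal": (200.0, 2000.0),
}
lo = sum(v[0] for v in pathways.values())
hi = sum(v[1] for v in pathways.values())
print(f"total release bounded by [{lo}, {hi}] m3 per well")
```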
Text-based Analytics for Biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah
The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
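One step of such a pipeline, article-relevance classification, can be sketched with a TF-IDF plus logistic-regression model; the tiny labeled corpus below is invented, and a real system would use far richer NLP features than this.

```python
# Toy relevance classifier: TF-IDF features and logistic regression flag
# articles as biosurveillance-relevant. Corpus and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["Officials report a cluster of avian influenza cases near poultry farms",
        "Local team wins the regional championship after extra time",
        "Hospital admissions spike as novel respiratory illness spreads",
        "New art exhibit opens downtown this weekend"]
labels = [1, 0, 1, 0]                            # 1 = relevant to biosurveillance

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["Veterinary agency confirms new H5N1 detections in wild birds"]))
```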
NASA Astrophysics Data System (ADS)
Dalrymple, Odesma Onika
Undergraduate engineering institutions are currently seeking to improve recruiting practices and to retain engineering majors particularly by addressing what many studies document as a major challenge of poor instruction. There is an undisputed need for instructional practices that motivate students in addition to facilitating the transfer of learning beyond the classroom. Reverse engineering and product dissection, more broadly termed Disassemble/Analyze/Assemble (DAA) activities, have shown potential to address these concerns, based on the reviews of students and professors alike. DAA activities involve the systematic deconstruction of an artifact, the subsequent analysis and possible reconstruction of its components for the purpose of understanding the embodied fundamental concepts, design principles and developmental processes. These activities have been part of regular industry practice for some time; however, the systematic analysis of their benefits for learning and instruction is a relatively recent phenomenon. A number of studies have provided highly descriptive accounts of curricula and possible outcomes of DAA activities; but, relatively few have compared participants doing DAA activities to a control group doing more traditional activities. In this respect, two quasi-experiments were conducted as part of a first-year engineering laboratory, and it was hypothesized that students who engaged in the DAA activity would be more motivated and would demonstrate higher frequencies of transfer than the control. A DAA activity that required students to disassemble a single-use camera and analyze its components to discover how it works was compared to a step-by-step laboratory activity in the first experiment and a lecture method of instruction in the second experiment. In both experiments, over forty percent of the students that engaged in the DAA activity demonstrated the ability to transfer the knowledge gained about the functions of the camera's components and their interconnectedness and describe an approach for modifying the camera that involved the adaptation of a current mechanism to add new functionality. This exhibition of transfer was significantly greater than the frequency of transfer yielded by the comparative traditional activities. In addition, the post laboratory surveys indicated that the DAA activities elicited significantly higher levels of motivation than the step-by-step laboratory and the direct instructional method.
Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.
2016-04-08
Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.
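A toy one-dimensional analogue of the discrete deposition-diffusion analysis: steady-state adatom densities on N terrace sites between two steps, with deposition flux F, hop rate h, and a reduced attachment rate at the step edges standing in for the attachment barrier. Parameters are illustrative, not the paper's, and the full 2D kinked-step model is far richer.

```python
# Steady state of a 1-D discrete deposition-diffusion chain between two steps:
# h*(rho[i-1] - 2*rho[i] + rho[i+1]) + F = 0 in the interior, with boundary
# sites exchanging with the step at the reduced rate k_att (barrier).
import numpy as np

N, F, h = 50, 1e-3, 1.0
k_att = 0.1 * h                                  # attachment rate lowered by a step barrier

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -2.0 * h
    if i > 0:
        A[i, i - 1] = h
    if i < N - 1:
        A[i, i + 1] = h
A[0, 0] = -(h + k_att)                           # edge sites: one terrace hop + step attachment
A[-1, -1] = -(h + k_att)

rho = np.linalg.solve(A, -F * np.ones(N))
# The edge density and attachment flux k_att*rho[0] relate to the kinetic
# coefficient; a large edge density signals an attachment-limited (less
# absorbing) step, which connects to step permeability.
print(f"edge density {rho[0]:.4f}, mid-terrace density {rho[N // 2]:.4f}")
```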
Mitigation of Manhole Events Caused by Secondary Cable Failure
NASA Astrophysics Data System (ADS)
Zhang, Lili
"Manhole event" refers to a range of phenomena, such as smokers, fires and explosions which occur on underground electrical infrastructure, primarily in major cities. The most common cause of manhole events is decomposition of secondary cable initiated by an electric fault. The work presented in this thesis addresses various aspects related to the evolution and mitigation of the manhole events caused by secondary cable insulation failure. Manhole events develop as a result of thermal decomposition of organic materials present in the cable duct and manholes. Polymer characterization techniques are applied to intensively study the materials properties as related to manhole events, mainly the thermal decomposition behaviors of the polymers present in the cable duct. Though evolved gas analysis, the combustible gases have been quantitatively identified. Based on analysis and knowledge of field conditions, manhole events is divided into at least two classes, those in which exothermic chemical reactions dominate and those in which electrical energy dominates. The more common form of manhole event is driven by air flow down the duct. Numerical modeling of smolder propagation in the cable duct demonstrated that limiting air flow is effective in reducing the generation rate of combustible gas, in other words, limiting manhole events to relatively minor "smokers". Besides manhole events, another by-product of secondary cable insulation breakdown is stray voltage. The danger to personnel due to stray voltage is mostly caused by the 'step potential'. The amplitude of step potential as a result of various types of insulation defects is calculated using Finite Element Analysis (FEA) program.
LaPlante, Kerry L; Rybak, Michael J; Tsuji, Brian; Lodise, Thomas P; Kaatz, Glenn W
2007-04-01
The potential for resistance development in Streptococcus pneumoniae secondary to exposure to gatifloxacin, gemifloxacin, levofloxacin, and moxifloxacin at various levels was examined at high inoculum (10^8.5 to 10^9 CFU/ml) over 96 h in an in vitro pharmacodynamic (PD) model using two fluoroquinolone-susceptible isolates. The pharmacokinetics of each drug was simulated to provide a range of free areas under the concentration-time curves (fAUC) that correlated with various fluoroquinolone doses. Potential first- (parC and parE) and second-step (gyrA and gyrB) mutations in isolates with raised MICs were identified by sequence analysis. PD models simulating fAUC/MICs of 51 and
NASA Astrophysics Data System (ADS)
Truong, Thanh N.; Stefanovich, Eugene V.
1997-05-01
We present a study of micro-solvation of the Cl anion by water clusters of up to seven molecules using a perturbative Monte Carlo approach with a hybrid HF/MM potential. In this approach, perturbation theory was used to avoid performing full SCF calculations at every Monte Carlo step. In this study, the anion was treated quantum mechanically at the HF/6-31G* level of theory, while interactions between solvent waters were represented by the TIP3P potential force field. Analysis of the solvent-induced dipole moment of the ion indicates that the Cl anion resides most of the time on the surface of the clusters. The accuracy of the perturbative MC approach is also discussed.
Prokop, Zbyněk; Nečasová, Anežka; Klánová, Jana; Čupr, Pavel
2016-03-01
A novel approach was developed for rapid assessment of bioavailability and potential mobility of contaminants in soil. The response of the same test organism to the organic extract, water extract, and solid phase of soil was recorded and compared. This approach was designed to give an initial estimate of the total organic toxicity (response to the organic extractable fraction), as well as the mobile (response to water extract) and bioavailable fractions (response to solid phase) of soil samples. Eighteen soil samples with different levels of pollution and content of organic carbon were selected to validate the novel three-step ecotoxicological evaluation approach. All samples were chemically analysed for priority contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), hexachlorocyclohexane (HCH), and dichlorodiphenyltrichloroethane (DDT). The ecotoxicological evaluation involved determination of the toxicity of the organic, mobile, and bioavailable fractions of soil to the test organism, the bacterium Bacillus cereus. We found a good correlation between the chemical analysis and the toxicity of the organic extract. The low toxicity of water extracts indicated low water solubility, and thus low potential mobility, of the toxic contaminants present in the soil samples. The toxicity of the bioavailable fraction was significantly greater than the toxicity of the water-soluble (mobile) fraction of the contaminants, as deduced from comparing untreated samples and water extracts. The bioavailability of the contaminants decreased with increasing concentrations of organic carbon in the evaluated soil samples. In conclusion, the three-step ecotoxicological evaluation utilised in this study can give quick insight into soil contamination in the context of the bioavailability and mobility of the contaminants present. This information can be useful for hazard identification and risk assessment of soil-associated contaminants. Graphical Abstract: New three-step ecotoxicological evaluation using the same organism.
Evaluation method of leachate leaking from carcass burial site
NASA Astrophysics Data System (ADS)
Park, S.; Kim, H.; Lee, M.; Lee, K.; Kim, S.; Kim, M.; Kim, H.; Kim, T.; Han, J.
2012-12-01
More than 150,000 cattle carcasses and 3,140,000 pig carcasses were buried across Korea because of the 2010 outbreak of foot and mouth disease (FMD). Various disposal techniques, such as incineration, composting, rendering, and burial, have been developed and applied to dispose of animal carcasses effectively. Since a large number of carcasses had to be disposed of within a short period to prevent the spread of the FMD virus, most of the carcasses were disposed of by mass burial. However, long-term management and monitoring of leachate discharges are required because mass burial can cause soil and groundwater contamination. In this study, we used key parameters related to major components of leachate, such as NH4-N, NO3-N, Cl-, E. coli, and electrical conductivity, as potential leachate contamination indicators to determine leachate leakage from a site. We monitored 300 monitoring wells, both within burial sites and 5 m away from burial sites, to identify leachate leaking from the sites. The average concentrations of NH4-N in the 300 monitoring wells within burial sites and 5 m away were 2,593 mg/L and 733 mg/L, respectively. Of the 300 monitoring wells, 24% showed values higher than 10 mg/L NH4-N, 100 mg/L Cl-, and 800 μS/cm electrical conductivity. From this study, we set up a four-step guideline to evaluate leachate leakage: step 1, high potential of leachate leakage; step 2, moderate potential of leachate leakage; step 3, low potential of leachate leakage; and step 4, no leachate leakage. On the basis of these results, we safely relocated 34 leaking burial sites to other places, and continued monitoring of the wells is necessary for environmental protection and human health.
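A minimal sketch of a threshold-based screening step follows, using the indicator cutoffs quoted in the abstract (NH4-N > 10 mg/L, Cl- > 100 mg/L, EC > 800 μS/cm). How the study actually combined the indicators into its four steps is not specified, so the simple exceedance count below is our assumption.

```python
# Hedged sketch: classify a well into one of the four leakage-potential steps
# by counting how many indicator thresholds are exceeded. The combination
# rule is an assumption; only the thresholds come from the abstract.
def leakage_step(nh4_n_mg_l, cl_mg_l, ec_us_cm):
    exceeded = sum([nh4_n_mg_l > 10, cl_mg_l > 100, ec_us_cm > 800])
    return {3: "step 1: high potential of leakage",
            2: "step 2: moderate potential of leakage",
            1: "step 3: low potential of leakage",
            0: "step 4: no leachate leakage"}[exceeded]

print(leakage_step(nh4_n_mg_l=25.0, cl_mg_l=140.0, ec_us_cm=950.0))
```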
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
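For readers unfamiliar with failure modes and effects analysis, the sketch below shows the arithmetic typically behind such risk priority numbers: each process step is scored for severity, occurrence, and detectability, and the products are accumulated across steps. The step names and scores are invented for illustration and are not taken from this study.

```python
# FMEA arithmetic: RPN = severity x occurrence x detectability per step,
# summed over all process steps. All names and scores are fabricated.
manual_steps = [
    ("label tube",         8, 4, 5),
    ("pipette serum",      9, 5, 6),
    ("read agglutination", 9, 6, 7),
]
automated_steps = [
    ("load sample",        8, 2, 2),
    ("scan barcode",       6, 1, 1),
]

def total_rpn(steps):
    return sum(sev * occ * det for _, sev, occ, det in steps)

print("manual RPN:", total_rpn(manual_steps))
print("automated RPN:", total_rpn(automated_steps))
```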
NASA Astrophysics Data System (ADS)
Marković, Zoran S.; Mentus, Slavko V.; Dimitrić Marković, Jasmina M.
2009-12-01
Antioxidative properties of the naturally occurring flavon-3-ol fisetin were examined by both cyclic voltammetry and quantum-chemical calculations. The three voltammetrically detectable consecutive steps reflected the fisetin molecular structure: the catecholic structural unit in ring B and the C3-OH and C7-OH groups in rings C and A. Oxidation potential values, used as a quantitative parameter of oxidation capability, indicated the good antioxidative properties of this molecule. Oxidation of the C3'C4' dihydroxy moiety on the B ring occurred first, at the lowest positive potentials. The first oxidation step induced fast intramolecular transformations in which the C3 hydroxy group disappeared, and the product of this transformation participated in the second oxidation step. The highest oxidation potential was attributed to the oxidation of the C7 hydroxy group. The structural and electronic features of fisetin were investigated at the B3LYP/6-311++G** level of theory. In particular, interest was focused on the C3'-OH and C4'-OH sites in the B ring and on the C3-OH site in the C ring. The calculated bond dissociation enthalpy (BDE) values for all OH sites of fisetin clearly indicated the importance of the B ring and the C3'- and C4'-OH groups. The importance of keto-enol tautomerism has also been considered. The analysis also included the Mulliken spin density distribution for the radicals formed after H removal at each OH site. The results showed higher BDE values for the C7-OH and C3-OH sites.
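As a reminder of the quantity being ranked, the O-H bond dissociation enthalpy is conventionally defined from the enthalpies of the parent phenol, the phenoxyl radical, and the hydrogen atom; this is the standard textbook expression, not a formula quoted from the paper:

```latex
\mathrm{BDE}(\mathrm{O{-}H}) \;=\; H_{298}(\mathrm{ArO}^{\bullet}) \;+\; H_{298}(\mathrm{H}^{\bullet}) \;-\; H_{298}(\mathrm{ArOH})
```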
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart; Bao, Xiaoqi; Lindsey, Cameron; Kutzer, Thomas; Salazar, Eduardo
2018-03-01
The return of samples to Earth in future missions would require protection of our planet from the risk of bringing back uncontrolled biological materials with the samples. This protection would require "breaking the chain of contact" (BTC), where any returned material reaching Earth for further analysis would have to be sealed inside a container with extremely high confidence. Therefore, the acquired samples would need to be contained while destroying any potential biological materials that may contaminate the external surface of the container. A novel process that could be used to contain returning samples has been developed and demonstrated at quarter scale. The process consists of brazing using non-contact induction heating that synchronously separates, seams, seals, and sterilizes (S4) the container. Brazing involves melting at temperatures above 500°C, and this level of heating assures sterilization of the exposed areas, since all carbon bonds (namely, organic materials) are broken at this temperature. The mechanism consists of a double-wall container with inner and outer shells having Earth-clean interior surfaces. The process consists of two steps. In Step 1, the double-wall container halves are fabricated and brazed (equivalent to production on Earth); Step 2 is the S4 process, equivalent to execution on orbit around Mars. In a potential future mission, the double-wall container would be split into two halves and prepared on Earth. The potential on-orbit execution would consist of inserting the orbiting sample (OS) container into one of the halves, mating it to the other half, and brazing. The latest results of this effort are described and discussed in this manuscript.
Koli, Sunil H; Mohite, Bhavana V; Suryawanshi, Rahul K; Borase, Hemant P; Patil, Satish V
2018-05-01
The development of safe and eco-friendly methods for metal nanoparticle synthesis is in increasing demand, owing to the emerging environmental and biological harms of the hazardous chemicals used in existing nanosynthesis methods. The present investigation reports a rapid, one-step, eco-friendly, green approach for the formation of nanosized silver particles (AgNPs) using extracellular, non-toxic, colored fungal metabolites (Monascus pigments, MPs). The formation of nanosized silver particles utilizing Monascus pigments was confirmed after exposure of the reaction mixture to sunlight, visually by color change, and further established by spectrophotometric analysis. The size, shape, and topography of the synthesized MPs-AgNPs were characterized using different microscopic and spectroscopic techniques, i.e., FE-SEM, HR-TEM, and DLS. The average size of the MPs-AgNPs was found to be 10-40 nm, with a spherical shape, and the particles were highly stable and well dispersed in solution. HR-TEM and XRD confirmed the crystalline nature of the MPs-AgNPs. The biocidal potential of the MPs-AgNPs was evaluated against three bacterial pathogens, Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus, and it was observed that the MPs-AgNPs significantly inhibited the growth of all three. Anti-biofilm activity of the MPs-AgNPs was recorded against antibiotic-resistant P. aeruginosa. In addition, colorimetric metal sensing using MPs-AgNPs was studied; among the metals tested, selective Hg2+ sensing at micromolar concentrations was observed. In conclusion, this is a rapid (within 12-15 min), one-step, environmentally friendly method for the synthesis of AgNPs, and the synthesized MPs-AgNPs could be used as potential antibacterial agents against antibiotic-resistant bacterial pathogens.
Mastrandrea, F; Pecora, S; Scatena, C; Cadario, G
2005-11-01
Medical statistics may help improve research by strengthening the design of studies and identifying the optimal method for the analysis of results. Nevertheless, it can sometimes be misemployed, undermining its potential benefit. The pathogenesis of allergic diseases is recognized to be systemic, but global initiatives such as the GINA and ARIA documents define allergic asthma and rhinitis as organ diseases; such an asymmetrical view raises a set of known and unknown confounders that could influence the quality of evidence-based decision-making (topical symptomatic therapeutic interventions versus systemic pathogenetic interventions). This article presents the first scoring system for the assessment of atopic dermatitis lesions developed in the allergy area. A four-step severity score (FSSS) was chosen in agreement with the scores developed for asthma and rhinitis in global initiatives, to avoid further differences in evaluating the severity of allergic diseases. The FSSS relates each step to the objective signs of the SCORAD and rates the disease course as intermittent or persistent. A dedicated electronic program has also been framed to allow quick and simple simultaneous evaluation of the SCORAD Index (Section I) and the FSSS (Section II); the program furthermore foresees a third section, named ESAS (Extra Skin Allergic Signs) (Section III), in which it is possible to check whether organs other than the skin are involved in the allergic inflammation. The potential limitations generated by misemployment of medical statistics in clinical trials designed to establish the benefits of specific immunotherapy for allergic diseases are also discussed extensively.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
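As a sketch of the quantitative analysis the digest mentions, the snippet below combines basic-event failure probabilities through AND/OR logic gates under an independence assumption; the two-gate tree is a made-up example, not one from the document.

```python
# Fault-tree arithmetic with independent basic events:
# AND gate: all inputs must fail; OR gate: any input failing suffices.
def and_gate(*p):
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

p_power_loss = and_gate(0.01, 0.02)          # both supplies fail
p_top_event = or_gate(p_power_loss, 0.005)   # or the controller fails
print(p_top_event)
```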
Masi, Marco; Moeini, Seyed Arash; Boari, Angela; Cimmino, Alessio; Vurro, Maurizio; Evidente, Antonio
2018-07-01
Cavoxin is a tetrasubstituted phytotoxic chalcone and cavoxone is the corresponding chroman-4-one, both produced in vitro by Phoma cava, a fungus isolated from chestnut. Cavoxin showed biofungicide potential against fungal species responsible for food moulding and therefore has potential to be incorporated into biopolymers to generate 'intelligent food packaging'. To reach this objective, large-scale production of cavoxin by P. cava fermentation needs to be optimized, and a rapid and efficient method for the analysis of cavoxin, as well as cavoxone, in fungal culture filtrates and the corresponding organic extracts is the first experimental step. Thus, an HPLC method was developed and applied to quantify cavoxin and cavoxone production under two different fungal culture conditions. The analysis proved that cavoxin production in stirred culture filtrates is significantly higher than in static ones.
Feasibility and Utility of Lexical Analysis for Occupational Health Text.
Harber, Philip; Leroy, Gondy
2017-06-01
Assess feasibility and potential utility of natural language processing (NLP) for storing and analyzing occupational health data. Basic NLP lexical analysis methods were applied to 89,000 Mine Safety and Health Administration (MSHA) free text records. Steps included tokenization, term and co-occurrence counts, term annotation, and identifying exposure-health effect relationships. Presence of terms in the Unified Medical Language System (UMLS) was assessed. The methods efficiently demonstrated common exposures, health effects, and exposure-injury relationships. Many workplace terms are not present in UMLS or map inaccurately. Use of free text rather than narrowly defined numerically coded fields is feasible, flexible, and efficient. It has potential to encourage workers and clinicians to provide more data and to support automated knowledge creation. The lexical method used is easily generalizable to other areas. The UMLS vocabularies should be enhanced to be relevant to occupational health.
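The snippet below illustrates, on fabricated injury narratives rather than MSHA records, the elementary lexical steps the study applied: tokenization, term counting, and term co-occurrence counting.

```python
# Basic lexical analysis on toy free-text records: naive tokenization,
# term counts, and within-record term co-occurrence counts.
from collections import Counter
from itertools import combinations

records = [
    "miner struck by falling rock injured shoulder",
    "worker slipped on wet step injured knee",
    "rock fall caused head injury",
]

term_counts, cooccur = Counter(), Counter()
for rec in records:
    tokens = rec.lower().split()      # whitespace tokenizer (deliberately naive)
    term_counts.update(tokens)
    cooccur.update(combinations(sorted(set(tokens)), 2))

print(term_counts.most_common(3))
print(cooccur.most_common(3))
```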
Kimura, Yuka; Ishibashi, Yasuyuki; Tsuda, Eiichi; Yamamoto, Yuji; Hayashi, Yoshimitsu; Sato, Shuichi
2012-03-01
In badminton, the knee opposite the racket-hand side can suffer anterior cruciate ligament (ACL) injury during single-leg landing after an overhead stroke, and most such injuries occur on the backhand side of the rear court. Comparing lower limb biomechanics during single-leg landing after an overhead stroke between the forehand-side and backhand-side court may help explain the different injury rates depending on court position. The hypothesis was that knee kinematics and kinetics during single-leg landing after an overhead stroke following back-stepping differ between the forehand-side and backhand-side court. Controlled laboratory study. Hip, knee, and ankle joint kinematic and knee kinetic data were collected for 17 right-handed female college badminton players using a 3-dimensional motion analysis system. Subjects performed single-left-leg landings after an overhead stroke following left and right back-stepping. The kinematic and kinetic data of the left lower extremity during landing were measured and compared between left and right back-steps. Hip flexion and abduction and knee valgus at initial contact, hip and knee flexion and knee valgus at maximum knee flexion, and the maximum knee valgus moment were significantly larger for the left back-step than for the right back-step (p<0.05). Significant differences in joint kinematics and kinetics of the lower extremity during single-leg landing after an overhead stroke were observed between back-step directions. The increased knee valgus angle and moment following back-stepping to the backhand side might be related to the higher incidence of ACL injury during single-leg landing after an overhead stroke.
Implementing Immediate Postpartum Long-Acting Reversible Contraception Programs.
Hofler, Lisa G; Cordes, Sarah; Cwiak, Carrie A; Goedken, Peggy; Jamieson, Denise J; Kottke, Melissa
2017-01-01
To understand the most important steps required to implement immediate postpartum long-acting reversible contraception (LARC) programs in different Georgia hospitals and the barriers to implementing such a program. This was a qualitative study. We interviewed 32 key personnel from 10 Georgia hospitals working to establish immediate postpartum LARC programs. Data were analyzed using directed qualitative content analysis principles. We used the Stages of Implementation to organize participant-identified key steps for immediate postpartum LARC into an implementation guide. We compared this guide to hospitals' implementation experiences. At the completion of the study, LARC was available for immediate postpartum placement at 7 of 10 study hospitals. Participants identified common themes for the implementation experience: team member identification and ongoing communication, payer preparedness challenges, interdependent department-specific tasks, and piloting with continuing improvements. Participants expressed a need for anticipatory guidance throughout the process. Key first steps to immediate postpartum LARC program implementation were identifying project champions, creating an implementation team that included all relevant departments, obtaining financial reassurance, and ensuring hospital administration awareness of the project. Potential barriers included lack of knowledge about immediate postpartum LARC, financial concerns, and competing clinical and administrative priorities. Hospitals that were successful at implementing immediate postpartum LARC programs did so by prioritizing clear communication and multidisciplinary teamwork. Although the implementation guide reflects a comprehensive assessment of the steps to implementing immediate postpartum LARC programs, not all hospitals required every step to succeed. Hospital teams report that implementing immediate postpartum LARC programs involves multiple departments and a number of important steps to consider. A stage-based approach to implementation, and a standardized guide detailing these steps, may provide the necessary structure for the complex process of implementing immediate postpartum LARC programs in the hospital setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostini, Federica; Abedi, Ali; Suzuki, Yasumitsu
The decomposition of electronic and nuclear motion presented in Abedi et al. [Phys. Rev. Lett. 105, 123002 (2010)] yields a time-dependent potential that drives the nuclear motion and fully accounts for the coupling to the electronic subsystem. Here, we show that propagation of an ensemble of independent classical nuclear trajectories on this exact potential yields dynamics that are essentially indistinguishable from the exact quantum dynamics for a model non-adiabatic charge transfer problem. We point out the importance of step and bump features in the exact potential that are critical in obtaining the correct splitting of the quasiclassical nuclear wave packet in space after it passes through an avoided crossing between two Born-Oppenheimer surfaces and analyze their structure. Finally, an analysis of the exact potentials in the context of trajectory surface hopping is presented, including preliminary investigations of velocity-adjustment and the force-induced decoherence effect.
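To make the quasiclassical procedure concrete, here is a minimal sketch: an ensemble of independent classical trajectories propagated with velocity Verlet on a fixed toy double-well potential standing in for the exact (in reality time-dependent) potential. The potential, mass, and initial sampling are all invented for illustration.

```python
# Ensemble of independent classical trajectories, velocity Verlet integration.
# V(x) = x^4 - x^2 is a toy stand-in for the exact time-dependent potential.
import numpy as np

def force(x):                        # -dV/dx
    return -(4 * x**3 - 2 * x)

rng = np.random.default_rng(0)
x = rng.normal(-0.7, 0.05, 500)      # sample an initial "wave packet"
v = rng.normal(0.7, 0.15, 500)       # spread in energy -> ensemble splitting
dt, m = 0.01, 1.0

for _ in range(2000):
    a = force(x) / m
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + force(x) / m) * dt

# Without dissipation this fraction fluctuates in time; it illustrates the
# splitting of the ensemble across the barrier at x = 0.
print("fraction currently past x=0:", (x > 0).mean())
```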
Nagashima, Hiroaki; Watari, Akiko; Shinoda, Yasuharu; Okamoto, Hiroshi; Takuma, Shinya
2013-12-01
This case study describes the application of Quality by Design elements to the process of culturing Chinese hamster ovary cells in the production of a monoclonal antibody. All steps in the cell culture process and all process parameters in each step were identified by using a cause-and-effect diagram. Prospective risk assessment using failure mode and effects analysis identified the following four potential critical process parameters in the production culture step: initial viable cell density, culture duration, pH, and temperature. These parameters and lot-to-lot variability in raw material were then evaluated by process characterization utilizing a design of experiments approach consisting of a face-centered central composite design integrated with a full factorial design. Process characterization was conducted using a scaled down model that had been qualified by comparison with large-scale production data. Multivariate regression analysis was used to establish statistical prediction models for performance indicators and quality attributes; with these, we constructed contour plots and conducted Monte Carlo simulation to clarify the design space. The statistical analyses, especially for raw materials, identified set point values, which were most robust with respect to the lot-to-lot variability of raw materials while keeping the product quality within the acceptance criteria. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
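As a sketch of the design-of-experiments machinery mentioned, the snippet below builds a face-centered central composite design (axial points on the faces, alpha = 1) in coded units for the four candidate critical process parameters; the run structure is generic, and the study's actual design may have differed.

```python
# Face-centered central composite design in coded units (-1, 0, +1):
# full factorial corners + axial points on the faces + replicated centers.
from itertools import product

factors = ["initial VCD", "duration", "pH", "temperature"]
factorial = list(product([-1, 1], repeat=len(factors)))        # 2^4 corners
face_points = [tuple(a if j == i else 0 for j in range(len(factors)))
               for i in range(len(factors)) for a in (-1, 1)]  # 8 face points
center = [(0, 0, 0, 0)] * 3                                    # center replicates

design = factorial + face_points + center
print(len(design), "runs; first run:", dict(zip(factors, design[0])))
```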
Choi, Woo June; Pepple, Kathryn L; Wang, Ruikang K
2018-05-24
In preclinical vision research, cell grading in small animal models is essential for the quantitative evaluation of intraocular inflammation. Here, we present a new and practical optical coherence tomography (OCT) image analysis method for the automated detection and counting of aqueous cells in the anterior chamber (AC) of a rodent model of uveitis. Anterior segment OCT (AS-OCT) images are acquired with a 100 kHz swept-source OCT (SS-OCT) system. The proposed method consists of two steps. In the first step, we first despeckle and binarize each OCT image. After removing AS structures in the binary image, we then apply area thresholding to isolate cell-like objects. Potential cell candidates are selected based on their best fit to roundness. The second step performs the cell counting within the whole AC, in which additional cell tracking analysis is conducted on the successive OCT images to eliminate redundancy in cell counting. Finally, 3-D cell grading using the proposed method is demonstrated in longitudinal OCT imaging of a mouse model of anterior uveitis in vivo. Rendering of anterior segment (orange) of mouse eye and automatically counted anterior chamber cells (green). Inset is a top view of the rendering, showing the cell distribution across the anterior chamber.
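A minimal sketch of the per-frame candidate-selection step follows: despeckle, binarize, area-threshold, then keep blobs close to round (circularity = 4*pi*A/P^2). It uses OpenCV as a stand-in; the removal of anterior-segment structures is omitted, and all parameter values are guesses rather than the paper's settings.

```python
# Detect round, cell-sized blobs in a single OCT frame (illustrative only).
import cv2
import numpy as np

def cell_candidates(frame_u8, min_area=5, max_area=200, min_circ=0.7):
    smooth = cv2.medianBlur(frame_u8, 3)                      # despeckle
    _, binary = cv2.threshold(smooth, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cells = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if not (min_area <= area <= max_area) or perim == 0:
            continue
        if 4 * np.pi * area / perim**2 >= min_circ:           # roundness test
            (x, y), _ = cv2.minEnclosingCircle(c)
            cells.append((x, y))
    return cells

frame = (np.random.rand(256, 256) * 255).astype(np.uint8)     # stand-in image
print(len(cell_candidates(frame)))
```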
NASA Astrophysics Data System (ADS)
Rao, Lang; Cai, Bo; Yu, Xiao-Lei; Guo, Shi-Shang; Liu, Wei; Zhao, Xing-Zhong
2015-05-01
3D microelectrodes are fabricated in one step in a microfluidic droplet separator by filling conductive silver paste into PDMS microchambers. The advantages of 3D silver paste electrodes in improving droplet sorting accuracy are systematically demonstrated by theoretical calculation, numerical simulation, and experimental validation. The use of 3D electrodes also helps to decrease the droplet sorting voltage, guaranteeing that cells encapsulated in droplets undergoing chip-based sorting are at better metabolic status for further potential cellular assays. Finally, target droplets containing single cells are selectively sorted out from the others by an appropriate electric pulse. This method provides a simple and inexpensive alternative for fabricating 3D electrodes, and we expect that our 3D electrode-integrated microfluidic droplet separator platform can be widely used in single-cell operation and analysis.
Development of binding assays in microfabricated picoliter vials: an assay for biotin.
Grosvenor, A L; Feltus, A; Conover, R C; Daunert, S; Anderson, K W
2000-06-01
A homogeneous binding assay for the detection of biotin in picoliter vials was developed using the photoprotein aequorin as the label. The binding assay was based on the competition of free biotin with biotinylated aequorin (AEQ-biotin) for avidin. A sequential protocol was used, and modification of the assay to reduce the number of steps was examined. Results showed that detection limits on the order of 10^-14 mol of biotin were possible. Reducing the number of steps provided similar detection limits but only if the amount of avidin used was decreased. These binding assays based on picoliter volumes have potential applications in a variety of fields, including microanalysis and single-cell analysis, where the amount of sample is limited. In addition, these assays are suitable for the high-throughput screening of biopharmaceuticals.
One-step fabrication of porous GaN crystal membrane and its application in energy storage
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wang, Shouzhi; Shao, Yongliang; Wu, Yongzhong; Sun, Changlong; Huo, Qin; Zhang, Baoguo; Hu, Haixiao; Hao, Xiaopeng
2017-03-01
Single-crystal gallium nitride (GaN) membranes have great potential for a variety of applications. However, fabrication of single-crystalline GaN membranes remains a challenge owing to the material's chemical inertness and mechanical hardness. This study prepares large-area, free-standing, single-crystalline porous GaN membranes using a one-step high-temperature annealing technique for the first time. A promising separation model is proposed through a comprehensive study combining thermodynamic analysis and experiments. The porous GaN crystal membrane is processed into supercapacitors, which exhibit stable cycling life, high-rate capability, and ultrahigh power density, completing a proof-of-concept demonstration of a new energy storage application. Our results advance the study of GaN crystal membranes into a new stage related to electrochemical energy storage applications.
"Nano" Scale Biosignatures and the Search for Extraterrestrial Life
NASA Technical Reports Server (NTRS)
Oehler, D. Z.; Robert, F.; Meibom, A.; Mostefaoui, S.; Selo, M.; Walter, M. R.; Sugitani, K.; Allwood, A.; Mimura, K.; Gibson, E. K.
2008-01-01
A critical step in the search for remnants of potential life forms on other planets lies in our ability to recognize indigenous fragments of ancient microbes preserved in some of Earth's oldest rocks. To this end, we are building a database of nano-scale chemical and morphological characteristics of some of Earth's oldest organic microfossils. We are primarily using the new technology of Nano-Secondary ion mass spectrometry (NanoSIMS) which provides in-situ, nano-scale elemental analysis of trace quantities of organic residues. The initial step was to characterize element composition of well-preserved, organic microfossils from the late Proterozoic (0.8 Ga) Bitter Springs Formation of Australia. Results from that work provide morphologic detail and nitrogen/carbon ratios that appear to reflect the well-established biological origin of these 0.8 Ga fossils.
Chi, Xiaowei; Tang, Yongan; Zeng, Xiangqun
2016-10-20
Water and oxygen are ubiquitously present under ambient conditions. This work studies the redox chemistry of oxygen, trace water, and a volatile organic compound (VOC), acetaldehyde, in a hydrophobic and aprotic ionic liquid (IL), 1-butyl-1-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide ([Bmpy][NTf2]), by cyclic voltammetry and potential step methods. One-electron oxygen reduction leads to superoxide radical formation in the IL. Trace water in the IL acts as a protic species that reacts with the superoxide radical. Acetaldehyde is a stronger protic species than water in reacting with the superoxide radical. The presence of trace water in the IL was also demonstrated to facilitate the electro-oxidation of acetaldehyde, with a mechanism similar to that in aqueous solutions. A multiple-step coupling reaction mechanism between water, the superoxide radical, and acetaldehyde is described. The unique redox chemistry of acetaldehyde in [Bmpy][NTf2] in the presence of oxygen and trace water can be controlled by the electrochemical potential. By controlling the electrode potential windows, several methods, including cyclic voltammetry and potential step methods (single-, double-, and triple-potential step methods), were established for the quantification of acetaldehyde. Instead of treating water and oxygen as frustrating interferents in ILs, we found that oxygen and trace water chemistry in [Bmpy][NTf2] can be utilized to develop innovative electrochemical methods for the electroanalysis of acetaldehyde.
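For context on what a potential-step measurement records, the sketch below evaluates the diffusion-limited current transient after a single potential step using the Cottrell equation, i(t) = nFAC*sqrt(D/(pi*t)). The parameter values are generic placeholders, not those of the [Bmpy][NTf2]/acetaldehyde system.

```python
# Cottrell transient for a single potential step (illustrative parameters).
import numpy as np

n, F, A = 1, 96485.0, 0.02e-4       # electrons, C/mol, electrode area (m^2)
C, D = 1.0, 5e-10                   # bulk conc. (mol/m^3), diffusivity (m^2/s)

t = np.linspace(0.01, 1.0, 100)     # time after the step (s)
i_forward = n * F * A * C * np.sqrt(D / (np.pi * t))
print("current at t = 0.1 s: %.3e A" % np.interp(0.1, t, i_forward))
```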
A new model for care population management.
Williams, Jeni
2013-03-01
Steps toward building a population management model of care should include: identifying the population that would be cared for through a population management initiative; conducting an actuarial analysis for this population, reviewing historical utilization and cost data and projecting changes in utilization; investing in data infrastructure that supports the exchange of data among providers and with payers; determining potential exposure to downside risk and organizational capacity to assume this risk; experimenting with payment models and care delivery approaches; and hiring care coordinators to manage care for high-risk patients.
High energy behavior of gravity at large N
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canfora, F.
2006-09-15
A first step in the analysis of the renormalizability of gravity at large N is carried out. Suitable resummations of planar diagrams give rise to a theory in which there is only a finite number of primitive, superficially divergent, Feynman diagrams. The mechanism is similar to the one which makes the 3D Gross-Neveu model renormalizable at large N. The connections with gravitational confinement and Kawai-Lewellen-Tye relations are briefly analyzed. Some potential problems in fulfilling the Zinn-Justin equations are pointed out.
EM61-MK2 Response of Standard Munition Items
2008-10-06
metallic objects in the vicinity of the sensor. The decay of this induced field is sensed by monitoring the current in a wire-loop receiver coil in four...response are selected and marked as potential metal targets. This initial list of anomalies is used as input to an analysis step that selects anomalies... metal objects are left un-remediated but we are confident that the objects responsible for the anomaly have a smaller response than any of our targets
NASA Astrophysics Data System (ADS)
Kim, Hyunhong; Choi, Seong-Hyeon; Kim, Mijung; Park, Jang-Ung; Bae, Joonwon; Park, Jongnam
2017-11-01
Owing to the recent push toward one-dimensional nanomaterials, in this study we report a seed-mediated synthetic strategy for copper nanowire (Cu NW) production involving thermal decomposition of metal-surfactant complexes in an organic medium. Ultra-long Cu NWs with a high aspect ratio and uniform diameter were obtained by separating the nucleation and growth steps. The underlying mechanism of nanowire formation was investigated, and the properties of the obtained Cu NWs were characterized using diverse analysis techniques. The performance of the resulting Cu NWs as transparent electrodes was demonstrated for potential applications. This article provides information on both a new synthetic pathway and a potential use of Cu NWs.
Chakravorty, Dhruva K.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2009-01-01
Hybrid quantum/classical molecular dynamics simulations of the two proton transfer reactions catalyzed by ketosteroid isomerase are presented. The potential energy surfaces for the proton transfer reactions are described with the empirical valence bond method. Nuclear quantum effects of the transferring hydrogen increase the rates by a factor of ~8, and dynamical barrier recrossings decrease the rates by a factor of 3–4. For both proton transfer reactions, the donor-acceptor distance decreases substantially at the transition state. The carboxylate group of the Asp38 side chain, which serves as the proton acceptor and donor in the first and second steps, respectively, rotates significantly between the two proton transfer reactions. The hydrogen bonding interactions within the active site are consistent with the hydrogen bonding of both Asp99 and Tyr14 to the substrate. The simulations suggest that a hydrogen bond between Asp99 and the substrate is present from the beginning of the first proton transfer step, whereas the hydrogen bond between Tyr14 and the substrate is virtually absent in the first part of this step but forms nearly concurrently with the formation of the transition state. Both hydrogen bonds are present throughout the second proton transfer step until partial dissociation of the product. The hydrogen bond between Tyr14 and Tyr55 is present throughout both proton transfer steps. The active site residues are more mobile during the first step than during the second step. The van der Waals interaction energy between the substrate and the enzyme remains virtually constant along the reaction pathway, but the electrostatic interaction energy is significantly stronger for the dienolate intermediate than for the reactant and product. Mobile loop regions distal to the active site exhibit significant structural rearrangements and, in some cases, qualitative changes in the electrostatic potential during the catalytic reaction. These results suggest that relatively small conformational changes of the enzyme active site and substrate strengthen the hydrogen bonds that stabilize the intermediate, thereby facilitating the proton transfer reactions. Moreover, the conformational and electrostatic changes associated with these reactions are not limited to the active site but rather extend throughout the entire enzyme. PMID:19799395
Correlation between Gas Bubble Formation and Hydrogen Evolution Reaction Kinetics at Nanoelectrodes.
Chen, Qianjin; Luo, Long
2018-04-17
We report the correlation between H2 gas bubble formation potential and hydrogen evolution reaction (HER) activity for Au and Pt nanodisk electrodes (NEs). Microkinetic models were formulated to obtain the HER kinetic information for individual Au and Pt NEs. We found that the rate-determining steps for the HER at Au and Pt NEs were the Volmer step and the Heyrovsky step, respectively. More interestingly, the standard rate constant (k0) of the rate-determining step was found to vary over 2 orders of magnitude for the same type of NEs. The observed variations indicate HER activity heterogeneity at the nanoscale. Furthermore, we discovered a linear relationship between bubble formation potential (Ebubble) and log(k0) with a slope of 125 mV/decade for both Au and Pt NEs. As log(k0) increases, Ebubble shifts linearly to more positive potentials, meaning that NEs with higher HER activities form H2 bubbles at less negative potentials. Our theoretical model suggests that such a linear relationship is caused by the similar critical bubble formation condition for Au and Pt NEs of varied sizes. Our results have potential implications for using gas bubble formation to evaluate the HER activity distribution of nanoparticles in an ensemble.
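Written out, the reported 125 mV/decade trend corresponds to an empirical linear law of the form below, where the intercept E0 and the reference rate constant are fitting constants we introduce for notation and that are not specified in the abstract:

```latex
E_{\mathrm{bubble}} \;=\; E_{0} \;+\; (0.125\ \mathrm{V})\,\log_{10}\!\left(k^{0}/k^{0}_{\mathrm{ref}}\right)
```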
Steps toward improving ethical evaluation in health technology assessment: a proposed framework.
Assasi, Nazila; Tarride, Jean-Eric; O'Reilly, Daria; Schwartz, Lisa
2016-06-06
While evaluation of ethical aspects in health technology assessment (HTA) has gained much attention in recent years, the integration of ethics in HTA practice still presents many challenges. In response to the increasing demand for expanding HTA methodology to include ethical issues more systematically, this article reports on a multi-stage study aimed at constructing a framework for improving the integration of ethics in HTA. The framework was developed through the following phases: 1) a systematic review and content analysis of guidance documents for ethics in HTA; 2) identification of factors influencing the integration of ethical considerations in HTA; 3) preparation of an action-oriented framework based on the key elements of the existing guidance documents and identified barriers to and facilitators of their implementation; and 4) expert consultation and revision of the framework. The proposed framework consists of three main components: an algorithmic flowchart, which exhibits the different steps of an ethical inquiry throughout the HTA process, including defining the objectives and scope of the evaluation, stakeholder analysis, assessing organizational capacity, framing ethical evaluation questions, ethical analysis, deliberation, and knowledge translation; a stepwise guide, which focuses on the task objectives and potential questions that need to be addressed at each step; and a list of commonly recommended or used tools to help facilitate the evaluation process. The proposed framework can be used to support and promote good practice in the integration of ethics into HTA. However, further validation of the framework through case studies and expert consultation is required to establish its utility for HTA practice.
Functional genomics identifies specific vulnerabilities in PTEN-deficient breast cancer.
Tang, Yew Chung; Ho, Szu-Chi; Tan, Elisabeth; Ng, Alvin Wei Tian; McPherson, John R; Goh, Germaine Yen Lin; Teh, Bin Tean; Bard, Frederic; Rozen, Steven G
2018-03-22
Phosphatase and tensin homolog (PTEN) is one of the most frequently inactivated tumor suppressors in breast cancer. While PTEN itself is not considered a druggable target, PTEN synthetic-sick or synthetic-lethal (PTEN-SSL) genes are potential drug targets in PTEN-deficient breast cancers. Therefore, with the aim of identifying potential targets for precision breast cancer therapy, we sought to discover PTEN-SSL genes present in a broad spectrum of breast cancers. To discover broad-spectrum PTEN-SSL genes in breast cancer, we used a multi-step approach that started with (1) a genome-wide short interfering RNA (siRNA) screen of ~ 21,000 genes in a pair of isogenic human mammary epithelial cell lines, followed by (2) a short hairpin RNA (shRNA) screen of ~ 1200 genes focused on hits from the first screen in a panel of 11 breast cancer cell lines; we then determined reproducibility of hits by (3) identification of overlaps between our results and reanalyzed data from 3 independent gene-essentiality screens, and finally, for selected candidate PTEN-SSL genes we (4) confirmed PTEN-SSL activity using either drug sensitivity experiments in a panel of 19 cell lines or mutual exclusivity analysis of publicly available pan-cancer somatic mutation data. The screens (steps 1 and 2) and the reproducibility analysis (step 3) identified six candidate broad-spectrum PTEN-SSL genes (PIK3CB, ADAMTS20, AP1M2, HMMR, STK11, and NUAK1). PIK3CB was previously identified as PTEN-SSL, while the other five genes represent novel PTEN-SSL candidates. Confirmation studies (step 4) provided additional evidence that NUAK1 and STK11 have PTEN-SSL patterns of activity. Consistent with PTEN-SSL status, inhibition of the NUAK1 protein kinase by the small molecule drug HTH-01-015 selectively impaired viability in multiple PTEN-deficient breast cancer cell lines, while mutations affecting STK11 and PTEN were largely mutually exclusive across large pan-cancer data sets. Six genes showed PTEN-SSL patterns of activity in a large proportion of PTEN-deficient breast cancer cell lines and are potential specific vulnerabilities in PTEN-deficient breast cancer. Furthermore, the NUAK1 PTEN-SSL vulnerability identified by RNA interference techniques can be recapitulated and exploited using the small molecule kinase inhibitor HTH-01-015. Thus, NUAK1 inhibition may be an effective strategy for precision treatment of PTEN-deficient breast tumors.
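As an illustration of the kind of mutual-exclusivity analysis used in step 4, the sketch below applies Fisher's exact test to a 2x2 table of tumors cross-classified by PTEN and STK11 mutation status. The counts are fabricated, and the one-sided alternative tests whether double mutants are under-represented, as mutual exclusivity would predict.

```python
# Mutual-exclusivity check via Fisher's exact test on invented counts.
from scipy.stats import fisher_exact

#                  STK11 mut   STK11 wt
table = [[3, 197],             # PTEN mut
         [57, 1743]]           # PTEN wt
odds_ratio, p = fisher_exact(table, alternative="less")
print(odds_ratio, p)
```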
Association rule mining in the US Vaccine Adverse Event Reporting System (VAERS).
Wei, Lai; Scott, John
2015-09-01
Spontaneous adverse event reporting systems are critical tools for monitoring the safety of licensed medical products. Commonly used signal detection algorithms identify disproportionate product-adverse event pairs and may not be sensitive to more complex potential signals. We sought to develop a computationally tractable multivariate data-mining approach to identify product-multiple adverse event associations. We describe an application of stepwise association rule mining (Step-ARM) to detect potential vaccine-symptom group associations in the US Vaccine Adverse Event Reporting System. Step-ARM identifies strong associations between one vaccine and one or more adverse events. To reduce the number of redundant association rules found by Step-ARM, we also propose a clustering method for the post-processing of association rules. In sample applications to a trivalent intradermal inactivated influenza virus vaccine and to measles, mumps, rubella, and varicella (MMRV) vaccine and in simulation studies, we find that Step-ARM can detect a variety of medically coherent potential vaccine-symptom group signals efficiently. In the MMRV example, Step-ARM appears to outperform univariate methods in detecting a known safety signal. Our approach is sensitive to potentially complex signals, which may be particularly important when monitoring novel medical countermeasure products such as pandemic influenza vaccines. The post-processing clustering algorithm improves the applicability of the approach as a screening method to identify patterns that may merit further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
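To show the primitive that Step-ARM builds on, the sketch below computes support and confidence for one-vaccine -> symptom-set association rules over a handful of fabricated reports; Step-ARM's stepwise search and the post-processing clustering are not reproduced here.

```python
# Support and confidence for a rule "vaccine -> symptom set" over toy reports.
reports = [
    {"vaccine": "MMRV", "symptoms": {"fever", "seizure"}},
    {"vaccine": "MMRV", "symptoms": {"fever", "rash"}},
    {"vaccine": "FLU3", "symptoms": {"injection site pain"}},
    {"vaccine": "MMRV", "symptoms": {"fever", "seizure"}},
]

def rule_stats(vaccine, symptom_set):
    with_vax = [r for r in reports if r["vaccine"] == vaccine]
    both = [r for r in with_vax if symptom_set <= r["symptoms"]]
    support = len(both) / len(reports)
    confidence = len(both) / len(with_vax) if with_vax else 0.0
    return support, confidence

print(rule_stats("MMRV", {"fever", "seizure"}))  # (0.5, 0.667)
```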
Preprocessing and Analysis of LC-MS-Based Proteomic Data
Tsai, Tsung-Heng; Wang, Minkun; Ressom, Habtom W.
2016-01-01
Liquid chromatography coupled with mass spectrometry (LC-MS) has been widely used for profiling protein expression levels. This chapter is focused on LC-MS data preprocessing, which is a crucial step in the analysis of LC-MS based proteomics. We provide a high-level overview, highlight associated challenges, and present a step-by-step example for analysis of data from LC-MS based untargeted proteomic study. Furthermore, key procedures and relevant issues with the subsequent analysis by multiple reaction monitoring (MRM) are discussed. PMID:26519169
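As a taste of what such preprocessing involves, the snippet below runs one typical step, peak detection, on a synthetic extracted-ion chromatogram; the signal shape and the scipy parameters are illustrative only, not the chapter's recommended settings.

```python
# Peak detection on a synthetic extracted-ion chromatogram.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 1000)                         # retention time (min)
eic = (np.exp(-(t - 3)**2 / 0.02) +                  # two Gaussian peaks
       0.6 * np.exp(-(t - 7)**2 / 0.05) +
       0.02 * np.random.default_rng(1).standard_normal(t.size))

peaks, _ = find_peaks(eic, height=0.2, prominence=0.2)
print("peak retention times:", t[peaks])
```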
Localization and recognition of traffic signs for automated vehicle control systems
NASA Astrophysics Data System (ADS)
Zadeh, Mahmoud M.; Kasvand, T.; Suen, Ching Y.
1998-01-01
We present a computer vision system for the detection and recognition of traffic signs. Such systems are required to assist drivers and to guide and control autonomous vehicles on roads and city streets. For experiments we use sequences of digitized photographs and off-line analysis. The system contains four stages. First, region segmentation based on color pixel classification, called SRSM, limits the search to regions of interest in the scene. Second, edge tracing finds parts of the outer edges of signs that are circular or straight, corresponding to the geometrical shapes of traffic signs. The third step is geometrical analysis of the outer edge and preliminary recognition of each candidate region, which may be a potential traffic sign. The final recognition step uses color combinations within each region and model matching. This system may be used for recognition of other types of objects, provided that the geometrical shape and color content remain reasonably constant. The method is reliable, easy to implement, and fast. This differs from the road sign recognition method in the PROMETHEUS project. The overall structure of the approach is sketched.
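The first two stages can be sketched with standard OpenCV primitives standing in for the paper's SRSM and edge-tracing steps: a hue-based mask for red sign pixels followed by circle detection. The thresholds and Hough parameters are guesses, not values from the paper.

```python
# Color-based segmentation of red regions, then circular-shape detection.
import cv2
import numpy as np

def red_sign_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two hue ranges
    mask = cv2.inRange(hsv, (0, 100, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 60), (180, 255, 255))
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=40, param1=100, param2=20,
                               minRadius=10, maxRadius=120)
    return [] if circles is None else circles[0].tolist()  # (x, y, r) triples

image = np.zeros((240, 320, 3), dtype=np.uint8)             # stand-in frame
cv2.circle(image, (160, 120), 40, (0, 0, 255), -1)          # fake red disc
print(red_sign_candidates(image))
```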
NASA Astrophysics Data System (ADS)
Xiao, H.; Ren, G.; Dong, Y.; Li, H.; Xiao, S.; Wu, B.; Jian, S.
2018-06-01
A numerical analysis of a GeO2-doped single-mode optical fiber with a multi-step index core toward stimulated Brillouin scattering (SBS) based dual-parameter sensing applications is proposed. Adjusting the parameters in the fiber design, higher-order acoustic modes are sufficiently enhanced, making the fiber feasible for discriminative measurements of temperature and strain in the meantime. Numerical simulations indicate that the Brillouin frequency shifts and peak SBS efficiencies are strongly dependent on the doping concentration and the thickness of low-index ring in the proposed fiber. With appropriate structural and optical parameters, this fiber could support two distinct acoustic modes with comparable peak SBS efficiencies and well-spaced Brillouin frequency shifts. The sensing characteristics contributed by the dual-peak feature in the Brillouin gain spectrum are explored. Calculated accuracies of temperature and strain in simultaneous measurements can be up to 0.64 °C and 15.4 με, respectively. The proposed fiber might have potential applications for long-haul distributed dual-parameter simultaneous measurements.
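The discrimination arithmetic such dual-peak sensors rely on is a linear 2x2 inversion: each Brillouin frequency shift responds linearly to temperature and strain, so measuring two well-spaced shifts lets both be recovered. The coefficients and measurements below are typical literature magnitudes chosen for illustration, not values from this paper.

```python
# Two-peak temperature/strain discrimination: solve K [dT, de]^T = dv.
import numpy as np

K = np.array([[1.20, 0.054],    # peak 1: MHz/degC, MHz/microstrain
              [0.90, 0.038]])   # peak 2
dv = np.array([17.4, 12.8])     # measured shift changes (MHz)

dT, de = np.linalg.solve(K, dv)
print("dT = %.1f degC, de = %.1f microstrain" % (dT, de))  # 10.0, 100.0
```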
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Based on basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance, even under low-light conditions.
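The model-fitting idea can be sketched generically: sample a sharpness metric at the initial position and two fitting positions, fit a curve in motor steps, and move toward its peak as the potential-focus position. The quadratic used below is a stand-in for the paper's optical defocus model, and all positions and scores are invented.

```python
# Estimate a potential-focus position from three sharpness samples.
import numpy as np

def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return float((gx**2 + gy**2).mean())     # simple gradient-contrast metric

# In practice the scores come from sharpness() on the three captured frames.
positions = np.array([0.0, 20.0, 40.0])      # motor steps (illustrative)
scores = np.array([0.42, 0.88, 0.61])

a, b, c = np.polyfit(positions, scores, 2)   # quadratic through the 3 samples
potential_focus = -b / (2 * a)               # vertex of the parabola
print("move motor to step %.1f" % potential_focus)
```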
Direct Sensor Orientation of a Land-Based Mobile Mapping System
Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua
2011-01-01
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This limitation is the major drawback due to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
Gu, Di; Shao, Nan; Zhu, Yanji; Wu, Hongjun; Wang, Baohui
2017-01-05
The STEP concept has been successfully demonstrated for driving chemical reactions by the combined use of solar heat and electricity, minimizing fossil energy input while maximizing the thermodynamic and kinetic rates of thermo- and electrochemical reactions. This pioneering investigation experimentally shows that the STEP concept can be adapted and adopted efficiently for the degradation of nitrobenzene. Employing theoretical calculation and temperature-dependent cyclic voltammetry, the degradation potential of nitrobenzene was found to decrease markedly with increasing temperature, while the current was greatly lifted. Compared with conventional electrochemical methods, high efficiency and a fast degradation rate were displayed, owing to the combined thermo- and electrochemical effects and the switch from indirect to direct electrochemical oxidation of nitrobenzene. A clear mechanism for nitrobenzene degradation by STEP is schematically proposed and discussed by combining thermo- and electrochemistry with the analysis of HPLC, UV-vis, and degradation data. This theory and experiment provide a pilot approach for the treatment of nitrobenzene wastewater with high efficiency, clean operation, and a low carbon footprint, without any input of energy and chemicals other than solar energy. Copyright © 2016 Elsevier B.V. All rights reserved.
Vinholes, Daniele Botelho; Assunção, Maria Cecília Formoso; Neutzling, Marilda Borges
2009-04-01
This study aimed to measure frequency of healthy eating habits and associated factors using the 10 Steps to Healthy Eating score proposed by the Ministry of Health in the adult population in Pelotas, Rio Grande do Sul State, Brazil. A cross-sectional population-based survey was conducted on a cluster sample of 3,136 adult residents in Pelotas. The frequency of each step to healthy eating was collected with a pre-coded questionnaire. Data analysis consisted of descriptive analysis, followed by bivariate analysis using the chi-square test. Only 1.1% of the population followed all the recommended steps. The average number of steps was six. Step four, salt intake, showed the highest frequency, while step nine, physical activity, showed the lowest. Knowledge of the population's eating habits and their distribution according to demographic and socioeconomic variables is important to guide local and national strategies to promote healthy eating habits and thus improve quality of life.
Basu, Amar S
2013-05-21
Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbor spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real-time analysis.
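The detection-and-linking pipeline described above maps naturally onto standard image-processing primitives. The following Python/OpenCV sketch illustrates the idea; the function names, thresholds, and kernel size are illustrative assumptions, not DMV's actual implementation.

    import cv2
    import numpy as np

    def detect_droplets(frame, background, min_area=50):
        # background subtraction followed by edge detection
        diff = cv2.absdiff(frame, background)
        edges = cv2.Canny(diff, 30, 90)
        # morphological close to obtain solid droplet masks
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        drops = []
        for c in contours:
            area = cv2.contourArea(c)
            if area >= min_area:  # small object removal
                m = cv2.moments(c)
                drops.append((m["m10"] / m["m00"], m["m01"] / m["m00"], area))
        return drops

    def link_frames(prev, curr, max_dist=20.0):
        # frame correlation: nearest-neighbor search with a matching radius
        links = []
        for j, (x, y, _) in enumerate(curr):
            d = [np.hypot(x - px, y - py) for px, py, _ in prev]
            if d and min(d) <= max_dist:
                links.append((int(np.argmin(d)), j))
        return links

A static background image could be estimated as the per-pixel median of the video stack, e.g. np.median(frames, axis=0).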
Language-Agnostic Reproducible Data Analysis Using Literate Programming.
Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa
2016-01-01
A modern biomedical research project can easily contain hundreds of analysis steps and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in reality reproducibility cannot be easily achieved. Literate programming is an approach to present computer programs to human readers. The code is rearranged to follow the logic of the program, and to explain that logic in a natural language. The code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators to the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid transport related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir.
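The "tangle" step that extracts executable code from a literate source can be sketched in a few lines of Python. The noweb-style chunk markers below are assumed purely for illustration; Lir's actual source format may differ (see the project repository for specifics).

    import re
    import sys

    CHUNK = re.compile(r"<<(?P<name>[^>]+)>>=\n(?P<body>.*?)\n@", re.S)

    def tangle(literate_text):
        # collect named code chunks; repeated names are concatenated in order
        chunks = {}
        for m in CHUNK.finditer(literate_text):
            chunks.setdefault(m.group("name"), []).append(m.group("body"))
        return {name: "\n".join(parts) for name, parts in chunks.items()}

    if __name__ == "__main__":
        for name, code in tangle(open(sys.argv[1]).read()).items():
            print("# chunk:", name)
            print(code)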
Morrissey, Karyn; Kinderman, Peter; Pontin, Eleanor; Tai, Sara; Schwannauer, Mathias
2016-08-01
In June 2011 the BBC Lab UK carried out a web-based survey on the causes of mental distress. The 'Stress Test' was launched on 'All in the Mind', a BBC Radio 4 programme, and the test's URL was publicised on radio and TV broadcasts and made available via BBC web pages and social media. Given the large amount of data created (over 32,800 participants, with corresponding diagnosis, demographic and socioeconomic characteristics), the dataset is potentially an important source of data for population-based research on depression and anxiety. However, as respondents self-selected to participate in the online survey, the survey may comprise a non-random sample: it may be that only individuals who listen to BBC Radio 4 and/or use its website participated. In this instance, using the Stress Test data for wider population-based research may create sample selection bias. Focusing on the depression component of the Stress Test, this paper presents an easy-to-use method, the Two Step Probit Selection Model, to detect and statistically correct selection bias in the Stress Test. Using this model, the paper did not find statistically significant selection on unobserved factors for participants of the Stress Test. That is, survey participants who accessed and completed an online survey are not systematically different from non-participants on the variables of substantive interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
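A minimal sketch of a two-step selection correction of this kind, assuming statsmodels and illustrative variable names (participated, Z, X, depressed); the classic Heckman construction with an inverse Mills ratio is used here as a stand-in for the paper's exact estimator.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    # Step 1: probit of participation; Z should contain at least one variable
    # that predicts participation but not the outcome (exclusion restriction)
    sel = sm.Probit(participated, sm.add_constant(Z)).fit(disp=0)
    xb = sel.fittedvalues                    # linear index Z'gamma
    imr = norm.pdf(xb) / norm.cdf(xb)        # inverse Mills ratio

    # Step 2: outcome equation for participants, augmented with the IMR;
    # a significant IMR coefficient signals selection on unobservables
    mask = participated == 1
    X_aug = sm.add_constant(np.column_stack([X[mask], imr[mask]]))
    print(sm.Probit(depressed[mask], X_aug).fit(disp=0).summary())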
Multivariate assessment of event-related potentials with the t-CWT method.
Bostanov, Vladimir
2015-11-05
Event-related brain potentials (ERPs) are usually assessed with univariate statistical tests although they are essentially multivariate objects. Brain-computer interface applications are a notable exception to this practice, because they are based on multivariate classification of single-trial ERPs. Multivariate ERP assessment can be facilitated by feature extraction methods. One such method is t-CWT, a mathematical-statistical algorithm based on the continuous wavelet transform (CWT) and Student's t-test. This article begins with a geometric primer on some basic concepts of multivariate statistics as applied to ERP assessment in general and to the t-CWT method in particular. Further, it presents for the first time a detailed, step-by-step, formal mathematical description of the t-CWT algorithm. A new multivariate outlier rejection procedure based on principal component analysis in the frequency domain is presented as an important pre-processing step. The MATLAB and GNU Octave implementation of t-CWT is also made publicly available for the first time as free and open source code. The method is demonstrated on some example ERP data obtained in a passive oddball paradigm. Finally, some conceptually novel applications of the multivariate approach in general and of the t-CWT method in particular are suggested and discussed. Hopefully, the publication of both the t-CWT source code and its underlying mathematical algorithm along with a didactic geometric introduction to some basic concepts of multivariate statistics would make t-CWT more accessible to both users and developers in the field of neuroscience research.
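In outline, the feature-extraction core of t-CWT can be approximated as: wavelet-transform each trial, t-test the two conditions at every (scale, time) point, and take extrema of the resulting t-map as feature coordinates. The sketch below uses PyWavelets and keeps only the single strongest point; the published algorithm is considerably more elaborate, including the PCA-based outlier rejection described above.

    import numpy as np
    import pywt
    from scipy.stats import ttest_ind

    def tcwt_features(trials_a, trials_b, scales=np.arange(1, 32)):
        # trials_*: (n_trials, n_samples) arrays for the two conditions
        def cwt_stack(trials):
            return np.stack([pywt.cwt(tr, scales, "mexh")[0] for tr in trials])
        ca = cwt_stack(trials_a)             # (trials, scales, time)
        cb = cwt_stack(trials_b)
        t, _ = ttest_ind(ca, cb, axis=0)     # t-value per (scale, time) point
        peak = np.unravel_index(np.argmax(np.abs(t)), t.shape)
        return t, peak                       # t-map and strongest difference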
How to Perform an Ethical Risk Analysis (eRA).
Hansson, Sven Ove
2018-02-26
Ethical analysis is often needed in the preparation of policy decisions on risk. A three-step method is proposed for performing an ethical risk analysis (eRA). In the first step, the people concerned are identified and categorized in terms of the distinct but compatible roles of being risk-exposed, a beneficiary, or a decisionmaker. In the second step, a more detailed classification of roles and role combinations is performed, and ethically problematic role combinations are identified. In the third step, further ethical deliberation takes place, with an emphasis on individual risk-benefit weighing, distributional analysis, rights analysis, and power analysis. Ethical issues pertaining to subsidiary risk roles, such as those of experts and journalists, are also treated in this phase. An eRA should supplement, not replace, a traditional risk analysis that puts emphasis on the probabilities and severities of undesirable events but does not cover ethical issues such as agency, interpersonal relationships, and justice. © 2018 Society for Risk Analysis.
Rahmati, Omid; Melesse, Assefa M
2016-10-15
Effective management and sustainable development of groundwater resources in arid and semi-arid environments require monitoring of groundwater quality and quantity. The aim of this paper is to develop a reasonable methodological framework for producing a drinking-water suitability map through geographic information systems, remote sensing and field surveys of the Andimeshk-Dezful area, Khozestan province, Iran, a semi-arid region. This study investigated the delineation of groundwater potential zones based on the Dempster-Shafer (DS) theory of evidence and evaluated its applicability for groundwater potentiality mapping. The study also analyzed the spatial distribution of groundwater nitrate concentration and produced the suitability map for drinking water. The study was carried out in the following steps: i) creation of maps of groundwater conditioning factors; ii) assessment of groundwater occurrence characteristics; iii) creation of a groundwater potentiality map (GPM) and model validation; iv) collection and chemical analysis of water samples; v) assessment of groundwater nitrate pollution; and vi) creation of a combined groundwater potentiality and quality map. The performance of the DS model was evaluated using the receiver operating characteristic (ROC) curve method and pumping test data to ensure its generalization ability; the GPM showed 87.76% accuracy. The detailed analysis of groundwater potentiality and quality revealed that the 'non-acceptable' areas cover about 1479 km² (60%). The study will provide significant information for groundwater management and exploitation in areas where groundwater is a major source of water and its exploration is critical to support drinking water needs. Copyright © 2016 Elsevier B.V. All rights reserved.
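Dempster's rule of combination, the core of the DS evidence modeling used here, is compact enough to state directly. The masses below are hypothetical; in the study, each conditioning-factor layer would supply its own mass function over the frame {potential, non-potential}.

    from itertools import product

    def combine(m1, m2):
        # Dempster's rule for mass functions keyed by frozenset focal elements
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb   # mass assigned to contradictory evidence
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    P, N = frozenset({"potential"}), frozenset({"non_potential"})
    slope = {P: 0.6, N: 0.1, P | N: 0.3}   # hypothetical slope-layer masses
    litho = {P: 0.5, N: 0.2, P | N: 0.3}   # hypothetical lithology masses
    print(combine(slope, litho))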
NASA Astrophysics Data System (ADS)
Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew
Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
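A minimal sketch of the colour-plus-intensity segmentation idea, assuming OpenCV; the channel weighting, normalisation, and structuring-element size are illustrative guesses rather than the paper's tuned pipeline, which additionally applies object-classification rules after the morphological processing.

    import cv2
    import numpy as np

    def segment_epidermis(bgr, k=5):
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)     # L*a*b* colourspace
        L, a, _ = cv2.split(lab)
        # crude colour normalisation: stretch the a* (stain-contrast) channel
        a_norm = cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX)
        # fuse enhanced colour information with general image intensity
        fused = cv2.addWeighted(a_norm, 0.7, 255 - L, 0.3, 0)
        _, mask = cv2.threshold(fused, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = np.ones((k, k), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)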
Scoops3D: software to analyze 3D slope stability throughout a digital landscape
Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.
2015-01-01
The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages install the compiled Scoops3D program, the GUI (Scoops3D-i), and associated documentation. Several Scoops3D examples, including all input and output files, are available as well. The source code is written in the Fortran 90 language and can be compiled to run on any computer operating system with an appropriate compiler.
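For reference, the fixed-point iteration at the heart of Bishop's simplified method looks like the sketch below, written per column in simplified 2D form; Scoops3D's 3D version additionally resolves each column's base inclination along the direction of sliding.

    import numpy as np

    def bishop_fs(W, alpha, c, phi, u, A, tol=1e-6, max_iter=100):
        # W: column weight, alpha: base inclination (rad), c: cohesion,
        # phi: friction angle (rad), u: pore pressure, A: base area
        fs = 1.0
        for _ in range(max_iter):
            m = np.cos(alpha) * (1.0 + np.tan(alpha) * np.tan(phi) / fs)
            fs_new = (np.sum((c * A + (W - u * A) * np.tan(phi)) / m)
                      / np.sum(W * np.sin(alpha)))
            if abs(fs_new - fs) < tol:
                break
            fs = fs_new
        return fs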
NASA Astrophysics Data System (ADS)
Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom
2016-04-01
Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA optimization.
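The partially supervised retraining loop reduces to: score each online trial's confidence, keep the trials above a threshold, and refit on the speller's own (naive) labels. A minimal scikit-learn sketch with assumed names; the study's confidence measures (ErrP score, posterior target score, or their hybrid) would plug into conf_scores.

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def retrain_on_confident_trials(X_new, naive_labels, conf_scores, thresh):
        # naive_labels come from speller output, so they are imperfect;
        # conf_scores may be ErrP-based, P300-posterior-based, or a hybrid
        keep = conf_scores >= thresh
        clf = LinearDiscriminantAnalysis()
        clf.fit(X_new[keep], naive_labels[keep])
        return clf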
On Electron-Positron Pair Production by a Spatially Inhomogeneous Electric Field
NASA Astrophysics Data System (ADS)
Chervyakov, A.; Kleinert, H.
2018-05-01
A detailed analysis of electron-positron pair creation from vacuum induced by a spatially non-uniform, static electric field is presented. A typical example is provided by the Sauter potential. For this potential, we derive analytic expressions for the vacuum decay and pair production rates, accounting for the entire range of spatial variation. In the limit of a sharp step, we recover the divergent result due to the singular electric field at the origin. The limit of a constant field reproduces the classical result of Euler, Heisenberg and Schwinger, if the latter is properly averaged over the width of the spatial variation. Pair production by the Sauter potential is described for different regimes, from weak to strong fields. For all these regimes, the locally constant-field rate is shown to be the upper limit.
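For orientation, the Sauter field and the corresponding potential energy of a charge are commonly written as

    \[ E(x) = \frac{E_0}{\cosh^2(x/L)}, \qquad U(x) = -\,e E_0 L \tanh(x/L), \]

so the asymptotic step height is \(\Delta U = 2 e E_0 L\): the sharp-step limit corresponds to \(L \to 0\), while \(L \to \infty\) recovers the constant-field regime with the Schwinger rate \(\sim \exp(-\pi m^2 / e E_0)\) in natural units (\(\hbar = c = 1\)).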
Geothermal Potential for China, Poland and Turkey with Financing Workbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J G
This collection of documents presents the results of assessments of the geothermal power potential in three countries: China, Poland, and Turkey. Also included is a Geothermal Financing Workbook, which is intended to provide a comprehensive package of information on financing, financing plans, financial analysis, and financial sources for smaller geothermal resource developers. All three countries are facing ever increasing demands for power in the coming decades, but each has some barriers to fully developing existing resources. For Poland and Turkey, it is important that legislation specific to geothermal resource development be enacted. For China, a crucial step is to develop more detailed and accurate estimates of resource potential. All three countries could benefit from the expertise of U.S. geothermal companies, and this collection of material provides crucial information for those interested companies.
Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.
2016-06-27
Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.
Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps
Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer; ...
2016-06-27
Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of Decomposition Functional Types (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. Lastly, these are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps
Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo
2016-01-01
Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
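A clustering analysis of the kind described might be sketched as follows, assuming a site-by-variable table; the variable set, scaling, and cluster count are illustrative choices, not the study's.

    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # site_table: pandas DataFrame with one row per site or grid cell;
    # column names here are hypothetical
    cols = ["annual_hr", "mean_tair", "precip", "soil_carbon"]
    X = StandardScaler().fit_transform(site_table[cols])
    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
    site_table["dft"] = km.labels_   # candidate Decomposition Functional Types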
Associations between physical activity and mental health among bariatric surgical candidates
King, Wendy C.; Kalarchian, Melissa A.; Steffen, Kristine J.; Wolfe, Bruce M.; Elder, Katherine A.; Mitchell, James E.
2013-01-01
Objective This study aimed to examine associations between physical activity (PA) and mental health among adults undergoing bariatric surgery. Methods Cross sectional analysis was conducted on pre-operative data of 850 adults with ≥ class 2 obesity. PA was measured with a step activity monitor; mean daily steps, active minutes, and high-cadence minutes (proxy for moderate-vigorous intensity PA) were determined. Mental health functioning, depressive symptoms and treatment for depression or anxiety were measured with the Medical Outcomes Study 36-item Short Form, Beck Depression Inventory, and a study-specific questionnaire, respectively. Logistic regression analyses tested associations between PA and mental health indicators, controlling for potential confounders. Receiver operative characteristic analysis determined PA thresholds that best differentiated odds of each mental health indicator. Results Each PA parameter was significantly (P<.05) associated with a decreased odds of depressive symptoms and/or treatment for depression or anxiety, but not with impaired mental health functioning. After controlling for sociodemographics and physical health, only associations with treatment for depression and anxiety remained statistically significant. PA thresholds that best differentiated those who had vs. had not recently received treatment for depression or anxiety were <191 active minutes/day, <4750 steps/day, and <8 high-cadence minutes/day. Utilizing high-cadence minutes, compared to active minutes or steps, yielded the highest classification accuracy. Conclusion Adults undergoing bariatric surgery who meet relatively low thresholds of PA (e.g., ≥ 8 high-cadence minutes/day, representative of approximately one hour/week of moderate-vigorous intensity PA) are less likely to have recently received treatment for depression or anxiety compared to less active counterparts. PMID:23332532
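The threshold-finding step can be reproduced with a standard ROC analysis; the sketch below selects the cut-point maximizing Youden's J, one common criterion, though the paper's classification-accuracy criterion may differ. Variable names are assumptions.

    import numpy as np
    from sklearn.metrics import roc_curve

    # treated: 1 if recently treated for depression/anxiety;
    # steps_per_day: pedometer-measured activity (negated so that lower
    # activity predicts treatment, matching the reported direction of effect)
    fpr, tpr, thresholds = roc_curve(treated, -steps_per_day)
    best = np.argmax(tpr - fpr)              # Youden's J statistic
    print("steps/day cut-point:", -thresholds[best])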
End-to-end automated microfluidic platform for synthetic biology: from design to functional analysis
Linshiz, Gregory; Jensen, Erik; Stawski, Nina; ...
2016-02-02
Synthetic biology aims to engineer biological systems for desired behaviors. The construction of these systems can be complex, often requiring genetic reprogramming, extensive de novo DNA synthesis, and functional screening. Here, we present a programmable, multipurpose microfluidic platform and associated software and apply the platform to major steps of the synthetic biology research cycle: design, construction, testing, and analysis. We show the platform’s capabilities for multiple automated DNA assembly methods, including a new method for Isothermal Hierarchical DNA Construction, and for Escherichia coli and Saccharomyces cerevisiae transformation. The platform enables the automated control of cellular growth, gene expression induction, and proteogenic and metabolic output analysis. Finally, taken together, we demonstrate the microfluidic platform’s potential to provide end-to-end solutions for synthetic biology research, from design to functional analysis.
Electrothermal Equivalent Three-Dimensional Finite-Element Model of a Single Neuron.
Cinelli, Ilaria; Destrade, Michel; Duffy, Maeve; McHugh, Peter
2018-06-01
We propose a novel approach for modelling the interdependence of electrical and mechanical phenomena in nervous cells, by using electrothermal equivalences in finite element (FE) analysis so that existing thermomechanical tools can be applied. First, the equivalence between electrical and thermal properties of the nerve materials is established, and results of a pure heat conduction analysis performed in Abaqus CAE Software 6.13-3 are validated with analytical solutions for a range of steady and transient conditions. This validation includes the definition of equivalent active membrane properties that enable prediction of the action potential. Then, as a step toward fully coupled models, electromechanical coupling is implemented through the definition of equivalent piezoelectric properties of the nerve membrane using the thermal expansion coefficient, enabling prediction of the mechanical response of the nerve to the action potential. Results of the coupled electromechanical model are validated with previously published experimental results of deformation for squid giant axon, crab nerve fibre, and garfish olfactory nerve fibre. A simplified coupled electromechanical modelling approach is established through an electrothermal equivalent FE model of a nervous cell for biomedical applications. One of the key findings is the mechanical characterization of the neural activity in a coupled electromechanical domain, which provides insights into the electromechanical behaviour of nervous cells, such as thinning of the membrane. This is a first step toward modelling three-dimensional electromechanical alteration induced by trauma at nerve bundle, tissue, and organ levels.
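The equivalence exploited here rests on the formal identity of the one-dimensional cable and heat-conduction equations; schematically (coefficients simplified, stated as the standard analogy rather than the paper's exact parameter set),

    \[ c_m \frac{\partial V}{\partial t} = \frac{a}{2\rho_i} \frac{\partial^2 V}{\partial x^2} - i_{\mathrm{ion}} \quad\longleftrightarrow\quad \rho c_p \frac{\partial T}{\partial t} = k \frac{\partial^2 T}{\partial x^2} - q, \]

with the mappings \(V \leftrightarrow T\), \(c_m \leftrightarrow \rho c_p\), \(a/2\rho_i \leftrightarrow k\) and \(i_{\mathrm{ion}} \leftrightarrow q\). This correspondence is what allows a thermomechanical FE solver to carry the electrical problem, with thermal expansion standing in for the piezoelectric coupling.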
Recent advances in ChIP-seq analysis: from quality management to whole-genome annotation.
Nakato, Ryuichiro; Shirahige, Katsuhiko
2017-03-01
Chromatin immunoprecipitation followed by sequencing (ChIP-seq) analysis can detect protein/DNA-binding and histone-modification sites across an entire genome. Recent advances in sequencing technologies and analyses enable us to compare hundreds of samples simultaneously; such large-scale analysis has potential to reveal the high-dimensional interrelationship level for regulatory elements and annotate novel functional genomic regions de novo. Because many experimental considerations are relevant to the choice of a method in a ChIP-seq analysis, the overall design and quality management of the experiment are of critical importance. This review offers guiding principles of computation and sample preparation for ChIP-seq analyses, highlighting the validity and limitations of the state-of-the-art procedures at each step. We also discuss the latest challenges of single-cell analysis that will encourage a new era in this field. © The Author 2016. Published by Oxford University Press.
Occhipinti, E; Colombini, Daniela
2011-01-01
When studying musculoskeletal disorders and their connection with working conditions (WMSDs), several factors of different natures (mechanical, organizational, psychophysical, individual) and their interrelationships have been considered important in general models for epidemiologic surveys and risk assessment and management. Hence the necessity of a "holistic" (that is to say complex, global, multifactorial and interdisciplinary) approach to MSD prevention, especially when establishing technical norms, guidelines and strategic plans of action at national or international level. On the other hand, considering the widespread presence of these factors and WMSDs in many working contexts, there is a great demand by OSH agencies and operators to develop "simple" tools for risk assessment and management, usable also by non-experts in both developed and developing countries. Both these needs are perfectly justified but are also to a certain extent in conflict. How can we address the problem, i.e., simplify complexity? METHODS AND CRITERIA: The proposals are based on two essential criteria: 1) Act on a step-by-step approach, using basic tools first and more complex tools only when necessary. 2) Take into account the complexity and the presence of multiple influencing factors at every step (even if with different degrees of in-depth analysis). The proposals are mainly developed within the framework of an IEA-WHO collaboration initiative for a "Toolkit for MSD prevention", but they are also derived from other converging issues (e.g., the ISO application document of the ISO 11228 series on manual handling). The proposals consider: 1) A Basic Step devoted to preliminary occupational hazard identification and priority check by operative "key enter" questions (at this step all potential hazards, including those influencing WMSDs, should be considered). This step can also be carried out by non-experts with limited training. 2) A First Step, focused on WMSD risk factors, consisting of a "quick assessment" and mainly addressed to identifying 3 possible conditions: acceptable/no consequences; high risk present/redesign urgently needed; or a more detailed analysis (via tools proposed in the Second Step) is necessary. This step can also be carried out by non-experts with only limited training. 3) A Second Step, where recognized tools (e.g., from international standards or guidelines) for estimating the risk of WMSDs are used as a consequence of the First Step outcome. Examples of such tools are "adaptations" of the Revised NIOSH Lifting Equation, Liberty Mutual Psychophysical Tables, OCRA Checklist, etc. These tools should be able to adequately take account of most of the influencing factors. For some particular working sectors (e.g., agriculture) these tools need to be specifically adapted. For particular working sectors a database could be envisaged where the most common tasks (with their "variants") are "intrinsically" evaluated by experts and could provide non-experts with the relevant knowledge to be applied to the specific work context. This step can be carried out only by persons with some specific training.
A mechanical energy analysis of gait initiation
NASA Technical Reports Server (NTRS)
Miller, C. A.; Verstraete, M. C.
1999-01-01
The analysis of gait initiation (the transient state between standing and walking) is an important diagnostic tool for studying pathologic gait and evaluating prosthetic devices. While past studies have quantified mechanical energy of the body during steady-state gait, to date no one has computed the mechanical energy of the body during gait initiation. In this study, gait initiation in seven normal male subjects was studied using a mechanical energy analysis to compute total body energy. The data showed three separate states: quiet standing, gait initiation, and steady-state gait. During gait initiation, the trends in the energy data for the individual segments were similar to those seen during steady-state gait (and in Winter DA, Quanbury AO, Reimer GD. Analysis of instantaneous energy of normal gait. J Biomech 1976;9:253-257), but diminished in amplitude. However, these amplitudes increased to those seen in steady state during the gait initiation event (GIE), with the greatest increase occurring in the second step due to the push-off of the foundation leg. The baseline level of mechanical energy was due to the potential energy of the individual segments, while the cyclic nature of the data was indicative of the kinetic energy of the particular leg in swing phase during that step. The data presented showed differences in energy trends during gait initiation from those of steady state, thereby demonstrating the importance of this event in the study of locomotion.
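Following Winter's formulation, the total-body mechanical energy referred to here is the sum over body segments of potential, translational-kinetic and rotational-kinetic terms:

    \[ E = \sum_i \left( m_i g h_i + \tfrac{1}{2} m_i v_i^2 + \tfrac{1}{2} I_i \omega_i^2 \right), \]

where \(m_i\), \(h_i\), \(v_i\), \(I_i\) and \(\omega_i\) are the mass, centre-of-mass height and speed, moment of inertia and angular velocity of segment \(i\).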
Uncovering Hidden Layers of Cell Cycle Regulation through Integrative Multi-omic Analysis
Aviner, Ranen; Shenoy, Anjana; Elroy-Stein, Orna; Geiger, Tamar
2015-01-01
Studying the complex relationship between transcription, translation and protein degradation is essential to our understanding of biological processes in health and disease. The limited correlations observed between mRNA and protein abundance suggest pervasive regulation of post-transcriptional steps and support the importance of profiling mRNA levels in parallel to protein synthesis and degradation rates. In this work, we applied an integrative multi-omic approach to study gene expression along the mammalian cell cycle through side-by-side analysis of mRNA, translation and protein levels. Our analysis sheds new light on the significant contribution of both protein synthesis and degradation to the variance in protein expression. Furthermore, we find that translation regulation plays an important role at S-phase, while progression through mitosis is predominantly controlled by changes in either mRNA levels or protein stability. Specific molecular functions are found to be co-regulated and share similar patterns of mRNA, translation and protein expression along the cell cycle. Notably, these include genes and entire pathways not previously implicated in cell cycle progression, demonstrating the potential of this approach to identify novel regulatory mechanisms beyond those revealed by traditional expression profiling. Through this three-level analysis, we characterize different mechanisms of gene expression, discover new cycling gene products and highlight the importance and utility of combining datasets generated using different techniques that monitor distinct steps of gene expression. PMID:26439921
A cross-sectional study of the relationship between parents' and children's physical activity.
Stearns, Jodie A; Rhodes, Ryan; Ball, Geoff D C; Boule, Normand; Veugelers, Paul J; Cutumisu, Nicoleta; Spence, John C
2016-10-28
Though parents' physical activity (PA) is thought to be a predictor of children's PA, findings have been mixed. The purpose of this study was to examine the relationship between pedometer-measured steps/day of parents and their children and potential moderators of this relationship. We also assessed the parent-child PA relationship as measured by questionnaires. Six hundred and twelve 7-8 year olds and one of their parents wore Steps Count (SC)-T2 pedometers for four consecutive days. Parents reported their PA from the last seven days and their child's usual PA. Hierarchical linear regressions were used to assess the parent-child PA relationships, controlling for covariates. Gender (parent, child), gender homogeneity, weight status (parent, child), weight status homogeneity, and socioeconomic status (SES) variables (parent education, household income, area-level SES) were tested as potential moderators of this relationship. Partial r's were used as an estimate of effect size. Parents' steps were significantly related to children's steps (r partial = .24). For every 1,000-step increase in parents' steps, the children took 260 additional steps. None of the tested interactions were found to moderate this relationship. Using questionnaires, a relatively smaller parent-child PA relationship was found (r partial = .14). Physically active parents tend to have physically active children. Interventions designed to get children moving more throughout the day could benefit from including a parent component. Future research should explore the mechanisms by which parents influence their children, and other parent attributes and styles as potential moderators.
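The core model is a regression of child steps on parent steps plus covariates, with moderation tested via interaction terms; a compact statsmodels sketch with assumed column names (df is a hypothetical one-row-per-dyad DataFrame, child_gender coded 0/1):

    import statsmodels.formula.api as smf

    base = smf.ols("child_steps ~ parent_steps + child_gender + income", df).fit()
    # slope of ~0.26 would correspond to ~260 child steps per 1,000 parent steps
    print(base.params["parent_steps"])

    moder = smf.ols("child_steps ~ parent_steps * child_gender + income", df).fit()
    print(moder.pvalues["parent_steps:child_gender"])   # moderation test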
2010-03-01
This report documents the work of the Mid-Range Rover Science Analysis Group (MRR-SAG), which was assigned to formulate a concept for a potential rover mission that could be launched to Mars in 2018. Based on programmatic and engineering considerations as of April 2009, our deliberations assumed that the potential mission would use the Mars Science Laboratory (MSL) sky-crane landing system and include a single solar-powered rover. The mission would also have a targeting accuracy of approximately 7 km (semimajor axis landing ellipse), a mobility range of at least 10 km, and a lifetime on the martian surface of at least 1 Earth year. An additional key consideration, given recently declining budgets and cost growth issues with MSL, is that the proposed rover must have lower cost and cost risk than those of MSL; this is an essential consideration for the Mars Exploration Program Analysis Group (MEPAG). The MRR-SAG was asked to formulate a mission concept that would address two general objectives: (1) conduct high-priority in situ science and (2) make concrete steps toward the potential return of samples to Earth. The proposed means of achieving these two goals while balancing the trade-offs between them are described here in detail. We propose the name Mars Astrobiology Explorer-Cacher (MAX-C) to reflect the dual purpose of this potential 2018 rover mission.
Newton, Maria J; Harjot, Kaur
2017-01-01
Flunarizine dihydrochloride (FHC) is used for the prophylaxis of migraine. Flunarizine has solubility problems, being practically insoluble in water and alcohol. Nanoemulsions are an approach to increasing the solubility of insoluble drugs. Nanoemulsions of FHC were prepared that can be given through an alternative route, such as nasal drug delivery, for migraine. In this research work the solubility of the poorly soluble FHC was successfully improved by preparing it as a nanoemulsion. Nanoemulsions can pass through biological membranes easily, so the drug can be delivered through the nasal mucosa, by which it may provide a quicker onset of action. The currently available dosage forms are tablets. The formulations were prepared using Glyceryl Monostearate (GMS), Tween 80 as surfactant and PEG 400:Ethanol as co-surfactant in distilled water. Nanoemulsions were prepared by a step-by-step procedure. The prepared nanoemulsions were analyzed preliminarily by Master Sizer, followed by Zeta Sizer using dynamic photon correlation spectroscopy. The best nanoemulsion was subjected to a Zeta Potential study. TEM analysis was carried out on the best formulation to gain detailed information about it. The best formulation was selected based on physical appearance, homogeneity of the preparation, the preliminary Master Sizer analysis, the secondary Zeta Sizer analysis with Zeta Potential, and TEM. The best formulation demonstrated a size in the nano range with improved solubility. The FHC nanoemulsion was prepared successfully, improving the solubility of the drug. The drug release study in simulated nasal fluid revealed that the preparation is suitable for delivery through the nasal route. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
An index-based robust decision making framework for watershed management in a changing climate.
Kim, Yeonjoo; Chung, Eun-Sung
2014-03-01
This study developed an index-based robust decision-making framework for watershed management dealing with water quantity and quality issues in a changing climate. It consists of two parts: management alternative development and alternative analysis. The first part, alternative development, consists of six steps: 1) understand the watershed components and processes using the HSPF model; 2) identify the spatial vulnerability ranking using two indices, potential streamflow depletion (PSD) and potential water quality deterioration (PWQD); 3) quantify residents' preferences on water management demands and calculate the watershed evaluation index, a weighted combination of PSD and PWQD; 4) set quantitative targets for water quantity and quality; 5) develop a list of feasible alternatives; and 6) eliminate the unacceptable alternatives. The second part, alternative analysis, has three steps: 7) analyze all selected alternatives with a hydrologic simulation model considering various climate change scenarios; 8) quantify the alternative evaluation index, which includes social and hydrologic criteria, utilizing multi-criteria decision analysis methods; and 9) prioritize all options based on a minimax regret strategy for robust decisions. This framework addresses the uncertainty inherent in climate models and climate change scenarios by utilizing the minimax regret strategy, a decision-making strategy under deep uncertainty, and thus derives a robust prioritization based on the multiple utilities of alternatives across scenarios. The proposed procedure was applied to a Korean urban watershed, which has suffered from streamflow depletion and water quality deterioration. Our application shows that the framework provides a useful watershed management tool for incorporating quantitative and qualitative information into the evaluation of various policies with regard to water resource planning and management. Copyright © 2013 Elsevier B.V. All rights reserved.
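Step 9's minimax-regret prioritization is straightforward to state numerically. In the sketch below, utility[i, j] holds the alternative evaluation index of alternative i under climate scenario j; the numbers are made up for illustration.

    import numpy as np

    utility = np.array([[0.62, 0.55, 0.48],
                        [0.58, 0.60, 0.52],
                        [0.70, 0.41, 0.45]])
    regret = utility.max(axis=0) - utility   # regret vs best option per scenario
    worst = regret.max(axis=1)               # worst-case regret per alternative
    ranking = np.argsort(worst)              # minimax-regret prioritization
    print("most robust alternative:", ranking[0])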
Chumwangwapee, Sasiwimon; Chingsungnoen, Artit; Siri, Sineenat
2016-11-01
In forensic DNA analyses, biological specimens are collected and stored for subsequent recovery and analysis of DNA. A cost-effective and efficient DNA recovery approach is therefore a need. This study aims to produce a plasma modified cellulose-chitosan membrane (pCE-CS) that efficiently binds and retains DNA as a potential DNA collecting card. The pCE-CS membrane was produced by a phase separation of ionic liquid dissolving CE and CS in water with subsequent surface-modification by a two-step exposure of argon plasma and nitrogen gas. Through plasma modification, the pCE-CS membrane demonstrated better DNA retention after a washing process and higher rate of DNA recovery as compared with the original CE-CS membrane and the commercial FTA card. In addition, the pCE-CS membrane exhibited anti-bacterial properties against both Escherichia coli and Staphylococcus aureus. The results of this work suggest a potential function of the pCE-CS membrane as a DNA collecting card with a high recovery rate of captured DNA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A novel framework for feature extraction in multi-sensor action potential sorting.
Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran
2015-09-30
Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step where each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields features that are discriminatory and lead to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data without the need for a forward propagation model or AP templates. The proposed MSD approach provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.
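As a rough illustration of joint spatio-temporal feature extraction, the dominant-subspace idea can be sketched with an SVD of the unfolded sensor-by-time snippets; this PCA-style stand-in omits the detector-theoretic machinery of the actual MSD formulation.

    import numpy as np

    def subspace_features(aps, rank=3):
        # aps: (n_events, n_sensors, n_samples) detected AP snippets
        n, s, t = aps.shape
        X = aps.reshape(n, s * t)
        X = X - X.mean(axis=0)
        # right singular vectors span the dominant subspace of the data
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return X @ Vt[:rank].T      # (n_events, rank) feature vectors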
The Facebook influence model: a concept mapping approach.
Moreno, Megan A; Kota, Rajitha; Schoohs, Shari; Whitehill, Jennifer M
2013-07-01
Facebook is a popular social media Web site that has been hypothesized to exert potential influence over users' attitudes, intentions, or behaviors. The purpose of this study was to develop a conceptual framework to explain influential aspects of Facebook. This mixed methods study applied concept mapping methodology, a validated five-step method to visually represent complex topics. The five steps comprise preparation, brainstorming, sort and rank, analysis, and interpretation. College student participants were identified using purposeful sampling. The 80 participants had a mean age of 20.5 years, and included 36% males. A total of 169 statements were generated during brainstorming, and sorted into between 6 and 22 groups. The final concept map included 13 clusters. Interpretation data led to grouping of clusters into four final domains, including connection, comparison, identification, and Facebook as an experience. The Facebook Influence Concept Map illustrates key constructs that contribute to influence, incorporating perspectives of older adolescent Facebook users. While Facebook provides a novel lens through which to consider behavioral influence, it can best be considered in the context of existing behavioral theory. The concept map may be used toward development of potential future intervention efforts.
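Sort-and-rank data of the kind gathered in concept mapping are typically analyzed by building a statement co-occurrence matrix and then applying scaling and hierarchical clustering; a schematic version, with the input format assumed for illustration:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def concept_map_clusters(sorts, n_statements, n_clusters=13):
        # sorts: one entry per participant, each a list of groups of statement ids
        co = np.zeros((n_statements, n_statements))
        for groups in sorts:
            for g in groups:
                for i in g:
                    for j in g:
                        co[i, j] += 1.0
        dist = co.max() - co                  # more co-sorting = more similar
        condensed = dist[np.triu_indices(n_statements, k=1)]
        Z = linkage(condensed, method="ward")
        return fcluster(Z, t=n_clusters, criterion="maxclust")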
NASA Astrophysics Data System (ADS)
Forouzandeh, Farisa; Li, Xiaoan; Banham, Dustin W.; Feng, Fangxia; Joseph Kakanat, Abraham; Ye, Siyu; Birss, Viola
2018-02-01
In this study, the effect of surface functionalization on the electrochemical corrosion resistance of a high surface area, mesoporous colloid imprinted carbon powder (CIC), as well as microporous Vulcan carbon (VC, serving as the benchmark), was demonstrated, primarily for PEM fuel cell applications. CIC-22, which is highly hydrophilic and was synthesized with 22 nm silica colloid templates, and as-received, mildly hydrophobic, VC powders, were functionalized with 2,3,4,5,6-pentafluorophenyl (-PhF5) surface groups using a straightforward diazonium reduction reaction. These carbons were then subjected to corrosion testing, involving a potential cycling-step sequence in room temperature 0.5 M H2SO4. Using cyclic voltammetry and charge/time analysis, the double layer and pseudo-capacitive gravimetric charges of the carbons, prior to and after the application of these potential steps, were tracked in order to obtain information about surface area changes and the extent of carbon oxidation, respectively. It is shown that the corrosion resistance was improved by ca. 50-80% by surface functionalization, likely due to a combination of surface passivation (loss of carbon active sites) and increased surface hydrophobicity.
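A brief sketch of the charge/time bookkeeping described above, assuming NumPy; the current trace, hold duration, and carbon loading are placeholders, not data from the study:

```python
import numpy as np

t = np.linspace(0, 30, 3001)                    # s, one potential hold
i = 1e-3 * np.exp(-t / 5.0)                     # A, an invented decaying oxidation current
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))   # C, trapezoidal integration
mass = 0.005                                    # g of carbon on the electrode (assumed)
print(f"{charge / mass:.3f} C/g")               # gravimetric charge tracked across steps
```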
Lower-extremity biomechanics during forward and lateral stepping activities in older adults
Wang, Man-Ying; Flanagan, Sean; Song, Joo-Eun; Greendale, Gail A.; Salem, George J.
2012-01-01
Objective To characterize the lower-extremity biomechanics associated with stepping activities in older adults. Design Repeated-measures comparison of kinematics and kinetics associated with forward step-up and lateral step-up activities. Background Biomechanical analysis may be used to assess the effectiveness of various ‘in-home activities’ in targeting appropriate muscle groups and preserving functional strength and power in elders. Methods Data were analyzed from 21 participants (mean 74.7 yr (standard deviation, 4.4 yr)) who performed the forward and lateral step-up activities while instrumented for biomechanical analysis. Motion analysis equipment, inverse dynamics equations, and repeated-measures ANOVAs were used to contrast the maximum joint angles, peak net joint moments, angular impulse, work, and power associated with the activities. Results The lateral step-up resulted in greater maximum knee flexion (P < 0.001) and ankle dorsiflexion angles (P < 0.01). Peak joint moments were similar between exercises. The forward step-up generated greater peak hip power (P < 0.05) and total work (P < 0.001); whereas, the lateral step-up generated greater impulse (P < 0.05), work (P < 0.01), and power (P < 0.05) at the knee and ankle. Conclusions In older adults, the forward step-up places greater demand on the hip extensors, while the lateral step-up places greater demand on the knee extensors and ankle plantar flexors. PMID:12620784
Secretory immunoglobulin purification from whey by chromatographic techniques.
Matlschweiger, Alexander; Engelmaier, Hannah; Himmler, Gottfried; Hahn, Rainer
2017-08-15
Secretory immunoglobulins (SIg) are a major fraction of the mucosal immune system and represent potential drug candidates. So far, platform technologies for their purification do not exist. SIg from animal whey was used as a model to develop a simple, efficient and potentially generic chromatographic purification process. Several chromatographic stationary phases were tested. A combination of two anion-exchange steps resulted in the highest purity. The key step was the use of a small-porous anion exchanger operated in flow-through mode. Diffusion of SIg into the resin particles was significantly hindered, while the main impurities, IgG and serum albumin, were bound. In this step, initial purity was increased from 66% to 89% with a step yield of 88%. In a second anion-exchange step using giga-porous material, SIg was captured and purified by step or linear gradient elution to obtain fractions with purities >95%. For the step gradient elution, the step yield of highly pure SIg was 54%. Elution of SIgA and SIgM with a linear gradient resulted in step yields of 56% and 35%, respectively. Overall yields for both anion-exchange steps were 43% for the combination of flow-through and step elution mode. Combination of flow-through and linear gradient elution mode resulted in a yield of 44% for SIgA and 39% for SIgM. The proposed process allows the purification of biologically active SIg from animal whey at preparative scale. For future applications, the process can easily be adapted for purification of recombinant secretory immunoglobulin species. Copyright © 2017 Elsevier B.V. All rights reserved.
On contact modelling in isogeometric analysis
NASA Astrophysics Data System (ADS)
Cardoso, R. P. R.; Adetoro, O. B.
2017-11-01
IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates that can be achieved; (iii) the possibility to operate directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory character of the basis functions, which adds difficulty in handling essential boundary conditions and makes contact analysis particularly challenging. In this work, IGA is extended to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used for the modelling of rigid tools as well as for the modelling of the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme: a first step in which a global search algorithm allocates contact knots to potential contact faces, and a second, local, contact search step in which point inversion techniques are used to calculate the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.
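A compact sketch of the point-inversion step named above, using Newton iteration on a quadratic Bezier curve as a stand-in for the NURBS tool surface; the control points and query point are invented:

```python
import numpy as np

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])   # invented control points

def curve(u):
    """Quadratic Bezier with first and second derivatives."""
    b = (1 - u) ** 2 * P[0] + 2 * u * (1 - u) * P[1] + u ** 2 * P[2]
    d1 = 2 * (1 - u) * (P[1] - P[0]) + 2 * u * (P[2] - P[1])
    d2 = 2 * (P[2] - 2 * P[1] + P[0])
    return b, d1, d2

def point_inversion(q, u=0.5, iters=20):
    """Newton iteration for the curve parameter closest to the query point q."""
    for _ in range(iters):
        b, d1, d2 = curve(u)
        f = d1 @ (b - q)                      # orthogonality condition at the closest point
        fp = d2 @ (b - q) + d1 @ d1
        u = np.clip(u - f / fp, 0.0, 1.0)     # clamp to the parameter span
    return u

q = np.array([1.0, 0.5])
u = point_inversion(q)
gap = np.linalg.norm(curve(u)[0] - q)         # contact penetration gap magnitude
```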
Decision-making for foot-and-mouth disease control: Objectives matter
Probert, William J. M.; Shea, Katriona; Fonnesbeck, Christopher J.; Runge, Michael C.; Carpenter, Tim E.; Durr, Salome; Garner, M. Graeme; Harvey, Neil; Stevenson, Mark A.; Webb, Colleen T.; Werkman, Marleen; Tildesley, Michael J.; Ferrari, Matthew J.
2016-01-01
Formal decision-analytic methods can be used to frame disease control problems, the first step of which is to define a clear and specific objective. We demonstrate the imperative of framing clearly-defined management objectives in finding optimal control actions for control of disease outbreaks. We illustrate an analysis that can be applied rapidly at the start of an outbreak when there are multiple stakeholders involved with potentially multiple objectives, and when there are also multiple disease models upon which to compare control actions. The output of our analysis frames subsequent discourse between policy-makers, modellers and other stakeholders, by highlighting areas of discord among different management objectives and also among different models used in the analysis. We illustrate this approach in the context of a hypothetical foot-and-mouth disease (FMD) outbreak in Cumbria, UK using outputs from five rigorously-studied simulation models of FMD spread. We present both relative rankings and relative performance of controls within each model and across a range of objectives. Results illustrate how control actions change across both the base metric used to measure management success and across the statistic used to rank control actions according to said metric. This work represents a first step towards reconciling the extensive modelling work on disease control problems with frameworks for structured decision making.
Image analysis and mathematical modelling for the supervision of the dough fermentation process
NASA Astrophysics Data System (ADS)
Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd
2016-10-01
The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Beside the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is strongly influenced during this production stage. However, until now no on-line measurement system has been available to supervise this important process step. In this investigation the potential of an image analysis system is evaluated that enables the determination of the volume of fermenting dough pieces. The camera moves around the fermenting pieces and collects images of the objects from different angles (360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, which builds on the Bernoulli equation, the carbon dioxide production rate of the yeast cells, and the diffusion of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.
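A hedged sketch of the forecasting idea, assuming SciPy; the saturating-exponential volume model below is an illustrative stand-in for the paper's Bernoulli-based description, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def rel_volume(t, v_max, k):
    """Relative dough volume rising from 1 towards v_max."""
    return 1 + (v_max - 1) * (1 - np.exp(-k * t))

t = np.linspace(0, 300, 31)                  # first 300 s of proofing
rng = np.random.default_rng(2)
obs = rel_volume(t, 2.8, 0.002) + rng.normal(0, 0.01, t.size)   # synthetic measurements
(v_max, k), _ = curve_fit(rel_volume, t, obs, p0=(2.0, 0.001))
print(rel_volume(3600.0, v_max, k))          # forecast of relative volume at 1 h
```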
Ciccia, Rossella
2017-01-01
Typologies have represented an important tool for the development of comparative social policy research and continue to be widely used in spite of growing criticism of their ability to capture the complexity of welfare states and their internal heterogeneity. In particular, debates have focused on the presence of hybrid cases and the existence of distinct cross-national patterns of variation across areas of social policy. There is growing awareness around these issues, but empirical research often still relies on methodologies aimed at classifying countries into a limited number of unambiguous types. This article proposes a two-step approach based on fuzzy-set ideal type analysis for the systematic analysis of hybrids at the level of both policies (step 1) and policy configurations or combinations of policies (step 2). This approach is demonstrated using the case of childcare policies in European economies. In the first step, parental leave policies are analysed using three methods (direct, indirect, and combinatory) to identify and describe specific hybrid forms at the level of policy analysis. In the second step, the analysis moves on to investigate the relationship between parental leave and childcare services. The analysis clearly shows that many countries display characteristics normally associated with different types (hybrids and sub-types). Therefore, this two-step approach demonstrates that disaggregated and aggregated analyses are equally important to account for hybrid welfare forms and to make sense of the tensions and incongruences within and between policies.
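A small sketch of the fuzzy-set logic in step 2, with invented membership scores; combining policy set memberships with the fuzzy AND (minimum) flags hybrid cases:

```python
# Membership of each country in two policy sets (scores are illustrative).
leave_generosity = {"SE": 0.9, "DE": 0.6, "UK": 0.3}
service_coverage = {"SE": 0.8, "DE": 0.4, "UK": 0.5}

for country in leave_generosity:
    # Membership in the ideal type "generous leave AND broad services".
    m = min(leave_generosity[country], service_coverage[country])
    hybrid = 0.25 < m < 0.75                 # neither clearly in nor out of the type
    print(country, round(m, 2), "hybrid" if hybrid else "clear type")
```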
Political Regime and Human Capital: A Cross-Country Analysis
ERIC Educational Resources Information Center
Klomp, Jeroen; de Haan, Jakob
2013-01-01
We examine the relationship between different dimensions of the political regime in place and human capital using a two-step structural equation model. In the first step, we employ factor analysis on 16 human capital indicators to construct two new human capital measures (basic and advanced human capital). In the second step, we estimate the…
The role of models in estimating consequences as part of the risk assessment process.
Forde-Folle, K; Mitchell, D; Zepeda, C
2011-08-01
The degree of disease risk represented by the introduction, spread, or establishment of one or several diseases through the importation of animals and animal products is assessed by importing countries through an analysis of risk. The components of a risk analysis include hazard identification, risk assessment, risk management, and risk communication. A risk assessment starts with identification of the hazard(s) and then continues with four interrelated steps: release assessment, exposure assessment, consequence assessment, and risk estimation. Risk assessments may be either qualitative or quantitative. This paper describes how, through the integration of epidemiological and economic models, the potential adverse biological and economic consequences of exposure can be quantified.
Multi-enzyme logic network architectures for assessing injuries: digital processing of biomarkers.
Halámek, Jan; Bocharova, Vera; Chinnapareddy, Soujanya; Windmiller, Joshua Ray; Strack, Guinevere; Chuang, Min-Chieh; Zhou, Jian; Santhosh, Padmanabhan; Ramirez, Gabriela V; Arugula, Mary A; Wang, Joseph; Katz, Evgeny
2010-12-01
A multi-enzyme biocatalytic cascade processing simultaneously five biomarkers characteristic of traumatic brain injury (TBI) and soft tissue injury (STI) was developed. The system operates as a digital biosensor based on concerted function of 8 Boolean AND logic gates, resulting in the decision about the physiological conditions based on the logic analysis of complex patterns of the biomarkers. The system represents the first example of a multi-step/multi-enzyme biosensor with the built-in logic for the analysis of complex combinations of biochemical inputs. The approach is based on recent advances in enzyme-based biocomputing systems and the present paper demonstrates the potential applicability of biocomputing for developing novel digital biosensor networks.
Network meta-analysis: application and practice using Stata.
Shim, Sungryul; Yoon, Byung-Ho; Shin, In-Soo; Bae, Jong-Myon
2017-01-01
This review aimed to organize the concepts of network meta-analysis (NMA) and to demonstrate the analytical process of NMA using Stata software under a frequentist framework. An NMA synthesizes evidence for decision making by evaluating the comparative effectiveness of more than two alternative interventions for the same condition. Before conducting an NMA, three major assumptions (similarity, transitivity, and consistency) should be checked. The statistical analysis consists of five steps. The first step is to draw a network geometry to provide an overview of the network relationships. The second step checks the assumption of consistency. The third step is to make the network forest plot or interval plot in order to illustrate the summary size of comparative effectiveness among the various interventions. The fourth step calculates cumulative rankings for identifying superiority among the interventions. The last step evaluates publication bias or effect modifiers for a valid inference from the results. The evidence synthesized through these five steps would be very useful for evidence-based decision making in healthcare. Thus, NMA should be promoted in order to help guarantee the quality of the healthcare system.
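A toy sketch of step 1 (network geometry), using networkx rather than Stata; the treatments and trial counts are invented:

```python
import networkx as nx

# Nodes are interventions; weighted edges are available head-to-head comparisons.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Placebo", "Drug A", 4),   # 4 trials compare these two arms
    ("Placebo", "Drug B", 2),
    ("Drug A", "Drug B", 1),    # closed loop, so consistency can be checked
])
for u, v, d in G.edges(data=True):
    print(f"{u} vs {v}: {d['weight']} trial(s)")
```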
Melzer, Itshak; Elbar, Ori; Tsedek, Irit; Oddsson, Lars IE
2008-01-01
Background Gait and balance impairments may increase the risk of falls, the leading cause of accidental death in the elderly population. Fall-related injuries constitute a serious public health problem associated with high costs for society as well as human suffering. A rapid step is the most important protective postural strategy, acting to recover equilibrium and prevent a fall from initiating. It can arise from large perturbations, but also frequently as a consequence of volitional movements. We propose to use a novel water-based training program which includes specific perturbation exercises that target the stepping responses and could potentially have a profound effect in reducing the risk of falling. We describe the water-based balance training program and a study protocol to evaluate its efficacy (Trial registration number #NCT00708136). Methods/Design The proposed water-based training program involves use of unpredictable, multi-directional perturbations in a group setting to evoke compensatory and volitional stepping responses. Perturbations are made by lightly pushing the subjects and by water turbulence, in 24 training sessions conducted over 12 weeks. Concurrent cognitive tasks during movement tasks are included. Principles of physical training and exercise including awareness, continuity, motivation, overload, periodicity, progression and specificity were used in the development of this novel program. Specific goals are to increase the speed of stepping responses and improve the postural control mechanism and physical functioning. A prospective, randomized, cross-over trial with concealed allocation, assessor blinding and intention-to-treat analysis will be performed to evaluate the efficacy of the water-based training program. A total of 36 community-dwelling adults (age 65–88) with no recent history of instability or falling will be assigned to either the perturbation-based training or a control group (no training). Voluntary step reaction times and postural stability using stabilogram diffusion analysis will be tested before and after the 12 weeks of training. Discussion This study will determine whether a water-based balance training program that includes perturbation exercises, in a group setting, can improve the speed of voluntary stepping responses and improve balance control. Results will help guide the development of more cost-effective interventions that can prevent the occurrence of falls in the elderly. PMID:18706103
Novel approaches for bioinformatic analysis of salivary RNA sequencing data for development.
Kaczor-Urbanowicz, Karolina Elzbieta; Kim, Yong; Li, Feng; Galeev, Timur; Kitchen, Rob R; Gerstein, Mark; Koyano, Kikuye; Jeong, Sung-Hee; Wang, Xiaoyan; Elashoff, David; Kang, So Young; Kim, Su Mi; Kim, Kyoung; Kim, Sung; Chia, David; Xiao, Xinshu; Rozowsky, Joel; Wong, David T W
2018-01-01
Analysis of RNA sequencing (RNA-Seq) data from human saliva is challenging. Lack of standardization and unification of the bioinformatic procedures undermines saliva's diagnostic potential, which motivated the present study. We applied principal pipelines for bioinformatic analysis of small RNA-Seq data from the saliva of 98 healthy Korean volunteers, including either direct or indirect mapping of the reads to the human genome using Bowtie1. Analysis of alignments to exogenous genomes by another pipeline revealed that almost all of the reads map to bacterial genomes. Thus, salivary exRNA has fundamental properties that warrant the design of unique additional steps when performing the bioinformatic analysis. Our pipelines can serve as potential guidelines for the processing of RNA-Seq data from human saliva. Processing and analysis results of the experimental data generated by the exceRpt (v4.6.3) small RNA-seq pipeline (github.gersteinlab.org/exceRpt) are available from the exRNA atlas (exrna-atlas.org). Alignments to exogenous genomes and their quantification results were used in this paper for the analyses of small RNAs of exogenous origin. dtww@ucla.edu. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Simplified Helium Refrigerator Cycle Analysis Using the `Carnot Step'
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Knudsen; V. Ganni
2006-05-01
An analysis of the Claude form of an idealized helium liquefier for the minimum input work reveals the "Carnot Step" for helium refrigerator cycles. Just as the "Carnot Step" for a multi-stage polytropic compression process consists of equal pressure-ratio stages, the "Carnot Step" for an idealized helium liquefier consists of equal temperature-ratio stages for a given number of expansion stages. This paper presents the analytical basis and some useful equations for the preliminary examination of existing and new Claude helium refrigeration cycles.
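A worked numerical sketch of the equal temperature-ratio staging, with illustrative boundary temperatures:

```python
T_warm, T_cold, n_stages = 300.0, 4.5, 4          # K, K, expansion stages (assumed values)
r = (T_warm / T_cold) ** (1.0 / n_stages)         # common temperature ratio per stage
temps = [T_warm / r ** i for i in range(n_stages + 1)]
print([round(T, 2) for T in temps])               # 300.0 down to 4.5 K in equal ratios
```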
Automated analysis for microcalcifications in high resolution digital mammograms
Mascio, Laura N.
1996-01-01
A method for automatically locating microcalcifications indicating breast cancer. The invention assists mammographers in finding very subtle microcalcifications and in recognizing the pattern formed by all the microcalcifications. It also draws attention to microcalcifications that might be overlooked because a more prominent feature draws attention away from an important object. A new filter has been designed to weed out false positives in one of the steps of the method. Previously, iterative selection threshold was used to separate microcalcifications from the spurious signals resulting from texture or other background. A Selective Erosion or Enhancement (SEE) Filter has been invented to improve this step. Since the algorithm detects areas containing potential calcifications on the mammogram, it can be used to determine which areas need be stored at the highest resolution available, while, in addition, the full mammogram can be reduced to an appropriate resolution for the remaining cancer signs.
Mother Involvement as an Influence on Father Involvement with Early Adolescents
Pleck, Joseph H.; Hofferth, Sandra L.
2009-01-01
This study hypothesized that father involvement is influenced by mothers' level of involvement as well as by marital conflict, mothers' work hours, and fathers' status as biological or step father. The analysis also tested hypotheses about mother involvement as a potential mediator of the effects of marital conflict and maternal work hours on father involvement, and hypotheses about factors influencing mother involvement. Children aged 10-14 from the NLSY79 who resided with their biological or step father and with their mother reported on each parent's involvement with them. As hypothesized, father involvement was predicted by mother involvement, and the reciprocal influence was not significant. Father involvement was associated with low marital conflict and being a biological father. Mothers' involvement partially mediated the effects of marital conflict on father involvement. If the mediating role of maternal involvement is not taken into account, the effect of marital conflict on father involvement is overestimated. PMID:21776195
A seminested PCR assay for detection and typing of human papillomavirus based on E1 gene sequences.
Cavalcante, Gustavo Henrique O; de Araújo, Josélio M G; Fernandes, José Veríssimo; Lanza, Daniel C F
2018-05-01
HPV infection is considered one of the leading causes of cervical cancer in the world. To date, more than 180 types of HPV have been described, and viral typing is critical for defining the prognosis of cancer. In this work, a seminested PCR that allows fast and inexpensive detection and typing of HPV is presented. The system is based on the amplification of a variable-length region within the viral gene E1, using three primers that potentially anneal in all HPV genomes. The amplicons produced in the first step can be identified by high-resolution electrophoresis or direct sequencing. The seminested step includes nine specific primers which can be used in multiplex or individual reactions to discriminate the main types of HPV by amplicon size differentiation using agarose electrophoresis, reducing the time spent and cost per analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
A combination of Raspberry Pi and SoftEther VPN for controlling research devices via the Internet.
Kuroda, Toshikazu
2017-11-01
Remote control over devices for experiments may increase the efficiency of operant research and expand the area where behavior can be studied. This article introduces a combination of Raspberry Pi® (Pi) and SoftEther VPN® that allows for such remote control via the Internet. The Pi is a small Linux computer with a great degree of flexibility for customization. Test results indicate that a Pi-based interface meets the requirements for conducting operant research. SoftEther VPN® allows for establishing an extensive private network on the Internet using a single private Wi-Fi router. Step-by-step instructions are provided in the present article for setting up the Pi along with SoftEther VPN®. Their potential for improving the way research is conducted is discussed. © 2017 Society for the Experimental Analysis of Behavior.
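A minimal sketch of the kind of device control such a setup permits once the Pi is reachable over the VPN, using the standard RPi.GPIO library; the pin number and the relay-driven feeder are assumptions, and the code runs only on a Pi:

```python
import time
import RPi.GPIO as GPIO   # commonly preinstalled on Raspberry Pi OS

FEEDER_PIN = 17           # hypothetical BCM pin wired to a feeder relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(FEEDER_PIN, GPIO.OUT, initial=GPIO.LOW)
try:
    GPIO.output(FEEDER_PIN, GPIO.HIGH)   # energize the relay
    time.sleep(0.5)                      # 500 ms reinforcement pulse
    GPIO.output(FEEDER_PIN, GPIO.LOW)
finally:
    GPIO.cleanup()                       # release the pin on exit
```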
Natural language processing of spoken diet records (SDRs).
Lacson, Ronilda; Long, William
2006-01-01
Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
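A toy sketch of the first two steps (identifying food items and their quantifiers), with an invented mini-lexicon and pattern; the paper's actual lexicons and classifiers are not reproduced here:

```python
import re

FOODS = {"oatmeal", "banana", "coffee"}   # invented mini-lexicon
QUANT = re.compile(r"(one|two|a|half a?)\s+(cup|bowl|slice)s?\s+of\s+(\w+)")

def parse_sdr(utterance):
    """Extract (quantity, unit, food) records from one spoken diet record."""
    records = []
    for qty, unit, food in QUANT.findall(utterance.lower()):
        if food in FOODS:
            records.append({"food": food, "qty": qty, "unit": unit})
    return records

print(parse_sdr("I had a bowl of oatmeal and two cups of coffee this morning"))
```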
Investigation Of In-Line Monitoring Options At H Canyon/HB Line For Plutonium Oxide Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sexton, L.
2015-10-14
H Canyon and HB Line have a production goal of 1 MT per year of plutonium oxide feedstock for the MOX facility by FY17 (AFS-2 mission). In order to meet this goal, steps will need to be taken to improve processing efficiency. One concept for achieving this goal is to implement in-line process monitoring at key measurement points within the facilities. In-line monitoring during operations has the potential to increase throughput and efficiency while reducing costs associated with laboratory sample analysis. In the work reported here, we mapped the plutonium oxide process, identified key measurement points, investigated alternate technologies that could be used for in-line analysis, and initiated a throughput benefit analysis.
Stetzer, Dave; Leavitt, Adam M; Goeke, Charles L; Havas, Magda
2016-01-01
Ground current, commonly referred to as "stray voltage," has been an issue on dairy farms since electricity was first brought to rural America. Equipment that generates high-frequency voltage transients on electrical wires, combined with a multigrounded electrical distribution system and inadequate neutral returns, all contribute to ground current. Despite decades of problems, we are no closer to resolving this issue, in part due to three misconceptions that are addressed in this study. Misconception 1: the current standard of 1 V at cow contact is adequate to protect dairy cows. Misconception 2: frequencies higher than 60 Hz do not need to be considered. Misconception 3: all sources of ground current originate on the farm that has a ground current problem. This case study of a Wisconsin dairy farm documents: (1) how to establish permanent monitoring of ground current (step potential) on a dairy farm; (2) how to determine and remediate both on-farm and off-farm sources contributing to step potential; (3) which step-potential metrics relate to cow comfort and milk production; and (4) how these metrics relate to established standards. On-farm sources include lighting, variable-speed frequency drives on motors, and a radio frequency identification system; off-farm sources are due to a poor primary neutral return on the utility side of the distribution system. A step-potential threshold of 1 V root mean square (RMS) at 60 Hz is inadequate to protect dairy cows, since decreases of a few mV peak-to-peak at higher frequencies increase milk production, reduce milking time, and improve cow comfort.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
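A minimal sketch of the compile-and-summarize loop the method describes; the field names and numbers are assumptions, not the patent's actual parameter set:

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    cycle_time_min: float     # value-added processing time
    queue_time_min: float     # waiting time between steps

steps = [
    ProcessStep("stamp", 2.0, 30.0),
    ProcessStep("weld", 5.0, 45.0),
    ProcessStep("paint", 3.0, 60.0),
]
total = sum(s.cycle_time_min + s.queue_time_min for s in steps)
value_added = sum(s.cycle_time_min for s in steps)
print(f"process cycle efficiency: {value_added / total:.1%}")   # one lean metric
```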
Katsounaros, Ioannis; Chen, Ting; Gewirth, Andrew A.; ...
2016-01-12
The two traditional mechanisms of the electrochemical ammonia oxidation consider only concerted proton-electron transfer elementary steps, and thus they predict that the rate–potential relationship is independent of the pH on the pH-corrected RHE potential scale. In this letter we show that this is not the case: the increase of the solution pH shifts the onset of the NH3-to-N2 oxidation on Pt(100) to lower potentials and also leads to a higher surface concentration of formed NOad before the latter is oxidized to nitrite. Therefore, we present a new mechanism for the ammonia oxidation which incorporates a deprotonation step occurring prior to the electron transfer. The deprotonation step yields a negatively charged surface-adsorbed species which is discharged in a subsequent electron transfer step before the N–N bond formation. The negatively charged species is thus a precursor for the formation of N2 and NO. The new mechanism should be a future guide for computational studies aiming at the identification of intermediates and corresponding activation barriers for the elementary steps. As a result, ammonia oxidation is a new example of a bond-forming reaction on (100) terraces which involves decoupled proton-electron transfer.
Conductance and refraction across a Barrier in Phosphorene
NASA Astrophysics Data System (ADS)
Dahal, Dipendra; Gumbs, Godfrey
The transmission coefficient and ballistic conductance for monolayer black phosphorene are calculated when a potential step or square barrier is present. The Landauer-Büttiker formalism is employed in our calculations of the conductance. We obtain the refractive index for the step potential barrier when an incident electron beam travels along different paths, so as to observe what role the anisotropy of the energy bands plays. Numerical results are presented for various potential heights and barrier widths, and these are compared with those for gapless and gapped graphene.
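For orientation, a sketch of the textbook transmission coefficient across an ideal 1D potential step for a free particle; this is standard quantum mechanics, not the anisotropic phosphorene band structure treated in the paper:

```python
import numpy as np

def step_transmission(E, V0, m=1.0, hbar=1.0):
    """Transmission across an ideal 1D potential step (natural units)."""
    if E <= V0:
        return 0.0                            # no propagating transmitted wave
    k1 = np.sqrt(2 * m * E) / hbar            # incident wavevector
    k2 = np.sqrt(2 * m * (E - V0)) / hbar     # transmitted wavevector
    return 4 * k1 * k2 / (k1 + k2) ** 2

for E in (1.2, 2.0, 5.0):
    print(E, round(step_transmission(E, V0=1.0), 3))
```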
NASA Astrophysics Data System (ADS)
Preiss, Bruce; Greene, Lloyd; Kriebel, Jamie; Wasson, Robert
2006-05-01
The Air Force Research Laboratory utilizes a value model as a primary input for space technology planning and budgeting. The Space Sector at AFRL headquarters manages space technology investment across all the geographically disparate technical directorates and ensures that integrated planning is achieved across the space community. The space investment portfolio must ultimately balance near, mid, and far-term investments across all the critical space mission areas. Investment levels and growth areas can always be identified by a typical capability analysis or gap analysis, but the value model approach goes one step deeper and helps identify the potential payoff of technology investments by linking the technology directly to an existing or potential concept. The value of the technology is then viewed from the enabling performance perspective of the concept that ultimately fulfills the Air Force mission. The process of linking space technologies to future concepts and technology roadmaps will be reviewed in this paper, along with representative results from this planning cycle. The initial assumptions in this process will be identified along with the strengths and weaknesses of this planning methodology.
Armendáriz-Vidales, Georgina; Frontana, Carlos
2014-09-07
An electrochemical and theoretical analysis of a series of shikonin derivatives in aprotic media is presented. Results showed that the first electrochemical reduction signal is a reversible monoelectronic transfer, generating a stable semiquinone intermediate; the corresponding E⁰(I) values were correlated with calculated values of electroaccepting power (ω(+)) and adiabatic electron affinities (A(Ad)), obtained with BHandHLYP/6-311++G(2d,2p) and considering the solvent effect, revealing the influence of intramolecular hydrogen bonding and the substituting group at position C-2 on the experimental reduction potential. For the second reduction step, the esterified compounds isobutyryl- and isovalerylshikonin presented a coupled chemical reaction following dianion formation. Analysis of the variation of the dimensionless cathodic peak potential values (ξ(p)) as a function of the scan rate (v), together with complementary experiments in benzonitrile, suggested that this process follows a dissociative electron transfer, in which the rate of heterogeneous electron transfer is slow (~0.2 cm s(-1)) and the rate constant of the chemical process is at least 10(5) times larger.
Valiante, D J; Richards, T B; Kinsley, K B
1992-01-01
To identify workplaces in New Jersey with potential for silica exposure, the New Jersey Department of Health compared four-digit Standard Industrial Classifications (SICs) identified by three different data sources: the National Occupational Exposure Survey (NOES), a New Jersey silicosis case registry, and regulatory agency compliance inspections in New Jersey. In total, the three data sources identified 204 SICs in New Jersey with potential for silica exposure. Forty-five percent of these SICs were identified by NOES only, 16% by registry cases only, 6% by compliance inspections only, and 33% by two or more sources. Since different surveillance sources implicate different SICs, this type of analysis is a useful first step in planning programs for the prevention of silicosis.
Lacey, Susan R; Kilgore, Meredith; Yun, Huifeng; Hughes, Ronda; Allison, Jeroan; Cox, Karen S
2008-06-01
Much attention has been focused on how the nursing shortage will impact the growing number of aging Americans. This study was conducted as a first step in understanding nursing supply relative to potential pediatric demand using merged data from the American Hospital Association's annual survey and Census data by state from the year 2000. Findings indicate that there is tremendous variability among reporting states related to estimated pediatric nurses (registered nurse full-time equivalents), potential pediatric demand (persons from birth to 18 years), and allocated pediatric beds. Future research will examine how this supply-demand chain impacts clinical and cost outcomes for pediatric patients.
Exciting (and modulating) very-long-period seismic signals on White Island, New Zealand
NASA Astrophysics Data System (ADS)
Neuberg, Jurgen; Jolly, Art
2014-05-01
Very-long-period seismic signals (VLP) on volcanoes can be used to fill the gap between classic seismology and deformation studies. In this contribution we reiterate the principal processing steps to retrieve, from a velocity seismogram, 3D ground displacement with tiny amplitudes far below the resolution of GPS. As a case study we use several seismic and infrasonic signals of volcanic events from White Island, New Zealand. We apply particle motion analysis and deformation modelling tools to the resulting displacement signals and examine the potential link between ground displacement and the modulation of harmonic tremor, in turn linked to a hydrothermal system. In this way we want to demonstrate the full potential of VLPs in the monitoring and modelling of volcanic processes.
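A bare-bones sketch of the central processing step, integrating a velocity seismogram to displacement; real workflows also remove the instrument response and band-pass filter, and the trace here is synthetic:

```python
import numpy as np
from scipy.signal import detrend

def velocity_to_displacement(vel, dt):
    vel = detrend(vel)              # suppress drift before integrating
    return np.cumsum(vel) * dt      # rectangle-rule time integration

dt = 0.01                           # 100 Hz sampling (assumed)
t = np.arange(0, 60, dt)
vel = 1e-6 * np.sin(2 * np.pi * 0.05 * t)    # synthetic VLP-band velocity trace, m/s
disp = velocity_to_displacement(vel, dt)     # displacement in metres
```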
Hines, Thomas; Díez-Pérez, Ismael; Nakamura, Hisao; Shimazaki, Tomomi; Asai, Yoshihiro; Tao, Nongjian
2013-03-06
We report controlling the formation of single-molecule junctions by means of electrochemically reducing two axial diazonium terminal groups on a molecule, thereby producing direct Au-C covalent bonds in situ between the molecule and gold electrodes. We report a yield enhancement in molecular junction formation as the electrochemical potential of both junction electrodes approaches the reduction potential of the diazonium terminal groups. Step length analysis shows that the molecular junction is significantly more stable, and can be pulled over a longer distance, than a comparable junction created with amine anchoring bonds. The stability of the junction is explained by the calculated lower binding energy associated with the direct Au-C bond compared with the Au-N bond.
Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)
Laufenberg, Jared S.; Clark, Joseph D.
2014-01-01
From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.
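A simple sketch of one correlated random walk of the kind simulated above; the persistence parameter is invented and no step-selection weighting is applied:

```python
import numpy as np

rng = np.random.default_rng(3)

def corr_random_walk(n_steps, step_len=1.0, turn_sd=0.4):
    """Walk whose heading at each step is the previous heading plus noise."""
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.normal(0, turn_sd)    # directional persistence
        step = step_len * np.array([np.cos(heading), np.sin(heading)])
        pos[i + 1] = pos[i] + step
    return pos

track = corr_random_walk(1000)               # one of many simulated paths
```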
Memari, Sahel; Le Bozec, Serge; Bouisset, Simon
2014-02-21
This research deals with the postural adjustments that occur after the end of voluntary movement ("consecutive postural adjustments": CPAs). The influence of a potentially slippery surface on CPA characteristics was considered, with the aim of exploring more deeply the postural component of the task-movement. Seven male adults were asked to perform a single step, as quickly as possible, to their own footprint marked on the ground. A force plate measured the resultant reaction forces along the antero-posterior axis (R(x)) and the centre of pressure (COP) displacements along the antero-posterior and lateral axes (Xp and Yp). The velocity of the centre of gravity (COG) along the antero-posterior axis and the corresponding impulse (∫R(x)dt) were calculated; the peak velocity (termed "progression velocity": V(xG)) was measured. The required coefficient of friction (RCOF) along the progression axis (pμ(x)) was determined. Two materials, differing by their COF, were laid at foot contact (FC), providing a rough foot contact (RoFC) and a smooth foot contact (SmFC) considered to be potentially slippery. Two step lengths were also performed: a short step (SS) and a long step (LS). Finally, the subjects completed four series of ten steps each. These were preceded by preliminary trials, to allow them to acquire the necessary adaptation to the experimental conditions. The antero-posterior force time course presented a positive phase that included APAs ("anticipatory postural adjustments") and step execution (STEP), followed by a negative one corresponding to CPAs. The backward impulse (CPI) was equal to the forward one (BPI), independently of friction and progression velocity. Moreover, V(xG) did not differ according to friction, but was faster when the step length was greater. Lastly, CPA peak amplitudes (pCPA) were significantly greater and CPA durations (dCPA) shorter for RoFC, and conversely for SmFC, contrary to APAs. Finally, the results show a particular adaptation to the potentially slippery surface (SmFC). They suggest that adherence modulation at foot contact could be one of the rules for controlling COG displacement in single stepping. Consequently, the actual coefficient of friction value might be implemented in the motor programme at a higher level than the voluntary movement specific parameters. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Beyond Constant Comparison Qualitative Data Analysis: Using NVivo
ERIC Educational Resources Information Center
Leech, Nancy L.; Onwuegbuzie, Anthony J.
2011-01-01
The purposes of this paper are to outline seven types of qualitative data analysis techniques, to present step-by-step guidance for conducting these analyses via a computer-assisted qualitative data analysis software program (i.e., NVivo9), and to present screenshots of the data analysis process. Specifically, the following seven analyses are…
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction, and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented, in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed a statistically significant standardized mean difference regarding anchorage preservation, -2.55 mm (95% CI -2.99 to -2.11), and the amount of upper incisor retraction, -0.38 mm (95% CI -0.70 to -0.06), when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction, with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
Singer, Jonathan C; McIlroy, William E; Prentice, Stephen D
2014-11-07
Research examining age-related changes in dynamic stability during stepping has recognised the importance of the restabilisation phase, subsequent to foot-contact. While regulation of the net ground reaction force (GRFnet) line of action is believed to influence dynamic stability during steady-state locomotion, such control during restabilisation remains unknown. This work explored the origins of age-related decline in mediolateral dynamic stability by examining the line of action of GRFnet relative to the centre of mass (COM) during restabilisation following voluntary stepping. Healthy younger and older adults (n=20 per group) performed three single-step tasks (varying speed and step placement), altering the challenge to stability control. Age-related differences in magnitude and intertrial variability of the angle of divergence of GRFnet line of action relative to the COM were quantified, along with the peak mediolateral and vertical GRFnet components. The angle of divergence was further examined at discrete points during restabilisation, to uncover events of potential importance to stability control. Older adults exhibited a reduced angle of divergence throughout restabilisation. Temporal and spatial constraints on stepping increased the magnitude and intertrial variability of the angle of divergence, although not differentially among the older adults. Analysis of the time-varying angle of divergence revealed age-related reductions in magnitude, with increases in timing and intertrial timing variability during the later phase of restabilisation. This work further supports the idea that age-related challenges in lateral stability control emerge during restabilisation. Age-related alterations during the later phase of restabilisation may signify challenges with reactive control. Copyright © 2014 Elsevier Ltd. All rights reserved.
Deliberative Rhetoric as a Step in Organizational Crisis Management: Exxon as a Case Study.
ERIC Educational Resources Information Center
Johnson, Darrin; Sellnow, Timothy
1995-01-01
Explains that when organizations face crises, their rhetorical response often follows two steps: assessment of causes leading to the crisis, and a search for potential solutions and preventive measures for the future. States that epideictic rhetoric designed to sustain or regain the organization's reputation is effective in both steps. Examines…
NASA Astrophysics Data System (ADS)
de Winnaar, G.; Jewitt, G. P. W.; Horan, M.
Water-scarce countries such as South Africa are subject to various hydrological constraints which can often be attributed to poor rainfall partitioning, particularly within resource-poor farming communities that are reliant on rainfed agriculture. Recent initiatives to address this have shifted focus to explore more efficient alternatives to water supply and the recognition of numerous opportunities to implement runoff harvesting as a means to supplement water availability. However, increasing the implementation of runoff harvesting without encountering unintended impacts on downstream hydrological and ecological systems requires better understanding of the hydrologic and environmental impacts at catchment scale. In this paper, the representation of spatial variations in landscape characteristics such as soil, land use, rainfall, and slope information is shown to be an important step in identifying potential runoff harvesting sites, after which the hydrological response of catchments where extensive runoff harvesting is being considered can be modelled and the likely impacts assessed. A geographic information system (GIS) was utilised as an integrating tool to store, analyse, and manage spatial information and, when linked to hydrological response models, provided a rational means to facilitate decision making by providing catchment-level identification, planning, and assessment of runoff harvesting sites, as illustrated by a case study at the Potshini catchment, a small sub-catchment in the Thukela River basin, South Africa. Through the linked GIS, potential runoff harvesting sites are identified relative to areas that concentrate runoff and where the stored water will be appropriately distributed. Based on the GIS analysis, it was found that 17% of the Potshini catchment area has a high potential for generating surface runoff, whereas an analysis of all factors which influence the location of such systems shows that 18% is highly suitable for runoff harvesting. Details of the spatially explicit method adopted in this paper are provided, and output from the integrated GIS modelling system is presented using suitability maps. It is concluded that providing an accurate spatial representation of the runoff generation potential within a catchment is an important step in developing a strategic runoff harvesting plan for any catchment.
NASA Astrophysics Data System (ADS)
Seuront, Laurent
2015-08-01
Fractal analysis is increasingly used to describe, and provide further understanding of, zooplankton swimming behavior. This may be related to the fact that fractal analysis and the related fractal dimension D have the desirable properties of being independent of measurement scale and of being very sensitive to even subtle behavioral changes that may be undetectable with other behavioral variables. As claimed early on by Coughlin et al. (1992), this creates "the need for fractal analysis" in behavioral studies, which therefore has the potential to become a valuable tool in zooplankton behavioral ecology. However, this paper stresses that fractal analysis, as well as the more elaborate multifractal analysis, is also a risky business that may lead to irrelevant results unless extreme attention is paid to a series of both conceptual and practical steps, all of which are likely to bias the results of any analysis. These biases are reviewed and exemplified on the basis of the published literature, and remedial procedures are provided not only for geometric and stochastic fractal analyses, but also for the more complicated multifractal analysis. The concept of multifractals is finally introduced as a direct, objective and quantitative tool to identify models of motion behavior, such as Brownian motion, fractional Brownian motion, ballistic motion, Lévy flight/walk and multifractal random walk. I finally briefly review the state of this emerging field in zooplankton behavioral research.
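As a concrete illustration of the geometric approach, a box-counting estimate of D for a 2-D track, assuming NumPy; the scales and the Brownian-like test trajectory are arbitrary choices, subject to exactly the biases the review warns about:

```python
import numpy as np

def box_counting_dimension(xy, scales):
    """Estimate D as the slope of log N(eps) versus log(1/eps)."""
    counts = []
    for eps in scales:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in xy}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(size=(5000, 2)), axis=0)   # Brownian-like track
print(box_counting_dimension(track, scales=[1, 2, 4, 8, 16]))
```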
Crowdsourced Curriculum Development for Online Medical Education.
Shappell, Eric; Chan, Teresa M; Thoma, Brent; Trueger, N Seth; Stuntz, Bob; Cooney, Robert; Ahn, James
2017-12-08
In recent years online educational content, efforts at quality appraisal, and integration of online material into institutional teaching initiatives have increased. However, medical education has yet to develop large-scale online learning centers. Crowd-sourced curriculum development may expedite the realization of this potential while providing opportunities for innovation and scholarship. This article describes the current landscape, best practices, and future directions for crowdsourced curriculum development using Kern's framework for curriculum development and the example topic of core content in emergency medicine. A scoping review of online educational content was performed by a panel of subject area experts for each step in Kern's framework. Best practices and recommendations for future development for each step were established by the same panel using a modified nominal group consensus process. The most prevalent curriculum design steps were (1) educational content and (2) needs assessments. Identified areas of potential innovation within these steps included targeting gaps in specific content areas and developing underrepresented instructional methods. Steps in curriculum development without significant representation included (1) articulation of goals and objectives and (2) tools for curricular evaluation. By leveraging the power of the community, crowd-sourced curriculum development offers a mechanism to diffuse the burden associated with creating comprehensive online learning centers. There is fertile ground for innovation and scholarship in each step along the continuum of curriculum development. Realization of this paradigm's full potential will require individual developers to strongly consider how their contributions will align with the work of others.
The discovery of the sub-threshold currents M and Q/H in central neurons.
Adams, Paul
2016-08-15
The history, content and consequences of the highly-cited 1982 Brain Research paper by Halliwell and Adams are summarized. The paper pioneered the use of the single-electrode voltage clamp in mammalian brain slices, described 2 novel sub-threshold voltage-dependent ionic currents, IM and IQ/H, and suggested that cholinergic inputs "enabled" pyramidal cell firing in response to conventional synaptic input, the first example of central neuromodulation. The paper, published in Brain Research to give the first author appropriate importance, heralded an ongoing tidal wave of quantitative electrophysiology in mammalian central neurons. Voltage-clamp analysis of muscarinic excitation in hippocampal neurons: Pyramidal cells in the CA1 field of guinea pig hippocampal slices were voltage-clamped using a single microelectrode, at 23-30°C. Small inwardly relaxing currents triggered by step hyperpolarizations from holding potentials of -80 to -40mV were investigated. Inward relaxations occurring for negative steps between -40mV and -70mV resembled M-currents of sympathetic ganglion cells: they were abolished by addition of carbachol, muscarine or bethanechol, as well as by 1mM barium; the relaxations appeared to invert at around -80mV; they became faster at more negative potentials; and the inversion potential was shifted positively by raising external K(+) concentration. Inward relaxations triggered by steps negative to -80mV, in contrast, appeared to reflect passage of another current species, which has been labeled IQ. Thus IQ did not invert negative to -80mV, it was insensitive to muscarinic agonists or to barium, and it was blocked by 0.5-3mM cesium (which does not block IM). Turn-on of IQ causes the well known droop in the hyperpolarizing electrotonic potential in these cells. The combined effects of IQ and IM make the steady-state current-voltage relation of CA1 cells slightly sigmoidal around rest potential. It is suggested that activation of cholinergic septal inputs to the hippocampus facilitates repetitive firing of pyramidal cells by turning off the M-conductance, without much change in the resting potential of the cell. © 1982. This article is part of a Special Issue entitled SI:50th Anniversary Issue. Copyright © 2016. Published by Elsevier B.V.
Zonation of Landslide-Prone Using Microseismic Method and Slope Analysis in Margoyoso, Magelang
NASA Astrophysics Data System (ADS)
Aditya, Muchamad Reza; Fauqi Romadlon, Arriqo’; Agra Medika, Reymon; Alfontius, Yosua; Delva Jannet, Zukhruf; Hartantyo, Eddy
2018-04-01
Margoyoso Village, Salaman Sub-district, Magelang Regency, Central Java is one of the villages located in a landslide-prone area. The steep slopes and land use in this village are cause for concern: fractures 5 cm wide and 50 m long have appeared, including in residents' homes. Although the local government has established a disaster response organization, the village still lacks adequate information about the landslide-prone areas. We therefore conducted research combining a geophysical method with geotechnical analysis to minimize the danger of landslides. The geophysical method used in this research was the microseismic method, complemented by slope stability analysis. The microseismic measurements and slope stability analysis at Margoyoso village were a step towards delineating the boundary of the landslide-prone zone. The results of this research indicate that areas with landslide potential have low peak ground acceleration values, ranging from 36 gal to 46 gal. The slope stability measurements indicate that slopes with angles between 55° and 78° are potential landslide slopes, because the soil in this village is very loose and therefore moves easily.
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne diseases for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards posed by the pathogens previously found in Nham, sodium nitrite and metal were identified as the chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points. These steps are the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step. The critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm. This level is high enough to control Clostridium botulinum but does not cause chemical hazards to the consumer. The physical hazard from metal clips could be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham could be reduced in the fermentation process. The critical limit of the pH of Nham was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.
Kobayashi, Toshiki; Leung, Aaron K L; Akazawa, Yasushi; Hutchins, Stephen W
2016-01-01
The Berg balance scale (BBS) is commonly used to assess balancing ability in patients with stroke. The BBS may be a good candidate for clinical assessment prior to orthotic intervention, if it correlates well with outcome measures such as gait speed. The purpose of this study was to investigate the correlation between the BBS measured prior to walking with an ankle-foot orthosis (AFO) and specific temporal-spatial parameters of gait when walking with an AFO donned. Eight individuals with chronic stroke participated in this study. Balancing ability was assessed using the BBS, while temporal-spatial parameters of gait (gait speed, bilateral step length, stride length and step width) were measured using a three-dimensional motion analysis system. The correlations between the BBS and gait parameters were investigated using a non-parametric Kendall's Tau (τ) correlation analysis. The BBS showed correlations with gait speed (τ = 0.64, p < 0.05), the step length of the affected side (τ = 0.74, p < 0.05), and the stride length (τ = 0.64, p < 0.05). Assessment of the BBS prior to AFO prescription may potentially help clinicians to estimate the gait speed achievable following orthotic intervention in patients with stroke. Implications for Rehabilitation: Assessment of the BBS prior to AFO prescription may help orthotists to estimate the gait speed following an orthotic intervention in patients with stroke. Assessment of the BBS prior to AFO prescription may help orthotists to understand overall balance and postural control abilities in patients with stroke. A larger scale multifactorial analysis is warranted to confirm the results of this pilot study.
Pye, Stephen R; Sheppard, Thérèse; Joseph, Rebecca M; Lunt, Mark; Girard, Nadyne; Haas, Jennifer S; Bates, David W; Buckeridge, David L; van Staa, Tjeerd P; Tamblyn, Robyn; Dixon, William G
2018-04-17
Real-world data for observational research commonly require formatting and cleaning prior to analysis. Data preparation steps are rarely reported adequately and are likely to vary between research groups. Variation in methodology could potentially affect study outcomes. This study aimed to develop a framework to define and document drug data preparation and to examine the impact of different assumptions on results. An algorithm for processing prescription data was developed and tested using data from the Clinical Practice Research Datalink (CPRD). The impact of varying assumptions was examined by estimating the association between 2 exemplar medications (oral hypoglycaemic drugs and glucocorticoids) and cardiovascular events after preparing multiple datasets derived from the same source prescription data. Each dataset was analysed using Cox proportional hazards modelling. The algorithm included 10 decision nodes and 54 possible unique assumptions. Over 11 000 possible pathways through the algorithm were identified. In both exemplar studies, similar hazard ratios and standard errors were found for the majority of pathways; however, certain assumptions had a greater influence on results. For example, in the hypoglycaemic analysis, choosing a different variable to define prescription end date altered the hazard ratios (95% confidence intervals) from 1.77 (1.56-2.00) to 2.83 (1.59-5.04). The framework offers a transparent and efficient way to perform and report drug data preparation steps. Assumptions made during data preparation can impact the results of analyses. Improving transparency regarding drug data preparation would increase the repeatability, reproducibility, and comparability of published results. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
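As a minimal sketch of how such sensitivity to data-preparation assumptions can be probed, assuming a prescriptions table with hypothetical column names (these, and the two end-date rules, are illustrative and not the CPRD schema), using the lifelines implementation of Cox proportional hazards:

# Sensitivity of hazard ratios to prescription end-date assumptions (illustrative).
# Column names and the two end-date rules are assumptions, not the CPRD schema.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "followup_days": rng.exponential(365, n).round() + 1,
    "cv_event": rng.integers(0, 2, n),
    # exposure when end date = last prescription date + days' supply
    "exposed_rule_a": rng.integers(0, 2, n),
})
# Rule B: a stricter end-date assumption reclassifies some person-time as unexposed
df["exposed_rule_b"] = ((df["exposed_rule_a"] == 1) & (rng.random(n) > 0.2)).astype(int)

for rule in ["exposed_rule_a", "exposed_rule_b"]:
    cph = CoxPHFitter()
    cph.fit(df[["followup_days", "cv_event", rule]],
            duration_col="followup_days", event_col="cv_event")
    print(rule, "HR =", np.exp(cph.params_[rule]).round(2))

Rerunning the same survival model over each pathway through the decision algorithm, as above, is one way to expose which assumptions the hazard ratio is actually sensitive to.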
Risk Based Reliability Centered Maintenance of DOD Fire Protection Systems
1999-01-01
Excerpted contents: 2.2.3 Failure Mode and Effect Analysis (FMEA); 2.2.4 Failure Mode Risk Characterization. The analysis proceeds as: Step 2 - definition of system functions and functional failures; Step 3 - failure mode and effect analysis (FMEA); Step 4 - failure mode risk characterization. The Interface Location column identifies the location where the FMEA of the fire protection system began or stopped.
Impact of gate engineering in enhancement mode n++GaN/InAlN/AlN/GaN HEMTs
NASA Astrophysics Data System (ADS)
Adak, Sarosij; Swain, Sanjit Kumar; Rahaman, Hafizur; Sarkar, Chandan Kumar
2016-12-01
This paper illustrates the effect of gate material engineering on the performance of enhancement-mode n++GaN/InAlN/AlN/GaN high electron mobility transistors (HEMTs). A comparative analysis of key device parameters is presented for the Triple Material Gate (TMG), Dual Material Gate (DMG) and Single Material Gate (SMG) HEMT structures with the same device dimensions. The simulation results show a significant improvement in key parameters such as drain current (Id), transconductance (gm), cut-off frequency (fT), RF current gain, maximum cut-off frequency (fmax) and RF power gain of the gate-material-engineered devices with respect to the SMG normally-off n++GaN/InAlN/AlN/GaN HEMT. This improvement is due to the perceivable step in the surface potential along the channel, which effectively screens the source side of the channel from drain potential variations in the gate-engineered devices. The analysis suggests that the proposed TMG and DMG enhancement-mode n++GaN/InAlN/AlN/GaN HEMT structures can be considered potential devices for future high-speed, microwave and digital applications.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
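As a sketch of the kinetic deconvolution idea in standard multistep-kinetics notation (the symbols here follow common usage rather than the paper's exact formulation), the overall rate is modeled as a contribution-weighted sum of independent kinetic equations for the exothermic and endothermic steps:

\[
\frac{d\alpha}{dt} = \sum_{i=1}^{2} c_i \, A_i \exp\!\left(-\frac{E_{a,i}}{RT}\right) f_i(\alpha_i),
\qquad \sum_i c_i = 1, \qquad \alpha = \sum_i c_i \alpha_i ,
\]

where c_i is the contribution of step i to the overall mass loss or heat, A_i and E_{a,i} are the Arrhenius parameters, and f_i(α_i) is the kinetic model function of each step; these are the quantities optimized in the kinetic deconvolution fit.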
An Ejector Air Intake Design Method for a Novel Rocket-Based Combined-Cycle Rocket Nozzle
NASA Astrophysics Data System (ADS)
Waung, Timothy S.
Rocket-based combined-cycle (RBCC) vehicles have the potential to reduce launch costs through the use of several different air breathing engine cycles, which reduce fuel consumption. The rocket-ejector cycle, in which air is entrained into an ejector section by the rocket exhaust, is used at flight speeds below Mach 2. This thesis develops a design method for an air intake geometry around a novel RBCC rocket nozzle design for the rocket-ejector engine cycle. The design method consists of a geometry creation step, in which a three-dimensional intake geometry is generated, and a simple flow analysis step, which predicts the air intake mass flow rate. The air intake geometry is created using the rocket nozzle geometry and eight primary input parameters. The input parameters are selected to give the user significant control over the air intake shape. The flow analysis step uses an inviscid panel method and an integral boundary layer method to estimate the air mass flow rate through the intake geometry. Intake mass flow rate is used as a performance metric since it directly affects the amount of thrust a rocket-ejector can produce. The design method results for the air intake, operating at several points along the subsonic portion of the Ariane 4 flight profile, underpredict mass flow rate by up to 8.6% when compared to three-dimensional computational fluid dynamics simulations of the same air intake.
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-12-01
The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
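A minimal, self-contained sketch of the task structure this abstract analyses, with a purely model-free learner (the transition/reward probabilities and learning parameters are illustrative choices, not the authors' exact task, and the reward probabilities are fixed rather than drifting):

# Minimal two-step task simulation with a purely model-free Q-learner (illustrative).
import numpy as np

rng = np.random.default_rng(1)
common_p = 0.8                      # P(first-step action leads to its "common" state)
reward_p = np.array([0.8, 0.2])     # reward probability of the two second-step states
alpha, beta, n_trials = 0.1, 3.0, 10000

q = np.zeros(2)                     # model-free values of the two first-step actions
stay = np.zeros((2, 2))             # stay counts indexed by [common?, rewarded?]
total = np.zeros((2, 2))
prev = None

for _ in range(n_trials):
    p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))   # softmax choice between actions
    a = int(rng.random() < p1)
    c = int(rng.random() < common_p)                   # common or rare transition
    s = a if c else 1 - a                              # second-step state reached
    r = int(rng.random() < reward_p[s])
    if prev is not None:                               # stay analysis vs previous trial
        pa, pc, pr = prev
        stay[pc, pr] += int(a == pa)
        total[pc, pr] += 1
    q[a] += alpha * (r - q[a])                         # update ignores the transition model
    prev = (a, c, r)

# A model-free agent shows mainly a reward effect: stay probability rises after
# reward whether the preceding transition was common or rare, whereas a
# model-based agent would show a transition-by-reward interaction.
print("P(stay)[transition, reward]:\n", (stay / total).round(3))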
Tang, Y; Stephenson, J L; Othmer, H G
1996-01-01
We study the models for calcium (Ca) dynamics developed in earlier studies, in each of which the key component is the kinetics of intracellular inositol-1,4,5-trisphosphate-sensitive Ca channels. After rapidly equilibrating steps are eliminated, the channel kinetics in these models are represented by a single differential equation that is linear in the state of the channel. In the reduced kinetic model, the graph of the steady-state fraction of conducting channels as a function of log10(Ca) is a bell-shaped curve. Dynamically, a step increase in inositol-1,4,5-trisphosphate induces an incremental increase in the fraction of conducting channels, whereas a step increase in Ca can either potentiate or inhibit channel activation, depending on the Ca level before and after the increase. The relationships among these models are discussed, and experimental tests to distinguish between them are given. Under certain conditions the models for intracellular calcium dynamics reduce to the singularly perturbed form ε dx/dτ = f(x, y, p), dy/dτ = g(x, y, p). Phase-plane analysis is applied to a generic form of these simplified models to show how different types of Ca response, such as excitability, oscillations, and a sustained elevation of Ca, can arise. The generic model can also be used to study frequency encoding of hormonal stimuli, to determine the conditions for stable traveling Ca waves, and to understand the effect of channel properties on the wave speed.
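A small sketch of how such a singularly perturbed fast-slow system can be integrated numerically for phase-plane inspection (f and g below are generic FitzHugh-Nagumo-like kinetics chosen for illustration, not the specific channel models of the paper):

# Integrate a generic singularly perturbed system eps*dx/dt = f(x, y), dy/dt = g(x, y).
# f and g are illustrative (FitzHugh-Nagumo-like), not the paper's channel kinetics.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
f = lambda x, y: x - x**3 / 3.0 - y + 0.5   # fast variable: cubic nullcline plus drive
g = lambda x, y: x + 0.7 - 0.8 * y          # slow variable: linear nullcline

def rhs(t, z):
    x, y = z
    return [f(x, y) / eps, g(x, y)]

sol = solve_ivp(rhs, (0.0, 100.0), [-1.0, -0.5], max_step=0.05)
print("final state:", np.round(sol.y[:, -1], 3))  # this f, g pair yields relaxation oscillations

Plotting sol.y[0] against sol.y[1] together with the two nullclines f = 0 and g = 0 reproduces the kind of phase-plane picture the abstract describes, with fast horizontal jumps and slow drift along the cubic nullcline.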
Knowledge Management Orientation: An Innovative Perspective to Hospital Management.
Ghasemi, Matina; Ghadiri Nejad, Mazyar; Bagzibagli, Kemal
2017-12-01
By considering innovation as a new project in hospitals, all of the standard project management steps should be followed in its execution. This study investigated the validation of a new set of measures that provide a procedure for knowledge management-oriented innovation to enrich the hospital management system. The relation between innovation and all the knowledge management areas, as the main constructs of project management, was illustrated by referring to standard project management steps and previous studies. Through consultations and meetings with a committee of professional project managers, a questionnaire was developed to measure ten knowledge management areas in a hospital's innovation process. Additionally, a group of expert hospital managers was invited to comment on the applicability of the questionnaire, considering whether the items are practically measurable in hospitals. Close-ended, Likert-type scale items, consisting of ten sections, were developed based on the project management body of knowledge through the Delphi technique. The instrument enables managers to evaluate whether their organization follows knowledge management standards in the innovation process. In a pilot study, confirmatory factor analysis and exploratory factor analysis were conducted to ensure the validity and reliability of the measurement items. The developed items have the potential to help hospital managers deliver new products and services successfully, based on standard procedures in their organization. In all innovation processes, the knowledge management areas and their standard steps assist hospital managers through a new tool in questionnaire format.
Fatigue Life Methodology for Bonded Composite Skin/Stringer Configurations
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isabelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2001-01-01
A methodology is presented for determining the fatigue life of composite structures based on fatigue characterization data and geometric nonlinear finite element (FE) analyses. To demonstrate the approach, predicted results were compared to fatigue tests performed on specimens which represented a tapered composite flange bonded onto a composite skin. In a first step, tension tests were performed to evaluate the debonding mechanisms between the flange and the skin. In a second step, a 2D FE model was developed to analyze the tests. To predict matrix cracking onset, the relationship between the tension load and the maximum principal stresses transverse to the fiber direction was determined through FE analysis. Transverse tension fatigue life data were used to generate an onset fatigue life P-N curve for matrix cracking. The resulting prediction was in good agreement with data from the fatigue tests. In a third step, a fracture mechanics approach based on FE analysis was used to determine the relationship between the tension load and the critical energy release rate. Mixed mode energy release rate fatigue life data were used to create a fatigue life onset G-N curve for delamination. The resulting prediction was in good agreement with data from the fatigue tests. Further, the prediction curve for cumulative life to failure was generated from the previous onset fatigue life curves. The results showed that the methodology offers a significant potential to predict the cumulative fatigue life of composite structures.
Tomei, M Concetta; Mosca Angelucci, Domenica; Levantesi, Caterina
2016-03-01
Sequential anaerobic-aerobic digestion has been demonstrated to be effective for enhanced sludge stabilization, in terms of increased solid reduction and improvement of sludge dewaterability. In this study, we propose a modified version of the sequential anaerobic-aerobic digestion process by operating the aerobic step under mesophilic conditions (T=37 °C), in order to improve the aerobic degradation kinetics of soluble and particulate chemical oxygen demand (COD). Process performance has been assessed in terms of "classical parameters" such as volatile solids (VS) removal, biogas production, COD removal, nitrogen species, and polysaccharide and protein fate. The aerobic step was operated under intermittent aeration to achieve nitrogen removal. Aerobic mesophilic conditions consistently increased VS removal, providing 32% additional removal vs. 20% at 20 °C. Similar results were obtained for nitrogen removal, increasing from 64% up to 99% at the higher temperature. Improved sludge dewaterability was also observed with a capillary suction time decrease of ~50% during the mesophilic aerobic step. This finding may be attributable to the decreased protein content in the aerobic digested sludge. The post-aerobic digestion exerted a positive effect on the reduction of microbial indicators while no consistent improvement of hygienization related to the increased temperature was observed. The techno-economic analysis of the proposed digestion layout showed a net cost saving for sludge disposal estimated in the range of 28-35% in comparison to the single-phase anaerobic digestion. Copyright © 2015 Elsevier B.V. All rights reserved.
LANDSAT data for coastal zone management. [New Jersey
NASA Technical Reports Server (NTRS)
Mckenzie, S.
1981-01-01
The lack of adequate, current data on land and water surface conditions in New Jersey led to the search for better data collections and analysis techniques. Four-channel MSS data of Cape May County and access to the OSER computer interpretation system were provided by NASA. The spectral resolution of the data was tested and a surface cover map was produced by going through the steps of supervised classification. Topics covered include classification; change detection and improvement of spectral and spatial resolution; merging LANDSAT and map data; and potential applications for New Jersey.
Hydrothermal synthesis of tungsten doped tin dioxide nanocrystals
NASA Astrophysics Data System (ADS)
Zhou, Cailong; Li, Yufeng; Chen, Yiwen; Lin, Jing
2018-01-01
Tungsten doped tin dioxide (WTO) nanocrystals were synthesized through a one-step hydrothermal method. The structure, composition and morphology of the WTO nanocrystals were characterized by x-ray diffraction, x-ray photoelectron spectroscopy, energy dispersive x-ray spectroscopy, UV-vis diffuse reflectance spectra, zeta potential analysis and high-resolution transmission electron microscopy. The results show that the as-prepared WTO nanocrystals had a rutile-type structure with sizes near 13 nm. Compared with undoped tin dioxide nanocrystals, the WTO nanocrystals possessed better dispersity in the ethanol phase and formed a transparent sol.
Using the CABLES model to assess and minimize risk in research: control group hazards.
Koocher, G P
2002-01-01
CABLES is both an acronym and metaphor for conceptualizing research participation risk by considering 6 distinct domains in which risks of harm to research participants may exist: cognitive, affective, biological, legal, economic, and social/cultural. These domains are described and illustrated, along with suggestions for minimizing or eliminating the potential hazards to human participants in biomedical and behavioral science research. Adoption of a thoughtful ethical analysis addressing all 6 CABLES strands in designing research provides a strong protective step toward safeguarding and promoting the well-being of study participants.
Kinetics of the electric double layer formation modelled by the finite difference method
NASA Astrophysics Data System (ADS)
Valent, Ivan
2017-11-01
Dynamics of the electric double layer formation in 100 mM NaCl solution for sudden potential steps of 10 and 20 mV was simulated using the Poisson-Nernst-Planck theory and the VLUGR2 solver for partial differential equations. The approach was verified by comparing the obtained steady-state solution with the available exact solution. The simulations allowed for detailed analysis of the relaxation processes of the individual ions and the electric potential. Some computational aspects of the problem are discussed.
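For reference, the Poisson-Nernst-Planck system referred to here couples a drift-diffusion equation for each ionic concentration c_i to the Poisson equation for the potential φ (standard form; the symbols follow common usage rather than the paper's notation):

\[
\frac{\partial c_i}{\partial t} = \nabla \cdot \left[ D_i \left( \nabla c_i + \frac{z_i F}{RT} \, c_i \nabla \phi \right) \right],
\qquad
-\nabla \cdot (\varepsilon \nabla \phi) = F \sum_i z_i c_i ,
\]

with D_i the diffusion coefficient and z_i the valence of species i; for a 1:1 electrolyte such as NaCl, i runs over Na⁺ and Cl⁻.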
Xylan - A potential contaminant for lunar samples and Antarctic meteorites
NASA Astrophysics Data System (ADS)
Wright, I. P.; Russell, S. S.; Boyd, S. R.; Meyer, C.; Pillinger, C. T.
The possibility that lunar samples have been contaminated by the proprietary lubricant paint known as Xylan, which has been applied to screw threads in dry-N sample processing cabinets at NASA JSC, is considered. From a sample analysis using sealed-tube and stepped combustion, it is argued that the unexpectedly high concentration of organic materials found in EET A79001 is not due to Xylan contamination. It is considered unlikely that previous C and N analyses of lunar samples have been affected by the introduction of Xylan.
Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N
2015-06-01
The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.
Mishori, Ranit; Singh, Lisa Oberoi; Levy, Brendan; Newport, Calvin
2014-04-14
Twitter is becoming an important tool in medicine, but there is little information on Twitter metrics. In order to recommend best practices for information dissemination and diffusion, it is important to first study and analyze the networks. This study describes the characteristics of four medical networks, analyzes their theoretical dissemination potential, their actual dissemination, and the propagation and distribution of tweets. Open Twitter data was used to characterize four networks: the American Medical Association (AMA), the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), and the American College of Physicians (ACP). Data were collected between July 2012 and September 2012. Visualization was used to understand the follower overlap between the groups. Actual flow of the tweets for each group was assessed. Tweets were examined using Topsy, a Twitter data aggregator. The theoretical information dissemination potential for the groups is large. A collective community is emerging, where large percentages of individuals are following more than one of the groups. The overlap across groups is small, indicating a limited amount of community cohesion and cross-fertilization. The AMA followers' network is not as active as the other networks. The AMA posted the largest number of tweets while the AAP posted the fewest. The number of retweets for each organization was low indicating dissemination that is far below its potential. To increase the dissemination potential, medical groups should develop a more cohesive community of shared followers. Tweet content must be engaging to provide a hook for retweeting and reaching potential audience. Next steps call for content analysis, assessment of the behavior and actions of the messengers and the recipients, and a larger-scale study that considers other medical groups using Twitter.
Electrical properties associated with wide intercellular clefts in rabbit Purkinje fibres.
Colatsky, T J; Tsien, R W
1979-01-01
1. Rabbit Purkinje fibres were studied using micro-electrode recordings of electrical activity or a two-micro-electrode voltage clamp. Previous morphological work had suggested that these preparations offer structural advantages for the analysis of ionic permeability mechanisms. 2. Viable preparations could be obtained consistently by exposure to a K glutamate Tyrode solution during excision and recovery. In NaCl Tyrode solution, the action potential showed a large overshoot and fully developed plateau, but no pacemaker depolarization at negative potentials. 3. The passive electrical properties were consistent with morphological evidence for the accessibility of cleft membranes within the cell bundle. Electrotonic responses to intracellular current steps showed the behaviour expected for a simple leaky capacitative cable. Capacitative current transients under voltage clamp were changed very little by an eightfold reduction in the external solution conductivity. 4. Slow current changes attributable to K depletion were small compared to those found in other cardiac preparations. The amount of depletion was close to that predicted by a cleft model which assumed free K diffusion in 1 micron clefts. 5. Step depolarizations over the plateau range of potentials evoked a slow inward current which was resistant to tetrodotoxin but blocked by D600. 6. Strong depolarizations to potentials near 0 mV elicited a transient outward current and a slowly activating late outward current. Both components resembled currents found in sheep or calf Purkinje fibres. 7. These experiments support previous interpretations of slow plateau currents in terms of genuine permeability changes. The rabbit Purkinje fibre may allow various ionic channels to be studied with relatively little interference from radial non-uniformities in membrane potential or ion concentration. PMID:469754
Several steps/day indicators predict changes in anthropometric outcomes: HUB City Steps.
Thomson, Jessica L; Landry, Alicia S; Zoellner, Jamie M; Tudor-Locke, Catrine; Webster, Michael; Connell, Carol; Yadrick, Kathy
2012-11-15
Walking for exercise remains the most frequently reported leisure-time activity, likely because it is simple, inexpensive, and easily incorporated into most people's lifestyle. Pedometers are simple, convenient, and economical tools that can be used to quantify step-determined physical activity. Few studies have attempted to define the direct relationship between dynamic changes in pedometer-determined steps/day and changes in anthropometric and clinical outcomes. Hence, the objective of this secondary analysis was to evaluate the utility of several descriptive indicators of pedometer-determined steps/day for predicting changes in anthropometric and clinical outcomes using data from a community-based walking intervention, HUB City Steps, conducted in a southern, African American population. A secondary aim was to evaluate whether treating steps/day data for implausible values affected the ability of these data to predict intervention-induced changes in clinical and anthropometric outcomes. The data used in this secondary analysis were collected in 2010 from 269 participants in a six-month walking intervention targeting a reduction in blood pressure. Throughout the intervention, participants submitted weekly steps/day diaries based on pedometer self-monitoring. Changes (six-month minus baseline) in anthropometric (body mass index, waist circumference, percent body fat [%BF], fat mass) and clinical (blood pressure, lipids, glucose) outcomes were evaluated. Associations between steps/day indicators and changes in anthropometric and clinical outcomes were assessed using bivariate tests and multivariable linear regression analysis which controlled for demographic and baseline covariates. Significant negative bivariate associations were observed between steps/day indicators and the majority of anthropometric and clinical outcome changes (r = -0.3 to -0.2: P < 0.05). After controlling for covariates in the regression analysis, only the relationships between steps/day indicators and changes in anthropometric (not clinical) outcomes remained significant. For example, a 1,000 steps/day increase in intervention mean steps/day resulted in a 0.1% decrease in %BF. Results for the three pedometer datasets (full, truncated, and excluded) were similar and yielded few meaningful differences in interpretation of the findings. Several descriptive indicators of steps/day may be useful for predicting anthropometric outcome changes. Further, manipulating steps/day data to address implausible values has little overall effect on the ability to predict these anthropometric changes.
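A compact sketch of the covariate-adjusted regression described here, using the statsmodels formula API on simulated data (the column names and covariates are hypothetical placeholders for the demographic and baseline adjustments, and the assumed effect size simply mirrors the reported -0.1 %BF per 1,000 steps/day):

# Regress change in % body fat on mean daily steps, adjusting for covariates (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 269
df = pd.DataFrame({
    "mean_steps_k": rng.normal(6, 2, n),      # intervention mean steps/day, in thousands
    "age": rng.integers(25, 65, n),
    "baseline_bf": rng.normal(38, 6, n),
})
# Simulated outcome with an assumed -0.1 %BF change per 1,000 steps/day
df["delta_bf"] = -0.1 * df["mean_steps_k"] + 0.01 * df["age"] + rng.normal(0, 1, n)

model = smf.ols("delta_bf ~ mean_steps_k + age + baseline_bf", data=df).fit()
print(model.params.round(3))   # coefficient on mean_steps_k recovers about -0.1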
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.
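For context, the ingredient shared by these single-step models is the blended relationship matrix H, whose inverse (in the standard construction; notation follows common usage) combines the pedigree relationship matrix A with the genomic relationship matrix G for the genotyped subset (indexed 2):

\[
\mathbf{H}^{-1} = \mathbf{A}^{-1} +
\begin{pmatrix}
\mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1}
\end{pmatrix},
\]

so that ungenotyped animals borrow genomic information only through their pedigree ties to genotyped relatives, which is consistent with the finding above that training-validation relationships matter more for ungenotyped animals.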
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in the Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, UCI Machine Learning Repository, and handwriting digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparing with the linear dimensionality reduction step in FLDA and its several representative extensions.
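A sketch of the first criterion in common notation (the exact weighting used in the paper may differ): with c classes and pairwise KL divergences D_{ij} between the projected class densities, maximizing the geometric mean is equivalent to maximizing the arithmetic mean of the log divergences,

\[
W^{*} = \arg\max_{W} \left( \prod_{1 \le i < j \le c} D_{ij}(W) \right)^{\frac{2}{c(c-1)}}
      = \arg\max_{W} \frac{2}{c(c-1)} \sum_{1 \le i < j \le c} \log D_{ij}(W),
\]

which, unlike an arithmetic mean of the divergences themselves, prevents a few well-separated class pairs from dominating the objective and thereby reduces the class-merging problem noted above.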
NASA Astrophysics Data System (ADS)
Weisz, Elisabeth; Smith, William L.; Smith, Nadia
2013-06-01
The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.
Kinetic Landscape of a Peptide Bond-Forming Prolyl Oligopeptidase
2017-01-01
Prolyl oligopeptidase B from Galerina marginata (GmPOPB) has recently been discovered as a peptidase capable of breaking and forming peptide bonds to yield a cyclic peptide. Despite the relevance of prolyl oligopeptidases in human biology and disease, a kinetic analysis pinpointing rate-limiting steps for a member of this enzyme family is not available. Macrocyclase enzymes are currently exploited to produce cyclic peptides with potential therapeutic applications. Cyclic peptides are promising druglike molecules because of their stability and conformational rigidity. Here we describe an in-depth kinetic characterization of a prolyl oligopeptidase acting as a macrocyclase enzyme. By combining steady-state and pre-steady-state kinetics, we propose a kinetic sequence in which a step after macrocyclization limits steady-state turnover. Additionally, product release is ordered, where the cyclic peptide departs first followed by the peptide tail. Dissociation of the peptide tail is slow and significantly contributes to the turnover rate. Furthermore, trapping of the enzyme by the peptide tail becomes significant beyond initial rate conditions. The presence of a burst of product formation and a large viscosity effect further support the rate-limiting nature of a physical step occurring after macrocyclization. This is the first detailed description of the kinetic sequence of a macrocyclase enzyme from this class. GmPOPB is among the fastest macrocyclases described to date, and this work is a necessary step toward designing broad-specificity efficient macrocyclases. PMID:28332820
Salleh, Mohd Zaki; Teh, Lay Kek; Lee, Lian Shien; Ismet, Rose Iszati; Patowary, Ashok; Joshi, Kandarp; Pasha, Ayesha; Ahmed, Azni Zain; Janor, Roziah Mohd; Hamzah, Ahmad Sazali; Adam, Aishah; Yusoff, Khalid; Hoh, Boon Peng; Hatta, Fazleen Haslinda Mohd; Ismail, Mohamad Izwan; Scaria, Vinod; Sivasubbu, Sridhar
2013-01-01
With a higher throughput and lower cost in sequencing, second generation sequencing technology has immense potential for translation into clinical practice and the realization of pharmacogenomics based patient care. The systematic analysis of whole genome sequences to assess patient to patient variability in pharmacokinetic and pharmacodynamic responses towards drugs would be the next step in future medicine, in line with the vision of personalized medicine. Genomic DNA obtained from a 55-year-old, self-declared healthy, anonymous male of Malay descent was sequenced. The subject's mother died of lung cancer and the father had a history of schizophrenia and died at the age of 65. A systematic, intuitive computational workflow/pipeline integrating a custom algorithm in tandem with large datasets of variant annotations and gene functions was developed for genetic variations with pharmacogenomics impact. A comprehensive pathway map of drug transport, metabolism and action was used as a template to map non-synonymous variations with potential functional consequences. Over 3 million known variations and 100,898 novel variations in the Malay genome were identified. Further in-depth pharmacogenetics analysis revealed a total of 607 unique variants in 563 proteins, with the eventual identification of 4 drug transport genes, 2 drug metabolizing enzyme genes and 33 target genes harboring deleterious SNVs involved in pharmacological pathways, which could have a potential role in clinical settings. The current study successfully unravels the potential of personal genome sequencing in understanding the functionally relevant variations with potential influence on drug transport, metabolism and differential therapeutic outcomes. These will be essential for realizing personalized medicine through the use of a comprehensive computational pipeline for systematic data mining and analysis.
2011-01-01
Arsenic is a toxic element that creates several problems in human beings, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment so that preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of data for such volatile elements depend upon the measurement of uncertainty of each step involved, from sampling to analysis. Analytical results quantifying uncertainty give a measure of the confidence level of the concerned laboratory. The main objective of this study was therefore to determine the arsenic content in SPM samples with an uncertainty budget and to identify the various potential sources of uncertainty that affect the results. Keeping these facts in mind, we selected seven diverse sites of Delhi (the national capital of India) for quantification of the arsenic content of SPM samples with an uncertainty budget, from sampling by HVS to analysis by Atomic Absorption Spectrometer-Hydride Generator (AAS-HG). Many steps are involved in the measurement of arsenic in SPM samples, from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It was found that the final results depend mostly on the uncertainty in measurement due to repeatability, the final volume prepared for analysis, the weighing balance, and sampling by HVS. After analysis of the data from the seven diverse sites of Delhi, it was concluded that during the period from 31st Jan. 2008 to 7th Feb. 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m³ at the 95% confidence level (k = 2). PMID:21466671
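The uncertainty budget described here follows the standard GUM/EURACHEM propagation: each input quantity x_i (sampled air volume, weighing, final volume, repeatability, ...) contributes a standard uncertainty u(x_i), combined and then expanded as

\[
u_c(y) = \sqrt{ \sum_i \left( \frac{\partial y}{\partial x_i} \right)^{2} u^{2}(x_i) },
\qquad U = k \, u_c(y),
\]

with coverage factor k = 2 giving the approximately 95% confidence level quoted for the arsenic results.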
Jordan, Susan; Gabe-Walters, Marie Ellenor; Watkins, Alan; Humphreys, Ioan; Newson, Louise; Snelgrove, Sherrill; Dennis, Michael S
2015-01-01
Background: People with dementia are susceptible to adverse drug reactions (ADRs). However, they are not always closely monitored for potential problems relating to their medicines: structured nurse-led ADR Profiles have the potential to address this care gap. We aimed to assess the number and nature of clinical problems identified and addressed and changes in prescribing following introduction of nurse-led medicines’ monitoring. Design: Pragmatic cohort stepped-wedge cluster Randomised Controlled Trial (RCT) of structured nurse-led medicines’ monitoring versus usual care. Setting: Five UK private sector care homes. Participants: 41 service users, taking at least one antipsychotic, antidepressant or anti-epileptic medicine. Intervention: Nurses completed the West Wales ADR (WWADR) Profile for Mental Health Medicines with each participant according to trial step. Outcomes: Problems addressed and changes in medicines prescribed. Data Collection and Analysis: Information was collected from participants’ notes before randomisation and after each of five monthly trial steps. The impact of the Profile on problems found, actions taken and reduction in mental health medicines was explored in multivariate analyses, accounting for data collection step and site. Results: Five of 10 sites and 43 of 49 service users approached participated. Profile administration increased the number of problems addressed from a mean of 6.02 [SD 2.92] to 9.86 [4.48], effect size 3.84, 95% CI 2.57–4.11, P <0.001. For example, pain was more likely to be treated (adjusted Odds Ratio [aOR] 3.84, 1.78–8.30), and more patients attended dentists and opticians (aOR 52.76 [11.80–235.90] and 5.12 [1.45–18.03] respectively). Profile use was associated with reduction in mental health medicines (aOR 4.45, 1.15–17.22). Conclusion: The WWADR Profile for Mental Health Medicines can improve the quality and safety of care, and warrants further investigation as a strategy to mitigate the known adverse effects of prescribed medicines. Trial Registration: ISRCTN 48133332 PMID:26461064
Fransen, Erik; Perkisas, Stany; Verhoeven, Veronique; Beauchet, Olivier; Remmen, Roy
2017-01-01
Background: Gait characteristics measured at usual pace may allow profiling in patients with cognitive problems. The influence of age, gender, leg length, modified speed or dual tasking is unclear. Methods: Cross-sectional analysis was performed on a data registry containing demographic, physical and spatial-temporal gait parameters recorded in five walking conditions with a GAITRite® electronic carpet in community-dwelling older persons with memory complaints. Four cognitive stages were studied: cognitively healthy individuals, mild cognitive impaired patients, mild dementia patients and advanced dementia patients. Results: The association between spatial-temporal gait characteristics and cognitive stages was the most prominent: in the entire study population using gait speed, steps per meter (translation for mean step length), swing time variability, normalised gait speed (corrected for leg length) and normalised steps per meter at all five walking conditions; in the 50-to-70 years old participants applying step width at fast pace and steps per meter at usual pace; in the 70-to-80 years old persons using gait speed and normalised gait speed at usual pace, fast pace, animal walk and counting walk or steps per meter and normalised steps per meter at all five walking conditions; in over-80 years old participants using gait speed, normalised gait speed, steps per meter and normalised steps per meter at fast pace and animal dual-task walking. Multivariable logistic regression analysis adjusted for gender predicted, in two compiled models, the presence of dementia or cognitive impairment with acceptable accuracy in persons with memory complaints. Conclusion: Gait parameters in multiple walking conditions adjusted for age, gender and leg length showed a significant association with cognitive impairment. This study suggested that multifactorial gait analysis could be more informative than using gait analysis with only one test or one variable. Using this type of gait analysis in clinical practice could facilitate screening for cognitive impairment. PMID:28570662
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
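To illustrate the functional form involved (the simplest member of the class of models the authors fit; their full expressions include additional steps and amplitudes): if unwinding of an L-bp duplex proceeds through n = L/m identical rate-limiting steps, each with rate constant k_u, then the completion time is Erlang distributed and the all-or-none signal rises as the regularized lower incomplete gamma function,

\[
f_{ss}(t) \;=\; x_A \, \frac{\gamma(n, k_u t)}{\Gamma(n)}
\;=\; x_A \left( 1 - e^{-k_u t} \sum_{j=0}^{n-1} \frac{(k_u t)^{j}}{j!} \right),
\]

where x_A is the amplitude (the fraction of productively bound complexes). Writing the time course through the gamma functions rather than the factorial sum is what allows n, and hence the kinetic step size m, to be treated as continuous fitting parameters.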
System and method for chromatography and electrophoresis using circular optical scanning
Balch, Joseph W.; Brewer, Laurence R.; Davidson, James C.; Kimbrough, Joseph R.
2001-01-01
A system and method is disclosed for chromatography and electrophoresis using circular optical scanning. One or more rectangular or radial microchannel plates have a set of analysis channels for insertion of molecular samples. One or more scanning devices repeatedly pass over the analysis channels in one direction at a predetermined rotational velocity and with a predetermined rotational radius. The rotational radius may be dynamically varied so as to monitor the molecular sample at various positions along an analysis channel. Sample loading robots may also be used to input molecular samples into the analysis channels. Radial microchannel plates are built from a substrate whose analysis channels are disposed at a non-parallel angle with respect to each other. A first step in the method accesses either a rectangular or radial microchannel plate having a set of analysis channels, and a second step passes a scanning device repeatedly in one direction over the analysis channels. As a third step, the scanning device is passed over the analysis channels at dynamically varying distances from a centerpoint of the scanning device. As a fourth step, molecular samples are loaded into the analysis channels with a robot.
NASA Astrophysics Data System (ADS)
Sabbatini, S.; Fratini, G.; Arriga, N.; Papale, D.
2012-04-01
Eddy Covariance (EC) is the only technologically available direct method to measure carbon and energy fluxes between ecosystems and atmosphere. However, uncertainties related to this method have not been exhaustively assessed yet, including those deriving from post-field data processing. The latter arise because there is no exact processing sequence established for any given situation, and the sequence itself is long and complex, with many processing steps and options available. However, the consistency and inter-comparability of flux estimates may be largely affected by the adoption of different processing sequences. The goal of our work is to quantify the uncertainty introduced in each processing step by the fact that different options are available, and to study how the overall uncertainty propagates throughout the processing sequence. We propose an easy-to-use methodology to assign a confidence level to the calculated fluxes of energy and mass, based on the adopted processing sequence, and on available information such as the EC system type (e.g. open vs. closed path), the climate and the ecosystem type. The proposed methodology synthesizes the results of a massive full-factorial experiment. We use one year of raw data from 15 European flux stations and process them so as to cover all possible combinations of the available options across a selection of the most relevant processing steps. The 15 sites have been selected to be representative of different ecosystems (forests, croplands and grasslands), climates (mediterranean, nordic, arid and humid) and instrumental setup (e.g. open vs. closed path). The software used for this analysis is EddyPro™ 3.0 (www.licor.com/eddypro). The critical processing steps, selected on the basis of the different options commonly used in the FLUXNET community, are: angle of attack correction; coordinate rotation; trend removal; time lag compensation; low- and high- frequency spectral correction; correction for air density fluctuations; and length of the flux averaging interval. We illustrate the results of the full-factorial combination relative to a subset of the selected sites with particular emphasis on the total uncertainty at different time scales and aggregations, as well as a preliminary analysis of the most critical steps for their contribution to the total uncertainties and their potential relation with site set-up characteristics and ecosystem type.
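A tiny sketch of the full-factorial design idea: enumerate every combination of options across the processing steps and process the raw data once per combination (the step names and option lists below are illustrative stand-ins for the choices listed above, not EddyPro's actual option set):

# Enumerate all processing-sequence combinations for a full-factorial uncertainty experiment.
# Step names and options are illustrative stand-ins for the choices named in the text.
from itertools import product

options = {
    "rotation": ["double", "planar_fit"],
    "detrend": ["block_average", "linear", "running_mean"],
    "timelag": ["max_covariance", "constant"],
    "spectral": ["analytic", "in_situ"],
    "averaging_min": [30, 60],
}

combos = list(product(*options.values()))
print(len(combos), "processing sequences")   # 2*3*2*2*2 = 48 for these option lists
for combo in combos[:3]:
    print(dict(zip(options.keys(), combo)))  # each dict configures one processing run

The spread of flux estimates across all such runs, stratified by step, is what yields the per-step and total uncertainty described above.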
Development of economic consequence methodology for process risk analysis.
Zadakbar, Omid; Khan, Faisal; Imtiaz, Syed
2015-04-01
A comprehensive methodology for economic consequence analysis with appropriate models for risk analysis of process systems is proposed. This methodology uses loss functions to relate process deviations in a given scenario to economic losses. It consists of four steps: definition of a scenario, identification of losses, quantification of losses, and integration of losses. In this methodology, the process deviations that contribute to a given accident scenario are identified and mapped to assess potential consequences. Losses are assessed with an appropriate loss function (revised Taguchi, modified inverted normal) for each type of loss. The total loss is quantified by integrating different loss functions. The proposed methodology has been examined on two industrial case studies. Implementation of this new economic consequence methodology in quantitative risk assessment will provide better understanding and quantification of risk. This will improve design, decision making, and risk management strategies. © 2014 Society for Risk Analysis.
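As an illustration of the quantification and integration steps, the following hedged Python sketch evaluates an inverted-normal loss function (a simplified form of the modified inverted normal named above) over a set of process deviations; the target, shape parameter, maximum loss, and readings are assumptions, not values from the paper.

```python
# A minimal sketch of mapping process deviations to economic loss with an
# inverted-normal loss function. All parameter values are illustrative.
import math

def inverted_normal_loss(y, target, gamma, max_loss):
    """Loss rises from 0 at the target toward max_loss as |y - target| grows."""
    return max_loss * (1.0 - math.exp(-((y - target) ** 2) / (2.0 * gamma ** 2)))

# Integrate losses over the deviations observed in one accident scenario.
deviations = [101.0, 103.5, 110.2]        # e.g. reactor temperature readings
total_loss = sum(inverted_normal_loss(y, target=100.0, gamma=5.0,
                                      max_loss=250_000.0) for y in deviations)
print(f"Scenario loss estimate: ${total_loss:,.0f}")
```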
Methodological Variables in the Analysis of Cell-Free DNA.
Bronkhorst, Abel Jacobus; Aucamp, Janine; Pretorius, Piet J
2016-01-01
In recent years, cell-free DNA (cfDNA) analysis has received increasing attention as a potential non-invasive screening tool for the early detection of genetic aberrations and a wide variety of diseases, especially cancer. However, except for some prenatal tests and BEAMing, a technique used to detect mutations in various genes of cancer patients, cfDNA analysis is not yet routinely applied in clinical practice. Although some confounding biological factors inherent to the in vivo setting play a key part, it is becoming increasingly clear that this struggle is mainly due to the lack of an analytical consensus, especially as regards quantitative analyses of cfDNA. In order to use quantitative analysis of cfDNA with confidence, process optimization and standardization are crucial. In this work we aim to elucidate the most confounding variables of each preanalytical step that must be considered for process optimization and equivalence of procedures.
Glynn, Liam G; Hayes, Patrick S; Casey, Monica; Glynn, Fergus; Alvarez-Iglesias, Alberto; Newell, John; Ólaighin, Gearóid; Heaney, David; Murphy, Andrew W
2013-05-29
Sedentary lifestyles are now becoming a major concern for governments of developed and developing countries, with physical inactivity related to increased all-cause mortality, lower quality of life, and increased risk of obesity, diabetes, hypertension and many other chronic diseases. The powerful onboard computing capacity of smartphones, along with the unique relationship individuals have with their mobile phones, suggests that mobile devices have the potential to influence behavior. However, no previous trials have been conducted using smartphone technology to promote physical activity. This project has the potential to provide robust evidence in this area of innovation. The aim of this study is to evaluate the effectiveness of a smartphone application as an intervention to promote physical activity in primary care. A two-group, parallel randomized controlled trial (RCT) with a main outcome measure of mean difference in daily step count between baseline and follow-up over eight weeks. A minimum of 80 active Android smartphone users over 16 years of age who are able to undertake moderate physical activity are randomly assigned to the intervention group (n = 40) or to a control group (n = 40) for an eight-week period. After randomization, all participants will complete a baseline period of one week during which a baseline mean daily step count will be established. The intervention group will be instructed in the usability features of the smartphone application, will be encouraged to try to achieve 10,000 steps per day as an exercise goal, and will be given an exercise promotion leaflet. The control group will be encouraged to try to walk an additional 30 minutes per day along with their normal activity (the equivalent of 10,000 steps) as an exercise goal and will be given an exercise promotion leaflet. The primary outcome is mean difference in daily step count between baseline and follow-up. Secondary outcomes are systolic and diastolic blood pressure, resting heart rate, mental health score using HADS and quality of life score using EuroQol. Randomization and allocation to the intervention and control groups will be carried out by an independent researcher, ensuring the allocation sequence is concealed from the study researchers until the interventions are assigned. The primary analysis is based on mean daily step count, comparing the mean difference in daily step count between the baseline and trial periods in the intervention and control groups at follow-up.
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
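To make the flavor of the method concrete, here is a toy Python sketch, our own example rather than the paper's CPS lambda-calculus development: a concrete small-step transition is mirrored by a sound, sign-abstracted counterpart on an abstract state-space. All names are illustrative.

```python
# A toy illustration of abstracting a small-step semantics: a counter
# machine abstracted with a sign domain (a finite abstraction would also
# bound the program counter; this sketch keeps it concrete for brevity).
NEG, ZERO, POS = "-", "0", "+"

def alpha(n):                      # abstraction: concrete int -> sign
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abstract_add(a, b):            # sound abstract counterpart of '+'
    if ZERO in (a, b):             # adding zero preserves the other sign
        return {b if a == ZERO else a} if (a, b) != (ZERO, ZERO) else {ZERO}
    return {a} if a == b else {NEG, ZERO, POS}   # mixed signs -> top

# Concrete small step for a one-instruction loop: increment the counter.
def concrete_step(state):
    pc, n = state
    return (pc + 1, n + 1)

# The abstract step mirrors it point-for-point on the abstract state-space.
def abstract_step(state):
    pc, sign = state
    return {(pc + 1, s) for s in abstract_add(sign, POS)}

print(alpha(5), abstract_step((0, ZERO)))   # '+' {(1, '+')}
```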
Sage Simulation Model for Technology Demonstration Convertor by a Step-by-Step Approach
NASA Technical Reports Server (NTRS)
Demko, Rikako; Penswick, L. Barry
2006-01-01
The development of a Stirling model using the 1-D Sage design code was completed using a step-by-step approach, a method of gradually increasing the complexity of the Sage model while observing the energy balance and energy losses at each step of the development. This step-by-step model development and energy-flow analysis can clarify where the losses occur and their impact, and suggest possible opportunities for design improvement.
Carol Clausen
2004-01-01
In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...
Sullivan, Maura E; Ortega, Adrian; Wasserberg, Nir; Kaufman, Howard; Nyquist, Julie; Clark, Richard
2008-01-01
The purpose of this study was to determine if a cognitive task analysis (CTA) could capture steps and decision points that were not articulated during traditional teaching of colonoscopy. Three expert colorectal surgeons were videotaped performing a colonoscopy. After the videotapes were transcribed, the experts participated in a CTA. A 26-step procedural checklist and a 16-step cognitive demands table were created by using information obtained in the CTA. The videotape transcriptions were transposed onto the procedural checklist and cognitive demands table to identify steps and decision points that were omitted during traditional teaching. Surgeon A described 50% of "how-to" steps and 43% of decision points. Surgeon B described 30% of steps and 25% of decisions. Surgeon C described 26% of steps and 38% of cognitive decisions. By using CTA, we were able to identify relevant steps and decision points that were omitted during traditional teaching by all 3 experts.
3,7-Dideazaneplanocin: Synthesis and antiviral analysis.
Yin, Xue-Qiang; Schneller, Stewart W
2017-12-01
Objective To synthesize 3,7-dideazaneplanocin and evaluate its antiviral potential. Methods The target 3,7-dideazaneplanocin was prepared in five steps from a readily available cyclopentenol. A thorough in vitro antiviral analysis was conducted against both DNA and RNA viruses. Results A rational synthesis of 3,7-dideazaneplanocin was conceived and successfully pursued in such a way that it can be adapted to various analogs of 3,7-dideazaneplanocin. Using standard antiviral assays, no activity for 3,7-dideazaneplanocin was found. Conclusion Two structural features are necessary for adenine-based carbocyclic nucleosides (like neplanocin) to show antiviral potential: (i) inhibition of S-adenosylhomocysteine hydrolase and/or (ii) C-5' activation via the mono-nucleotide. The adenine structural features required to fit these criteria are not present in the target: an N-7 is necessary for inhibition of the hydrolase, and the N-3 is claimed to be essential for phosphorylation at C-5'. Thus, it is not surprising that 3,7-dideazaneplanocin lacked antiviral properties.
An application of data mining in district heating substations for improving energy performance
NASA Astrophysics Data System (ADS)
Xue, Puning; Zhou, Zhigang; Chen, Xin; Liu, Jing
2017-11-01
Automatic meter reading systems are capable of collecting and storing a huge amount of district heating (DH) data. However, the data obtained are rarely fully utilized. Data mining is a promising technology for discovering potentially interesting knowledge in vast data. This paper applies data mining methods to analyse the massive data for improving the energy performance of a DH substation. The technical approach contains three steps: data selection, cluster analysis and association rule mining (ARM). Two heating seasons of data from a substation are used for a case study. Cluster analysis identifies six distinct heating patterns based on the primary heat of the substation. ARM reveals that secondary pressure difference and secondary flow rate have a strong correlation. Using the discovered rules, a fault in a remote flow meter installed in the secondary network is detected accurately. The application demonstrates that data mining techniques can effectively extract potentially useful knowledge to better understand substation operation strategies and improve substation energy performance.
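A minimal Python sketch of steps two and three might look as follows, with scikit-learn for clustering and mlxtend standing in for whichever association-rule miner the authors used; the file name, column names and thresholds are assumptions.

```python
# A minimal sketch of the cluster-analysis and ARM steps on substation data.
# Column names, file name and thresholds are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

df = pd.read_csv("substation.csv")            # hypothetical meter readings

# Step 2: cluster daily primary-heat profiles into heating patterns.
daily = df.pivot_table(index="date", columns="hour", values="primary_heat")
patterns = KMeans(n_clusters=6, n_init=10).fit_predict(daily.fillna(daily.mean()))

# Step 3: mine rules between discretized operating variables.
items = pd.DataFrame({
    "high_sec_pressure_diff": df["sec_pressure_diff"] > df["sec_pressure_diff"].median(),
    "high_sec_flow": df["sec_flow"] > df["sec_flow"].median(),
})
rules = association_rules(apriori(items, min_support=0.2, use_colnames=True),
                          metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "confidence"]])
```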
Leemhuis, Hans; Pijning, Tjaard; Dobruchowska, Justyna M; van Leeuwen, Sander S; Kralj, Slavko; Dijkstra, Bauke W; Dijkhuizen, Lubbert
2013-01-20
Glucansucrases are extracellular enzymes that synthesize a wide variety of α-glucan polymers and oligosaccharides, such as dextran. These carbohydrates have found numerous applications in the food and health industries, and can be used as pure compounds or even be produced in situ by generally regarded as safe (GRAS) lactic acid bacteria in food applications. Research in recent years has produced major advances in the understanding and exploitation of the biocatalytic potential of glucansucrases. This paper provides an overview of glucansucrase enzymes, their recently elucidated crystal structures, their reaction and product specificity, and the structural analysis and applications of α-glucan polymers. Furthermore, we discuss key developments in the understanding of α-glucan polymer formation based on the recently elucidated three-dimensional structures of glucansucrase proteins. Finally, we discuss the (potential) applications of α-glucans produced by lactic acid bacteria in food and health related industries. Copyright © 2012 Elsevier B.V. All rights reserved.
Harinipriya, S; Sangaranarayanan, M V
2006-01-31
The evaluation of the free energy of activation pertaining to electron-transfer reactions occurring at liquid/liquid interfaces is carried out employing a diffuse boundary model. The interfacial solvation numbers are estimated using a lattice gas model under the quasichemical approximation. The standard reduction potentials of the redox couples, the appropriate inner potential differences, dielectric permittivities, and the width of the interface are included in the analysis. The methodology is applied to the reaction between [Fe(CN)6](3-/4-) and [Lu(biphthalocyanine)](3+/4+) at the water/1,2-dichloroethane interface. The rate-determining step is inferred from the estimated free energy of activation for the constituent processes. The results indicate that the solvent shielding effect and the desolvation of the reactants at the interface play a central role in dictating the free energy of activation. The heterogeneous electron-transfer rate constant is evaluated from the molar reaction volume and the frequency factor.
Exhaled molecular profiles in the assessment of cystic fibrosis and primary ciliary dyskinesia.
Paff, T; van der Schee, M P; Daniels, J M A; Pals, G; Postmus, P E; Sterk, P J; Haarman, E G
2013-09-01
Early diagnosis and monitoring of disease activity are essential in cystic fibrosis (CF) and primary ciliary dyskinesia (PCD). We aimed to establish exhaled molecular profiles as a first step in assessing the potential of breath analysis. Exhaled breath was analyzed by electronic nose in 25 children with CF, 25 with PCD and 23 controls. Principal component reduction and canonical discriminant analysis were used to construct internally cross-validated ROC curves. CF and PCD patients had significantly different breath profiles when compared to healthy controls (CF: sensitivity 84%, specificity 65%; PCD: sensitivity 88%, specificity 52%) and when compared with each other (sensitivity 84%, specificity 60%). Patients with and without exacerbations had significantly different breath profiles (CF: sensitivity 89%, specificity 56%; PCD: sensitivity 100%, specificity 90%). Exhaled molecular profiles thus differ significantly between patients with CF, PCD and controls. The eNose may have potential in disease monitoring, given the influence of exacerbations on the VOC profile. Copyright © 2012 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
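The analysis pipeline can be sketched along these lines with scikit-learn; the data files, labels and component count are assumptions, and LinearDiscriminantAnalysis stands in for the canonical discriminant step.

```python
# A minimal sketch of principal component reduction + discriminant analysis
# with a cross-validated ROC, in the spirit of the pipeline above. Inputs
# are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

X = np.load("enose_sensors.npy")   # hypothetical eNose sensor matrix
y = np.load("labels.npy")          # 1 = CF, 0 = healthy control

model = make_pipeline(PCA(n_components=5),
                      LinearDiscriminantAnalysis())
scores = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))
```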
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In times of high-throughput DNA sequencing techniques, high-performance analysis of DNA sequences is of great importance. Computer-supported DNA analysis is still a time-intensive task. In this paper we explore the potential of a new in-memory database technology using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL, to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which suggests high potential in the new in-memory concepts, pointing to further development of DNA analysis procedures in the future.
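For a feel of the exact-search task being benchmarked, here is a plain in-memory Python sketch rather than a HANA/MySQL stored procedure; the file names are assumptions.

```python
# A minimal sketch of exact read search against an in-memory reference.
# File names are hypothetical placeholders.
import time

with open("GRCh37_chr1.txt") as f:         # hypothetical reference sequence
    reference = f.read()
reads = [line.strip() for line in open("reads.txt")]

start = time.perf_counter()
hits = {}
for read in reads:
    positions, pos = [], reference.find(read)
    while pos != -1:                        # collect every exact occurrence
        positions.append(pos)
        pos = reference.find(read, pos + 1)
    hits[read] = positions
print(f"aligned {len(reads)} reads in {time.perf_counter() - start:.2f}s")
```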
Voltage-gated currents in identified rat olfactory receptor neurons.
Trombley, P Q; Westbrook, G L
1991-02-01
Whole-cell recording techniques were used to characterize voltage-gated membrane currents in neonatal rat olfactory receptor neurons (ORNs) in cell culture. Mature ORNs were identified in culture by their characteristic bipolar morphology, by retrograde labeling techniques, and by olfactory marker protein (OMP) immunoreactivity. ORNs did not have spontaneous activity, but fired action potentials in response to depolarizing current pulses. Action potentials were blocked by tetrodotoxin (TTX), which contrasts with the TTX-resistant action potentials in salamander olfactory receptor cells (e.g., Firestein and Werblin, 1987). Prolonged, suprathreshold current pulses evoked only a single action potential; however, repetitive firing up to 35 Hz could be elicited by a series of brief depolarizing pulses. Under voltage clamp, the TTX-sensitive sodium current had activation and inactivation properties similar to those of other excitable cells. In TTX and 20 mM barium, sustained inward currents were evoked by voltage steps positive to -30 mV. This current was blocked by Cd (100 microM) and by nifedipine (IC50 = 368 nM), consistent with L-type calcium channels in other neurons. No T-type calcium current was observed. Voltage steps positive to -20 mV also evoked an outward current that did not inactivate during 100-msec depolarizations. Tail current analysis of this current was consistent with a selective potassium conductance. The outward current was blocked by external tetraethylammonium but was unaffected by Cd or 4-aminopyridine (4-AP), or by removal of external calcium. A transient outward current was not observed. The 3 voltage-dependent conductances in cultured rat ORNs appear to be sufficient for 2 essential functions: action potential generation and transmitter release. As a single odorant-activated channel can trigger an action potential (e.g., Lynch and Barry, 1989), the repetitive firing seen with brief depolarizing pulses suggests that ORNs do not integrate sensory input, but rather act as high-fidelity relays such that each opening of an odorant-activated channel reaches the olfactory bulb glomeruli as an action potential.
Pazos, Raquel; Echevarria, Juan; Hernandez, Alvaro; Reichardt, Niels-Christian
2017-09-01
Aberrant protein glycosylation is a hallmark of cancer, infectious diseases, and autoimmune or neurodegenerative disorders. Unlocking the potential of glycans as disease markers will require rapid and unbiased glycoproteomics methods for glycan biomarker discovery. The present method is a facile and rapid protocol for qualitative analysis of protein glycosylation in complex biological mixtures. While traditional lectin arrays only provide an average signal for the glycans in the mixture, which is usually dominated by the most abundant proteins, our method provides individual lectin binding profiles for all proteins separated in the gel electrophoresis step. Proteins do not have to be excised from the gel for subsequent analysis via the lectin array but are transferred by contact diffusion from the gel to a glass slide presenting multiple copies of printed lectin arrays. Fluorescently marked glycoproteins are trapped by the printed lectins via specific carbohydrate-lectin interactions and after a washing step their binding profile with up to 20 lectin probes is analyzed with a fluorescent scanner. The method produces the equivalent of 20 lectin blots in a single experiment, giving detailed insight into the binding epitopes present in the fractionated proteins. Copyright © 2017 John Wiley & Sons, Inc.
Metabolic and gene expression analysis of apple (Malus x domestica) carotenogenesis.
Ampomah-Dwamena, Charles; Dejnoprat, Supinya; Lewis, David; Sutherland, Paul; Volz, Richard K; Allan, Andrew C
2012-07-01
Carotenoid accumulation confers distinct colouration to plant tissues, with effects on plant response to light as well as health benefits for consumers of plant products. The carotenoid pathway is controlled by flux of metabolites, rate-limiting enzyme steps, feed-back inhibition, and the strength of sink organelles, the plastids, in the cell. In apple (Malus × domestica Borkh), fruit carotenoid concentrations are low in comparison with those in other fruit species. The apple fruit flesh, in particular, begins development with high amounts of chlorophylls and carotenoids, but in all commercial cultivars a large proportion of this is lost by fruit maturity. To understand the control of carotenoid concentrations in apple fruit, metabolic and gene expression analyses of the carotenoid pathway were performed in genotypes with varying flesh and skin colour. Considerable variation in both carotenoid concentrations and compound profile was observed between tissues and genotypes, with carotenes and xanthophylls being found only in fruit accumulating high carotenoid concentrations. The study identified potential rate-limiting steps in carotenogenesis, which suggested that the expression of ZISO, CRTISO, and LCY-ε, in particular, were significant in predicting final carotenoid accumulation in mature apple fruit.
Searching regional rainfall homogeneity using atmospheric fields
NASA Astrophysics Data System (ADS)
Gabriele, Salvatore; Chiaravalloti, Francesco
2013-03-01
The correct identification of homogeneous areas in regional rainfall frequency analysis is fundamental to ensure the best selection of the probability distribution and of the regional model, producing low bias and low root mean square error in quantile estimation. In pursuit of rainfall spatial homogeneity, the paper explores a new approach based on meteo-climatic information. The results are verified ex post using standard homogeneity tests applied to the annual maximum daily rainfall series. The first step of the proposed procedure selects two different types of homogeneous large regions: convective macro-regions, which contain high values of the Convective Available Potential Energy index, normally associated with convective rainfall events, and stratiform macro-regions, which are characterized by low values of the Q-vector divergence index, associated with dynamic instability and stratiform precipitation. These macro-regions are identified using Hot Spot Analysis to emphasize clusters of extreme values of the indexes. In the second step, inside each identified macro-region, homogeneous sub-regions are found using kriging interpolation on the mean direction of the Vertically Integrated Moisture Flux. To check the proposed procedure, two detailed examples of homogeneous sub-regions are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, P.; Sugiman-Marangos, S; Zhang, K
2010-01-01
Lipopolysaccharide (LPS) is a major component of the outer membrane of Gram-negative bacteria and provides a permeability barrier to many commonly used antibiotics. ADP-heptose residues are an integral part of the LPS inner core, and mutants deficient in heptose biosynthesis demonstrate increased membrane permeability. The heptose biosynthesis pathway involves phosphorylation and dephosphorylation steps not found in other pathways for the synthesis of nucleotide sugar precursors. Consequently, the heptose biosynthetic pathway has been marked as a novel target for antibiotic adjuvants, which are compounds that facilitate and potentiate antibiotic activity. D-α,β-D-Heptose-1,7-bisphosphate phosphatase (GmhB) catalyzes the third essential step of LPS heptose biosynthesis. This study describes the first crystal structure of GmhB and enzymatic analysis of the protein. Structure-guided mutations followed by steady state kinetic analysis, together with established precedent for HAD phosphatases, suggest that GmhB functions through a phosphoaspartate intermediate. This study provides insight into the structure-function relationship of GmhB, a new target for combatting Gram-negative bacterial infection.
Giménez, Estela; Sanz-Nebot, Victòria; Rizzi, Andreas
2013-09-01
Glycan reductive isotope labeling (GRIL) using [(12)C]- and [(13)C]-coded aniline was used for relative quantitation of N-glycans. In a first step, the labeling method by reductive amination was optimized for this reagent. It could be demonstrated that selecting aniline as limiting reactant and using the reductant in excess is critical for achieving high derivatization yields (over 95 %) and good reproducibility (relative standard deviations ∼1-5 % for major and ∼5-10 % for minor N-glycans). In a second step, zwitterionic-hydrophilic interaction liquid chromatography in capillary columns coupled to electrospray mass spectrometry with time-of-flight analyzer (μZIC-HILIC-ESI-TOF-MS) was applied for the analysis of labeled N-glycans released from intact glycoproteins. Ovalbumin, bovine α1-acid-glycoprotein and bovine fetuin were used as test glycoproteins to establish and evaluate the methodology. Excellent separation of isomeric N-glycans and reproducible quantitation via the extracted ion chromatograms indicate a great potential of the proposed methodology for glycoproteomic analysis and for reliable relative quantitation of glycosylation variants in biological samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Kreuzer-Martin, Helen W.; Antolick, Kathryn C.
2009-12-01
This report describes method development and preliminary evaluation for analyzing castor samples for signatures of ricin purification. Ricin purification from the source castor seeds is essentially a problem of protein purification using common biochemical methods. Indications of protein purification will likely manifest themselves as removal of the non-protein fractions of the seed. The two major non-protein biochemical constituents of the seed are the castor oil and various carbohydrates. The oil comprises roughly half the seed weight, while the carbohydrate component comprises roughly half of the remaining “mash” left after oil and hull removal. Different castor oil and carbohydrate components can serve as indicators of specific toxin-processing steps. Ricinoleic acid is a relatively unique fatty acid in nature and is the most abundant component of castor oil. The loss of ricinoleic acid indicates a step to remove oil from the seeds. The relative amounts of carbohydrates and carbohydrate-like compounds detected in the sample, including arabinose, xylose, myo-inositol, fucose, rhamnose, glucosamine and mannose, can also indicate specific processing steps. For instance, the differential loss of arabinose relative to mannose and N-acetyl glucosamine indicates enrichment for the protein fraction of the seed using protein precipitation. The methods developed in this project center on fatty acid and carbohydrate extraction from castor samples followed by derivatization to permit analysis by gas chromatography-mass spectrometry (GC-MS). Method descriptions herein include: the source and preparation of castor materials used for method evaluation, the equipment and description of procedure required for chemical derivatization, and the instrument parameters used in the analysis. Two derivatization methods are described for the analysis of carbohydrates and one for the analysis of fatty acids. Two types of GC-MS analysis are included in the method development: one employing a quadrupole MS system for compound identification and one employing an isotope ratio MS for measuring the stable isotope ratio of deuterium to hydrogen (D/H) in fatty acids. Finally, the method for analyzing the compound abundance data is included. This study indicates that removal of ricinoleic acid is a conserved consequence of each processing step we tested. Furthermore, the stable isotope D/H ratio of ricinoleic acid distinguished between two of the three castor seed sources. Concentrations of arabinose, xylose, mannose, glucosamine and myo-inositol differentiated between crude or acetone-extracted samples and samples produced by protein precipitation. Taken together, these data illustrate the ability to distinguish between processes used to purify a ricin sample as well as, potentially, the source seeds.
Step-up fecal microbiota transplantation (FMT) strategy
Cui, Bota; Li, Pan; Xu, Lijuan; Peng, Zhaoyuan; Xiang, Jie; He, Zhi; Zhang, Ting; Ji, Guozhong; Nie, Yongzhan; Wu, Kaichun; Fan, Daiming; Zhang, Faming
2016-01-01
Gut dysbiosis is a characteristic of inflammatory bowel disease (IBD) and is believed to play a role in the pathogenesis of IBD. Fecal microbiota transplantation (FMT) is an effective strategy to restore intestinal microbial diversity and has been reported to have potential therapeutic value in IBD. Our recent study reported a holistic integrative therapy called “step-up FMT strategy,” which was beneficial in treating steroid-dependent IBD patients. This strategy consists of scheduled FMTs combined with steroids, anti-TNF-α antibody treatment or enteral nutrition. Herein, we elaborate the strategy thoroughly, introducing the concept, potential indications, methodology, and safety of the “step-up FMT strategy” in detail. PMID:26939622
Drupsteen, Linda; Groeneweg, Jop; Zwetsloot, Gerard I J M
2013-01-01
Many incidents have occurred because organisations have failed to learn from lessons of the past. This means that there is room for improvement in the way organisations analyse incidents, generate measures to remedy identified weaknesses and prevent reoccurrence: the learning from incidents process. To improve that process, it is necessary to gain insight into the steps of this process and to identify factors that hinder learning (bottlenecks). This paper presents a model that enables organisations to analyse the steps in a learning from incidents process and to identify the bottlenecks. The study describes how this model is used in a survey and in 3 exploratory case studies in The Netherlands. The results show that there is limited use of learning potential, especially in the evaluation stage. To improve learning, an approach that considers all steps is necessary.
Modelling river bank retreat by combining fluvial erosion, seepage and mass failure
NASA Astrophysics Data System (ADS)
Dapporto, S.; Rinaldi, M.
2003-04-01
Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully, and is used to: a) test the potential of, and discuss the advantages and limitations of, this type of methodology for modelling bank retreat; and b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. For each step, modelling of the riverbank retreat includes the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by the limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. Lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. The safety factor for mass failures is then computed using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
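The fluvial erosion component can be sketched as an excess-shear-stress law applied over the hydrograph's time steps; the coefficients and shear-stress series below are illustrative assumptions, not the calibrated Sieve River values.

```python
# A minimal sketch of excess-shear-stress fluvial erosion accumulated over
# the event's time steps. All numbers are illustrative assumptions.
def lateral_erosion_rate(tau, tau_c, k_d, a=1.0):
    """Erosion rate (m/s) as a function of excess shear stress (Pa)."""
    return k_d * max(tau - tau_c, 0.0) ** a

tau_series = [12.0, 28.0, 45.0, 33.0, 15.0]   # boundary shear stress per step (Pa)
dt = 3600.0                                    # time-step length (s)
tau_c, k_d = 20.0, 1e-7                        # critical stress, erodibility

retreat = sum(lateral_erosion_rate(tau, tau_c, k_d) * dt for tau in tau_series)
print(f"cumulative bank retreat over the event: {retreat:.3f} m")
```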
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Hydrotreater/Distillation Column Hazard Analysis Report Rev. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowry, Peter P.; Wagner, Katie A.
This project Hazard and Risk Analysis Report contains the results of several hazard analyses and risk assessments. An initial assessment was conducted in 2012, which included a multi-step approach ranging from design reviews to a formal What-If hazard analysis. A second What-If hazard analysis was completed during February 2013 to evaluate the operation of the hydrotreater/distillation column processes to be installed in a process enclosure within the Process Development Laboratory West (PDL-West) facility located on the PNNL campus. The qualitative analysis included participation of project and operations personnel and applicable subject matter experts. The analysis identified potential hazardous scenarios, eachmore » based on an initiating event coupled with a postulated upset condition. The unmitigated consequences of each hazardous scenario were generally characterized as a process upset; the exposure of personnel to steam, vapors or hazardous material; a spray or spill of hazardous material; the creation of a flammable atmosphere; or an energetic release from a pressure boundary.« less
Zeybek, Burak; Öge, Tufan; Kılıç, Cemil Hakan; Borahay, Mostafa A.; Kılıç, Gökhan Sami
2014-01-01
Objective To analyse the steps taking place in the operating room (OR) before the console time starts in robot-assisted gynaecologic surgery and to identify potential ways to decrease non-operative time in the OR. Material and Methods Thirteen consecutive robotic cases for benign gynaecologic disease at the Department of Obstetrics and Gynecology at University of Texas Medical Branch (UTMB) were retrospectively reviewed. The collected data included the specific terms ‘Anaesthesia Done’ (step 1), ‘Drape Done’ (step 2), and ‘Trocar In’ (step 3), all of which refer to the time before the actual surgery began and OR charges were evaluated as level 3, 4, and 5 for open abdominal/vaginal hysterectomy, laparoscopic hysterectomy, and robot-assisted hysterectomy, respectively. Results The cost of the OR for 0–30 minutes and each additional 30 minutes were $3,693 and $1,488, $4,961 and $2,426, $5,513 and $2,756 in level 3, 4, and 5 surgeries, respectively. The median time for step 1 was 12.1 min (5.25–23.3), for step 2 was 19 (4.59–44) min, and for step 3 was 25.3 (16.45–45) min. The total median time until the actual operation began was 54.58 min (40–100). The total cost was $6948.7 when the charge was calculated according to level 4 and $7771.1 when the charge was calculated according to level 5. Conclusion Robot-assisted surgery is already ‘cost-expensive’ in the preparation stage of a surgical procedure during anaesthesia induction and draping of the patient because of charging levels. Every effort should be made to shorten the time and reduce the number of instruments used without compromising care. (J Turk Ger Gynecol Assoc 2014; 15: 25–9) PMID:24790513
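The reported totals are consistent with a base charge for the first 30 minutes plus a linearly prorated increment for the additional time; the sketch below reproduces that arithmetic (the proration is our inference from the reported numbers, not a stated billing rule).

```python
# A minimal sketch of the OR-charge arithmetic implied by the reported
# figures: a base charge for the first 30 minutes plus a prorated
# per-30-minute increment for the remaining time.
RATES = {3: (3693, 1488), 4: (4961, 2426), 5: (5513, 2756)}  # $ per level

def or_charge(level, minutes):
    base, per_30 = RATES[level]
    extra = max(minutes - 30.0, 0.0)
    return base + per_30 * extra / 30.0

# Median 54.58 min of non-operative time before the console starts:
print(round(or_charge(4, 54.58), 1))  # ~6948.7, matching the level-4 total
print(round(or_charge(5, 54.58), 1))  # ~7771.1, matching the level-5 total
```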
Incentives for market penetration of biosimilars in Belgium and in five European countries.
Swartenbroekx, N; Farfan-Portet; Espín, J; Gerkens, S
2014-12-01
Biosimilars are products similar to a biological medicine already authorized and no longer protected by a patent. Like the reference biological product, they contain a biological substance produced by or derived from a living organism. As with generics, biosimilars are a potential tool to generate savings for health systems. The current lack of market penetration of biosimilars may be seen by national authorities as a lost opportunity in terms of cost-containment. The objective of this paper is therefore to analyze the current situation in Belgium and to identify potential measures to stimulate biosimilar uptake in Belgium through an analysis of the experience in five European countries: France, Germany, The Netherlands, Spain and Sweden. This international comparison was performed using a two-step analysis: a structured review of the literature followed by validation from experts in each country. Potential incentives and constraints were identified, i.e., prescription quotas/targets, clinical guidelines, primary substitution, reference price systems, fixed payments and public tendering. However, the literature reviewed provided little evaluation of the effectiveness of these policies in terms of biosimilar uptake or potential savings; the impact of these policies on biosimilar-related savings is currently based on expectations and assumptions. Such evaluation studies will therefore be essential in the future.
Analysis Techniques for Microwave Dosimetric Data.
1985-10-01
[Fragment of a Fortran program listing: comment lines describing the starting frequency, the step size, and the number of steps in the frequency list, followed by a call to subroutine FILE2().]
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden while achieving accurate simulation responses. The performance of the proposed method is demonstrated with the GSO 37-bus system.
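The subinterval idea can be illustrated on a toy two-mode system in which the fast mode is unstable at the main integration step but stable when advanced with substeps; the system and step sizes are illustrative assumptions.

```python
# A minimal sketch of subinterval integration: fast states are advanced
# with several substeps inside each main time step. The two-mode system
# is an illustrative assumption.
import math

def step_euler(x, dt, lam):
    return x + dt * (lam * x)          # forward Euler for dx/dt = lam*x

dt_main, n_sub = 0.01, 10              # main step and substeps per step
slow, fast = 1.0, 1.0
lam_slow, lam_fast = -2.0, -800.0      # fast mode is unstable at dt=0.01

for _ in range(100):                   # one second of simulation
    slow = step_euler(slow, dt_main, lam_slow)
    for _ in range(n_sub):             # subinterval keeps the fast mode stable
        fast = step_euler(fast, dt_main / n_sub, lam_fast)

print(f"slow={slow:.4f} (exact {math.exp(-2.0):.4f}), fast={fast:.2e}")
```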
Estimating VO2max Using a Personalized Step Test
ERIC Educational Resources Information Center
Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald
2014-01-01
The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…
Integrated data analysis for genome-wide research.
Steinfath, Matthias; Repsilber, Dirk; Scholz, Matthias; Walther, Dirk; Selbig, Joachim
2007-01-01
Integrated data analysis is introduced as the intermediate level of a systems biology approach for analysing different 'omics' datasets, i.e., genome-wide measurements of transcripts, protein levels or protein-protein interactions, and metabolite levels, with the aim of generating a coherent understanding of biological function. In this chapter we focus on different methods of correlation analysis, ranging from simple pairwise correlation to kernel canonical correlation, which were recently applied in molecular biology. Several examples are presented to illustrate their application. The input data for this analysis frequently originate from different experimental platforms. Therefore, preprocessing steps such as data normalisation and missing value estimation are inherent to this approach. The corresponding procedures, potential pitfalls and biases, and available software solutions are reviewed. The multiplicity of observations obtained in omics-profiling experiments necessitates the application of multiple testing correction techniques.
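The simplest of the correlation analyses mentioned, pairwise correlation between two omics datasets measured on the same samples, can be sketched as follows; the array shapes and random data are placeholders.

```python
# A minimal sketch of pairwise correlation between transcript and
# metabolite profiles over the same samples. Data are placeholders.
import numpy as np

rng = np.random.default_rng(0)
transcripts = rng.normal(size=(20, 50))   # 20 samples x 50 genes
metabolites = rng.normal(size=(20, 30))   # 20 samples x 30 metabolites

# Correlate every gene with every metabolite across samples: corrcoef
# treats rows as variables, so the off-diagonal block holds the pairs.
corr = np.corrcoef(transcripts.T, metabolites.T)[:50, 50:]
gene, metab = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest pair: gene {gene} vs metabolite {metab}, r={corr[gene, metab]:.2f}")
```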
Relative risk analysis of the use of radiation-emitting medical devices: A preliminary application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.D.
This report describes the development of a risk analysis approach for evaluating the use of radiation-emitting medical devices. This effort was performed by Lawrence Livermore National Laboratory for the US Nuclear Regulatory Commission (NRC). The assessment approach has been applied to understand the risks in using the Gamma Knife, a gamma irradiation therapy device. This effort represents an initial step to evaluate the potential role of risk analysis for developing regulations and quality assurance requirements in the use of nuclear medical devices. The risk approach identifies and assesses the most likely risk contributors and their relative importance for the medical system. The approach uses expert screening techniques and relative risk profiling to incorporate the type, quality, and quantity of data available and to present results in an easily understood form.
Hocum, Jonah D; Battrell, Logan R; Maynard, Ryan; Adair, Jennifer E; Beard, Brian C; Rawlings, David J; Kiem, Hans-Peter; Miller, Daniel G; Trobridge, Grant D
2015-07-07
Analyzing the integration profile of retroviral vectors is a vital step in determining their potential genotoxic effects and developing safer vectors for therapeutic use. Identifying retroviral vector integration sites is also important for retroviral mutagenesis screens. We developed VISA, a vector integration site analysis server, to analyze next-generation sequencing data for retroviral vector integration sites. Sequence reads that contain a provirus are mapped to the human genome, sequence reads that cannot be localized to a unique location in the genome are filtered out, and then unique retroviral vector integration sites are determined based on the alignment scores of the remaining sequence reads. VISA offers a simple web interface to upload sequence files and results are returned in a concise tabular format to allow rapid analysis of retroviral vector integration sites.
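The core filtering-and-collapsing logic might be sketched as follows; pysam, the BAM input, and the mapping-quality cutoff used as a proxy for unique localization are our assumptions, not VISA's published implementation.

```python
# A minimal sketch of integration-site calling: keep confidently mapped
# reads and collapse them to unique genomic sites. Input file and the
# MAPQ cutoff are illustrative assumptions.
import pysam
from collections import Counter

MIN_MAPQ = 30                         # proxy for "uniquely localizable"
sites = Counter()

with pysam.AlignmentFile("provirus_reads.bam", "rb") as bam:
    for read in bam:
        if read.is_unmapped or read.mapping_quality < MIN_MAPQ:
            continue                  # filter out ambiguous placements
        strand = "-" if read.is_reverse else "+"
        sites[(read.reference_name, read.reference_start, strand)] += 1

for (chrom, pos, strand), count in sites.most_common(10):
    print(chrom, pos, strand, count)
```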
Problem formulation in the environmental risk assessment for genetically modified plants
Wolt, Jeffrey D.; Keese, Paul; Raybould, Alan; Burachik, Moisés; Gray, Alan; Olin, Stephen S.; Schiemann, Joachim; Sears, Mark; Wu, Felicia
2009-01-01
Problem formulation is the first step in environmental risk assessment (ERA) where policy goals, scope, assessment endpoints, and methodology are distilled to an explicitly stated problem and approach for analysis. The consistency and utility of ERAs for genetically modified (GM) plants can be improved through rigorous problem formulation (PF), producing an analysis plan that describes relevant exposure scenarios and the potential consequences of these scenarios. A properly executed PF assures the relevance of ERA outcomes for decision-making. Adopting a harmonized approach to problem formulation should bring about greater uniformity in the ERA process for GM plants among regulatory regimes globally. This paper is the product of an international expert group convened by the International Life Sciences Institute (ILSI) Research Foundation. PMID:19757133
Studies on the electron acceptors of photosystem two
NASA Astrophysics Data System (ADS)
Bowden, Simon John
The differences in temperature-dependent behaviour and microwave power saturation characteristics between the g=1.9 and g=1.8 QA-Fe2+ signals are described. The dependence of these behavioural differences on the presence or absence of bicarbonate is emphasised. By studying the EPR signals of QA-Fe2+, Q-Fe2+, Q-Fe2+TBTQ- and the oxidised non-haem iron I have found that detergent solubilisation of BBY PS2 preparations with the detergent OGP, at pH 6.0, results in loss of bicarbonate binding. New preparations, including a dodecylmaltoside-prepared CP47, CP43, D1, D2, cytochrome b559 complex, are described which at pH 7.5 retain native bicarbonate binding. These preparations provide a new system for studies into the "bicarbonate effect" because bicarbonate depletion can now be achieved without displacement by another anion. The new OGP particles have been used to investigate both the split pheophytin signal and the two-step redox titration phenomenon associated with this signal. The low-potential step of the titration was concluded to be independent of the QA/QA- mid-point potential but was found to be linked to the ability to photoreduce pheophytin; once the low-potential component, suggested here to be the fluorescence quencher QL, was reduced, pheophytin photoreduction increased. A model is described to explain the two-step titration and, from analysis of the signal splitting in +/- HCO3- samples, a possible structural role for bicarbonate is proposed. I have probed the structure of the PS2 electron acceptor region with the protease trypsin. The QA iron-semiquinone, oxidised non-haem iron, and cytochrome b559 EPR signals were all found to be susceptible to trypsin damage, while oxygen evolution with ferricyanide was enhanced by protease treatment. The protective effect of calcium ions against trypsin damage was demonstrated and a possible Ca2+ binding site in the binding region identified.
The Complexity of One-Step Equations
ERIC Educational Resources Information Center
Ngu, Bing
2014-01-01
An analysis of one-step equations from a cognitive load theory perspective uncovers variation within one-step equations. The complexity of one-step equations arises from the element interactivity across the operational and relational lines. The higher the number of operational and relational lines, the greater the complexity of the equations.…
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
DOT National Transportation Integrated Search
2013-10-01
This document provides a step-by-step description of the design and execution of a strategic job analysis, using the position of Freight Conductor as an example. This document was created to be useful for many different needs, and can be used as an e...
An Analysis of the Carpentry Occupation.
ERIC Educational Resources Information Center
McKinney, Oral O.; And Others
The general purpose of the occupational analysis is to provide workable, basic information dealing with the many and varied duties performed in the carpentry occupation. The analysis starts with the progress of a house from the first study of the blueprints to the laying out of the excavations, continuing step-by-step until the interior finish…
Farias, Manuel J S; Cheuquepán, William; Tanaka, Auro A; Feliu, Juan M
2018-03-15
This work deals with the identification of preferential site-specific activation at a model Pt surface during a multiproduct reaction. The (110)-type steps of a Pt(332) surface were selectively marked by attaching isotope-labeled 13CO molecules to them, and ethanol oxidation was probed by in situ Fourier transform infrared spectroscopy in order to precisely determine the specific sites at which CO2, acetic acid, and acetaldehyde were preferentially formed. The (110) steps were active for splitting the C-C bond, but unexpectedly, we provide evidence that the pathway of CO2 formation was preferentially activated at (111) terraces, rather than at (110) steps. Acetaldehyde was formed at (111) terraces at potentials comparable to those for CO2 formation, also at (111) terraces, while the acetic acid formation pathway became active only when the (110) steps were released by the oxidation of adsorbed 13CO, at potentials higher than those for the formation of CO2 at the (111) terraces of the stepped surface.
Landslide risk analysis: a multi-disciplinary methodological approach
NASA Astrophysics Data System (ADS)
Sterlacchini, S.; Frigerio, S.; Giacomelli, P.; Brambilla, M.
2007-11-01
This study describes an analysis carried out within the European community project "ALARM" (Assessment of Landslide Risk and Mitigation in Mountain Areas, 2004) on landslide risk assessment in the municipality of Corvara in Badia, Italy. This mountainous area, located in the central Dolomites (Italian Alps), poses a significant landslide hazard to several man-made and natural objects. Three parameters for determining risk were analysed as an aid to preparedness and mitigation planning: event occurrence probability, elements at risk, and the vulnerability of these elements. Initially, a landslide hazard scenario was defined; this step was followed by the identification of the potentially vulnerable elements, by the estimation of the expected physical effects due to the occurrence of a damaging phenomenon, and by the analysis of social and economic features of the area. Finally, a potential risk scenario was defined, in which the relationships between the event, its physical effects, and its economic consequences were investigated. People and public administrators with training and experience in local landsliding and slope processes were involved in each step of the analysis. A "cause-effect" correlation was applied, derived from the "dose-response" equation initially used in the biological sciences and then adapted by economists for the assessment of environmental risks. The relationship was analysed from a physical point of view, and the cause (the natural event) was correlated to the physical effects, i.e. the aesthetic, functional, and structural damage. An economic evaluation of direct and indirect damage was carried out considering the assets in the affected area (i.e., tourist flows, goods, transport and the effect on other social and economic activities). This study shows the importance of indirect damage, which is as significant as direct damage. The total amount of direct damage was estimated at 8 913 000 €; by contrast, indirect damage ranged considerably, from 2 840 000 to 9 350 000 €, depending on the selected temporal scenario and the expected closing time of the potentially affected structures. The multi-disciplinary approach discussed in this study may assist local decision makers in determining the nature and magnitude of the expected losses due to a dangerous event that can be anticipated in a given study area during a specified time period. Moreover, advance knowledge of the prospective physical effects and economic consequences may help local decision makers to choose the best prevention and mitigation options and to decide how to allocate resources properly, so that potential benefits are maximised at an acceptable cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. J. Galyean; A. M. Whaley; D. L. Kelly
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in "The SPAR-H Human Reliability Analysis Method," NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence; and Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
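For readers who want to see the arithmetic behind Steps 1 through 3 and 5, the following is a minimal Python sketch of a SPAR-H-style HEP calculation. It assumes the nominal HEPs and the negative-PSF adjustment formula documented in NUREG/CR-6883; the PSF multipliers in the example and the 1e-5 floor are illustrative assumptions, and Step-4 (dependence) is omitted.

```python
# Minimal sketch of the SPAR-H HEP calculation described above.
# Nominal HEPs (0.01 diagnosis, 0.001 action) and the adjustment formula
# for three or more negative PSFs follow NUREG/CR-6883; the PSF values
# and the 1e-5 floor used below are illustrative assumptions only.
# Step-4 (dependence) is not modeled here.

NOMINAL_HEP = {"diagnosis": 0.01, "action": 0.001}

def spar_h_hep(task_type, psf_multipliers, minimum_hep=1e-5):
    """Compute a PSF-modified HEP for a single human failure event."""
    nhep = NOMINAL_HEP[task_type]              # Step-1: categorize the HFE
    composite = 1.0
    for m in psf_multipliers:                  # Step-2: rate the PSFs
        composite *= m
    hep = nhep * composite                     # Step-3: PSF-modified HEP
    # Adjustment when 3+ PSFs are negative (multiplier > 1), which keeps
    # the result from exceeding 1.0.
    if sum(1 for m in psf_multipliers if m > 1) >= 3:
        hep = (nhep * composite) / (nhep * (composite - 1.0) + 1.0)
    return max(hep, minimum_hep)               # Step-5: minimum value cutoff

# Example: a diagnosis task under high stress (x2) and poor ergonomics (x10).
print(spar_h_hep("diagnosis", [2.0, 10.0]))    # 0.2
```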
The stepping behavior analysis of pedestrians from different age groups via a single-file experiment
NASA Astrophysics Data System (ADS)
Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang
2018-03-01
The stepping behavior of pedestrians from groups with different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed using a step measurement method based on the curvature of the trajectory. The relations of velocity versus step width, step length and stepping time for the different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data for developing microscopic models of pedestrian movement that take stepping behavior into account.
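As a rough illustration of the curvature-based step measurement mentioned above, here is a hypothetical numpy sketch: stepping instants show up as local maxima of the trajectory's curvature as the body sways from one foot to the other. The function names, peak-picking rule, and threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of step detection from a 2D trajectory via curvature,
# in the spirit of the method described above. Threshold and peak rule are
# illustrative choices, not taken from the paper.
import numpy as np

def curvature(x, y):
    """Discrete curvature k = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)

def step_indices(x, y, threshold=0.5):
    """Local curvature maxima above a threshold mark sway extrema,
    i.e. candidate stepping instants."""
    k = curvature(x, y)
    peaks = (k[1:-1] > k[:-2]) & (k[1:-1] > k[2:]) & (k[1:-1] > threshold)
    return np.where(peaks)[0] + 1
```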
Employing Simulation to Evaluate Designs: The APEX Approach
NASA Technical Reports Server (NTRS)
Freed, Michael A.; Shafto, Michael G.; Remington, Roger W.; Null, Cynthia H. (Technical Monitor)
1998-01-01
The key innovations of APEX are its integrated approaches to task analysis, procedure definition, and intelligent, resource-constrained multi-tasking. This paper presents a step-by-step description of how APEX is used, from scenario development through trace analysis.
Human salivary microRNAs in Cancer
Rapado-González, Óscar; Majem, Blanca; Muinelo-Romay, Laura; Álvarez-Castro, Ana; Santamaría, Anna; Gil-Moreno, Antonio; López-López, Rafael; Suárez-Cunqueiro, María Mercedes
2018-01-01
Circulating microRNAs (miRNAs) have emerged as excellent candidate cancer biomarkers. Several recent studies have highlighted the potential use of saliva for the identification of miRNAs as novel biomarkers, which represents a great opportunity to improve diagnosis and to monitor general health and disease. This review summarises the mechanisms of miRNA deregulation in cancer, the value of targeting miRNAs with therapeutic intent, and the evidence for the potential clinical use of miRNAs expressed in saliva for the detection of different cancer types. We also provide a comprehensive review of the different methods for normalising the levels of specific miRNAs present in saliva, as this is a critical step in their analysis, and of the challenges of validating salivary miRNAs for the management of cancer patients. PMID:29556321
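Because the review highlights normalisation as a critical analysis step, here is a minimal sketch of one common strategy: relative quantification against an endogenous reference miRNA using the 2^-ddCt (Livak) method. The miRNA identities and Ct values below are placeholders, and the choice of a stable reference is exactly the kind of decision the review discusses.

```python
# Minimal sketch: 2^-ddCt relative quantification of a salivary miRNA
# against an endogenous reference. All names and Ct values are placeholders.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target miRNA in a sample versus a control,
    normalised to a reference miRNA (Livak 2^-ddCt method)."""
    d_ct_sample = ct_target - ct_reference        # normalise the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalise the control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a hypothetical miR-X in a patient vs. a healthy control,
# normalised to a stably expressed reference miRNA.
print(relative_expression(26.0, 22.0, 28.5, 22.2))  # ~4.9-fold increase
```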
Autopoiesis and cognition in the game of life.
Beer, Randall D
2004-01-01
Maturana and Varela's notion of autopoiesis has the potential to transform the conceptual foundation of biology as well as the cognitive, behavioral, and brain sciences. In order to fully realize this potential, however, the concept of autopoiesis and its many consequences require significant further theoretical and empirical development. A crucial step in this direction is the formulation and analysis of models of autopoietic systems. This article sketches the beginnings of such a project by examining a glider from Conway's game of life in autopoietic terms. Such analyses can clarify some of the key ideas underlying autopoiesis and draw attention to some of the central open issues. This article also examines the relationship between an autopoietic perspective on cognition and recent work on dynamical approaches to the behavior and cognition of situated, embodied agents.
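For readers unfamiliar with the model system, the following is a small, self-contained sketch of Conway's game of life with a glider initialized, the structure the article analyzes in autopoietic terms. The grid size and step count are arbitrary choices.

```python
# Self-contained sketch: Conway's game of life with a glider.
# The update rule is the standard one (birth on 3 neighbours, survival on
# 2 or 3); grid size and number of steps are arbitrary choices.
import numpy as np

def life_step(grid):
    """One synchronous update of the game of life on a toroidal grid."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((12, 12), dtype=int)
# The canonical glider: it translates one cell diagonally every four steps.
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(4):
    grid = life_step(grid)
# After four steps the glider has moved one cell down and one to the right,
# preserving its shape: the self-maintaining unity examined in the article.
```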
NASA Astrophysics Data System (ADS)
Lee, K.; Chung, E.; Park, K.
2007-12-01
Many urbanized watersheds suffer from streamflow depletion and poor stream quality, which often negatively affect related factors such as in-stream and near-stream ecologic integrity and water supply. However, any watershed management that does not consider all potential risks is inadequate, since all hydrological components are closely related. This study therefore developed and applied a ten-step integrated watershed management (IWM) procedure to sustainably rehabilitate hydrologic cycles distorted by urbanization. Step 1 of this procedure is understanding the watershed components and processes. This study proposes not only water quantity/quality monitoring but also continuous water quantity/quality simulation and estimation of annual pollutant loads from the unit loads of all land uses. Step 2 is quantifying the watershed problem as potential flood damage (PFD), potential streamflow depletion (PSD), potential water quality deterioration (PWQD) and a watershed evaluation index (WEI). All indicators are selected following the sustainability concept of the Pressure-State-Response (PSR) model. All weights are estimated by the Analytic Hierarchy Process (AHP), and the four indices are calculated using composite programming, a multicriteria decision-making technique. In Step 3, residents' preferences on the management objectives, which consist of flood damage mitigation, prevention of streamflow depletion, and water quality enhancement, are quantified; the WEI can then be recalculated using these values. Step 4 sets specific goals and objectives based on the results of Steps 2 and 3. Objectives can include spatial flood allocation, instreamflow requirements and total maximum daily loads (TMDL). Steps 5 and 6 are developing all possible alternatives and eliminating the infeasible ones. Step 7 is analyzing the effectiveness of all remaining feasible alternatives. The water quantity criteria are the changes in low flow (Q275) and drought flow (Q355) of the flow duration curve and the number of days the instreamflow requirement is satisfied; the water quality criteria are the changes in average BOD concentration and total daily loads and the number of days the TMDL is satisfied. Step 8 involves the calculation of the AEI using various MCDM techniques. The indicators of the AEI are derived from the sustainability concept of the Drivers-Pressure-State-Impact-Response (DPSIR) framework, an extension of the PSR model; all previous results are used in this step. Step 9 estimates the benefits and costs of the alternatives. Discrete willingness to pay (WTP) for specific improvements of current watershed conditions is estimated by the choice experiment method, an economic valuation based on stated preference techniques. The WTP of a specific alternative is calculated by combining the AEI and the choice experiment results, so the benefit of an alternative can be obtained by multiplying WTP by the total number of households in the sub-watershed. Finally, in Step 10, the final alternatives are determined by comparing net benefits and benefit-cost ratios. Final alternatives derived from the proposed IWM procedure should not be carried out immediately but should be discussed by stakeholders and decision makers. However, since the plans obtained from these elaborated analyses reflect the sustainability concept, the alternatives are comparatively likely to be accepted. This ten-step procedure will be helpful in building decision support systems for sustainable IWM.
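Since Step 2 relies on AHP-derived weights, a minimal sketch of the standard AHP weight computation may be useful: weights are the principal eigenvector of a pairwise comparison matrix, and the consistency ratio checks whether the judgments are usable. The 3x3 comparison matrix below is a made-up example, not data from the study.

```python
# Minimal sketch of AHP weighting as used in Step 2: weights are the
# principal eigenvector of a pairwise comparison matrix. The matrix below
# is a made-up example, not data from the study.
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights and the consistency ratio (CR)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# Pairwise judgments among three indicators (PFD, PSD, PWQD) on a 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w, cr)   # weights ~[0.65, 0.23, 0.12]; CR < 0.1 is conventionally acceptable
```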
Lewelling, B.R.
2003-01-01
Riverine and palustrine system wetlands are a major ecological component of river basins in west-central Florida. Healthy wetlands are dependent upon the frequency and duration of periodic flooding or inundation. This report assesses the extent, area, depth, frequency, and duration of periodic flooding and the effects of potential surface-water withdrawals on the wetlands along Cypress Creek and the Peace, Alafia, North Prong Alafia, and South Prong Alafia Rivers. Results of the study were derived from step-backwater analysis performed at each of the rivers using the U.S. Army Corps of Engineers Hydrologic Engineering Center-River Analysis System (HEC-RAS) one-dimensional model. The step-backwater analysis was performed using selected daily mean discharges at the 10th, 50th, 70th, 80th, 90th, 98th, 99.5th, and 99.9th percentiles to compute extent of areal inundation, area of inundation, and hydraulic depth to assess the net reduction of areal inundation if 10 percent of the total river flow were diverted for potential withdrawals. The extent of areal inundation is determined by cross-sectional topography and the degree to which the channel is incised. Areal inundation occurs along the broad, low relief of the Cypress Creek floodplain during all selected discharge percentiles. However, areal inundation of the Peace and Alafia Rivers floodplains, which generally have deeply incised channels, occurs at or above discharges at the 80th percentile. The greatest area of inundation along the three rivers generally occurs between the 90th and 98th percentile discharges. The decrease in inundated area resulting from a potential 10-percent withdrawal in discharge ranged as follows: Cypress Creek, 22 to 395 acres (1.7 to 8.4 percent); Peace River, 17 to 1,900 acres (2.1 to 13.6 percent); Alafia River, 1 to 90 acres (1 to 19.6 percent); North Prong Alafia River, 1 to 46 acres (0.7 to 23.4 percent); and South Prong Alafia River, 1 to 75 acres (1.5 to 13.4 percent).
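As a concrete illustration of the percentile-based inputs to the step-backwater runs, here is a small sketch that computes the selected daily-mean-discharge percentiles from a gage record, together with the corresponding 10-percent-withdrawal discharges; the discharge record itself is synthetic, not USGS gage data.

```python
# Sketch: selected daily-mean-discharge percentiles and the corresponding
# 10-percent-withdrawal discharges used as step-backwater model inputs.
# The discharge record below is synthetic, not USGS gage data.
import numpy as np

rng = np.random.default_rng(0)
daily_q = rng.lognormal(mean=4.0, sigma=1.0, size=30 * 365)  # cfs, synthetic

percentiles = [10, 50, 70, 80, 90, 98, 99.5, 99.9]
for p in percentiles:
    q = np.percentile(daily_q, p)
    print(f"{p:>5}th percentile: {q:8.1f} cfs -> with 10% withdrawal: {0.9 * q:8.1f} cfs")
```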
Jahnke, Heinz-Georg; Steel, Daniella; Fleischer, Stephan; Seidel, Diana; Kurz, Randy; Vinz, Silvia; Dahlenborg, Kerstin; Sartipy, Peter; Robitzki, Andrea A.
2013-01-01
Unexpected adverse effects on the cardiovascular system remain a major challenge in the development of novel active pharmaceutical ingredients (API). To overcome the current limitations of animal-based in vitro and in vivo test systems, stem cell-derived human cardiomyocyte clusters (hCMC) offer the opportunity for highly predictive pre-clinical testing. The three-dimensional structure of hCMC appears more representative of the tissue milieu than traditional monolayer cell culture. However, there is a lack of long-term, real-time monitoring systems for tissue-like cardiac material. To address this issue, we have developed a microcavity array (MCA)-based label-free monitoring system that eliminates the need for critical hCMC adhesion and outgrowth steps; instead, field potential-derived action potential recording is possible immediately after positioning within the microcavity. Moreover, this approach allows extended observation of adverse effects on hCMC. For the first time, we describe herein the monitoring of hCMC over 35 days while preserving the hCMC structure and electrophysiological characteristics. Furthermore, we demonstrate the sensitive detection and quantification of adverse API effects using E4031, doxorubicin, and noradrenaline directly on unaltered 3D cultures. The MCA system provides multi-parameter analysis capabilities incorporating field potential recording, impedance spectroscopy, and optical read-outs on individual clusters, giving comprehensive insight into induced cellular alterations within a complex cardiac culture over days or even weeks. PMID:23861955
Development and Validation of a Multimedia-based Assessment of Scientific Inquiry Abilities
NASA Astrophysics Data System (ADS)
Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao
2015-09-01
The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) that leverages this potential to cover a more comprehensive construct of inquiry abilities and to target secondary school students in different grades. We implemented five steps derived from the construct modeling approach to design MASIA. During the implementation, multiple sources of evidence were collected in the pilot testing and Rasch modeling steps to support the validity of MASIA. In particular, through the participation of 1,066 8th and 11th graders, MASIA showed satisfactory psychometric properties for discriminating students with different levels of inquiry abilities across 101 items in 29 tasks when Rasch models were applied. Additionally, the Wright map indicated that MASIA offered accurate information about students' inquiry abilities because the distributions of student abilities and item difficulties were comparable. The analysis results also suggested that MASIA offered precise measures of inquiry abilities when the components (questioning, experimenting, analyzing, and explaining) were regarded as a coherent construct. Finally, the increasing mean difficulty thresholds of item responses across the three performance levels of all sub-abilities supported the alignment between our scoring rubrics and our inquiry framework. Together with other sources of validity evidence from the pilot testing, the results support the validity of MASIA.
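Because the validation hinges on Rasch modeling and the Wright map, a minimal sketch of the dichotomous Rasch model may help: the probability of a correct response depends only on the difference between person ability and item difficulty, which is what puts abilities and difficulties on one comparable logit scale. The values below are illustrative.

```python
# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
# Person abilities (theta) and item difficulties (b) share one logit scale,
# which is what a Wright map exploits. Values below are illustrative.
import math

def rasch_p(theta, b):
    """Probability that a person of ability theta answers an item of
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))   # 0.5: ability equals difficulty
print(rasch_p(1.0, -0.5))  # ~0.82: an easy item for an able student
```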
Meunier, Carl J; Roberts, James G; McCarty, Gregory S; Sombers, Leslie A
2017-02-15
Background-subtracted fast-scan cyclic voltammetry (FSCV) has emerged as a powerful analytical technique for monitoring subsecond molecular fluctuations in live brain tissue. Despite increasing utilization of FSCV, efforts to improve the accuracy of quantification have been limited due to the complexity of the technique and the dynamic recording environment. It is clear that variable electrode performance renders calibration necessary for accurate quantification; however, the nature of in vivo measurements can make conventional postcalibration difficult, or even impossible. Analyte-specific voltammograms and scaling factors that are critical for quantification can shift or fluctuate in vivo. This is largely due to impedance changes, and the effects of impedance on these measurements have not been characterized. We have previously reported that the background current can be used to predict electrode-specific scaling factors in situ. In this work, we employ model circuits to investigate the impact of impedance on FSCV measurements. Additionally, we take another step toward in situ electrode calibration by using the oxidation potential of quinones on the electrode surface to accurately predict the oxidation potential for dopamine at any point in an electrochemical experiment, as both are dependent on impedance. The model, validated both in adrenal slice and live brain tissue, enables information encoded in the shape of the background voltammogram to determine electrochemical parameters that are critical for accurate quantification. This improves data interpretation and provides a significant next step toward more automated methods for in vivo data analysis.
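The "background-subtracted" in FSCV refers to an operation the abstract takes for granted: the average of a few scans collected just before an event is subtracted from each subsequent voltammogram so that the small faradaic signal is visible on top of the large capacitive background, the same background current the authors exploit for in situ calibration. Here is a hedged numpy sketch; the array shapes and the number of averaged scans are arbitrary choices.

```python
# Sketch of background subtraction in FSCV: subtract the average of a few
# pre-event scans from every voltammogram. Shapes and the number of
# background scans averaged (10) are arbitrary illustrative choices.
import numpy as np

def background_subtract(scans, n_background=10):
    """scans: (n_scans, n_points) array of current vs. applied potential.
    Returns background-subtracted voltammograms."""
    background = scans[:n_background].mean(axis=0)
    return scans - background

# Example with synthetic data: 500 scans of 1000 points each.
scans = np.random.default_rng(1).normal(size=(500, 1000))
subtracted = background_subtract(scans)
```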
Patil, Maheshkumar Prakash; Singh, Rahul Dheerendra; Koli, Prashant Bhimrao; Patil, Kalpesh Tumadu; Jagdale, Bapu Sonu; Tipare, Anuja Rajesh; Kim, Gun-Do
2018-05-25
The green, one-step synthesis of silver nanoparticles (AgNPs) has been proposed as simple and ecofriendly. In the present study, a flower extract of Madhuca longifolia was used for the reduction of silver nitrate into AgNPs, with phytochemicals from the flower extract acting as reducing and stabilizing agents. The synthesized AgNPs were spherical and oval shaped and approximately 30-50 nm in size. The appearance of a brown color in the reaction mixture is a primary indication of AgNP formation, which was confirmed by a UV-visible absorption peak at 436 nm. Energy-dispersive X-ray spectra and X-ray diffraction analysis together confirm that the product contains silver and silver chloride nanoparticles. Zeta potential analysis indicates the presence of negative charge on the synthesized AgNPs, and the FT-IR study indicates the involvement of functional groups from the extract in AgNP synthesis. The synthesized AgNPs show potential antibacterial activity against Gram-positive and Gram-negative pathogens. M. longifolia flower is thus a good source for AgNP synthesis, and the synthesized AgNPs are applicable as antibacterial agents in therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Trindade, Fábio; Ferreira, Rita; Magalhães, Beatriz; Leite-Moreira, Adelino; Falcão-Pires, Inês; Vitorino, Rui
2018-01-16
Nowadays we are surrounded by a plethora of bioinformatics tools powerful enough to deal with the large amounts of data arising from proteomic studies, but whose proper application is sometimes hard to work out. We therefore used a specific clinical problem - discriminating the pathophysiology and potential biomarkers of two similar cardiovascular diseases, aortic valve stenosis (AVS) and coronary artery disease (CAD) - to build a step-by-step guide through four bioinformatics tools: STRING, DisGeNET, Cytoscape and ClueGO. Proteome data were collected from articles available on PubMed centered on proteomic studies enrolling subjects with AVS or CAD. Through the gene ontology analysis provided by STRING and ClueGO, we could identify biological phenomena specifically associated with AVS, such as down-regulation of elastic fiber assembly, and with CAD, such as up-regulation of plasminogen activation. Moreover, through Cytoscape and DisGeNET we could pinpoint surrogate markers either for AVS (e.g. popeye domain containing protein 2 and 28S ribosomal protein S36, mitochondrial) or for CAD (e.g. ankyrin repeat and SOCS box protein 7) that deserve future validation. Data recycling and integration, as well as research orientation, are among the main advantages of resorting to bioinformatics analysis; hence these tutorials can be of great convenience for proteomics investigators. As we saw for aortic valve stenosis and coronary artery disease, it can be of great relevance to perform preliminary bioinformatics analysis of already published proteomics data. It not only saves time in the lab (avoiding work duplication) but also points out new hypotheses to explain the phenotypic presentation of the diseases, as well as new surrogate markers with clinical relevance that deserve future scrutiny. These steps can be easily accomplished by following our tutorial for STRING, DisGeNET, Cytoscape and ClueGO. Copyright © 2017 Elsevier B.V. All rights reserved.
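For readers who prefer scripting over the web interface, here is a hedged sketch of querying the STRING REST API for functional enrichment of a small protein list, the kind of analysis the tutorial walks through interactively. The endpoint, parameter names, and response fields reflect the public API at https://string-db.org/api as best understood here; treat them as assumptions and verify against the current STRING documentation, and note the gene symbols are illustrative only.

```python
# Hedged sketch: functional enrichment of a protein list via the STRING
# REST API. Endpoint, parameters, and response fields are assumptions to
# be checked against the current STRING documentation.
import requests

proteins = ["MYH7", "PLG", "ELN"]        # illustrative gene symbols only
url = "https://string-db.org/api/json/enrichment"
params = {
    "identifiers": "\r".join(proteins),  # STRING expects CR-separated IDs
    "species": 9606,                     # NCBI taxon for Homo sapiens
}
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()
for record in response.json():
    # Assumed field names in the enrichment records.
    print(record["category"], record["term"], record["description"])
```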
Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.
Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda
2011-03-15
We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: is the ranking network of a rhesus macaque society more like a kingdom or a corporation? The three steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance, a necessary constraint on the ranking relations among all individual macaques, and from the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials of all network members, which requires accommodating the heterogeneous measurement error inherent in behavioral data. The second step estimates the social rank of each individual by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features of the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. For validation purposes, we also reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
Duz, Marco; Marshall, John F; Parkin, Tim
2017-06-29
The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from a SS/WS selected subset of a large database than to search the original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. ©Marco Duz, John F Marshall, Tim Parkin. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 29.06.2017.
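The three-dictionary logic described above reduces to simple set algebra, final cases = (inclusion − exclusion) ∪ reinclusion, as the toy sketch below shows on a few free-text rows. The dictionaries here are placeholders for illustration, not the validated dictionaries from the study.

```python
# The study's three-dictionary classification reduces to set algebra:
# final = (inclusion - exclusion) | reinclusion. The terms below are toy
# placeholders, not the validated dictionaries from the study.

records = {
    1: "horse presented with colic, resolved with analgesia",
    2: "no signs of colic today, routine vaccination",
    3: "colic ruled out initially but recurrent colic signs this evening",
    4: "dental examination, no abnormalities",
}

include_terms = ["colic"]
exclude_terms = ["no signs of colic", "colic ruled out"]
reinclude_terms = ["recurrent colic"]

def matches(text, terms):
    return any(t in text for t in terms)

included = {k for k, v in records.items() if matches(v, include_terms)}
excluded = {k for k, v in records.items() if matches(v, exclude_terms)}
reincluded = {k for k, v in records.items() if matches(v, reinclude_terms)}

cases = (included - excluded) | reincluded
print(sorted(cases))  # [1, 3]: record 3 is rescued by the reinclusion step
```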
Fritz, Laura; Hadwiger, Markus; Geier, Georg; Pittino, Gerhard; Gröller, M Eduard
2009-01-01
This paper describes advanced volume visualization and quantification for applications in non-destructive testing (NDT), resulting in novel and highly effective interactive workflows for NDT practitioners. We employ a visual approach to explore and quantify the features of interest, based on transfer functions in the parameter spaces of specific application scenarios; examples are the orientations of fibres or the roundness of particles. The applicability and effectiveness of our approach are illustrated using two scenarios of high practical relevance. First, we discuss the analysis of Steel Fibre Reinforced Sprayed Concrete (SFRSpC). We investigate the orientations of the enclosed steel fibres and their distribution, depending on the concrete's application direction. This is a crucial step in assessing the material's behavior under mechanical stress, which is still in its infancy and therefore a hot topic in the building industry. The second application scenario is the designation of the microstructure of ductile cast irons with respect to the contained graphite, corresponding to the requirements of ISO standard 945-1, which deals with 2D metallographic samples. We illustrate how the necessary analysis steps can be carried out much more efficiently using our system for 3D volumes. Overall, we show that a visual approach with custom transfer functions in specific application domains offers significant benefits and has the potential to greatly improve and optimize the workflows of domain scientists and engineers.
Krämer, Christina E M; Wiechert, Wolfgang; Kohlheyer, Dietrich
2016-09-01
Conventional propidium iodide (PI) staining requires the execution of multiple steps prior to analysis, potentially affecting assay results as well as cell vitality. In this study, this multistep analysis method has been transformed into a single-step, non-toxic, real-time method via live-cell imaging during perfusion with 0.1 μM PI inside a microfluidic cultivation device. Dynamic PI staining was an effective live/dead analytical tool and demonstrated consistent results for single-cell death initiated by direct or indirect triggers. Application of this method for the first time revealed the apparent antibiotic tolerance of wild-type Corynebacterium glutamicum cells, as indicated by the conversion of violet fluorogenic calcein acetoxymethyl ester (CvAM). Additional implementation of this method provided insight into the induced cell lysis of Escherichia coli cells expressing a lytic toxin-antitoxin module, providing evidence for non-lytic cell death and cell resistance to toxin production. Finally, our dynamic PI staining method distinguished necrotic-like and apoptotic-like cell death phenotypes in Saccharomyces cerevisiae among predisposed descendants of nutrient-deprived ancestor cells using PO-PRO-1 or green fluorogenic calcein acetoxymethyl ester (CgAM) as counterstains. The combination of single-cell cultivation, fluorescent time-lapse imaging, and PI perfusion facilitates spatiotemporally resolved observations that deliver new insights into the dynamics of cellular behaviour.
Systems analysis and improvement to optimize pMTCT (SAIA): a cluster randomized trial
2014-01-01
Background Despite significant increases in global health investment and the availability of low-cost, efficacious interventions to prevent mother-to-child HIV transmission (pMTCT) in low- and middle-income countries with high HIV burden, the translation of scientific advances into effective delivery strategies has been slow, uneven and incomplete. As a result, pediatric HIV infection remains largely uncontrolled. A five-step, facility-level systems analysis and improvement intervention (SAIA) was designed to maximize effectiveness of pMTCT service provision by improving understanding of inefficiencies (step one: cascade analysis), guiding identification and prioritization of low-cost workflow modifications (step two: value stream mapping), and iteratively testing and redesigning these modifications (steps three through five). This protocol describes the SAIA intervention and methods to evaluate the intervention’s impact on reducing drop-offs along the pMTCT cascade. Methods This study employs a two-arm, longitudinal cluster randomized trial design. The unit of randomization is the health facility. A total of 90 facilities were identified in Côte d’Ivoire, Kenya and Mozambique (30 per country). A subset was randomly selected and assigned to intervention and comparison arms, stratified by country and service volume, resulting in 18 intervention and 18 comparison facilities across all three countries, with six intervention and six comparison facilities per country. The SAIA intervention will be implemented for six months in the 18 intervention facilities. Primary trial outcomes are designed to assess improvements in the pMTCT service cascade, and include the percentage of pregnant women being tested for HIV at the first antenatal care visit, the percentage of HIV-infected pregnant women receiving adequate prophylaxis or combination antiretroviral therapy in pregnancy, and the percentage of newborns exposed to HIV in pregnancy receiving an HIV diagnosis eight weeks postpartum. The Consolidated Framework for Implementation Research (CFIR) will guide collection and analysis of qualitative data on implementation process. Discussion This study is a pragmatic trial that has the potential benefit of improving maternal and infant outcomes by reducing drop-offs along the pMTCT cascade. The SAIA intervention is designed to provide simple tools to guide decision-making for pMTCT program staff at the facility level, and to identify low cost, contextually appropriate pMTCT improvement strategies. Trial registration ClinicalTrials.gov NCT02023658 PMID:24885976
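Step one of SAIA, the cascade analysis, amounts to computing retention at each step of the pMTCT cascade relative to its own eligible population and flagging the largest drop-off; a minimal sketch follows, with made-up counts rather than trial data.

```python
# Minimal sketch of SAIA step one (cascade analysis): retention at each
# pMTCT step and the largest drop-off. All counts are made up.

cascade = [
    ("Attended first ANC visit",               1000),
    ("Tested for HIV at first ANC visit",       880),
    ("HIV-positive women on ART/prophylaxis",    60),
    ("HIV-exposed infants tested by 8 weeks",    41),
]

# Each step is expressed relative to its own eligible population; the
# denominator switches to the 72 women who tested positive, then to the
# 60 on treatment.
eligible = [1000, 1000, 72, 60]

worst = None
for (label, n), denom in zip(cascade, eligible):
    retention = n / denom
    print(f"{label:42s} {n:5d}/{denom:<5d} = {retention:5.1%}")
    if worst is None or retention < worst[1]:
        worst = (label, retention)
print(f"Largest drop-off: {worst[0]} ({worst[1]:.1%} retained)")
```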