Sample records for time-based potential step

  1. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  2. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  3. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237

  4. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
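
    The recalibration scheme summarized in this record can be sketched for a leaky integrate-and-fire neuron. Everything below (the toy model, parameter values, and function names) is an illustrative reconstruction under the record's description, not code from the paper:

```python
def rk2_step(v, t, dt, f):
    """One second-order Runge-Kutta (Heun) step of dv/dt = f(t, v)."""
    k1 = f(t, v)
    k2 = f(t + dt, v + dt * k1)
    return v + 0.5 * dt * (k1 + k2)

def lif_step_with_reset(v, t, dt, f, v_thresh=1.0, v_reset=0.0):
    """Advance an integrate-and-fire voltage by dt.  If the provisional RK2
    update crosses threshold, locate the spike time by linear interpolation,
    reset, and recalibrate the post-spike potential by integrating the unused
    remainder of the step from the interpolated spike time."""
    v_new = rk2_step(v, t, dt, f)
    if v_new < v_thresh:
        return v_new, None
    frac = (v_thresh - v) / (v_new - v)   # linear interpolant in [0, 1]
    t_spike = t + frac * dt
    v_after = rk2_step(v_reset, t_spike, (t + dt) - t_spike, f)
    return v_after, t_spike

# Toy leaky integrate-and-fire neuron with constant drive: dv/dt = -v/tau + I.
tau, drive = 0.02, 60.0
f = lambda t, v: -v / tau + drive
v, t, dt = 0.0, 0.0, 1e-4
spikes = []
for _ in range(1000):          # simulate 0.1 s
    v, t_sp = lif_step_with_reset(v, t, dt, f)
    if t_sp is not None:
        spikes.append(t_sp)
    t += dt
```

    Because the spike time is interpolated rather than snapped to a grid point, the reset error no longer dominates, which is what lets the underlying Runge-Kutta order survive the discontinuity.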

  5. Nonenzymatic Wearable Sensor for Electrochemical Analysis of Perspiration Glucose.

    PubMed

    Zhu, Xiaofei; Ju, Yinhui; Chen, Jian; Liu, Deye; Liu, Hong

    2018-05-25

    We report a nonenzymatic wearable sensor for electrochemical analysis of perspiration glucose. Multipotential steps are applied on a Au electrode, including a high negative pretreatment potential step for proton reduction which produces a localized alkaline condition, a moderate potential step for electrocatalytic oxidation of glucose under the alkaline condition, and a positive potential step to clean and reactivate the electrode surface for the next detection. Fluorocarbon-based materials are coated on the Au electrode to improve the selectivity and robustness of the sensor. A fully integrated wristband is developed for continuous real-time monitoring of perspiration glucose during physical activities, and for uploading the test result to a smartphone app via Bluetooth.

  6. Effects of Imperfect Dynamic Clamp: Computational and Experimental Results

    PubMed Central

    Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.

    2008-01-01

    In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of “virtual” ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but their performance can be degraded by imperfect real-time behavior. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999

  7. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
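
    The outer/inner splitting these methods exploit follows the general reversible reference-system propagator (r-RESPA) pattern. The toy fast/slow force split below is an invented stand-in for the paper's fragment-decomposition and range-separation schemes; all parameter values are illustrative:

```python
def respa_step(x, v, m, f_fast, f_slow, dT, n_inner):
    """One r-RESPA outer step: slow-force half kick, n_inner velocity-Verlet
    sub-steps under the fast force, then the closing slow-force half kick."""
    v += 0.5 * dT * f_slow(x) / m
    dt = dT / n_inner
    for _ in range(n_inner):
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dT * f_slow(x) / m
    return x, v

# Toy split: stiff spring (fast) plus weak spring (slow) acting on one particle.
k_fast, k_slow, m = 400.0, 1.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
x, v = 1.0, 0.0
E0 = 0.5 * m * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
for _ in range(2000):
    x, v = respa_step(x, v, m, f_fast, f_slow, dT=0.05, n_inner=10)
E = 0.5 * m * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
```

    The expensive slow force is evaluated once per outer step dT, while the cheap fast force is evaluated n_inner times; the same bookkeeping underlies the 2.5 fs outer steps quoted in the abstract.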

  8. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
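
    The quadratic time-step selection can be read as: choose the largest Δt for which a second-order Taylor estimate of the change in membrane potential stays within a tolerance. The sketch below follows that reading; the paper's el and tsr machinery is reduced to a simple growth cap, and all names, tolerances, and clamps are illustrative assumptions:

```python
import math

def quadratic_dt(dv, d2v, tol=0.1, dt_min=1e-4, dt_max=0.5):
    """Largest dt for which the Taylor estimate of the membrane-potential
    change, |dv*dt + 0.5*d2v*dt^2|, stays within tol.  Taking magnitudes
    gives 0.5*|d2v|*dt^2 + |dv|*dt - tol = 0; return its positive root,
    clamped to [dt_min, dt_max]."""
    a, b = 0.5 * abs(d2v), abs(dv)
    if a < 1e-12:                       # negligible curvature: linear estimate
        dt = tol / b if b > 1e-12 else dt_max
    else:
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    return min(max(dt, dt_min), dt_max)

def next_dt(dv, d2v, dt_prev, tol=0.1, growth=2.0):
    """Time-step-restriction-like cap: never grow the step by more than
    `growth`x, so the membrane potential cannot change abruptly."""
    return min(quadratic_dt(dv, d2v, tol), growth * dt_prev)
```

    With a steep upstroke (large |dv| or |d2v|) the step shrinks toward dt_min; in the flat diastolic region it grows toward dt_max, mirroring the fine-near-peak, coarse-in-smooth-region behavior described in the abstract.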

  9. A time-based potential step analysis of electrochemical impedance incorporating a constant phase element: a study of commercially pure titanium in phosphate buffered saline.

    PubMed

    Ehrensberger, Mark T; Gilbert, Jeremy L

    2010-05-01

    The measurement of electrochemical impedance is a valuable tool to assess the electrochemical environment that exists at the surface of metallic biomaterials. This article describes the development and validation of a new technique, potential step impedance analysis (PSIA), to assess the electrochemical impedance of materials whose interface with solution can be modeled as a simplified Randles circuit that is modified with a constant phase element. PSIA is based upon applying a step change in voltage to a working electrode and analyzing the subsequent current transient response in a combined time and frequency domain technique. The solution resistance, polarization resistance, and interfacial capacitance are found directly in the time domain. The experimental current transient is numerically transformed to the frequency domain to determine the constant phase exponent, alpha. This combined time and frequency approach was tested using current transients generated from computer simulations, from resistor-capacitor breadboard circuits, and from commercially pure titanium samples immersed in phosphate buffered saline and polarized at -800 mV or +1000 mV versus Ag/AgCl. It was shown that PSIA calculates equivalent admittance and impedance behavior over this range of potentials when compared to standard electrochemical impedance spectroscopy. This current transient approach characterizes the frequency response of the system without the need for expensive frequency response analyzers or software. Copyright 2009 Wiley Periodicals, Inc.
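
    For the simplified Randles circuit (solution resistance Rs in series with polarization resistance Rp in parallel with capacitance C), a step of height ΔE produces the transient I(t) = I_inf + (I_0 - I_inf)·exp(-t/τ), with I_0 = ΔE/Rs, I_inf = ΔE/(Rs+Rp) and τ = C·Rs·Rp/(Rs+Rp). A hedged sketch of the time-domain part of such an analysis on synthetic data (an ideal capacitor, i.e. α = 1, so the frequency-domain CPE step of PSIA is omitted; all values are invented):

```python
import numpy as np

def randles_transient(t, dE, Rs, Rp, C):
    """Current response of the simplified Randles circuit (Rs in series with
    Rp parallel C, ideal capacitor) to a potential step of height dE."""
    tau = C * Rs * Rp / (Rs + Rp)
    i_inf = dE / (Rs + Rp)
    i0 = dE / Rs
    return i_inf + (i0 - i_inf) * np.exp(-t / tau)

def fit_randles(t, i, dE):
    """Recover Rs, Rp, C directly in the time domain: Rs from the initial
    current, Rp from the steady-state current, tau from a log-linear fit of
    the decaying part of the transient."""
    i0, i_inf = i[0], i[-1]
    Rs = dE / i0
    Rp = dE / i_inf - Rs
    mask = (i - i_inf) > 1e-12
    slope = np.polyfit(t[mask], np.log(i[mask] - i_inf), 1)[0]
    tau = -1.0 / slope
    C = tau * (Rs + Rp) / (Rs * Rp)
    return Rs, Rp, C

# Synthetic transient with known circuit values, then recovery.
t = np.linspace(0.0, 0.5, 2001)
i = randles_transient(t, dE=0.1, Rs=100.0, Rp=1000.0, C=1e-4)
Rs, Rp, C = fit_randles(t, i, dE=0.1)
```

    The constant phase exponent α would then be obtained by numerically transforming the measured transient to the frequency domain, as the record describes.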

  10. Defining the Costs of Reusable Flexible Ureteroscope Reprocessing Using Time-Driven Activity-Based Costing.

    PubMed

    Isaacson, Dylan; Ahmad, Tessnim; Metzler, Ian; Tzou, David T; Taguchi, Kazumi; Usawachintachit, Manint; Zetumer, Samuel; Sherer, Benjamin; Stoller, Marshall; Chi, Thomas

    2017-10-01

    Careful decontamination and sterilization of reusable flexible ureteroscopes used in ureterorenoscopy cases prevent the spread of infectious pathogens to patients and technicians. However, inefficient reprocessing and unavailability of ureteroscopes sent out for repair can contribute to expensive operating room (OR) delays. Time-driven activity-based costing (TDABC) was applied to describe the time and costs involved in reprocessing. Direct observation and timing were performed for all steps in reprocessing of reusable flexible ureteroscopes following operative procedures. The times needed for each step by which damaged ureteroscopes identified during reprocessing are sent for repair were estimated through interviews with purchasing analyst staff. Process maps were created for reprocessing and repair detailing individual step times and their variances. Cost data for labor and disposables used were applied to calculate per-minute and average step costs. Ten ureteroscopes were followed through reprocessing. Process mapping for ureteroscope reprocessing averaged 229.0 ± 74.4 minutes, whereas sending a ureteroscope for repair required an estimated 143 minutes per repair. Most steps demonstrated low variance between timed observations. Ureteroscope drying was the longest and highest-variance step at 126.5 ± 55.7 minutes and was highly dependent on manual air flushing through the ureteroscope working channel and ureteroscope positioning in the drying cabinet. Reprocessing costs totaled $96.13 per episode, including the cost of labor and disposable items. Utilizing TDABC delineates the full spectrum of costs associated with ureteroscope reprocessing and identifies areas for process improvement to drive value-based care. At our institution, ureteroscope drying was one clearly identified target area. Implementing training in ureteroscope drying technique could save up to 2 hours per reprocessing event, potentially preventing expensive OR delays.
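
    The TDABC calculation itself reduces to simple arithmetic: each observed step duration times a per-minute resource rate, summed, plus disposables. The step names, durations, and rates below are invented placeholders for illustration, not the study's measured values:

```python
def tdabc_cost(steps, disposables):
    """Time-driven activity-based cost: sum of (minutes x per-minute rate)
    over all process steps, plus the cost of disposable items."""
    return sum(minutes * rate for _, minutes, rate in steps) + disposables

# Hypothetical reprocessing steps: (name, observed minutes, $ per minute).
steps = [
    ("leak testing",    10.0, 0.50),
    ("manual cleaning", 25.0, 0.50),
    ("sterilization",   60.0, 0.10),
    ("drying",         120.0, 0.05),
]
total = tdabc_cost(steps, disposables=15.0)  # 5 + 12.5 + 6 + 6 + 15 = 44.5
```

    Framing each step this way makes it obvious which step dominates cost per episode and is therefore the best target for process improvement.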

  11. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller is the time delay. We also have evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  12. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  13. Magnetic timing valves for fluid control in paper-based microfluidics.

    PubMed

    Li, Xiao; Zwanenburg, Philip; Liu, Xinyu

    2013-07-07

    Multi-step analytical tests, such as an enzyme-linked immunosorbent assay (ELISA), require delivery of multiple fluids into a reaction zone and timing of the incubation at different steps. This paper presents a new type of paper-based magnetic valve that can keep time and turn a fluidic flow on or off accordingly, enabling timed fluid control in paper-based microfluidics. The timing capability of these valves is realized using a paper timing channel with an ionic resistor, which can detect the event of a solution flowing through the resistor and trigger an electromagnet (through a simple circuit) to open or close a paper cantilever valve. Based on this principle, we developed normally-open and normally-closed valves with a timing period up to 30.3 ± 2.1 min (sufficient for an ELISA on paper-based platforms). Using the normally-open valve, we performed an enzyme-based colorimetric reaction commonly used for signal readout of ELISAs, which requires a timed delivery of an enzyme substrate to a reaction zone. This design adds a new fluid-control component to the tool set for developing paper-based microfluidic devices, and has the potential to improve the user-friendliness of these devices.

  14. Quantization of charged fields in the presence of critical potential steps

    NASA Astrophysics Data System (ADS)

    Gavrilov, S. P.; Gitman, D. M.

    2016-02-01

    QED with strong external backgrounds that can create particles from the vacuum is well developed for the so-called t-electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at the time infinity. Examples are time-independent nonuniform electric fields that are concentrated in restricted space areas. The latter backgrounds represent a kind of spatial x-electric potential step for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in the t-electric potential steps are not directly applicable to the x-electric potential steps, and their generalization for x-electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with an x-electric potential step and have found in- and out-creation and annihilation operators that allow one to have a particle interpretation of the physical system under consideration. To identify the in- and out-operators, we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x-electric potential step, with a subsequent QFT analysis of the correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as scattering, reflection, and electron-positron pair creation without radiation corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in- and out-particles and photons. These diagrams have formally the usual form, but contain special propagators. Expressions for these propagators in terms of in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x-electric potential steps, namely, to the Sauter potential and to the Klein step.

  15. Constraint Preserving Schemes Using Potential-Based Fluxes. I. Multidimensional Transport Equations (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    u*_{i,j} = u^n_{i,j} − Δt^n E^n_{i,j},  u**_{i,j} = u*_{i,j} − Δt^n E^*_{i,j},  u^{n+1}_{i,j} = (1/2)(u^n_{i,j} + u**_{i,j}).  (2.26)  An alternative first-order accurate genuinely multi-dimensional time stepping is the extended Lax-Friedrichs type time stepping, u^{n+1}_{i,j} = (1/8)(4u^n_{i,j} + u^n_{i+1,j} + u^n_{i,j+1} + u^n_{i-1,j} + u^n_{i,j-1}) − Δt^n E^n_{i,j}.  (2.27)
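
    A minimal sketch of the extended Lax-Friedrichs-type time stepping (2.27) on a periodic 2-D grid; the upwind residual used here is an invented stand-in for the paper's potential-based flux divergence E:

```python
import numpy as np

def extended_lf_step(u, E, dt):
    """u^{n+1}_{ij} = (1/8)(4u_ij + u_{i+1,j} + u_{i,j+1} + u_{i-1,j}
                            + u_{i,j-1}) - dt * E_ij,
    with periodic neighbours obtained via np.roll."""
    avg = (4.0 * u
           + np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0)
           + np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1)) / 8.0
    return avg - dt * E

def upwind_residual(u, dx):
    """Illustrative residual E ~ du/dx for unit-speed x-advection."""
    return (u - np.roll(u, 1, axis=0)) / dx

n = 64
dx = 1.0 / n
x = np.arange(n) * dx
u = np.sin(2.0 * np.pi * x)[:, None] * np.ones((1, n))
mass0 = u.sum()
for _ in range(50):                      # CFL number dt/dx = 0.5
    u = extended_lf_step(u, upwind_residual(u, dx), dt=0.5 * dx)
```

    At this CFL number every neighbour weight in the combined update is non-negative, so the scheme is monotone: total mass is conserved and no new extrema appear.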

  16. Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.

    PubMed

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
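
    The alarm logic can be sketched generically: fit a linear autoregressive model to recent CGM samples, predict six steps (30 min at 5-min sampling) ahead, and alarm when the prediction falls below a hypoglycemia threshold. Ordinary least squares stands in for the recursive PLS of the paper; the threshold, model order, and data are illustrative assumptions:

```python
import numpy as np

def fit_ar(y, order=3):
    """Least-squares fit of y_t = a1*y_{t-order} + ... + a_order*y_{t-1} + c,
    lag columns ordered oldest to newest; returns [a1..a_order, c]."""
    y = np.asarray(y, dtype=float)
    rows = [y[k:k + order] for k in range(len(y) - order)]
    X = np.column_stack([np.array(rows), np.ones(len(rows))])
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return coef

def predict_ahead(y, coef, steps=6):
    """Recursive multi-step-ahead prediction from the fitted AR model."""
    order = len(coef) - 1
    window = list(np.asarray(y, dtype=float)[-order:])
    for _ in range(steps):
        window.append(float(np.dot(coef[:-1], window[-order:]) + coef[-1]))
    return window[-1]

def hypo_alarm(cgm, threshold=70.0, steps=6, order=3):
    """True if the 30-min-ahead glucose prediction is below threshold (mg/dL)."""
    coef = fit_ar(cgm, order)
    return predict_ahead(cgm, coef, steps) < threshold

# Example: steadily falling glucose readings trigger the early alarm.
falling = list(range(180, 90, -5))
alarm = hypo_alarm(falling)
```

    The recursive, moving-window update of the real method would refit the model at every new sample, which is what lets the alarm adapt to subject-specific dynamics.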

  17. Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models

    PubMed Central

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. PMID:23439179

  18. Home-based step training using videogame technology in people with Parkinson's disease: a single-blinded randomised controlled trial.

    PubMed

    Song, Jooeun; Paul, Serene S; Caetano, Maria Joana D; Smith, Stuart; Dibble, Leland E; Love, Rachelle; Schoene, Daniel; Menant, Jasmine C; Sherrington, Cathie; Lord, Stephen R; Canning, Colleen G; Allen, Natalie E

    2018-03-01

    To determine whether 12-week home-based exergame step training can improve stepping performance, gait and complementary physical and neuropsychological measures associated with falls in Parkinson's disease. A single-blinded randomised controlled trial. Community (experimental intervention), university laboratory (outcome measures). Sixty community-dwelling people with Parkinson's disease. Home-based step training using videogame technology. The primary outcomes were the choice stepping reaction time test and Functional Gait Assessment. Secondary outcomes included physical and neuropsychological measures associated with falls in Parkinson's disease, number of falls over six months and self-reported mobility and balance. Post intervention, there were no differences between the intervention (n = 28) and control (n = 25) groups in the primary or secondary outcomes, except for the Timed Up and Go test, where there was a significant difference in favour of the control group (P = 0.02). Intervention participants reported mobility improvement, whereas control participants reported mobility deterioration; the between-group difference on an 11-point scale was 0.9 (95% confidence interval: -1.8 to -0.1, P = 0.03). Interaction effects between intervention and disease severity on physical function measures were observed (P = 0.01 to P = 0.08), with seemingly positive effects for the low-severity group and potentially negative effects for the high-severity group. Overall, home-based exergame step training was not effective in improving the outcomes assessed. However, the improved physical function in the lower disease severity intervention participants, as well as the self-reported improved mobility in the intervention group, suggests that home-based exergame step training may have benefits for some people with Parkinson's disease.

  19. Unveiling the Biometric Potential of Finger-Based ECG Signals

    PubMed Central

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235

  20. Unveiling the biometric potential of finger-based ECG signals.

    PubMed

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.

  1. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed finite-element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is the most computationally expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
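
    The time-splitting idea (several explicit advective sub-steps, then one implicit dispersive step spanning the same interval) can be sketched in 1-D. First-order upwind advection and backward-Euler diffusion below are simple stand-ins for the Godunov and mixed-finite-element discretizations of TaRSE; all numbers are illustrative:

```python
import numpy as np

def advect_upwind(c, v, dx, dt):
    """Explicit first-order upwind step for dc/dt + v*dc/dx = 0
    (v > 0, periodic boundary)."""
    return c - v * dt / dx * (c - np.roll(c, 1))

def diffuse_implicit(c, D, dx, dt):
    """Backward-Euler step for dc/dt = D*d2c/dx2 (periodic), solved as a
    dense linear system for clarity."""
    n = len(c)
    r = D * dt / dx ** 2
    eye = np.eye(n)
    A = ((1.0 + 2.0 * r) * eye
         - r * np.roll(eye, 1, axis=1) - r * np.roll(eye, -1, axis=1))
    return np.linalg.solve(A, c)

def split_step(c, v, D, dx, dt_adv, n_adv):
    """n_adv explicit advective sub-steps, then one implicit dispersive
    step covering the whole interval n_adv * dt_adv."""
    for _ in range(n_adv):
        c = advect_upwind(c, v, dx, dt_adv)
    return diffuse_implicit(c, D, dx, n_adv * dt_adv)

n = 100
dx = 1.0 / n
grid = np.arange(n) * dx
c = np.exp(-((grid - 0.3) ** 2) / 0.002)   # sharp solute pulse
mass0 = c.sum()
for _ in range(20):                         # advective CFL = 0.5
    c = split_step(c, v=1.0, D=1e-3, dx=dx, dt_adv=0.5 * dx, n_adv=4)
```

    Only the advective sub-step is CFL-limited; the implicit dispersive solve is unconditionally stable, so it can span several advective steps, which is exactly the efficiency argument made in the abstract.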

  2. Cathodic Potential Dependence of Electrochemical Reduction of SiO2 Granules in Molten CaCl2

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Yasuda, Kouji; Nohira, Toshiyuki; Hagiwara, Rika; Homma, Takayuki

    2016-09-01

    As part of an ongoing fundamental study to develop a new process for producing solar-grade silicon, this paper examines the effects of cathodic potential on reduction kinetics, current efficiency, morphology, and purity of Si product during electrolysis of SiO2 granules in molten CaCl2 at 1123 K (850 °C). SiO2 granules were electrolyzed potentiostatically at different cathodic potentials (0.6, 0.8, 1.0, and 1.2 V vs Ca2+/Ca). The reduction kinetics was evaluated based on the growth of the reduced Si layer and the current behavior during electrolysis. The results suggest that a more negative cathodic potential is favorable for faster reduction. Current efficiencies in 60 minutes are greater than 65 pct at all the potentials examined. Si wires with sub-micron diameters are formed, and their morphologies show little dependence on the cathodic potential. The impurities in the Si product can be controlled at a low level. The rate-determining step for the electrochemical reduction of SiO2 granules in molten CaCl2 changes with time. At the initial stage of electrolysis, the electron transfer is the rate-determining step. At the later stage, the diffusion of O2- ions is the rate-determining step. The major cause of the decrease in reduction rate with increasing electrolysis time is the potential drop from the current collector to the reaction front due to the increased contact resistance among the reduced Si particles.

  3. Remotely Sensed Quantitative Drought Risk Assessment in Vulnerable Agroecosystems

    NASA Astrophysics Data System (ADS)

    Dalezios, N. R.; Blanta, A.; Spyropoulos, N. V.

    2012-04-01

    Hazard may be defined as a potential threat to humans and their welfare, and risk (or consequence) as the probability of a hazard occurring and creating loss. Drought is considered one of the major natural hazards, with significant impact on agriculture, environment, economy and society. This paper deals with drought risk assessment, which is the first step designed to find out what the problems are; it comprises three distinct steps, namely risk identification, risk estimation and risk evaluation, whereas risk management is not covered in this paper. There should also be a fourth step to address the need for feedback and to take post-audits of all risk assessment exercises. In particular, quantitative drought risk assessment is attempted by using statistical methods. For the quantification of drought, the Reconnaissance Drought Index (RDI) is employed, which is a new index based on hydrometeorological parameters, such as precipitation and potential evapotranspiration. The remotely sensed estimation of RDI is based on NOAA-AVHRR satellite data for a period of 20 years (1981-2001). The study area is Thessaly, central Greece, which is a drought-prone agricultural region characterized by vulnerable agriculture. Specifically, the undertaken drought risk assessment processes are specified as follows: 1. Risk identification: This step involves drought quantification and monitoring based on remotely sensed RDI and extraction of several features, such as severity, duration, areal extent, and onset and end time. Moreover, it involves a drought early warning system based on the above parameters. 2. Risk estimation: This step includes an analysis of drought severity, frequency and their relationships. 3. Risk evaluation: This step covers drought evaluation based on analysis of RDI images before and after each drought episode, which usually lasts one hydrological year (12 months). 
The results of these three-step drought assessment processes are considered quite satisfactory in a drought-prone region such as Thessaly in central Greece. Moreover, remote sensing has proven very effective in delineating spatial variability and features in drought monitoring and assessment.
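As a rough sketch of how an RDI series can be computed from precipitation and potential evapotranspiration: the initial value is the ratio of cumulative P to cumulative PET over a reference period, and a standardized form uses the ln-transformed ratios across years. This follows the commonly cited Tsakiris and Vangelis formulation; the abstract does not specify the exact variant used, and the function names are ours.

```python
import math

def rdi_alpha(precip, pet):
    """Initial RDI value: cumulative precipitation over cumulative PET
    for one reference period (e.g., one hydrological year)."""
    return sum(precip) / sum(pet)

def rdi_standardized(alphas):
    """Standardized RDI per year from the multi-year series of alpha values,
    using the ln-transformed ratios."""
    y = [math.log(a) for a in alphas]
    mean = sum(y) / len(y)
    std = math.sqrt(sum((v - mean) ** 2 for v in y) / len(y))
    return [(v - mean) / std for v in y]
```

Negative standardized values indicate drier-than-normal years, which is how drought severity classes are typically assigned.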

  4. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investments than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. Geographic Information System (GIS) software is a fundamental tool in the realisation of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realised several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps could be relevant in the choice of the extent of the evaluation. Low feasibility could lead to choosing not to evaluate uncertainty or to limit the number of scenario comparisons. Several computer models have been developed over time in order to evaluate the flood risk. GIS software is largely used to realise flood risk analysis. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used any time in future to support territorial decision making; the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. 
The use of GIS software to evaluate the flood risk requires personnel with a double professional specialisation. The professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and the improvement of the evaluation over time become a difficult task. The automation of this process should bring great advance in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the induced potential gains in terms of time and expertise needed in the analysis. A programming language is used within GIS software in order to automate hazard and vulnerability data combination and potential damages calculation. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains on flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the needs for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
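The automated second and third steps (overlaying hazard and vulnerability data, then calculating potential damages) can be sketched as follows. The piecewise-linear depth-damage curve and the cell-wise overlay are illustrative assumptions, not the authors' tool.

```python
def damage_fraction(depth, curve):
    """Interpolate a depth-damage curve given as sorted (depth, fraction) pairs;
    depths outside the curve are clamped to the end values."""
    if depth <= curve[0][0]:
        return curve[0][1]
    if depth >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

def total_damage(depths, values, curve):
    """Steps two and three combined: for each cell, pair the hazard layer
    (water depth) with the vulnerability layer (asset value) and sum damages."""
    return sum(damage_fraction(d, curve) * v for d, v in zip(depths, values))
```

Rerunning `total_damage` for each management scenario's depth raster is the repeated computation that automation makes cheap, which is what enables routine sensitivity and cost-benefit analyses.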

  5. Increasing physical activity in stroke survivors using STARFISH, an interactive mobile phone application: a pilot study.

    PubMed

    Paul, Lorna; Wyke, Sally; Brewster, Stephen; Sattar, Naveed; Gill, Jason M R; Alexander, Gillian; Rafferty, Danny; McFadyen, Angus K; Ramsay, Andrew; Dybus, Aleksandra

    2016-06-01

    Following stroke, people are generally less active and more sedentary which can worsen outcomes. Mobile phone applications (apps) can support change in health behaviors. We developed STARFISH, a mobile phone app-based intervention, which incorporates evidence-based behavior change techniques (feedback, self-monitoring and social support), in which users' physical activity is visualized by fish swimming. To evaluate the potential effectiveness of STARFISH in stroke survivors. Twenty-three people with stroke (12 women; age: 56.0 ± 10.0 years, time since stroke: 4.2 ± 4.0 years) from support groups in Glasgow completed the study. Participants were sequentially allocated in a 2:1 ratio to intervention (n = 15) or control (n = 8) groups. The intervention group followed the STARFISH program for six weeks; the control group received usual care. Outcome measures included physical activity, sedentary time, heart rate, blood pressure, body mass index, Fatigue Severity Scale, Instrumental Activity of Daily Living Scale, Ten-Meter Walk Test, Stroke Specific Quality of Life Scale, and Psychological General Well-Being Index. The average daily step count increased by 39.3% (4158 to 5791 steps/day) in the intervention group and reduced by 20.2% (3694 to 2947 steps/day) in the control group (p = 0.005 for group-time interaction). Similar patterns of data and group-time interaction were seen for walking time (p = 0.002) and fatigue (p = 0.003). There were no significant group-time interactions for other outcome measures. Use of STARFISH has the potential to improve physical activity and health outcomes in people after stroke and longer term intervention trials are warranted.

  6. Peridynamic thermal diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail

    This study presents the derivation of the ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time stepping scheme is adopted for the numerical solution of various benchmark problems with known solutions. It paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.
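As a loose illustration of explicit time stepping for nonlocal heat conduction, the toy 1-D update below lets each point exchange heat with all neighbours within a horizon delta. The microconductivity and discretization are illustrative; this is a simplified analogue, not the paper's ordinary state-based peridynamic formulation.

```python
def step_nonlocal_heat(T, x, delta, kappa, dt):
    """One forward-Euler step of a toy nonlocal heat conduction model:
    pairwise exchange within the horizon, weighted by inverse distance."""
    n = len(T)
    T_new = list(T)
    for i in range(n):
        rate = 0.0
        for j in range(n):
            if i != j:
                dist = abs(x[j] - x[i])
                if dist <= delta:
                    rate += kappa * (T[j] - T[i]) / dist
        T_new[i] = T[i] + dt * rate
    return T_new
```

Because the pairwise exchanges are antisymmetric, the total of T is conserved at every explicit step; as in any explicit scheme, dt must be small enough for stability.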

  7. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody-based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 & -3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective for eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to directly confirm that there was no cross-contamination with virus between column runs (guanidinium chloride or NaOH). To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process. A solvent/detergent step based on 1% Triton X-100 rapidly inactivated a range of enveloped viruses, achieving >6 log inactivation within 1 min of a 60 min treatment time. Virus removal by a virus filtration step was also confirmed to be effective for viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.
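The reduction values quoted above (>7 log, 5 log, 3 log) are log10 reduction factors. A minimal sketch of the arithmetic, using the standard convention that the factors of independent, mechanistically distinct steps add on the log scale (the titre numbers below are illustrative, not from the study):

```python
import math

def log_reduction(load_in, load_out):
    """Log10 reduction factor of one step: total virus load before vs after."""
    return math.log10(load_in / load_out)

def overall_reduction(step_lrfs):
    """Overall process reduction: independent steps add on the log scale."""
    return sum(step_lrfs)
```

For example, a step taking a spike of 1e9 infectious units down to 1e2 contributes a 7 log reduction, and three independent steps of 7, 5 and 3 log give a 15 log overall claim.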

  8. Asymmetry in Determinants of Running Speed During Curved Sprinting.

    PubMed

    Ishimura, Kazuhiro; Sakurai, Shinji

    2016-08-01

    This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.
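The top level of such a deterministic model expresses running speed as the product of step length and step frequency, with frequency set by the durations of the step phases. A minimal sketch with illustrative names and numbers, not the study's full model:

```python
def step_frequency(stance_time, flight_time):
    """Steps per second from the durations of the stance and flight phases."""
    return 1.0 / (stance_time + flight_time)

def running_speed(step_length, stance_time, flight_time):
    """Deterministic top level: speed = step length x step frequency."""
    return step_length * step_frequency(stance_time, flight_time)
```

Computing this separately for inside-leg and outside-leg steps is what exposes the kind of asymmetry the study reports: equal average speeds can hide opposite-signed differences in length and frequency.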

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  10. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
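The baseline these papers extend is standard multiple time stepping (RESPA), in which the fast force is integrated with several inner substeps per outer step of the slow force. A minimal single-particle sketch of that baseline idea follows; it is not the authors' resonance-free isokinetic scheme, which additionally couples the system to Nosé-Hoover chains.

```python
def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One RESPA outer step: half kick with the slow force, n_inner
    velocity-Verlet substeps with the fast force, then the closing half kick."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m       # half kick, slow (expensive) force
    for _ in range(n_inner):                  # inner loop, fast (cheap) force
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m       # closing half kick, slow force
    return x, v
```

The slow force is evaluated only once per outer step, which is the source of the speedup; the resonance phenomena discussed above are what limit how large dt_outer can be made in this plain scheme.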

  11. A methodology to event reconstruction from trace images.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre

    2015-03-01

    The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous step. The methodology is not linear, however, but a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. 
This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  13. Spin-density functional theory treatment of He+-He collisions

    NASA Astrophysics Data System (ADS)

    Baxter, Matthew; Kirchner, Tom; Engel, Eberhard

    2016-09-01

    The He+-He collision system presents an interesting challenge to theory. On one hand, a full treatment of the three-electron dynamics constitutes a massive computational problem that has not been attempted yet; on the other hand, simplified independent-particle-model based descriptions may only provide partial information on either the transitions of the initial target electrons or on the transitions of the projectile electron, depending on the choice of atomic model potentials. We address the He+-He system within the spin-density functional theory framework on the exchange-only level. The Krieger-Li-Iafrate (KLI) approximation is used to calculate the exchange potentials for the spin-up and spin-down electrons, which ensures the correct asymptotic behavior of the effective (Kohn-Sham) potential consisting of exchange, Hartree and nuclear Coulomb potentials. The orbitals are propagated with the two-center basis generator method. In each time step, simplified versions of them are fed into the KLI equations to calculate the Kohn-Sham potential, which, in turn, is used to generate the orbitals in the next time step. First results for the transitions of all electrons and the resulting charge-changing total cross sections will be presented at the conference. Work supported by NSERC, Canada.

  14. Event-related potentials reveal linguistic suppression effect but not enhancement effect on categorical perception of color.

    PubMed

    Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong

    2014-08-01

    The present study used the event-related potential technique to investigate the nature of linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to 1-WC target color overlapped with that to 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in RVF is due to linguistic suppression on within-category color discrimination but not between-category color enhancement, and that the effect is independent of early perceptual processes. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  15. Multivariate assessment of event-related potentials with the t-CWT method.

    PubMed

    Bostanov, Vladimir

    2015-11-05

    Event-related brain potentials (ERPs) are usually assessed with univariate statistical tests although they are essentially multivariate objects. Brain-computer interface applications are a notable exception to this practice, because they are based on multivariate classification of single-trial ERPs. Multivariate ERP assessment can be facilitated by feature extraction methods. One such method is t-CWT, a mathematical-statistical algorithm based on the continuous wavelet transform (CWT) and Student's t-test. This article begins with a geometric primer on some basic concepts of multivariate statistics as applied to ERP assessment in general and to the t-CWT method in particular. Further, it presents for the first time a detailed, step-by-step, formal mathematical description of the t-CWT algorithm. A new multivariate outlier rejection procedure based on principal component analysis in the frequency domain is presented as an important pre-processing step. The MATLAB and GNU Octave implementation of t-CWT is also made publicly available for the first time as free and open source code. The method is demonstrated on some example ERP data obtained in a passive oddball paradigm. Finally, some conceptually novel applications of the multivariate approach in general and of the t-CWT method in particular are suggested and discussed. Hopefully, the publication of both the t-CWT source code and its underlying mathematical algorithm along with a didactic geometric introduction to some basic concepts of multivariate statistics would make t-CWT more accessible to both users and developers in the field of neuroscience research.
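The scoring half of the t-CWT idea, a Student t-statistic computed per coefficient across trials, can be sketched as follows. The wavelet transform itself is omitted (any precomputed coefficient matrix will do), and the function names are ours, not the published algorithm.

```python
def t_statistic(group_a, group_b):
    """Two-sample Student t-statistic (pooled variance) for one feature."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    sp = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (sp * (1 / na + 1 / nb)) ** 0.5

def best_feature(trials_a, trials_b):
    """Index of the coefficient with the largest |t| across single trials;
    each trial is a list of (e.g., wavelet) coefficients."""
    n_feat = len(trials_a[0])
    scores = [abs(t_statistic([t[i] for t in trials_a],
                              [t[i] for t in trials_b]))
              for i in range(n_feat)]
    return max(range(n_feat), key=lambda i: scores[i])
```

In t-CWT proper the coefficients come from a continuous wavelet transform of the ERP and the extrema of the t-surface define the extracted features; this sketch only shows the per-coefficient scoring.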

  16. Occupational Physical Activity Habits of UK Office Workers: Cross-Sectional Data from the Active Buildings Study.

    PubMed

    Smith, Lee; Sawyer, Alexia; Gardner, Benjamin; Seppala, Katri; Ucci, Marcella; Marmot, Alexi; Lally, Pippa; Fisher, Abi

    2018-06-09

    Habitual behaviours are learned responses that are triggered automatically by associated environmental cues. The unvarying nature of most workplace settings makes workplace physical activity a prime candidate for a habitual behaviour, yet the role of habit strength in occupational physical activity has not been investigated. The aims of the present study were to: (i) document occupational physical activity habit strength; and (ii) investigate associations between occupational activity habit strength and occupational physical activity levels. A sample of UK office-based workers (n = 116; 53% female; median age 40 years, SD 10.52) was fitted with activPAL accelerometers worn for 24 h on five consecutive days, providing an objective measure of occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. A self-report index measured the automaticity of two occupational physical activities (“being active” (e.g., walking to printers and coffee machines) and “stair climbing”). Adjusted linear regression models investigated the association between occupational activity habit strength and objectively measured occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. Eighty-one per cent of the sample reported habits for “being active”, and 62% reported habits for “stair climbing”. In adjusted models, reported habit strength for “being active” was positively associated with average occupational sit-to-stand transitions per hour (B = 0.340, 95% CI: 0.053 to 0.627, p = 0.021). “Stair climbing” habit strength was unexpectedly negatively associated with average hourly stepping time (B = −0.01, 95% CI: −0.01 to −0.00, p = 0.006) and average hourly occupational step count (B = −38.34, 95% CI: −72.81 to −3.88, p = 0.030), which may reflect that people with stronger stair-climbing habits compensate by walking fewer steps overall. 
Results suggest that stair-climbing and office-based occupational activity can be habitual. Interventions might fruitfully promote habitual workplace activity, although, in light of potential compensation effects, such interventions should perhaps focus on promoting moderate-intensity activity.

  17. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.

  18. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
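The throttling idea can be illustrated with a toy rejection-free KMC step plus a rank-based rate scaling: an event is chosen with probability proportional to its (possibly throttled) rate, and fast quasi-equilibrated ranks have their rates divided down so slow, rate-limiting events are sampled more often. The ranking and scaling factors below are illustrative, not the published SQERTSS algorithm.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free KMC step: choose event i with probability
    rate_i / total, and draw the exponential time increment."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

def throttle(rates, ranks, factors):
    """Rank-based throttling sketch: divide each rate by the scaling factor
    assigned to its speed rank, damping fast frivolous processes."""
    return [k / factors[rank] for k, rank in zip(rates, ranks)]
```

Feeding `throttle(rates, ranks, factors)` into `kmc_step` shifts the sampling toward slow processes; the published algorithm additionally stages the factors so that quasi-equilibrium within each fast rank is preserved.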

  19. Stepping to the Beat: Feasibility and Potential Efficacy of a Home-Based Auditory-Cued Step Training Program in Chronic Stroke.

    PubMed

    Wright, Rachel L; Brownless, Simone Briony; Pratt, David; Sackley, Catherine M; Wing, Alan M

    2017-01-01

Hemiparesis after stroke typically results in a reduced walking speed, an asymmetrical gait pattern and a reduced ability to make gait adjustments. The purpose of this pilot study was to investigate the feasibility and preliminary efficacy of home-based training involving auditory cueing of stepping in place. Twelve community-dwelling participants with chronic hemiparesis completed two 3-week blocks of home-based stepping to music overlaid with an auditory metronome. The tempo of the metronome was increased 5% each week. One 3-week block used a regular metronome, whereas the other 3-week block had phase-shift perturbations randomly inserted to cue stepping adjustments. All participants reported that they enjoyed training, with 75% completing all training blocks. No adverse events were reported. Walking speed, Timed Up and Go (TUG) time and Dynamic Gait Index (DGI) scores (median [inter-quartile range]) improved significantly between baseline (speed = 0.61 [0.32, 0.85] m⋅s⁻¹; TUG = 20.0 [16.0, 39.9] s; DGI = 14.5 [11.3, 15.8]) and post stepping training (speed = 0.76 [0.39, 1.03] m⋅s⁻¹; TUG = 16.3 [13.3, 35.1] s; DGI = 16.0 [14.0, 19.0]), and these gains were maintained at follow-up (speed = 0.75 [0.41, 1.03] m⋅s⁻¹; TUG = 16.5 [12.9, 34.1] s; DGI = 16.5 [13.5, 19.8]). This pilot study suggests that auditory-cued stepping conducted at home was feasible and well-tolerated by participants post-stroke, with improvements in walking and functional mobility. No differences were detected between regular and phase-shift metronome training at any assessment point.

  20. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  1. A permeation theory for single-file ion channels: one- and two-step models.

    PubMed

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). 
The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
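
The qualitative behaviour described here (Ohmic at low voltage, with asymmetry controlled by a third "permeation coordinate" parameter) can be reproduced by a generic two-step barrier-hopping model. The sketch below is built from standard rate-theory ingredients, not the paper's actual A/D equations; `delta`, `b1`, `b2` and the rate constants are all hypothetical.

```python
import math

def two_step_current(v, k1=1.0, k2=5.0, delta=0.5, b1=0.25, b2=0.75):
    """Steady-state cycle flux through two sequential barriers.
    v: voltage in thermal units (qV/kT); delta: fractional electric
    potential energy of the intermediate (occupied) state; b1, b2:
    electrical positions of the two barriers. Detailed balance holds by
    construction: the forward/backward rate ratios multiply to e^v."""
    a  = k1 * math.exp(b1 * v)            # step 1 forward (association)
    ar = k1 * math.exp((b1 - delta) * v)  # step 1 backward
    b  = k2 * math.exp((b2 - delta) * v)  # step 2 forward (dissociation)
    br = k2 * math.exp((b2 - 1.0) * v)    # step 2 backward
    # two-state cycle flux (occupancy-weighted net rate around the cycle)
    return (a * b - ar * br) / (a + ar + b + br)
```

With `delta = 0.5` the current-voltage curve is antisymmetric and near-Ohmic at low voltage; shifting `delta` toward one barrier makes one polarity super-Ohmic while the other appears nearly linear, mirroring the asymmetry the abstract attributes to the third parameter.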

  2. Structure of Room Temperature Ionic Liquids on Charged Graphene: An integrated experimental and computational study

    NASA Astrophysics Data System (ADS)

    Uysal, Ahmet; Zhou, Hua; Lee, Sang Soo; Fenter, Paul; Feng, Guang; Li, Song; Cummings, Peter; Fulvio, Pasquale; Dai, Sheng; McDonough, Jake; Gogotsi, Yury

    2014-03-01

Electrical double layer capacitors (EDLCs) with room temperature ionic liquid (RTIL) electrolytes and carbon electrodes are promising candidates for energy storage devices with high power density and long cycle life. We studied the potential- and time-dependent changes in the electric double layer (EDL) structure of an imidazolium-based RTIL electrolyte at an epitaxial graphene (EG) surface. We used in situ x-ray reflectivity (XR) to determine the EDL structure at static potentials, during cyclic voltammetry (CV) and during potential step measurements. The static-potential structures were also investigated with fully atomistic molecular dynamics (MD) simulations. Combined XR and MD results show that the EDL structure has alternating anion/cation layers within the first nanometer of the interface. The dynamical response of the EDL to potential steps has a slow component (>10 s), and the RTIL structure shows hysteresis during CV scans. We propose a conceptual model that connects nanoscale interfacial structure to the macroscopic measurements. This material is based upon work supported as part of the Fluid Interface Reactions, Structures and Transport (FIRST) Center, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science (SC), Office of Basic Energy Sciences.

  3. A green recyclable SO(3)H-carbon catalyst derived from glycerol for the production of biodiesel from FFA-containing karanja (Pongamia glabra) oil in a single step.

    PubMed

    Prabhavathi Devi, B L A; Vijai Kumar Reddy, T; Vijaya Lakshmi, K; Prasad, R B N

    2014-02-01

A simultaneous esterification and transesterification method is employed for the preparation of biodiesel from karanja (Pongamia glabra) oil containing 7.5% free fatty acids (FFA), using a water-resistant and reusable carbon-based solid acid catalyst derived from glycerol, in a single step. The optimum reaction parameters for obtaining biodiesel in >99% yield by simultaneous esterification and transesterification are: methanol (1:45 mole ratio of oil to methanol), catalyst 20 wt.% of oil, temperature 160°C and reaction time of 4 h. After the reaction, the catalyst was easily recovered by filtration and reused five times without any deactivation under optimized conditions. This single-step process could be a potential route for biodiesel production from high-FFA oils, simplifying the procedure and reducing costs and effluent generation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed, as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps, among them identifying article relevance to biosurveillance (e.g., a relevance algorithm) and article feature extraction (who, what, where, why, how, and when).

  5. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
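
The implicit midpoint rule discussed above can be sketched concretely. The example below integrates a toy divergence-free field (rigid rotation about z plus a uniform axial drift, a stand-in for a magnetic field, not one from the paper) and exploits a known property of IM: for this linear field the scheme preserves x² + y² up to the nonlinear-solver tolerance, so field lines stay on their cylinder.

```python
import math

def implicit_midpoint_step(f, x, h, iters=100, tol=1e-13):
    """One implicit-midpoint step x' = x + h*f((x + x')/2),
    solved here by simple fixed-point iteration."""
    xn = [xi + h * fi for xi, fi in zip(x, f(x))]  # explicit Euler guess
    for _ in range(iters):
        mid = [(a + b) / 2.0 for a, b in zip(x, xn)]
        new = [xi + h * fi for xi, fi in zip(x, f(mid))]
        if max(abs(a - b) for a, b in zip(new, xn)) < tol:
            break
        xn = new
    return new

def field(x):
    """Toy solenoidal field: rotation about the z axis + axial drift."""
    return (-x[1], x[0], 1.0)

x = [1.0, 0.0, 0.0]
for _ in range(1000):
    x = implicit_midpoint_step(field, x, 0.05)
radius = math.hypot(x[0], x[1])  # conserved by the implicit midpoint rule
```

An explicit Euler step of the same field would spiral outward; the IM step instead amounts to a Cayley transform of the rotation, which is exactly norm-preserving, which is why such schemes can preserve invariant tori.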

  6. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870
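
The modified spring law can be sketched as follows. This is an illustrative nonlinear viscoelastic spring-damper only: the cubic stiffening term and all constants are assumptions, and the paper's position-based constraint-projection step is not reproduced.

```python
import math

def msd_force(p, q, v_rel, k=50.0, c=2.0, rest=1.0, beta=5.0):
    """Force on node p from a spring-damper attached to q.
    beta adds cubic stiffening to the Hookean term (the nonlinearity);
    c damps the relative velocity (the viscoelastic part)."""
    dx = [a - b for a, b in zip(p, q)]
    dist = math.sqrt(sum(d * d for d in dx))
    stretch = dist - rest
    mag = -k * stretch * (1.0 + beta * stretch * stretch)
    return [mag * d / dist - c * vi for d, vi in zip(dx, v_rel)]

# settle a single node toward its rest length with semi-implicit Euler
anchor = (0.0, 0.0, 0.0)
pos, vel, dt = [2.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.01
for _ in range(5000):
    f = msd_force(pos, anchor, vel)
    vel = [vi + dt * fi for vi, fi in zip(vel, f)]   # unit mass
    pos = [pi + dt * vi for pi, vi in zip(pos, vel)]
final_dist = math.hypot(*pos)
```

In the paper's full method this elastic force feeds a constraint-projection pass rather than a plain velocity update; the sketch only shows the force law itself driving a damped node to its rest length.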

  7. Evaluation of viral removal by nanofiltration using real-time quantitative polymerase chain reaction.

    PubMed

    Zhao, Xiaowen; Bailey, Mark R; Emery, Warren R; Lambooy, Peter K; Chen, Dayue

    2007-06-01

Nanofiltration is commonly introduced into purification processes of biologics produced in mammalian cells to serve as a designated step for removal of potential exogenous viral contaminants and endogenous retrovirus-like particles. The LRV (log reduction value) achieved by nanofiltration is often determined by cell-based infectivity assay, which is time-consuming and labour-intensive. We have explored the possibility of employing QPCR (quantitative PCR) to evaluate the LRV achieved by nanofiltration in scaled-down studies using two model viruses, namely xenotropic murine leukemia virus and murine minute virus. We report here the successful development of a QPCR-based method suitable for quantification of virus removal by nanofiltration. The method includes a nuclease treatment step to remove free viral nucleic acids, while viral genome associated with intact virus particles is shielded from the nuclease. In addition, HIV Armored RNA was included as an internal control to ensure the accuracy and reliability of the method. The QPCR-based method described here provides several advantages over traditional cell-based infectivity assays, such as better sensitivity, faster turnaround time, reduced cost and higher throughput.
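
The LRV itself is a simple quantity: the base-10 log of the ratio of virus challenged to virus recovered. A minimal sketch, assuming total genome copies (concentration × volume) quantified by QPCR before and after the filter (the variable names and sample numbers are hypothetical):

```python
import math

def log_reduction_value(load_in, load_out):
    """LRV = log10(total virus challenged / total virus recovered),
    with loads given as total genome copies from QPCR quantification."""
    return math.log10(load_in / load_out)

lrv = log_reduction_value(1e8, 1e3)  # a 5-log clearance
```

The same formula applies whether the loads come from QPCR or from an infectivity titre; the paper's point is that QPCR supplies the two numbers faster and more cheaply.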

  8. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.

  9. Fast and scalable purification of a therapeutic full-length antibody based on process crystallization.

    PubMed

    Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch

    2013-09-01

    The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. A direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled-up from the mL-scale to the 1 L-scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for the high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies. Copyright © 2013 Wiley Periodicals, Inc.

  10. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the negation ¬p; clauses are encoded by combining different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
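
A single resolution step of the kind the model cascades can be sketched abstractly in a few lines (literals encoded as signed integers; this illustrates the logic being computed, not the DNA strand-displacement encoding):

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses. A clause is a frozenset of
    literals; proposition p is the integer +p and its negation ¬p is -p.
    For each complementary pair, drop it and merge the remainders."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

# Horn-clause example: from fact (p) and rule (¬p ∨ q), derive (q)
p, q = 1, 2
derived = resolve(frozenset({p}), frozenset({-p, q}))
```

Iterating `resolve` over a clause set until the empty clause appears (refutation) is the classical procedure whose molecular analogue the paper implements.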

  11. Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.

    PubMed

    Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro

    2018-06-13

    The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.

  12. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176

  13. Electrically driven spin qubit based on valley mixing

    NASA Astrophysics Data System (ADS)

    Huang, Wister; Veldhorst, Menno; Zimmerman, Neil M.; Dzurak, Andrew S.; Culcer, Dimitrie

    2017-02-01

    The electrical control of single spin qubits based on semiconductor quantum dots is of great interest for scalable quantum computing since electric fields provide an alternative mechanism for qubit control compared with magnetic fields and can also be easier to produce. Here we outline the mechanism for a drastic enhancement in the electrically-driven spin rotation frequency for silicon quantum dot qubits in the presence of a step at a heterointerface. The enhancement is due to the strong coupling between the ground and excited states which occurs when the electron wave function overcomes the potential barrier induced by the interface step. We theoretically calculate single qubit gate times tπ of 170 ns for a quantum dot confined at a silicon/silicon-dioxide interface. The engineering of such steps could be used to achieve fast electrical rotation and entanglement of spin qubits despite the weak spin-orbit coupling in silicon.

  14. The impact of the Vancouver Winter Olympics on population level physical activity and sport participation among Canadian children and adolescents: population based study.

    PubMed

    Craig, Cora L; Bauman, Adrian E

    2014-09-03

There has been much debate about the potential impact of the Olympics. The purpose of this study was to determine if hosting the 2010 Vancouver Olympic Games (OG) encouraged Canadian children to be physically active. Children 5-19 years (n = 19862) were assessed as part of the representative Canadian Physical Activity Levels Among Youth surveillance study between August 2007 and July 2011. Parents were asked if the child participated in organized physical activity or sport. In addition, children wore pedometers for 7 days to provide an objective estimate of overall physical activity. Mean steps/day and the percent participating in organized physical activity or sport were calculated by time period within year for Canada and British Columbia. The odds of participation by time period were estimated by logistic regression, controlling for age and sex. Mean steps were lower during the Olympic period compared with the Pre-Olympic (607 fewer steps/day, 95% CI 263-950) and Post-Olympic (1246 fewer steps/day, 95% CI 858-1634) periods for Canada. There was no difference by time period in British Columbia. A similar pattern in mean steps by time period was observed across years, but there were no significant differences in activity within each of these periods between years. The likelihood of participating in organized physical activity or sport by time period within or across years did not differ from baseline (August-November 2007). The 2010 Olympic Games had no measurable impact on objectively measured physical activity or the prevalence of overall sports participation among Canadian children. Much greater cross-government and longer-term efforts are needed to create the conditions for an Olympic legacy effect on physical activity.

  15. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping-motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results prove that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance, even under low-light conditions.
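
The spirit of estimating a potential-focus position from three images can be illustrated with a generic parabolic fit of a sharpness metric against motor position. This is a common textbook defocus approximation, not the paper's actual ODFM model, and the sample values are invented.

```python
def focus_estimate(xs, ys):
    """Fit s = a*x^2 + b*x + c through three (motor position, sharpness)
    samples via Lagrange coefficients, and return the parabola's vertex
    -b/(2a) as the estimated in-focus motor position."""
    (x1, x2, x3), (y1, y2, y3) = xs, ys
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    return -b / (2 * a)

# sharpness peaking at motor step 7 (invented data: s = 10 - (x - 7)^2)
best = focus_estimate((0, 5, 12), (-39, 6, -15))
```

Driving the motor near `best` and confirming with a contrast measure on a couple of reference images mirrors the coarse-then-fine structure of the scheme described above.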

  16. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can induce transient improvements to gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds that represented different spatio-temporal parameters of gait, such as step duration and step length. The second method described involves a novel way of sonifying, in real time, the swing phase of gait, using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step-length variability. Potential future directions for how these sound-based approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  17. Assessing potentially dangerous medical actions with the computer-based case simulation portion of the USMLE step 3 examination.

    PubMed

    Harik, Polina; Cuddy, Monica M; O'Donovan, Seosaimhin; Murray, Constance T; Swanson, David B; Clauser, Brian E

    2009-10-01

The 2000 Institute of Medicine report on patient safety brought renewed attention to the issue of preventable medical errors, and subsequently specialty boards and the National Board of Medical Examiners were encouraged to play a role in setting expectations around safety education. This paper examines potentially dangerous actions taken by examinees during the portion of the United States Medical Licensing Examination (USMLE) Step 3 that is particularly well suited to evaluating lapses in physician decision making, the Computer-based Case Simulation (CCS). Descriptive statistics and a general linear modeling approach were used to analyze dangerous actions ordered by 25,283 examinees who completed the CCS for the first time between November 2006 and January 2008. More than 20% of examinees ordered at least one dangerous action with the potential to cause significant patient harm. The propensity to order dangerous actions may vary across clinical cases. The CCS format may provide a means of collecting important information about patient-care situations in which examinees may be more likely to commit dangerous actions, and about the propensity of examinees to order dangerous tests and treatments.

  18. Rotational paper-based electrochemiluminescence immunodevices for sensitive and multiplexed detection of cancer biomarkers.

    PubMed

    Sun, Xiange; Li, Bowei; Tian, Chunyuan; Yu, Fabiao; Zhou, Na; Zhan, Yinghua; Chen, Lingxin

    2018-05-12

This paper describes a novel rotational paper-based analytical device (RPAD) for implementing multi-step electrochemiluminescence (ECL) immunoassays. The integrated paper-based rotational valves can be easily controlled by manually rotating paper discs, which makes the multi-step assays accessible to untrained users. In addition, the rotational valves are reusable and the response time can be shortened to several seconds, giving the rotational paper-based device great advantages in multi-step operations. Under the control of the rotational valves, multi-step ECL immunoassays were conducted on the rotational device for the multiplexed detection of carcinoembryonic antigen (CEA) and prostate specific antigen (PSA). The rotational device exhibited excellent analytical performance for CEA and PSA, which could be detected in the linear ranges of 0.1-100 ng mL⁻¹ and 0.1-50 ng mL⁻¹, with detection limits down to 0.07 ng mL⁻¹ and 0.03 ng mL⁻¹, respectively, within the ranges of clinical concentrations. We hope this technique will open a new avenue for the fabrication of paper-based valves and provide potential applications in clinical diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Coarse-Grained Models Reveal Functional Dynamics – II. Molecular Dynamics Simulation at the Coarse-Grained Level – Theories and Biological Applications

    PubMed Central

    Chng, Choon-Peng; Yang, Lee-Wei

    2008-01-01

Molecular dynamics (MD) simulation has remained the most indispensable tool for studying equilibrium/non-equilibrium conformational dynamics since its advent 30 years ago. With advances in spectroscopy yielding solved biocomplexes of growing size, sampling their dynamics at biologically interesting spatial/temporal scales becomes computationally intractable; this motivated the use of coarse-grained (CG) approaches. CG-MD models are used to study folding and conformational transitions at reduced resolution and can employ enlarged time steps, owing to the absence of some of the fastest motions in the system. The Boltzmann-inversion technique, heavily used in parameterizing these models, provides a smoothed-out effective potential on which molecular conformation evolves at a faster pace, thus stretching simulations into tens of microseconds. As a result, a complete catalytic cycle of HIV-1 protease or the assembly of lipid-protein mixtures can be investigated by CG-MD to gain biological insights. In this review, we survey the theories developed in recent years, which are categorized into folding-based and molecular-mechanics-based approaches. In addition, the physical bases for the selection of CG beads/time-step, the choice of effective potentials, the representation of solvent, and the restoration of molecular representations back to their atomic details are systematically discussed. PMID:19812774
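
The Boltzmann-inversion step mentioned above has a compact form: given a target distribution g(r) sampled from atomistic simulation, the effective CG potential is U(r) = -kT ln g(r). A minimal sketch with tabulated toy values (the RDF numbers are invented; kT is set to 1):

```python
import math

def boltzmann_inversion(g, kT=1.0):
    """Effective CG potential from a (radial) distribution function:
    U(r) = -kT * ln g(r). Bins where g vanishes are left undefined."""
    return {r: -kT * math.log(gr) for r, gr in g.items() if gr > 0}

# toy RDF: depleted at short range, enhanced at a favoured distance
g = {0.9: 0.2, 1.0: 2.0, 1.2: 1.0}
U = boltzmann_inversion(g)
```

Distances where g > 1 (favoured) map to attractive wells (U < 0) and depleted distances to repulsive walls (U > 0); iterating this inversion against re-simulated distributions is the usual refinement loop in CG parameterization.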

  20. Development of a Robust and Cost-Effective Friction Stir Welding Process for Use in Advanced Military Vehicles

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Hariharan, A.; Yen, C.-F.; Cheeseman, B. A.

    2011-02-01

    To respond to the advent of more lethal threats, recently designed aluminum-armor-based military-vehicle systems have resorted to an increasing use of higher-strength aluminum alloys (with superior ballistic resistance against armor-piercing (AP) threats and high vehicle light-weighting potential). Unfortunately, these alloys are not very amenable to conventional fusion-based welding technologies, and in order to obtain high-quality welds, solid-state joining technologies such as friction stir welding (FSW) have to be employed. However, since FSW is a relatively new and fairly complex joining technology, its introduction into advanced military vehicle structures is not straightforward and entails a comprehensive multi-step approach. One such (three-step) approach is developed in the present work. Within the first step, experimental and computational techniques are utilized to determine the optimal tool design and the optimal FSW process parameters which result in maximal productivity of the joining process and the highest quality of the weld. Within the second step, techniques are developed for the identification and qualification of the optimal weld-joint designs in different sections of a prototypical military vehicle structure. In the third step, problems associated with the fabrication of a sub-scale military vehicle test structure and the blast survivability of the structure are assessed. The results obtained and the lessons learned are used to judge the potential of the current approach in shortening the development time and in enhancing the reliability and blast survivability of military vehicle structures.

  1. Steps to Starting a Small Business. Student Notebook.

    ERIC Educational Resources Information Center

    Wisconsin Univ., Madison. Vocational Studies Center.

    This student notebook provides student materials for a program of study made up of a series of community-based activities potentially leading to the start-up of one's own business, while at the same time providing a better understanding of the American economic system of free enterprise. It begins with a glossary. For each of 15 units (1 more unit…

  2. One-step fabrication of PEGylated fluorescent nanodiamonds through the thiol-ene click reaction and their potential for biological imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hongye; Liu, Meiying; Tuo, Xun; Chen, Junyu; Mao, Liucheng; Wen, Yuanqing; Tian, Jianwen; Zhou, Naigen; Zhang, Xiaoyong; Wei, Yen

    2018-05-01

    Over the past years, fluorescent carbon nanoparticles have attracted growing interest for biological imaging. Fluorescent nanodiamonds (FNDs) are novel fluorescent carbon nanoparticles with many useful properties, including remarkable fluorescence, extremely low toxicity and a high refractive index. However, the facile preparation of FNDs with designable properties and functions from non-fluorescent detonation nanodiamonds (DNDs) has proven challenging. In this work, we report for the first time the preparation of polyethylene glycol (PEG)-functionalized FNDs through a one-step thiol-ene click reaction using thiol-containing PEG (PEG-SH) as the coating agent. Based on the characterization results, we demonstrated that PEG-SH could be efficiently introduced onto DNDs to obtain FNDs through thiol-ene click chemistry. The resultant FND-PEG composites showed high water dispersibility, strong fluorescence and low cytotoxicity. Moreover, FND-PEG composites could be internalized by cells and displayed good cell-staining performance. All of these features imply that FND-PEG composites hold great potential for biological imaging. Taken together, a facile strategy based on the one-step thiol-ene click reaction has been developed for the efficient preparation of FND-PEG composites from non-fluorescent DNDs. The strategy should also be useful for the fabrication of many other functional FNDs using different thiol-containing compounds, owing to the universality of the thiol-ene click reaction.

  3. Identification of the period of stability in a balance test after stepping up using a simplified cumulative sum.

    PubMed

    Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques

    2017-11-01

    Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of the vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that balance parameters are computed only for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-Up-and-Go performance. This finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
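    The abstract does not give the simplified CUSUM itself, but a generic one-sided CUSUM change detector of the kind described, applied to a synthetic vertical-force trace, might look as follows (the trace, target, drift and threshold values are all illustrative assumptions, not the BQT's actual parameters):

```python
import numpy as np

def cusum_change_point(signal, target, drift, threshold):
    """One-sided cumulative-sum change detector (generic textbook form,
    not the authors' exact algorithm). Accumulates how far the signal
    falls below `target` and flags the first sample where the cumulative
    sum exceeds `threshold`; returns None if no change is detected."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (target - x) - drift)
        if s > threshold:
            return i
    return None

# Synthetic vertical-force trace (N): large oscillations while stepping
# onto the scale, then settling to a quiet stance value around 700 N.
rng = np.random.default_rng(0)
stepping = 700 + 200 * np.sin(np.linspace(0, 6, 50))
stance = 700 + rng.normal(0, 2, 100)
trace = np.concatenate([stepping, stance])

dev = np.abs(trace - 700.0)  # deviation from the quiet-stance force
change_idx = cusum_change_point(dev, target=50.0, drift=10.0, threshold=200.0)
```

    The detector fires a few samples into the quiet segment (index 50 onward), marking the transition from the stepping-on phase to stable stance.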

  4. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina

    2017-05-01

    Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into forward step training (FT), lateral plus forward step training (FLT), or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and after training, choice stepping reaction time and stepping kinematics in untrained diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present a low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance.

    PubMed

    Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca

    2016-01-01

    Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus onto the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occur earlier and show faster rise time as compared to the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, responsible for action potential firing in response to rapid step-wise depolarizing currents.

  6. Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance

    PubMed Central

    Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca

    2016-01-01

    Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus onto the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occur earlier and show faster rise time as compared to the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, responsible for action potential firing in response to rapid step-wise depolarizing currents. PMID:27525414
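    The step-plus-noise protocol in the two records above can be illustrated with a toy leaky integrate-and-fire neuron: a current step that is sub-threshold on its own elicits spikes once noise is superimposed. This is a sketch of the phenomenon only, not the authors' recordings or a biophysical DRG model; all parameter values are illustrative:

```python
import numpy as np

def lif_spike_count(i_step, noise_sd, seed=0, t_ms=200.0, dt=0.1,
                    tau=10.0, v_rest=-65.0, v_th=-55.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron driven by a current step with
    additive Gaussian noise -- a toy stand-in for the patch-clamp
    step + noise protocol. Returns the number of threshold crossings."""
    rng = np.random.default_rng(seed)
    n = int(t_ms / dt)
    v = v_rest
    spikes = 0
    for _ in range(n):
        i_t = i_step + noise_sd * rng.normal()
        v += dt * (-(v - v_rest) + i_t) / tau  # leaky membrane dynamics
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

# A step driving the membrane to -55.5 mV is just sub-threshold on its own...
quiet = lif_spike_count(i_step=9.5, noise_sd=0.0)
# ...but fires once strong noise is superimposed on the same step.
noisy = lif_spike_count(i_step=9.5, noise_sd=30.0)
```

    As in the abstract, the noise recruits threshold crossings that the deterministic step alone cannot produce; sweeping `noise_sd` would trace out the resonance-like curve of spike count versus noise level.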

  7. Investigation of correlation classification techniques

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.

    1975-01-01

    A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.
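    The first-step clustering described above (a single pass, each pixel assigned by its spectral signature) can be sketched with a simple nearest-centroid, distance-threshold rule; the threshold test and running-mean centroid update below are assumptions for illustration, not details from the report:

```python
import numpy as np

def single_pass_cluster(pixels, threshold):
    """Single-pass clustering sketch: assign each pixel's spectral
    signature to the nearest existing cluster centroid, or start a new
    cluster when the nearest centroid is farther than `threshold`.
    Centroids are updated with a running mean."""
    centroids, counts, labels = [], [], []
    for x in pixels:
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            k = int(np.argmin(d))
            if d[k] <= threshold:
                counts[k] += 1
                centroids[k] += (x - centroids[k]) / counts[k]
                labels.append(k)
                continue
        centroids.append(np.array(x, dtype=float))
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels, centroids

# Four pixels with two distinct 2-band spectral signatures.
pixels = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centroids = single_pass_cluster(pixels, threshold=1.0)
```

    The output label per pixel is the single integer the abstract describes being written to the cluster tape; the second step would then map clusters to ground-truth classes.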

  8. Kinect-based choice reaching and stepping reaction time tests for clinical and in-home assessment of fall risk in older people: a prospective study.

    PubMed

    Ejupi, Andreas; Gschwind, Yves J; Brodie, Matthew; Zagler, Wolfgang L; Lord, Stephen R; Delbaere, Kim

    2016-01-01

    Quick protective reactions such as reaching or stepping are important to avoid a fall or minimize injuries. We developed Kinect-based choice reaching and stepping reaction time tests (Kinect-based CRTs) and evaluated their ability to differentiate between older fallers and non-fallers, and the feasibility of administering them at home. A total of 94 community-dwelling older people were assessed on the Kinect-based CRTs in the laboratory and were followed up for falls for 6 months. Additionally, a subgroup (n = 20) performed the Kinect-based CRTs at home. Signal processing algorithms were developed to extract reaction, movement and total time features from the Kinect skeleton data. Nineteen participants (20.2%) reported a fall in the 6 months following the assessment. The reaction time (fallers: 797 ± 136 ms, non-fallers: 714 ± 89 ms), movement time (fallers: 392 ± 50 ms, non-fallers: 358 ± 51 ms) and total time (fallers: 1189 ± 170 ms, non-fallers: 1072 ± 109 ms) of the reaching reaction time test differentiated well between fallers and non-fallers. The stepping reaction time test did not significantly discriminate between the two groups in the prospective study. The correlations between the laboratory and in-home assessments were 0.689 for the reaching reaction time and 0.860 for the stepping reaction time. The study findings indicate that the Kinect-based CRTs are feasible to administer in clinical and in-home settings, and thus represent an important step towards the development of sensor-based fall-risk self-assessments. With further validation, the assessments may prove useful as a fall-risk screen and as home-based measures for monitoring changes over time and the effects of fall prevention interventions.

  9. A solution to the Navier-Stokes equations based upon the Newton Kantorovich method

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.

    1977-01-01

    An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order time-accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. Results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicate a potential for significant reduction in computation time over current iterative techniques.
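    The Newton-Kantorovich idea underlying such schemes is to solve a nonlinear system F(x) = 0 by repeatedly solving the linearized system J(x)Δx = -F(x) and updating x. A minimal sketch on a small algebraic system (a stand-in for the discretized flow equations, not the paper's actual discretization):

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Kantorovich iteration: at each step solve the linearized
    system J(x) dx = -F(x), then update x += dx until the update is small."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])  # Jacobian
root = newton_solve(F, J, x0=[2.0, 0.5])
```

    The quadratic convergence of this iteration is what makes a noniterative (per-time-step) linearized treatment of the nonlinear terms attractive.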

  10. [Procedural analysis of acid-base balance disorders: a case series of 4 patients].

    PubMed

    Ma, Chunyuan; Wang, Guijie

    2017-05-01

    To establish a standardized process for acid-base balance analysis, and to analyze cases of acid-base balance disorder with the aid of an acid-base coordinate graph. Recent research progress in acid-base balance theory was reviewed systematically, and the important concepts, definitions, formulas, parameters, regularities and inferences used in acid-base analysis were studied. The processes and steps of acid-base analysis were diagrammed, and the application of the acid-base coordinate graph was introduced with cases. A "four parameters-four steps" method is proposed for the complete analysis of acid-base disorders. The four parameters are pH, arterial partial pressure of carbon dioxide (PaCO2), HCO3- and the anion gap (AG). The four steps are: (1) from pH, PaCO2 and HCO3-, determine the primary or main type of acid-base disorder; (2) use the primary or main type to choose the appropriate compensation formula and determine whether a double mixed acid-base disorder is present; (3) for primary respiratory acidosis or respiratory alkalosis, calculate the potential HCO3- and substitute it for the measured HCO3- to determine whether a triple mixed acid-base disorder is present; (4) for data judged to be simple increased-AG metabolic acidosis, calculate the ratio ΔAG↑/ΔHCO3-↓ to determine whether a concurrent normal-AG metabolic acidosis or metabolic alkalosis is present.
In clinical practice, a rectangular coordinate system is constructed with PaCO2 as the abscissa and HCO3- as the ordinate; the straight line through the origin (0, 0) and the point (40, 24) has pH equal to 7.40 at every point. Three straight lines [the pH = 7.40 isoline, the PaCO2 = 40 mmHg (1 mmHg = 0.133 kPa) line and the HCO3- = 24 mmol/L line] divide the graph into seven areas: predominantly respiratory alkalosis, predominantly metabolic alkalosis, respiratory plus metabolic alkalosis, predominantly respiratory acidosis, predominantly metabolic acidosis, respiratory plus metabolic acidosis, and normal. The type of acid-base disorder is readily determined by locating the (PaCO2, HCO3-) or (PaCO2, potential HCO3-) point on the graph. The "four parameters-four steps" method is systematic and comprehensive, and with the coordinate graph the type of acid-base disorder can be estimated simply. It is worthy of popularization.
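    The first of the four steps — classifying the primary disturbance from pH, PaCO2 and HCO3- — can be sketched as a simple rule table. The cut-offs below are standard textbook reference ranges, not values from the paper, and the sketch is illustrative only, not for clinical use:

```python
def primary_disorder(pH, paco2, hco3):
    """Step 1 of a 'four parameters-four steps' style analysis: classify
    the primary acid-base disturbance from pH, PaCO2 (mmHg) and HCO3-
    (mmol/L). Simplified illustration using textbook reference ranges."""
    if pH < 7.35:  # acidemia
        if paco2 > 45 and hco3 >= 22:
            return "respiratory acidosis"
        if hco3 < 22 and paco2 <= 45:
            return "metabolic acidosis"
        return "mixed acidosis"
    if pH > 7.45:  # alkalemia
        if paco2 < 35 and hco3 <= 26:
            return "respiratory alkalosis"
        if hco3 > 26 and paco2 >= 35:
            return "metabolic alkalosis"
        return "mixed alkalosis"
    return "normal or fully compensated"
```

    Steps 2-4 (compensation formulas, potential HCO3- and the ΔAG↑/ΔHCO3-↓ ratio) would then refine this initial classification into the mixed-disorder categories the abstract describes.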

  11. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

    TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
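    The automatic time-step control described above (limit the maximum change in potential per step, grow the step when there is headroom) can be sketched as a simple heuristic controller; the gain factors and bounds below are assumptions for illustration, not TRUST84's actual logic:

```python
def next_time_step(dt, max_change, target_change, grow=2.0, shrink=0.5,
                   dt_min=1e-6, dt_max=1.0):
    """Heuristic time-step controller in the spirit of automatic step
    control: shrink dt when the largest change in potential during the
    last step exceeded the target, grow dt when there is ample headroom,
    and clamp the result to [dt_min, dt_max]."""
    if max_change > target_change:
        dt *= shrink          # step was too aggressive
    elif max_change < 0.5 * target_change:
        dt *= grow            # step was overly cautious
    return min(max(dt, dt_min), dt_max)
```

    A production controller would typically also tie the step size to the iteration count of the implicit corrections, as the abstract notes.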

  12. 12-step participation and outcomes over 7 years among adolescent substance use patients with and without psychiatric comorbidity.

    PubMed

    Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I; Weisner, Constance

    2013-01-01

    This study examines the associations between 12-step participation and outcomes over 7 years among 419 adolescent substance use patients with and without psychiatric comorbidities. Although level of participation decreased over time for both groups, comorbid adolescents participated in 12-step groups at comparable or higher levels across time points. Results from mixed-effects logistic regression models indicated that for both groups, 12-step participation was associated with both alcohol and drug abstinence at follow-ups, increasing the likelihood of either by at least 3 times. Findings highlight the potential benefits of 12-step participation in maintaining long-term recovery for adolescents with and without psychiatric disorders.

  13. The electrical response of turtle cones to flashes and steps of light.

    PubMed

    Baylor, D A; Hodgkin, A L; Lamb, T D

    1974-11-01

    1. The linear response of turtle cones to weak flashes or steps of light was usually well fitted by equations based on a chain of six or seven reactions with time constants varying over about a 6-fold range. 2. The temperature coefficient (Q10) of the reciprocal of the time to peak of the response to a flash was 1.8 (15-25 °C), corresponding to an activation energy of 10 kcal/mole. 3. Electrical measurements with one internal electrode and a balancing circuit gave the following results on red-sensitive cones of high resistance: resistance across cell surface in dark, 50-170 MΩ; time constant in dark, 4-6.5 msec. The effect of a bright light was to increase the resistance and time constant by 10-30%. 4. If the cell time constant, resting potential and maximum hyperpolarization are known, the fraction of ionic channels blocked by light at any instant can be calculated from the hyperpolarization and its rate of change. At times less than 50 msec the shape of this relation is consistent with the idea that the concentration of a blocking molecule which varies linearly with light intensity is in equilibrium with the fraction of ionic channels blocked. 5. The rising phase of the response to flashes and steps of light covering a 10^5-fold range of intensities is well fitted by a theory in which the essential assumptions are that (i) light starts a linear chain of reactions leading to the production of a substance which blocks ionic channels in the outer segment, (ii) an equilibrium between the blocking molecules and unblocked channels is established rapidly, and (iii) the electrical properties of the cell can be represented by a simple circuit with a time constant in the dark of about 6 msec. 6. Deviations from the simple theory which occur after 50 msec are attributed partly to a time-dependent desensitization mechanism and partly to a change in saturation potential resulting from a voltage-dependent change in conductance. 7. The existence of several components in the relaxation of the potential to its resting level can be explained by supposing that the 'substance' which blocks light-sensitive ionic channels is inactivated in a series of steps.
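    A chain of n identical first-order stages, as in finding 1 above, has the impulse response r(t) ∝ (t/τ)^(n-1) e^(-t/τ), which peaks at t = (n-1)τ. A short numerical check (n = 6 stages per the abstract; the τ value is arbitrary, not fitted to the cone data):

```python
import numpy as np

def chain_impulse_response(t, n, tau):
    """Impulse response of a cascade of n identical first-order stages
    (the 'chain of reactions' model): r(t) ∝ (t/tau)^(n-1) * exp(-t/tau)."""
    x = t / tau
    return x ** (n - 1) * np.exp(-x)

t = np.linspace(0.0, 500.0, 5001)          # time axis, ms
r = chain_impulse_response(t, n=6, tau=30.0)
t_peak = t[np.argmax(r)]                   # numerical time to peak
```

    The numerical peak lands at (n-1)·τ = 150 ms here, matching the analytic result obtained by setting dr/dt = 0.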

  14. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  15. Algorithm-enabled partial-angular-scan configurations for dual-energy CT.

    PubMed

    Chen, Buxin; Zhang, Zheng; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2018-05-01

    We seek to investigate an optimization-based one-step method for image reconstruction that explicitly compensates for nonlinear spectral response (i.e., the beam-hardening effect) in dual-energy CT; to investigate the feasibility of the one-step method for enabling two dual-energy partial-angular-scan configurations, referred to as the short- and half-scan configurations, on standard CT scanners without additional hardware; and to investigate the potential of the short- and half-scan configurations for reducing imaging dose and scan time relative to the single-kVp-switch full-scan configuration, in which two full rotations are made to collect dual-energy data. We use the one-step method to reconstruct images directly from dual-energy data by solving a nonconvex optimization program that specifies the images to be reconstructed in dual-energy CT. Dual-energy full-scan data are generated from numerical phantoms and collected from physical phantoms with the standard single-kVp-switch full-scan configuration, whereas dual-energy short- and half-scan data are extracted from the corresponding full-scan data. Besides visual inspection and profile-plot comparison, the reconstructed images are also analyzed in quantitative studies based upon tasks of linear-attenuation-coefficient and material-concentration estimation and of material differentiation. Following a computer-simulation study verifying that the one-step method can accurately reconstruct basis and monochromatic images of numerical phantoms, we reconstruct basis and monochromatic images with the one-step method from real data of physical phantoms collected with the full-, short-, and half-scan configurations.
Subjective inspection based upon visualization and profile-plot comparison reveals that the monochromatic images, which are often used in practical applications, reconstructed from the full-, short-, and half-scan data are largely visually comparable except for some differences in texture details. Moreover, the quantitative studies indicate that the short- and half-scan configurations yield results in close agreement with the ground-truth information and with those of the full-scan configuration. The one-step method considered can compensate effectively for the nonlinear spectral response in full- and partial-angular-scan dual-energy CT, and can be exploited to enable partial-angular-scan configurations on standard CT scanners without additional hardware. Visual inspection and the quantitative studies reveal that, with the one-step method, the partial-angular-scan configurations considered can perform at a level comparable to that of the full-scan configuration, suggesting their potential for reducing imaging dose and scan time in standard single-kVp-switch full-scan CT, in which two full rotations are performed. The work also yields insights into the investigation and design of other nonstandard scan configurations of potential practical significance in dual-energy CT. © 2018 American Association of Physicists in Medicine.

  16. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
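    The explicit/implicit switching idea can be illustrated on the scalar test equation y' = λy: an indicator such as |λ|Δt decides between forward and backward Euler. This is a toy sketch of the switching principle only, not the Runge–Kutta pair or stiffness indicator developed in the paper:

```python
def euler_step(y, lam, dt, stiffness_threshold=1.0):
    """One step of a toy switching scheme for y' = lam * y: use explicit
    (forward) Euler when the stiffness indicator |lam|*dt is small, and
    implicit (backward) Euler when it exceeds the threshold. Returns the
    updated value and which scheme was used."""
    if abs(lam) * dt <= stiffness_threshold:
        return y + dt * lam * y, "explicit"       # cheap, conditionally stable
    return y / (1.0 - dt * lam), "implicit"       # unconditionally stable for lam < 0

# Stiff case: explicit Euler with dt = 0.1 would blow up for lam = -100,
# so the scheme switches to the implicit update, which decays stably.
y_stiff, scheme_stiff = euler_step(1.0, -100.0, 0.1)
# Non-stiff case: the cheap explicit update is retained.
y_mild, scheme_mild = euler_step(1.0, -0.5, 0.1)
```

    In a full method the indicator would be time-dependent and evaluated along the trajectory, as in the paper, rather than fixed per step.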

  17. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steyer, Andrew J.; Van Vleck, Erik S.

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  18. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel, and found that the Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
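
    The surrogate idea can be illustrated in one dimension. The sketch below fits a Gaussian process with the twice-differentiable Matérn-5/2 kernel to samples of a model potential and takes the minimizer of the posterior mean as the predicted minimum. This is not the DL-FIND implementation; the kernel length scale, search grid, and test function are invented for the example.

```python
import math

def matern52(a, b, length=1.0):
    # Twice-differentiable Matern nu=5/2 kernel, the form favored in the study.
    s = math.sqrt(5.0) * abs(a - b) / length
    return (1.0 + s + s * s / 3.0) * math.exp(-s)

def solve(A, b):
    # Gaussian elimination with partial pivoting (adequate for tiny systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def posterior_mean(xs, ys, jitter=1e-10):
    K = [[matern52(a, b) + (jitter if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return lambda xq: sum(matern52(xq, xi) * ai for xi, ai in zip(xs, alpha))

# Fit the surrogate on a coarse sampling of a model PES, then minimize its mean.
f = lambda x: (x - 2.0) ** 2          # toy potential with minimum at x = 2
xs = [0.5 * i for i in range(9)]      # training points 0.0, 0.5, ..., 4.0
mean = posterior_mean(xs, [f(x) for x in xs])
x_best = min((0.01 * i for i in range(401)), key=mean)
```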

  19. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs from gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: an edge rank assignment step and a network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in the two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell-cycle-regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN from experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
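
    As a toy illustration of the two-step idea (rank candidate edges first, then build the network from top-ranked edges), the sketch below replaces the RNN/RMLP with a plain linear dynamical model fitted by least squares and ranks edges by the magnitude of the fitted weights. All values are invented; the actual methods estimate RNN/RMLP weights with PSO.

```python
# Hypothetical true small network: gene 0 is driven by gene 2, gene 1 by gene 0.
W_TRUE = [[0.9, 0.0, 0.5],
          [-0.4, 0.8, 0.0],
          [0.0, 0.0, 0.7]]

def step(W, x):
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def simulate(W, x0, T):
    xs = [x0]
    for _ in range(T):
        xs.append(step(W, xs[-1]))
    return xs

def fit_row(X, y):
    # Least squares for one target gene via normal equations (tiny 3x3 solve).
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):                      # Gaussian elimination
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

series = simulate(W_TRUE, [1.0, 0.5, -0.3], 8)   # noiseless expression time series
X, Y = series[:-1], series[1:]
W_fit = [fit_row(X, [y[i] for y in Y]) for i in range(3)]

# Step one: rank all off-diagonal edges (j -> i) by fitted weight magnitude.
edges = sorted(((abs(W_fit[i][j]), (j, i))
                for i in range(3) for j in range(3) if i != j), reverse=True)
top2 = {e for _, e in edges[:2]}   # step two would build the network from these
```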

  20. Selective thermal transformation of old computer printed circuit boards to Cu-Sn based alloy.

    PubMed

    Shokri, Ali; Pahlevani, Farshid; Cole, Ivan; Sahajwalla, Veena

    2017-09-01

    This study investigates, verifies and determines the optimal parameters for the selective thermal transformation of problematic electronic waste (e-waste) to produce value-added copper-tin (Cu-Sn) based alloys, thereby demonstrating a novel pathway for the cost-effective recovery of resources from one of the world's fastest growing and most challenging waste streams. Using outdated computer printed circuit boards (PCBs), a ubiquitous component of e-waste, we investigated transformations across a range of temperatures and time frames. Results indicate a two-step heat treatment process, using a low temperature step followed by a high temperature step, can be used to produce and separate off, first, a lead (Pb) based alloy and, subsequently, a Cu-Sn based alloy. We also found a single-step heat treatment process at a moderate temperature of 900 °C can be used to directly transform old PCBs to produce a Cu-Sn based alloy, while capturing the Pb and antimony (Sb) as alloying elements to prevent the emission of these low melting point elements. These results demonstrate old computer PCBs, large volumes of which are already within global waste stockpiles, can be considered a potential source of value-added metal alloys, opening up a new opportunity for utilizing e-waste to produce metal alloys in local micro-factories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, John M., E-mail: finn@lanl.gov

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each of which is Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
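
    A minimal sketch of why implicit midpoint (IM) is attractive for such integrations: for a solenoidal linear field it exactly preserves the quadratic invariant, so circular "field lines" stay on their circles for any step size. The field, step size, and fixed-point solver below are illustrative, not the paper's setup.

```python
def field(x, y):
    # Divergence-free planar field: circular flow around the origin.
    return -y, x

def implicit_midpoint_step(x, y, h, iters=60):
    # Solve z1 = z0 + h*f((z0 + z1)/2) by fixed-point iteration (converges for small h).
    xn, yn = x, y
    for _ in range(iters):
        fx, fy = field(0.5 * (x + xn), 0.5 * (y + yn))
        xn, yn = x + h * fx, y + h * fy
    return xn, yn

x, y = 1.0, 0.0
for _ in range(1000):
    x, y = implicit_midpoint_step(x, y, 0.1)
radius_sq = x * x + y * y   # conserved quadratic invariant (up to roundoff)
```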

  2. Terminal-Area Aircraft Intent Inference Approach Based on Online Trajectory Clustering.

    PubMed

    Yang, Yang; Zhang, Jun; Cai, Kai-quan

    2015-01-01

    Terminal-area aircraft intent inference (T-AII) is a prerequisite for detecting and avoiding potential aircraft conflicts in the terminal airspace. T-AII challenges state-of-the-art AII approaches because of the uncertainties of the air traffic situation, in particular the undefined flight routes and frequent maneuvers. In this paper, a novel T-AII approach is introduced that addresses these limitations by solving the problem in two steps: intent modeling and intent inference. In the modeling step, an online trajectory clustering procedure is designed to recognize the real-time available routes in place of the missing planned routes. In the inference step, we then present a probabilistic T-AII approach based on multiple flight attributes to improve the inference performance in maneuvering scenarios. The proposed approach is validated with real radar trajectory and flight attribute data from 34 days collected from the Chengdu terminal area in China. Preliminary results show the efficacy of the presented approach.

  3. Gold Nanorod-based Photo-PCR System for One-Step, Rapid Detection of Bacteria

    PubMed Central

    Kim, Jinjoo; Kim, Hansol; Park, Ji Ho; Jon, Sangyong

    2017-01-01

    The polymerase chain reaction (PCR) has been an essential tool for diagnosis of infectious diseases, but conventional PCR still has some limitations with respect to applications to point-of-care (POC) diagnostic systems that require rapid detection and miniaturization. Here we report a light-based PCR method, termed photo-PCR, which enables rapid detection of bacteria in a single step. In the photo-PCR system, poly(ethylene glycol)-modified gold nanorods (PEG-GNRs), used as a heat generator, are added into the PCR mixture, which is subsequently periodically irradiated with an 808-nm laser to create thermal cycling. Photo-PCR was able to significantly reduce overall thermal cycling time by integrating bacterial cell lysis and DNA amplification into a single step. Furthermore, when combined with KAPA2G fast polymerase and a cooling system, the entire process of bacterial genomic DNA extraction and amplification was further shortened, highlighting the potential of photo-PCR for use in a portable, POC diagnostic system. PMID:29071186

  4. Centrifugal Microfluidic System for Nucleic Acid Amplification and Detection.

    PubMed

    Miao, Baogang; Peng, Niancai; Li, Lei; Li, Zheng; Hu, Fei; Zhang, Zengming; Wang, Chaohui

    2015-11-04

    We report here the development of a rapid PCR microfluidic system comprising a double-shaft turntable and a centrifugal disc that rapidly drives the PCR mixture between chambers set at different temperatures; the bidirectional flow improves the space utilization of the disc. Three heating resistors and thermistors maintained uniform, specific temperatures for the denaturation, annealing, and extension steps of the PCR. Infrared imaging showed that there was little thermal interference between reaction chambers; the system enabled the cycle number and reaction time of each step to be independently adjusted. To validate the function and efficiency of the centrifugal microfluidic system, a 350-base pair target gene from the hepatitis B virus was amplified and quantitated by fluorescence detection. By optimizing the cycling parameters, the reaction time was reduced to 32 min as compared to 120 min for a commercial PCR machine. DNA samples with concentrations ranging from 10 to 10⁶ copies/mL could be quantitatively analyzed using this system. This centrifugal microfluidic platform is a useful system with industrialization potential for portable diagnostics.

  5. Centrifugal Microfluidic System for Nucleic Acid Amplification and Detection

    PubMed Central

    Miao, Baogang; Peng, Niancai; Li, Lei; Li, Zheng; Hu, Fei; Zhang, Zengming; Wang, Chaohui

    2015-01-01

    We report here the development of a rapid PCR microfluidic system comprising a double-shaft turntable and a centrifugal disc that rapidly drives the PCR mixture between chambers set at different temperatures; the bidirectional flow improves the space utilization of the disc. Three heating resistors and thermistors maintained uniform, specific temperatures for the denaturation, annealing, and extension steps of the PCR. Infrared imaging showed that there was little thermal interference between reaction chambers; the system enabled the cycle number and reaction time of each step to be independently adjusted. To validate the function and efficiency of the centrifugal microfluidic system, a 350-base pair target gene from the hepatitis B virus was amplified and quantitated by fluorescence detection. By optimizing the cycling parameters, the reaction time was reduced to 32 min as compared to 120 min for a commercial PCR machine. DNA samples with concentrations ranging from 10 to 10⁶ copies/mL could be quantitatively analyzed using this system. This centrifugal microfluidic platform is a useful system with industrialization potential for portable diagnostics. PMID:26556354

  6. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik

    The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio (BR) indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and cycle operation step on the evaluated criticality of the reactor as well as on the breeding ratio. The burnup step is varied on a day basis from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, calculation efficiency as a function of the number of computer processors used for the analysis (time efficiency of the calculation) was also investigated. The optimization method for the reactor design analysis, which used a large fast breeder reactor type as the reference case, was performed with the established reactor design code JOINT-FR. The results show that the evaluated criticality becomes higher for smaller burnup steps (days), while the breeding ratio becomes smaller. Some nuclides improve the criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional time required for more detailed step calculations, although the computation time is not directly proportional to the number of divisions of the burnup time step.
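
    The reported sensitivity to the burnup step has a simple numerical analogue: depleting a nuclide with a fixed reaction rate using one explicit update per burnup step drifts further from the exact exponential solution as the step grows. The rate and step sizes below are invented for illustration; the study itself uses the JOINT-FR code.

```python
import math

RATE = 0.001          # hypothetical removal rate per day (absorption + decay)
DAYS = 800.0

def deplete(step_days):
    # One explicit (first-order) update of the inventory per burnup step.
    n, t = 1.0, 0.0
    while t < DAYS - 1e-9:
        n *= 1.0 - RATE * step_days
        t += step_days
    return n

exact = math.exp(-RATE * DAYS)
fine, coarse = deplete(10.0), deplete(800.0)
# The 10-day step tracks the exact inventory closely; a single 800-day step does not.
```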

  7. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
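
    The gain from hierarchical (block) time steps can be sketched by counting force evaluations. Each particle gets the largest power-of-two fraction of the maximum step that does not exceed its required step, so particles needing small steps are updated often and the rest rarely. The particle step requirements below are invented for illustration.

```python
def block_step(dt_required, dt_max=1.0):
    # Largest dt_max / 2**k that does not exceed the particle's required step.
    dt = dt_max
    while dt > dt_required:
        dt *= 0.5
    return dt

# Hypothetical required steps: a few fast particles, many slow ones.
required = [0.001] * 10 + [0.03] * 90 + [0.9] * 900
blocks = [block_step(dt) for dt in required]

T = 16.0   # total integration time
evals_hierarchical = sum(round(T / dt) for dt in blocks)
evals_shared = len(required) * round(T / min(blocks))   # everyone at the smallest step
speedup = evals_shared / evals_hierarchical
```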

  8. Optimization of cryoprotectant loading into murine and human oocytes.

    PubMed

    Karlsson, Jens O M; Szurek, Edyta A; Higgins, Adam Z; Lee, Sang R; Eroglu, Ali

    2014-02-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethyl sulfoxide (Me(2)SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me(2)SO exposure time, revealing that neither shrinkage nor Me(2)SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me(2)SO addition appears to result from interactions between the effects of Me(2)SO toxicity and osmotic stress. We also investigated Me(2)SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me(2)SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me(2)SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. Copyright © 2013 Elsevier Inc. All rights reserved.
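
    The benefit of two-step loading can be sketched with a toy two-compartment osmotic model: water leaves the cell quickly down the osmotic gradient while the permeating cryoprotectant enters slowly, so a single large exposure produces a deeper transient shrinkage than two half-steps. All rate constants, concentrations, and durations below are invented and are not the paper's calibrated membrane-transport parameters.

```python
def min_volume(schedule, kw=1.0, ks=0.2, dt=0.001):
    """Toy shrink-swell model. schedule = [(external CPA osmolality, duration), ...].
    Returns the minimum normalized water volume reached during loading."""
    vw, ns, n_imp = 1.0, 0.0, 0.3   # water volume, CPA content, impermeant osmoles
    vmin = vw
    for cpa_ext, duration in schedule:
        t = 0.0
        while t < duration:
            osm_int = (ns + n_imp) / vw
            osm_ext = cpa_ext + 0.3                # CPA plus isotonic salts
            vw += -kw * (osm_ext - osm_int) * dt   # water follows the gradient fast
            ns += ks * (cpa_ext - ns / vw) * dt    # CPA permeates slowly
            vmin = min(vmin, vw)
            t += dt
    return vmin

one_step = min_volume([(1.5, 30.0)])                 # single full-strength exposure
two_step = min_volume([(0.75, 30.0), (1.5, 30.0)])   # half-strength, then full
# Two-step loading keeps the transient shrinkage milder.
```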

  9. Optimization of Cryoprotectant Loading into Murine and Human Oocytes

    PubMed Central

    Karlsson, Jens O.M.; Szurek, Edyta A.; Higgins, Adam Z.; Lee, Sang R.; Eroglu, Ali

    2014-01-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethylsulfoxide (Me2SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me2SO exposure time, revealing that neither shrinkage nor Me2SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me2SO addition appears to result from interactions between the effects of Me2SO toxicity and osmotic stress. We also investigated Me2SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me2SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me2SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. PMID:24246951

  10. Performance of the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2016-01-01

    The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure will help us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend…

  11. Do walking strategies to increase physical activity reduce reported sitting in workplaces: a randomized control trial

    PubMed Central

    Gilson, Nicholas D; Puig-Ribera, Anna; McKenna, Jim; Brown, Wendy J; Burton, Nicola W; Cooke, Carlton B

    2009-01-01

    Background Interventions designed to increase workplace physical activity may not automatically reduce high volumes of sitting, a behaviour independently linked to chronic diseases such as obesity and type II diabetes. This study compared the impact two different walking strategies had on step counts and reported sitting times. Methods Participants were white-collar university employees (n = 179; age 41.3 ± 10.1 years; 141 women), who volunteered and undertook a standardised ten-week intervention at three sites. Pre-intervention step counts (Yamax SW-200) and self-reported sitting times were measured over five consecutive workdays. Using pre-intervention step counts, employees at each site were randomly allocated to a control group (n = 60; maintain normal behaviour), a route-based walking group (n = 60; at least 10 minutes sustained walking each workday) or an incidental walking group (n = 59; walking in workday tasks). Workday step counts and reported sitting times were re-assessed at the beginning, mid- and endpoint of intervention, and group mean ± SD steps/day and reported sitting times for pre-intervention and intervention measurement points were compared using a mixed factorial ANOVA; paired-sample t-tests were used for follow-up simple-effect analyses. Results A significant interactive effect (F = 3.5; p < 0.003) was found between group and step counts. Daily steps for controls decreased over the intervention period (-391 steps/day) and increased for route (968 steps/day; t = 3.9, p < 0.000) and incidental (699 steps/day; t = 2.5, p < 0.014) groups. There were no significant changes for reported sitting times, but average values did decrease relative to the control (routes group = 7 minutes/day; incidental group = 15 minutes/day). Reductions were most evident for the incidental group in the first week of intervention, where reported sitting decreased by an average of 21 minutes/day (t = 1.9; p < 0.057).
Conclusion Compared to controls, both route and incidental walking increased physical activity in white-collar employees. Our data suggest that workplace walking, particularly through incidental movement, also has the potential to decrease employee sitting times, but there is a need for on-going research using concurrent and objective measures of sitting, standing and walking. PMID:19619295

  12. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Endo, Satoshi; Wong, May

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
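
    The core of the fix is an algebraic substitution: prognosing the moist potential temperature θm = θ(1 + 1.61 qv) so that the qv dependence enters the pressure calculation consistently during the acoustic sub-steps. A minimal numeric sketch (the θ and qv values are illustrative):

```python
def theta_m(theta, qv):
    # Moist potential temperature used in the modified prognostic equation.
    return theta * (1.0 + 1.61 * qv)

# A qv jump across the inversion changes theta_m even at constant theta; this
# is exactly the variation the original pressure-gradient calculation neglected.
below = theta_m(290.0, 0.009)   # boundary layer: moister
above = theta_m(290.0, 0.002)   # free atmosphere: drier
delta = below - above
```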

  13. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE PAGES

    Xiao, Heng; Endo, Satoshi; Wong, May; ...

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  14. How many steps/day are enough? For adults.

    PubMed

    Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N

    2011-07-28

    Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. 
Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
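
    The review's heuristic arithmetic can be made explicit. Using the ~100 steps/minute moderate-intensity floor, 30 minutes of MVPA contributes about 3,000 steps taken on top of habitual activity; the habitual-activity range below is chosen to reproduce the review's 7,100-11,000 steps/day translation.

```python
CADENCE_MODERATE = 100   # steps/minute: approximate floor for moderate intensity

def guideline_steps(mvpa_minutes, habitual_steps):
    # MVPA steps are taken over and above habitual activity.
    return CADENCE_MODERATE * mvpa_minutes + habitual_steps

mvpa_only = CADENCE_MODERATE * 30     # the 3,000-step heuristic value
low = guideline_steps(30, 4100)       # low habitual baseline  -> 7,100 steps/day
high = guideline_steps(30, 8000)      # high habitual baseline -> 11,000 steps/day
```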

  15. How many steps/day are enough? for adults

    PubMed Central

    2011-01-01

    Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. 
Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day. PMID:21798015

  16. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller, simpler models and of better starting points to improve solution efficiency. The set of nonlinear constraints (termed complicating constraints) that makes the solution of the model complex and time consuming is eliminated from the first step. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results are compared with solutions obtained by solving the complete model directly in a single step. In all examples, the two-step approach significantly reduced the computation time. This efficiency gain can be particularly useful for cases where computation time is a critical factor in obtaining an optimized solution in due time.
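The strategy can be illustrated on a toy problem. The sketch below is a hypothetical example, not the authors' conjunctive-use model: it first solves a relaxed problem without the complicating constraint, then uses that solution as the starting point for the full problem, with the nonlinear constraint enforced via a quadratic penalty:

```python
def grad_descent(grad, x0, lr=0.002, iters=50000):
    """Minimal gradient descent; returns the final iterate."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Objective: f(x, y) = (x - 3)^2 + (y - 2)^2
# Complicating constraint (step two only): x * y = 2, via penalty MU * (xy - 2)^2
MU = 10.0

def grad_relaxed(v):
    x, y = v
    return [2 * (x - 3), 2 * (y - 2)]

def grad_full(v):
    x, y = v
    r = x * y - 2
    return [2 * (x - 3) + 2 * MU * r * y, 2 * (y - 2) + 2 * MU * r * x]

# Step one: solve the simpler model without the complicating constraint.
start = grad_descent(grad_relaxed, [0.0, 0.0])  # converges near (3, 2)
# Step two: solve the complete model from the good starting point.
sol = grad_descent(grad_full, start)
```

With the penalty weight fixed, step two lands near the constrained optimum with a small constraint residual; a production solver would tighten the penalty or use Lagrange multipliers instead.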

  17. A Novel 3D Label-Free Monitoring System of hES-Derived Cardiomyocyte Clusters: A Step Forward to In Vitro Cardiotoxicity Testing

    PubMed Central

    Jahnke, Heinz-Georg; Steel, Daniella; Fleischer, Stephan; Seidel, Diana; Kurz, Randy; Vinz, Silvia; Dahlenborg, Kerstin; Sartipy, Peter; Robitzki, Andrea A.

    2013-01-01

    Unexpected adverse effects on the cardiovascular system remain a major challenge in the development of novel active pharmaceutical ingredients (API). To overcome the current limitations of animal-based in vitro and in vivo test systems, stem cell-derived human cardiomyocyte clusters (hCMC) offer the opportunity for highly predictive pre-clinical testing. The three-dimensional structure of hCMC appears more representative of the tissue milieu than traditional monolayer cell culture. However, there is a lack of long-term, real-time monitoring systems for tissue-like cardiac material. To address this issue, we have developed a microcavity array (MCA)-based label-free monitoring system that eliminates the need for critical hCMC adhesion and outgrowth steps. Instead, field potential-derived action potential recording is possible immediately after positioning within the microcavity. Moreover, this approach allows extended observation of adverse effects on hCMC. For the first time, we describe herein the monitoring of hCMC over 35 days while preserving the hCMC structure and electrophysiological characteristics. Furthermore, we demonstrate the sensitive detection and quantification of adverse API effects using E4031, doxorubicin, and noradrenaline directly on unaltered 3D cultures. The MCA system provides multi-parameter analysis capabilities incorporating field potential recording, impedance spectroscopy, and optical read-outs on individual clusters, giving a comprehensive insight into induced cellular alterations within a complex cardiac culture over days or even weeks. PMID:23861955

  18. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research.

  19. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
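A common form of such energy-based adaptivity chooses the step size from the magnitude of the energy's time derivative. The formula and constants below are illustrative assumptions in the spirit of the abstract, not necessarily the authors' exact scheme:

```python
import math

def adaptive_dt(dEdt, dt_min=1e-3, dt_max=1.0, alpha=100.0):
    """Adaptive time step: small while the free energy changes rapidly,
    approaching dt_max as the solution nears steady state."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dEdt * dEdt))

# Fast energy decay early in the dynamics -> small steps;
# flat energy near steady state -> the maximum step.
print(adaptive_dt(10.0))  # small step during rapid dynamics
print(adaptive_dt(0.0))   # 1.0 at steady state
```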

  20. Alpha neurofeedback training improves SSVEP-based BCI performance.

    PubMed

    Wan, Feng; da Cruz, Janir Nuno; Nan, Wenya; Wong, Chi Man; Vai, Mang I; Rosa, Agostinho

    2016-06-01

    Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can provide relatively easy, reliable and high speed communication. However, the performance is still not satisfactory, especially in some users who are not able to generate strong enough SSVEP signals. This work aims to strengthen a user's SSVEP by alpha down-regulating neurofeedback training (NFT) and consequently improve the performance of the user in using SSVEP-based BCIs. An experiment with two steps was designed and conducted. The first step was to investigate the relationship between the resting alpha activity and the SSVEP-based BCI performance, in order to determine the training parameter for the NFT. Then in the second step, half of the subjects with 'low' performance (i.e. BCI classification accuracy <80%) were randomly assigned to an NFT group to perform real-time NFT, and the other half to a non-NFT control group for comparison. The first step revealed a significant negative correlation between the BCI performance and the individual alpha band (IAB) amplitudes in the eyes-open resting condition in a total of 33 subjects. In the second step, it was found that during the IAB down-regulating NFT, on average the subjects were able to successfully decrease their IAB amplitude over training sessions. More importantly, the NFT group showed an average increase of 16.5% in the SSVEP signal SNR (signal-to-noise ratio) and an average increase of 20.3% in the BCI classification accuracy, which was significant compared to the non-NFT control group. These findings indicate that alpha down-regulating NFT can be used to improve the SSVEP signal quality and the subjects' performance in using SSVEP-based BCIs. It could be helpful to SSVEP-related studies and would contribute to more effective SSVEP-based BCI applications.

  1. One-step electrochemical deposition of Schiff base cobalt complex as effective water oxidation catalyst

    NASA Astrophysics Data System (ADS)

    Huang, Binbin; Wang, Yan; Zhan, Shuzhong; Ye, Jianshan

    2017-02-01

    Schiff base metal complexes have been applied in many fields, especially as potential homogeneous catalysts for water splitting. However, high overpotentials, time-consuming synthesis processes and complicated working conditions largely limit their application. In the present work, a one-step approach to fabricate a Schiff base cobalt complex modified electrode is developed. Microrod clusters (MRC) and rough spherical particles (RSP) can be obtained on the ITO electrode through different electrochemical deposition conditions. Both MRC and RSP present favorable activity for the oxygen evolution reaction (OER) compared to commercial Co3O4, requiring overpotentials of 650 mV and 450 mV, respectively, to drive an appreciable catalytic current. The highly active and stable RSP shows a Tafel slope of 84 mV dec-1 and negligible decrease of the current density over 12 h of bulk electrolysis. The synthesis strategy of an effective and stable catalyst in this work provides a simple method to fabricate heterogeneous OER catalysts from Schiff base metal complexes.

  2. Lipid extraction methods from microalgal biomass harvested by two different paths: screening studies toward biodiesel production.

    PubMed

    Ríos, Sergio D; Castañeda, Joandiet; Torras, Carles; Farriol, Xavier; Salvadó, Joan

    2013-04-01

    Microalgae can grow rapidly and capture CO2 from the atmosphere, converting it into complex organic molecules such as lipids (biodiesel feedstock). Economically feasible large-scale microalgae-based oil production depends on optimizing the entire production process. This process can be divided into three very different but directly related steps (production, concentration, and lipid extraction and transesterification). The aim of this study is to identify the best method of lipid extraction to assess the potential of microalgal biomass obtained from two different harvesting paths. The first path used all-physical concentration steps, and the second path was a combination of chemical and physical concentration steps. Three microalgae species were tested: Phaeodactylum tricornutum, Nannochloropsis gaditana, and Chaetoceros calcitrans. One-step lipid extraction-transesterification reached the same fatty acid methyl ester yield as the Bligh and Dyer and Soxhlet extraction with n-hexane methods, with the corresponding time, cost and solvent savings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.

    PubMed

    Louarroudi, E; Sanchez, B

    2017-02-01

    When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while accurately measuring non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.

  4. Learning to Predict Chemical Reactions

    PubMed Central

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking (information retrieval) problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered.
Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal (http://cdb.ics.uci.edu) under the Toolkits section. PMID:21819139

  5. Integrating Near-Real Time Hydrologic-Response Monitoring and Modeling for Improved Assessments of Slope Stability Along the Coastal Bluffs of the Puget Sound Rail Corridor, Washington State

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.

    2015-12-01

    Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
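The final stability step above can be sketched with the infinite-slope approximation. The sketch below follows the suction-stress-based form popularized by Lu and Godt (an assumption on my part; the abstract does not give the exact equation), with illustrative parameter values:

```python
import math

def factor_of_safety(beta_deg, phi_deg, c=0.0, sigma_s=0.0,
                     gamma=20e3, z=1.0):
    """Infinite-slope factor of safety with a suction-stress term.
    beta_deg: slope angle, phi_deg: friction angle, c: cohesion (Pa),
    sigma_s: suction stress (Pa; negative under suction, stabilizing),
    gamma: soil unit weight (N/m^3), z: failure-surface depth (m)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    fs = math.tan(phi) / math.tan(beta)
    fs += 2.0 * c / (gamma * z * math.sin(2.0 * beta))
    fs -= sigma_s * (math.tan(beta) + 1.0 / math.tan(beta)) * math.tan(phi) / (gamma * z)
    return fs

# A dry, cohesionless slope inclined at its friction angle sits at
# limit equilibrium (FS = 1); rising pore pressure (sigma_s > 0 in this
# sign convention) drives FS below 1.
print(factor_of_safety(30, 30))               # 1.0
print(factor_of_safety(30, 30, sigma_s=5e3))  # < 1
```

In a monitoring workflow, simulated pore pressures would be converted to sigma_s at each depth and time step, so FS can be tracked continuously at both sites.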

  6. Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.

    2016-09-01

    Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.

  7. Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.

    2000-06-01

    The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.
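The scoring of such response data can be sketched as follows. This is an illustrative scoring rule with a hypothetical 2-second response window, not the study's exact protocol:

```python
def detection_probability(hazard_times, response_times, window=2.0):
    """Fraction of hazards answered by at least one response within
    `window` seconds of the hazard's known time of occurrence."""
    hits = sum(
        1 for h in hazard_times
        if any(h <= r <= h + window for r in response_times)
    )
    return hits / len(hazard_times)

# Three hazards; responses catch the first two within the window
p = detection_probability([5.0, 20.0, 40.0], [6.1, 21.5, 50.0])
print(p)  # 2 of 3 detected
```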

  8. Accurate step-FMCW ultrasound ranging and comparison with pulse-echo signaling methods

    NASA Astrophysics Data System (ADS)

    Natarajan, Shyam; Singh, Rahul S.; Lee, Michael; Cox, Brian P.; Culjat, Martin O.; Grundfest, Warren S.; Lee, Hua

    2010-03-01

    This paper presents a method for high-frequency ultrasound ranging based on stepped frequency-modulated continuous waves (FMCW), potentially capable of producing a higher signal-to-noise ratio (SNR) than traditional pulse-echo signaling. In current ultrasound systems, the use of higher frequencies (10-20 MHz) to enhance resolution lowers signal quality due to frequency-dependent attenuation. The proposed ultrasound signaling format, step-FMCW, is well known in the radar community, and features lower peak power, wider dynamic range, lower noise figure and simpler electronics in comparison to pulse-echo systems. In pulse-echo ultrasound ranging, distances are calculated using the transit times between a pulse and its subsequent echoes. In step-FMCW ultrasonic ranging, the phase and magnitude differences at stepped frequencies are used to sample the frequency domain. Thus, by taking the inverse Fourier transform, a comprehensive range profile is recovered that has increased immunity to noise over conventional ranging methods. Step-FMCW and pulse-echo waveforms were created using custom-built hardware consisting of an arbitrary waveform generator and a dual-channel superheterodyne receiver, providing high SNR and, in turn, accuracy in detection.
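The core of the step-FMCW recovery, sampling the frequency response at stepped frequencies and inverse-Fourier-transforming into a range profile, can be sketched numerically. This is an idealized, noise-free baseband model with assumed parameters, not the authors' hardware:

```python
import numpy as np

c = 1540.0    # assumed speed of sound in tissue, m/s
n_steps = 64  # number of frequency steps
df = 1e3      # frequency step size, Hz (unambiguous range = c / (2 * df))

# Place a point target exactly on range bin 10 for a clean demonstration
d_true = 10 * c / (2 * n_steps * df)

# Round-trip phase measured at each stepped frequency (unit magnitude)
n = np.arange(n_steps)
samples = np.exp(-1j * 2 * np.pi * n * df * 2 * d_true / c)

# Inverse FFT of the sampled frequency response yields the range profile
profile = np.abs(np.fft.ifft(samples))
bin_est = int(np.argmax(profile))
d_est = bin_est * c / (2 * n_steps * df)
print(bin_est, d_est)  # peak at bin 10
```

The range resolution of this sketch is c / (2 * n_steps * df); finer steps or more of them trade acquisition time for unambiguous range and resolution.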

  9. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations to model larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its time-step being one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges in reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPU) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations, but also an opportunity to cope with the demands that ReaxFF MD imposes on computing power and memory. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times faster than Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than the Lammps C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Real‐time monitoring and control of the load phase of a protein A capture step

    PubMed Central

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura

    2016-01-01

    The load phase in preparative Protein A capture steps is commonly not controlled in real-time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin life-time studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real-time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares (PLS) regression modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited an RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase when a product breakthrough of 0.15 mg/mL was achieved. The proposed method hence has potential for the real-time monitoring and control of capture steps in large-scale production. This might enhance the resin capacity utilization, eliminate time-consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
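The monitoring idea can be sketched with synthetic data. For brevity, the sketch below uses a plain least-squares calibration from spectra to concentration as a stand-in for the paper's PLS model; the spectra, noise levels, and the Gaussian "absorption band" are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 60                                                   # wavelength channels
pure = np.exp(-0.5 * ((np.arange(n_wl) - 30) / 6.0) ** 2)   # mAb band shape

# Calibration set: spectra of known mAb concentrations (mg/mL)
c_cal = rng.uniform(0.0, 2.0, 200)
X_cal = np.outer(c_cal, pure) + rng.normal(0.0, 0.005, (200, n_wl))

# Least-squares regression vector (stand-in for a 1-component PLS model)
b, *_ = np.linalg.lstsq(X_cal, c_cal, rcond=None)

def predict_conc(spectrum):
    """Predict mAb concentration from a single effluent spectrum."""
    return float(spectrum @ b)

# Simulated load phase: effluent concentration rises as the column loads;
# terminate loading when the predicted breakthrough reaches 1.5 mg/mL
threshold = 1.5
stop_conc = None
for true_conc in np.linspace(0.0, 2.0, 41):
    spec = true_conc * pure + rng.normal(0.0, 0.005, n_wl)
    if predict_conc(spec) >= threshold:
        stop_conc = true_conc
        break
```

A real implementation would use a properly cross-validated PLS model and background subtraction, as the study describes, but the control loop has the same shape: predict from each new spectrum, compare against the breakthrough threshold, stop the load.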

  11. Impact of the Parameter Identification of Plastic Potentials on the Finite Element Simulation of Sheet Metal Forming

    NASA Astrophysics Data System (ADS)

    Rabahallah, M.; Bouvier, S.; Balan, T.; Bacroix, B.; Teodosiu, C.

    2007-04-01

    In this work, an implicit, backward Euler time integration scheme is developed for an anisotropic, elastic-plastic model based on strain-rate potentials. The constitutive algorithm includes a sub-stepping procedure to deal with the strong nonlinearity of the plastic potentials when applied to FCC materials. The algorithm is implemented in the static implicit version of the Abaqus finite element code. Several recent plastic potentials have been implemented in this framework. The most accurate potentials require the identification of about twenty material parameters. Both mechanical tests and micromechanical simulations have been used for their identification, for a number of BCC and FCC materials. The impact of the identification procedure on the prediction of ears in cup drawing is investigated.

  12. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used.
Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for the entire study area.
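The last step, deriving expected annual damage from damages associated with given probabilities of occurrence, amounts to integrating damage over exceedance probability. A minimal sketch with invented return periods and loss values:

```python
def expected_annual_damage(return_periods, damages):
    """Trapezoidal integration of damage over annual exceedance probability.
    return_periods: ascending, in years; damages: matching loss estimates."""
    probs = [1.0 / t for t in return_periods]  # annual exceedance probabilities
    ead = 0.0
    for i in range(len(probs) - 1):
        dp = probs[i] - probs[i + 1]           # probs fall as return period grows
        ead += 0.5 * (damages[i] + damages[i + 1]) * dp
    return ead

# Hypothetical damage estimates (million EUR) for 2-, 10- and 100-year floods
print(expected_annual_damage([2, 10, 100], [0.0, 10.0, 50.0]))  # about 4.7
```

With a long synthetic event series, the same quantity can alternatively be estimated as total simulated damage divided by the number of simulated years.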

  13. Development and comparative evaluation of SYBR Green I-based one-step real-time RT-PCR assay for detection and quantification of West Nile virus in human patients.

    PubMed

    Kumar, Jyoti S; Saxena, Divyasha; Parida, Manmohan

    2014-01-01

    The recent outbreaks of West Nile virus (WNV) in the northeastern Americas and other regions of the world have made it essential to develop an efficient protocol for surveillance of WN virus. Nucleic acid-based techniques such as RT-PCR have the advantages of sensitivity, specificity and rapidity. A one-step, single-tube, Env gene-specific real-time RT-PCR was developed for early and reliable clinical diagnosis of WNV infection in clinical samples. The applicability of this assay for clinical diagnosis was validated with 105 suspected acute-phase serum and plasma samples from the recent epidemic of mysterious fever in Tamil Nadu, India in 2009-10. The comparative evaluation revealed the higher sensitivity of the real-time RT-PCR assay, which picked up 4 additional samples with low template copy numbers in comparison to conventional RT-PCR. All the real-time positive samples were further confirmed by the CDC-reported TaqMan real-time RT-PCR and quantitative real-time RT-PCR assays for the simultaneous detection of WNV lineage 1 and 2 strains. The quantitation of the viral load in samples was done using a standard curve. These findings demonstrate that the assay has potential usefulness for clinical diagnosis due to the detection and quantification of WNV in acute-phase patient serum samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
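The quantification step, reading copy numbers off a standard curve of Ct versus log10 template amount, can be sketched as follows. The dilution-series values are illustrative; a slope near -3.32 corresponds to roughly 100% amplification efficiency:

```python
import math

# Hypothetical dilution series: (template copies, measured Ct)
standards = [(1e7, 13.0), (1e5, 19.64), (1e3, 26.28)]

# Fit Ct = slope * log10(copies) + intercept by simple linear regression
xs = [math.log10(copies) for copies, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def copies_from_ct(ct):
    """Invert the standard curve: Ct -> estimated template copies."""
    return 10 ** ((ct - intercept) / slope)

print(round(copies_from_ct(22.96)))  # about 10,000 copies for this curve
```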

  14. Use of Visual and Proprioceptive Feedback to Improve Gait Speed and Spatiotemporal Symmetry Following Chronic Stroke: A Case Series

    PubMed Central

    Feasel, Jeff; Wentz, Erin; Brooks, Frederick P.; Whitton, Mary C.

    2012-01-01

    Background and Purpose: Persistent deficits in gait speed and spatiotemporal symmetry are prevalent following stroke and can limit the achievement of community mobility goals. Rehabilitation can improve gait speed, but has shown limited ability to improve spatiotemporal symmetry. The incorporation of combined visual and proprioceptive feedback regarding spatiotemporal symmetry has the potential to be effective at improving gait. Case Description: A 60-year-old man (18 months poststroke) and a 53-year-old woman (21 months poststroke) each participated in gait training to improve gait speed and spatiotemporal symmetry. Each patient performed 18 sessions (6 weeks) of combined treadmill-based gait training followed by overground practice. To assist with relearning spatiotemporal symmetry, treadmill-based training for both patients was augmented with continuous, real-time visual and proprioceptive feedback from an immersive virtual environment and a dual-belt treadmill, respectively. Outcomes: Both patients improved gait speed (patient 1: 0.35 m/s improvement; patient 2: 0.26 m/s improvement) and spatiotemporal symmetry. Patient 1, who trained with step-length symmetry feedback, improved his step-length symmetry ratio, but not his stance-time symmetry ratio. Patient 2, who trained with stance-time symmetry feedback, improved her stance-time symmetry ratio. She had no step-length asymmetry before training. Discussion: Both patients made improvements in gait speed and spatiotemporal symmetry that exceeded those reported in the literature. Further work is needed to ascertain the role of combined visual and proprioceptive feedback for improving gait speed and spatiotemporal symmetry after chronic stroke. PMID:22228605
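The spatiotemporal symmetry measures referred to above can be expressed simply. The step-length values below are hypothetical, and the ratio convention (paretic over nonparetic side) is one common choice:

```python
def symmetry_ratio(paretic, nonparetic):
    """Ratio of paretic- to nonparetic-side values (step length or stance
    time); 1.0 indicates perfect spatiotemporal symmetry."""
    return paretic / nonparetic

def asymmetry(paretic, nonparetic):
    """Absolute deviation of the symmetry ratio from 1.0."""
    return abs(symmetry_ratio(paretic, nonparetic) - 1.0)

# Hypothetical step lengths (m) before and after training
pre = asymmetry(0.38, 0.55)
post = asymmetry(0.48, 0.52)
print(pre, post)  # training moved the ratio toward 1.0
```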

  15. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm. [considering transonic flow

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps could be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.

  16. Independent component analysis (ICA) and self-organizing map (SOM) approach to multidetection system for network intruders

    NASA Astrophysics Data System (ADS)

    Abdi, Abdi M.; Szu, Harold H.

    2003-04-01

    With the growing rate of interconnection among computer systems, network security is becoming a real challenge. An Intrusion Detection System (IDS) is designed to protect the availability, confidentiality and integrity of critical network information systems. Today's approach to network intrusion detection involves the use of rule-based expert systems to identify indications of known attacks or anomalies. However, these techniques are less successful in identifying today's attacks. Hackers are perpetually inventing new and previously unanticipated techniques to compromise information infrastructure. This paper proposes a dynamic way of detecting network intruders on time series data. The proposed approach consists of a two-step process. First, an efficient multi-user detection method is obtained, employing the recently introduced complexity-minimization approach as a generalization of standard ICA. Second, an unsupervised learning neural network architecture based on Kohonen's Self-Organizing Map is identified for potential functional clustering. These two steps, working together adaptively, provide a pseudo-real-time novelty detection attribute to supplement current intrusion detection statistical methodology.
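    As a rough illustration of the second step, a minimal one-dimensional Kohonen self-organizing map can be written in a few lines. This is only a sketch of the SOM update rule on made-up scalar features, not the paper's multi-detection system (which pairs the SOM with a complexity-minimization ICA front end).

```python
import math
import random

def train_som_1d(data, n_nodes=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Kohonen SOM for scalar features: move the best-matching
    unit (BMU) and its neighbors toward each sample, with a decaying
    learning rate and a shrinking neighborhood."""
    rng = random.Random(seed)
    nodes = [rng.uniform(min(data), max(data)) for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 0.5
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                nodes[i] += lr * h * (x - nodes[i])
    return sorted(nodes)

# Two well-separated clusters of made-up 'traffic features'; after
# training, some map nodes settle near each cluster center.
data = [0.10, 0.12, 0.09, 0.11] * 5 + [0.90, 0.88, 0.91, 0.92] * 5
nodes = train_som_1d(data)
```

    In a novelty-detection setting, samples far from every trained node would be flagged as potential intrusions.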

  17. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material into the environment. Operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to perform their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time-remaining display feature that informs operators of the time available to execute procedure steps and warns them if they reach the time limit. Such a feature would also increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The predicted action time for each step is acquired from similar accident cases using Support Vector Regression. The derived time is then processed and displayed on a CBP user interface.

  18. A seminested PCR assay for detection and typing of human papillomavirus based on E1 gene sequences.

    PubMed

    Cavalcante, Gustavo Henrique O; de Araújo, Josélio M G; Fernandes, José Veríssimo; Lanza, Daniel C F

    2018-05-01

    HPV infection is considered one of the leading causes of cervical cancer in the world. To date, more than 180 types of HPV have been described, and viral typing is critical for defining the prognosis of cancer. In this work, a seminested PCR that allows fast and inexpensive detection and typing of HPV is presented. The system is based on the amplification of a variable-length region within the viral gene E1, using three primers that potentially anneal in all HPV genomes. The amplicons produced in the first step can be identified by high-resolution electrophoresis or direct sequencing. The seminested step includes nine specific primers that can be used in multiplex or individual reactions to discriminate the main types of HPV by amplicon size differentiation using agarose electrophoresis, reducing the time spent and cost per analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the proposed technique can therefore potentially be applied in various scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    NASA Astrophysics Data System (ADS)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians from groups with different age compositions is investigated in a single-file experiment. The relations among step length, step width and stepping time are analyzed using a step measurement method based on the curvature of the trajectory. The relations of velocity versus step width, step length and stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data for developing microscopic models of pedestrian movement that consider stepping behavior.
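    The step measurement idea mentioned above relies on the curvature of the recorded trajectory. A minimal sketch of a discrete curvature estimate (reciprocal of the radius of the circle through three consecutive points) might look as follows; the actual method and thresholds used in the paper are not specified here.

```python
import math

def curvature(p0, p1, p2):
    """Discrete curvature at p1: reciprocal of the circumscribed-circle
    radius through three consecutive points, k = 4*A / (a*b*c), computed
    here as 2*|cross| / (a*b*c) since |cross| equals twice the area."""
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    cross2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                 - (p2[0] - p0[0]) * (p1[1] - p0[1]))  # = 2 * triangle area
    if a * b * c == 0.0:
        return 0.0
    return 2.0 * cross2 / (a * b * c)

# Sanity checks: a straight path has zero curvature; three points on a
# circle of radius 2 give curvature 1/2.
straight = curvature((0, 0), (1, 0), (2, 0))
on_circle = curvature((2, 0), (0, 2), (-2, 0))
```

    Applied along a pedestrian trajectory, curvature peaks at the lateral sway extremes can then be used to delimit individual steps.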

  1. Connecting today's climates to future climate analogs to facilitate movement of species under climate change.

    PubMed

    Littlefield, Caitlin E; McRae, Brad H; Michalak, Julia L; Lawler, Joshua J; Carroll, Carlos

    2017-12-01

    Increasing connectivity is an important strategy for facilitating species range shifts and maintaining biodiversity in the face of climate change. To date, however, few researchers have included future climate projections in efforts to prioritize areas for increasing connectivity. We identified key areas likely to facilitate climate-induced species' movement across western North America. Using historical climate data sets and future climate projections, we mapped potential species' movement routes that link current climate conditions to analogous climate conditions in the future (i.e., future climate analogs) with a novel moving-window analysis based on electrical circuit theory. In addition to tracing shifting climates, the approach accounted for landscape permeability and empirically derived species' dispersal capabilities. We compared connectivity maps generated with our climate-change-informed approach with maps of connectivity based solely on the degree of human modification of the landscape. Including future climate projections in connectivity models substantially shifted and constrained priority areas for movement to a smaller proportion of the landscape than when climate projections were not considered. Potential movement, measured as current flow, decreased in all ecoregions when climate projections were included, particularly when dispersal was limited, which made climate analogs inaccessible. Many areas emerged as important for connectivity only when climate change was modeled in 2 time steps rather than in a single time step. Our results illustrate that movement routes needed to track changing climatic conditions may differ from those that connect present-day landscapes. Incorporating future climate projections into connectivity modeling is an important step toward facilitating successful species movement and population persistence in a changing climate. © 2017 Society for Conservation Biology.

  2. A New Two-Step Approach for Hands-On Teaching of Gene Technology: Effects on Students' Activities During Experimentation in an Outreach Gene Technology Lab

    NASA Astrophysics Data System (ADS)

    Scharfenberg, Franz-Josef; Bogner, Franz X.

    2011-08-01

    Emphasis on improving higher level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach using a quasi-experimental design with the conventional one-step mode. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with well-fitting intra- and inter-observer scores with respect to reliability). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled as 'all-rounders' and as 'passive students', and two clusters specific to each approach: 'observers' as well as 'high-experimenters' were identified only within the one-step approach, whereas under the two-step conditions 'managers' and 'scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.

  3. Quantification of depositional changes and paleo-seismic activities from laminated sediments using outcrop data

    NASA Astrophysics Data System (ADS)

    Weidlich, O.; Bernecker, M.

    2004-04-01

    Measurements of laminations from marine and limnic sediments are commonly a time-consuming procedure. However, the resulting quantitative proxies are important for the interpretation of both climate change and paleo-seismic activity. Digital image analysis accelerates the generation and interpretation of large data sets from laminated sediments based on the contrasting grey values of dark and light laminae. Statistical transformation and correlation of the grey-value signals reflect high-frequency cycles due to changing mean laminae thicknesses, and thus provide data for monitoring climate change. Perturbations of the commonly continuous laminae (e.g., slumping structures, seismites, and tsunamites) record seismic activity and provide proxies for paleo-earthquake frequency. Using outcrop data from (i) the Pleistocene Lisan Formation of Jordan (Dead Sea Basin) and (ii) the Carboniferous-Permian Copacabana Formation of Bolivia (Lake Titicaca), we present a two-step approach to gain high-resolution time series for both purposes from unconsolidated and lithified outcrops. Step 1 concerns the construction of a continuous digital phototransect, and step 2 covers the creation of a grey-density curve from digital photos along a line transect using image analysis. The applied automated image analysis technique provides a continuous digital record of the studied sections and therefore serves as a useful tool for the evaluation of further proxy data. Analysing the obtained grey signal of the light and dark laminae of varves using phototransects, we discuss the potential and limitations of the proposed technique.
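    Step 2 ultimately yields a grey-value signal along the transect. As an illustrative sketch (not the authors' implementation), a one-dimensional grey-value series can be segmented into dark and light laminae by thresholding at the mean and measuring run lengths:

```python
def laminae_thicknesses(grey):
    """Segment a grey-value transect into alternating dark/light laminae by
    thresholding at the mean grey value; return run lengths in samples.
    Illustrative sketch, assuming the transect is already extracted."""
    threshold = sum(grey) / len(grey)
    runs = []
    current = grey[0] >= threshold
    count = 0
    for g in grey:
        bright = g >= threshold
        if bright == current:
            count += 1
        else:
            runs.append(count)
            current, count = bright, 1
    runs.append(count)
    return runs

# Four laminae of thicknesses 3, 5, 4 and 2 samples.
thicknesses = laminae_thicknesses([10] * 3 + [200] * 5 + [12] * 4 + [198] * 2)
```

    The resulting thickness series is the input for the statistical cycle analysis, and gaps or abrupt irregularities in it can flag the perturbation horizons discussed above.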

  4. Sequence-Based Prediction of RNA-Binding Residues in Proteins.

    PubMed

    Walia, Rasna R; El-Manzalawy, Yasser; Honavar, Vasant G; Dobbs, Drena

    2017-01-01

    Identifying individual residues in the interfaces of protein-RNA complexes is important for understanding the molecular determinants of protein-RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein-RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein-RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner.

  5. Sequence-Based Prediction of RNA-Binding Residues in Proteins

    PubMed Central

    Walia, Rasna R.; EL-Manzalawy, Yasser; Honavar, Vasant G.; Dobbs, Drena

    2017-01-01

    Identifying individual residues in the interfaces of protein–RNA complexes is important for understanding the molecular determinants of protein–RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein–RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein–RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner. PMID:27787829

  6. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Writing Task Using Hybrid BCI System.

    PubMed

    Gao, Qiang; Dou, Lixiang; Belkacem, Abdelkader Nasreddine; Chen, Chao

    2017-01-01

    A novel hybrid brain-computer interface (BCI) based on the electroencephalogram (EEG) signal, consisting of a motor imagery- (MI-) based online interactive brain-controlled switch, a "teeth clenching" state detector, and a steady-state visual evoked potential- (SSVEP-) based BCI, was proposed to provide multidimensional BCI control. The MI-based BCI was used as a single-pole double-throw brain switch (SPDTBS). By combining the SPDTBS with the 4-class SSVEP-based BCI, movement of a robotic arm was controlled in three-dimensional (3D) space. In addition, the muscle artifact (EMG) of the "teeth clenching" condition recorded in the EEG signal was detected and employed as an interrupter, which can re-initialize the state of the SPDTBS. A real-time writing task was implemented to verify the reliability of the proposed noninvasive hybrid EEG-EMG-BCI. Eight subjects participated in this study and succeeded in manipulating the robotic arm in 3D space to write English letters. The mean decoding accuracy of the writing task was 0.93 ± 0.03. Four subjects achieved the optimal criterion of writing the word "HI" with the minimum number of robotic arm movements (15 steps). The other subjects needed 2 to 4 additional steps to finish the whole process. These results suggest that the proposed hybrid noninvasive EEG-EMG-BCI is robust and efficient for real-time multidimensional robotic arm control.
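    The control flow described above can be caricatured as a tiny state machine: the MI event toggles the brain switch, SSVEP commands drive the arm only while the switch is on, and the teeth-clench EMG artifact acts as an interrupter. This is an illustrative toy, not the authors' signal-processing pipeline; the class and method names are made up.

```python
class HybridSwitch:
    """Toy state machine mirroring the described control flow (names and
    structure are invented for illustration)."""
    def __init__(self):
        self.on = False                 # SPDTBS starts OFF
    def motor_imagery(self):
        self.on = not self.on           # MI event toggles the brain switch
    def teeth_clench(self):
        self.on = False                 # EMG interrupter re-initializes it
    def ssvep(self, direction):
        # SSVEP commands reach the arm only while the switch is ON
        return direction if self.on else None

sw = HybridSwitch()
idle = sw.ssvep("left")     # None: switch is OFF
sw.motor_imagery()          # toggle ON
cmd = sw.ssvep("left")      # "left" is passed to the arm
sw.teeth_clench()           # interrupt resets the switch
```

    Separating the on/off gating from the directional decoding is what lets a binary MI classifier and a 4-class SSVEP classifier jointly cover a 3D workspace.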

  7. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Writing Task Using Hybrid BCI System

    PubMed Central

    Gao, Qiang

    2017-01-01

    A novel hybrid brain-computer interface (BCI) based on the electroencephalogram (EEG) signal, consisting of a motor imagery- (MI-) based online interactive brain-controlled switch, a “teeth clenching” state detector, and a steady-state visual evoked potential- (SSVEP-) based BCI, was proposed to provide multidimensional BCI control. The MI-based BCI was used as a single-pole double-throw brain switch (SPDTBS). By combining the SPDTBS with the 4-class SSVEP-based BCI, movement of a robotic arm was controlled in three-dimensional (3D) space. In addition, the muscle artifact (EMG) of the “teeth clenching” condition recorded in the EEG signal was detected and employed as an interrupter, which can re-initialize the state of the SPDTBS. A real-time writing task was implemented to verify the reliability of the proposed noninvasive hybrid EEG-EMG-BCI. Eight subjects participated in this study and succeeded in manipulating the robotic arm in 3D space to write English letters. The mean decoding accuracy of the writing task was 0.93 ± 0.03. Four subjects achieved the optimal criterion of writing the word “HI” with the minimum number of robotic arm movements (15 steps). The other subjects needed 2 to 4 additional steps to finish the whole process. These results suggest that the proposed hybrid noninvasive EEG-EMG-BCI is robust and efficient for real-time multidimensional robotic arm control. PMID:28660211

  8. Photovoltaic central station step and touch potential considerations in grounding system design

    NASA Technical Reports Server (NTRS)

    Engmann, G.

    1983-01-01

    The probability of hazardous step and touch potentials is an important consideration in central station grounding system design. Steam turbine generating station grounding system design is based on accepted industry practices and there is extensive in-service experience with these grounding systems. A photovoltaic (PV) central station is a relatively new concept and there is limited experience with PV station grounding systems. The operation and physical configuration of a PV central station is very different from a steam electric station. A PV station bears some similarity to a substation and the PV station step and touch potentials might be addressed as they are in substation design. However, the PV central station is a generating station and it is appropriate to examine the effect that the differences and similarities of the two types of generating stations have on step and touch potential considerations.
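    For context, substation grounding design of the kind referenced above typically compares computed step potentials against a tolerable limit that depends on surface-layer resistivity and fault-clearing time. The sketch below uses the familiar IEEE Std 80-style expression for a 50 kg body as background; the abstract itself gives no formulas, so treat the constants and example values as assumptions.

```python
import math

def tolerable_step_voltage(rho_s, c_s, t_s):
    """Tolerable step voltage (volts) for a 50 kg body, in the IEEE Std 80
    style form: (1000 + 6*Cs*rho_s) * 0.116 / sqrt(t_s).

    rho_s : surface-layer resistivity (ohm-m)
    c_s   : surface-layer derating factor (1.0 for uniform soil)
    t_s   : shock/fault-clearing duration (s)
    Background sketch only; constants and example values are assumptions,
    not taken from the abstract."""
    return (1000.0 + 6.0 * c_s * rho_s) * 0.116 / math.sqrt(t_s)

# Example: crushed-rock surface layer (rho_s = 3000 ohm-m, Cs = 0.75)
# and a 0.5 s fault-clearing time.
v_limit = tolerable_step_voltage(3000.0, 0.75, 0.5)
```

    The design question the abstract raises is whether such substation-style limits transfer directly to a PV central station, given its different operating and physical configuration.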

  9. Single-step electrodeposition of CIS thin films with the complexing agent triethanolamine

    NASA Astrophysics Data System (ADS)

    Chiu, Yu-Shuen; Hsieh, Mu-Tao; Chang, Chih-Min; Chen, Chun-Shuo; Whang, Thou-Jen

    2014-04-01

    Some difficulties have long been encountered in single-step electrodeposition, such as the optimization of electrolyte composition, deposition potential, deposition time, and pH value. Introducing ternary components into single-step electrodeposition is particularly challenging because of the different equilibrium potentials of the constituents. Complexing agents play an important role in single-step electrodeposition of CuInSe2 (CIS), since they bring the equilibrium potentials of the constituents closer to each other. In this work, single-step electrodeposition of CIS was enhanced by adding triethanolamine (TEA) to the deposition bath; the resulting CIS thin films showed improved polycrystalline cauliflower-like structures, as confirmed by SEM images and XRD patterns. The optimum solution composition for single-step electrodeposition of CIS was found to be 5 mM CuCl2, 22 mM InCl3, and 22 mM SeO2 at pH 1.5 with 0.1 M TEA. The structures, compositions, and morphologies of as-deposited and annealed films were investigated.

  10. Using cadence to study free-living ambulatory behaviour.

    PubMed

    Tudor-Locke, Catrine; Rowe, David A

    2012-05-01

    The health benefits of a physically active lifestyle across a person's lifespan have been established. If there is any single physical activity behaviour that we should measure well and promote effectively, it is ambulatory activity and, more specifically, walking. Since public health physical activity guidelines include statements related to intensity of activity, it follows that we need to measure and promote free-living patterns of ambulatory activity that are congruent with this intent. The purpose of this review article is to present and summarize the potential for using cadence (steps/minute) to represent such behavioural patterns of ambulatory activity in free-living. Cadence is one of the spatio-temporal parameters of gait or walking speed. It is typically assessed using short-distance walks in clinical research and practice, but free-living cadence can be captured with a number of commercially available accelerometers that possess time-stamping technology. This presents a unique opportunity to use the same metric to communicate both ambulatory performance (assessed under testing conditions) and behaviour (assessed in the real world). Ranges for normal walking cadence assessed under laboratory conditions are 96-138 steps/minute for women and 81-135 steps/minute for men across their lifespan. The correlation between mean cadence and intensity (assessed with indirect calorimetry and expressed as metabolic equivalents [METs]) based on five treadmill/overground walking studies, is r = 0.93 and 100 steps/minute is considered to be a reasonable heuristic value indicative of walking at least at absolutely-defined moderate intensity (i.e. minimally, 3 METs) in adults. The weighted mean cadence derived from eight studies that have observed pedestrian cadence under natural conditions was 115.2 steps/minute, demonstrating that achieving 100 steps/minute is realistic in specific settings that occur in real life. 
However, accelerometer data collected in a large, representative sample suggest that self-selected walking at a cadence equivalent to ≥100 steps/minute is a rare occurrence in free-living adults. Specifically, the National Health and Nutrition Examination Survey (NHANES) data show that US adults spent ≅4.8 hours/day in non-movement (i.e. zero cadence) during wearing time, ≅8.7 hours at 1-59 steps/minute, ≅16 minutes/day at cadences of 60-79 steps/minute, ≅8 minutes at 80-99 steps/minute, ≅5 minutes at 100-119 steps/minute, and ≅2 minutes at 120+ steps/minute. Cadence appears to be sensitive to change with intervention, and capitalizing on the natural tempo of music is an obvious means of targeting cadence. Cadence could potentially be used effectively in epidemiological study, intervention and behavioural research, dose-response studies, determinants studies and in prescription and practice. It is easily interpretable by researchers, clinicians, programme staff and the lay public, and therefore offers the potential to bridge science, practice and real life.
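    The cadence bands and the 100 steps/minute heuristic reported above translate directly into a simple classifier for minute-by-minute accelerometer output. A minimal sketch, with band edges taken from the NHANES summary in the text:

```python
def cadence_band(spm):
    """Bin one minute's cadence (steps/minute) into the bands used in the
    NHANES summary quoted above."""
    if spm <= 0:
        return "0 (non-movement)"
    if spm < 60:
        return "1-59"
    if spm < 80:
        return "60-79"
    if spm < 100:
        return "80-99"
    if spm < 120:
        return "100-119"
    return "120+"

def at_least_moderate(spm):
    """Heuristic from the review: >= 100 steps/minute indicates walking at
    minimally moderate intensity (about 3 METs) in adults."""
    return spm >= 100
```

    Applied to a day of time-stamped step counts, summing the minutes falling in each band reproduces summaries like the NHANES breakdown above.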

  11. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

    The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by imposing continuity of the density across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequencies of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.
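    The time-marching structure described above, an implicit step whose nonlinear system is solved by a few inner Newton iterations started from a time-linearized guess, can be illustrated on a scalar model ODE. This sketch is not the full potential solver; it only mirrors the outer/inner iteration pattern for backward Euler on du/dt = f(u).

```python
def implicit_step_newton(f, dfdu, u_n, dt, iters=3):
    """Advance one backward-Euler time level by solving
    R(u) = u - u_n - dt*f(u) = 0 with Newton iterations, started from an
    explicit (time-linearized) predictor. Scalar model problem only."""
    u = u_n + dt * f(u_n)                # initial guess
    for _ in range(iters):
        r = u - u_n - dt * f(u)          # residual
        rp = 1.0 - dt * dfdu(u)          # residual Jacobian dR/du
        u -= r / rp
    return u

# Model problem du/dt = -u^2, u(0) = 1; exact solution is 1/(1 + t).
u, dt = 1.0, 0.1
for _ in range(10):
    u = implicit_step_newton(lambda x: -x * x, lambda x: -2.0 * x, u, dt)
```

    Because the inner Newton loop converges the nonlinear residual at each level, time accuracy is preserved even with the comparatively large steps the abstract reports.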

  12. Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system

    NASA Astrophysics Data System (ADS)

    Freitas, Rodrigo; Frolov, Timofey; Asta, Mark

    2017-10-01

    Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave-number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
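    For reference, capillary-fluctuation analyses of this kind commonly extract the stiffness by inverting the equipartition relation ⟨|A(k)|²⟩ = k_B T / (L Γ̃ k²), where L is the step length and Γ̃ the step stiffness. The sketch below shows that inversion with made-up L and k values; the paper's exact normalization may differ.

```python
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def stiffness_from_spectrum(k, amp_sq, length, temperature):
    """Invert <|A(k)|^2> = kB*T / (L * stiffness * k^2) for the stiffness
    (standard capillary-fluctuation relation; the paper's normalization
    may differ). Units: eV, Angstrom, Kelvin."""
    return K_B * temperature / (length * k ** 2 * amp_sq)

# Round trip with a stiffness of the order reported above (37 meV/Angstrom
# near the Cu melting temperature); L = 200 A and k = 0.1 1/A are made up.
true_stiffness = 0.037
amp_sq = K_B * 1300.0 / (200.0 * 0.1 ** 2 * true_stiffness)
recovered = stiffness_from_spectrum(0.1, amp_sq, 200.0, 1300.0)
```

    In practice the stiffness is obtained from a fit of ⟨|A(k)|²⟩ over many k modes rather than from a single mode as here.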

  13. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparisons and validation demonstrate the accuracy and efficiency of this adaptive strategy.
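    A generic error-feedback step-size controller of the kind alluded to above can be sketched in a few lines. This is the standard elementary controller (grow the step when the local error estimate is below tolerance, shrink it when above), not the paper's specific algorithm, which monitors changes in key flow variables.

```python
def adapt_dt(dt, err, tol, order=1, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary error-feedback step-size controller: scale the next step
    by (tol/err)^(1/(order+1)), damped by a safety factor and clamped to
    avoid abrupt changes. Generic sketch, not the paper's controller."""
    if err == 0.0:
        return dt * fac_max
    fac = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, fac))

# A large error estimate shrinks the step; a small one grows it.
dt_shrunk = adapt_dt(0.1, err=1e-2, tol=1e-4)
dt_grown = adapt_dt(0.1, err=1e-6, tol=1e-4)
```

    The clamping factors are what let such a controller track fast-slow-fast transients like water hammer without oscillating between extreme step sizes.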

  14. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets.

    PubMed

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as one of multiple diagnostic test platforms for point-of-care (POC) testing because of their simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as far as other assays, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and are subject to inhibition arising from the technical differences of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. The proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10²-10⁵ copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices into a simple and affordable paper-based system with great potential for realizing a paper-based NAT system for POC testing.

  15. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets

    PubMed Central

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as one of multiple diagnostic test platforms for point-of-care (POC) testing because of their simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as far as other assays, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and are subject to inhibition arising from the technical differences of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. The proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10²-10⁵ copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices into a simple and affordable paper-based system with great potential for realizing a paper-based NAT system for POC testing. PMID:28740546

  16. Is peracetic acid suitable for the cleaning step of reprocessing flexible endoscopes?

    PubMed

    Kampf, Günter; Fliss, Patricia M; Martiny, Heike

    2014-09-16

    The bioburden (blood, protein, pathogens and biofilm) on flexible endoscopes after use is often high and its removal is essential to allow effective disinfection, especially in the case of peracetic acid-based disinfectants, which are easily inactivated by organic material. Cleaning processes using conventional cleaners remove a variable but often sufficient amount of the bioburden. Some formulations based on peracetic acid are recommended by manufacturers for the cleaning step. We performed a systematic literature search and reviewed the available evidence to clarify the suitability of peracetic acid-based formulations for cleaning flexible endoscopes. A total of 243 studies were evaluated. No studies have yet demonstrated that peracetic acid-based cleaners are as effective as conventional cleaners. Some peracetic acid-based formulations have demonstrated some biofilm-cleaning effects and no biofilm-fixation potential, while others have a limited cleaning effect and a clear biofilm-fixation potential. All published data demonstrated a limited blood cleaning effect and a substantial blood and nerve tissue fixation potential of peracetic acid. No evidence-based guidelines on reprocessing flexible endoscopes currently recommend using cleaners containing peracetic acid, but some guidelines clearly recommend not using them because of their fixation potential. Evidence from some outbreaks, especially those involving highly multidrug-resistant gram-negative pathogens, indicated that disinfection using peracetic acid may be insufficient if the preceding cleaning step is not performed adequately. Based on this review we conclude that peracetic acid-based formulations should not be used for cleaning flexible endoscopes.

  17. Is peracetic acid suitable for the cleaning step of reprocessing flexible endoscopes?

    PubMed Central

    Kampf, Günter; Fliss, Patricia M; Martiny, Heike

    2014-01-01

    The bioburden (blood, protein, pathogens and biofilm) on flexible endoscopes after use is often high and its removal is essential to allow effective disinfection, especially in the case of peracetic acid-based disinfectants, which are easily inactivated by organic material. Cleaning processes using conventional cleaners remove a variable but often sufficient amount of the bioburden. Some formulations based on peracetic acid are recommended by manufacturers for the cleaning step. We performed a systematic literature search and reviewed the available evidence to clarify the suitability of peracetic acid-based formulations for cleaning flexible endoscopes. A total of 243 studies were evaluated. No studies have yet demonstrated that peracetic acid-based cleaners are as effective as conventional cleaners. Some peracetic acid-based formulations have demonstrated some biofilm-cleaning effects and no biofilm-fixation potential, while others have a limited cleaning effect and a clear biofilm-fixation potential. All published data demonstrated a limited blood cleaning effect and a substantial blood and nerve tissue fixation potential of peracetic acid. No evidence-based guidelines on reprocessing flexible endoscopes currently recommend using cleaners containing peracetic acid, but some guidelines clearly recommend not using them because of their fixation potential. Evidence from some outbreaks, especially those involving highly multidrug-resistant gram-negative pathogens, indicated that disinfection using peracetic acid may be insufficient if the preceding cleaning step is not performed adequately. Based on this review we conclude that peracetic acid-based formulations should not be used for cleaning flexible endoscopes. PMID:25228941

  18. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  19. 7 CFR 1463.106 - Base quota levels for eligible tobacco producers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...

  20. 7 CFR 1463.106 - Base quota levels for eligible tobacco producers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...

  1. 7 CFR 1463.106 - Base quota levels for eligible tobacco producers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...

  2. 7 CFR 1463.106 - Base quota levels for eligible tobacco producers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...

  3. A whole-body physiologically based pharmacokinetic (WB-PBPK) model of ciprofloxacin: a step towards predicting bacterial killing at sites of infection.

    PubMed

    Sadiq, Muhammad W; Nielsen, Elisabet I; Khachman, Dalia; Conil, Jean-Marie; Georges, Bernard; Houin, Georges; Laffont, Celine M; Karlsson, Mats O; Friberg, Lena E

    2017-04-01

    The purpose of this study was to develop a whole-body physiologically based pharmacokinetic (WB-PBPK) model for ciprofloxacin for ICU patients, based only on plasma concentration data. In a second step, tissue and organ concentration-time profiles in patients were predicted using the developed model. The WB-PBPK model was built using a non-linear mixed effects approach based on data from 102 adult intensive care unit patients. Tissue to plasma distribution coefficients (Kp) were available from the literature and used as informative priors. The developed WB-PBPK model successfully characterized both the typical trends and variability of the available ciprofloxacin plasma concentration data. The WB-PBPK model was thereafter combined with a pharmacokinetic-pharmacodynamic (PKPD) model, developed based on in vitro time-kill data of ciprofloxacin and Escherichia coli, to illustrate the potential of this type of approach to predict the time-course of bacterial killing at different sites of infection. The predicted unbound concentration-time profile in extracellular tissue was driving the bacterial killing in the PKPD model, and the rate and extent of take-over of mutant bacteria in different tissues were explored. The bacterial killing was predicted to be most efficient in lung and kidney, which corresponds well to ciprofloxacin's indications of pneumonia and urinary tract infections. Furthermore, a function based on available information on bacterial killing by the immune system in vivo was incorporated. This work demonstrates the development and application of a WB-PBPK-PD model to compare killing of bacteria with different antibiotic susceptibility, of value for drug development and the optimal use of antibiotics.

  4. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
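
    The exponential update at the core of first-order ETD can be illustrated with a short sketch (a minimal illustration under made-up parameter values, not the authors' code): for a locally linearized equation dy/dt = a*y + b, with a and b frozen over a step of size h, the ETD1 update advances y exactly.

```python
import math

def etd1_step(y, a, b, h):
    """One first-order ETD (exponential Euler) step for dy/dt = a*y + b,
    with a and b treated as constant over the step; the update is exact
    when they really are constant."""
    e = math.exp(a * h)
    return y * e + (b / a) * (e - 1.0)

# Illustrative gating variable relaxing as dx/dt = (x_inf - x)/tau,
# i.e. a = -1/tau and b = x_inf/tau (x_inf and tau are made-up values).
x, x_inf, tau, h = 0.0, 1.0, 5.0, 1.0
for _ in range(25):
    x = etd1_step(x, -1.0 / tau, x_inf / tau, h)
# x relaxes toward x_inf without the instability forward Euler exhibits
# once h becomes comparable to tau.
```

    For a Hodgkin-Huxley-like gating variable the identification a = -1/tau, b = x_inf/tau makes the update unconditionally stable, which is why ETD tolerates the underresolved rising phase described above.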

  5. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes are extracted from the multi-degree-of-freedom (MDOF) non-parametric system representation in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis of an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.

  6. Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.

    2015-01-01

    Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…

  7. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    PubMed

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For the two-step transesterification biodiesel production made from the sunflower oil, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors in the stage of the two-step reaction are determined quantitatively. In consideration of the transesterification intermediate product, both the traditional distillation separation process and the improved separation process of the two-step reaction product are investigated in detail by means of the rigorous process simulation. In comparison with the traditional distillation process, the improved separation process of the two-step reaction product has distinct advantage in the energy duty and equipment requirement due to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time stepping is a practical solution in a cellular automata environment. PMID:15222901
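
    The per-cell adaptive time-stepping idea can be sketched roughly as follows (generic step-doubling error control on a toy decay ODE; the paper's language-based system and its actual controller are not reproduced here):

```python
import math

def euler_step(y, k, h):
    # one forward-Euler substep for the toy decay ODE dy/dt = -k*y
    return y + h * (-k * y)

def adaptive_step(y, k, h, tol=1e-6):
    """Step-doubling error control: compare one step of size h against two
    steps of size h/2; halve h until the two answers agree to within tol.
    Returns (new_y, step_actually_taken, suggested_next_step)."""
    while True:
        y1 = euler_step(y, k, h)
        y2 = euler_step(euler_step(y, k, h / 2), k, h / 2)
        if abs(y2 - y1) <= tol:
            return y2, h, min(2 * h, 1.0)  # accept and allow growth
        h /= 2                             # reject and retry

# Two "cells" with very different rate constants advance on their own
# clocks -- the asynchronous part: each keeps a private step size.
cells = [{"y": 1.0, "k": 10.0, "t": 0.0, "h": 0.1},
         {"y": 1.0, "k": 0.1, "t": 0.0, "h": 0.1}]
t_end = 1.0
for c in cells:
    while c["t"] < t_end:
        h = min(c["h"], t_end - c["t"])
        c["y"], used, c["h"] = adaptive_step(c["y"], c["k"], h)
        c["t"] += used
# The stiff cell (k = 10) is forced to take many small steps, while the
# slow cell (k = 0.1) takes far fewer -- the source of the speedup.
```

    Letting each cell carry its own step size is what distinguishes this from a globally adaptive scheme, where the stiffest cell would force the small step on everyone.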

  9. Curved-line search algorithm for ab initio atomic structure relaxation

    NASA Astrophysics Data System (ADS)

    Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang

    2017-09-01

    Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potentials. An atomic relaxation method based on on-the-flight force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.

  10. Defense Science And Technology: Further DOD And DOE Actions Needed to Provide Timely Conference Decisions and Analyze Risks from Changes in Participation

    DTIC Science & Technology

    2015-03-01

    Conference Planning and Food and Beverage Costs, Audit Report 11-43 (October 2011). White House, Executive Order 13589, Promoting Efficient Spending, 76...and conducted a content analysis of these interviews. Based on this analysis, we enumerated challenges and mitigation strategies as well as benefits...of officials, asking respondents to rate the effect of each potential mitigation strategy and prioritize steps for implementing the strategies. We

  11. Proposed phase 2/ step 2 in-vitro test on basis of EN 14561 for standardised testing of the wound antiseptics PVP-iodine, chlorhexidine digluconate, polihexanide and octenidine dihydrochloride.

    PubMed

    Schedler, Kathrin; Assadian, Ojan; Brautferger, Uta; Müller, Gerald; Koburger, Torsten; Classen, Simon; Kramer, Axel

    2017-02-13

    Currently, there is no agreed standard for exploring the antimicrobial activity of wound antiseptics in a phase 2/step 2 test protocol. In the present study, a standardised in-vitro test is proposed which allows potential antiseptics to be tested under a more realistic simulation of the conditions found in wounds than a suspension test. Furthermore, factors potentially influencing test results, such as the type of material used as test carrier or various compositions of the organic soil challenge, were investigated in detail. The proposed phase 2/step 2 test method, modified on the basis of EN 14561 by drying the microbial test suspension on a metal carrier for 1 h, overlaying the test wound antiseptic, washing off, neutralization, and dispersion in serial dilutions at the end of the required exposure time, yielded reproducible, consistent test results. The difference between the rapid onset of the antiseptic effect of PVP-I and the delayed onset, especially of polihexanide, was apparent. Among surface-active antimicrobial compounds, octenidine was more effective than chlorhexidine digluconate and polihexanide, with some differences depending on the test organisms. However, octenidine and PVP-I were approximately equivalent in efficacy and microbial spectrum, while polihexanide required longer exposure times or higher concentrations for comparable antimicrobial efficacy. Overall, this method allowed testing and comparing different liquid- and gel-based antimicrobial compounds in a standardised setting.

  12. A facile one-step approach for the fabrication of polypyrrole nanowire/carbon fiber hybrid electrodes for flexible high performance solid-state supercapacitors

    NASA Astrophysics Data System (ADS)

    Huang, Sanqing; Han, Yichuan; Lyu, Siwei; Lin, Wenzhen; Chen, Peishan; Fang, Shaoli

    2017-10-01

    Wearable electronics are in high demand, requiring that all the components are flexible. Here we report a facile approach for the fabrication of flexible polypyrrole nanowire (NPPy)/carbon fiber (CF) hybrid electrodes with high electrochemical activity using a low-cost, one-step electrodeposition method. The structure of the NPPy/CF electrodes can be easily controlled by the applied electrical potential and electrodeposition time. Our NPPy/CF-based electrodes showed high flexibility, conductivity, and stability, making them ideal for flexible all-solid-state fiber supercapacitors. The resulting NPPy/CF-based supercapacitors provided a high specific capacitance of 148.4 F g-1 at 0.128 A g-1, which is much higher than for supercapacitors based on polypyrrole film/CF (38.3 F g-1) and pure CF (0.6 F g-1) under the same conditions. The NPPy/CF-based supercapacitors also showed high bending and cycling stability, retaining 84% of the initial capacitance after 500 bending cycles, and 91% of the initial capacitance after 5000 charge/discharge cycles.

  13. A facile one-step approach for the fabrication of polypyrrole nanowire/carbon fiber hybrid electrodes for flexible high performance solid-state supercapacitors.

    PubMed

    Huang, Sanqing; Han, Yichuan; Lyu, Siwei; Lin, Wenzhen; Chen, Peishan; Fang, Shaoli

    2017-10-27

    Wearable electronics are in high demand, requiring that all the components are flexible. Here we report a facile approach for the fabrication of flexible polypyrrole nanowire (NPPy)/carbon fiber (CF) hybrid electrodes with high electrochemical activity using a low-cost, one-step electrodeposition method. The structure of the NPPy/CF electrodes can be easily controlled by the applied electrical potential and electrodeposition time. Our NPPy/CF-based electrodes showed high flexibility, conductivity, and stability, making them ideal for flexible all-solid-state fiber supercapacitors. The resulting NPPy/CF-based supercapacitors provided a high specific capacitance of 148.4 F g-1 at 0.128 A g-1, which is much higher than for supercapacitors based on polypyrrole film/CF (38.3 F g-1) and pure CF (0.6 F g-1) under the same conditions. The NPPy/CF-based supercapacitors also showed high bending and cycling stability, retaining 84% of the initial capacitance after 500 bending cycles, and 91% of the initial capacitance after 5000 charge/discharge cycles.

  14. Accessing Computers in Education, One Byte at a Time.

    ERIC Educational Resources Information Center

    Manzo, Anthony V.

    This paper discusses computers and their potential role in education. The term "byte" is first explained, to emphasize the idea that the use of computers should be implemented one "byte" or step at a time. The reasons for this approach are then outlined. Potential applications in computer usage in educational administration are suggested, computer…

  15. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset covering four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing classification time and improving classification performance. Because of the imbalanced class distribution, in the second phase different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes; in the second step, a data cleaning method removes the class overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples with respect to all toxicity effects in the imbalanced datasets.
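
    One of the baseline samplers mentioned above, Random Over-Sampling, can be sketched in a few lines (a generic illustration on toy data, not the ITS method or the authors' implementation):

```python
import random

def random_oversample(X, y, minority_label, rng=random):
    """Naive random over-sampling: duplicate minority examples (drawn with
    replacement) until the two classes are the same size."""
    minority = [(x, l) for x, l in zip(X, y) if l == minority_label]
    majority = [(x, l) for x, l in zip(X, y) if l != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    data = minority + majority + extra
    rng.shuffle(data)
    Xb = [x for x, _ in data]
    yb = [l for _, l in data]
    return Xb, yb

# Toy imbalanced dataset: 2 "toxic" (1) vs 8 "non-toxic" (0) samples.
X = list(range(10))
y = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
random.seed(42)
Xb, yb = random_oversample(X, y, minority_label=1)
# yb now contains 8 toxic and 8 non-toxic labels.
```

    ITS, by contrast, adjusts the class distribution iteratively and then cleans the overlap such duplication introduces, rather than balancing in one shot.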

  16. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  17. A global reaction route mapping-based kinetic Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Mitchell, Izaac; Irle, Stephan; Page, Alister J.

    2016-07-01

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.

  18. A global reaction route mapping-based kinetic Monte Carlo algorithm.

    PubMed

    Mitchell, Izaac; Irle, Stephan; Page, Alister J

    2016-07-14

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
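
    The standard KMC selection and first-order time propagation that the GRRM-KMC loop relies on can be sketched as follows (the rate constants are hypothetical stand-ins for harmonic-TST values; the GRRM surface search itself is not shown):

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free KMC move: choose pathway i with probability
    rates[i]/sum(rates), then advance time by an exponentially distributed
    increment dt = -ln(u)/sum(rates), i.e. first-order kinetics."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1  # guard against float round-off at the end
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # 1-u lies in (0, 1]
    return chosen, dt

# Three escape pathways from the current minimum, with hypothetical
# rate constants (s^-1):
rates = [1.0e3, 2.0e3, 7.0e3]
random.seed(0)
counts = [0, 0, 0]
t = 0.0
for _ in range(10000):
    i, dt = kmc_step(rates)
    counts[i] += 1
    t += dt
# Pathways are picked in proportion to their rates, and the mean waiting
# time approaches 1/sum(rates) = 1e-4 s.
```

    In the GRRM-KMC setting the `rates` list would be filled by harmonic transition state theory applied to the transition states found in the one-step GRRM search around the current minimum.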

  19. Learning to predict chemical reactions.

    PubMed

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal ( http://cdb.ics.uci.edu) under the Toolkits section.

  20. Solution of the Average-Passage Equations for the Incompressible Flow through Multiple-Blade-Row Turbomachinery

    DTIC Science & Technology

    1994-02-01

    numerical treatment. An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate...Discretization 16 3.1 Cell-Centered Finite-Volume Discretization in Space 16 3.2 Artificial Dissipation 17 3.3 Time Integration 21 3.4 Convergence

  1. Physical Activity Patterns and Sedentary Behavior in Older Women With Urinary Incontinence: an Accelerometer-based Study.

    PubMed

    Chu, Christine M; Khanijow, Kavita D; Schmitz, Kathryn H; Newman, Diane K; Arya, Lily A; Harvie, Heidi S

    2018-01-10

    Objective physical activity data for women with urinary incontinence are lacking. We investigated the relationship between physical activity, sedentary behavior, and the severity of urinary symptoms in older community-dwelling women with urinary incontinence using accelerometers. This is a secondary analysis of a study that measured physical activity (step count, moderate-to-vigorous physical activity time) and sedentary behavior (percentage of sedentary time, number of sedentary bouts per day) using a triaxial accelerometer in older community-dwelling adult women not actively seeking treatment for their urinary symptoms. The relationship between urinary symptoms and physical activity variables was measured using linear regression. Our cohort of 35 community-dwelling women (median age, 71 years) demonstrated low physical activity (median daily step count, 2168; range, 687-5205) and high sedentary behavior (median percentage of sedentary time, 74%; range, 54%-89%). Low step count was significantly associated with nocturia (P = 0.02). Shorter duration of moderate-to-vigorous physical activity time was significantly associated with nocturia (P = 0.001), nocturnal enuresis (P = 0.04), and greater use of incontinence products (P = 0.04). Greater percentage of time spent in sedentary behavior was also significantly associated with nocturia (P = 0.016). Low levels of physical activity are associated with greater nocturia and nocturnal enuresis. Sedentary behavior is a new construct that may be associated with lower urinary tract symptoms. Physical activity and sedentary behavior represent potential new targets for treating nocturnal urinary tract symptoms.

  2. Two-Step Formal Advertisement: An Examination.

    DTIC Science & Technology

    1976-10-01

    The purpose of this report is to examine the potential application of the Two-Step Formal Advertisement method of procurement. Emphasis is placed on...Step formal advertising is a method of procurement designed to take advantage of negotiation flexibility and at the same time obtain the benefits of...formal advertising. It is used where the specifications are not sufficiently definite or may be too restrictive to permit full and free competition

  3. Highly Sensitive Bacteriophage-Based Detection of Brucella abortus in Mixed Culture and Spiked Blood

    PubMed Central

    Sergueev, Kirill V.; Filippov, Andrey A.; Nikolich, Mikeljon P.

    2017-01-01

    For decades, bacteriophages (phages) have been used for Brucella species identification in the diagnosis and epidemiology of brucellosis. Traditional Brucella phage typing is a multi-day procedure including the isolation of a pure culture, a step that can take up to three weeks. In this study, we focused on the use of brucellaphages for sensitive detection of the pathogen in clinical and other complex samples, and developed an indirect method of Brucella detection using real-time quantitative PCR monitoring of brucellaphage DNA amplification via replication on live Brucella cells. This assay allowed the detection of single bacteria (down to 1 colony-forming unit per milliliter) within 72 h without DNA extraction and purification steps. The technique was equally efficient with Brucella abortus pure culture and with mixed cultures of B. abortus and α-proteobacterial near neighbors that can be misidentified as Brucella spp., Ochrobactrum anthropi and Afipia felis. The addition of a simple short sample preparation step enabled the indirect phage-based detection of B. abortus in spiked blood, with the same high sensitivity. This indirect phage-based detection assay enables the rapid and sensitive detection of live B. abortus in mixed cultures and in blood samples, and can potentially be applied for detection in other clinical samples and other complex sample types. PMID:28604602

  4. Alpha neurofeedback training improves SSVEP-based BCI performance

    NASA Astrophysics Data System (ADS)

    Wan, Feng; Nuno da Cruz, Janir; Nan, Wenya; Wong, Chi Man; Vai, Mang I.; Rosa, Agostinho

    2016-06-01

    Objective. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can provide relatively easy, reliable and high speed communication. However, the performance is still not satisfactory, especially in some users who are not able to generate strong enough SSVEP signals. This work aims to strengthen a user’s SSVEP by alpha down-regulating neurofeedback training (NFT) and consequently improve the performance of the user in using SSVEP-based BCIs. Approach. An experiment with two steps was designed and conducted. The first step was to investigate the relationship between the resting alpha activity and the SSVEP-based BCI performance, in order to determine the training parameter for the NFT. Then in the second step, half of the subjects with ‘low’ performance (i.e. BCI classification accuracy <80%) were randomly assigned to a NFT group to perform a real-time NFT, and the other half to a non-NFT control group for comparison. Main results. The first step revealed a significant negative correlation between the BCI performance and the individual alpha band (IAB) amplitudes in the eyes-open resting condition in a total of 33 subjects. In the second step, it was found that during the IAB down-regulating NFT, on average the subjects were able to successfully decrease their IAB amplitude over training sessions. More importantly, the NFT group showed an average increase of 16.5% in the SSVEP signal SNR (signal-to-noise ratio) and an average increase of 20.3% in the BCI classification accuracy, which was significant compared to the non-NFT control group. Significance. These findings indicate that the alpha down-regulating NFT can be used to improve the SSVEP signal quality and the subjects’ performance in using SSVEP-based BCIs. It could be helpful for SSVEP-related studies and would contribute to more effective SSVEP-based BCI applications.
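
    The SSVEP SNR figure quoted above is typically computed from the power spectrum at the stimulation frequency relative to neighboring bins. A minimal sketch on a synthetic signal (the bin-neighborhood definition used here is one common choice, not necessarily the paper's):

```python
import numpy as np

def ssvep_snr(signal, fs, f_target, n_neighbors=4):
    """Crude SSVEP SNR: power in the target-frequency bin divided by the
    mean power of the neighboring bins."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))
    neighbors = np.r_[spectrum[k - n_neighbors:k], spectrum[k + 1:k + 1 + n_neighbors]]
    return spectrum[k] / neighbors.mean()

# Synthetic 12 Hz SSVEP buried in noise: the SNR at 12 Hz is large,
# reflecting a strong evoked response.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_snr(sig, fs, 12.0))
```

    In a BCI, this per-frequency SNR (or a classifier built on it) is what the neurofeedback training aims to raise.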

  5. Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability

    NASA Astrophysics Data System (ADS)

    Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib

    A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual knowledge of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising building blocks for a large-scale quantum computer, because of their longer coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. Using atomistic tight-binding studies, we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that both the sign and magnitude of the Stark shift change depending on the location of a monoatomic step at the interface with respect to the dot center. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift depending on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability was recently observed experimentally in three qubits fabricated at a Si/SiO2 interface, which validates the issues discussed.

  6. Rapid Design of Knowledge-Based Scoring Potentials for Enrichment of Near-Native Geometries in Protein-Protein Docking.

    PubMed

    Sasse, Alexander; de Vries, Sjoerd J; Schindler, Christina E M; de Beauchêne, Isaure Chauvot; Zacharias, Martin

    2017-01-01

    Protein-protein docking protocols aim to predict the structures of protein-protein complexes based on the structure of individual partners. Docking protocols usually include several steps of sampling, clustering, refinement and re-scoring. The scoring step is one of the bottlenecks in the performance of many state-of-the-art protocols. The performance of scoring functions depends on the quality of the generated structures and its coupling to the sampling algorithm. A tool kit, GRADSCOPT (GRid Accelerated Directly SCoring OPTimizing), was designed to allow rapid development and optimization of different knowledge-based scoring potentials for specific objectives in protein-protein docking. Different atomistic and coarse-grained potentials can be created by a grid-accelerated directly scoring dependent Monte-Carlo annealing or by a linear regression optimization. We demonstrate that the scoring functions generated by our approach are similar to or even outperform state-of-the-art scoring functions for predicting near-native solutions. Of additional importance, we find that potentials specifically trained to identify the native bound complex perform rather poorly on identifying acceptable or medium quality (near-native) solutions. In contrast, atomistic long-range contact potentials can increase the average fraction of near-native poses by up to a factor of 2.5 in the best-scored 1% of decoys (compared to existing scoring), emphasizing the need for specific docking potentials for different steps in the docking protocol.
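
    As a toy illustration of a knowledge-based contact potential of the general kind discussed above (a single atom type, a step-function contact definition, and an arbitrary weight standing in for trained type-pair parameters):

```python
import numpy as np

def contact_score(receptor_xyz, ligand_xyz, cutoff=8.0, weight=-1.0):
    """Score a docking pose by counting receptor-ligand atom pairs within
    a distance cutoff; lower (more negative) scores are better."""
    d = np.linalg.norm(receptor_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    return weight * float(np.sum(d < cutoff))

rng = np.random.default_rng(0)
receptor = rng.uniform(0.0, 20.0, size=(50, 3))
near_pose = receptor + np.array([4.0, 0.0, 0.0])    # overlapping interface
far_pose = receptor + np.array([100.0, 0.0, 0.0])   # no contacts at all

print(contact_score(receptor, near_pose) < contact_score(receptor, far_pose))  # True
```

    Real potentials of this family replace the single weight with a matrix of distance-binned, atom-type-pair terms fitted to known complexes, which is what the tool kit above optimizes.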

  7. Polyhydroxyalkanoate Production on Waste Water Treatment Plants: Process Scheme, Operating Conditions and Potential Analysis for German and European Municipal Waste Water Treatment Plants

    PubMed Central

    Pittmann, Timo; Steinmetz, Heidrun

    2017-01-01

    This work describes the production of polyhydroxyalkanoates (PHA) as a side stream process on a municipal waste water treatment plant (WWTP) and a subsequent analysis of the production potential in Germany and the European Union (EU). To this end, tests with different types of sludge from a WWTP were conducted to investigate their volatile fatty acid (VFA) production potential. Afterwards, primary sludge was used as substrate to test a series of operating conditions (temperature, pH, retention time (RT) and withdrawal (WD)) in order to find suitable settings for a high and stable VFA production. In a second step, various tests regarding a high PHA production and stable PHA composition were conducted to determine the influence of substrate concentration, temperature, pH and cycle time of an installed feast/famine-regime. Experiments with a semi-continuous reactor operation showed that a short RT of 4 days and a small WD of 25% at pH = 6 and around 30 °C are preferable for a high VFA production rate (PR) of 1913 mgVFA/(L×d) and a stable VFA composition. A high PHA production of up to 28.4% of cell dry weight (CDW) was reached at lower substrate concentration, 20 °C, neutral pH-value and a 24 h cycle time. In a final step, a potential analysis based on these results and detailed data from German waste water treatment plants showed that the theoretically possible production of biopolymers in Germany amounts to more than 19% of the 2016 worldwide biopolymer production. In addition, a profound estimation regarding the EU showed that in theory about 120% of the worldwide biopolymer production (in 2016) could be produced on European waste water treatment plants. PMID:28952533

  8. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As a result of the advancing TCP/IP based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria, which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
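
    The leap-frog idea with automatic sub-cycling can be sketched with a toy driver loop; the solver calls, thresholds, and the reactivity function below are illustrative assumptions, not the paper's criteria:

```python
# Toy sketch: the thermal-hydraulic solver marches with a large step dt and
# the neutronics solver is called only every `leap` steps, unless the
# reactivity jump since the last coupled call exceeds `threshold`, in which
# case the step is subdivided into `subcycles` smaller coupled steps.

def run_coupled(t_end, dt, reactivity, leap=5, threshold=0.1, subcycles=4):
    t, last_rho, log, step = 0.0, reactivity(0.0), [], 0
    while t < t_end:
        rho = reactivity(t)
        if abs(rho - last_rho) > threshold:      # sudden reactivity change
            for _ in range(subcycles):           # sub-cycle with coupled calls
                t += dt / subcycles
                log.append(("coupled", t))
            last_rho = reactivity(t)
        else:
            t += dt
            if step % leap == 0:                 # periodic neutronics update
                log.append(("coupled", t))
                last_rho = reactivity(t)
            else:
                log.append(("th_only", t))       # thermal-hydraulics only
        step += 1
    return log

# A step reactivity insertion at t = 0.9 triggers the sub-cycling branch.
log = run_coupled(2.0, 0.2, lambda t: 0.0 if t < 0.9 else 0.5)
```

    The point of the criteria developed in the paper is to choose `leap` and the trigger condition automatically so that accuracy is preserved without running the expensive coupled call every step.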

  9. Study on Potential Changes in Geological and Disposal Environment Caused by 'Natural Phenomena' on a HLW Disposal System

    NASA Astrophysics Data System (ADS)

    Kawamura, M.; Umeda, K.; Ohi, T.; Ishimaru, T.; Niizato, T.; Yasue, K.; Makino, H.

    2007-12-01

    We have developed a formal evaluation method to assess the potential impact of natural phenomena (earthquakes and faulting; volcanism; uplift, subsidence, denudation and sedimentation; climatic and sea-level changes) on a High Level Radioactive Waste (HLW) Disposal System. In 2000, we had developed perturbation scenarios in a generic and conservative sense and illustrated the potential impact on a HLW disposal system. As a result of this development, two points were highlighted for consideration in subsequent work: improving the scenarios with respect to reality, transparency, traceability and consistency, and avoiding extreme conservatism. We have thus developed a new procedure for describing such perturbation scenarios based on further studies of the characteristics of these natural perturbation phenomena in Japan. The approach to describing the perturbation scenarios is developed in five steps: Step 1: Description of potential processes of phenomena and their impacts on the geological environment. Step 2: Characterization of potential changes of the geological environment in terms of T-H-M-C (Thermal - Hydrological - Mechanical - Chemical) processes. The focus is on specific T-H-M-C parameters that influence geological barrier performance, utilizing the input from Step 1. Step 3: Classification of potential influences, based on similarity of T-H-M-C perturbations. This leads to development of perturbation scenarios to serve as a basis for consequence analysis. Step 4: Establishing models and parameters for performance assessment. Step 5: Calculation and assessment. This study focuses on identifying key T-H-M-C processes associated with perturbations at Step 2. This framework has two advantages. The first is assuring maintenance of traceability during the scenario construction process, facilitating the production and structuring of suitable records.
The second is providing effective elicitation and organization of information from a wide range of earth-science investigations within a performance assessment context. In this framework, scenario development proceeds in a stepwise manner, to ensure clear identification of the impact of processes associated with these phenomena on a HLW disposal system. Output is organized to create credible scenarios with the required transparency, consistency, traceability and adequate conservatism. In this presentation, the potential impact of natural phenomena from the viewpoint of performance assessment for HLW disposal will be discussed and modeled using this approach.

  10. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    PubMed Central

    Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.

    2014-01-01

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. 
Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
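
    The eigen-decomposition step described above (eigenvectors for space, eigen-coefficients for time, and a new DVF from their product) can be sketched on synthetic data; the projection below stands in for the paper's cost-function optimization against the EPID image:

```python
import numpy as np

# Synthetic DVFs: one dominant spatial pattern modulated over breathing phases.
rng = np.random.default_rng(1)
n_voxels, n_phases = 300, 10
phases = np.sin(np.linspace(0.0, 2.0 * np.pi, n_phases))
pattern = rng.standard_normal(n_voxels)
dvfs = np.outer(pattern, phases) + 0.01 * rng.standard_normal((n_voxels, n_phases))

# PCA via SVD of the mean-centered DVF matrix: columns of U are the spatial
# eigenvectors; projections onto them are the eigen-coefficients.
mean_dvf = dvfs.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
eigenvectors = U[:, :2]                      # dominant spatial modes

# "Measured" DVF (stand-in for the state inferred from an EPID image):
# update the eigen-coefficients and rebuild the DVF from the modes.
measured = mean_dvf[:, 0] + 0.7 * pattern
coeffs = eigenvectors.T @ (measured - mean_dvf[:, 0])
reconstructed = mean_dvf[:, 0] + eigenvectors @ coeffs

err = np.linalg.norm(reconstructed - measured) / np.linalg.norm(measured)
```

    In the actual method the coefficients are found by optimizing agreement between a digitally reconstructed radiograph and the projection image rather than by direct projection, but the reconstruction step is the same linear combination of eigenvectors.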

  11. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. 
Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient.

  12. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    PubMed

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  13. Uranium phase diagram from first principles

    NASA Astrophysics Data System (ADS)

    Yanilkin, Alexey; Kruglov, Ivan; Migdal, Kirill; Oganov, Artem; Pokatashkin, Pavel; Sergeev, Oleg

    2017-06-01

    This work investigates the uranium phase diagram up to a pressure of 1 TPa and a temperature of 15 kK based on density functional theory. First, pseudopotential and full-potential calculations are compared for the different uranium phases. In the second step, the phase diagram at zero temperature is investigated by means of the program USPEX and pseudopotential calculations. Stable and metastable structures with close energies are selected. In order to obtain the phase diagram at finite temperatures, a preliminary selection of stable phases is made by free energy calculations based on the small-displacement method. For the remaining candidates, accurate values of the free energy are obtained by means of the thermodynamic integration method (TIM). For this purpose, quantum molecular dynamics simulations are carried out at different volumes and temperatures. Interatomic potentials based on machine learning are developed in order to treat the large systems and long times required for TIM. The potentials reproduce the free energy with an accuracy of 1-5 meV/atom, which is sufficient for the prediction of phase transitions. The equilibrium curves of the different phases are obtained from the free energies. The melting curve is calculated by a modified Z-method with the developed potential.
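
    Thermodynamic integration obtains a free-energy difference by integrating the ensemble average of dU/dλ along a coupling path from the reference system to the target. A minimal numeric sketch, where the integrand is an analytic stand-in for averages that would come from separate MD runs at fixed λ:

```python
import numpy as np

# lambda grid along the coupling path from reference (0) to target (1)
lambdas = np.linspace(0.0, 1.0, 11)

# Mock ensemble averages <dU/dlambda>; in practice each value comes from
# an equilibrium MD simulation at that fixed lambda.
dU_dlambda = 2.0 * lambdas + 1.0

# Trapezoidal quadrature: Delta F = integral over [0, 1] of <dU/dlambda>.
delta_F = float(np.sum(0.5 * (dU_dlambda[1:] + dU_dlambda[:-1]) * np.diff(lambdas)))
```

    For the linear mock integrand the trapezoid rule is exact (Delta F = 2); with real MD data the quadrature error and the statistical error of each average both enter the meV/atom accuracy budget mentioned above.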

  14. Epicenter location by analysis for interictal spikes

    NASA Technical Reports Server (NTRS)

    Hand, C.

    2001-01-01

    The MEG recording is a quick and painless process that requires no surgery. This approach has the potential to save time, reduce patient discomfort, and eliminate a painful and potentially dangerous surgical step in the treatment procedure.

  15. A computational kinetic model of diffusion for molecular systems.

    PubMed

    Teo, Ivan; Schulten, Klaus

    2013-09-28

    Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale typically in a highly particular environment, characterized through the protein's geometry, surrounding electrostatic field, and location. In order to account for solute energetics and mobility of solutes in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion based on a Markov State Model framework. Prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance ecMscS.
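
    The Brownian motion being coarse-grained here has, for free diffusion, an elementary move that is a Gaussian displacement with variance 2DΔt per dimension, which can be checked against the Einstein relation. A generic sketch (not the paper's Voronoi-grid implementation):

```python
import numpy as np

def bd_step(positions, D, dt, rng):
    """One Brownian-dynamics step for freely diffusing particles:
    Gaussian displacement with variance 2*D*dt in each dimension."""
    return positions + rng.standard_normal(positions.shape) * np.sqrt(2.0 * D * dt)

rng = np.random.default_rng(42)
D, dt, n_particles = 1.0, 1e-3, 20000
x = np.zeros((n_particles, 3))
for _ in range(100):
    x = bd_step(x, D, dt, rng)

# Einstein relation in 3D: mean-squared displacement ~ 6*D*t, with t = 100*dt.
msd = (x ** 2).sum(axis=1).mean()
```

    The kinetic model in the paper replaces these per-particle moves with transition rates between Voronoi cells, which is what allows time steps far beyond what the raw displacement update tolerates.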

  16. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. 
The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.

  17. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG-model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the Simplex algorithm known from linear optimization is used. It computes a point which is common to two convex polyhedra. The polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus, the algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
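
    The core test (finding a point common to two convex polyhedra given as half-space systems) is a linear feasibility problem and can be reproduced with any LP solver; a sketch using SciPy's `linprog` in place of a hand-rolled Simplex:

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Two convex polyhedra {x : A1 x <= b1} and {x : A2 x <= b2} intersect
    iff the stacked inequality system has a feasible point. A zero objective
    turns the LP into a pure feasibility test."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b, bounds=(None, None))
    return bool(res.success)

# Unit cube 0 <= x <= 1 as a half-space system.
I = np.eye(3)
cube = (np.vstack([I, -I]), np.array([1, 1, 1, 0, 0, 0], dtype=float))

def shifted_cube(d):
    # Translate the cube by (d, d, d): A x <= b + A @ (d, d, d).
    A, b = cube
    return A, b + A @ np.full(3, d)

print(polyhedra_intersect(*cube, *shifted_cube(0.5)))  # overlapping cubes: True
```

    The paper's version additionally recovers the intersection polyhedron itself via the dual transformation; the sketch only answers the yes/no question.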

  18. Nanoionics-Based Switches for Radio-Frequency Applications

    NASA Technical Reports Server (NTRS)

    Nessel, James; Lee, Richard

    2010-01-01

    Nanoionics-based devices have shown promise as alternatives to microelectromechanical systems (MEMS) and semiconductor diode devices for switching radio-frequency (RF) signals in diverse systems. Examples of systems that utilize RF switches include phase shifters for electronically steerable phased-array antennas, multiplexers, cellular telephones and other radio transceivers, and other portable electronic devices. Semiconductor diode switches can operate at low potentials (about 1 to 3 V) and high speeds (switching times of the order of nanoseconds) but are characterized by significant insertion loss, high DC power consumption, low isolation, and generation of third-order harmonics and intermodulation distortion (IMD). MEMS-based switches feature low insertion loss (of the order of 0.2 dB), low DC power consumption (picowatts), high isolation (>30 dB), and low IMD, but contain moving parts, are not highly reliable, and must be operated at high actuation potentials (20 to 60 V) generated and applied by use of complex circuitry. In addition, fabrication of MEMS is complex, involving many processing steps. Nanoionics-based switches offer the superior RF performance and low power consumption of MEMS switches, without need for the high potentials and complex circuitry necessary for operation of MEMS switches. At the same time, nanoionics-based switches offer the high switching speed of semiconductor devices. Also, like semiconductor devices, nanoionics-based switches can be fabricated relatively inexpensively by use of conventional integrated-circuit fabrication techniques. Moreover, nanoionics-based switches have simple planar structures that can easily be integrated into RF power-distribution circuits.

  19. A water-based training program that includes perturbation exercises to improve stepping responses in older adults: study protocol for a randomized controlled cross-over trial

    PubMed Central

    Melzer, Itshak; Elbar, Ori; Tsedek, Irit; Oddsson, Lars IE

    2008-01-01

    Background Gait and balance impairments may increase the risk of falls, the leading cause of accidental death in the elderly population. Fall-related injuries constitute a serious public health problem associated with high costs for society as well as human suffering. A rapid step is the most important protective postural strategy, acting to recover equilibrium and prevent a fall from initiating. It can arise from large perturbations, but also frequently as a consequence of volitional movements. We propose a novel water-based training program that includes specific perturbation exercises targeting stepping responses, which could have a profound effect in reducing the risk of falling. We describe the water-based balance training program and a study protocol to evaluate its efficacy (Trial registration number #NCT00708136). Methods/Design The proposed water-based training program involves use of unpredictable, multi-directional perturbations in a group setting to evoke compensatory and volitional stepping responses. Perturbations are induced by slightly pushing the subjects and by water turbulence, in 24 training sessions conducted over 12 weeks. Concurrent cognitive tasks during movement tasks are included. Principles of physical training and exercise including awareness, continuity, motivation, overload, periodicity, progression and specificity were used in the development of this novel program. Specific goals are to increase the speed of stepping responses and improve the postural control mechanism and physical functioning. A prospective, randomized, cross-over trial with concealed allocation, assessor blinding and intention-to-treat analysis will be performed to evaluate the efficacy of the water-based training program. A total of 36 community-dwelling adults (age 65–88) with no recent history of instability or falling will be assigned to either the perturbation-based training or a control group (no training). 
Voluntary step reaction times and postural stability using stabilogram diffusion analysis will be tested before and after the 12 weeks of training. Discussion This study will determine whether a water-based balance training program that includes perturbation exercises, in a group setting, can improve speed of voluntary stepping responses and improve balance control. Results will help guide the development of more cost-effective interventions that can prevent the occurrence of falls in the elderly. PMID:18706103

  20. Balance confidence is related to features of balance and gait in individuals with chronic stroke

    PubMed Central

    Schinkel-Ivy, Alison; Wong, Jennifer S.; Mansfield, Avril

    2016-01-01

    Reduced balance confidence is associated with impairments in features of balance and gait in individuals with sub-acute stroke. However, an understanding of these relationships in individuals at the chronic stage of stroke recovery is lacking. This study aimed to quantify relationships between balance confidence and specific features of balance and gait in individuals with chronic stroke. Participants completed a balance confidence questionnaire and clinical balance assessment (quiet standing, walking, and reactive stepping) at 6 months post-discharge from inpatient stroke rehabilitation. Regression analyses were performed using balance confidence as a predictor variable and quiet standing, walking, and reactive stepping outcome measures as the dependent variables. Walking velocity was positively correlated with balance confidence, while medio-lateral centre of pressure excursion (quiet standing) and double support time, step width variability, and step time variability (walking) were negatively correlated with balance confidence. This study provides insight into the relationships between balance confidence and balance and gait measures in individuals with chronic stroke, suggesting that individuals with low balance confidence exhibited impaired control of quiet standing as well as walking characteristics associated with cautious gait strategies. Future work should identify the direction of these relationships to inform community-based stroke rehabilitation programs for individuals with chronic stroke, and determine the potential utility of incorporating interventions to improve balance confidence into these programs. PMID:27955809

  1. People with diabetic peripheral neuropathy display a decreased stepping accuracy during walking: potential implications for risk of tripping.

    PubMed

    Handsaker, J C; Brown, S J; Bowling, F L; Marple-Horvat, D E; Boulton, A J M; Reeves, N D

    2016-05-01

    To examine the stepping accuracy of people with diabetes and diabetic peripheral neuropathy. Fourteen patients with diabetic peripheral neuropathy (DPN), 12 patients with diabetes but no neuropathy (D) and 10 healthy non-diabetic control participants (C) took part. Accuracy of stepping was measured whilst the participants walked along a walkway consisting of 18 stepping targets. Preliminary data on visual gaze characteristics were also captured in a subset of participants (diabetic peripheral neuropathy group: n = 4; diabetes-alone group: n = 4; and control group: n = 4) during the same task. Patients in the diabetic peripheral neuropathy group and patients in the diabetes-alone group were significantly less accurate at stepping on targets than were control subjects (P < 0.05). Preliminary visual gaze analysis identified that patients with diabetic peripheral neuropathy were slower to look between targets, resulting in less time being spent looking at a target before foot-target contact. Impaired motor control is theorized to be a major factor underlying the changes in stepping accuracy, and altered visual gaze behaviour may potentially also play a role. Reduced stepping accuracy may indicate a decreased ability to control the placement of the lower limbs, leading to patients with neuropathy potentially being less able to avoid observed obstacles during walking. © 2015 Diabetes UK.

  2. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, Lee M; Sheldon, Frederick T

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems, which will require a long-term commitment to solve the well-known challenges.

  3. Camera-augmented mobile C-arm (CamC): A feasibility study of augmented reality imaging in the operating room.

    PubMed

    von der Heide, Anna Maria; Fallavollita, Pascal; Wang, Lejing; Sandner, Philipp; Navab, Nassir; Weidert, Simon; Euler, Ekkehard

    2018-04-01

    In orthopaedic trauma surgery, image-guided procedures are mostly based on fluoroscopy, so the reduction of radiation exposure is an important goal. The purpose of this work was to investigate the impact of a camera-augmented mobile C-arm (CamC) on radiation exposure and the surgical workflow during a first clinical trial. Applying a workflow-oriented approach, 10 general workflow steps were defined to compare the CamC to traditional C-arms. The surgeries included were arbitrarily identified and assigned to the study. The evaluation criteria were radiation exposure and operation time for each workflow step and the entire surgery. The evaluation protocol was designed and conducted in a single-centre study. Radiation exposure was markedly reduced, by 18 X-ray shots (46%), using the CamC while keeping similar surgery times. The intuitiveness of the system, its easy integration into the surgical workflow, and its great potential to reduce radiation have been demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  4. TSCA Work Plan: 2012 Scoring of Potential Candidate Chemicals Entering Step 2

    EPA Pesticide Factsheets

    In 2012, EPA scored these chemicals based on hazard, exposure and persistence/bioaccumulation criteria as part of Step 2 in the Work Plan methodology in order to identify candidate chemicals for near-term review and assessment under TSCA.

  5. Being Prepared for Climate Change: Checklists of Potential Climate Change Risks, from Step 3

    EPA Pesticide Factsheets

    The Being Prepared for Climate Change workbook is a guide for constructing a climate change adaptation plan based on identifying risks and their consequences. These checklists (from Step 3 of the workbook) help users identify risks.

  6. Inhibition of Insulin Amyloid Fibrillation by a Novel Amphipathic Heptapeptide

    PubMed Central

    Ratha, Bhisma N.; Ghosh, Anirban; Brender, Jeffrey R.; Gayen, Nilanjan; Ilyas, Humaira; Neeraja, Chilukoti; Das, Kali P.; Mandal, Atin K.; Bhunia, Anirban

    2016-01-01

    The aggregation of insulin into amyloid fibers has been a limiting factor in the development of fast-acting insulin analogues, creating a demand for excipients that limit aggregation. Despite the potential demand, inhibitors specifically targeting insulin have been few in number. Here we report a non-toxic and serum-stable designed heptapeptide, KR7 (KPWWPRR-NH2), that differs significantly from the primarily hydrophobic sequences that have previously been used to interfere with insulin amyloid fibrillation. Thioflavin T fluorescence assays, circular dichroism spectroscopy, and one-dimensional proton NMR experiments suggest KR7 primarily targets the fiber elongation step, with little effect on the early oligomerization steps in the lag time period. From confocal fluorescence and atomic force microscopy experiments, the net result appears to be the arrest of aggregation in an early, non-fibrillar aggregation stage. This mechanism is noticeably different from that of previous peptide-based inhibitors, which have primarily shifted the lag time with little effect on later stages of aggregation. As insulin is an important model system for understanding protein aggregation, the new peptide may be an important tool for understanding peptide-based inhibition of amyloid formation. PMID:27679488

  7. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures for manufacturing systems have been proposed, aiming to realize more flexible control structures. Much research has been carried out on these distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. In this research, a real-time scheduling method is proposed to select suitable combinations of human operators, resources and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select, based on their preferences, the manufacturing processes they will carry out in the next time period. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
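    The three-step assignment cycle described above can be sketched as a greedy matching loop. This is an illustrative reconstruction, not the authors' implementation: the names (`match_by_utility`, `schedule_next_period`) are hypothetical, and a simple greedy rule stands in for the cited utility-value-based and dispatching-rule-based methods.

    ```python
    def match_by_utility(agents, tasks, utility):
        """Greedily pair each agent with at most one task, taking (agent, task)
        pairs in order of descending utility until agents or tasks run out."""
        scored = sorted(((utility[a][t], a, t)
                         for a in agents for t in tasks if t in utility[a]),
                        reverse=True)
        pairs, used_agents, free_tasks = [], set(), set(tasks)
        for _, a, t in scored:
            if a not in used_agents and t in free_tasks:
                pairs.append((a, t))
                used_agents.add(a)
                free_tasks.discard(t)
        return pairs

    def schedule_next_period(operator_pref, machine_util, agv_util, jobs):
        """One scheduling cycle for the next time period, in three steps."""
        # Step 1: operators pick the processes (jobs) they prefer most.
        step1 = match_by_utility(list(operator_pref), jobs, operator_pref)
        # Step 2: machine tools and jobs form machining combinations.
        step2 = match_by_utility(list(machine_util), jobs, machine_util)
        # Step 3: AGVs and jobs form transportation combinations.
        step3 = match_by_utility(list(agv_util), jobs, agv_util)
        return step1, step2, step3
    ```

    Each step here uses the same greedy matcher; in the paper the three steps apply different selection rules, so the sketch only conveys the overall control flow.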

  8. Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations

    PubMed Central

    2010-01-01

    Background Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin® Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new package Preparation times (NNF 1.35 minutes, NNP 2.48 minutes, GTP 4.11 minutes, HTP 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different. 
NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Conclusions Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand had the highest net costs. PMID:20377905

  9. Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations.

    PubMed

    Nickman, Nancy A; Haak, Sandra W; Kim, Jaewhan

    2010-04-08

    Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Norditropin(R) NordiFlex and Norditropin(R) NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsvaerd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin(R) Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen(R) (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new package Preparation times (NNF 1.35 minutes, NNP 2.48 minutes, GTP 4.11 minutes, HTP 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different. 
NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand had the highest net costs.

  10. Continuous track paths reveal additive evidence integration in multistep decision making.

    PubMed

    Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom

    2017-10-03

    Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.

  11. Resuscitator’s perceptions and time for corrective ventilation steps during neonatal resuscitation

    PubMed Central

    Sharma, Vinay; Lakshminrusimha, Satyan; Carrion, Vivien; Mathew, Bobby

    2016-01-01

    Background The 2010 neonatal resuscitation program (NRP) guidelines incorporate ventilation corrective steps (using the mnemonic MRSOPA) into the resuscitation algorithm. The perception of neonatal providers, the time taken to perform these maneuvers, and the effectiveness of these additional steps have not been evaluated. Methods Using two simulated clinical scenarios of varying degrees of cardiovascular compromise, perinatal asphyxia with (i) bradycardia (heart rate 40 min−1) and (ii) cardiac arrest, 35 NRP-certified providers were evaluated for their preference for performing these corrective measures, the time taken to perform these steps, and the time to onset of chest compressions. Results The average time taken to perform ventilation corrective steps (MRSOPA) was 48.9 ± 21.4 s. Providers were less likely to perform corrective steps and proceeded directly to endotracheal intubation in the scenario of cardiac arrest as compared to a state of bradycardia. Cardiac compressions were initiated significantly sooner in the scenario of cardiac arrest (89 ± 24 s) as compared to severe bradycardia (122 ± 23 s), p < 0.0001. There were no differences in the time taken to initiation of chest compressions between physicians and mid-level care providers or with the level of experience of the provider. Conclusions Effective ventilation of the lungs with corrective steps using a mask is important in most cases of neonatal resuscitation. Neonatal resuscitators prefer early endotracheal intubation and initiation of chest compressions in the presence of asystolic cardiac arrest. Corrective ventilation steps can potentially postpone initiation of chest compressions and may delay return of spontaneous circulation in the presence of severe cardiovascular compromise. PMID:25796996

  12. Evidence-based practice, step by step: critical appraisal of the evidence: part II: digging deeper--examining the "keeper" studies.

    PubMed

    Fineout-Overholt, Ellen; Melnyk, Bernadette Mazurek; Stillwell, Susan B; Williamson, Kathleen M

    2010-09-01

    This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

  13. Developing Induced Pluripotent Stem Cell-Based Therapy for the Masses.

    PubMed

    Rao, Mahendra S; Atala, Anthony

    2016-02-01

    The discovery of induced pluripotent stem cells and the ability to manufacture them using clinically compliant protocols has the potential to revolutionize the field of regenerative medicine. However, realizing this potential requires the development of processes that are reliable, reproducible, and cost-effective and that at the same time do not compromise the safety of the individuals receiving this therapy. In the present report, we discuss how cost reductions can be obtained using our experience with obtaining approval of biologic agents, autologous therapy, and the recent approval of cord blood banks. Significance: For therapy to be widely available, the cost of manufacturing stem cells must be reduced. The steps proposed in the present report, when implemented, have the potential to reduce these costs significantly. ©AlphaMed Press.

  14. Index Fund Selections with GAs and Classifications Based on Turnover

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selection is important for hedging the risk of investment in a stock market. The 'selection' means that, for stock index futures, n companies out of all those in the market are selected. For index fund selection, Orito et al.(6) proposed a method consisting of the following two steps: Step 1 selects N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 constructs a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of the time data (price data) in the experiments. The main purpose of this paper is to introduce a more effective rule for Step 1, based on turnover. The method consisting of Step 1 based on turnover and Step 2 is examined with numerical experiments for the 1st Section of the Tokyo Stock Exchange. The results show that with our method it is possible to construct a more effective index fund than with the method of Orito et al.(6). The accuracy of the results using our method depends little on the length of the time data (turnover data). The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.
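    Step 1's heuristic, ranking companies by the coefficient of determination between each company's return rate and the index's increasing rate, can be sketched as below. This is an illustrative reconstruction: the function names are assumptions, and the paper's turnover-based rule would substitute a turnover series for the return series.

    ```python
    def r_squared(x, y):
        """Coefficient of determination between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        if vx == 0 or vy == 0:
            return 0.0
        return (cov * cov) / (vx * vy)

    def preselect(index_rates, company_returns, N):
        """Step 1: keep the N companies whose return series best track the index."""
        ranked = sorted(company_returns,
                        key=lambda c: r_squared(company_returns[c], index_rates),
                        reverse=True)
        return ranked[:N]
    ```

    Step 2 would then run a genetic algorithm over subsets of the N survivors to pick the final n-company fund.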

  15. GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach

    PubMed Central

    Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin

    2016-01-01

    As one significant component of space environmental weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS), because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations of the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, there is a trade-off between response time and estimation accuracy due to the fixed time constants. In order to release this limitation, a two-step approach (TSA) is proposed by integrating cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect the ionospheric gradient anomaly. The performance of the proposed method is tested by using simulated and real-world data, respectively. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even when the noise is more severe. Compared to the traditional CCD methods, the experiments with real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, which demonstrates the ability of the TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
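    The linear time-invariant stage of a CCD monitor like the one described can be sketched as a first-order low-pass filter on the code-minus-carrier divergence rate. This is a minimal sketch with a fixed time constant; the TSA's actual contribution, cascading these filters and adding an adaptive Kalman stage, is omitted, and the class name is an assumption.

    ```python
    class CcdLowPass:
        """First-order low-pass estimate of the code-minus-carrier divergence
        rate (m/s), the quantity CCD monitors compare against a threshold to
        flag ionospheric gradients. tau is the fixed filter time constant in
        seconds; dt is the measurement interval in seconds."""

        def __init__(self, tau, dt):
            self.alpha = dt / (tau + dt)   # smoothing factor of the filter
            self.dt = dt
            self.rate = 0.0                # filtered divergence-rate estimate
            self.prev = None               # previous code-minus-carrier value

        def update(self, code_minus_carrier):
            """Fold one new code-minus-carrier sample (m) into the estimate."""
            if self.prev is not None:
                raw_rate = (code_minus_carrier - self.prev) / self.dt
                self.rate += self.alpha * (raw_rate - self.rate)
            self.prev = code_minus_carrier
            return self.rate
    ```

    Fed a divergence ramp of 0.018 m/s (the gradient rate quoted in the abstract), the estimate converges to 0.018, but only after several time constants, which is exactly the response-time/accuracy trade-off the TSA is designed to relax.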

  16. Exploiting Dragon Envisat Times Series and Other Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Marie, Tiphanie; Lai, Xijun; Huber, Claire; Chen, Xiaoling; Uribe, Carlos; Huang, Shifeng; Lafaye, Murielle; Yesou, Herve

    2010-10-01

    Earth Observation data were used for mapping the potential distribution of Schistosomiasis japonica within Poyang Lake (Jiangxi Province, PR China). In the first of two steps, areas suitable for the development of Oncomelania hupensis, the intermediate host snail of Schistosoma japonicum, were derived from submersion time parameters and vegetation community indicators. Yearly maps from 2003 to 2008 indicate five principal potential endemic areas: Poyang Lake National Nature Reserve, Dalianzi Hu, Gan Delta, Po Jiang and Xi He. Monthly maps showing the annual dynamics of potential O. hupensis presence areas were obtained from December 2005 to December 2008. In a second step, the potential risk of human transmission was addressed through the mapping of settlements and the identification of some human activities. The urban areas and settlements were mapped all around the lake, and fishing net locations in the central part of Poyang Lake were identified. Finally, crossing the different parameters highlights the potential risk of transmission in most of the fishing net areas.

  17. Development of a real time activity monitoring Android application utilizing SmartStep.

    PubMed

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms, and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while wearing the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
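    The prediction side of a multinomial logistic (softmax) model of this kind can be sketched in a few lines. The weights and function names below are hypothetical stand-ins, not the paper's fitted model: real coefficients would be trained on the SmartStep insole features.

    ```python
    import math

    def softmax(scores):
        """Convert per-class linear scores into probabilities."""
        m = max(scores)                          # shift for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def predict_activity(features, weights, biases, labels):
        """Multinomial logistic discrimination: score each activity class with
        a linear model, normalize with softmax, return the top label and its
        probability."""
        scores = [b + sum(w * x for w, x in zip(ws, features))
                  for ws, b in zip(weights, biases)]
        probs = softmax(scores)
        best = max(range(len(labels)), key=lambda i: probs[i])
        return labels[best], probs[best]
    ```

    On a phone, a function like this would run once per feature window computed from the incoming BLE samples, which keeps the per-window cost to one matrix-vector product plus a softmax.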

  18. Stance-phase force on the opposite limb dictates swing-phase afferent presynaptic inhibition during locomotion

    PubMed Central

    Hayes, Heather Brant; Chang, Young-Hui

    2012-01-01

    Presynaptic inhibition is a powerful mechanism for selectively and dynamically gating sensory inputs entering the spinal cord. We investigated how hindlimb mechanics influence presynaptic inhibition during locomotion using pioneering approaches in an in vitro spinal cord–hindlimb preparation. We recorded lumbar dorsal root potentials to measure primary afferent depolarization-mediated presynaptic inhibition and compared their dependence on hindlimb endpoint forces, motor output, and joint kinematics. We found that stance-phase force on the opposite limb, particularly at toe contact, strongly influenced the magnitude and timing of afferent presynaptic inhibition in the swinging limb. Presynaptic inhibition increased in proportion to opposite limb force, as well as locomotor frequency. This form of presynaptic inhibition binds the sensorimotor states of the two limbs, adjusting sensory inflow to the swing limb based on forces generated by the stance limb. Functionally, it may serve to adjust swing-phase sensory transmission based on locomotor task, speed, and step-to-step environmental perturbations. PMID:22442562

  19. Assessing the performance of regional landslide early warning models: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Calvello, M.; Piciullo, L.

    2015-10-01

    The paper proposes the evaluation of the technical performance of a regional landslide early warning system by means of an original approach, called the EDuMaP method, comprising three successive steps: identification and analysis of the Events (E), i.e. landslide events and warning events derived from available landslide and warning databases; definition and computation of a Duration Matrix (DuMa), whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model Performance (P) by means of performance criteria and indicators applied to the duration matrix. During the first step, the analyst takes into account the features of the warning model by means of ten input parameters, which are used to identify and classify landslide and warning events according to their spatial and temporal characteristics. In the second step, the analyst computes a time-based duration matrix having a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The proposed method is based on a framework clearly distinguishing between local and regional landslide early warning systems as well as among correlation laws, warning models and warning systems. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslide and warning data from the municipal early warning system operating in Rio de Janeiro (Brazil).

  20. Assessing the performance of regional landslide early warning models: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Calvello, M.; Piciullo, L.

    2016-01-01

    A schematic of the components of regional early warning systems for rainfall-induced landslides is herein proposed, based on a clear distinction between warning models and warning systems. According to this framework, an early warning system comprises a warning model as well as a monitoring and warning strategy, a communication strategy and an emergency plan. The paper proposes the evaluation of regional landslide warning models by means of an original approach, called the "event, duration matrix, performance" (EDuMaP) method, comprising three successive steps: identification and analysis of the events, i.e., landslide events and warning events derived from available landslide and warning databases; definition and computation of a duration matrix, whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model performance by means of performance criteria and indicators applied to the duration matrix. During the first step the analyst identifies and classifies the landslide and warning events, according to their spatial and temporal characteristics, by means of a number of model parameters. In the second step, the analyst computes a time-based duration matrix with a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslide and warning data from the municipal early warning system operating in Rio de Janeiro (Brazil).
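    The duration matrix at the heart of EDuMaP can be sketched as overlap accounting between classified warning and landslide events. This is a simplified illustration under stated assumptions: events are (start, end, class_index) tuples in hours, and the method's ten input parameters and its handling of missed and false alarms via dedicated classes are omitted.

    ```python
    def duration_matrix(warning_events, landslide_events, n_warn, n_land):
        """Time-based duration matrix: entry [i][j] accumulates the hours
        during which a warning event of class i overlaps in time with a
        landslide event of class j."""
        D = [[0.0] * n_land for _ in range(n_warn)]
        for w_start, w_end, w_class in warning_events:
            for l_start, l_end, l_class in landslide_events:
                overlap = min(w_end, l_end) - max(w_start, l_start)
                if overlap > 0:
                    D[w_class][l_class] += overlap
        return D
    ```

    The performance step then reduces this matrix to indicators, e.g. summing the on-diagonal durations (warnings matched by landslides of the corresponding class) against the off-diagonal ones.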

  1. Sequence-dependent response of DNA to torsional stress: a potential biological regulation mechanism.

    PubMed

    Reymer, Anna; Zakrzewska, Krystyna; Lavery, Richard

    2018-02-28

    Torsional restraints on DNA change in time and space during the life of the cell and are an integral part of processes such as gene expression, DNA repair and packaging. The mechanical behavior of DNA under torsional stress has been studied on a mesoscopic scale, but little is known concerning its response at the level of individual base pairs and the effects of base pair composition. To answer this question, we have developed a geometrical restraint that can accurately control the total twist of a DNA segment during all-atom molecular dynamics simulations. By applying this restraint to four different DNA oligomers, we are able to show that DNA responds to both under- and overtwisting in a very heterogeneous manner. Certain base pair steps, in specific sequence environments, are able to absorb most of the torsional stress, leaving other steps close to their relaxed conformation. This heterogeneity also affects the local torsional modulus of DNA. These findings suggest that modifying torsional stress on DNA could act as a modulator for protein binding via the heterogeneous changes in local DNA structure.

  2. Sequence-dependent response of DNA to torsional stress: a potential biological regulation mechanism

    PubMed Central

    Reymer, Anna; Zakrzewska, Krystyna; Lavery, Richard

    2018-01-01

    Abstract Torsional restraints on DNA change in time and space during the life of the cell and are an integral part of processes such as gene expression, DNA repair and packaging. The mechanical behavior of DNA under torsional stress has been studied on a mesoscopic scale, but little is known concerning its response at the level of individual base pairs and the effects of base pair composition. To answer this question, we have developed a geometrical restraint that can accurately control the total twist of a DNA segment during all-atom molecular dynamics simulations. By applying this restraint to four different DNA oligomers, we are able to show that DNA responds to both under- and overtwisting in a very heterogeneous manner. Certain base pair steps, in specific sequence environments, are able to absorb most of the torsional stress, leaving other steps close to their relaxed conformation. This heterogeneity also affects the local torsional modulus of DNA. These findings suggest that modifying torsional stress on DNA could act as a modulator for protein binding via the heterogeneous changes in local DNA structure. PMID:29267977

  3. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.

  4. Combining medically assisted treatment and Twelve-Step programming: a perspective and review.

    PubMed

    Galanter, Marc

    2018-01-01

People with severe substance use disorders require long-term rehabilitative care after the initial treatment. There is, however, a deficit in the availability of such care. This may be due both to inadequate medical coverage and insufficient use of community-based Twelve-Step programs in many treatment facilities. In order to address this deficit, rehabilitative care for severe substance use disorders could be promoted through collaboration between practitioners of medically assisted treatment, employing medications, and Twelve-Step-oriented practitioners. To describe the limitations and benefits in applying biomedical approaches and Twelve-Step resources in the rehabilitation of persons with severe substance use disorders; and to assess how the two approaches can be employed together to improve clinical outcome. Empirical literature focusing on clinical and manpower issues is reviewed with regard to (a) limitations in available treatment options in ambulatory and residential addiction treatment facilities for persons with severe substance use disorders, (b) problems of long-term rehabilitation particular to opioid-dependent persons, associated with the limitations of pharmacologic approaches, (c) the relative effectiveness of biomedical and Twelve-Step approaches in the clinical context, and (d) the potential for enhanced use of these approaches, singly and in combination, to address perceived deficits. The biomedical and Twelve-Step-oriented approaches are based on differing theoretical and empirically grounded models. Research-based opportunities are reviewed for improving addiction rehabilitation resources with enhanced collaboration between practitioners of these two potentially complementary practice models. This can involve medications for both acute and chronic treatment for substances for which such medications are available, and Twelve-Step-based support for abstinence and long-term rehabilitation. 
Clinical and Scientific Significance: Criteria for evidence-based approaches to combined treatment should be developed, and research on evidence-based treatment can be undertaken on this basis in order to achieve improved clinical outcomes.

  5. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

Numerical methods are used to design a spherical baroclinic flow model experiment of the large-scale atmospheric flow for Spacelab. The dielectric simulation of radial gravity is only dominant in a low-gravity environment. Computer codes are developed to study the processes at work in crystal growing systems which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low-gravity environment. Two- and three-dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings. This allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time-dependent coordinate transformation is used to make this interface a coordinate surface.
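The position-dependent pseudo-time-step idea can be illustrated on a toy steady problem. This is a minimal sketch, not the NASA codes: a 1-D Laplace equation is iterated to steady state on a nonuniform mesh, with the local time step chosen as a power (here the square) of the local mesh spacing, so that finely meshed regions do not throttle the whole iteration.

```python
import numpy as np

# Pseudo-time stepping to the steady solution of u_xx = 0, u(0)=0, u(1)=1,
# on a nonuniform mesh; the exact steady solution is u = x.
x = np.sort(np.concatenate([np.linspace(0, 1, 12), [0.03, 0.07, 0.93, 0.97]]))
n = len(x)
u = np.zeros(n)
u[-1] = 1.0

for _ in range(20000):
    unew = u.copy()
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        # second difference on a nonuniform mesh
        lap = 2.0 * (hl * u[i + 1] - (hl + hr) * u[i] + hr * u[i - 1]) \
              / (hl * hr * (hl + hr))
        dt = 0.2 * min(hl, hr) ** 2       # local time step from mesh spacing
        unew[i] = u[i] + dt * lap
    u = unew

err = np.max(np.abs(u - x))               # deviation from the exact solution
```

The factor 0.2 keeps each local update inside the explicit diffusive stability limit; the actual experiment codes use implicit iterations and separate velocity/temperature time steps, which this sketch does not attempt.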

  6. Training Rapid Stepping Responses in an Individual With Stroke

    PubMed Central

    Inness, Elizabeth L.; Komar, Janice; Biasin, Louis; Brunton, Karen; Lakhani, Bimal; McIlroy, William E.

    2011-01-01

    Background and Purpose Compensatory stepping reactions are important responses to prevent a fall following a postural perturbation. People with hemiparesis following a stroke show delayed initiation and execution of stepping reactions and often are found to be unable to initiate these steps with the more-affected limb. This case report describes a targeted training program involving repeated postural perturbations to improve control of compensatory stepping in an individual with stroke. Case Description Compensatory stepping reactions of a 68-year-old man were examined 52 days after left hemorrhagic stroke. He required assistance to prevent a fall in all trials administered during his initial examination because he showed weight-bearing asymmetry (with more weight borne on the more-affected right side), was unable to initiate stepping with the right leg (despite blocking of the left leg in some trials), and demonstrated delayed response times. The patient completed 6 perturbation training sessions (30–60 minutes per session) that aimed to improve preperturbation weight-bearing symmetry, to encourage stepping with the right limb, and to reduce step initiation and completion times. Outcomes Improved efficacy of compensatory stepping reactions with training and reduced reliance on assistance to prevent falling were observed. Improvements were noted in preperturbation asymmetry and step timing. Blocking the left foot was effective in encouraging stepping with the more-affected right foot. Discussion This case report demonstrates potential short-term adaptations in compensatory stepping reactions following perturbation training in an individual with stroke. Future work should investigate the links between improved compensatory step characteristics and fall risk in this vulnerable population. PMID:21511992

  7. Computationally efficient simulation of electrical activity at cell membranes interacting with self-generated and externally imposed electric fields

    NASA Astrophysics Data System (ADS)

    Agudelo-Toro, Andres; Neef, Andreas

    2013-04-01

    Objective. We present a computational method that implements a reduced set of Maxwell's equations to allow simulation of cells under realistic conditions: sub-micron cell morphology, a conductive non-homogeneous space and various ion channel properties and distributions. Approach. While a reduced set of Maxwell's equations can be used to couple membrane currents to extra- and intracellular potentials, this approach is rarely taken, most likely because adequate computational tools are missing. By using these equations, and introducing an implicit solver, numerical stability is attained even with large time steps. The time steps are limited only by the time development of the membrane potentials. Main results. This method allows simulation times of tens of minutes instead of weeks, even for complex problems. The extracellular fields are accurately represented, including secondary fields, which originate at inhomogeneities of the extracellular space and can reach several millivolts. We present a set of instructive examples that show how this method can be used to obtain reference solutions for problems, which might not be accurately captured by the traditional approaches. This includes the simulation of realistic magnitudes of extracellular action potential signals in restricted extracellular space. Significance. The electric activity of neurons creates extracellular potentials. Recent findings show that these endogenous fields act back onto the neurons, contributing to the synchronization of population activity. The influence of endogenous fields is also relevant for understanding therapeutic approaches such as transcranial direct current, transcranial magnetic and deep brain stimulation. The mutual interaction between fields and membrane currents is not captured by today's concepts of cellular electrophysiology, including the commonly used activation function, as those concepts are based on isolated membranes in an infinite, isopotential extracellular space. 
The presented tool makes simulations with detailed morphology and implicit interactions of currents and fields available to the electrophysiology community.
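The stability gain from an implicit solver, which is what permits the large time steps described above, can be demonstrated on the simplest possible membrane model. The passive RC membrane and all parameter values below are illustrative assumptions, not the paper's actual discretization.

```python
# Passive membrane: C dV/dt = -g (V - E). Backward (implicit) Euler remains
# stable for time steps far beyond the explicit limit; explicit Euler at the
# same step size diverges. Parameter values are hypothetical.
C, g, E = 1.0, 0.5, -70.0        # capacitance, conductance, reversal (mV)
tau = C / g                       # membrane time constant
dt = 10.0 * tau                   # deliberately huge time step

V_imp = 0.0
for _ in range(50):
    # implicit update: solve C (V' - V)/dt = -g (V' - E) for V'
    V_imp = (V_imp + dt * g * E / C) / (1.0 + dt * g / C)

V_exp = 0.0
for _ in range(50):
    V_exp = V_exp + dt * (-g / C) * (V_exp - E)   # explicit Euler blows up
```

The implicit iteration relaxes to the resting potential E regardless of step size, which is the property that lets the full simulations be limited only by the time development of the membrane potentials rather than by numerical stability.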

  8. Dynamic Pathfinders: Leveraging Your OPAC to Create Resource Guides

    ERIC Educational Resources Information Center

    Hunter, Ben

    2008-01-01

    Library pathfinders are a time-tested method of leading library users to important resources. However, paper-based pathfinders suffer from space limitations, and both paper-based and Web-based pathfinders require frequent updates to keep up with new library acquisitions. This article details a step-by-step method to create an online dynamic…

  9. Enhanced capillary electrophoretic screening of Alzheimer based on direct apolipoprotein E genotyping and one-step multiplex PCR.

    PubMed

    Woo, Nain; Kim, Su-Kang; Sun, Yucheng; Kang, Seong Ho

    2018-01-01

Human apolipoprotein E (ApoE) is associated with high cholesterol levels, coronary artery disease, and especially Alzheimer's disease. In this study, we developed an ApoE genotyping and one-step multiplex polymerase chain reaction (PCR)-based capillary electrophoresis (CE) method for the enhanced diagnosis of Alzheimer's. The primer mixture of ApoE genes enabled the performance of direct one-step multiplex PCR from whole blood without DNA purification. The combination of direct ApoE genotyping and one-step multiplex PCR minimized the risk of DNA loss or contamination due to the process of DNA purification. All amplified PCR products with different DNA lengths (112-, 253-, 308-, 444-, and 514-bp DNA) of the ApoE genes were analyzed within 2 min by an extended voltage programming (VP)-based CE under the optimal conditions. The extended VP-based CE method was at least 120-180 times faster than conventional slab gel electrophoresis methods. In particular, all amplified DNA fragments were detected in less than 10 PCR cycles using a laser-induced fluorescence detector. The detection limits of the ApoE genes were 6.4-62.0 pM, which were approximately 100-100,000 times more sensitive than previous Alzheimer's diagnosis methods. In addition, the combined one-step multiplex PCR and extended VP-based CE method was also successfully applied to the analysis of ApoE genotypes in Alzheimer's patients and normal samples and confirmed the distribution probability of allele frequencies. This combination of direct one-step multiplex PCR and an extended VP-based CE method should increase the diagnostic reliability of Alzheimer's with high sensitivity and short analysis time, even with direct use of whole blood. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box based models: no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long-term runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions as well as a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparison of model results with tracer data from flights of the ER2 aircraft in the stratospheric polar vortex in 1999/2000, which resolve fine tracer filaments, shows that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.
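The trajectory-based transport idea can be sketched as follows. The 2-D solid-body-rotation wind field and the midpoint integrator below are hypothetical stand-ins for the Alfred Wegener Institute trajectory code; the point is that mixing ratios carried on parcels are conserved by construction and the grid-based CFL restriction does not arise.

```python
import numpy as np

# Minimal Lagrangian transport sketch: advect an air parcel through a steady
# 2-D solid-body-rotation wind field with a midpoint (RK2) trajectory step.
def wind(p):
    x, y = p
    return np.array([-y, x])      # rotation about the origin, unit angular speed

def advect(p, dt, nsteps):
    for _ in range(nsteps):
        k1 = wind(p)
        p = p + dt * wind(p + 0.5 * dt * k1)   # midpoint rule
    return p

parcel = np.array([1.0, 0.0])
final = advect(parcel, dt=0.01, nsteps=int(round(2 * np.pi / 0.01)))
# After one full rotation the parcel returns (approximately) to its start.
```

Any tracer value attached to the parcel is unchanged by the advection itself; in ATLAS only the separate mixing parameterization alters mixing ratios.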

  11. Reverse-Time Imaging Based on Full-Waveform Inverted Velocity Model for Nondestructive Testing of Heterogeneous Engineered Structures

    NASA Astrophysics Data System (ADS)

    Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.

    2017-12-01

Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has been degraded non-uniformly because of environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. But building a wave velocity model that fully captures small, high-contrast defects is difficult, because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of large-scale background variations of the engineered structure. To reduce computational cost and improve detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, as well as in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, enhancing computational power, and optimizing the imaging workflow itself, so that the procedure can efficiently be applied to geometrically complex 3D solids and waveguides. 
Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications in regional scales as well.
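The RTM step's zero-lag cross-correlation imaging condition can be sketched with toy 1-D wavefields. The Gaussian pulses below stand in for full finite-difference source and receiver wavefields; the grid size, pulse width, and scatterer position are arbitrary assumptions.

```python
import numpy as np

# Zero-lag cross-correlation imaging condition: at each image point, correlate
# the forward-propagated source wavefield with the back-propagated receiver
# wavefield over time. The correlation peaks where the two coincide, i.e. at
# the scatterer.
nt, nx = 400, 200
t = np.arange(nt)
x = np.arange(nx)
c = 1.0                         # propagation speed: grid cells per time step
scatterer = 120                 # true reflector position

# source wavefield: pulse travelling right from x = 0
src = np.exp(-0.5 * ((x[None, :] - c * t[:, None]) / 3.0) ** 2)
# back-propagated receiver wavefield: pulse travelling left, timed so that the
# two pulses meet at the scatterer
rec = np.exp(-0.5 * ((x[None, :] - (2 * scatterer - c * t[:, None])) / 3.0) ** 2)

image = np.sum(src * rec, axis=0)   # zero-lag correlation at every position
peak = int(np.argmax(image))        # recovers the scatterer position
```

In the actual two-step workflow, the velocity model used to propagate both wavefields would come from the preceding low-frequency FWI stage.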

  12. [Predicting the outcome in severe injuries: an analysis of 2069 patients from the trauma register of the German Society of Traumatology (DGU)].

    PubMed

    Rixen, D; Raum, M; Bouillon, B; Schlosser, L E; Neugebauer, E

    2001-03-01

On hospital admission numerous variables are documented from multiple trauma patients. The value of these variables for predicting outcome is discussed controversially. The aim was to determine, already on admission, the probability of death of multiple trauma patients. Thus, a multivariate probability model was developed based on data obtained from the trauma registry of the Deutsche Gesellschaft für Unfallchirurgie (DGU). On hospital admission the DGU trauma registry collects more than 30 variables prospectively. In the first step of the analysis, those variables were selected that were assumed, from the literature, to be clinical predictors of outcome. In a second step a univariate analysis of these variables was performed. For all primary variables with univariate significance in outcome prediction, a multivariate logistic regression was performed in the third step and a multivariate prognostic model was developed. 2069 patients from 20 hospitals were prospectively included in the trauma registry from 01.01.1993-31.12.1997 (age 39 +/- 19 years; 70.0% males; ISS 22 +/- 13; 18.6% lethality). Of the more than 30 initially documented variables, age, GCS, ISS, base excess (BE) and prothrombin time were the most important prognostic factors for predicting the probability of death (P(death)). The following prognostic model was developed: P(death) = 1/(1 + e^(-[k + beta1(age) + beta2(GCS) + beta3(ISS) + beta4(BE) + beta5(prothrombin time)])), where k = -0.1551, beta1 = 0.0438 with p < 0.0001, beta2 = -0.2067 with p < 0.0001, beta3 = 0.0252 with p = 0.0071, beta4 = -0.0840 with p < 0.0001 and beta5 = -0.0359 with p < 0.0001. Each of the five variables contributed significantly to the multifactorial model. These data show that age, GCS, ISS, base excess and prothrombin time are potentially important predictors for the early identification of multiple trauma patients with a high risk of death. 
With the base excess and prothrombin time, the only variables of this multifactorial model that can be therapeutically influenced, it might be possible to better guide early and aggressive therapy.
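The fitted model above can be evaluated directly from the reported coefficients. The patient values in the example are invented for illustration, and the units assumed for prothrombin time (the registry's convention) are an assumption of this sketch.

```python
import math

# Probability of death from the logistic model reported in the abstract:
# P(death) = 1/(1 + exp(-[k + b1*age + b2*GCS + b3*ISS + b4*BE + b5*PT]))
k = -0.1551
b1, b2, b3, b4, b5 = 0.0438, -0.2067, 0.0252, -0.0840, -0.0359

def p_death(age, gcs, iss, base_excess, prothrombin_time):
    z = (k + b1 * age + b2 * gcs + b3 * iss
         + b4 * base_excess + b5 * prothrombin_time)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative patient (values chosen for demonstration only):
p = p_death(age=39, gcs=15, iss=22, base_excess=-2.0, prothrombin_time=80)
```

The coefficient signs behave as expected: higher age or ISS raises the predicted probability of death, while higher GCS, base excess, or prothrombin time lowers it.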

  13. Meeting Wise: Making the Most of Collaborative Time for Educators

    ERIC Educational Resources Information Center

    Boudett, Kathryn Parker; City, Elizabeth A.

    2014-01-01

    This book, by two editors of "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning," attempts to bring about a fundamental shift in how educators think about the meetings we attend. They make the case that these gatherings are potentially the most important venue where adult and organizational…

  14. Minimization of diauxic growth lag-phase for high-efficiency biogas production.

    PubMed

    Kim, Min Jee; Kim, Sang Hun

    2017-02-01

The objective of this study was to develop a method for minimizing the diauxic growth lag-phase in biogas production from agricultural by-products (ABPs). Specifically, the effects of proximate composition on the biogas production and degradation rates of the ABPs were investigated, and a new method based on proximate composition combinations was developed to minimize the diauxic growth lag-phase. Experiments were performed using biogas potential tests at a substrate loading of 2.5 g VS/L and a feed-to-microorganism ratio (F/M) of 0.5 under the mesophilic condition. The ABPs were classified based on proximate composition (carbohydrate, protein, fat, etc.). The biogas production patterns, lag phase, and times taken for 90% biogas production (T90) were used for the evaluation of the biogas production with the biochemical methane potential (BMP) test. The high- or medium-carbohydrate and low-fat ABPs (cheese whey, cabbage, and skim milk) showed a single-step digestion process, and the low-carbohydrate and high-fat ABPs (bean curd and perilla seed) showed a two-step digestion process. The mixture of high-fat ABPs and high-carbohydrate ABPs reduced the lag-phase and increased the biogas yield by 35-46% compared with a single ABP. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Raman spectral post-processing for oral tissue discrimination – a step for an automatized diagnostic system

    PubMed Central

    Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.

    2017-01-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115
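The post-processing pipeline, maximum-intensity normalization followed by classification, can be sketched with synthetic data. The WEKA classifiers of the study are replaced here by a tiny NumPy K-nearest-neighbors implementation, and the toy "spectra" are Gaussian peaks with noise, not real Raman data.

```python
import numpy as np

# Maximum-intensity normalization followed by a simple KNN classifier, two of
# the post-processing choices compared in the abstract (synthetic spectra).
rng = np.random.default_rng(0)

def make_spectrum(peak_pos, n=100):
    x = np.arange(n)
    return (np.exp(-0.5 * ((x - peak_pos) / 5.0) ** 2) * rng.uniform(0.5, 2.0)
            + 0.02 * rng.standard_normal(n))

# class 0: peak near channel 30; class 1: peak near channel 70
X = np.array([make_spectrum(30) for _ in range(20)] +
             [make_spectrum(70) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

X_norm = X / np.abs(X).max(axis=1, keepdims=True)   # max-intensity normalization

def knn_predict(train_X, train_y, query, k=3):
    d = np.linalg.norm(train_X - query, axis=1)     # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]              # labels of k nearest spectra
    return np.bincount(votes).argmax()              # majority vote

query = make_spectrum(70)
pred = knn_predict(X_norm, y, query / np.abs(query).max())
```

Normalizing before classification removes the amplitude variation between acquisitions, which is why the normalization choice mattered in the study's comparison.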

  16. Raman spectral post-processing for oral tissue discrimination - a step for an automatized diagnostic system.

    PubMed

    Carvalho, Luis Felipe C S; Nogueira, Marcelo Saito; Neto, Lázaro P M; Bhattacharjee, Tanmoy T; Martin, Airton A

    2017-11-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings.

  17. Optimization of method for zinc analysis in several bee products on renewable mercury film silver based electrode.

    PubMed

    Opoka, Włodzimierz; Szlósarczyk, Marek; Maślanka, Anna; Piech, Robert; Baś, Bogusław; Włodarczyk, Edyta; Krzek, Jan

    2013-01-01

Zinc is an interesting target for detection, as it is one of the elements necessary for the proper functioning of the human body; both its excess and its deficiency can cause several symptoms. Several techniques, including electrochemical ones, have been developed, but they require laboratory equipment, preparative steps, and mercury or complex working electrodes. Here we describe the development of a robust, simple and commercially available electrochemical system. Differential pulse (DP) voltammetry was used for this purpose with the cyclic renewable mercury film silver based electrode (Hg(Ag)FE) and 0.05 M KNO3 solution as a supporting electrolyte. The effects of various factors, such as preconcentration potential and time, pulse amplitude and width, step potential, and supporting electrolyte composition, were optimized. The limits of detection (LOD) and quantification (LOQ) were 1.62 ng/mL and 4.85 ng/mL, respectively. The repeatability of the method at an analyte concentration as low as 3 ng/mL, expressed as RSD, is 3.5% (n = 6). Recovery was determined using certified reference material: Virginia Tobacco Leaves (CTA-VTL-2). The recovery of zinc ranged from 96.6 to 106.5%. The proposed method was successfully applied for the determination of zinc in bee products (honey, propolis and diet supplements) after a digestion procedure.

  18. Activity monitor intervention to promote physical activity of physicians-in-training: randomized controlled trial.

    PubMed

    Thorndike, Anne N; Mills, Sarah; Sonnenberg, Lillian; Palakshappa, Deepak; Gao, Tian; Pau, Cindy T; Regan, Susan

    2014-01-01

Physicians are expected to serve as role models for healthy lifestyles, but long work hours reduce time for healthy behaviors. A hospital-based physical activity intervention could improve physician health and increase counseling about exercise. We conducted a two-phase intervention among 104 medical residents at a large hospital in Boston, Massachusetts. Phase 1 was a 6-week randomized controlled trial comparing daily steps of residents assigned to an activity monitor displaying feedback about steps and energy consumed (intervention) or to a blinded monitor (control). Phase 2 immediately followed and was a 6-week non-randomized team steps competition in which all participants wore monitors with feedback. Phase 1 outcomes were: 1) median steps/day and 2) proportion of days activity monitor worn. The Phase 2 outcome was mean steps/day on days monitor worn (≥500 steps/day). Physiologic measurements were collected at baseline and study end. Median steps/day were compared using Wilcoxon rank-sum tests. Mean steps were compared using repeated measures regression analyses. In Phase 1, intervention and control groups had similar activity (6369 vs. 6063 steps/day, p = 0.16) and compliance with wearing the monitor (77% vs. 77% of days, p = 0.73). In Phase 2 (team competition), residents recorded more steps/day than during Phase 1 (Control: 7,971 vs. 7,567, p = 0.002; Intervention: 7,832 vs. 7,739, p = 0.13). Mean compliance with wearing the activity monitor decreased for both groups during Phase 2 compared to Phase 1 (60% vs. 77%, p<0.001). Mean systolic blood pressure decreased (p = 0.004) and HDL cholesterol increased (p<0.001) among all participants at end of study compared to baseline. 
Although the activity monitor intervention did not have a major impact on activity or health, the high participation rates of busy residents and modest changes in steps, blood pressure, and HDL suggest that more intensive hospital-based wellness programs have potential for promoting healthier lifestyles among physicians. Clinicaltrials.gov NCT01287208.
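The Wilcoxon rank-sum comparison used for the Phase 1 step counts can be reproduced with a small normal-approximation implementation. The daily step counts below are invented; they only loosely echo the reported 6369 vs. 6063 steps/day group levels.

```python
import math

# Wilcoxon rank-sum test (large-sample normal approximation) comparing daily
# step counts between two groups, as used for the Phase 1 outcome.
def rank_sum_test(a, b):
    pooled = sorted((v, g) for g, xs in ((0, a), (1, b)) for v in xs)
    vals = [v for v, _ in pooled]
    ranks = {}
    i = 0
    while i < len(vals):                      # average ranks over tied values
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for m in range(i, j):
            ranks[m] = (i + j + 1) / 2.0      # mean of ranks i+1 .. j
        i = j
    W = sum(ranks[m] for m, (_, g) in enumerate(pooled) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

control = [6063, 5900, 6200, 6100, 5950, 6150]        # hypothetical steps/day
intervention = [6369, 6500, 6400, 6300, 6450, 6550]   # hypothetical steps/day
z, p = rank_sum_test(control, intervention)
```

With samples this small the exact rank-sum distribution would normally be used; the normal approximation keeps the sketch short.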

  19. Activity Monitor Intervention to Promote Physical Activity of Physicians-In-Training: Randomized Controlled Trial

    PubMed Central

    Thorndike, Anne N.; Mills, Sarah; Sonnenberg, Lillian; Palakshappa, Deepak; Gao, Tian; Pau, Cindy T.; Regan, Susan

    2014-01-01

    Background Physicians are expected to serve as role models for healthy lifestyles, but long work hours reduce time for healthy behaviors. A hospital-based physical activity intervention could improve physician health and increase counseling about exercise. Methods We conducted a two-phase intervention among 104 medical residents at a large hospital in Boston, Massachusetts. Phase 1 was a 6-week randomized controlled trial comparing daily steps of residents assigned to an activity monitor displaying feedback about steps and energy consumed (intervention) or to a blinded monitor (control). Phase 2 immediately followed and was a 6-week non-randomized team steps competition in which all participants wore monitors with feedback. Phase 1 outcomes were: 1) median steps/day and 2) proportion of days activity monitor worn. The Phase 2 outcome was mean steps/day on days monitor worn (≥500 steps/day). Physiologic measurements were collected at baseline and study end. Median steps/day were compared using Wilcoxon rank-sum tests. Mean steps were compared using repeated measures regression analyses. Results In Phase 1, intervention and control groups had similar activity (6369 vs. 6063 steps/day, p = 0.16) and compliance with wearing the monitor (77% vs. 77% of days, p = 0.73). In Phase 2 (team competition), residents recorded more steps/day than during Phase 1 (Control: 7,971 vs. 7,567, p = 0.002; Intervention: 7,832 vs. 7,739, p = 0.13). Mean compliance with wearing the activity monitor decreased for both groups during Phase 2 compared to Phase 1 (60% vs. 77%, p<0.001). Mean systolic blood pressure decreased (p = 0.004) and HDL cholesterol increased (p<0.001) among all participants at end of study compared to baseline. 
Conclusions Although the activity monitor intervention did not have a major impact on activity or health, the high participation rates of busy residents and modest changes in steps, blood pressure, and HDL suggest that more intensive hospital-based wellness programs have potential for promoting healthier lifestyles among physicians. Trial Registration Clinicaltrials.gov NCT01287208. PMID:24950218

  20. Correlation dynamics and enhanced signals for the identification of serial biomolecules and DNA bases.

    PubMed

    Ahmed, Towfiq; Haraldsen, Jason T; Rehr, John J; Di Ventra, Massimiliano; Schuller, Ivan; Balatsky, Alexander V

    2014-03-28

    Nanopore-based sequencing has demonstrated significant potential for the development of fast, accurate, and cost-efficient fingerprinting techniques for next-generation molecular detection and sequencing. We propose a specific multilayered graphene-based nanopore device architecture for the recognition of single biomolecules. Molecular detection and analysis can be accomplished through the detection of transverse currents as the molecule or DNA base translocates through the nanopore. To increase the overall signal-to-noise ratio and accuracy, we implement a new 'multi-point cross-correlation' technique for the identification of DNA bases or other molecules at the single-molecule level. We demonstrate that cross-correlations between the nanopore layers greatly enhance the transverse current signal for each molecule. We implement first-principles transport calculations for DNA bases surveyed across a multilayered graphene nanopore system to illustrate the advantages of the proposed geometry. A time-series analysis of the cross-correlation functions illustrates the potential of this method for enhancing the signal-to-noise ratio. This work constitutes a significant step forward in facilitating fingerprinting of single biomolecules using solid-state technology.
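
    The multi-point cross-correlation idea can be sketched numerically: a translocation pulse shared by several layers survives pairwise cross-correlation, while uncorrelated noise averages away. The trace length, pulse shape, and noise level below are hypothetical illustration values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
t = np.arange(n)
pulse = 2.0 * np.exp(-0.5 * ((t - 1000) / 40.0) ** 2)  # shared translocation pulse

# Three layers record the same pulse buried in independent noise.
layers = [pulse + rng.normal(0.0, 1.0, n) for _ in range(3)]

def xcorr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return np.correlate(a, b, mode="full") / len(a)

# Averaging the pairwise cross-correlations suppresses uncorrelated noise;
# the shared pulse keeps a coherent peak near zero lag.
pairs = [(0, 1), (0, 2), (1, 2)]
avg = np.mean([xcorr(layers[i], layers[j]) for i, j in pairs], axis=0)

lag = int(np.argmax(np.abs(avg))) - (n - 1)
print(lag)  # peak near zero lag marks the common molecular signal
```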

  1. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources throughout the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
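
    As a toy illustration of the coarse-then-fine idea (not the authors' estimator), the sketch below localizes one active source in a random stand-in lead field: a coarse pass scores regions by correlation with the data, and a fine least-squares solve is run only on the retained region. All dimensions and the lead-field matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in problem: 32 "electrodes", 10 coarse regions,
# 8 fine sources per region, one truly active source.
m, regions, per_region = 32, 10, 8
A = rng.normal(size=(m, regions * per_region))  # stand-in lead-field matrix
A /= np.linalg.norm(A, axis=0)

true_idx = 4 * per_region + 3                   # active source sits in region 4
y = 2.0 * A[:, true_idx] + 0.01 * rng.normal(size=m)

# Step 1 (coarse): score each region by its best correlation with the data
# and retain the winner -- a one-pass sparse-approximation stand-in.
corr = np.abs(A.T @ y).reshape(regions, per_region)
best = int(np.argmax(corr.max(axis=1)))

# Step 2 (fine): least squares restricted to the retained region only.
cols = slice(best * per_region, (best + 1) * per_region)
x_fine, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
recovered = best * per_region + int(np.argmax(np.abs(x_fine)))
print(best, recovered)
```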

  2. Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras.

    PubMed

    Malek, Salim; Melgani, Farid; Mekhalfi, Mohamed Lamine; Bazi, Yakoub

    2017-11-16

    This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images (grabbed by the user by means of a chest-mounted camera) and provide as output a list of objects that likely exist in the user's surroundings across the indoor scene. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature-learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments point out that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, compared with reference works, our method (i) yields higher classification accuracies and (ii) runs (at least four times) faster, which enables a potential fully real-time application.

  3. Rapidly Characterizing the Fast Dynamics of RNA Genetic Circuitry with Cell-Free Transcription–Translation (TX-TL) Systems

    PubMed Central

    2014-01-01

    RNA regulators are emerging as powerful tools to engineer synthetic genetic networks or rewire existing ones. A potential strength of RNA networks is that they may be able to propagate signals on time scales that are set by the fast degradation rates of RNAs. However, a current bottleneck to verifying this potential is the slow design-build-test cycle of evaluating these networks in vivo. Here, we adapt an Escherichia coli-based cell-free transcription-translation (TX-TL) system for rapidly prototyping RNA networks. We used this system to measure the response time of an RNA transcription cascade to be approximately five minutes per step of the cascade. We also show that this response time can be adjusted with temperature and regulator threshold tuning. Finally, we use TX-TL to prototype a new RNA network, an RNA single input module, and show that this network temporally stages the expression of two genes in vivo. PMID:24621257
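
    The roughly constant per-stage delay can be reproduced with a minimal linear ODE cascade in which each stage is cleared at the RNA degradation rate; with 1/d = 5 min, each added stage shifts the half-maximal response by about 5 minutes. The rates and units below are illustrative assumptions, not measured TX-TL parameters.

```python
# Minimal two-stage cascade: dx1/dt = k - d*x1 (step input at t = 0),
# dx2/dt = k*x1 - d*x2.  Degradation rate d sets the per-stage timescale.
d = 0.2          # 1/min -> 1/d = 5 min, matching the abstract's response time
k = 1.0          # production rate, arbitrary hypothetical units
dt, T = 0.01, 60.0

x1 = x2 = 0.0
t1 = t2 = None
ss1, ss2 = k / d, k * k / d**2   # steady states of stages 1 and 2
ts = 0.0
for _ in range(int(T / dt)):     # forward-Euler integration
    x1 += dt * (k - d * x1)
    x2 += dt * (k * x1 - d * x2)
    ts += dt
    if t1 is None and x1 >= 0.5 * ss1:
        t1 = ts                  # half-maximal response of stage 1
    if t2 is None and x2 >= 0.5 * ss2:
        t2 = ts                  # half-maximal response of stage 2

print(t1, t2, t2 - t1)  # the extra stage adds a delay of order 1/d = 5 min
```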

  4. Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.

    PubMed

    Schoene, Daniel; Delbaere, Kim; Lord, Stephen R

    2017-08-01

    Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  5. 75 FR 39042 - Solicitation for a Cooperative Agreement: The Norval Morris Project Implementation Phase

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... model designed to provide correctional agencies with a step-by-step approach to promote systemic change..., evidence-based approaches, evaluate their potential to inform correctional policy and practice, create... outside the corrections field to develop interdisciplinary approaches and draw on professional networks...

  6. Technology: nursing the system. Technology and the potential for entrepreneurship.

    PubMed

    Simpson, R L

    1997-10-01

    Many nurses are stepping beyond the boundaries of traditional practice and creating their own business or service centers. New entrepreneurial opportunities include working on computer-based patient records, providing consulting services, developing policies and more. Getting involved--joining informatics groups, taking classes--is the first step.

  7. LabVIEW-based sequential-injection analysis system for the determination of trace metals by square-wave anodic and adsorptive stripping voltammetry on mercury-film electrodes.

    PubMed

    Economou, Anastasios; Voulgaropoulos, Anastasios

    2003-01-01

    The development of a dedicated automated sequential-injection analysis apparatus for anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV) is reported. The instrument comprised a peristaltic pump, a multiposition selector valve and a home-made potentiostat, and used a mercury-film electrode as the working electrode in a thin-layer electrochemical detector. Programming of the experimental sequence was performed in LabVIEW 5.1. The sequence of operations included formation of the mercury film, electrolytic or adsorptive accumulation of the analyte on the electrode surface, recording of the voltammetric current-potential response, and cleaning of the electrode. The stripping step was carried out by applying a square-wave (SW) potential-time excitation signal to the working electrode. The instrument allowed unattended operation since multiple-step sequences could be readily implemented through the purpose-built software. The utility of the analyser was tested for the determination of copper(II), cadmium(II), lead(II) and zinc(II) by SWASV and of nickel(II), cobalt(II) and uranium(VI) by SWAdSV.
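
    The square-wave excitation applied during the stripping step is a staircase ramp with a superimposed square wave. A minimal waveform generator, using hypothetical amplitude, step, and scan-window values rather than the paper's settings, can be sketched as:

```python
import numpy as np

# Hypothetical SWV parameters (illustration only)
e_start, e_end = -1.2, 0.0   # V, anodic stripping scan window
step = 0.004                 # V, staircase increment per square-wave period
amp = 0.025                  # V, square-wave amplitude
pts = 10                     # samples per half-cycle

# Each period: staircase base level +amp for one half-cycle, -amp for the other.
# In SWV the current is sampled at the end of each half-cycle and the forward/
# reverse difference gives the voltammogram.
n_steps = int(round((e_end - e_start) / step))
wave = []
for i in range(n_steps):
    base = e_start + i * step
    wave += [base + amp] * pts + [base - amp] * pts
wave = np.array(wave)
print(wave.min(), wave.max(), len(wave))
```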

  8. LabVIEW-based sequential-injection analysis system for the determination of trace metals by square-wave anodic and adsorptive stripping voltammetry on mercury-film electrodes

    PubMed Central

    Economou, Anastasios; Voulgaropoulos, Anastasios

    2003-01-01

    The development of a dedicated automated sequential-injection analysis apparatus for anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV) is reported. The instrument comprised a peristaltic pump, a multiposition selector valve and a home-made potentiostat, and used a mercury-film electrode as the working electrode in a thin-layer electrochemical detector. Programming of the experimental sequence was performed in LabVIEW 5.1. The sequence of operations included formation of the mercury film, electrolytic or adsorptive accumulation of the analyte on the electrode surface, recording of the voltammetric current-potential response, and cleaning of the electrode. The stripping step was carried out by applying a square-wave (SW) potential-time excitation signal to the working electrode. The instrument allowed unattended operation since multiple-step sequences could be readily implemented through the purpose-built software. The utility of the analyser was tested for the determination of copper(II), cadmium(II), lead(II) and zinc(II) by SWASV and of nickel(II), cobalt(II) and uranium(VI) by SWAdSV. PMID:18924623

  9. Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes

    NASA Astrophysics Data System (ADS)

    Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.

    1980-08-01

    A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.

  10. Melatonin: a universal time messenger.

    PubMed

    Erren, Thomas C; Reiter, Russel J

    2015-01-01

    Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachiasmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messaging should follow a 5-step cascade. While abundant evidence suggests how steps 1 through 4 work, step 5, "how is central time information transmitted throughout the body?", awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: Ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGCs] and rods and cones; Step 3: Via the retinohypothalamic tract, external time information reaches the light-dependent master clock in the brain, viz the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network; Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow temporal organization, which is orchestrated by the SCN. Key reasons why we expect melatonin to have such a role include: first, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemical conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.

  11. Compartmentalized partnered replication for the directed evolution of genetic parts and circuits.

    PubMed

    Abil, Zhanar; Ellefson, Jared W; Gollihar, Jimmy D; Watkins, Ella; Ellington, Andrew D

    2017-12-01

    Compartmentalized partnered replication (CPR) is an emulsion-based directed evolution method built on a robust and modular phenotype-genotype linkage. In contrast to other in vivo directed evolution approaches, CPR largely mitigates host fitness effects due to a relatively short expression time of the gene of interest. CPR is based on gene circuits in which the selection of a 'partner' function from a library leads to the production of a thermostable polymerase. After library preparation, bacteria produce partner proteins that can potentially lead to enhancement of transcription, translation, gene regulation, and other aspects of cellular metabolism that reinforce thermostable polymerase production. Individual cells are then trapped in water-in-oil emulsion droplets in the presence of primers and dNTPs, followed by the recovery of the partner genes via emulsion PCR. In this step, droplets with cells expressing partner proteins that promote polymerase production will produce higher copy numbers of the improved partner gene. The resulting partner genes can subsequently be recloned for the next round of selection. Here, we present a step-by-step guideline for the procedure by providing examples of (i) selection of T7 RNA polymerases that recognize orthogonal promoters and (ii) selection of tRNA for enhanced amber codon suppression. A single round of CPR should take ∼3-5 d, whereas a whole directed evolution can be performed in 3-10 rounds, depending on selection efficiency.

  12. A variational numerical method based on finite elements for the nonlinear solution characteristics of the periodically forced Chen system

    NASA Astrophysics Data System (ADS)

    Khan, Sabeel M.; Sunny, D. A.; Aqeel, M.

    2017-09-01

    Nonlinear dynamical systems and their solutions are very sensitive to initial conditions and therefore need to be approximated carefully. In this article, we present and analyze the nonlinear solution characteristics of the periodically forced Chen system with the application of a variational method based on the concept of finite time-elements. Our approach is based on the discretization of physical time space into finite elements, where each time-element is mapped to a natural time space. The solution of the system is then determined in natural time space using a set of suitable basis functions. The numerical algorithm is presented and implemented to compute and analyze nonlinear behavior at different time-step sizes. The obtained results show excellent agreement with the classical RK-4 and RK-5 methods. The accuracy and convergence of the method are shown by comparing numerically computed results with the exact solution for a test problem. The presented method has shown great potential in dealing with the solutions of nonlinear dynamical systems and thus can be utilized in delineating different features and characteristics of their solutions.
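
    For reference, the classical RK-4 scheme the authors compare against can be applied directly to a periodically forced Chen system. The forcing term and its placement below are assumptions for illustration (the abstract does not specify them); the parameters a = 35, b = 3, c = 28 are the standard Chen values.

```python
import numpy as np

# Classical Chen parameters; the cosine forcing in the y-equation is a
# hypothetical stand-in for the paper's (unspecified) periodic forcing.
a, b, c = 35.0, 3.0, 28.0
amp, omega = 5.0, 2.0

def f(t, s):
    x, y, z = s
    return np.array([
        a * (y - x),
        (c - a) * x - x * z + c * y + amp * np.cos(omega * t),
        x * y - b * z,
    ])

def rk4_step(t, s, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(t, s)
    k2 = f(t + h / 2, s + h / 2 * k1)
    k3 = f(t + h / 2, s + h / 2 * k2)
    k4 = f(t + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, s, t = 0.002, np.array([1.0, 1.0, 1.0]), 0.0
traj = [s]
for _ in range(5000):   # integrate to t = 10
    s = rk4_step(t, s, h)
    t += h
    traj.append(s)
traj = np.array(traj)
print(np.abs(traj).max())  # trajectory remains on a bounded attractor
```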

  13. Synthesis, characterization of novel chitosan based water dispersible polyurethanes and their potential deployment as antibacterial textile finish.

    PubMed

    Arshad, Noureen; Zia, Khalid Mahmood; Jabeen, Farukh; Anjum, Muhammad Naveed; Akram, Nadia; Zuber, Mohammad

    2018-05-01

    Our current research work comprised the synthesis of a series of novel chitosan-based water-dispersible polyurethanes. The synthesis was carried out in three steps. In the first step, the NCO-end-capped PU prepolymer was formed through the reaction between polyethylene glycol (PEG) (Mn = 600), dimethylolpropionic acid (DMPA) and isophorone diisocyanate (IPDI). In the second step, neutralization was carried out using triethylamine (TEA), which resulted in the formation of a neutralized NCO-terminated PU prepolymer. In the last step, chain extension was performed by the addition of chitosan, followed by the formation of a dispersion by adding a calculated amount of water. The proposed structure of the CS-WDPUs was confirmed using the FTIR technique. The antimicrobial activities of plain-weave poly-cotton printed and dyed textile swatches after application of the CS-WDPUs were also evaluated. The results showed that incorporation of chitosan into the PU backbone markedly enhanced the antibacterial activity of the WDPUs. These synthesized CS-WDPUs are eco-friendly antimicrobial finishes (using natural bioactive agents such as chitosan) with potential applications on polyester/cotton textiles. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Modeling the stepping mechanism in negative lightning leaders

    NASA Astrophysics Data System (ADS)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepped manner via the mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry between the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field about twice as weak to appear and propagate as negative ones. Extinction of a conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  15. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    A well-known connection between the first-passage probability of a random walk and the distribution of electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length. We measure the first-passage probability to touch an absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
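
    The connection to the Laplace equation can be demonstrated directly: for a walker started off-centre inside an absorbing circle, the exit-angle distribution should follow the Poisson kernel (the harmonic measure of the boundary). The radius, offset, and step length below are illustrative; for a start at r0 = R/2, the harmonic-measure probability of exiting through the near half-circle is ≈ 0.795.

```python
import math
import random

random.seed(1)

R, r0, delta = 1.0, 0.5, 0.05   # absorbing radius, start offset, fixed step length
trials, near_side = 2000, 0

for _ in range(trials):
    x, y = r0, 0.0
    while x * x + y * y < R * R:
        phi = random.uniform(0.0, 2.0 * math.pi)   # fixed-length step, random direction
        x += delta * math.cos(phi)
        y += delta * math.sin(phi)
    # Exit angle; the finite step overshoots the circle slightly, which is
    # the source of the deviation from the exact result discussed above.
    if abs(math.atan2(y, x)) < math.pi / 2:
        near_side += 1

frac = near_side / trials
print(frac)  # harmonic-measure prediction for the near half is ~0.795
```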

  16. A global reaction route mapping-based kinetic Monte Carlo algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Izaac; Page, Alister J., E-mail: sirle@chem.nagoya-u.ac.jp, E-mail: alister.page@newcastle.edu.au; Irle, Stephan, E-mail: sirle@chem.nagoya-u.ac.jp, E-mail: alister.page@newcastle.edu.au

    2016-07-14

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of first-order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the first-order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
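
    The accept/reject selection and first-order time propagation described above amount to a standard rejection-free KMC step. The sketch below uses hypothetical barrier heights and a generic 10^13 Hz attempt prefactor; it illustrates only the rate-selection stage, not the GRRM surface search itself.

```python
import math
import random

random.seed(0)

def harmonic_tst_rate(barrier_ev, prefactor_hz=1e13, temp_k=300.0):
    """Harmonic TST rate: k = nu * exp(-Ea / (kB * T))."""
    kb_ev = 8.617e-5  # Boltzmann constant in eV/K
    return prefactor_hz * math.exp(-barrier_ev / (kb_ev * temp_k))

def kmc_step(rates):
    """Rejection-free KMC: pick a pathway with probability k_i / k_tot,
    then draw an exponential waiting time (first-order kinetics)."""
    ktot = sum(rates)
    r = random.random() * ktot
    acc, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(random.random()) / ktot
    return chosen, dt

# Hypothetical barriers (eV) for transition states found by a GRRM-style search.
rates = [harmonic_tst_rate(b) for b in (0.3, 0.4, 0.5)]
counts = [0, 0, 0]
t = 0.0
for _ in range(10000):
    i, dt = kmc_step(rates)
    counts[i] += 1
    t += dt
print(counts, t)  # lower barriers are chosen exponentially more often
```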

  17. Mechanistic Kinetic Modeling of Thiol-Michael Addition Photopolymerizations via Photocaged "Superbase" Generators: An Analytical Approach.

    PubMed

    Claudino, Mauro; Zhang, Xinpeng; Alim, Marvin D; Podgórski, Maciej; Bowman, Christopher N

    2016-11-08

    A kinetic mechanism and the accompanying mathematical framework are presented for base-mediated thiol-Michael photopolymerization kinetics involving a photobase generator. Here, model kinetic predictions demonstrate excellent agreement with a representative experimental system composed of 2-(2-nitrophenyl)propyloxycarbonyl-1,1,3,3-tetramethylguanidine (NPPOC-TMG) as a photobase generator that is used to initiate thiol-vinyl sulfone Michael addition reactions and polymerizations. Modeling equations derived from a basic mechanistic scheme indicate overall polymerization rates that follow a pseudo-first-order kinetic process in the base and coreactant concentrations, controlled by the ratio of the propagation to chain-transfer kinetic parameters (kp/kCT), which is dictated by the rate-limiting step and controls the time necessary to reach gelation. Gelation occurs earlier as the kp/kCT ratio approaches a critical value, beyond which gel times become nearly independent of kp/kCT. The theoretical approach allowed us to determine the effect of induction time on the reaction kinetics, arising from initial acid-base neutralization of the photogenerated base in the presence of protic contaminants. Such inhibition kinetics may be challenging for reaction systems that require high curing rates but are relevant for chemical systems that need to remain kinetically dormant until activated, although at the ultimate cost of lower polymerization rates. The pure step-growth character of this living polymerization and the exhibited kinetics provide unique potential for extended dark-cure reactions and uniform material properties. The general kinetic model is applicable to photobase initiators where photolysis follows a unimolecular cleavage process releasing a strong base catalyst without cogeneration of intermediate radical species.
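
    The induction-time effect of a protic contaminant can be captured with a minimal model: photogenerated base is consumed by the acid first, and pseudo-first-order monomer conversion only begins once the contaminant is exhausted. All rate constants and concentrations below are hypothetical illustration values, not fitted parameters from the paper.

```python
# Minimal induction-time sketch: base is photogenerated at a constant rate,
# the acid contaminant neutralizes it first, and monomer conversion is
# pseudo-first-order in the surviving base concentration.
r_gen = 1e-4     # mol/(L*s), base photogeneration rate (hypothetical)
acid0 = 0.002    # mol/L, protic contaminant (hypothetical)
k_app = 0.5      # L/(mol*s), apparent pseudo-first-order constant (hypothetical)

dt, t_end = 0.01, 600.0
m = 1.0          # monomer, normalized to its initial concentration
gen, t, t_ind = 0.0, 0.0, None
while t < t_end:
    gen += r_gen * dt                 # total base photogenerated so far
    base = max(0.0, gen - acid0)      # contaminant neutralizes base first
    if t_ind is None and base > 0.0:
        t_ind = t                     # induction period ends here (~20 s)
    m += dt * (-k_app * base * m)     # pseudo-first-order in base
    t += dt
print(t_ind, 1.0 - m)  # induction time, then final conversion
```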

  18. Regionally Implicit Discontinuous Galerkin Methods for Solving the Relativistic Vlasov-Maxwell System Submitted to Iowa State University

    NASA Astrophysics Data System (ADS)

    Guthrey, Pierson Tyler

    The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons could accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities, thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin method (DG). 
One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. The maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which previously had been shown to be equivalent (for linear constant-coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG (STDG) method. The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information, but also neighboring information. With this modification, we show that the stability is greatly enhanced; we show that the polynomial-degree dependence of the maximum time-step can be removed, and we obtain vastly improved time-steps in multiple spatial dimensions. Upon the development of the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations. For each we validate the high-order method on several test cases. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.

  19. Organizational-Level Strategies With or Without an Activity Tracker to Reduce Office Workers’ Sitting Time: Rationale and Study Design of a Pilot Cluster-Randomized Trial

    PubMed Central

    Fjeldsoe, Brianna S; Young, Duncan C; Winkler, Elisabeth A H; Dunstan, David W; Straker, Leon M; Brakenridge, Christian J; Healy, Genevieve N

    2016-01-01

    Background The office workplace is a key setting in which to address excessive sitting time and inadequate physical activity. One major influence on workplace sitting is the organizational environment. However, the impact of organizational-level strategies on individual level activity change is unknown. Further, the emergence of sophisticated, consumer-targeted wearable activity trackers that facilitate real-time self-monitoring of activity, may be a useful adjunct to support organizational-level strategies, but to date have received little evaluation in this workplace setting. Objective The aim of this study is to evaluate the feasibility, acceptability, and effectiveness of organizational-level strategies with or without an activity tracker on sitting, standing, and stepping in office workers in the short (3 months, primary aim) and long-term (12 months, secondary aim). Methods This study is a pilot, cluster-randomized trial (with work teams as the unit of clustering) of two interventions in office workers: organizational-level support strategies (eg, visible management support, emails) or organizational-level strategies plus the use of a waist-worn activity tracker (the LUMOback) that enables self-monitoring of sitting, standing, and stepping time and enables users to set sitting and posture alerts. The key intervention message is to ‘Stand Up, Sit Less, and Move More.’ Intervention elements will be implemented from within the organization by the Head of Workplace Wellbeing. Participants will be recruited via email and enrolled face-to-face. Assessments will occur at baseline, 3, and 12 months. Time spent sitting, sitting in prolonged (≥30 minute) bouts, standing, and stepping during work hours and across the day will be measured with activPAL3 activity monitors (7 days, 24 hours/day protocol), with total sitting time and sitting time during work hours the primary outcomes. 
Web-based questionnaires, LUMOback recorded data, telephone interviews, and focus groups will measure the feasibility and acceptability of both interventions and potential predictors of behavior change. Results Baseline and follow-up data collection has finished. Results are expected in 2016. Conclusions This pilot, cluster-randomized trial will evaluate the feasibility, acceptability, and effectiveness of two interventions targeting reductions in sitting and increases in standing and stepping in office workers. Few studies have evaluated these intervention strategies and this study has the potential to contribute both short and long-term findings. PMID:27226457

  20. Improving efficiency and safety in external beam radiation therapy treatment delivery using a Kaizen approach.

    PubMed

    Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis

Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists, and thus the opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval-constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor among treating therapists, as well as peer review, more explicit. The average duration of treatment slots was reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in the duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher-effort opportunities were identified to guide continual downstream quality improvement. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  1. Nanocrystal Additives for Advanced Lubricants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Gregory; Lohuis, James; Demas, Nicholaos

The innovations in engine and drivetrain lubricants are mainly driven by ever more stringent regulations, which demand better fuel economy, lower carbon emissions, and less pollution. Many technologies are being developed for the next generations of vehicles to achieve these goals. Even if these technologies can be adopted, there is still a significant need for a “drop-in” lubricant solution for the existing ground vehicle fleet to reap immediate fuel savings while at the same time reducing pollution. Dramatic improvements were observed when Pixelligent’s proprietary, mono-dispersed, and highly scalable metal oxide nanocrystals were added to the base oils. The dispersions in base and formulated oils are clear, with no change in appearance or viscosity. However, the benefits provided by the nanocrystals were limited to the base oils due to the interference of existing additives in the fully formulated oils. Developing a prototype formulation including the nanocrystals that can demonstrate the same improvements observed in the base oils is a critical step toward the commercialization of these advanced nano-additives. A ‘bottom-up’ approach was adopted to develop a prototype lubricant formulation: to avoid complicated interactions with the multitude of additives, only a minimal number of the most essential additives are added, step by step, into the formulation, to ensure that they are compatible with the nanocrystals and do not compromise their tribological performance. Tribological performance is characterized to identify the best formulations that can demonstrate the commercial potential of the nano-additives.

  2. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

We present a novel method for characterizing, in near real time, the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of its principle of operation, this method has the potential to circumvent biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
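The conversion at the heart of such an imaging approach, from a measured settling velocity to an aerodynamic diameter via Stokes' law, can be sketched as follows (a minimal illustration with standard constants for air; not the authors' implementation, and the slip correction is neglected):

```python
import math

# Physical constants (SI units); values are standard assumptions, not from the paper
G = 9.81            # gravitational acceleration, m/s^2
MU_AIR = 1.81e-5    # dynamic viscosity of air at ~20 C, Pa*s
RHO_0 = 1000.0      # unit density defining the aerodynamic diameter, kg/m^3

def aerodynamic_diameter(v_settling):
    """Invert Stokes' settling law v = rho_0 * d_a**2 * g / (18 * mu)
    for the aerodynamic diameter d_a (valid at small Reynolds number,
    slip correction neglected)."""
    return math.sqrt(18.0 * MU_AIR * v_settling / (RHO_0 * G))

# A particle settling at 0.3 mm/s corresponds to d_a of roughly 3 um.
d = aerodynamic_diameter(0.3e-3)
```

In a tracking pipeline, `v_settling` would come from the per-particle displacement between frames divided by the frame interval.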

  3. A Cluster Randomized Controlled Trial to Reduce Office Workers' Sitting Time: Effect on Activity Outcomes.

    PubMed

    Healy, Genevieve N; Eakin, Elizabeth G; Owen, Neville; Lamontagne, Anthony D; Moodie, Marj; Winkler, Elisabeth A H; Fjeldsoe, Brianna S; Wiesner, Glen; Willenberg, Lisa; Dunstan, David W

    2016-09-01

This study aimed to evaluate the initial and long-term effectiveness of a workplace intervention targeting reduced sitting, compared with usual practice, on activity outcomes. Office worksites (≥1 km apart) from a single organization in Victoria, Australia, were cluster randomized to intervention (n = 7) or control (n = 7). Participants were 231 desk-based office workers (5-39 participants per worksite) working at least 0.6 full-time equivalent. The workplace-delivered intervention addressed organizational, physical environment, and individual behavioral changes to reduce sitting time. Assessments occurred at baseline, 3 months, and 12 months, with the primary outcome being participants' objectively measured (activPAL3 device) workplace sitting time (minutes per 8-h workday). Secondary activity outcomes were workplace time spent standing, stepping (light, moderate to vigorous, and total), and in prolonged (≥30 min) sitting bouts (hours per 8-h workday); usual duration of workplace sitting bouts; and overall sitting, standing, and stepping time (minutes per 16-h day). Analysis was by linear mixed models, accounting for repeated measures and clustering and adjusting for baseline values and potential confounders. At baseline, on average, participants (68% women; mean ± SD age = 45.6 ± 9.4 yr) sat, stood, and stepped for 78.8% ± 9.5%, 14.3% ± 8.2%, and 6.9% ± 2.9% of work hours, respectively. Workplace sitting time was significantly reduced in the intervention group compared with the controls at 3 months (-99.1 [95% confidence interval = -116.3 to -81.8] min per 8-h workday) and 12 months (-45.4 [-64.6 to -26.2] min per 8-h workday). Significant intervention effects (all favoring intervention) were observed for standing, prolonged sitting, and usual sitting bout duration at work, as well as overall sitting and standing time, with no significant or meaningful effects observed for stepping.
This workplace-delivered multicomponent intervention was successful at reducing workplace and overall daily sitting time in both the short term and the long term.

  4. Sewer infiltration/inflow: long-term monitoring based on diurnal variation of pollutant mass flux.

    PubMed

    Bares, V; Stránský, D; Sýkora, P

    2009-01-01

The paper deals with a method for quantification of infiltrating groundwater based on the variation of the diurnal pollutant load and continuous water quality and quantity monitoring. Although the method gives us the potential to separate particular components of the wastewater hydrograph, several aspects of the method should be discussed. Therefore, the paper investigates the cost-effectiveness, the relevance of pollutant load from surface waters (groundwater), and the influence of the measurement time step. These aspects were studied in an experimental catchment of the Prague sewer system, Czech Republic, over a three-month period. The results indicate a high contribution of parasitic waters to the night minimum discharge. Taking into account the uncertainty of the results and the time-consuming maintenance of the sensor, the principal advantages of the method are evaluated. The study demonstrates the promising potential of the discussed measuring concept for quantification of groundwater infiltrating into the sewer system. It is shown that the conventional approach is sufficient and cost-effective even in those catchments where a significant contribution of foul sewage to night minima would have been assumed.

  5. A sub-1-volt analog metal oxide memristive-based synaptic device with large conductance change for energy-efficient spike-based computing systems

    NASA Astrophysics Data System (ADS)

    Hsieh, Cheng-Chih; Roy, Anupam; Chang, Yao-Feng; Shahrjerdi, Davood; Banerjee, Sanjay K.

    2016-11-01

Nanoscale metal oxide memristors have potential in the development of brain-inspired computing systems that are scalable and efficient. In such systems, memristors represent the native electronic analogues of the biological synapses. In this work, we show cerium oxide based bilayer memristors that are forming-free, low-voltage (~|0.8 V|), energy-efficient (full on/off switching at ~8 pJ with 20 ns pulses, intermediate-state switching at the femtojoule scale), and reliable. Furthermore, pulse measurements reveal the analog nature of the memristive device; that is, it can be programmed directly to intermediate resistance states. Leveraging this finding, we demonstrate spike-timing-dependent plasticity, a spike-based Hebbian learning rule. In those experiments, the memristor exhibits a marked change in the normalized synaptic strength (>30 times) when the pre- and post-synaptic neural spikes overlap. This demonstration is an important step towards the physical construction of high-density, high-connectivity neural networks.
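The spike-timing-dependent plasticity rule demonstrated here is commonly modeled with an exponential learning window; a generic sketch of that textbook form (the amplitudes and time constants below are illustrative assumptions, not the device parameters from this work):

```python
import math

# Assumed parameters for a generic exponential STDP window (illustrative only)
A_PLUS, A_MINUS = 0.1, 0.12        # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants, ms

def stdp_weight_change(dt_ms):
    """Synaptic weight update for one pre/post spike pair.
    dt_ms = t_post - t_pre: a positive value (pre before post) potentiates,
    a negative value depresses; the magnitude decays exponentially with |dt|."""
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
```

A close pre/post pairing (`dt_ms = 5`) produces a large positive update, while widely separated spikes produce a negligible one, mirroring the marked conductance change reported when spikes overlap.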

  6. Photoinduced diffusion molecular transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien

    2016-08-14

We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics of the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The theory of photoinduced molecular transport thus developed appears applicable not only to conventional dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that for a dichotomous process (a step function) is represented in terms of relaxation times (they can differ on the time intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times on laser-on intervals) and (ii) excited-state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to a periodically photoinduced dipole moment (caused by the repetitive excited electron transitions to a non-resonant level of the nanocylinder surface impurity).

  7. Real-time fluorescence ligase chain reaction for sensitive detection of single nucleotide polymorphism based on fluorescence resonance energy transfer.

    PubMed

    Sun, Yueying; Lu, Xiaohui; Su, Fengxia; Wang, Limei; Liu, Chenghui; Duan, Xinrui; Li, Zhengping

    2015-12-15

Most practical methods for detection of single nucleotide polymorphisms (SNPs) need at least two steps: amplification (usually by PCR) and detection of the SNP using the amplification products. Ligase chain reaction (LCR) can integrate amplification and allele discrimination in one step. However, the detection of LCR products still remains a great challenge for highly sensitive and quantitative SNP detection. Herein, a simple but robust strategy for real-time fluorescence LCR has been developed for highly sensitive and quantitative SNP detection. A pair of LCR probes are first labeled with a fluorophore and a quencher, respectively. When the pair of LCR probes are ligated in LCR, the fluorophore is brought close to the quencher, and thus the fluorescence is specifically quenched by fluorescence resonance energy transfer (FRET). The decrease in fluorescence intensity resulting from FRET can be monitored in real time during the LCR process. With the proposed real-time fluorescence LCR assay, 10 aM DNA targets or 100 pg genomic DNA can be accurately determined, and as little as 0.1% mutant DNA can be detected in the presence of a large excess of wild-type DNA, indicating high sensitivity and specificity. Real-time measurement eliminates the separate detection step after LCR and gives a wide dynamic range for detection of DNA targets (from 10 aM to 1 pM). As LCR has been widely used for detection of SNPs, DNA methylation, mRNA and microRNA, the real-time fluorescence LCR assay shows great potential for various genetic analyses. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Detailed computational procedure for design of cascade blades with prescribed velocity distributions in compressible potential flows

    NASA Technical Reports Server (NTRS)

    Costello, George R; Cummings, Robert L; Sinnette, John T , Jr

    1952-01-01

    A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.

  9. Data assimilation of citizen collected information for real-time flood hazard mapping

    NASA Astrophysics Data System (ADS)

    Sayama, T.; Takara, K. T.

    2017-12-01

Many studies of data assimilation in hydrology have focused on the integration of satellite remote sensing and in-situ monitoring data into hydrologic or land surface models. For flood prediction, recent studies have also demonstrated the assimilation of remotely sensed inundation information with flood inundation models. In actual flood disaster situations, citizen-collected information, including local reports by residents and rescue teams and, more recently, tweets via social media, also contains valuable information. The main interest of this study is how to effectively use such citizen-collected information for real-time flood hazard mapping. Here we propose a new data assimilation technique based on pre-conducted ensemble inundation simulations, which updates inundation depth distributions sequentially as local data become available. The proposed method consists of two steps. The first step is a weighted average of preliminary ensemble simulations, whose weights are updated by a Bayesian approach. The second step is an optimal interpolation, where the covariance matrix is calculated from the ensemble simulations. The proposed method was applied to case studies including an actual flood event. Two situations are considered: a more idealized one, which assumes that continuous flood inundation depth information is available at multiple locations, and a more realistic one for such a severe flood disaster, which assumes that only uncertain and non-continuous information is available to be assimilated. The results show that, in the first, idealized situation, the large-scale inundation during the flooding was estimated reasonably, with RMSE < 0.4 m on average. For the second, more realistic situation, the error becomes larger (RMSE 0.5 m) and the impact of the optimal interpolation becomes comparatively less effective. Nevertheless, these applications demonstrate the high potential of the proposed data assimilation method for assimilating citizen-collected information for real-time flood hazard mapping in the future.
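The first step of such an assimilation scheme, Bayesian updating of ensemble weights as point observations of inundation depth arrive, can be sketched as follows (a minimal illustration assuming a Gaussian observation likelihood; the depths, prior weights, and error standard deviation are invented for the example):

```python
import math

def update_weights(weights, predictions, obs, obs_sigma=0.3):
    """Bayesian update: multiply each ensemble member's prior weight by the
    Gaussian likelihood of the observed inundation depth (metres) given that
    member's prediction at the report site, then renormalise."""
    likelihoods = [
        math.exp(-0.5 * ((p - obs) / obs_sigma) ** 2) for p in predictions
    ]
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [w / total for w in posterior]

# Three pre-computed ensemble simulations predicting depth at a report location
weights = [1 / 3, 1 / 3, 1 / 3]     # uniform prior over ensemble members
member_depths = [0.2, 0.9, 1.6]     # metres, one prediction per member
weights = update_weights(weights, member_depths, obs=1.0)
# The weighted-average depth at any other grid cell then uses these posterior
# weights over the corresponding pre-computed ensemble fields.
```

Repeating the update for each new citizen report concentrates weight on the ensemble members most consistent with all observations so far.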

  10. Mixed-mode ion exchange-based integrated proteomics technology for fast and deep plasma proteome profiling.

    PubMed

    Xue, Lu; Lin, Lin; Zhou, Wenbin; Chen, Wendong; Tang, Jun; Sun, Xiujie; Huang, Peiwu; Tian, Ruijun

    2018-06-09

Plasma proteome profiling by LC-MS based proteomics has drawn great attention recently for biomarker discovery from blood liquid biopsy. Because standard multi-step sample preparation can cause plasma protein degradation and analytical variation, integrated proteomics sample preparation technologies have become a promising solution towards this end. Here, we developed a fully integrated proteomics sample preparation technology for both fast and deep plasma proteome profiling at its native pH. All the sample preparation steps, including protein digestion and two-dimensional fractionation by both mixed-mode ion exchange and high-pH reversed-phase mechanisms, were integrated into one spintip device for the first time. The mixed-mode ion-exchange bead design enables sample loading at neutral pH and protein digestion within 30 min, so that potential sample loss and protein degradation caused by pH changes are avoided. 1 μL of plasma sample depleted of highly abundant proteins was processed by the developed technology into 12 equally distributed fractions and analyzed with 12 h of LC-MS gradient time, resulting in the identification of 862 proteins. The combination of the Mixed-mode-SISPROT and a data-independent MS method achieved fast plasma proteome profiling in 2 h with high identification overlap and quantification precision in a proof-of-concept study of plasma samples from 5 healthy donors. We expect the Mixed-mode-SISPROT to become a generally applicable sample preparation technology for clinically oriented plasma proteome profiling. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Long-term Outcomes After Stepping Down Asthma Controller Medications: A Claims-Based, Time-to-Event Analysis.

    PubMed

    Rank, Matthew A; Johnson, Ryan; Branda, Megan; Herrin, Jeph; van Houten, Holly; Gionfriddo, Michael R; Shah, Nilay D

    2015-09-01

    Long-term outcomes after stepping down asthma medications are not well described. This study was a retrospective time-to-event analysis of individuals diagnosed with asthma who stepped down their asthma controller medications using a US claims database spanning 2000 to 2012. Four-month intervals were established and a step-down event was defined by a ≥ 50% decrease in days-supplied of controller medications from one interval to the next; this definition is inclusive of step-down that occurred without health-care provider guidance or as a consequence of a medication adherence lapse. Asthma stability in the period prior to step-down was defined by not having an asthma exacerbation (inpatient visit, ED visit, or dispensing of a systemic corticosteroid linked to an asthma visit) and having fewer than two rescue inhaler claims in a 4-month period. The primary outcome in the period following step-down was time-to-first asthma exacerbation. Thirty-two percent of the 26,292 included individuals had an asthma exacerbation in the 24-month period following step-down of asthma controller medication, though only 7% had an ED visit or hospitalization for asthma. The length of asthma stability prior to stepping down asthma medication was strongly associated with the risk of an asthma exacerbation in the subsequent 24-month period: < 4 months' stability, 44%; 4 to 7 months, 34%; 8 to 11 months, 30%; and ≥ 12 months, 21% (P < .001). In a large, claims-based, real-world study setting, 32% of individuals have an asthma exacerbation in the 2 years following a step-down event.
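Claims-based time-to-event analyses of this kind typically rest on survival-curve estimation; a minimal Kaplan-Meier sketch on invented follow-up data (illustrative only, not the study's records or its adjusted models):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. times[i] is follow-up time for individual i;
    events[i] is True for an observed event (exacerbation), False if censored.
    Returns (time, survival probability) pairs at each event time."""
    survival, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        n = sum(1 for ti in times if ti >= t)                          # still at risk
        if d:
            survival *= 1.0 - d / n
            curve.append((t, survival))
    return curve

# Invented follow-up (months after step-down); False = censored observation
times = [3, 5, 5, 8, 12, 12, 18, 24]
events = [True, True, False, True, False, True, False, False]
curve = kaplan_meier(times, events)
```

Each factor `1 - d/n` is the conditional probability of remaining exacerbation-free through one event time, which is how the 24-month risk stratified by prior stability length would be read off the fitted curves.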

  12. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
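The tree-based neighbor query underlying such an O(N log N) control-law evaluation can be sketched with a plain 2-D k-d tree (a generic illustration, not the paper's code):

```python
import math
import random

def build(points, depth=0):
    # Recursively build a 2-D k-d tree, splitting on x and y alternately.
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    # Branch-and-bound search for the closest point other than `query` itself;
    # a subtree is visited only if its half-plane could hold a closer point.
    if node is None:
        return best
    point, axis, left, right = node
    if point is not query:
        d = math.dist(point, query)
        if best is None or d < best[1]:
            best = (point, d)
    diff = query[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, query, best)
    if best is None or abs(diff) < best[1]:
        best = nearest(far, query, best)
    return best

# Each robot's control input: distance and bearing to its closest neighbour.
random.seed(1)
robots = [(random.random(), random.random()) for _ in range(500)]
tree = build(robots)
neighbour, dist = nearest(tree, robots[0])
bearing = math.atan2(neighbour[1] - robots[0][1], neighbour[0] - robots[0][0])
```

One O(N log N) build per time step plus an O(log N) query per robot replaces the O(N) brute-force scan that makes the naive simulation O(N^2).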

  13. Feasibility of a real-time self-monitoring device for sitting less and moving more: a randomised controlled trial

    PubMed Central

    Martin, Anne; Adams, Jacob M; Bunn, Christopher; Gill, Jason M R; Gray, Cindy M; Hunt, Kate; Maxwell, Douglas J; van der Ploeg, Hidde P; Wyke, Sally

    2017-01-01

Objectives Time spent inactive and sedentary are both associated with poor health. Self-monitoring of walking, using pedometers for real-time feedback, is effective at increasing physical activity. This study evaluated the feasibility of a new pocket-worn sedentary time and physical activity real-time self-monitoring device (SitFIT). Methods Forty sedentary men were equally randomised into two intervention groups. For 4 weeks, one group received a SitFIT providing feedback on steps and time spent sedentary (lying/sitting); the other group received a SitFIT providing feedback on steps and time spent upright (standing/stepping). Change in sedentary time, standing time, stepping time and step count was assessed using activPAL monitors at baseline, 4-week (T1) and 12-week (T2) follow-up. Semistructured interviews were conducted after 4 and 12 weeks. Results The SitFIT was reported as acceptable and usable and was seen as a motivating tool to reduce sedentary time by both groups. On average, participants reduced their sedentary time by 7.8 minutes/day (95% CI −55.4 to 39.7) (T1) and by 8.2 minutes/day (95% CI −60.1 to 44.3) (T2). They increased standing time by 23.2 minutes/day (95% CI 4.0 to 42.5) (T1) and 16.2 minutes/day (95% CI −13.9 to 46.2) (T2). Stepping time was increased by 8.5 minutes/day (95% CI 0.9 to 16.0) (T1) and 9.0 minutes/day (95% CI 0.5 to 17.5) (T2). There were no between-group differences at either follow-up time point. Conclusion The SitFIT was perceived as a useful tool for self-monitoring of sedentary time. It has potential as a real-time self-monitoring device to reduce sedentary time and increase upright time. PMID:29081985

  14. A rapid and ultrasensitive SERRS assay for histidine and tyrosine based on azo coupling.

    PubMed

    Sui, Huimin; Wang, Yue; Yu, Zhi; Cong, Qian; Han, Xiao Xia; Zhao, Bing

    2016-10-01

A simple and highly sensitive surface-enhanced resonance Raman scattering (SERRS)-based approach coupled with an azo coupling reaction has been put forward for quantitative analysis of histidine and tyrosine. The SERRS-based assay is simple and rapid: the azo reaction products are mixed with silver nanoparticles (AgNPs) for measurements within 2 min. The limits of detection (LODs) of the method are as low as 4.33 × 10⁻¹¹ M and 8.80 × 10⁻¹¹ M for histidine and tyrosine, respectively. Moreover, the SERRS fingerprint information specific to the corresponding amino acids guarantees selective detection of the target histidine and tyrosine. The results from serum indicated the potential application of the proposed approach to biological samples. Compared with previously reported methods, the main advantages of this methodology are simplicity, rapidity (no time-consuming separation or pretreatment steps), high sensitivity and selectivity, and the potential for determination of other molecules containing imidazole or phenol groups. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A hadoop-based method to predict potential effective drug combination.

    PubMed

    Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our approach can help accelerate the prediction of potentially effective drug combinations as the number of possible combinations grows exponentially. The source code and datasets are available upon request.
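The MapReduce pattern that gives such a pipeline its scalability can be shown in miniature: a map phase emits a (key, value) record per drug in each candidate pair, and a reduce phase aggregates per key (a toy stand-in for the paper's Hadoop jobs; the drug names are invented):

```python
from collections import defaultdict
from itertools import chain

def map_phase(pair):
    # Mapper: emit one (drug, 1) record for each drug in a candidate combination.
    return [(drug, 1) for drug in pair]

def reduce_phase(records):
    # Reducer: group records by key and sum the values, as Hadoop does per key.
    counts = defaultdict(int)
    for key, value in records:
        counts[key] += value
    return dict(counts)

pairs = [("drugA", "drugB"), ("drugA", "drugC"), ("drugB", "drugC")]
shuffled = chain.from_iterable(map_phase(p) for p in pairs)  # the "shuffle" step
result = reduce_phase(shuffled)
```

Because mappers are independent and reducers operate per key, both phases parallelise across a cluster, which is the scalability property the abstract appeals to.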

  16. A Hadoop-Based Method to Predict Potential Effective Drug Combination

    PubMed Central

    Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our approach can help accelerate the prediction of potentially effective drug combinations as the number of possible combinations grows exponentially. The source code and datasets are available upon request. PMID:25147789

  17. Genome-wide base-resolution mapping of DNA methylation in single cells using single-cell bisulfite sequencing (scBS-seq).

    PubMed

    Clark, Stephen J; Smallwood, Sébastien A; Lee, Heather J; Krueger, Felix; Reik, Wolf; Kelsey, Gavin

    2017-03-01

    DNA methylation (DNAme) is an important epigenetic mark in diverse species. Our current understanding of DNAme is based on measurements from bulk cell samples, which obscures intercellular differences and prevents analyses of rare cell types. Thus, the ability to measure DNAme in single cells has the potential to make important contributions to the understanding of several key biological processes, such as embryonic development, disease progression and aging. We have recently reported a method for generating genome-wide DNAme maps from single cells, using single-cell bisulfite sequencing (scBS-seq), allowing the quantitative measurement of DNAme at up to 50% of CpG dinucleotides throughout the mouse genome. Here we present a detailed protocol for scBS-seq that includes our most recent developments to optimize recovery of CpGs, mapping efficiency and success rate; reduce hands-on time; and increase sample throughput with the option of using an automated liquid handler. We provide step-by-step instructions for each stage of the method, comprising cell lysis and bisulfite (BS) conversion, preamplification and adaptor tagging, library amplification, sequencing and, lastly, alignment and methylation calling. An individual with relevant molecular biology expertise can complete library preparation within 3 d. Subsequent computational steps require 1-3 d for someone with bioinformatics expertise.

  18. An operational procedure for rapid flood risk assessment in Europe

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Kalas, Milan; Salamon, Peter; Bianchi, Alessandra; Alfieri, Lorenzo; Feyen, Luc

    2017-07-01

The development of methods for rapid flood mapping and risk assessment is a key step to increase the usefulness of flood early warning systems and is crucial for effective emergency response and flood impact mitigation. Currently, flood early warning systems rarely include real-time components to assess potential impacts generated by forecasted flood events. To overcome this limitation, this study describes the benchmarking of an operational procedure for rapid flood risk assessment based on predictions issued by the European Flood Awareness System (EFAS). Daily streamflow forecasts produced for major European river networks are translated into event-based flood hazard maps using a large map catalogue derived from high-resolution hydrodynamic simulations. Flood hazard maps are then combined with exposure and vulnerability information, and the impacts of the forecasted flood events are evaluated in terms of flood-prone areas, economic damage and affected population, infrastructures and cities. An extensive testing of the operational procedure has been carried out by analysing the catastrophic floods of May 2014 in Bosnia-Herzegovina, Croatia and Serbia. The reliability of the flood mapping methodology is tested against satellite-based and report-based flood extent data, while modelled estimates of economic damage and affected population are compared against ground-based estimations. Finally, we evaluate the skill of risk estimates derived from EFAS flood forecasts with different lead times and combinations of probabilistic forecasts. Results highlight the potential of the real-time operational procedure in helping emergency response and management.

  19. 5-Methylation of Cytosine in CG:CG Base-Pair Steps: A Physicochemical Mechanism for the Epigenetic Control of DNA Nanomechanics

    NASA Astrophysics Data System (ADS)

    Yusufaly, Tahir; Olson, Wilma; Li, Yun

    2014-03-01

Van der Waals density functional theory is integrated with analysis of a non-redundant set of protein-DNA crystal structures from the Nucleic Acid Database to study the stacking energetics of CG:CG base-pair steps, specifically the role of cytosine 5-methylation. Principal component analysis of the steps reveals the dominant collective motions to correspond to a tensile “opening” mode and two shear “sliding” and “tearing” modes in the orthogonal plane. The stacking interactions of the methyl groups are observed to globally inhibit CG:CG step overtwisting while simultaneously softening the modes locally via potential energy modulations that create metastable states. The results have implications for the epigenetic control of DNA mechanics.

  20. Transient-state kinetic approach to mechanisms of enzymatic catalysis.

    PubMed

    Fisher, Harvey F

    2005-03-01

    Transient-state kinetics by its inherent nature can potentially provide more directly observed detailed resolution of discrete events in the mechanistic time courses of enzyme-catalyzed reactions than its more widely used steady-state counterpart. The use of the transient-state approach, however, has been severely limited by the lack of any theoretically sound and applicable basis of interpreting the virtual cornucopia of time and signal-dependent phenomena that it provides. This Account describes the basic kinetic behavior of the transient state, critically examines some currently used analytic methods, discusses the application of a new and more soundly based "resolved component transient-state time-course method" to the L-glutamate-dehydrogenase reaction, and establishes new approaches for the analysis of both single- and multiple-step substituted transient-state kinetic isotope effects.

  1. Self-energy renormalization for inhomogeneous nonequilibrium systems and field expansion via complete set of time-dependent wavefunctions

    NASA Astrophysics Data System (ADS)

    Kuwahara, Y.; Nakamura, Y.; Yamanaka, Y.

    2018-04-01

    The way to determine the renormalized energy of inhomogeneous systems of a quantum field under an external potential is established for both equilibrium and nonequilibrium scenarios based on thermo field dynamics. The key step is to find an extension of the on-shell concept valid in the homogeneous case. In the nonequilibrium case, we expand the field operator by time-dependent wavefunctions that are solutions of the appropriately chosen differential equation, synchronizing with the temporal change of the thermal situation, and the quantum transport equation is derived from the renormalization procedure. Through numerical calculations of a triple-well model with a reservoir, we show that the number distribution and the time-dependent wavefunctions relax consistently to the correct equilibrium forms in the long-time limit.

  2. Team-Based Introductory Research Experiences in Mathematics

    ERIC Educational Resources Information Center

    Baum, Brittany Smith; Rowell, Ginger Holmes; Green, Lisa; Yantz, Jennifer; Beck, Jesse; Cheatham, Thomas; Stephens, D. Christopher; Nelson, Donald

    2017-01-01

    As part of Middle Tennessee State University's (MTSU's) initiative to improve retention of at-risk STEM majors, they recruit first-time, full-time freshman STEM majors with mathematics ACT scores of 19 to 23 to participate in MTSU's "Mathematics as a FirstSTEP to Success in STEM" project (FirstSTEP). This article overviews MTSU's…

  3. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
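
    The convergence assessment described above reduces to fitting the slope of log(RMS error) against log(time step size). The sketch below is illustrative rather than the study's actual code; the function name and the synthetic errors scaling as dt**0.4 are assumptions chosen to mimic the reported slow convergence.

```python
import math

def convergence_order(dts, rms_errors):
    """Least-squares slope of log(error) vs. log(dt).

    A slope near 1.0 would indicate first-order convergence in time;
    the study reports a slope near 0.4 for the full model.
    """
    xs = [math.log(dt) for dt in dts]
    ys = [math.log(e) for e in rms_errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic illustration: RMS errors scaling as dt**0.4
dts = [1.0, 10.0, 100.0, 1800.0]
errs = [0.01 * dt ** 0.4 for dt in dts]
print(round(convergence_order(dts, errs), 2))  # 0.4
```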

  4. Effectiveness of inquiry-based learning in an undergraduate exercise physiology course.

    PubMed

    Nybo, Lars; May, Michael

    2015-06-01

    The present study was conducted to investigate the effects of changing a laboratory physiology course for undergraduate students from a traditional step-by-step guided structure to an inquiry-based approach. With this aim in mind, quantitative and qualitative evaluations of learning outcomes (individual subject-specific tests and group interviews) were performed for a laboratory course in cardiorespiratory exercise physiology that was conducted in one year with a traditional step-by-step guided manual (traditional course) and the next year completed with an inquiry-based structure (I-based course). The I-based course was a guided inquiry course where students had to design the experimental protocol and conduct their own study on the basis of certain predefined criteria (i.e., they should evaluate respiratory responses to submaximal and maximal exercise and provide indirect and direct measures of aerobic exercise capacity). The results indicated that the overall time spent on the experimental course as well as self-evaluated learning outcomes were similar across groups. However, students in the I-based course used more time in preparation (102 ± 5 min) than students in the traditional course (42 ± 3 min, P < 0.05), and 65 ± 5% of students in the I-based course searched for additional literature before experimentation compared with only 2 ± 1% of students in the traditional course. Furthermore, students in the I-based course achieved a higher (P < 0.05) average score on the quantitative test (45 ± 3%) compared with students in the traditional course (31 ± 4%). Although students were unfamiliar with cardiorespiratory exercise physiology and the experimental methods before the course, it appears that an inquiry-based approach rather than one that provides students with step-by-step instructions may benefit learning outcomes in a laboratory physiology course. Copyright © 2015 The American Physiological Society.

  5. Determination of the apparent transfer coefficient for CO oxidation on Pt(poly), Pt(111), Pt(665) and Pt(332) using a potential modulation technique.

    PubMed

    Wang, Han-Chun; Ernst, Siegfried; Baltruschat, Helmut

    2010-03-07

    The apparent transfer coefficient, which gives the magnitude of the potential dependence of the electrochemical reaction rates, is the key quantity for the elucidation of electrochemical reaction mechanisms. We introduce the application of an ac method to determine the apparent transfer coefficient alpha' for the oxidation of pre-adsorbed CO at polycrystalline and single-crystalline Pt electrodes in sulfuric acid. The method allows alpha' to be recorded quasi-continuously as a function of potential (and time) in cyclic voltammetry or at a fixed potential, with the reaction rate varying with time. At all surfaces (Pt(poly), Pt(111), Pt(665), and Pt(332)) we clearly observed a transition of the apparent transfer coefficient from values around 1.5 at low potentials to values around 0.5 at higher potentials. Changes of the apparent transfer coefficient for CO oxidation with potential were observed previously, but only from around 0.7 to values as low as 0.2. In contrast, our experimental findings completely agree with the simulation by Koper et al., J. Chem. Phys., 1998, 109, 6051-6062. They can be understood in the framework of a Langmuir-Hinshelwood mechanism. The transition occurs when the sum of the rate constants for the forward reaction (first step: potential-dependent OH adsorption; second step: potential-dependent oxidation of CO(ad) with OH(ad)) exceeds the rate constant for the back-reaction of the first step. We expect that the ac method for the determination of the apparent transfer coefficient, which we used here, will also be of great help in many other cases, especially under steady conditions, where the major limitations of the method are avoided.

  6. A Quality Classification System for Young Hardwood Trees - The First Step in Predicting Future Products

    Treesearch

    David L. Sonderman; Robert L. Brisbin

    1978-01-01

    Forest managers have no objective way to determine the relative value of culturally treated forest stands in terms of product potential. This paper describes the first step in the development of a quality classification system based on the measurement of individual tree characteristics for young hardwood stands.

  7. Objectively Measured Patterns of Activities of Different Intensity Categories and Steps Taken Among Working Adults in a Multi-ethnic Asian Population.

    PubMed

    Müller-Riemenschneider, Falk; Ng, Sheryl Hui Xian; Koh, David; Chu, Anne Hin Yee

    2016-06-01

    To objectively assess sedentary behavior (SB), light-intensity physical activity, moderate-to-vigorous intensity physical activity (MVPA), and steps among Singaporean office-based workers across days of the week. A convenience sample of office-based employees of a public university was recruited. Time spent in SB, light-intensity activity, and MVPA, based on different validated accelerometry counts-per-minute (CPM) cut-points, and step counts were determined. Depending on the CPM threshold applied for SB (less than 100, less than 150 and less than 200 CPM), 107 working adults spent between 69.2% and 76.4% of their daily wakeful time in SB. Time spent in SB and MVPA was higher on weekdays than weekends. The hourly analysis highlights patterns of greater SB during usual working hours on weekdays but not on weekends. SB at work contributes greatly toward total daily sitting time. Low PA levels and high SB levels were found on weekends.

  8. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L-sq-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L-sq method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  9. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, most ICs are special in the sense that their geometry is of Manhattan type and their dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these structural specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. Frequency-domain methods, for their part, have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time.
    The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM-based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step.

    The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix-exponential-based TDFEM.

    The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
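
    The unstable-mode deduction in the second contribution can be illustrated on a toy symmetric operator. This is a hypothetical sketch of the general idea, not the author's TDFEM implementation: modes whose eigenvalues violate the explicit stability bound dt*lambda < 2 are removed before forming the update matrix, so the scheme stays stable at a time step far beyond the usual limit.

```python
import numpy as np

def deflated_update(A, dt):
    """Explicit update matrix (I - dt*A) for symmetric A, with the
    eigenmodes violating dt*lambda < 2 deducted, so that
    u_{n+1} = M u_n is stable for this (arbitrarily large) dt."""
    w, V = np.linalg.eigh(A)
    keep = w < 2.0 / dt                    # modes satisfying dt*lambda < 2
    A_defl = (V[:, keep] * w[keep]) @ V[:, keep].T
    return np.eye(A.shape[0]) - dt * A_defl

# Toy stiff operator: a scaled 1D Laplacian-like matrix.
n = 50
A = 1e4 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
# dt is far beyond the usual explicit limit (2/lambda_max ~ 5e-5 here),
# yet the deflated update remains stable: all |eigenvalues| <= 1.
M = deflated_update(A, dt=0.01)
print(np.abs(np.linalg.eigvals(M)).max() <= 1.0 + 1e-10)  # True
```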

  10. Hierarchical modeling of heat transfer in silicon-based electronic devices

    NASA Astrophysics Data System (ADS)

    Goicochea Pineda, Javier V.

    In this work a methodology for the hierarchical modeling of heat transfer in silicon-based electronic devices is presented. The methodology includes three steps to integrate the different scales involved in the thermal analysis of these devices. The steps correspond to: (i) the estimation of input parameters and thermal properties required to solve the Boltzmann transport equation (BTE) for phonons by means of molecular dynamics (MD) simulations, (ii) the quantum correction of some of the properties estimated with MD to make them suitable for the BTE and (iii) the numerical solution of the BTE using the lattice Boltzmann method (LBM) under the single mode relaxation time approximation subject to different initial and boundary conditions, including non-linear dispersion relations and different polarizations in the [100] direction. Each step of the methodology is validated with numerical, analytical or experimental reported data. In the first step of the methodology, properties such as phonon relaxation times, dispersion relations, group and phase velocities and specific heat are obtained with MD at 300 and 1000 K (i.e. molecular temperatures). The estimation of the properties considers the anharmonic nature of the potential energy function, including the thermal expansion of the crystal. Both effects are found to modify the dispersion relations with temperature. The behavior of the phonon relaxation times for each mode (i.e. longitudinal and transverse, acoustic and optical phonons) is identified using power functions. The exponents of the acoustic modes agree with those predicted theoretically by perturbation theory at high temperatures, while those for the optical modes are higher. All properties estimated with MD are validated against values for the thermal conductivity obtained from the Green-Kubo method. It is found that the relative contribution of acoustic modes to the overall thermal conductivity is approximately 90% at both temperatures.
    In the second step, two new quantum correction alternatives are applied to correct the results obtained with MD. The alternatives consider the quantization of the energy per phonon mode. In addition, the effect of isotope scattering is included in the phonon-phonon relaxation time values previously determined in the first step. It is found that both the quantization of the energy and the inclusion of scattering with isotopes significantly reduce the contribution of high-frequency modes to the overall thermal conductivity. After these two effects are considered, the contribution of optical modes reduces to less than 2.4%. In this step, two sets of properties are obtained. The first one results from the application of quantum corrections to the abovementioned properties, while the second also includes isotope scattering. These sets of properties are identified in this work as isotope-enriched silicon (isoSi) and natural silicon (natSi) and are used along with other phonon relaxation time models in the last step of our methodology. Before we solve the BTE using the LBM, a new dispersive lattice Boltzmann formulation is proposed. The new dispersive formulation is based on constant lattice spacings (CLS) and flux limiters, rather than constant time steps (as previously reported). It is found that the new formulation significantly reduces the computational cost and complexity of the solution of the BTE, without affecting the thermal predictions. In the last step of our methodology, we solve the BTE. The equation is solved under the relaxation time approximation using the thermal properties estimated for isoSi and natSi and using two phonon formulations. The phonon formulations include a gray model and the new dispersive method. For comparison purposes, the BTE is also solved using the phenomenological and theoretical phonon relaxation time models of Holland, and of Han and Klemens. 
Different thermal predictions in steady and transient states are performed to illustrate the application of the methodology in one- and two-dimensional silicon films and in silicon-over-insulator (SOI) transistors. These include the determination of bulk and film thermal conductivities (i.e. out-of-plane and in-plane), and the transient evolution of the wall heat flux and temperature for films of different thicknesses. In addition, the physics of phonons is further analyzed in terms of the influence and behavior of acoustic and optical modes in the thermal predictions and the effect of phonon confinement in the thermal response of SOI-like transistors subject to different self-heating conditions.

  11. One-step Synthesis of Ordered Pd@TiO2 Nanofibers Array Film as Outstanding NH3 Gas Sensor at Room Temperature.

    PubMed

    Wu, Hongyuan; Huang, Haitao; Zhou, Jiao; Hong, Dahai; Ikram, Muhammad; Rehman, Afrasiab Ur; Li, Li; Shi, Keying

    2017-11-07

    The one-dimensional (1D) ordered porous Pd@TiO2 nanofibers (NFs) array film has been fabricated via a facile one-step electrospinning synthesis. The Pd@TiO2 NFs (PTND3), containing Pd (2.0 wt %) and C, N elements (16.2 wt %), display a high dispersion of Pd nanoparticles (NPs) on the TiO2 NFs. Adding Pd together with C and N to the TiO2-based NFs might contribute to the generation of Lewis acid sites and Brønsted acid sites, which have recently been shown to enhance NH3 adsorption-desorption ability; the Pd NPs could increase the quantity of adsorbed O2 on the surface of the TiO2-based NFs, accelerate the O2 molecule-ion conversion rate, and enhance electron transmission. The response time of the PTND3 sensor towards 100 ppm NH3 is only 3 s at room temperature (RT). Meanwhile, the response and response time of PTND3 to NH3 are 1 and 14 s even at a concentration of 100 ppb. Therefore, the ordered Pd@TiO2 NFs array NH3 sensor displays great potential for practical applications.

  12. A fully disposable and integrated paper-based device for nucleic acid extraction, amplification and detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; You, MinLi; Liu, Zhi; Choi, Jane Ru; Wen, Ting; Qu, Zhiguo; Mei, Qibing; Xu, Feng

    2017-03-29

    Nucleic acid testing (NAT) has been widely used for disease diagnosis, food safety control and environmental monitoring. At present, NAT mainly involves nucleic acid extraction, amplification and detection steps that heavily rely on large equipment and skilled workers, making the test expensive, time-consuming, and thus less suitable for point-of-care (POC) applications. With advances in paper-based microfluidic technologies, various integrated paper-based devices have recently been developed for NAT, which however require off-chip reagent storage, complex operation steps and equipment-dependent nucleic acid amplification, restricting their use for POC testing. To overcome these challenges, we demonstrate a fully disposable and integrated paper-based sample-in-answer-out device for NAT by integrating nucleic acid extraction, helicase-dependent isothermal amplification and lateral flow assay detection into one paper device. This simple device allows on-chip dried reagent storage and equipment-free nucleic acid amplification with simple operation steps, which could be performed by untrained users in remote settings. The proposed device consists of a sponge-based reservoir and a paper-based valve for nucleic acid extraction, an integrated battery, a PTC ultrathin heater, temperature control switch and on-chip dried enzyme mix storage for isothermal amplification, and a lateral flow test strip for naked-eye detection. It can sensitively detect Salmonella typhimurium, as a model target, with a detection limit of as low as 10^2 CFU ml^-1 in wastewater and egg, and 10^3 CFU ml^-1 in milk and juice in about an hour. This fully disposable and integrated paper-based device has great potential for future POC applications in resource-limited settings.

  13. Robust estimation-free prescribed performance back-stepping control of air-breathing hypersonic vehicles without affine models

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi

    2016-11-01

    This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with a convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need for an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Because it does not depend on an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.

  14. A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia.

    PubMed

    Chang, Sarah R; Nandor, Mark J; Li, Lu; Kobetic, Rudi; Foglyano, Kevin M; Schnellenberger, John R; Audu, Musa L; Pinault, Gilles; Quinn, Roger D; Triolo, Ronald J

    2017-05-30

    Functional neuromuscular stimulation, lower limb orthosis, powered lower limb exoskeleton, and hybrid neuroprosthesis (HNP) technologies can restore stepping in individuals with paraplegia due to spinal cord injury (SCI). However, a self-contained muscle-driven controllable exoskeleton approach based on an implanted neural stimulator to restore walking has not previously been demonstrated; such an approach could enable system use outside the laboratory and make it viable for long-term use or clinical testing. In this work, we designed and evaluated an untethered muscle-driven controllable exoskeleton to restore stepping in three individuals with paralysis from SCI. The self-contained HNP combined neural stimulation to activate the paralyzed muscles and generate joint torques for limb movements with a controllable lower limb exoskeleton to stabilize and support the user. An onboard controller processed exoskeleton sensor signals, determined appropriate exoskeletal constraints and stimulation commands for a finite state machine (FSM), and transmitted data over Bluetooth to an off-board computer for real-time monitoring and data recording. The FSM coordinated stimulation and exoskeletal constraints to enable functions, selected with a wireless finger switch user interface, for standing up, standing, stepping, or sitting down. In the stepping function, the FSM used a sensor-based gait event detector to determine transitions between gait phases of double stance, early swing, late swing, and weight acceptance. The HNP restored stepping in three individuals with motor complete paralysis due to SCI. The controller appropriately coordinated stimulation and exoskeletal constraints using the sensor-based FSM for subjects with different stimulation systems. The average ranges of motion at the hip and knee joints during walking were 8.5°-20.8° and 14.0°-43.6°, respectively. Walking speeds varied from 0.03 to 0.06 m/s, and cadences from 10 to 20 steps/min. 
A self-contained muscle-driven exoskeleton was a feasible intervention to restore stepping in individuals with paraplegia due to SCI. The untethered hybrid system was capable of adjusting to different individuals' needs to appropriately coordinate exoskeletal constraints with muscle activation using a sensor-driven FSM for stepping. Further improvements for out-of-the-laboratory use should include implantation of plantar flexor muscles to improve walking speed and power assist as needed at the hips and knees to maintain walking as muscles fatigue.
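
    The gait-phase coordination described in this record is, at its core, a finite state machine. The sketch below is a hypothetical minimal version: the four phases come from the abstract, while the class name and the boolean "gait event" flag are illustrative stand-ins for the real sensor-based gait event detector.

```python
# The four gait phases named in the abstract, in walking order.
GAIT_PHASES = ["double_stance", "early_swing", "late_swing", "weight_acceptance"]

class GaitFSM:
    """Cycle through the gait phases each time a gait event fires."""

    def __init__(self):
        self.state = GAIT_PHASES[0]

    def on_event(self, gait_event_detected):
        # Advance to the next phase only when the (hypothetical)
        # sensor-based gait event detector signals a transition.
        if gait_event_detected:
            i = GAIT_PHASES.index(self.state)
            self.state = GAIT_PHASES[(i + 1) % len(GAIT_PHASES)]
        return self.state

fsm = GaitFSM()
print([fsm.on_event(True) for _ in range(4)])
# ['early_swing', 'late_swing', 'weight_acceptance', 'double_stance']
```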

  15. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.

  16. Probabilistic drought intensification forecasts using temporal patterns of satellite-derived drought indicators

    NASA Astrophysics Data System (ADS)

    Park, Sumin; Im, Jungho; Park, Seonyeong

    2016-04-01

    A drought occurs when the condition of below-average precipitation in a region continues, resulting in prolonged water deficiency. A drought can last for weeks, months or even years, and so can have a great influence on various ecosystems including human society. In order to effectively reduce agricultural and economic damage caused by droughts, drought monitoring and forecasts are crucial. Drought forecast research is typically conducted using in situ observations (or derived indices such as the Standardized Precipitation Index (SPI)) and physical models. Recently, satellite remote sensing has been used for short-term drought forecasts in combination with physical models. In this research, drought intensification was predicted using satellite-derived drought indices such as the Normalized Difference Drought Index (NDDI), Normalized Multi-band Drought Index (NMDI), and Scaled Drought Condition Index (SDCI) generated from Moderate Resolution Imaging Spectroradiometer (MODIS) and Tropical Rainfall Measuring Mission (TRMM) products over the Korean Peninsula. The time series of each drought index at the 8 day interval was investigated to identify drought intensification patterns. The drought condition at the previous time step (i.e., 8 days before) and the change in drought conditions between the two previous time steps (e.g., between 16 days and 8 days before the time step to forecast) were used as predictors. Results show that among the three drought indices, SDCI provided the best performance for predicting drought intensification compared to NDDI and NMDI through qualitative assessment. When quantitatively compared with SPI, SDCI showed potential to be used for forecasting short-term drought intensification. Finally, this research provided an SDCI-based equation to predict short-term drought intensification optimized over the Korean Peninsula.

  17. Adaptive macro finite elements for the numerical solution of monodomain equations in cardiac electrophysiology.

    PubMed

    Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F

    2010-07-01

    Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable to solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated in a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements, as well as good scalability properties.
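
    The operator-splitting idea in this record, integrating the stiff reaction term separately from the diffusion update, can be sketched in one dimension. Everything below is a simplified illustration, not the authors' macro-element code: the cubic bistable reaction term, the grid, and the fixed sub-step count (standing in for their adaptive time stepping) are assumptions.

```python
import numpy as np

def split_step(u, dt, dx, D, f, n_sub=20):
    """One Lie-splitting step for u_t = D*u_xx + f(u): the stiff
    reaction is integrated with n_sub explicit sub-steps (a fixed-step
    stand-in for adaptive time stepping), then diffusion is applied
    with a single explicit update and zero-flux boundaries."""
    for _ in range(n_sub):                 # stiff reaction, sub-stepped
        u = u + (dt / n_sub) * f(u)
    lap = np.zeros_like(u)                 # 1D Laplacian, Neumann ends
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2
    lap[-1] = (u[-2] - u[-1]) / dx**2
    return u + dt * D * lap

# Bistable cubic reaction: a crude stand-in for a cardiac-style
# rapidly varying reactive term, with stable rest states u = 0 and u = 1.
f = lambda u: 50.0 * u * (1.0 - u) * (u - 0.1)
x = np.linspace(0.0, 1.0, 101)
u = (x < 0.3).astype(float)                # activated region on the left
for _ in range(200):
    u = split_step(u, dt=1e-3, dx=x[1] - x[0], D=1e-3, f=f)
# The left end stays excited (near 1) and the right end at rest (near 0),
# with no spurious oscillations despite the stiff reaction term.
```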

  18. A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  19. Simple design for DNA nanotubes from a minimal set of unmodified strands: rapid, room-temperature assembly and readily tunable structure.

    PubMed

    Hamblin, Graham D; Hariri, Amani A; Carneiro, Karina M M; Lau, Kai L; Cosa, Gonzalo; Sleiman, Hanadi F

    2013-04-23

    DNA nanotubes have great potential as nanoscale scaffolds for the organization of materials and the templation of nanowires and as drug delivery vehicles. Current methods for making DNA nanotubes either rely on a tile-based step-growth polymerization mechanism or use a large number of component strands and long annealing times. Step-growth polymerization gives little control over length, is sensitive to stoichiometry, and is slow to generate long products. Here, we present a design strategy for DNA nanotubes that uses an alternative, more controlled growth mechanism, while using just five unmodified component strands and a long enzymatically produced backbone. These tubes form rapidly at room temperature and have numerous, orthogonal sites available for the programmable incorporation of arrays of cargo along their length. As a proof-of-concept, cyanine dyes were organized into two distinct patterns by inclusion into these DNA nanotubes.

  20. Cut-off values for step count and TV viewing time as discriminators of hyperglycaemia in Brazilian children and adolescents.

    PubMed

    Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge

    2016-09-01

    The use of step count and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. To establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52-0.61 for step count and from 0.49-0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8; specificity = 51.8) for male children and 12 371 (sensitivity = 55.6; specificity = 55.5) and 11 292 (sensitivity = 57.7; specificity = 48.6) for female children and adolescents, respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8; specificity = 48.6-53.2). This study represents a first step toward the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.
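
    Cut-offs like those reported above are typically found by scanning candidate thresholds on the ROC curve and keeping the one that maximises Youden's J = sensitivity + specificity - 1. A minimal sketch, on synthetic data rather than the study's, assuming that lower step counts flag risk:

```python
import numpy as np

# Choose the cut-off maximising Youden's J against a binary reference.

def best_cutoff(values, is_positive):
    values = np.asarray(values, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    best = (None, -np.inf)
    for t in np.unique(values):
        pred = values <= t              # low step count flags risk
        sens = np.mean(pred[is_positive])
        spec = np.mean(~pred[~is_positive])
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j)
    return best                          # (cutoff, Youden J)
```

    The low AUCs in the abstract (0.49-0.65) mean the best attainable J is small, which is exactly why the authors caution that the cut-offs have limited practical value.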

  1. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  2. A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags

    NASA Astrophysics Data System (ADS)

    Meng, S.; Xie, X.

    2015-12-01

    In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties from input data, model parameters, model structures and output observations. Data assimilation is a useful methodology to reduce uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimation of the initial soil moisture condition will improve forecasting performance. The time delay of runoff routing is another important factor affecting forecasting performance. Moreover, observation data of hydrological variables (including ground observations and satellite observations) are becoming easily available, so the reliability of short-term flood forecasting could be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates upper-layer soil moisture observations to update the model state and generated runoff using the ensemble Kalman filter (EnKF), and the second step assimilates discharge observations to update the model state and runoff within a fixed time window using the ensemble Kalman smoother (EnKS). The smoothing technique is adopted to account for the runoff routing lag. Assimilating both soil moisture and discharge observations in this way is expected to improve flood forecasting. To isolate the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential in operational flood forecasting by merging observations from ground measurement and remote sensing retrievals.
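
    The EnKF analysis step at the heart of such a framework can be sketched generically: each ensemble member is nudged toward a perturbed observation using sample covariances. The state here is an abstract vector, not a hydrological model; variable names and dimensions are illustrative assumptions.

```python
import numpy as np

# Stochastic (perturbed-observation) EnKF analysis step.

def enkf_update(ensemble, obs, obs_err_std, H):
    """ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n = ensemble.shape[0]
    rng = np.random.default_rng(42)
    Xp = ensemble - ensemble.mean(axis=0)             # state anomalies
    HX = ensemble @ H.T                               # predicted observations
    HXp = HX - HX.mean(axis=0)
    Pxy = Xp.T @ HXp / (n - 1)                        # state-obs cross covariance
    Pyy = HXp.T @ HXp / (n - 1) + obs_err_std**2 * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
    perturbed = obs + obs_err_std * rng.standard_normal((n, H.shape[0]))
    return ensemble + (perturbed - HX) @ K.T
```

    The EnKS used in the second step extends this same update backward over a fixed time window, which is what lets the discharge observation correct runoff generated several routing steps earlier.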

  3. Thermal bistability-based method for real-time optimization of ultralow-threshold whispering gallery mode microlasers.

    PubMed

    Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J

    2012-12-15

    A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.

  4. Kinematic constraints associated with the acquisition of overarm throwing part I: step and trunk actions.

    PubMed

    Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R

    2006-12-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Results indicated stride length (69.3%) and time from stride foot contact to ball release (39.7%) provided substantial contributions to ball velocity (p < .001). All trunk kinematic measures increased significantly with increasing component levels (p < .001). Results suggest that trunk linear and rotational velocities, degree of trunk tilt, time from stride foot contact to ball release, and ball velocity represented potential control parameters and, therefore, constraints on overarm throwing acquisition.

  5. An approach to improving transporting velocity in the long-range ultrasonic transportation of micro-particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jianxin; Mei, Deqing, E-mail: meidq-127@zju.edu.cn; Yang, Keji

    2014-08-14

    In existing ultrasonic transportation methods, the long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, an approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup is also developed to validate this approach. Long-range ultrasonic transportations of zirconium beads with high transporting velocity were realized. The experimental results demonstrated that this approach is an effective way to improve transporting velocity in the long-range ultrasonic transportation of micro-particles.

  6. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
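
    The time-step independence claimed above rests on a basic property of free diffusion: Gaussian increments are exact, so the displacement statistics after time T are identical whether T is taken in one large step or many small ones. A minimal numerical check (illustrative parameters, not the authors' NMR simulation):

```python
import numpy as np

# Free diffusion: each step is drawn from N(0, 2*D*dt), so the final
# displacement variance is 2*D*T regardless of how T is subdivided.

def diffuse(n_spins, D, total_time, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    dt = total_time / n_steps
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_spins, n_steps))
    return steps.sum(axis=1)          # final displacement of each spin
```

    In restricted geometries this exactness is lost at boundaries, which is why the abstract limits the claim to simple geometric configurations.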

  7. Observation and Modeling of Clear Air Turbulence (CAT) over Europe

    NASA Astrophysics Data System (ADS)

    Sprenger, M.; Mayoraz, L.; Stauch, V.; Sharman, B.; Polymeris, J.

    2012-04-01

    CAT represents a very relevant phenomenon for aviation safety. It can lead to passenger injuries, causes an increase in fuel consumption and, at severe intensity, can involve structural damage to the aircraft. The physical processes causing CAT remain at present not fully understood. Moreover, because of its small scale, CAT cannot be resolved in numerical weather prediction (NWP) models. In this study, the physical processes related to CAT and its representation in NWP models are further investigated. First, 134 CAT events over Europe are extracted from a flight monitoring database (FDM), run by the SWISS airline and containing over 100,000 flights. The location, time, and meteorological parameters along the turbulent spots are analysed. Furthermore, the 7-km NWP model run by the Swiss National Weather Service (MeteoSwiss) is used to calculate model-based CAT indices, e.g. the Richardson number, the Ellrod & Knapp turbulence index and a complex/combined CAT index developed at NCAR. The CAT indices simulated with COSMO-7 are then compared to the observed CAT spots, allowing an assessment of the model's performance and its potential use in a CAT warning system. In a second step, the meteorological conditions associated with CAT are investigated. To this aim, CAT events are defined as coherent structures in space and time, i.e. their dimension and life cycle are studied, in connection with jet streams and upper-level fronts. Finally, in a third step, the predictability of CAT is assessed by comparing CAT index predictions based on different lead times of the NWP model COSMO-7.
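
    The simplest of the model-based indices named above is the gradient Richardson number Ri = N²/S², the ratio of static stability (Brunt-Väisälä frequency squared) to vertical wind shear squared; values classically below 0.25 indicate conditions favourable for turbulence. A small sketch on synthetic profiles (not COSMO-7 output):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def richardson(theta, u, v, z):
    """Layer-wise gradient Richardson number from potential-temperature
    and horizontal-wind profiles on levels z."""
    dth = np.diff(theta); du = np.diff(u); dv = np.diff(v); dz = np.diff(z)
    theta_mid = 0.5 * (theta[1:] + theta[:-1])
    n2 = G / theta_mid * dth / dz                 # static stability N^2
    s2 = (du / dz) ** 2 + (dv / dz) ** 2          # shear squared S^2
    return n2 / np.maximum(s2, 1e-12)
```

    Combined indices such as the Ellrod & Knapp index add deformation terms to the shear, which is why they tend to outperform Ri alone near jet streams and upper-level fronts.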

  8. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step was applied to the calculation of quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows better computational efficiency with the combined formulas.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, A. D.; Jones, B. J. P.; Nygren, D. R.

    A new method to tag the barium daughter in the double beta decay of ^136Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba++) resolution at a transparent scanning surface has been demonstrated. A single-step photo-bleach confirms the single-ion interpretation. Individual ions are localized with super-resolution (~2 nm), and detected with a statistical significance of 12.9 sigma over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.

  10. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  11. Research on PM2.5 time series characteristics based on data mining technology

    NASA Astrophysics Data System (ADS)

    Zhao, Lifang; Jia, Jin

    2018-02-01

    With the development of data mining technology and the establishment of environmental air quality databases, it is possible to discover potential correlations and rules by mining the massive body of environmental air quality information and analyzing air pollution processes. In this paper, we present a sequential pattern mining method based on air quality data and pattern association technology to analyze the time series characteristics of PM2.5. Utilizing real-time monitoring data of urban air quality in China, the time series rules and variation properties of PM2.5 under different pollution levels are extracted and analyzed. The analysis results show that the time sequence features of the PM2.5 concentration are directly affected by changes in the pollution degree. The longest time that PM2.5 remained stable is about 24 hours. As the pollution becomes more severe, the instability time and step ascending time gradually change from 12-24 hours to 3 hours. The presented method is helpful for the control and forecasting of air quality while saving measurement costs, which is of great significance for government regulation and public prevention of air pollution.

  12. 4 Steps for Redesigning Time for Student and Teacher Learning

    ERIC Educational Resources Information Center

    Nazareno, Lori

    2017-01-01

    Everybody complains about a lack of time in school, but few are prepared to do anything about it. Laying the foundation before making such a shift is essential to the success of the change. Once a broad-based team has been chosen to do the work, they can follow a process explained in four steps with the apt acronym of T.I.M.E.: Taking stock,…

  13. Machine learning of atmospheric chemistry. Applications to a global chemistry transport model.

    NASA Astrophysics Data System (ADS)

    Evans, M. J.; Keller, C. A.

    2017-12-01

    Atmospheric chemistry is central to many environmental issues such as air pollution, climate change, and stratospheric ozone loss. Chemistry Transport Models (CTMs) are a central tool for understanding these issues, whether for research or for forecasting. These models split the atmosphere into a large number of grid-boxes and consider the emission of compounds into these boxes and their subsequent transport, deposition, and chemical processing. The chemistry is represented through a series of simultaneous ordinary differential equations, one for each compound. Given the difference in lifetimes between the chemical compounds (milliseconds for O(1D) to years for CH4), these equations are numerically stiff, and solving them constitutes a significant fraction of the computational burden of a CTM. We have investigated a machine learning approach to solving the differential equations instead of solving them numerically. From an annual simulation of the GEOS-Chem model we have produced a training dataset consisting of the concentrations of compounds before and after the differential equations are solved, together with some key physical parameters for every grid-box and time-step. From this dataset we have trained a machine learning algorithm (random regression forest) to predict the concentrations of the compounds after the integration step based on the concentrations and physical state at the beginning of the time step. We have then included this algorithm back into the GEOS-Chem model, bypassing the need to integrate the chemistry. This machine learning approach shows many of the characteristics of the full simulation and has the potential to be substantially faster. There is a wide range of applications for such an approach - generating boundary conditions, use in air quality forecasts, chemical data assimilation systems, centennial-scale climate simulations, etc. We discuss our approach's speed and accuracy, and highlight some potential future directions for improving it.
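
    The surrogate-integrator idea reduces to learning the map from concentrations at the start of a time step to concentrations after the stiff chemistry has been integrated, then reusing that map in place of the solver. A deliberately tiny sketch: a 1-nearest-neighbour lookup stands in for the random regression forest, and the "chemistry" is a single exactly solvable decay, both labeled assumptions rather than the paper's setup.

```python
import numpy as np

# Learn c(t) -> c(t + dt) for dc/dt = -k*c, then use the learned map.

def true_step(c, k=5.0, dt=0.1):
    return c * np.exp(-k * dt)          # exact integration of the decay

def fit_surrogate(c_before):
    return c_before, true_step(c_before)   # training pairs (X, y)

def surrogate_step(model, c):
    X, y = model
    idx = np.argmin(np.abs(X - c))      # nearest training input
    return y[idx]
```

    The real problem is much harder than this sketch suggests: the state is high-dimensional, the map depends on temperature, pressure and photolysis rates, and errors compound over repeated steps, which is why the abstract emphasizes accuracy alongside speed.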

  14. A new approach for bioassays based on frequency- and time-domain measurements of magnetic nanoparticles.

    PubMed

    Oisjöen, Fredrik; Schneiderman, Justin F; Astalan, Andrea Prieto; Kalabukhov, Alexey; Johansson, Christer; Winkler, Dag

    2010-01-15

    We demonstrate a one-step wash-free bioassay measurement system capable of tracking biochemical binding events. Our approach combines the high resolution of frequency- and high speed of time-domain measurements in a single device in combination with a fast one-step bioassay. The one-step nature of our magnetic nanoparticle (MNP) based assay reduces the time between sample extraction and quantitative results while mitigating the risks of contamination related to washing steps. Our method also enables tracking of binding events, providing the possibility of, for example, investigation of how chemical/biological environments affect the rate of a binding process or study of the action of certain drugs. We detect specific biological binding events occurring on the surfaces of fluid-suspended MNPs that modify their magnetic relaxation behavior. Herein, we extrapolate a modest sensitivity to analyte of 100 ng/ml with the present setup using our rapid one-step bioassay. More importantly, we determine the size-distributions of the MNP systems with theoretical fits to our data obtained from the two complementary measurement modalities and demonstrate quantitative agreement between them. Copyright 2009 Elsevier B.V. All rights reserved.
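
    The readout above relies on Brownian relaxation of fluid-suspended magnetic nanoparticles: binding increases the hydrodynamic volume and hence the relaxation time, via the standard relation tau_B = 3*eta*V_h/(kB*T). A small sketch with illustrative numbers (room-temperature water; the particle sizes are assumptions, not the paper's):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def brownian_relaxation_time(d_hydro_m, eta=1e-3, temp=298.0):
    """Brownian relaxation time for a sphere of hydrodynamic diameter
    d_hydro_m in a fluid of viscosity eta (Pa*s) at temperature temp (K)."""
    v_h = math.pi * d_hydro_m**3 / 6.0      # hydrodynamic volume
    return 3.0 * eta * v_h / (KB * temp)
```

    Because tau_B scales with the cube of the hydrodynamic diameter, even a thin bound layer of analyte shifts the relaxation measurably, which is what makes the wash-free one-step assay possible.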

  15. Two-step rapid sulfur capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-04-01

    The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process called the Two-Step Rapid Sulfur Capture process for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high temperature auxiliary burner for short times followed by sorbent quenching in a lower temperature sulfur-containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation was thought to be: activation temperature > 2,200 K for activation times in the range of 10--30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.

  16. Soil moisture datasets at five sites in the central Sierra Nevada and northern Coast Ranges, California

    USGS Publications Warehouse

    Stern, Michelle A.; Anderson, Frank A.; Flint, Lorraine E.; Flint, Alan L.

    2018-05-03

    In situ soil moisture datasets are important inputs used to calibrate and validate watershed, regional, or statewide modeled and satellite-based soil moisture estimates. The soil moisture dataset presented in this report includes hourly time series of the following: soil temperature, volumetric water content, water potential, and total soil water content. Data were collected by the U.S. Geological Survey at five locations in California: three sites in the central Sierra Nevada and two sites in the northern Coast Ranges. This report provides a description of each of the study areas, procedures and equipment used, processing steps, and time series data from each site in the form of comma-separated values (.csv) tables.

  17. Quantum information processing with a travelling wave of light

    NASA Astrophysics Data System (ADS)

    Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We exploit quantum information processing on a traveling wave of light, expecting emancipation from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach for applying multi-step operations on traveling light, namely continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although these are implemented with free-space optics, we are also investigating photonic integration and have performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.

  18. Study of Structural and Electrical Conductivity of Sugarcane Bagasse-Carbon with Hydrothermal Carbonization

    NASA Astrophysics Data System (ADS)

    Kurniati, M.; Nurhayati, D.; Maddu, A.

    2017-03-01

    An important part of a fuel cell is the gas diffusion layer, which is made from a porous, conductive carbon-based material. The main goal of this research is to obtain carbon material from sugarcane bagasse by hydrothermal carbonization and chemical-physical activation. The research comprised two steps. The first was sample preparation, consisting of preparing the materials, hydrothermal carbonization, and chemical-physical activation. The second was characterization of the carbon using EDS, SEM, XRD, and an LCR meter. The carbon content of the sugarcane bagasse-carbon was about 85%-91.47%, with pore morphology already formed. The degree of crystallinity of the sugarcane bagasse carbon was about 13.06%-20.89%, with the remainder in the amorphous phase. Electrical conductivity was about 5.36 x 10^-2 S/m - 1.11 S/m. Sugarcane bagasse-carbon is porous, with electrical conductivity in the semiconductor range. Sugarcane bagasse-carbon produced by hydrothermal carbonization can potentially be used as a base material for fuel cells if the hydrothermal carbonization hold time is increased.

  19. Motor potential profile and a robust method for extracting it from time series of motor positions.

    PubMed

    Wang, Hongyun

    2006-10-21

    Molecular motors are small, and, as a result, motor operation is dominated by high-viscous friction and large thermal fluctuations from the surrounding fluid environment. The small size has hindered, in many ways, the studies of physical mechanisms of molecular motors. For a macroscopic motor, it is possible to observe/record experimentally the internal operation details of the motor. This is not yet possible for molecular motors. The chemical reaction in a molecular motor has many occupancy states, each having a different effect on the motor motion. The overall effect of the chemical reaction on the motor motion can be characterized by the motor potential profile. The potential profile reveals how the motor force changes with position in a motor step, which may lead to insights into how the chemical reaction is coupled to force generation. In this article, we propose a mathematical formulation and a robust method for constructing motor potential profiles from time series of motor positions measured in single molecule experiments. Numerical examples based on simulated data are shown to demonstrate the method. Interestingly, it is the small size of molecular motors (negligible inertia) that makes it possible to recover the potential profile from time series of motor positions. For a macroscopic motor, the variation of driving force within a cycle is smoothed out by the large inertia.
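
    A highly simplified, equilibrium-only version of the idea can be sketched directly: for an overdamped particle sampled long enough, the potential (in units of kB*T) can be read off the position histogram via U(x) = -ln p(x) + C. The article's method handles the much harder driven, stepping-motor case; the sketch below only checks the principle on a harmonic well, with all parameters assumed for illustration.

```python
import numpy as np

# Recover a potential profile (up to an additive constant) from sampled
# positions of an overdamped particle, via the Boltzmann inversion
# U(x)/kB*T = -ln p(x) + C.

def potential_from_positions(x, bins=60, x_range=(-2.0, 2.0)):
    counts, edges = np.histogram(x, bins=bins, range=x_range, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    mask = counts > 0
    u = -np.log(counts[mask])
    return centers[mask], u - u.min()     # potential in units of kB*T
```

    As the abstract notes, it is precisely the negligible inertia of molecular motors that makes position statistics informative about the force profile; for a macroscopic motor, inertia smooths the within-cycle force variation out of the trajectory.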

  20. Enhancing the anaerobic digestion potential of dairy waste activated sludge by two step sono-alkalization pretreatment.

    PubMed

    Rani, R Uma; Kumar, S Adish; Kaliappan, S; Yeom, Ick-Tae; Banu, J Rajesh

    2014-05-01

    High efficiency resource recovery from dairy waste activated sludge (WAS) has been a focus of attention. An investigation into the influence of two-step sono-alkalization pretreatment (using different alkaline agents, pH values and sonic reaction times) on sludge reduction potential in a semi-continuous anaerobic reactor was performed for the first time in the literature. First, the effect of sludge pretreatment was evaluated by COD solubilization, suspended solids reduction and biogas production. At the optimized condition (4172 kJ/kg TS of supplied energy for NaOH at pH 10), COD solubilization, suspended solids reduction and biogas production were 59%, 46% and 80% higher than the control. To clearly describe the hydrolysis of waste activated sludge during the two-step sono-alkalization pretreatment, concentrations of ribonucleic acid (RNA) and bound extracellular polymeric substance (EPS) were also measured. Second, process performance was studied in a lab-scale semi-continuous anaerobic reactor (5 L, with a 4 L working volume). Of the three SRTs operated, an SRT of 15 d was found to be most appropriate for economic operation of the reactor. Combining pretreatment with anaerobic digestion led to 58% and 62% reductions in suspended solids and volatile solids, respectively, with an improvement of 83% in biogas production. Thus, two-step sono-alkalization pretreatment laid the basis for enhancing the anaerobic digestion potential of dairy WAS. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Validity of the Instrumented Push and Release Test to Quantify Postural Responses in Persons With Multiple Sclerosis.

    PubMed

    El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M

    2017-07-01

    To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos ® CELLEX ® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed in 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using the multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for the personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). A single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX ® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX ® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
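
    The time-driven activity-based costing comparison reduces to a simple per-session model: materials plus procedure minutes times a staff cost rate. The material costs and staff rate below are hypothetical placeholders, not the department's actual 2014 figures, so the totals here deliberately do not match the abstract's.

```python
# Per-session cost = materials + (procedure minutes x staff rate per minute).
# All euro amounts here are hypothetical, for illustration only.

def session_cost(material_cost_eur, minutes, staff_rate_eur_per_min):
    return material_cost_eur + minutes * staff_rate_eur_per_min

multi_step = session_cost(900.0, 270, 2.0)    # 270-min multi-step procedure
integrated = session_cost(1050.0, 120, 2.0)   # 120-min integrated session
```

    The pattern matches the report's finding: the integrated system's higher material cost is more than offset by the shorter personnel time, and the freed-up minutes translate into extra treatment capacity per year.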

  3. Income and Physical Activity among Adults: Evidence from Self-Reported and Pedometer-Based Physical Activity Measurements

    PubMed Central

    Kari, Jaana T.; Pehkonen, Jaakko; Hirvensalo, Mirja; Yang, Xiaolin; Hutri-Kähönen, Nina; Raitakari, Olli T.; Tammelin, Tuija H.

    2015-01-01

This study examined the relationship between income and physical activity by using three measures to illustrate daily physical activity: the self-reported physical activity index for leisure-time physical activity, pedometer-based total steps for overall daily physical activity, and pedometer-based aerobic steps that reflect continuous steps for more than 10 min at a time. The study population consisted of 753 adults from Finland (mean age 41.7 years; 64% women) who participated in 2011 in the follow-up of the ongoing Young Finns study. Ordinary least squares models were used to evaluate the associations between income and physical activity. The consistency of the results was explored by using register-based income information from Statistics Finland, employing the instrumental variable approach, and dividing the pedometer-based physical activity according to weekdays and weekend days. The results indicated that higher income was associated with higher self-reported physical activity for both genders. The results were robust to the inclusion of the control variables and the use of register-based income information. However, the pedometer-based results were gender-specific and depended on the measurement day (weekday vs. weekend day). In more detail, the association was positive for women and negative or nonexistent for men. By measurement day: among women, income was positively associated with aerobic steps regardless of the measurement day and with total steps measured on the weekend; among men, income was negatively associated with aerobic steps measured on weekdays. The results indicate that there is an association between income and physical activity, but the association is gender-specific and depends on the measurement type of physical activity. PMID:26317865
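As a rough illustration of the kind of ordinary least squares model with a gender-income interaction the study describes, here is a sketch on synthetic data; the variable names and the data-generating process are invented for illustration, not taken from the study.

```python
import numpy as np

# Synthetic stand-in for the study's data: 753 adults, with a hypothetical
# data-generating process where activity rises with income for women and
# falls with income for men.
rng = np.random.default_rng(0)
n = 753
income = rng.normal(3000.0, 800.0, n)        # hypothetical monthly income
female = rng.integers(0, 2, n).astype(float) # gender indicator
activity = 0.002 * income * female - 0.001 * income * (1.0 - female) \
           + rng.normal(0.0, 1.0, n)

# OLS with an income-by-gender interaction term, fitted by least squares.
X = np.column_stack([np.ones(n), income, female, income * female])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
# beta[1] is the income slope for men; beta[1] + beta[3] is the slope for women.
```

The interaction coefficient is what lets a single model recover the opposite-signed associations reported for men and women.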

  4. Income and Physical Activity among Adults: Evidence from Self-Reported and Pedometer-Based Physical Activity Measurements.

    PubMed

    Kari, Jaana T; Pehkonen, Jaakko; Hirvensalo, Mirja; Yang, Xiaolin; Hutri-Kähönen, Nina; Raitakari, Olli T; Tammelin, Tuija H

    2015-01-01

This study examined the relationship between income and physical activity by using three measures to illustrate daily physical activity: the self-reported physical activity index for leisure-time physical activity, pedometer-based total steps for overall daily physical activity, and pedometer-based aerobic steps that reflect continuous steps for more than 10 min at a time. The study population consisted of 753 adults from Finland (mean age 41.7 years; 64% women) who participated in 2011 in the follow-up of the ongoing Young Finns study. Ordinary least squares models were used to evaluate the associations between income and physical activity. The consistency of the results was explored by using register-based income information from Statistics Finland, employing the instrumental variable approach, and dividing the pedometer-based physical activity according to weekdays and weekend days. The results indicated that higher income was associated with higher self-reported physical activity for both genders. The results were robust to the inclusion of the control variables and the use of register-based income information. However, the pedometer-based results were gender-specific and depended on the measurement day (weekday vs. weekend day). In more detail, the association was positive for women and negative or nonexistent for men. By measurement day: among women, income was positively associated with aerobic steps regardless of the measurement day and with total steps measured on the weekend; among men, income was negatively associated with aerobic steps measured on weekdays. The results indicate that there is an association between income and physical activity, but the association is gender-specific and depends on the measurement type of physical activity.

  5. Paper-based sample-to-answer molecular diagnostic platform for point-of-care diagnostics.

    PubMed

    Choi, Jane Ru; Tang, Ruihua; Wang, ShuQi; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng

    2015-12-15

Nucleic acid testing (NAT), as a molecular diagnostic technique, including nucleic acid extraction, amplification and detection, plays a fundamental role in medical diagnosis for timely medical treatment. However, current NAT technologies require relatively high-end instrumentation and skilled personnel, and are time-consuming. These drawbacks make conventional NAT impractical in many resource-limited disease-endemic settings, leading to an urgent need to develop a fast and portable NAT diagnostic tool. Paper-based devices are typically robust, cost-effective and user-friendly, holding great potential for NAT at the point of care. In view of the escalating demand for low-cost diagnostic devices, we highlight the beneficial use of paper as a platform for NAT, the current state of its development, and the existing challenges preventing its widespread use. We suggest a strategy involving integrating all three steps of NAT into one single paper-based sample-to-answer diagnostic device for rapid medical diagnostics in the near future. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Adaptive time steps in trajectory surface hopping simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de

    2016-05-21

Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
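The adaptive-step idea can be sketched generically: shrink the time step only while a critical region is flagged, and restore the normal step afterwards. The gap-based trigger and all numerical values below are hypothetical stand-ins; the paper's actual criterion involves mixing and switching of active/inactive MRCI orbitals.

```python
def in_critical_region(gap, threshold=0.02):
    """Flag regions where two potential surfaces approach each other
    (hypothetical stand-in for the paper's orbital-mixing criterion)."""
    return gap < threshold

def run_trajectory(gap_of_t, t_end, dt_normal=0.5, refine=10):
    """Advance time with the normal step, switching to dt_normal/refine
    while a critical region is flagged; extra cost is incurred only there."""
    t, schedule = 0.0, []
    while t < t_end:
        dt = dt_normal / refine if in_critical_region(gap_of_t(t)) else dt_normal
        schedule.append((t, dt))
        t += dt
    return schedule
```

Because the refined step is used only inside flagged regions, the overall cost stays close to that of a constant-step run.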

  7. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  8. Improved p-type conductivity in Al-rich AlGaN using multidimensional Mg-doped superlattices

    PubMed Central

    Zheng, T. C.; Lin, W.; Liu, R.; Cai, D. J.; Li, J. C.; Li, S. P.; Kang, J. Y.

    2016-01-01

A novel multidimensional Mg-doped superlattice (SL) is proposed to enhance vertical hole conductivity in conventional Mg-doped AlGaN SLs, which generally suffer from a large potential barrier for holes. Electronic structure calculations within the first-principles theoretical framework indicate that the densities of states (DOS) of the valence band near the Fermi level are more delocalized along the c-axis than in conventional SLs, and the potential barrier significantly decreases. Hole concentration is greatly enhanced in the barrier of the multidimensional SL. Detailed comparisons of partial charges and decomposed DOS reveal that the improvement of vertical conductance may be ascribed to the stronger pz hybridization between Mg and N. Based on the theoretical analysis, highly conductive p-type multidimensional Al0.63Ga0.37N/Al0.51Ga0.49N SLs are grown with identified steps via metalorganic vapor-phase epitaxy. The hole concentration reaches up to 3.5 × 10^18 cm^-3, while the corresponding resistivity is reduced to 0.7 Ω cm at room temperature, a conductivity improvement of tens of times over conventional SLs. High hole concentration can be maintained even at 100 K. High p-type conductivity in Al-rich structural material is an important step for the future design of superior AlGaN-based deep ultraviolet devices. PMID:26906334

  9. Model Predictive Control-based gait pattern generation for wearable exoskeletons.

    PubMed

    Wang, Letian; van Asseldonk, Edwin H F; van der Kooij, Herman

    2011-01-01

This paper introduces a new method for controlling wearable exoskeletons that does not need predefined joint trajectories. Instead, it only needs basic gait descriptors such as step length, swing duration, and walking speed. End-point Model Predictive Control (MPC) is used to generate the online joint trajectories based on these gait parameters. The real-time ability and control performance of the method during the swing phase of the gait cycle are studied in this paper. Experiments were performed by helping a human subject swing his leg with different patterns in the LOPES gait trainer. Results show that the method is able to assist subjects in making steps with different step lengths and durations without predefined joint trajectories, and is fast enough for real-time implementation. Future study of the method will focus on controlling the exoskeletons over the entire gait cycle. © 2011 IEEE
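To illustrate the idea of generating a trajectory online from only two gait descriptors, the sketch below uses a minimal-jerk profile as a deliberately simpler stand-in for the paper's end-point MPC (which requires a dynamic model and an online solver); the function and sample values are hypothetical.

```python
def swing_trajectory(step_length, swing_duration, n=5):
    """Forward foot displacement during swing from two gait descriptors,
    using a minimal-jerk profile (a stand-in for end-point MPC)."""
    traj = []
    for i in range(n + 1):
        s = i / n * swing_duration
        tau = s / swing_duration               # normalized time in [0, 1]
        x = step_length * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
        traj.append((s, x))
    return traj

# e.g. a 0.5 m step over a 0.8 s swing (hypothetical gait parameters)
traj = swing_trajectory(0.5, 0.8)
```

Like the MPC formulation, this maps (step length, swing duration) to a smooth trajectory that starts and ends at rest, but it cannot enforce the constraints and dynamics that motivate MPC.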

  10. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, simultaneously recording spectra of thirty to a hundred thousand points on each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step consists of a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step, using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step includes integrating the two polarities for a final identification determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities), thus preserving the system's specificity for bio-detection.
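A minimal sketch of the three-stage pipeline (baseline removal, mass-to-charge calibration, dual-polarity score-based identification). Every routine below is a simplified stand-in, not the LLNL implementation: the baseline is a running minimum, the calibration a generic quadratic, and the score a cosine similarity rather than a rule tree.

```python
import numpy as np

def remove_baseline(trace, window=50):
    """Adaptive baseline removal: subtract a running minimum (a simplified
    stand-in for the instrument's adaptive baseline routine)."""
    pad = np.pad(trace, window, mode='edge')
    base = np.array([pad[i:i + 2 * window + 1].min() for i in range(len(trace))])
    return trace - base

def tof_to_mz(t, a, t0):
    """Polarity-dependent calibration: m/z is quadratic in flight time."""
    return a * (t - t0) ** 2

def identify(spec_pos, spec_neg, library):
    """Score both polarities against each library entry (cosine similarity
    as a placeholder score) and combine them for the final call."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    scores = {name: cos(spec_pos, p) + cos(spec_neg, n)
              for name, (p, n) in library.items()}
    return max(scores, key=scores.get)
```

Each stage operates independently per polarity until the final combination, which is what makes the pipeline straightforward to distribute across multiple DSPs.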

  11. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. 
The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
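Step 6 of the process above (calculating time series features as latent variables) can be sketched as follows; the window length, sampling interval, feature choices and heart-rate values are hypothetical illustrations.

```python
import numpy as np

def window_features(values, times):
    """Latent time-series features for one candidate variable over one
    time window: level, trend and variability."""
    slope = np.polyfit(times, values, 1)[0]   # deterioration shows up as trend
    return {"mean": float(np.mean(values)),
            "slope": float(slope),
            "std": float(np.std(values))}

# Hypothetical example: heart rate sampled every 5 min over a 1 h window.
hr = np.array([118, 121, 125, 124, 130, 133, 137, 140, 139, 145, 148, 152], float)
t = np.arange(len(hr)) * 5.0
feats = window_features(hr, t)
```

Features such as the slope capture the deterioration that a single-snapshot multivariable score cannot, which is the paper's core argument for a time series approach.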

  12. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. 
The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.

  13. Solar-driven thermo- and electrochemical degradation of nitrobenzene in wastewater: Adaptation and adoption of solar STEP concept.

    PubMed

    Gu, Di; Shao, Nan; Zhu, Yanji; Wu, Hongjun; Wang, Baohui

    2017-01-05

The STEP concept has successfully been demonstrated for driving chemical reactions by utilizing solar heat and electricity to minimize fossil energy use while maximizing the rate of thermo- and electrochemical reactions in both thermodynamics and kinetics. This investigation experimentally demonstrates that the STEP concept can be adapted and adopted efficiently for the degradation of nitrobenzene. By employing theoretical calculation and temperature-dependent cyclic voltammetry, the degradation potential of nitrobenzene was found to decrease markedly, and the current to rise greatly, as the temperature increased. Compared with conventional electrochemical methods, high efficiency and a fast degradation rate were markedly displayed due to the co-action of thermo- and electrochemical effects and the switch from indirect to direct electrochemical oxidation of nitrobenzene. A clear mechanism of nitrobenzene degradation by STEP can be schematically proposed and discussed by combining thermo- and electrochemistry, based on the analysis of the HPLC, UV-vis and degradation data. This theory and experiment provide a pilot for the treatment of nitrobenzene wastewater with high efficiency, clean operation and a low carbon footprint, powered by solar energy without any other input of energy or chemicals. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Monitoring gait in multiple sclerosis with novel wearable motion sensors.

    PubMed

    Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J

    2017-01-01

Mobility impairment is common in people with multiple sclerosis (PwMS), and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing it to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy ± precision for the BioStampRC was 2 ± 2 steps error for step number, 6 ± 9 ms error for stride time and 6 ± 7 ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy ± precision (25 ± 19 ms error, 5 ± 4% relative error) among the parameters. GT3X had the least accuracy ± precision (8 ± 14% relative error) in the step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p < 0.01).
BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.
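The "accuracy ± precision" convention used above (error, and the spread of that error) can be sketched as follows; treating accuracy as mean absolute error and precision as its standard deviation is our reading of the abstract, not a formula quoted from the paper's methods, and the sample values are hypothetical.

```python
import numpy as np

def accuracy_precision(measured, reference):
    """Accuracy as mean absolute error against the reference device, and
    precision as the standard deviation of that error."""
    err = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    return float(err.mean()), float(err.std())

# e.g. hypothetical step times (s) from the wearable vs. the MTx reference
acc, prec = accuracy_precision([0.52, 0.49, 0.51], [0.51, 0.50, 0.50])
```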

  15. Remote sensing of desert dust aerosols over the Sahel : potential use for health impact studies

    NASA Astrophysics Data System (ADS)

    Deroubaix, A. D.; Martiny, N. M.; Chiapello, I. C.; Marticorena, B. M.

    2012-04-01

Since the end of the 1970s, remote sensing has monitored desert dust aerosols through their absorption and scattering properties, providing the long time series necessary for air quality and health impact studies. In the Sahel, a major health problem is the Meningococcal Meningitis (MM) epidemics that occur during the dry season: dust has been suspected to be crucial to understanding their onsets and dynamics. The Aerosol absorption Index (AI) is a semi-quantitative index derived from TOMS and OMI observations in the UV, available at a spatial resolution of 1° (1979-2005) and 0.25° (2005-today), respectively. The comparison of the OMI-AI and AERONET Aerosol Optical Thickness (AOT) shows good agreement at a daily time-step (correlation ~0.7). The correlation of the OMI-AI with the particulate matter (PM) measurements of the Sahelian Dust Transect is lower (~0.4) at a daily time-step but increases at a weekly time-step (~0.6). The OMI-AI reproduces the dust seasonal cycle over the Sahel, and we conclude that the OMI-AI product at 0.25° spatial resolution is suitable for health impact studies, especially at a weekly epidemiological time-step. Although the AI is sensitive to aerosol altitude, it provides daily spatial information on dust. A preliminary analysis of the link between weekly OMI AI and weekly WHO epidemiological data sets is presented for Mali and Niger, showing good agreement between the AI and the onset of the MM epidemics with a constant lag (between 1 and 2 weeks). The next step of this study is to analyse a longer AI time series built from the TOMS and OMI data sets. Based on the weekly PM/AI ratios at 2 stations of the Sahelian Dust Transect, a spatialized proxy for PM from the AI has been developed.
The AI as a proxy for PM, together with other climate variables such as temperature, relative humidity and wind (intensity and direction), could then be used to analyze the link between these variables and the MM epidemics in the most affected countries of Western Africa, which would be an important step towards a forecasting tool for epidemic risk in the region.
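The lag analysis between a weekly dust index and weekly case counts can be sketched as follows; the data here are synthetic (the study itself uses OMI AI and WHO surveillance series), and the correlation-at-each-lag approach is a generic illustration.

```python
import numpy as np

def lagged_correlation(ai, cases, max_lag=4):
    """Correlate the weekly dust index with weekly case counts at each lead
    time (in weeks); return the best lag and the correlation at each lag."""
    ai, cases = np.asarray(ai, float), np.asarray(cases, float)
    corrs = [float(np.corrcoef(ai[:len(ai) - k] if k else ai, cases[k:])[0, 1])
             for k in range(max_lag + 1)]
    return int(np.argmax(corrs)), corrs
```

Applied to real series, a stable best lag of 1-2 weeks would be consistent with the constant lag reported between AI and epidemic onset.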

  16. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, huge amounts of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as: the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, and fully analytical and one-step methods based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method using 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used for real-time 3D holographic display in the future.

  17. Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors

    NASA Astrophysics Data System (ADS)

    Rottmann, J.; Keall, P.; Berbeco, R.

    2013-06-01

Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step, at time t = t0, a frame is grabbed; in the second step, the frame is processed with the STiL algorithm to get the tumor position at t = t0; in the third step, the tumor position at t = t0 + δt is predicted to overcome system latencies; and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = t0 + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for a MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm.
We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted EPID.
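The latency-compensation step can be sketched with a simple linear extrapolator standing in for the trained prediction model described in the abstract; the function and the sample centroid positions are hypothetical.

```python
def predict_position(history, latency):
    """Linear extrapolation of the tumor centroid over the system latency
    (a simple stand-in for the paper's trained prediction model)."""
    (t0, x0), (t1, x1) = history[-2], history[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * latency

# Hypothetical centroid positions (mm) from two EPID frames at 12.86 Hz,
# extrapolated over the measured ~0.23 s latency:
pos = predict_position([(0.000, 10.0), (0.078, 10.4)], latency=0.230)
```

In practice, breathing motion is periodic rather than linear, which is why a model trained on pre-treatment motion data outperforms pure extrapolation.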

  18. STEPS at CSUN: Increasing Retention of Engineering and Physical Science Majors

    NASA Astrophysics Data System (ADS)

    Pedone, V. A.; Cadavid, A. C.; Horn, W.

    2012-12-01

STEPS at CSUN seeks to increase the retention rate of first-time freshmen in engineering, math, and physical science (STEM) majors from ~55% to 65%. About 40% of STEM first-time freshmen start in College Algebra because they do not take or do not pass the Mathematics Placement Test (MPT). This lengthens time to graduation, which contributes to dissatisfaction with the major. STEPS at CSUN has made substantial changes to the administration of the MPT. Initial data show increases in the number of students who take the test and who place out of College Algebra, as well as increases in overall scores. STEPS at CSUN also funded the development of supplemental labs for Trigonometry and Calculus I and II, in partnership with similar labs created by the Math Department for College Algebra and Precalculus. These labs are open to all students, but are mandatory for at-risk students who have low scores on the MPT, low grades in the prerequisite course, or who failed the class the first time. Initial results are promising. Comparison of the grades of 46 Fall 2010 "at-risk" students without the lab to those of 36 Fall 2011 students who enrolled in the supplementary lab shows that D-F grades decreased by 10% and A-B grades increased by 27%. A final retention strategy is aimed at students in the early stages of their majors. At CSUN the greatest loss of STEM majors occurs between sophomore-level and junior-level coursework, because course difficulty increases and aspirations to potential careers weaken. The Summer Interdisciplinary Team Experience (SITE) is an intensive 3-week summer program that engages small teams of students from diverse STEM majors in faculty-mentored, team-based problem solving. This experience simulates professional work and creates strong bonds between students and between students and faculty mentors.
The first two cohorts of students to participate in SITE indicate that this experience has positively impacted their motivation to complete their STEM degrees.

  19. Some studies on a solid-state sulfur probe for coal gasification systems

    NASA Technical Reports Server (NTRS)

    Jacob, K. T.; Rao, D. B.; Nelson, H. G.

    1978-01-01

As part of a program for the development of a sulfur probe for monitoring the sulfur potential in coal gasification reactors, an investigation was conducted into the efficiency of the solid electrolyte cell Ar+H2+H2S/CaS+CaF2+(Pt)//CaF2//(Pt)+CaF2+CaS/H2S+H2+Ar. A demonstration is provided of the theory, design, and operation of a solid-state sulfur probe based on a CaF2 electrolyte. It was found that the cell responds to changes in sulfur potential in the manner predicted by the Nernst equation. The response time of the cell at 1225 K, after a small change in temperature or gas composition, was 2.5 hr, while at the lower temperature of 990 K the response time was approximately 9 hr. The cell emf was insensitive to a moderate increase in the flow rate of the test gas and/or the reference gas. The exact factors governing the slow response time of galvanic cells based on a CaF2 electrolyte have not yet been determined; the rate-limiting steps may be either the kinetics of the electrode reactions or the rate of transport through the electrolyte.
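The Nernstian response the probe is tested against can be sketched directly. Taking n = 4 assumes an overall electrode reaction of the form S2 + 4e- → 2 S(2-); that electron count, like the sample pressures, is our assumption for illustration rather than a figure from the paper.

```python
from math import log

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst_emf(p_s2_test, p_s2_ref, temperature, n=4):
    """Nernstian cell emf set by the ratio of sulfur partial pressures at
    the two electrodes (n = 4 electrons assumed per S2, see lead-in)."""
    return (R * temperature) / (n * F) * log(p_s2_test / p_s2_ref)

# At 1225 K, a tenfold sulfur-pressure ratio gives roughly 61 mV.
emf = nernst_emf(10.0, 1.0, 1225.0)
```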

  20. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
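    Local error-based step size control of this kind is commonly implemented as an elementary controller that grows or shrinks the step from a local error estimate. The sketch below is a generic version with assumed safety factor and clipping bounds, not the authors' exact controller:

    ```python
    def adjust_step(dt, err, tol, order=1, fac_min=0.5, fac_max=2.0, safety=0.9):
        """Elementary local-error step-size controller.

        Grows the time step when the local error estimate `err` is below the
        tolerance `tol` and shrinks it otherwise; the clipping bounds keep the
        step size from changing too abruptly between consecutive steps.
        """
        if err == 0.0:
            return dt * fac_max
        factor = safety * (tol / err) ** (1.0 / (order + 1))
        return dt * min(fac_max, max(fac_min, factor))
    ```

    An error exactly at tolerance shrinks the step by the safety factor alone; very large or very small errors are clipped to the bounds.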

  1. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
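    Floor-level segmentation from the Z-distribution of points can be sketched as a histogram peak search: bins with unusually many points mark horizontal slabs (floors and ceilings). The bin size and peak threshold below are hypothetical, and this is only a toy version of the first step of such a pipeline:

    ```python
    import numpy as np

    def find_horizontal_slabs(z, bin_size=0.05, peak_frac=0.05):
        """Return Z-bin centers holding more than `peak_frac` of all points.

        Dense horizontal concentrations of points along Z are candidate
        floor/ceiling levels; sparse bins correspond to walls and clutter.
        """
        edges = np.arange(z.min(), z.max() + bin_size, bin_size)
        counts, edges = np.histogram(z, bins=edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers[counts > peak_frac * z.size]
    ```

    On a synthetic cloud with dense slabs at z = 0 m and z = 3 m plus scattered wall points, both levels are recovered while the scatter stays below threshold.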

  2. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  3. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
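    As a point of comparison, a common (non-cascading) knee heuristic on an already-sorted curve picks the point farthest from the chord joining the curve's endpoints. The sketch below implements that simple heuristic, not the paper's probability-driven cascading top-k algorithm:

    ```python
    import math

    def knee_point(sorted_vals):
        """Index of the point farthest from the line through the endpoints.

        Treats point i as (i, sorted_vals[i]) and uses the standard
        point-to-line distance formula to find the sharpest bend.
        """
        n = len(sorted_vals)
        x1, y1 = 0, sorted_vals[0]
        x2, y2 = n - 1, sorted_vals[-1]
        denom = math.hypot(x2 - x1, y2 - y1)
        best_i, best_d = 0, -1.0
        for i, y in enumerate(sorted_vals):
            d = abs((y2 - y1) * i - (x2 - x1) * y + x2 * y1 - y2 * x1) / denom
            if d > best_d:
                best_i, best_d = i, d
        return best_i
    ```

    On an L-shaped descending curve the heuristic lands at the bend, which is the behavior the paper's algorithm aims to reproduce with fewer sorted points.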

  4. DISPATCH: a numerical simulation framework for the exa-scale era - I. Fundamentals

    NASA Astrophysics Data System (ADS)

    Nordlund, Åke; Ramsey, Jon P.; Popovas, Andrius; Küffmeier, Michael

    2018-06-01

    We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of `patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks are controlled by a rank-local `dispatcher' which selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary, for example, with some solving the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others applying particle-in-cell (PIC) methods. Tasks do not have to be grid based, while tasks that are, may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time-steps are determined and applied locally; this allows potentially large reductions in the total number of updates required in cases when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load balancing algorithm that operates `locally' and aims to simultaneously minimize load and communication imbalance. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework, due to more efficient cache usage, vectorization, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling.

  5. Habilitations as a bottleneck? A retrospective analysis of gender differences at the Medical University of Vienna.

    PubMed

    Steinböck, Sandra; Reichel, Eva; Pichler, Susanna; Gutiérrez-Lobos, Karin

    2016-04-01

    The share of female physicians who drop out of a university career increases disproportionately with every career step. In this project, we analysed careers at the Medical University of Vienna (formerly the Medical Faculty at the University of Vienna) in the time span from 1992 to 2012 to explore the particular role of habilitations as a potential obstacle for women striving to pursue a career in science. To gain both a macro- and micro-view of the phenomenon of habilitations, a descriptive analysis of the data found in the archive of the Medical University of Vienna was carried out as a first step. Building on these results, structured interviews with the female physicians who were involved in the habilitation procedures at that time were conducted. While hardly any gender-based differences or discrimination can be reported for the habilitation procedures themselves, the research clearly reveals that the disparity in habilitations by men and women is a manifestation of unequal access to informal networks, differences regarding integration in the scientific community and available time resources. It is unlikely that the rising number of women completing doctoral studies in the field of medicine will automatically lead to a harmonisation of habilitation numbers. The analysis of existing gender-based differences with regard to habilitations in the field of medicine shows that they result from multiple processes that are subtle and relatively resistant to change.

  6. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
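    The Metropolis acceptance step that enforces detailed balance takes the standard form below; in this generic sketch, `delta_e` stands for the change in the reference (expensive) Hamiltonian over the proposed trajectory segment and `beta` for the inverse temperature, both assumptions of this toy version rather than details from the paper:

    ```python
    import math
    import random

    def metropolis_accept(delta_e, beta, uniform=random.random):
        """Accept a proposed move with probability min(1, exp(-beta * delta_e)).

        Downhill moves (delta_e <= 0) are always accepted; uphill moves are
        accepted stochastically, which is what restores consistency with the
        Boltzmann distribution.
        """
        if delta_e <= 0.0:
            return True
        return uniform() < math.exp(-beta * delta_e)
    ```

    Over many trials the acceptance fraction for an uphill move of 1 kT converges to exp(-1) ≈ 0.37.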

  7. Application of a nonrandomized stepped wedge design to evaluate an evidence-based quality improvement intervention: a proof of concept using simulated data on patient-centered medical homes.

    PubMed

    Huynh, Alexis K; Lee, Martin L; Farmer, Melissa M; Rubenstein, Lisa V

    2016-10-21

    Stepped wedge designs have gained recognition as a method for rigorously assessing implementation of evidence-based quality improvement interventions (QIIs) across multiple healthcare sites. In theory, this design uses random assignment of sites to successive QII implementation start dates based on a timeline determined by evaluators. However, in practice, QII timing is often controlled more by site readiness. We propose an alternate version of the stepped wedge design that does not assume the randomized timing of implementation while retaining the method's analytic advantages and applying to a broader set of evaluations. To test the feasibility of a nonrandomized stepped wedge design, we developed simulated data on patient care experiences and on QII implementation that had the structures and features of the expected data from a planned QII. We then applied the design in anticipation of performing an actual QII evaluation. We used simulated data on 108,000 patients to model nonrandomized stepped wedge results from QII implementation across nine primary care sites over 12 quarters. The outcome we simulated was change in a single self-administered question on access to care used by Veterans Health Administration (VA), based in the United States, as part of its quarterly patient ratings of quality of care. Our main predictors were QII exposure and time. Based on study hypotheses, we assigned values of 4 to 11 % for improvement in access when sites were first exposed to implementation and 1 to 3 % improvement in each ensuing time period thereafter when sites continued with implementation. We included site-level (practice size) and respondent-level (gender, race/ethnicity) characteristics that might account for nonrandomized timing in site implementation of the QII. We analyzed the resulting data as a repeated cross-sectional model using HLM 7 with a three-level hierarchical data structure and an ordinal outcome. 
Levels in the data structure included patient ratings, timing of adoption of the QII, and primary care site. We were able to demonstrate a statistically significant improvement in adoption of the QII, as postulated in our simulation. The linear time trend while sites were in the control state was not significant, also as expected in the real life scenario of the example QII. We concluded that the nonrandomized stepped wedge design was feasible within the parameters of our planned QII with its data structure and content. Our statistical approach may be applicable to similar evaluations.

  8. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.

  9. A physical action potential generator: design, implementation and evaluation.

    PubMed

    Latorre, Malcolm A; Chan, Adrian D C; Wårdell, Karin

    2015-01-01

    The objective was to develop a physical action potential generator (Paxon) with the ability to generate a stable, repeatable, programmable, and physiological-like action potential. The Paxon has an equivalent of 40 nodes of Ranvier that were mimicked using resin-embedded gold wires (Ø = 20 μm). These nodes were software controlled and the action potentials were initiated by a start trigger. Clinically used Ag-AgCl electrodes were coupled to the Paxon for functional testing. The Paxon's action potential parameters were tunable using a second-order mathematical equation to generate physiologically relevant output, which was accomplished by varying the number of nodes involved (1-40 in incremental steps of 1) and the node drive potential (0-2.8 V in 0.7 mV steps), while keeping a fixed inter-nodal timing and test electrode configuration. A system noise floor of 0.07 ± 0.01 μV was calculated over 50 runs. A differential test electrode recorded a peak positive amplitude of 1.5 ± 0.05 mV (gain of 40x) at time 196.4 ± 0.06 ms, including a post-trigger delay. The Paxon's programmable, action-potential-like signal can potentially serve as a validation test platform for medical surface electrodes and their attached systems.

  10. Three dimensional atom-diatom quantum reactive scattering calculations using absorbing potential: speed up of the propagation scheme.

    PubMed

    Stoecklin, T

    2008-09-01

    In this paper a new propagation scheme is proposed for atom-diatom reactive calculations using a negative imaginary potential (NIP) within a time independent approach. It is based on the calculation of a rotationally adiabatic basis set, the neglected coupling terms being re-added in the following step of the propagation. The results of this approach, which we call two steps rotationally adiabatic coupled states calculations (2-RACS), are compared to those obtained using the adiabatic DVR method (J. C. Light and Z. Bazic, J. Chem. Phys., 1987, 87, 4008; C. Leforestier, J. Chem. Phys., 1991, 94, 6388), to the NIP coupled states results of the team of Baer (D. M. Charutz, I. Last and M. Baer, J. Chem. Phys., 1997, 106, 7654) and to the exact results obtained by Zhang (J. Z. H. Zhang and W. H. Miller, J. Chem. Phys., 1989, 91, 1528) for the D + H(2) reaction. The example of implementation of our method of computation of the adiabatic basis will be given here in the coupled states approximation, as this method has proved to be very efficient in many cases and is quite fast.

  11. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  12. Multi-linear regression of sea level in the south west Pacific as a first step towards local sea level projections

    NASA Astrophysics Data System (ADS)

    Kumar, Vandhna; Meyssignac, Benoit; Melet, Angélique; Ganachaud, Alexandre

    2017-04-01

    Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years is up to 3 times the global average. In this study, we attempt to reconstruct sea levels at selected sites in the region (Suva and Lautoka in Fiji, and Noumea in New Caledonia) as a multiple-linear regression of atmospheric and oceanic variables. We focus on interannual-to-decadal scale variability and lower frequencies (including the global mean sea level rise) over the 1979-2014 period. Sea levels are taken from tide gauge records and the ORAS4 reanalysis dataset, and are expressed as a sum of steric and mass changes as a preliminary step. The key development in our methodology is using leading wind stress curl as a proxy for the thermosteric component. This is based on the knowledge that wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. The analysis is primarily based on correlation between local sea level and selected predictors, the dominant one being wind stress curl. In the first step, proxy boxes for wind stress curl are determined via regions of highest correlation. The proportion of sea level explained via linear regression is then removed, leaving a residual. This residual is then correlated with other locally acting potential predictors: halosteric sea level, the zonal and meridional wind stress components, and sea surface temperature. The statistically significant predictors are used in a multi-linear regression function to simulate the observed sea level. The method is able to reproduce between 40 and 80% of the variance in observed sea level. Based on the skill of the model, it has high potential for sea level projection and downscaling studies.
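    The multi-linear regression step can be sketched with ordinary least squares. The synthetic predictor arrays in the usage below are placeholders standing in for the wind stress curl proxy and the other local predictors, not the study's data:

    ```python
    import numpy as np

    def fit_multilinear(predictors, sea_level):
        """Ordinary least squares: sea_level ≈ intercept + sum_i c_i * predictor_i.

        Returns the coefficient vector (intercept first) and the fraction of
        variance explained (R^2).
        """
        X = np.column_stack([np.ones(len(sea_level))] + list(predictors))
        coeffs, *_ = np.linalg.lstsq(X, sea_level, rcond=None)
        fitted = X @ coeffs
        ss_res = float(np.sum((sea_level - fitted) ** 2))
        ss_tot = float(np.sum((sea_level - np.mean(sea_level)) ** 2))
        return coeffs, 1.0 - ss_res / ss_tot
    ```

    The R^2 value returned here corresponds to the "proportion of variance explained" the abstract reports per site.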

  13. Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.

    PubMed

    Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K

    2017-07-01

    Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance an extended base of support can be assistive but healthy young adults tend to walk with relatively narrower steps compared to vulnerable populations (e.g. older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbed optimum coupling of muscle functions, leading to additional muscle work and associated reduction of gait economy. Young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on preferred walking data, two parallel lines were drawn on the walkway to indicate 50% larger step width and participants targeted the lines with their heels as they walked. In addition to step width that defined walking conditions, other spatio-temporal gait parameters including step length, double support time and single support time were obtained. Average hip muscle forces during swing were modeled. Results showed that in wide step walking step length increased, Gluteus Minimus muscles were more active while Gracilis and Adductor Longus revealed considerably reduced forces. In conclusion, greater use of abductors and loss of adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and other pathological populations.

  14. Quick foot placement adjustments during gait are less accurate in individuals with focal cerebellar lesions.

    PubMed

    Hoogkamer, Wouter; Potocanac, Zrinka; Van Calenbergh, Frank; Duysens, Jacques

    2017-10-01

    Online gait corrections are frequently used to restore gait stability and prevent falling. They require shorter response times than voluntary movements which suggests that subcortical pathways contribute to the execution of online gait corrections. To evaluate the potential role of the cerebellum in these pathways we tested the hypotheses that online gait corrections would be less accurate in individuals with focal cerebellar damage than in neurologically intact controls and that this difference would be more pronounced for shorter available response times and for short step gait corrections. We projected virtual stepping stones on an instrumented treadmill while some of the approaching stepping stones were shifted forward or backward, requiring participants to adjust their foot placement. Varying the timing of those shifts allowed us to address the effect of available response time on foot placement error. In agreement with our hypothesis, individuals with focal cerebellar lesions were less accurate in adjusting their foot placement in reaction to suddenly shifted stepping stones than neurologically intact controls. However, the cerebellar lesion group's foot placement error did not increase more with decreasing available response distance or for short step versus long step adjustments compared to the control group. Furthermore, foot placement error for the non-shifting stepping stones was also larger in the cerebellar lesion group as compared to the control group. Consequently, the reduced ability to accurately adjust foot placement during walking in individuals with focal cerebellar lesions appears to be a general movement control deficit, which could contribute to increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Time-resolved spectral characterization of ring cavity surface emitting and ridge-type distributed feedback quantum cascade lasers by step-scan FT-IR spectroscopy.

    PubMed

    Brandstetter, Markus; Genner, Andreas; Schwarzer, Clemens; Mujagic, Elvis; Strasser, Gottfried; Lendl, Bernhard

    2014-02-10

    We present the time-resolved comparison of pulsed 2nd order ring cavity surface emitting (RCSE) quantum cascade lasers (QCLs) and pulsed 1st order ridge-type distributed feedback (DFB) QCLs using a step-scan Fourier transform infrared (FT-IR) spectrometer. Laser devices were part of QCL arrays and fabricated from the same laser material. Required grating periods were adjusted to account for the grating order. The step-scan technique provided a spectral resolution of 0.1 cm(-1) and a time resolution of 2 ns. As a result, it was possible to gain information about the tuning behavior and potential mode-hops of the investigated lasers. Different cavity-lengths were compared, including 0.9 mm and 3.2 mm long ridge-type and 0.97 mm (circumference) ring-type cavities. RCSE QCLs were found to have improved emission properties in terms of line-stability, tuning rate and maximum emission time compared to ridge-type lasers.

  16. A qualitative assessment of Toxoplasma gondii risk in ready-to-eat smallgoods processing.

    PubMed

    Mie, Tanya; Pointon, Andrew M; Hamilton, David R; Kiermeier, Andreas

    2008-07-01

    Toxoplasma gondii is one of the most common parasitic infections of humans and other warm-blooded animals. In most adults, it does not cause serious illness, but severe disease may result from infection in fetuses and immunocompromised people. Consumption of raw or undercooked meats has consistently been identified as an important source of exposure to T. gondii. Several studies indicate the potential failure to inactivate T. gondii in the processing of cured meat products. This article presents a qualitative risk-based assessment of the processing of ready-to-eat smallgoods, which include cooked or uncooked fermented meat, pâté, dried meat, slow cured meat, luncheon meat, and cooked muscle meat including ham and roast beef. The raw meat ingredients are rated with respect to their likelihood of containing T. gondii cysts, and an adjustment is made based on whether all the meat from a particular source is frozen. Next, the effectiveness of common processing steps to inactivate T. gondii cysts is assessed, including the addition of spices, nitrates, nitrites and salt, the use of fermentation, smoking and heat treatment, and the time and temperature during maturation. It is concluded that processing steps that may be effective in the inactivation of T. gondii cysts include freezing, heat treatment, and cooking, and the interaction between salt concentration, maturation time, and temperature. The assessment is illustrated using a Microsoft Excel-based software tool that was developed to facilitate the easy assessment of four hypothetical smallgoods products.

  17. One-step fabrication of multifunctional micromotors

    NASA Astrophysics Data System (ADS)

    Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y.

    2015-08-01

    Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications. Electronic supplementary information (ESI) available: Videos S1-S4 and Fig. S1-S3. See DOI: 10.1039/c5nr03574k

  18. Rapid detection of Enterovirus and Coxsackievirus A10 by a TaqMan based duplex one-step real time RT-PCR assay.

    PubMed

    Chen, Jingfang; Zhang, Rusheng; Ou, Xinhua; Yao, Dong; Huang, Zheng; Li, Linzhi; Sun, Biancheng

    2017-06-01

    A TaqMan based duplex one-step real time RT-PCR (rRT-PCR) assay was developed for the rapid detection of Coxsackievirus A10 (CV-A10) and other enterovirus (EVs) in clinical samples. The assay was fully evaluated and found to be specific and sensitive. When applied in 115 clinical samples, a 100% diagnostic sensitivity in CV-A10 detection and 97.4% diagnostic sensitivity in other EVs were found. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    PubMed

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  20. Energy shift and conduction-to-valence band transition mediated by a time-dependent potential barrier in graphene

    NASA Astrophysics Data System (ADS)

    Chaves, Andrey; da Costa, D. R.; de Sousa, G. O.; Pereira, J. M.; Farias, G. A.

    2015-09-01

    We investigate the scattering of a wave packet describing low-energy electrons in graphene by a time-dependent finite-step potential barrier. Our results demonstrate that, after Klein tunneling through the barrier, the electron acquires an extra energy which depends on the rate of change of the barrier height with time. If this rate is negative, the electron loses energy and ends up as a valence band state after leaving the barrier, which effectively behaves as a positively charged quasiparticle.

  1. Numerical modeling of surface wave development under the action of wind

    NASA Astrophysics Data System (ADS)

    Chalikov, Dmitry

    2018-06-01

    The numerical modeling of two-dimensional surface wave development under the action of wind is performed. The model is based on three-dimensional equations of potential motion with a free surface written in a surface-following nonorthogonal curvilinear coordinate system in which depth is counted from a moving surface. A three-dimensional Poisson equation for the velocity potential is solved iteratively. A Fourier transform method, a second-order accuracy approximation of vertical derivatives on a stretched vertical grid and fourth-order Runge-Kutta time stepping are used. Both the input energy to waves and dissipation of wave energy are calculated on the basis of earlier developed and validated algorithms. A one-processor version of the model for PC allows us to simulate an evolution of the wave field with thousands of degrees of freedom over thousands of wave periods. A long-time evolution of a two-dimensional wave structure is illustrated by the spectra of wave surface and the input and output of energy.
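    The fourth-order Runge-Kutta time stepping the model uses takes the classical single-step form below; this is a generic sketch of the scheme, not the wave model's solver:

    ```python
    def rk4_step(f, t, y, dt):
        """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y).

        Four stage evaluations of the right-hand side are combined with
        weights 1/6, 2/6, 2/6, 1/6 to advance the state by one step dt.
        """
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    ```

    Integrating dy/dt = y from y(0) = 1 to t = 1 with 100 such steps reproduces e to within roughly the scheme's O(dt^4) global error.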

  2. Cosmogenic 36Cl in karst waters: Quantifying contributions from atmospheric and bedrock sources

    NASA Astrophysics Data System (ADS)

    Johnston, V. E.; McDermott, F.

    2009-12-01

Improved reconstructions of cosmogenic isotope production through time are crucial to understand past solar variability. As a preliminary step to derive atmospheric 36Cl/Cl solar proxy time-series from speleothems, we quantify 36Cl sources in cave dripwaters. Atmospheric 36Cl fallout rates are a potential proxy for solar output; however, extraneous 36Cl derived from in-situ production in cave host-rocks could complicate the solar signal. Results from numerical modeling and preliminary geochemical data presented here show that the atmospheric 36Cl source dominates in many, but not all, cave dripwaters. At favorable low-elevation, mid-latitude sites, 36Cl-based speleothem solar irradiance reconstructions could extend back to 500 ka, with a possible centennial-scale temporal resolution. This would represent a marginal improvement in resolution compared with existing polar ice core records, with the added advantages of a wider geographic range, independent U-series constrained chronology, and the potential for contemporaneous climate signals within the same speleothem material.

  3. Patient-reported outcome measures versus inertial performance-based outcome measures: A prospective study in patients undergoing primary total knee arthroplasty.

    PubMed

    Bolink, S A A N; Grimm, B; Heyligers, I C

    2015-12-01

Outcome assessment of total knee arthroplasty (TKA) by subjective patient-reported outcome measures (PROMs) may not fully capture the functional (dis-)abilities of relevance. Objective performance-based outcome measures could provide distinct information. An ambulant inertial measurement unit (IMU) allows kinematic assessment of physical performance and could potentially be used for routine follow-up. The aim was to investigate the responsiveness of IMU measures in patients following TKA and to compare outcomes with conventional PROMs. Patients with end-stage knee OA (n = 20, m/f = 7/13; age 67.4 ± 7.7 years, mean ± standard deviation) were measured preoperatively and one year postoperatively. IMU measures were derived during gait, sit-stand transfers and block step-up transfers. PROMs were assessed using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Knee Society Score (KSS). Responsiveness was calculated by the effect size; correlations were calculated with Spearman's rho correlation coefficient. One year after TKA, patients performed significantly better at gait, sit-to-stand transfers and block step-up transfers. Measures of time and kinematic IMU measures demonstrated significant improvements postoperatively for each performance-based test. The largest improvement was found in block step-up transfers (effect size = 0.56-1.20). WOMAC function score and KSS function score demonstrated moderate correlations (Spearman's rho = 0.45-0.74) with some of the physical performance-based measures pre- and postoperatively. To characterize the changes in physical function after TKA, PROMs could be supplemented by performance-based measures, assessing function during different activities and allowing kinematic characterization with an ambulant IMU.

  4. Step by Step: Creating a Community-Based Transition Program for Students with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Hartman, Melissa A.

    2009-01-01

    Many students with intellectual disabilities want to graduate with their peers and move on to the next phase of their lives. By the time students have reached age 18, most have exhausted the coursework the school system has to offer, and they have yet to master the skills necessary for employment and independent living. Community-based transition…

  5. On the high frequency transfer of mechanical stimuli from the surface of the head to the macular neuroepithelium of the mouse.

    PubMed

    Jones, Timothy A; Lee, Choongheon; Gaines, G Christopher; Grant, J W Wally

    2015-04-01

Vestibular macular sensors are activated by a shearing motion between the otoconial membrane and underlying receptor epithelium. Shearing motion and sensory activation in response to an externally induced head motion do not occur instantaneously. The mechanically reactive elastic and inertial properties of the intervening tissue introduce temporal constraints on the transfer of the stimulus to sensors. Treating the otoconial sensory apparatus as an overdamped second-order mechanical system, we measured the governing long time constant (T_L) for stimulus transfer from the head surface to epithelium. This provided the basis to estimate the corresponding upper cutoff for the frequency response curve for mouse otoconial organs. A velocity step excitation was used as the forcing function. Hypothetically, the onset of the mechanical response to a step excitation follows an exponential rise having the form Vel_shear = U(1 - e^(-t/T_L)), where U is the applied shearing velocity step amplitude. The response time of the otoconial apparatus was estimated based on the activation threshold of macular neural responses to step stimuli having durations between 0.1 and 2.0 ms. Twenty adult C57BL/6 J mice were evaluated. Animals were anesthetized. The head was secured to a shaker platform using a non-invasive head clip or implanted skull screws. The shaker was driven to produce a theoretical forcing step velocity excitation at the otoconial organ. Vestibular sensory evoked potentials (VsEPs) were recorded to measure the threshold for macular neural activation. The duration of the applied step motion was reduced systematically from 2 to 0.1 ms and response threshold determined for each duration (nine durations). Hypothetically, the threshold of activation will increase according to the decrease in velocity transfer occurring at shorter step durations. The relationship between neural threshold and stimulus step duration was characterized. 
Activation threshold increased exponentially as velocity step duration decreased below 1.0 ms. The time constants associated with the exponential curve were T_L = 0.50 ms for the head clip coupling and T_L = 0.79 ms for the skull screw preparation. These corresponded to upper -3 dB frequency cutoff points of approximately 318 and 201 Hz, respectively. T_L ranged from 224 to 379 across individual animals using the head clip coupling. The findings were consistent with a second-order mass-spring mechanical system. Threshold data were also fitted to underdamped models post hoc. The underdamped fits suggested natural resonance frequencies on the order of 278 to 448 Hz as well as the idea that macular systems in mammals are less damped than generally acknowledged. Although estimated indirectly, it is argued that these time constants reflect largely if not entirely the mechanics of transfer to the sensory apparatus. The estimated governing time constant of 0.50 ms for composite data predicts high frequency cutoffs of at least 318 Hz for the intact otoconial apparatus of the mouse.
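The reported cutoffs follow from the first-order relation f_c = 1/(2*pi*T_L); a small sketch reproducing the numbers in this record (the step-response expression is the one quoted in the abstract):

```python
import math

def cutoff_hz(tau_s):
    """-3 dB cutoff of a first-order low-pass with time constant tau: f_c = 1 / (2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau_s)

def vel_shear(u, t, tau):
    """Step response of the shear velocity: Vel_shear(t) = U * (1 - exp(-t/T_L))."""
    return u * (1.0 - math.exp(-t / tau))

print(round(cutoff_hz(0.50e-3)))   # head-clip coupling, T_L = 0.50 ms -> 318
print(round(cutoff_hz(0.79e-3)))   # skull-screw coupling, T_L = 0.79 ms -> 201
```

Both values match the approximately 318 and 201 Hz cutoffs reported above.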

  6. Environmental sentinel biomonitors: integrated response systems for monitoring toxic chemicals

    NASA Astrophysics Data System (ADS)

    van der Schalie, William H.; Reuter, Roy; Shedd, Tommy R.; Knechtges, Paul L.

    2002-02-01

    Operational environments for military forces are becoming potentially more dangerous due to the increased number, use, and misuse of toxic chemicals across the entire range of military missions. Defense personnel may be exposed to harmful chemicals as a result of industrial accidents or intentional or unintentional action of enemy, friendly forces, or indigenous populations. While there has been a significant military effort to enable forces to operate safely and survive and sustain operations in nuclear, biological, chemical warfare agent environments, until recently there has not been a concomitant effort associated with potential adverse health effects from exposures of deployed personnel to toxic industrial chemicals. To provide continuous real-time toxicity assessments across a broad spectrum of individual chemicals or chemical mixtures, an Environmental Sentinel Biomonitor (ESB) system concept is proposed. An ESB system will integrate data from one or more platforms of biologically-based systems and chemical detectors placed in the environment to sense developing toxic conditions and transmit time-relevant data for use in risk assessment, mitigation, and/or management. Issues, challenges, and next steps for the ESB system concept are described, based in part on discussions at a September 2001 workshop sponsored by the U.S. Army Center for Environmental Health Research.

  7. VIP: an integrated pipeline for metagenomics of virus identification and discovery

    PubMed Central

    Li, Yang; Wang, Hao; Nie, Kai; Zhang, Chen; Zhang, Yi; Wang, Ji; Niu, Peihua; Ma, Xuejun

    2016-01-01

Identification and discovery of viruses using next-generation sequencing (NGS) technology is a fast-developing area with potential wide application in clinical diagnostics, public health monitoring and novel virus discovery. However, the tremendous volume of sequence data from NGS studies poses great challenges, in both accuracy and speed, for the application of NGS. Here we describe VIP (“Virus Identification Pipeline”), a one-touch computational pipeline for virus identification and discovery from metagenomic NGS data. VIP performs the following steps to achieve its goal: (i) mapping and filtering out background-related reads, (ii) extensive classification of reads on the basis of nucleotide and remote amino acid homology, and (iii) multiple k-mer based de novo assembly and phylogenetic analysis to provide evolutionary insight. We validated the feasibility and veracity of this pipeline with sequencing results of various types of clinical samples and public datasets. VIP has also contributed to timely virus diagnosis (~10 min) in acutely ill patients, demonstrating its potential for unbiased NGS-based clinical studies that demand a short turnaround time. VIP is released under GPLv3 and is available for free download at: https://github.com/keylabivdc/VIP. PMID:27026381

  8. Personal computer study of finite-difference methods for the transonic small disturbance equation

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1989-01-01

    Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
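As a toy illustration of the mesh-size and time-step-size considerations discussed in this record, here is a minimal explicit finite-difference scheme for the one-dimensional wave equation with a CFL-limited time step; the grid, wave speed, and boundary conditions are illustrative, not those of the study:

```python
import math

# Explicit leapfrog scheme for u_tt = c^2 u_xx on x in [0, 1] with fixed ends.
c, nx = 1.0, 101
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c                  # CFL number c*dt/dx = 0.5 <= 1 for stability
nu2 = (c * dt / dx) ** 2

x = [i * dx for i in range(nx)]
u_old = [math.sin(math.pi * xi) for xi in x]   # initial displacement, zero velocity
u = u_old[:]                                   # crude first step (adequate here)
for _ in range(200):                           # advance to t = 1.0
    u_new = [0.0] * nx                         # fixed (reflecting) boundary values
    for i in range(1, nx - 1):
        u_new[i] = 2 * u[i] - u_old[i] + nu2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_old, u = u, u_new
# The standing wave sin(pi*x) has period 2, so at t = 1.0 it is near -sin(pi*x).
```

Raising dt above dx/c violates the CFL condition and the scheme blows up, which is the kind of time-step constraint the abstract's guidelines address.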

  9. Quantitative analysis of the thermal requirements for stepwise physical dormancy-break in seeds of the winter annual Geranium carolinianum (Geraniaceae)

    PubMed Central

    Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.

    2013-01-01

    Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17·2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2·0 and 3·5 and between 0·02 and 0·1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. 
The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
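The Q10 values quoted above follow the standard temperature-scaling rule for rates; a minimal sketch of the rule itself (not the authors' Gompertz thermal time model):

```python
def scale_rate(rate_t1, t1, t2, q10):
    """Temperature scaling of a process rate via the Q10 rule:
    rate(T2) = rate(T1) * Q10 ** ((T2 - T1) / 10)."""
    return rate_t1 * q10 ** ((t2 - t1) / 10.0)

# A Q10 of 2.0 doubles the rate for every 10 degree C rise (step I, sensitivity induction):
print(scale_rate(1.0, 20.0, 30.0, 2.0))   # -> 2.0
# A Q10 below 1 (as reported for step II, PY-break) means the rate falls as temperature rises:
print(scale_rate(1.0, 20.0, 30.0, 0.1))   # -> 0.1
```

The contrast between Q10 > 1 for step I and Q10 < 1 for step II is what supports the paper's conclusion that the two steps are chemically and physically controlled, respectively.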

  10. Slipping during side-step cutting: anticipatory effects and familiarization.

    PubMed

    Oliveira, Anderson Souza Castelo; Silva, Priscila Brito; Lund, Morten Enemark; Farina, Dario; Kersting, Uwe Gustav

    2014-04-01

The aim of the present study was to verify whether the expectation of perturbations while performing side-step cutting manoeuvres influences lower limb EMG activity, heel kinematics and ground reaction forces. Eighteen healthy men performed two sets of 90° side-step cutting manoeuvres. In the first set, 10 unperturbed trials (Base) were performed while stepping over a moveable force platform. In the second set, subjects were informed about the random possibility of perturbations to balance throughout 32 trials, of which eight were perturbed (Pert, 10 cm translation triggered at initial contact), and the others were "catch" trials (Catch). Center of mass velocity (CoMVEL), heel acceleration (HAC), ground reaction forces (GRF) and surface electromyography (EMG) from lower limb and trunk muscles were recorded for each trial. Surface EMG was analyzed prior to initial contact (PRE), during load acceptance (LA) and propulsion (PRP) periods of the stance phase. In addition, hamstrings-quadriceps co-contraction ratios (CCR) were calculated for these time-windows. The results showed no changes in CoMVEL, HAC, peak GRF and surface EMG PRE among conditions. However, during LA, there were increases in tibialis anterior EMG (30-50%) concomitant with reduced EMG for quadriceps muscles, gluteus and rectus abdominis for Catch and Pert conditions (15-40%). In addition, quadriceps EMG was still reduced during PRP (p<.05). Consequently, CCR was greater for Catch and Pert in comparison to Base (p<.05). These results suggest that muscle activity is modulated to anticipate potential instability in the lower limb joints and to assure safe completion of the task.
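As a rough sketch of how a hamstrings-quadriceps co-contraction ratio might be computed over one analysis window, assuming CCR is the ratio of mean rectified antagonist to agonist EMG (the paper's exact CCR formula is not given in the abstract and may differ):

```python
def mean_rect(emg):
    """Mean of the rectified (absolute-valued) EMG samples in one time window."""
    return sum(abs(s) for s in emg) / len(emg)

def ccr(hamstrings, quadriceps):
    """Hamstrings-quadriceps co-contraction ratio over one window, taken here
    simply as the ratio of mean rectified activities (an assumed definition)."""
    return mean_rect(hamstrings) / mean_rect(quadriceps)

# Illustrative windows in arbitrary units; a higher CCR indicates relatively
# more hamstring activity against the quadriceps, as reported for Catch/Pert.
print(ccr([1, -1, 1, -1], [2, -2, 2, -2]))   # -> 0.5
```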

  11. Hydrothermal atomic force microscopy observations of barite step growth rates as a function of the aqueous barium-to-sulfate ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracco, Jacquelyn N.; Gooijer, Yiscka; Higgins, Steven R.

The rate of growth of ionic minerals from solutions with varying aqueous cation:anion ratios may result in significant errors in mineralization rates predicted by commonly-used affinity-based rate equations. To assess the potential influence of solute stoichiometry on barite growth, step velocities on the barite (001) surface have been measured at 108 °C using hydrothermal atomic force microscopy (HAFM) at moderate supersaturation and as a function of the aqueous barium:sulfate ratio (r). Barite growth hillocks at r ~ 1 were bounded by ⟨120⟩ steps; however, at r < 1, kink site densities increased, steps followed a direction vicinal to ⟨120⟩, and the [010] steps developed. At r > 1, steps roughened and rounded as the kink site density increased. Step velocities peaked at r = 1 and decreased roughly symmetrically as a function of r, indicating the attachment rates of barium and sulfate ions are similar under these conditions. We hypothesize that the differences in our observations at high and low r arise from differences in the attachment rate constants for the obtuse and acute ⟨120⟩ steps. Based on results at low r, the data suggest the attachment rate constant for barium ions is similar for obtuse and acute steps. Based on results at high r, the data suggest the attachment rate constant for sulfate is greater for obtuse steps than acute steps. In conclusion, utilizing a step growth model developed by Stack and Grantham (2010), the experimental step velocities as a function of r were readily fit, while attempts to fit the data using a model developed by Zhang and Nancollas (1998) were less successful.

  12. Hydrothermal atomic force microscopy observations of barite step growth rates as a function of the aqueous barium-to-sulfate ratio

    DOE PAGES

    Bracco, Jacquelyn N.; Gooijer, Yiscka; Higgins, Steven R.

    2016-03-19

The rate of growth of ionic minerals from solutions with varying aqueous cation:anion ratios may result in significant errors in mineralization rates predicted by commonly-used affinity-based rate equations. To assess the potential influence of solute stoichiometry on barite growth, step velocities on the barite (001) surface have been measured at 108 °C using hydrothermal atomic force microscopy (HAFM) at moderate supersaturation and as a function of the aqueous barium:sulfate ratio (r). Barite growth hillocks at r ~ 1 were bounded by ⟨120⟩ steps; however, at r < 1, kink site densities increased, steps followed a direction vicinal to ⟨120⟩, and the [010] steps developed. At r > 1, steps roughened and rounded as the kink site density increased. Step velocities peaked at r = 1 and decreased roughly symmetrically as a function of r, indicating the attachment rates of barium and sulfate ions are similar under these conditions. We hypothesize that the differences in our observations at high and low r arise from differences in the attachment rate constants for the obtuse and acute ⟨120⟩ steps. Based on results at low r, the data suggest the attachment rate constant for barium ions is similar for obtuse and acute steps. Based on results at high r, the data suggest the attachment rate constant for sulfate is greater for obtuse steps than acute steps. In conclusion, utilizing a step growth model developed by Stack and Grantham (2010), the experimental step velocities as a function of r were readily fit, while attempts to fit the data using a model developed by Zhang and Nancollas (1998) were less successful.

  13. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications

    PubMed Central

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-01-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. 
PMID:24829517

  14. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications.

    PubMed

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-05-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
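The fixed-point linearization described above can be illustrated on a scalar toy problem: a constant "fixed-point" coefficient is reused in every iteration, so each step solves a linear equation, and the nonlinearity enters only through the residual on the right-hand side. The material law below is hypothetical, not a real B-H curve:

```python
# Scalar toy version of the fixed-point method: solve nu(B)*B = H for B,
# with a hypothetical nonlinear reluctivity nu(B) = 1 + B**2.
def nu(b):
    return 1.0 + b * b

H = 1.0
nu_fp = 4.0     # fixed coefficient; chosen to bound d(nu(B)*B)/dB on the range of interest
b = 0.0
for _ in range(200):
    # Each iteration is LINEAR in the unknown: the constant nu_fp plays the role
    # of the time-independent fixed-point permeability, and the nonlinear law
    # only contributes the residual H - nu(b)*b.
    b = b + (H - nu(b) * b) / nu_fp

print(abs(nu(b) * b - H) < 1e-9)   # residual of the nonlinear equation -> True
```

Because the linear operator is the same in every iteration, the harmonics in the full method stay decoupled within each nonlinear iteration step, exactly as the abstract describes.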

  15. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

Noisy low resolution (LR) images are always obtained in real applications, but many existing image magnification algorithms cannot get good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes advantage of both regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image and at the same time suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not serious. The proposed algorithm can also provide better visual quality on natural LR images.

  16. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

A computer-implemented physical signal analysis method includes two essential steps and the associated techniques for presenting the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer-implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  17. Saving Material with Systematic Process Designs

    NASA Astrophysics Data System (ADS)

    Kerausch, M.

    2011-08-01

Global competition is forcing the stamping industry to further increase quality, to shorten time-to-market and to reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economical success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, there are two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part. All requirements are defined according to a predefined set of die design standards with industrial relevance. In a first step, binder and addendum geometry is systematically checked for material saving potentials. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness versus splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with stochastic variation of input variables. With the proposed workflow a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.

  18. Development of an extended Kalman filter for the self-sensing application of a spring-biased shape memory alloy wire actuator

    NASA Astrophysics Data System (ADS)

    Gurung, H.; Banerjee, A.

    2016-02-01

This report presents the development of an extended Kalman filter (EKF) to harness the self-sensing capability of a shape memory alloy (SMA) wire actuating a linear spring. The stress and temperature of the SMA wire, constituting the state of the system, are estimated using the EKF from the measured change in electrical resistance (ER) of the SMA. The estimated stress is used to compute the change in length of the spring, eliminating the need for a displacement sensor. The system model used in the EKF comprises the heat balance equation and the constitutive relation of the SMA wire coupled with the force-displacement behavior of a spring. Both explicit and implicit approaches are adopted to evaluate the system model at each time-update step of the EKF. Next, in the measurement-update step, the estimated states are updated based on the measured electrical resistance. It has been observed that for the same time step, the implicit approach consumes less computational time than the explicit method. To verify the implementation, the EKF-estimated states of the system are compared with those of an established model for different inputs to the SMA wire. An experimental setup is developed to measure the actual spring displacement and ER of the SMA for any time-varying voltage applied to it. The process noise covariance is decided using a heuristic approach, whereas the measurement noise covariance is obtained experimentally. Finally, the EKF is used to estimate the spring displacement for a given input and the corresponding experimentally obtained ER of the SMA. The qualitative agreement between the EKF-estimated displacement and that obtained experimentally reveals the true potential of this approach to harness the self-sensing capability of the SMA.
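The time-update/measurement-update structure described above can be sketched with a scalar linear Kalman filter; the SMA heat-balance and constitutive model is replaced here by a trivial random-walk state model, so this is only a structural illustration, not the paper's EKF:

```python
# Scalar Kalman filter showing the two-step structure: a time update (predict)
# followed by a measurement update (correct), as in the EKF described above.
def kf_step(x, p, z, q, r):
    # Time update: the state is modeled as (nearly) constant,
    # and the estimate uncertainty grows by the process noise q.
    x_pred, p_pred = x, p + q
    # Measurement update: blend the prediction with measurement z via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                        # initial estimate and covariance
for z in [5.1, 4.9, 5.0, 5.2, 4.8]:    # noisy measurements of a true value near 5
    x, p = kf_step(x, p, z, q=1e-4, r=0.1)
```

In the paper's setting the measurement z would be the SMA's electrical resistance and the time update would come from the nonlinear system model (hence the "extended" filter, which linearizes that model at each step).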

  19. Spectrum of Slip Processes on the Subduction Interface in a Continuum Framework Resolved by Rate- and State-Dependent Friction and Adaptive Time Stepping

    NASA Astrophysics Data System (ADS)

    Herrendoerfer, R.; van Dinther, Y.; Gerya, T.

    2015-12-01

To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant in order to estimate the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we have implemented rate- and state-dependent friction (RSF) and adaptive time stepping in our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby to first order validating our implementation. 
By incorporating adaptive time-stepping, based on a fraction of the characteristic slip distance divided by the maximum slip velocity, we are able to resolve stick-slip events while increasing computational speed. In this better-resolved framework, we examine the role of aseismic slip in the megathrust cycle and its dependence on subduction velocity.
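The adaptive time-stepping criterion described above (a fraction of the characteristic slip distance over the maximum slip velocity) can be sketched as follows; the fraction `xi` and the cap `dt_max` are illustrative assumptions, not values from the study.

```python
def adaptive_dt(v_max, d_c, xi=0.2, dt_max=1.0e9):
    """Return the time step as a fraction xi of the characteristic slip
    distance d_c divided by the maximum slip velocity v_max, capped at
    dt_max (the long geodynamic step used between events)."""
    if v_max <= 0.0:
        return dt_max
    return min(xi * d_c / v_max, dt_max)
```

During interseismic periods (tiny `v_max`) the cap dominates, while during a rupture the step shrinks in proportion to 1/`v_max`, which is what lets one framework resolve both time scales.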

  20. Influence of curriculum type on student performance in the United States Medical Licensing Examination Step 1 and Step 2 exams: problem-based learning vs. lecture-based curriculum.

    PubMed

    Enarson, C; Cariaga-Lo, L

    2001-11-01

The results of the United States Medical Licensing Examination Step 1 and Step 2 examinations are reported for students enrolled in problem-based and traditional lecture-based curricula over a seven-year period at a single institution. There were no statistically significant differences in mean scores on either examination over the seven-year period as a whole. There were statistically significant main effects by cohort year and curricular track for both the Step 1 and Step 2 examinations. These results support the general, long-term effectiveness of problem-based learning with respect to basic and clinical science knowledge acquisition. This paper reports the United States Medical Licensing Examination Step 1 and Step 2 results for students enrolled in problem-based and traditional lecture-based curricula over the seven-year period (1992-98) in order to evaluate the adequacy of each curriculum in supporting students' learning of the basic and clinical sciences. Six hundred and eighty-nine students who took the United States Medical Licensing Examination Step 1 and 540 students who took Step 2 for the first time over the seven-year period were included in the analyses. T-test analyses were used to compare students' Step 1 and Step 2 performance by curriculum group. Mean Step 1 scores over the seven-year period were 214 for Traditional Curriculum students and 208 for Parallel Curriculum students (t-value = 1.32, P = 0.21). Mean Step 2 scores over the seven-year period were 208 for Traditional Curriculum students and 206 for Parallel Curriculum students (t-value = 1.08, P = 0.30). Statistically significant main effects were noted by cohort year and curricular track for both the Step 1 and Step 2 examinations. The totality of experience in both groups, although differing by curricular type, may be similar enough that the comparable scores are what should be expected.
These results should be reassuring to curricular planners and faculty that problem-based learning can provide students with the knowledge needed for the subsequent phases of their medical education.

  1. One Step Quantum Key Distribution Based on EPR Entanglement.

    PubMed

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-06-30

A novel quantum key distribution protocol based on entanglement and dense coding is presented, allowing asymptotically secure key distribution. To address the storage-time limit of quantum bits, a grouping quantum key distribution protocol is also proposed, which overcomes this vulnerability of the first protocol and improves its practicality. Moreover, a security analysis shows that a simple eavesdropping attack would introduce an error rate of at least 46.875%. Compared with the two-step "Ping-pong" protocol, the proposed protocol does not need to store qubits and involves only one step.

  2. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.

  3. Exploring inductive linearization for pharmacokinetic-pharmacodynamic systems of nonlinear ordinary differential equations.

    PubMed

    Hasegawa, Chihiro; Duffull, Stephen B

    2018-02-01

    Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) which need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems which can then be solved algebraically or numerically. The inductive approximation is applied to three examples, a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples, again E3 with stiff differential equations and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples with comparable solution time to the matched time-stepping solutions for nonlinear ODEs. The time-stepping solutions however did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances when either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system is of potential risk, then the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
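As a concrete illustration of the inductive approach for a Michaelis-Menten elimination model like example E1, the sketch below freezes the previous iterate in the nonlinear denominator, so each iteration solves a linear time-varying ODE exactly on each grid interval via a (here scalar) matrix exponential. The function and parameter names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def mm_inductive(c0, vmax, km, t_grid, n_iter=10):
    """Inductive linearization sketch for dC/dt = -vmax*C/(km + C):
    freeze the previous iterate C_prev(t) in the denominator, giving the
    linear time-varying ODE dC/dt = a(t)*C with a(t) = -vmax/(km + C_prev),
    which is integrated exactly interval-by-interval as C*exp(a*dt)."""
    c = np.full_like(t_grid, c0, dtype=float)  # iterate 0: constant profile
    for _ in range(n_iter):
        new = np.empty_like(c)
        new[0] = c0
        for i in range(len(t_grid) - 1):
            dt = t_grid[i + 1] - t_grid[i]
            a = -vmax / (km + c[i])            # coefficient frozen from previous iterate
            new[i + 1] = new[i] * np.exp(a * dt)
        c = new
    return c
```

For km much larger than the concentration the scheme converges quickly to near-first-order decay, which makes it easy to sanity-check against exp(-vmax*t/(km + c0)).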

  4. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform offers several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which enable it to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is measured. Based on these timing statistics, we present a new parallel processing strategy that distributes the task of DOA estimation across the cores of the platform. Experimental results demonstrate that the processing capability of the platform meets the constraint of real-time DOA estimation.
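For context, a minimal standard (complex-valued) MUSIC pseudo-spectrum for a uniform linear array is sketched below; the paper's real-valued estimator and its multicore partitioning are not reproduced here, and all names are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Standard MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths. Returns one value per scan angle."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    w, v = np.linalg.eigh(R)                 # eigenvalues ascending
    En = v[:, : m - n_sources]               # noise-subspace eigenvectors
    k = np.arange(m)
    p = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * k * np.sin(theta))  # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2     # projection onto noise subspace
        p.append(1.0 / denom)
    return np.array(p)
```

Peaks of the pseudo-spectrum occur where the steering vector is orthogonal to the noise subspace, i.e. at the source directions.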

  5. Expected Reachability-Time Games

    NASA Astrophysics Data System (ADS)

    Forejt, Vojtěch; Kwiatkowska, Marta; Norman, Gethin; Trivedi, Ashutosh

    In an expected reachability-time game (ERTG) two players, Min and Max, move a token along the transitions of a probabilistic timed automaton, so as to minimise and maximise, respectively, the expected time to reach a target. These games are concurrent since at each step of the game both players choose a timed move (a time delay and action under their control), and the transition of the game is determined by the timed move of the player who proposes the shorter delay. A game is turn-based if at any step of the game, all available actions are under the control of precisely one player. We show that while concurrent ERTGs are not always determined, turn-based ERTGs are positionally determined. Using the boundary region graph abstraction, and a generalisation of Asarin and Maler's simple function, we show that the decision problems related to computing the upper/lower values of concurrent ERTGs, and computing the value of turn-based ERTGs are decidable and their complexity is in NEXPTIME ∩ co-NEXPTIME.

  6. Fast synthesis of platinum nanopetals and nanospheres for highly-sensitive non-enzymatic detection of glucose and selective sensing of ions

    NASA Astrophysics Data System (ADS)

    Taurino, Irene; Sanzó, Gabriella; Mazzei, Franco; Favero, Gabriele; de Micheli, Giovanni; Carrara, Sandro

    2015-10-01

Novel methods to obtain Pt nanostructured electrodes have raised particular interest due to their high performance in electrochemistry. Several nanostructuring methods proposed in the literature use costly and bulky equipment or are time-consuming due to the numerous steps they involve. Here, Pt nanostructures were produced for the first time by one-step, template-free electrodeposition on bare Pt electrodes. The size and shape of the nanostructures are shown to depend on the deposition parameters and on the ratio between sulphuric acid and chloride complexes (i.e., hexachloroplatinate or tetrachloroplatinate). To further improve the electrochemical properties of the electrodes, depositions of Pt nanostructures on previously synthesised Pt nanostructures were also performed. The electroactive surface areas improve by two orders of magnitude when Pt nanostructures with the smallest size are used. All the biosensors based on Pt nanostructures and immobilised glucose oxidase display higher sensitivity compared to bare Pt electrodes. The Pt nanostructures retained excellent electrocatalytic activity towards the direct oxidation of glucose. Finally, the nanodeposits were proven to be an excellent solid contact for ion measurements, significantly improving the time-stability of the potential. The use of these new nanostructured coatings in electrochemical sensors opens new perspectives for multipanel monitoring of human metabolism.

  7. Implementation of Competency-Based Pharmacy Education (CBPE)

    PubMed Central

    Koster, Andries; Schalekamp, Tom; Meijerman, Irma

    2017-01-01

    Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422

  8. Improved kinect-based spatiotemporal and kinematic treadmill gait assessment.

    PubMed

    Eltoukhy, Moataz; Oh, Jeonghoon; Kuenze, Christopher; Signorile, Joseph

    2017-01-01

A cost-effective, clinician-friendly gait assessment tool that can automatically track patients' anatomical landmarks can provide practitioners with important information that is useful in prescribing rehabilitative and preventive therapies. This study investigated the validity and reliability of the Microsoft Kinect v2 as a potential inexpensive gait analysis tool. Ten healthy subjects walked on a treadmill at 1.3 and 1.6 m·s⁻¹ as spatiotemporal parameters and kinematics were extracted concurrently using the Kinect and three-dimensional motion analysis. Spatiotemporal measures included step length and width, step and stride times, vertical and mediolateral pelvis motion, and foot swing velocity. Kinematic outcomes included hip, knee, and ankle joint angles in the sagittal plane. The absolute agreement and relative consistency between the two systems were assessed using intraclass correlation coefficients (ICC(2,1)), while reproducibility between systems was established using Lin's concordance correlation coefficient (rc). Ensemble curves and associated 90% confidence intervals (CI90) of the hip, knee, and ankle joint angles were compared to investigate whether the Kinect sensor could consistently and accurately assess lower extremity joint motion throughout the gait cycle. Results showed that the Kinect v2 sensor has the potential to be an effective clinical assessment tool for sagittal plane knee and hip joint kinematics, as well as some spatiotemporal variables including pelvis displacement and step characteristics during the gait cycle. Copyright © 2016 Elsevier B.V. All rights reserved.
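The agreement statistic named in the abstract, ICC(2,1) (two-way random effects, absolute agreement, single measures), can be computed from a subjects-by-raters matrix as sketched below; this is an illustrative implementation, not the study's analysis code.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures, from an (n subjects x k raters) matrix Y, using the
    standard mean-square decomposition (rows, columns, error)."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row = Y.mean(axis=1)                     # per-subject means
    col = Y.mean(axis=0)                     # per-rater means
    msr = k * ((row - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col - grand) ** 2).sum() / (k - 1)   # between-raters MS
    sse = ((Y - grand) ** 2).sum() \
          - k * ((row - grand) ** 2).sum() \
          - n * ((col - grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                   # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between the two systems (e.g. a calibration bias) lowers the coefficient even when rank ordering is perfect.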

  9. The influence of age on gait parameters during the transition from a wide to a narrow pathway.

    PubMed

    Shkuratova, Nataliya; Taylor, Nicholas

    2008-06-01

    The ability to negotiate pathways of different widths is a prerequisite of daily living. However, only a few studies have investigated changes in gait parameters in response to walking on narrow pathways. The aim of this study is to examine the influence of age on gait adjustments during the transition from a wide to a narrow pathway. Two-group repeated measures design. Gait Laboratory. Twenty healthy older participants (mean [M] = 74.3 years, Standard deviation [SD] = 7.2 years); 20 healthy young participants (M = 26.6 years, SD = 6.1 years). Making the transition from walking on a wide pathway (68 cm) to walking on a narrow pathway (15 cm). Step length, step time, step width, double support time and base of support. Healthy older participants were able to make the transition from a wide to a narrow pathway successfully. There was only one significant interaction, between age and base of support (p < 0.003). Older adults decreased their base of support only when negotiating the transition step, while young participants started decreasing their base of support prior to the negotiation of transition step (p < 0.01). Adjustments to the transition from a wide to a narrow pathway are largely unaffected by normal ageing. Difficulties in making the transition to a narrow pathway during walking should not be attributed to normal age-related changes. (c) 2008 John Wiley & Sons, Ltd.

  10. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
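A toy scalar analogue of the prediction-correction idea (not the paper's GTT/NTT algorithms): track the minimizer of f_t(x) = 0.5(x - sin t)^2 sampled every h seconds, with an optional finite-difference drift prediction before each gradient-correction step. All names and the step size are illustrative assumptions.

```python
import numpy as np

def track(h, t_end, predict=False):
    """Track the drifting minimizer c(t) = sin(t) of a time-varying
    quadratic. Correction: one gradient step on the newly sampled
    objective. Prediction: advance x by the previous optimizer drift,
    a finite-difference stand-in for the prediction step.
    Returns the worst steady-state tracking error."""
    x = 0.0
    c_prev, c_prev2 = 0.0, 0.0
    errs = []
    n = int(t_end / h)
    for k in range(1, n + 1):
        if predict:
            x += c_prev - c_prev2        # predicted drift from past samples
        c = np.sin(k * h)                # sample the problem at t_k
        x -= 0.5 * (x - c)               # gradient correction step
        errs.append(abs(x - c))
        c_prev2, c_prev = c_prev, c
    return max(errs[n // 2:])            # ignore the initial transient
```

In this toy setting the correction-only error scales like h while the predicted error scales like h², mirroring the O(h) versus O(h²) bounds discussed in the abstract.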

  11. The effect of a school-based active commuting intervention on children's commuting physical activity and daily physical activity.

    PubMed

    McMinn, David; Rowe, David A; Murtagh, Shemane; Nelson, Norah M

    2012-05-01

To investigate the effect of a school-based intervention called Travelling Green (TG) on children's walking to and from school and total daily physical activity. A quasi-experiment with 166 Scottish children (8-9 years) was conducted in 2009. One group (n=79) received TG and another group (n=87) acted as a comparison. The intervention lasted 6 weeks and consisted of educational lessons and goal-setting tasks. Steps and MVPA (daily, a.m. commute, p.m. commute, and total commute) were measured for 5 days pre- and post-intervention using accelerometers. Mean steps (daily, a.m., p.m., and total commute) decreased from pre- to post-intervention in both groups (TG by 901, 49, 222, and 271 steps/day and comparison by 2528, 205, 120, and 325 steps/day, respectively). No significant group-by-time interactions were found for a.m., p.m., and total commuting steps. A medium (partial eta squared=0.09) and significant (p<0.05) group-by-time interaction was found for total daily steps. MVPA results were similar to the step results. TG had little effect on walking to and from school. However, for total daily steps and daily MVPA, TG results in a smaller seasonal decrease than for children who do not receive the intervention. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. The tropopause cold trap in the Australian Monsoon during STEP/AMEX 1987

    NASA Technical Reports Server (NTRS)

    Selkirk, Henry B.

    1993-01-01

The relationship between deep convection and tropopause cold trap conditions is examined for the tropical northern Australia region during the 1986-87 summer monsoon season, emphasizing the Australian Monsoon Experiment (AMEX) period when the NASA Stratosphere-Troposphere Exchange Project (STEP) was being conducted. The factors related to the spatial and temporal variability of the cold point potential temperature (CPPT) are investigated. A framework is developed for describing the relationships among the average equivalent potential temperature in the surface layer (AEPTSL), the height of deep convection, and stratosphere-troposphere exchange. The time-mean pattern of convection, large-scale circulation, and AEPTSL in the Australian monsoon, and the evolution of the convective environment during the monsoon period and the extended transition season which preceded it, are described. The time-mean fields of cold point level variables are examined and the statistical relationships between mean CPPT, AEPTSL, and deep convection are described. Day-to-day variations of CPPT are examined in terms of these time-mean relationships.

  13. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
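The adaptive particle sub-stepping that prevents cell tunneling can be caricatured as follows; the fraction `frac` is an illustrative assumption, and the actual scheme adapts along each orbit rather than once per field step.

```python
import math

def substep_count(v, dt, dx, frac=0.5):
    """Split the field-solve time step dt into enough particle substeps
    that a particle with speed v moves at most frac*dx per substep, so
    it cannot tunnel across a grid cell of width dx unnoticed."""
    if v == 0.0:
        return 1
    return max(1, math.ceil(abs(v) * dt / (frac * dx)))
```

This decouples the field time step (set by the dynamics of interest) from the particle orbit resolution (set by the fastest particles), which is the key to taking very large implicit steps without accuracy loss.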

  14. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  15. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
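The second step described above (applying the Hilbert transform to an oscillatory component to obtain instantaneous frequency) can be sketched with standard tools; this illustrates the idea, not the patented implementation.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Given a single well-behaved oscillatory signal x (an IMF-like
    component) sampled at fs Hz, form the analytic signal x + i*H[x]
    and return the instantaneous frequency (Hz) from the derivative of
    the unwrapped phase."""
    z = hilbert(x)                        # analytic signal via FFT
    phase = np.unwrap(np.angle(z))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

For a pure tone the result is flat at the tone frequency (away from the edges); for a genuine IMF it traces the time-varying frequency that the Hilbert Spectrum localizes on the time-frequency plane.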

  16. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  17. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    PubMed

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
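A minimal 2D peak detector, far simpler than either the two-step or watershed algorithms compared in the study, illustrates the basic aggregation task on a retention-time-by-retention-time intensity image (all names are illustrative).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(img, threshold, size=3):
    """Report a pixel as a peak if it equals the maximum of its
    size x size neighbourhood and exceeds the intensity threshold.
    Returns sorted (row, col) coordinates."""
    local_max = maximum_filter(img, size=size)
    mask = (img == local_max) & (img > threshold)
    return sorted(zip(*np.nonzero(mask)))
```

Real GC×GC detectors must additionally merge data points belonging to one analyte and cope with second-column retention-time shifts, which is precisely where the two algorithms compared here differ.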

  18. Avalanche for shape and feature-based virtual screening with 3D alignment

    NASA Astrophysics Data System (ADS)

    Diller, David J.; Connell, Nancy D.; Welsh, William J.

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns.
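The two-step filter-then-refine pattern can be illustrated with a set-based Tanimoto screen; this is a hypothetical stand-in, and Avalanche's actual shape/feature scoring and 3D alignment are not reproduced here.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def two_step_screen(query, library, cutoff=0.4, top=5):
    """Step 1: a cheap similarity filter narrows the library to
    candidates above a cutoff. Step 2: survivors are re-ranked by
    score (a stand-in for the detailed 3D-alignment scoring).
    library: iterable of (name, feature_set) pairs."""
    survivors = [(name, tanimoto(query, feats))
                 for name, feats in library
                 if tanimoto(query, feats) >= cutoff]
    return sorted(survivors, key=lambda p: -p[1])[:top]
```

The point of the pattern is that the expensive second step only ever sees the small fraction of the database that passes the cheap first step.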

  19. Monitoring and remediation of on-farm and off-farm ground current measured as step potential on a Wisconsin dairy farm: A case study.

    PubMed

    Stetzer, Dave; Leavitt, Adam M; Goeke, Charles L; Havas, Magda

    2016-01-01

Ground current, commonly referred to as "stray voltage," has been an issue on dairy farms since electricity was first brought to rural America. Equipment that generates high-frequency voltage transients on electrical wires, combined with a multigrounded electrical distribution system and inadequate neutral returns, all contribute to ground current. Despite decades of problems, we are no closer to resolving this issue, in part due to three misconceptions that are addressed in this study. Misconception 1: the current standard of 1 V at cow contact is adequate to protect dairy cows. Misconception 2: frequencies higher than 60 Hz do not need to be considered. Misconception 3: all sources of ground current originate on the farm that has a ground current problem. This case study of a Wisconsin dairy farm documents: 1. how to establish permanent monitoring of ground current (step potential) on a dairy farm; 2. how to determine and remediate both on-farm and off-farm sources contributing to step potential; 3. which step-potential metrics relate to cow comfort and milk production; and 4. how these metrics relate to established standards. On-farm sources include lighting, variable-speed frequency drives on motors, and a radio-frequency identification system; the off-farm source is a poor primary neutral return on the utility side of the distribution system. A step-potential threshold of 1 V root mean square (RMS) at 60 Hz is inadequate to protect dairy cows, as decreases of a few mV peak-to-peak at higher frequencies increase milk production, reduce milking time, and improve cow comfort.
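The two metrics contrasted in the study, 60 Hz RMS and millivolt-scale peak-to-peak amplitude, can be computed from a sampled step-potential waveform as follows (illustrative sketch, not the monitoring system's software).

```python
import numpy as np

def step_potential_metrics(samples):
    """Return (RMS, peak-to-peak) in volts for a sampled step-potential
    waveform. RMS is the quantity the 1 V / 60 Hz standard addresses;
    peak-to-peak captures the high-frequency transients the study
    argues the standard misses."""
    v = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(v ** 2))
    return rms, float(v.max() - v.min())
```

A waveform can easily satisfy a 1 V RMS threshold while still carrying high-frequency transients of a few mV peak-to-peak, which is exactly the gap between the two metrics the study highlights.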

  20. Feasibility of Focused Stepping Practice During Inpatient Rehabilitation Poststroke and Potential Contributions to Mobility Outcomes.

    PubMed

    Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot

    2015-01-01

    Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.

  1. A multi-layer steganographic method based on audio time domain segmented and network steganography

    NASA Astrophysics Data System (ADS)

    Xue, Pengfei; Liu, Hanlin; Hu, Jingsong; Hu, Ronggui

    2018-05-01

    Both audio steganography and network steganography belong to modern steganography. Audio steganography has a large capacity; network steganography is difficult to detect or track. In this paper, a multi-layer steganographic method based on the collaboration of the two (MLS-ATDSS&NS) is proposed. MLS-ATDSS&NS is realized in two covert layers (an audio steganography layer and a network steganography layer) in two steps: a new audio time domain segmented steganography (ATDSS) method is proposed in step 1, and the collaboration method of ATDSS and network steganography (NS) is proposed in step 2. The experimental results showed that the advantage of MLS-ATDSS&NS over other methods is a better trade-off between capacity, anti-detectability and robustness, i.e., higher steganographic capacity, better anti-detectability and stronger robustness.

  2. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method - BackTrackBB (Poiata et al. 2016) - which allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the nonstationary signal (time series) by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection and location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
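    A toy 1-D analogue of the backprojection idea (for each trial source position, stack station-pair coherence evaluated at the theoretically predicted inter-station time delay) can be sketched as follows. This is a schematic illustration, not the published BackTrackBB code; the station geometry and impulsive characteristic functions are synthetic:

```python
# Schematic 1-D backprojection: the imaging score for a trial source
# position x is the sum over station pairs of the cross-correlation of
# their characteristic functions, evaluated at the predicted delay.

def pair_corr_at_lag(a, b, lag):
    """Cross-correlation of two characteristic functions at a single lag."""
    return sum(a[n + lag] * b[n] for n in range(len(b))
               if 0 <= n + lag < len(a))

def locate(char_funcs, station_x, trial_x, velocity, dt):
    """Return the trial position maximizing the stacked pair correlations."""
    names = list(char_funcs)
    best_x, best_score = None, float("-inf")
    for x in trial_x:
        score = 0.0
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                # predicted arrival-time difference t_i - t_j for a source at x
                delay = (abs(x - station_x[names[i]])
                         - abs(x - station_x[names[j]])) / velocity
                score += pair_corr_at_lag(char_funcs[names[i]],
                                          char_funcs[names[j]],
                                          int(round(delay / dt)))
        if score > best_score:
            best_x, best_score = x, score
    return best_x

# Toy example: impulsive arrivals at two stations (x = 0 and x = 10,
# unit velocity) consistent with a source at x = 3.
a = [0.0] * 20; a[3] = 1.0   # arrival at t = 3 at station A
b = [0.0] * 20; b[7] = 1.0   # arrival at t = 7 at station B
print(locate({"A": a, "B": b}, {"A": 0.0, "B": 10.0}, range(11), 1.0, 1.0))  # → 3
```

The real method works on 3D travel-time grids with likelihood-based characteristic functions, but the stack-at-predicted-delay structure is the same.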

  3. Use of an App-Controlled Neuromuscular Electrical Stimulation System for Improved Self-Management of Knee Conditions and Reduced Costs.

    PubMed

    Chughtai, Morad; Piuzzi, Nicholas; Yakubek, George; Khlopas, Anton; Sodhi, Nipun; Sultan, Assem A; Nasir, Salahuddin; Yates, Benjamin S T; Bhave, Anil; Mont, Michael A

    2017-10-12

    Patients suffering from quadriceps muscle weakness secondary to osteoarthritis or after surgeries, such as total knee arthroplasty, appear to benefit from the use of neuromuscular electrical stimulation (NMES), which can improve muscle strength and function, range of motion, exercise capacity, and quality of life. Several modalities exist that deliver this therapy. However, with the ever-increasing demand to improve clinical efficiency and costs, digitalize healthcare, optimize data collection, improve care coordination, and increase patient compliance and engagement, newer devices incorporating technologies that facilitate these demands are emerging. One of these devices is an app-controlled, home-based NMES therapy system that allows patients to self-manage their condition and potentially increases adherence to treatment. It incorporates a smartphone-based application connected to a cloud-based portal that feeds real-time patient monitoring data to physicians, allowing patients to be supported remotely and given feedback. This device is a step forward in improving both patient care and physician efficiency, as well as decreasing resource utilization, which potentially may reduce healthcare costs.

  4. Unsteady Crystal Growth Due to Step-Bunch Cascading

    NASA Technical Reports Server (NTRS)

    Vekilov, Peter G.; Lin, Hong; Rosenberger, Franz

    1997-01-01

    Based on our experimental findings of growth rate fluctuations during the crystallization of the protein lysozyme, we have developed a numerical model that combines diffusion in the bulk of a solution with diffusive transport to microscopic growth steps that propagate on a finite crystal facet. Nonlinearities in layer growth kinetics arising from step interaction by bulk and surface diffusion, and from step generation by surface nucleation, are taken into account. On evaluation of the model with properties characteristic of the solute transport, and of the generation and propagation of steps in the lysozyme system, growth rate fluctuations of the same magnitude and characteristic time as in the experiments are obtained. The fluctuation time scale is large compared to that of step generation. Variations of the governing parameters of the model reveal that both the nonlinearity in step kinetics and mixed transport-kinetics control of the crystallization process are necessary conditions for the fluctuations. On a microscopic scale, the fluctuations are associated with a morphological instability of the vicinal face, in which a step bunch triggers a cascade of new step bunches through the microscopic interfacial supersaturation distribution.

  5. Comparison of two statistical methods for probability prediction of monthly precipitation during summer over Huaihe River Basin in China, and applications in runoff prediction based on hydrological model

    NASA Astrophysics Data System (ADS)

    Liu, L.; Du, L.; Liao, Y.

    2017-12-01

    Based on the ensemble hindcast dataset of CSM1.1m from NCC, CMA, Bayesian merging models and a two-step statistical model are developed and employed to predict monthly grid/station precipitation over the Huaihe River basin in China during summer at lead times of 1 to 3 months. The hindcast dataset spans the period 1991 to 2014. The skill of the two models is evaluated using the area under the ROC curve (AUC) in a leave-one-out cross-validation framework and is compared to the skill of CSM1.1m. CSM1.1m has its highest skill for summer precipitation when initialized in April and its lowest when initialized in May, and has its highest skill for precipitation in June but its lowest for precipitation in July. Compared with the raw outputs of the climate model, some schemes of the two approaches have higher skill for predictions initialized in March and May, but almost all schemes have lower skill for predictions initialized in April. Compared to the two-step approach, one sampling scheme of the Bayesian merging approach has higher skill for predictions initialized in March, but lower skill for those initialized in May. The results suggest that the two statistical models are potentially applicable to monthly summer precipitation forecasts initialized in March and May over the Huaihe River basin, whereas the raw CSM1.1m forecast is preferable for those initialized in April. Finally, the summer runoff during 1991 to 2014 is simulated with a hydrological model driven by the hindcasts of CSM1.1m and the two statistical models.
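    The AUC skill measure used above can be computed directly from its probabilistic (Mann-Whitney) interpretation: the probability that a randomly chosen event case receives a higher forecast score than a randomly chosen non-event case, with ties counting one half. A minimal sketch with made-up forecast values, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney formulation:
    fraction of (positive, negative) pairs where the positive case
    outscores the negative one, ties counting 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical forecast probabilities for months where above-normal
# precipitation did (positives) / did not (negatives) occur:
print(round(auc([0.8, 0.6, 0.7], [0.4, 0.6]), 3))  # → 0.917
```

An AUC of 0.5 corresponds to no skill and 1.0 to perfect discrimination, which is the scale on which the two statistical models and CSM1.1m are compared.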

  6. Dissecting Costs of CT Study: Application of TDABC (Time-driven Activity-based Costing) in a Tertiary Academic Center.

    PubMed

    Anzai, Yoshimi; Heilbrun, Marta E; Haas, Derek; Boi, Luca; Moshre, Kirk; Minoshima, Satoshi; Kaplan, Robert; Lee, Vivian S

    2017-02-01

    The lack of understanding of the real costs (not charges) of delivering healthcare services poses tremendous challenges in the containment of healthcare costs. In this study, we applied an established cost accounting method, time-driven activity-based costing (TDABC), to assess the costs of performing an abdomen and pelvis computed tomography (AP CT) in an academic radiology department and identified opportunities for improved efficiency in the delivery of this service. The study was exempt from institutional review board approval. TDABC utilizes process mapping tools from industrial engineering and activity-based costing. The process map outlines every step of discrete activity and the duration of use of clinical resources, personnel, and equipment. By multiplying the cost per unit of capacity by the required task time for each step, and summing each component cost, the overall cost of AP CT is determined for patients in three settings: inpatient (IP), outpatient (OP), and emergency department (ED). The component costs to deliver an AP CT study were as follows: radiologist interpretation: 40.1%; other personnel (scheduler, technologist, nurse, pharmacist, and transporter): 39.6%; materials: 13.9%; and space and equipment: 6.4%. The cost of performing CT was 13% higher for ED patients and 31% higher for inpatients, as compared to that for outpatients. The difference in cost was mostly due to non-radiologist personnel costs. Approximately 80% of the direct costs of AP CT to the academic medical center are related to labor. Potential opportunities to reduce the costs include increasing the efficiency of utilization of CT, substituting lower cost resources when appropriate, and streamlining the ordering system to clarify medical necessity and clinical indications. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
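    The TDABC calculation described above (multiply each resource's cost per unit of capacity by the task time it consumes, then sum the component costs) reduces to a short computation. All rates and durations below are hypothetical illustrations, not the study's data:

```python
# TDABC sketch: cost of one study = sum over process steps of
# (resource cost rate per minute) x (minutes the step consumes).
# The rates and durations here are invented for illustration only.

def tdabc_cost(steps):
    """steps: list of (step_name, cost_rate_per_min, minutes)."""
    return sum(rate * minutes for _, rate, minutes in steps)

outpatient_ct = [
    ("scheduling",       0.60, 5),   # scheduler time
    ("patient prep",     1.10, 10),  # technologist + nurse time
    ("scan acquisition", 3.50, 15),  # scanner room + technologist
    ("interpretation",   6.00, 12),  # radiologist time
]

print(f"Total cost: ${tdabc_cost(outpatient_ct):.2f}")  # → Total cost: $138.50
```

Comparing such sums across settings (IP vs. OP vs. ED) is how the study attributes the cost differences to specific steps, such as non-radiologist personnel time.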

  7. Emissions from Open Burning of Simulated Military Waste from Forward Operating Bases

    EPA Science Inventory

    Emissions from open burning of simulated military waste from forward operating bases (FOBs) were extensively characterized as an initial step in assessing potential inhalation exposure of FOB personnel and future disposal alternatives. Emissions from two different burning scenar...

  8. Developing and Validating a Tablet Version of an Illness Explanatory Model Interview for a Public Health Survey in Pune, India

    PubMed Central

    Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.

    2014-01-01

    Background Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212

  9. High sensitive and selective Escherichia coli detection using immobilized optical fiber microprobe

    NASA Astrophysics Data System (ADS)

    Li, Yanpeng; Sun, Qizhen; Luo, Yiyang; Li, Yue; Gong, Andong; Zhang, Haibin; Liu, Deming

    2017-04-01

    We proposed and demonstrated a stable, label-free bacteriophage-based sensor of Escherichia coli using a microfiber probe. T4 bacteriophage was covalently immobilized on the microfiber surface and the E. coli concentration was investigated using a highly accurate spectral interference mechanism. By immersing the microfiber sensor into E. coli solutions of different concentrations, the relationship between resonant wavelength shift and E. coli concentration was analyzed in the range of 10^3-10^7 cfu/ml. The proposed method is capable of reliable detection of E. coli concentrations as low as 10^3 cfu/ml with a fast response time of about 10 minutes, a significant step toward real-time detection of E. coli. Additionally, the sensor has great potential to be applied in fields like environment monitoring and food safety.

  10. Label-Free Immuno-Sensors for the Fast Detection of Listeria in Food.

    PubMed

    Morlay, Alexandra; Roux, Agnès; Templier, Vincent; Piat, Félix; Roupioz, Yoann

    2017-01-01

    Foodborne diseases are a major concern for both the food industry and health organizations due to the economic costs and potential threats to human lives. For these reasons, specific regulations impose the search for pathogenic bacteria in food products. Nevertheless, current methods, both reference and alternative, take up to several days and require many handling steps. In order to improve pathogen detection in food, we developed an immuno-sensor, based on Surface Plasmon Resonance imaging (SPRi) and bacterial growth, which allows the detection of a very low number of Listeria monocytogenes in a food sample in one day. Adequate sensitivity is achieved by the deposition of several antibodies in a micro-array format allowing real-time detection. This label-free method thus reduces handling and time to result compared with current methods.

  11. Delayed Learning Effects with Erroneous Examples: A Study of Learning Decimals with a Web-Based Tutor

    ERIC Educational Resources Information Center

    McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.

    2015-01-01

    Erroneous examples--step-by-step problem solutions with one or more errors for students to find and fix--hold great potential to help students learn. In this study, which is a replication of a prior study (Adams et al. 2014), but with a much larger population (390 vs. 208), middle school students learned about decimals either by working with…

  12. The CSAICLAWPS project: a multi-scalar, multi-data source approach to providing climate services for both modelling of climate change impacts on crop yields and development of community-level adaptive capacity for sustainable food security

    NASA Astrophysics Data System (ADS)

    Forsythe, N. D.; Fowler, H. J.

    2017-12-01

    The "Climate-smart agriculture implementation through community-focused pursuit of land and water productivity in South Asia" (CSAICLAWPS) project is a research initiative funded by the (UK) Royal Society through its Challenge Grants programme, which is part of the broader UK Global Challenges Research Fund (GCRF). CSAICLAWPS has three objectives: a) development of "added-value" - bias-assessed, statistically downscaled - climate projections for selected case study sites across South Asia; b) investigation of crop failure modes under both present (observed) and future (projected) conditions; and c) facilitation of developing local adaptive capacity and resilience through stakeholder engagement. At AGU we will be presenting both next steps and progress to date toward these three objectives: [A] We have carried out bias assessments of a substantial multi-model RCM ensemble (MME) from the CORDEX South Asia (CORDEX-SA) domain for case studies in three countries - Pakistan, India and Sri Lanka - and (stochastically) produced synthetic time-series for these sites from local observations using a Python-based implementation of the principles underlying the Climate Research Unit Weather Generator (CRU-WG), in order to enable probabilistic simulation of current crop yields. [B] We have characterised the present response of local crop yields to climate variability in key case study sites using AquaCrop simulations parameterised based on input (agronomic practices, soil conditions, etc.) from smallholder farmers. [C] We have implemented community-based hydro-climatological monitoring in several case study "revenue villages" (panchayats) in the Nainital District of Uttarakhand. The purpose of this is not only to increase the availability of meteorological data, but also, over time, to build quantitative awareness of present climate variability and potential future conditions (as projected by RCMs).
Next steps in our work will include: 1) future crop yield simulations driven by "perturbation" of synthetic time-series using "change factors" from the CORDEX-SA MME; 2) stakeholder dialogues critically evaluating potential strategies at the grassroots (implementation) level to mitigate impacts of climate variability and change on crop yields.

  13. Demonstration of Single-Barium-Ion Sensitivity for Neutrinoless Double-Beta Decay Using Single-Molecule Fluorescence Imaging.

    PubMed

    McDonald, A D; Jones, B J P; Nygren, D R; Adams, C; Álvarez, V; Azevedo, C D R; Benlloch-Rodríguez, J M; Borges, F I G M; Botas, A; Cárcel, S; Carrión, J V; Cebrián, S; Conde, C A N; Díaz, J; Diesburg, M; Escada, J; Esteve, R; Felkai, R; Fernandes, L M P; Ferrario, P; Ferreira, A L; Freitas, E D C; Goldschmidt, A; Gómez-Cadenas, J J; González-Díaz, D; Gutiérrez, R M; Guenette, R; Hafidi, K; Hauptman, J; Henriques, C A O; Hernandez, A I; Hernando Morata, J A; Herrero, V; Johnston, S; Labarga, L; Laing, A; Lebrun, P; Liubarsky, I; López-March, N; Losada, M; Martín-Albo, J; Martínez-Lema, G; Martínez, A; Monrabal, F; Monteiro, C M B; Mora, F J; Moutinho, L M; Muñoz Vidal, J; Musti, M; Nebot-Guinot, M; Novella, P; Palmeiro, B; Para, A; Pérez, J; Querol, M; Repond, J; Renner, J; Riordan, S; Ripoll, L; Rodríguez, J; Rogers, L; Santos, F P; Dos Santos, J M F; Simón, A; Sofka, C; Sorel, M; Stiegler, T; Toledo, J F; Torrent, J; Tsamalaidze, Z; Veloso, J F C A; Webb, R; White, J T; Yahlali, N

    2018-03-30

    A new method to tag the barium daughter in the double-beta decay of ^{136}Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba^{++}) resolution at a transparent scanning surface is demonstrated. A single-step photobleach confirms the single ion interpretation. Individual ions are localized with superresolution (∼2 nm), and detected with a statistical significance of 12.9σ over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double-beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.

  14. Demonstration of Single-Barium-Ion Sensitivity for Neutrinoless Double-Beta Decay Using Single-Molecule Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    McDonald, A. D.; Jones, B. J. P.; Nygren, D. R.; Adams, C.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; Gómez-Cadenas, J. J.; González-Díaz, D.; Gutiérrez, R. M.; Guenette, R.; Hafidi, K.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Johnston, S.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; Monrabal, F.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Repond, J.; Renner, J.; Riordan, S.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Simón, A.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.; NEXT Collaboration

    2018-03-01

    A new method to tag the barium daughter in the double-beta decay of ^{136}Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba^{++}) resolution at a transparent scanning surface is demonstrated. A single-step photobleach confirms the single ion interpretation. Individual ions are localized with superresolution (∼2 nm), and detected with a statistical significance of 12.9σ over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double-beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.

  15. A financial analysis of operating room charges for robot-assisted gynaecologic surgery: Efficiency strategies in the operating room for reducing the costs

    PubMed Central

    Zeybek, Burak; Öge, Tufan; Kılıç, Cemil Hakan; Borahay, Mostafa A.; Kılıç, Gökhan Sami

    2014-01-01

    Objective To analyse the steps taking place in the operating room (OR) before the console time starts in robot-assisted gynaecologic surgery and to identify potential ways to decrease non-operative time in the OR. Material and Methods Thirteen consecutive robotic cases for benign gynaecologic disease at the Department of Obstetrics and Gynecology at the University of Texas Medical Branch (UTMB) were retrospectively reviewed. The collected data included the specific terms ‘Anaesthesia Done’ (step 1), ‘Drape Done’ (step 2), and ‘Trocar In’ (step 3), all of which refer to the time before the actual surgery began, and OR charges were evaluated as level 3, 4, and 5 for open abdominal/vaginal hysterectomy, laparoscopic hysterectomy, and robot-assisted hysterectomy, respectively. Results The cost of the OR for 0–30 minutes and each additional 30 minutes was $3,693 and $1,488, $4,961 and $2,426, and $5,513 and $2,756 in level 3, 4, and 5 surgeries, respectively. The median time for step 1 was 12.1 min (5.25–23.3), for step 2 was 19 min (4.59–44), and for step 3 was 25.3 min (16.45–45). The total median time until the actual operation began was 54.58 min (40–100). The total cost was $6,948.7 when the charge was calculated according to level 4 and $7,771.1 when calculated according to level 5. Conclusion Robot-assisted surgery is already ‘cost-expensive’ in the preparation stage of a surgical procedure, during anaesthesia induction and draping of the patient, because of the charging levels. Every effort should be made to shorten this time and reduce the number of instruments used without compromising care. (J Turk Ger Gynecol Assoc 2014; 15: 25–9) PMID:24790513
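    The reported totals are consistent with a charge model in which the first 30 minutes incur the fixed fee and the additional-time rate is prorated by the minute; that proration is an assumption inferred here from the numbers, not stated in the abstract. A sketch under that assumption reproduces both totals from the level 4 and level 5 rates and the 54.58-min median:

```python
def or_charge(total_minutes, base_0_30, per_additional_30):
    """OR charge: fixed fee for the first 30 min, then the additional-time
    rate prorated per minute (an assumption that matches the reported totals)."""
    extra = max(0.0, total_minutes - 30.0)
    return base_0_30 + per_additional_30 * extra / 30.0

median_total_min = 54.58  # median non-operative time from the abstract
print(round(or_charge(median_total_min, 4961, 2426), 1))  # level 4 → 6948.7
print(round(or_charge(median_total_min, 5513, 2756), 1))  # level 5 → 7771.1
```

Under this model every minute shaved off anaesthesia, draping, or trocar placement saves the per-additional-30-minute rate divided by 30, which quantifies the paper's argument for shortening pre-console time.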

  16. O–O bond formation in ruthenium-catalyzed water oxidation: single-site nucleophilic attack vs. O–O radical coupling

    DOE PAGES

    Shaffer, David W.; Xie, Yan; Concepcion, Javier J.

    2017-09-01

    In this review we discuss at the mechanistic level the different steps involved in water oxidation catalysis with ruthenium-based molecular catalysts. We have chosen to focus on ruthenium-based catalysts to provide a more coherent discussion and because of the availability of detailed mechanistic studies for these systems but many of the aspects presented in this review are applicable to other systems as well. The water oxidation cycle has been divided in four major steps: water oxidative activation, O–O bond formation, oxidative activation of peroxide intermediates, and O 2 evolution. A significant portion of the review is dedicated to the O–O bond formation step as the key step in water oxidation catalysis. As a result, the two main pathways to accomplish this step, single-site water nucleophilic attack and O–O radical coupling, are discussed in detail and compared in terms of their potential use in photoelectrochemical cells for solar fuels generation.

  17. O–O bond formation in ruthenium-catalyzed water oxidation: single-site nucleophilic attack vs. O–O radical coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaffer, David W.; Xie, Yan; Concepcion, Javier J.

    In this review we discuss at the mechanistic level the different steps involved in water oxidation catalysis with ruthenium-based molecular catalysts. We have chosen to focus on ruthenium-based catalysts to provide a more coherent discussion and because of the availability of detailed mechanistic studies for these systems but many of the aspects presented in this review are applicable to other systems as well. The water oxidation cycle has been divided in four major steps: water oxidative activation, O–O bond formation, oxidative activation of peroxide intermediates, and O 2 evolution. A significant portion of the review is dedicated to the O–O bond formation step as the key step in water oxidation catalysis. As a result, the two main pathways to accomplish this step, single-site water nucleophilic attack and O–O radical coupling, are discussed in detail and compared in terms of their potential use in photoelectrochemical cells for solar fuels generation.

  18. O-O bond formation in ruthenium-catalyzed water oxidation: single-site nucleophilic attack vs. O-O radical coupling.

    PubMed

    Shaffer, David W; Xie, Yan; Concepcion, Javier J

    2017-10-16

    In this review we discuss at the mechanistic level the different steps involved in water oxidation catalysis with ruthenium-based molecular catalysts. We have chosen to focus on ruthenium-based catalysts to provide a more coherent discussion and because of the availability of detailed mechanistic studies for these systems but many of the aspects presented in this review are applicable to other systems as well. The water oxidation cycle has been divided in four major steps: water oxidative activation, O-O bond formation, oxidative activation of peroxide intermediates, and O 2 evolution. A significant portion of the review is dedicated to the O-O bond formation step as the key step in water oxidation catalysis. The two main pathways to accomplish this step, single-site water nucleophilic attack and O-O radical coupling, are discussed in detail and compared in terms of their potential use in photoelectrochemical cells for solar fuels generation.

  19. Three-step effluent chlorination increases disinfection efficiency and reduces DBP formation and toxicity.

    PubMed

    Li, Yu; Zhang, Xiangru; Yang, Mengting; Liu, Jiaqi; Li, Wanxin; Graham, Nigel J D; Li, Xiaoyan; Yang, Bo

    2017-02-01

    Chlorination is extensively applied for disinfecting sewage effluents, but it unintentionally generates disinfection byproducts (DBPs). Using seawater for toilet flushing introduces a high level of bromide into domestic sewage. Chlorination of sewage effluent rich in bromide causes the formation of brominated DBPs. The objectives of achieving a disinfection goal, reducing disinfectant consumption and operational costs, and diminishing adverse effects on aquatic organisms in the receiving water body remain a challenge in sewage treatment. In this study, we have demonstrated that, with the same total chlorine dosage, a three-step chlorination (dosing chlorine by splitting it into three equal portions with a 5-min time interval between portions) was significantly more efficient in disinfecting a primary saline sewage effluent than a one-step chlorination (dosing all the chlorine at one time). Compared to one-step chlorination, three-step chlorination enhanced the disinfection efficiency by up to a 0.73-log reduction of Escherichia coli. The overall DBP formation resulting from one-step and three-step chlorination was quantified by total organic halogen measurement. Compared to one-step chlorination, the DBP formation in three-step chlorination was decreased by up to 23.4%. The comparative toxicity of one-step and three-step chlorination was evaluated in terms of the embryo-larval development of the marine polychaete Platynereis dumerilii. The results revealed that the primary sewage effluent with three-step chlorination was less toxic than that with one-step chlorination, indicating that three-step chlorination could reduce the potential adverse effects of disinfected sewage effluents on aquatic organisms in the receiving marine water. Copyright © 2016 Elsevier Ltd. All rights reserved.
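    The three-step dosing scheme itself is simple to state: the total chlorine dose is split into equal portions administered at fixed time intervals. A minimal sketch of that schedule (the 6 mg/L total dose below is a hypothetical value, not from the study):

```python
def chlorination_schedule(total_dose_mg_l, n_steps=3, interval_min=5):
    """Split a total chlorine dose into n equal portions dosed at fixed
    intervals (the paper's three-step scheme: 3 portions, 5 min apart).
    Returns a list of (dosing time in minutes, portion in mg/L)."""
    portion = total_dose_mg_l / n_steps
    return [(i * interval_min, portion) for i in range(n_steps)]

# A hypothetical 6 mg/L total dose becomes three 2 mg/L portions
# at t = 0, 5 and 10 minutes:
print(chlorination_schedule(6.0))  # → [(0, 2.0), (5, 2.0), (10, 2.0)]
```

One-step chlorination is the degenerate case `n_steps=1`; the study's comparison holds the total dose fixed and varies only this schedule.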

  20. Partnering with Engineers to Identify and Empirically Evaluate Delays in Magnetic Resonance Imaging: Laying the Foundations for Quality Improvement and System-based Practice in Radiology.

    PubMed

    Brandon, Catherine J; Holody, Michael; Inch, Geoffrey; Kabcenell, Michael; Schowalter, Diane; Mullan, Patricia B

    2012-01-01

    The aim of this study was to evaluate the feasibility of partnering with engineering students and critically examining the merit of the problem identification and analyses students generated in identifying sources impeding effective turnaround in a large university department of diagnostic radiology. Turnaround involves the time and activities beginning when a patient enters the magnetic resonance scanner room until the patient leaves, minus the time the scanner is conducting the protocol. A prospective observational study was conducted, in which four senior undergraduate industrial and operations engineering students interviewed magnetic resonance staff members and observed all shifts. On the basis of 150 hours of observation, the engineering students identified 11 process steps (eg, changing coils). They charted machine use for all shifts, providing a breakdown of turnaround time between appropriate process and non-value-added time. To evaluate the processes occurring in the scanning room, the students used a work-sampling schedule in which a beeper sounded 2.5 times per hour, signaling the technologist to identify which of 11 process steps was occurring. This generated 2147 random observations over a 3-week period. The breakdown of machine use over 105 individual studies showed that non-value-added time accounted for 62% of turnaround time. Analysis of 2147 random samples of work showed that scanners were empty and waiting for patients 15% of the total time. Analyses showed that poor communication delayed the arrival of patients and that no one had responsibility for communicating when scanning was done. Engineering students used rigorous study design and sampling methods to conduct interviews and observations. This led to data-driven definition of problems and potential solutions to guide systems-based improvement. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.

  1. On-orbit assembly of a team of flexible spacecraft using potential field based method

    NASA Astrophysics Data System (ADS)

    Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping

    2017-04-01

    In this paper, a novel control strategy based on an artificial potential field is developed for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven first to a pre-assembly configuration and then to the assembly configuration. To design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a virtual circular leader is introduced. The potential field depends mainly on the attitude error between each flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that the potential function has no local minima and that its global minimum is zero; the zero set is not a single state but a set of states, all of which correspond to desired configurations. A Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, another potential field is included to avoid inter-member collisions. In the control design of the second step, only a small modification of the first-step controller is needed. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
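
    The control idea above can be caricatured for point agents. Below is a minimal sketch, assuming a quadratic attractive potential toward each agent's goal and a classic barrier-style repulsion for collision avoidance; the paper's hub-beam flexibility, ellipse/circle distance measures, and Lyapunov design are not reproduced.

```python
import numpy as np

def attractive_grad(pos, goal, k_att=1.0):
    """Gradient of the quadratic attractive potential 0.5*k*|pos - goal|^2."""
    return k_att * (pos - goal)

def repulsive_grad(pos, others, d_safe=1.0, k_rep=1.0):
    """Gradient of a barrier potential active only inside distance d_safe."""
    g = np.zeros_like(pos)
    for q in others:
        d = np.linalg.norm(pos - q)
        if 0.0 < d < d_safe:
            # classic 0.5*k*(1/d - 1/d_safe)^2 repulsion term
            g += -k_rep * (1.0/d - 1.0/d_safe) * (pos - q) / d**3
    return g

def step(positions, goals, dt=0.05):
    """One gradient-descent step on the combined potential for all agents."""
    new = []
    for i, (p, g) in enumerate(zip(positions, goals)):
        others = [q for j, q in enumerate(positions) if j != i]
        grad = attractive_grad(p, g) + repulsive_grad(p, others)
        new.append(p - dt * grad)
    return new

# Four agents driven toward a square "pre-assembly" configuration
pos = [np.array([3.0, 0.0]), np.array([0.0, 3.0]),
       np.array([-3.0, 0.0]), np.array([0.0, -3.0])]
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
         np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
for _ in range(400):
    pos = step(pos, goals)
```

    With these purely radial trajectories the repulsion never activates; it only matters when approach paths cross inside the safety distance.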

  2. A Single-Step Enrichment Medium for Nonchromogenic Isolation of Healthy and Cold-Injured Salmonella spp. from Fresh Vegetables.

    PubMed

    Kim, Hong-Seok; Choi, Dasom; Kang, Il-Byeong; Kim, Dong-Hyeon; Yim, Jin-Hyeok; Kim, Young-Ji; Chon, Jung-Whan; Oh, Deog-Hwan; Seo, Kun-Ho

    2017-02-01

    Culture-based detection of nontyphoidal Salmonella spp. in foods requires at least four working days; therefore, new detection methods that shorten the test time are needed. In this study, we developed a novel single-step Salmonella enrichment broth, SSE-1, and compared its detection capability with that of commercial single-step ONE broth-Salmonella (OBS) medium and a conventional two-step enrichment method using buffered peptone water and Rappaport-Vassiliadis soy broth (BPW-RVS). Minimally processed lettuce samples were artificially inoculated with low levels of healthy and cold-injured Salmonella Enteritidis (10⁰ or 10¹ colony-forming units/25 g), incubated in OBS, BPW-RVS, and SSE-1 broths, and streaked on xylose lysine deoxycholate (XLD) agar. Salmonella recoverability was significantly higher in BPW-RVS (79.2%) and SSE-1 (83.3%) compared to OBS (39.3%) (p < 0.05). Our data suggest that the SSE-1 single-step enrichment broth could completely replace two-step enrichment with reduced enrichment time from 48 to 24 h, performing better than commercial single-step enrichment medium in the conventional nonchromogenic Salmonella detection, thus saving time, labor, and cost.

  3. A geo-spatial data management system for potentially active volcanoes—GEOWARN project

    NASA Astrophysics Data System (ADS)

    Gogu, Radu C.; Dietrich, Volker J.; Jenny, Bernhard; Schwandner, Florian M.; Hurni, Lorenz

    2006-02-01

    Integrated studies of active volcanic systems for the purpose of long-term monitoring and forecast and short-term eruption prediction require large numbers of data-sets from various disciplines. A modern database concept has been developed for managing and analyzing multi-disciplinary volcanological data-sets. The GEOWARN project (choosing the "Kos-Yali-Nisyros-Tilos volcanic field, Greece" and the "Campi Flegrei, Italy" as test sites) is oriented toward potentially active volcanoes situated in regions of high geodynamic unrest. This article describes the volcanological database of the spatial and temporal data acquired within the GEOWARN project. As a first step, a spatial database embedded in a Geographic Information System (GIS) environment was created. Digital data of different spatial resolution, and time-series data collected at different intervals or periods, were unified in a common, four-dimensional representation of space and time. The database scheme comprises various information layers containing geographic data (e.g. seafloor and land digital elevation model, satellite imagery, anthropogenic structures, land-use), geophysical data (e.g. from active and passive seismicity, gravity, tomography, SAR interferometry, thermal imagery, differential GPS), geological data (e.g. lithology, structural geology, oceanography), and geochemical data (e.g. from hydrothermal fluid chemistry and diffuse degassing features). As a second step based on the presented database, spatial data analysis has been performed using custom-programmed interfaces that execute query scripts resulting in a graphical visualization of data. These query tools were designed and compiled following scenarios of known "behavior" patterns of dormant volcanoes and first candidate signs of potential unrest. The spatial database and query approach is intended to facilitate scientific research on volcanic processes and phenomena, and volcanic surveillance.

  4. Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.

    PubMed

    Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad

    2010-10-01

    Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples including skin lesions, tissues from internal organs, blood and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 μg/ml. The extracted CMLV DNA was proven suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared to spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  5. Volumetric ambient occlusion for real-time rendering and games.

    PubMed

    Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M

    2010-01-01

    This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.
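
    The fuzzy-membership idea can be sketched with a small Monte Carlo estimator: sample cosine-weighted directions over the hemisphere, intersect them with occluders, and weight hits by a distance falloff so that only nearby geometry occludes. This is an illustrative reading of the abstract, not the paper's tangent-sphere volume formula; the scene, falloff shape, and sample count are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_hemisphere_samples(n):
    """Cosine-weighted unit directions about the +z surface normal."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2*np.pi*u2
    return np.stack([r*np.cos(phi), r*np.sin(phi), np.sqrt(1 - u1)], axis=1)

def ray_sphere_hit(origin, dirs, center, radius):
    """Smallest positive ray parameter t hitting the sphere, else inf."""
    oc = origin - center
    b = dirs @ oc
    c = oc @ oc - radius**2
    disc = b*b - c
    t = np.where(disc >= 0, -b - np.sqrt(np.maximum(disc, 0.0)), np.inf)
    return np.where(t > 1e-6, t, np.inf)

def ambient_occlusion(p, spheres, n=4096, falloff=2.0):
    """Monte Carlo AO at point p (normal +z): 0 = open, 1 = fully occluded."""
    dirs = cosine_hemisphere_samples(n)
    t = np.full(n, np.inf)
    for c, r in spheres:
        t = np.minimum(t, ray_sphere_hit(p, dirs, c, r))
    # fuzzy membership: close hits occlude fully, distant hits fade out
    w = np.where(np.isfinite(t), np.exp(-t / falloff), 0.0)
    return w.mean()

p = np.array([0.0, 0.0, 0.0])                    # shaded point on the ground
occluder = [(np.array([0.0, 0.0, 1.5]), 0.5)]    # sphere hovering above
ao_near = ambient_occlusion(p, occluder)
ao_none = ambient_occlusion(p, [])
```

    Replacing the binary hit test with the exponential falloff is one way to realize the "fuzzy membership function that defines what constitutes nearby occlusions" mentioned in the abstract.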

  6. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  7. Space-Pseudo-Time Method: Application to the One-Dimensional Coulomb Potential and Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Weatherford, Charles; Gebremedhin, Daniel

    2016-03-01

    A new and efficient way of evolving the solution to an ordinary differential equation is presented. A finite element method is used in which we expand in a convenient local basis set of functions that enforce both function and first-derivative continuity across the boundaries of each element. We also implement an adaptive step-size choice for each element that is based on a Taylor series expansion. The method is applied to solve for the eigenpairs of the one-dimensional soft-Coulomb potential, and the hard-Coulomb limit is studied. The method is then used to obtain a numerical solution of the Kohn-Sham differential equation within the local density approximation, which is applied to the helium atom. Supported by the National Nuclear Security Agency, the Nuclear Regulatory Commission, and the Defense Threat Reduction Agency.
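
    The adaptive step-size idea, choosing each step from a Taylor-series error estimate, can be sketched without any finite element machinery. Here a plain Euler integrator picks each step h so the leading truncation term |y''| h²/2 stays under a tolerance; the ODE, tolerance, and control law are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def integrate_adaptive(f, d2y, y0, t0, t1, tol=1e-6, h_max=0.1):
    """Explicit Euler for y' = f(t, y) with Taylor-remainder step control.

    d2y(t, y) supplies an estimate of y'' along the solution; the step is
    chosen so that |y''| h^2 / 2 <= tol, capped at h_max.
    """
    t, y, steps = t0, y0, 0
    while t < t1:
        y2 = d2y(t, y)
        h = min(h_max, np.sqrt(2*tol / max(abs(y2), 1e-12)), t1 - t)
        y += h * f(t, y)
        t += h
        steps += 1
    return y, steps

# Test problem: y' = y, y(0) = 1, so y(1) = e and y'' = y as well
f = lambda t, y: y
d2y = lambda t, y: y
y_end, n_steps = integrate_adaptive(f, d2y, 1.0, 0.0, 1.0)
```

    Because the step shrinks where the estimated curvature is large, the integrator spends its effort exactly where a fixed grid would lose accuracy.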

  8. Electronic sleep analyzer

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.

    1970-01-01

    Electronic instrument automatically monitors the stages of sleep of a human subject. The analyzer provides a series of discrete voltage steps with each step corresponding to a clinical assessment of level of consciousness. It is based on the operation of an EEG and requires very little telemetry bandwidth or time.

  9. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber-debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is built from standard components, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping-motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is a fast, direct color identification of objects in the analyzed video images; the second step subjects detected objects to a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
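
    The fast first-pass color screening can be sketched with the standard library's colorsys module. Note that colorsys provides HSV rather than the exact HSI transform the instrument used, and the hue/saturation/value thresholds below are invented for illustration.

```python
import colorsys  # stdlib HSV conversion as a stand-in for the HSI system

def is_candidate(rgb, hue_lo=0.55, hue_hi=0.70, sat_min=0.3, val_min=0.2):
    """True if an (r, g, b) pixel in [0, 1] falls in the target (blue) window.

    A real screening pass would apply this test to every pixel of each video
    frame and flag connected regions for the slower second-stage analysis.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return hue_lo <= h <= hue_hi and s >= sat_min and v >= val_min

blue_fiber = (0.1, 0.2, 0.9)   # saturated blue: should be flagged
background = (0.8, 0.8, 0.8)   # gray background: saturation too low
```

    Working in a hue-based space rather than raw RGB is what makes a single threshold window robust to the intensity variations across a microscope field.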

  10. Fine Pointing of Military Spacecraft

    DTIC Science & Technology

    2007-03-01

    estimate is high. But feedback controls are attempting to fix the attitude at the next time step with error based on the previous time step without using ... Stability Analysis: Consider not using the reference trajectory in the feedback signal. The previous stability proofs (Refs. [43], [46]) are no ... robust steering law and quaternion feedback control [52]. TASS2 has a center-of-gravity offset disturbance that must be countered by the three CMG

  11. Development of annualized diameter and height growth equations for red alder: preliminary results.

    Treesearch

    Aaron Weiskittel; Sean M. Garber; Greg Johnson; Doug Maguire; Robert A. Monserud

    2006-01-01

    Most individual-tree based growth and yield models use a 5- to 10-year time step, which can make projections for a fast-growing species like red alder quite difficult. Further, it is rather cumbersome to simulate the effects of intensive silvicultural treatments such as thinning or pruning on a time step longer than one year given the highly dynamic nature of growth...

  12. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data, and unknown boundary conditions are the main factors that limit the modeling and simulation of groundwater level (GL) time series. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall, and runoff, considering various input scenarios, to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both the calibration and validation steps.

  13. QUICR-learning for Multi-Agent Coordination

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2006-01-01

    Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
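
    The counterfactual (difference) reward at the heart of this line of work can be sketched on a stateless congestion toy: each agent is credited with G(z) minus G(z with its own action removed), inside an otherwise ordinary Q-learning update. The game, reward shape, and parameters below are invented for illustration; this is not the paper's benchmark nor the full multi-time-step QUICR formulation.

```python
import random

random.seed(0)
N_AGENTS, N_ACTIONS = 6, 2            # each agent picks one of two "routes"

def system_reward(counts):
    """Congestion-style global reward: each route peaks at moderate load."""
    return sum(c * (N_AGENTS - c) for c in counts)

def counterfactual_reward(actions, i):
    """Difference reward for agent i: G(z) - G(z without agent i's action)."""
    counts = [actions.count(a) for a in range(N_ACTIONS)]
    G = system_reward(counts)
    counts[actions[i]] -= 1
    return G - system_reward(counts)

Q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]
alpha, eps = 0.1, 0.1
for _ in range(2000):
    # epsilon-greedy action selection for every agent
    acts = [random.randrange(N_ACTIONS) if random.random() < eps
            else max(range(N_ACTIONS), key=lambda a: Q[i][a])
            for i in range(N_AGENTS)]
    for i in range(N_AGENTS):
        r = counterfactual_reward(acts, i)
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])   # stateless Q update

final = [max(range(N_ACTIONS), key=lambda a: Q[i][a]) for i in range(N_AGENTS)]
```

    The appeal of the difference reward is that it stays sensitive to each agent's own action even as the number of agents grows, whereas the raw global reward becomes an increasingly noisy learning signal.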

  14. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
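
    The log-uniform temporal sampling can be sketched directly: draw sample times uniformly in log10(t), sort them, and give each Brownian increment a variance proportional to the elapsed interval. A purely viscous (diffusive) walk stands in here for the paper's Maxwell-Voigt model, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_time_walk(n_points=2000, t_min=1e-6, t_max=1e6, D=1.0):
    """1-D random walk sampled at times uniform in log10(t) over 12 decades.

    Increments are Gaussian with variance 2*D*dt, so the walk remains a
    faithful Brownian trajectory despite the wildly uneven time steps.
    """
    t = np.sort(10.0 ** rng.uniform(np.log10(t_min), np.log10(t_max), n_points))
    dt = np.diff(t, prepend=0.0)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt))
    return t, np.cumsum(steps)

t, x = log_time_walk()
# For pure diffusion the ensemble MSD should grow as <x(t)^2> = 2*D*t
```

    A fixed-step Wiener simulation covering the same 12 decades would need ~10^12 steps; the log-uniform sampling reaches it with a few thousand.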

  15. Structural Effects on the Spin Dynamics of Potential Molecular Qubits.

    PubMed

    Atzori, Matteo; Benci, Stefano; Morra, Elena; Tesi, Lorenzo; Chiesa, Mario; Torre, Renato; Sorace, Lorenzo; Sessoli, Roberta

    2018-01-16

    Control of spin-lattice magnetic relaxation is crucial to observe long quantum coherence in spin systems at reasonable temperatures. Such control is most often extremely difficult to achieve because of the coexistence of several relaxation mechanisms, that is, direct, Raman, and Orbach. These are not always easy to relate to the energy states of the investigated system, because of the contribution to the relaxation of additional spin-phonon coupling phenomena mediated by intramolecular vibrations. In this work, we have investigated the effect of slight changes in the molecular structure of four vanadium(IV)-based potential spin qubits on their spin dynamics, studied by alternating current (AC) susceptometry. The analysis of the magnetic field dependence of the relaxation time correlates well with the low-energy vibrational modes experimentally detected by time-domain THz spectroscopy. This confirms and extends our preliminary observations on the role played by spin-vibration coupling in determining the fine structure of the spin-lattice relaxation time as a function of the magnetic field, for S = 1/2 potential spin qubits. This study represents a step forward in the use of low-energy vibrational spectroscopy as a prediction tool for the design of molecular spin qubits with long-lived quantum coherence. Indeed, quantum coherence times of ca. 4.0-6.0 μs in the 4-100 K range are observed for the best performing vanadyl derivatives identified through this multitechnique approach.

  16. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed for use in numerical combustion codes, such as the National Combustor Code (NCC), or even in simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch between the two steps is based on a water concentration of 1×10⁻²⁰ moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times, for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx are obtained for Jet-A fuel and methane, with and without water injection, to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure, and temperature (T3).
The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
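
    The two-step switching logic described above can be sketched as follows. Both correlation functions are hypothetical placeholders (the paper's fitted correlations are not reproduced here); only the threshold test mirrors the abstract.

```python
# Threshold quoted in the abstract for switching between the two steps
H2O_SWITCH = 1e-20   # water concentration, moles/cc

def tau_averaged(phi, water_fuel_ratio, T, P):
    """Placeholder for the initial, time-averaged kinetic-time correlation
    (a function of equivalence ratio, water/fuel ratio, T, and P)."""
    return 1e-3 * (1.0 + water_fuel_ratio) / (phi * T / P)

def tau_instant(c_fuel, c_water, T, P):
    """Placeholder for the instantaneous kinetic-time correlation
    (a function of fuel and water mole concentrations, T, and P)."""
    return 1e-3 * (c_water / max(c_fuel, 1e-30)) * P / T

def chemical_kinetic_time(c_water, *, phi, wfr, c_fuel, T, P):
    """Select step one below the water threshold, step two above it."""
    if c_water < H2O_SWITCH:
        return tau_averaged(phi, wfr, T, P)       # step one: early times
    return tau_instant(c_fuel, c_water, T, P)     # step two: later times
```

    In a combustor code the returned kinetic time would then be compared against the turbulent mixing time to pick the rate-limiting process, as the abstract describes.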

  17. A cross-sectional study of the individual, social, and built environmental correlates of pedometer-based physical activity among elementary school children.

    PubMed

    McCormack, Gavin R; Giles-Corti, Billie; Timperio, Anna; Wood, Georgina; Villanueva, Karen

    2011-04-12

    Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points.After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home. 
Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children.

  18. A cross-sectional study of the individual, social, and built environmental correlates of pedometer-based physical activity among elementary school children

    PubMed Central

    2011-01-01

    Background Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. Method A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Results Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points. After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home. 
Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Conclusions Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children. PMID:21486475

  19. Independence of amplitude-frequency and phase calibrations in an SSVEP-based BCI using stepping delay flickering sequences.

    PubMed

    Chang, Hsiang-Chih; Lee, Po-Lei; Lo, Men-Tzung; Lee, I-Hui; Yeh, Ting-Kuang; Chang, Chun-Yen

    2012-05-01

    This study proposes a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) independent of amplitude-frequency and phase calibrations. Six stepping delay flickering sequences (SDFSs) at 32-Hz flickering frequency were used to implement a six-command BCI system. EEG signals recorded from the Oz position were first filtered within 29-35 Hz, segmented based on trigger events of the SDFSs to obtain SDFS epochs, and then stored separately in epoch registers. An epoch-average process suppressed the inter-SDFS interference. For each detection point, the latest six SDFS epochs in each epoch register were averaged and the normalized power of the averaged responses was calculated. The stimulus that induced the maximum normalized power was identified as the gazed-at visual target. Eight subjects were recruited in this study. All subjects were requested to produce the "563241" command sequence four times. The averaged accuracy, command transfer interval, and information transfer rate (mean ± std.) values for all eight subjects were 97.38 ± 5.97%, 3.56 ± 0.68 s, and 42.46 ± 11.17 bits/min, respectively. The proposed system requires no calibration in either the amplitude-frequency characteristic or the reference phase of the SSVEP, which may provide an efficient and reliable channel for the neuromuscular disabled to communicate with external environments.
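
    The detection rule, averaging the latest epochs per stimulus and picking the one with maximum normalized power, can be sketched on synthetic data. The sample rate, epoch length, and signal model below are invented stand-ins for real Oz-channel EEG.

```python
import numpy as np

rng = np.random.default_rng(2)
FS, F_STIM, N_EPOCH = 256, 32.0, 6      # sample rate (Hz), flicker freq, epochs

def normalized_power(avg_epoch):
    """Mean squared amplitude of an averaged epoch."""
    return np.sum(avg_epoch**2) / len(avg_epoch)

def detect(epochs_per_target):
    """epochs_per_target: one (n_epochs, n_samples) array per SDFS target.

    Average the latest N_EPOCH epochs in each register, then return the
    index of the target with the largest normalized power.
    """
    powers = [normalized_power(np.mean(e[-N_EPOCH:], axis=0))
              for e in epochs_per_target]
    return int(np.argmax(powers))

# Target 3 is attended: its epochs carry a phase-locked 32 Hz response,
# the other registers contain only noise, which epoch-averaging suppresses.
t = np.arange(128) / FS
make_noise = lambda: rng.normal(0.0, 1.0, (N_EPOCH, t.size))
epochs = [make_noise() for _ in range(6)]
epochs[3] = epochs[3] + np.sin(2*np.pi*F_STIM*t)   # phase-locked SSVEP
```

    Because the response is phase-locked to the stimulus triggers, averaging attenuates the noise power by roughly the number of epochs while leaving the locked component intact.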

  20. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
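
    The γ-index used for the accuracy assessment combines a distance-to-agreement and a dose-difference criterion. A 1-D sketch with synthetic profiles (global dose normalization, exhaustive nearest-point search) is shown below; the profiles and grid are invented, and a clinical γ evaluation is computed in 3-D against the Monte Carlo reference.

```python
import numpy as np

def gamma_passing_rate(x, dose_eval, dose_ref, dta_mm=3.0, dd_frac=0.03):
    """Fraction of reference points with gamma <= 1 (global dose criterion)."""
    dd = dd_frac * dose_ref.max()
    gammas = []
    for xi, di in zip(x, dose_ref):
        # gamma at (xi, di): minimum combined distance/dose metric over all
        # evaluated points
        g2 = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / dd) ** 2
        gammas.append(np.sqrt(g2.min()))
    return float(np.mean(np.asarray(gammas) <= 1.0))

x = np.linspace(0.0, 100.0, 501)            # positions in mm (0.2 mm grid)
ref = np.exp(-((x - 50.0) / 15.0) ** 2)     # synthetic reference profile
calc = np.exp(-((x - 51.0) / 15.0) ** 2)    # "calculated" profile, 1 mm shift
rate = gamma_passing_rate(x, calc, ref)     # 3%/3 mm criterion
```

    A 1 mm shift passes the 3%/3 mm test everywhere, while larger discrepancies drive γ above 1 and pull the passing rate down, which is what makes the metric a useful single-number summary.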

  1. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
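The upwind idea underlying such schemes, differencing from the side the flow comes from, can be shown with a minimal first-order step for linear advection (the paper's streamwise scheme for the Euler equations on a moving grid is considerably more elaborate; this is only the basic principle):

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """One explicit first-order upwind update for u_t + a*u_x = 0 with
    periodic boundaries; stable for CFL = |a|*dt/dx <= 1."""
    c = a * dt / dx
    if a >= 0:
        return u - c * (u - np.roll(u, 1))    # backward difference (flow from the left)
    return u - c * (np.roll(u, -1) - u)       # forward difference (flow from the right)
```

At CFL = 1 the update reduces to an exact one-cell shift; at smaller CFL it is stable but numerically dissipative, which is why higher-order upwind variants are used in practice.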

  2. Gynecologic oncology group strategies to improve timeliness of publication.

    PubMed

    Bialy, Sally; Blessing, John A; Stehman, Frederick B; Reardon, Anne M; Blaser, Kim M

    2013-08-01

    The Gynecologic Oncology Group (GOG) is a multi-institution cooperative group funded by the National Cancer Institute to conduct clinical trials encompassing clinical and basic scientific research in gynecologic malignancies. These results are disseminated via publication in peer-reviewed journals. This process requires collaboration of numerous investigators located in diverse cancer research centers. Coordination of manuscript development is positioned within the Statistical and Data Center (SDC), thus allowing the SDC personnel to manage the process and refine strategies to promote earlier dissemination of results. A major initiative to improve timeliness utilizing the assignment, monitoring, and enforcement of deadlines for each phase of manuscript development is the focus of this investigation. Document improvement in timeliness via comparison of deadline compliance and time to journal submission due to expanded administrative and technologic initiatives implemented in 2006. Major steps in the publication process include generation of first draft by the First Author and submission to SDC, Co-author review, editorial review by Publications Subcommittee, response to journal critique, and revision. Associated with each step are responsibilities of First Author to write or revise, collaborating Biostatistician to perform analysis and interpretation, and assigned SDC Clinical Trials Editorial Associate to format/revise according to journal requirements. Upon the initiation of each step, a deadline for completion is assigned. In order to improve efficiency, a publications database was developed to track potential steps in manuscript development that enables the SDC Director of Administration and the Publications Subcommittee Chair to assign, monitor, and enforce deadlines. They, in turn, report progress to Group Leadership through the Operations Committee. 
The success of the strategies utilized to improve the GOG publication process was assessed by comparing the timeliness of each potential step in the development of primary Phase II manuscripts during 2003-2006 versus 2007-2010. Improvement was noted in 10 of 11 identified steps resulting in a cumulative average improvement of 240 days from notification of data maturity to First Author through first submission to a journal. Moreover, the average time to journal acceptance has improved by an average of 346 days. The investigation is based on only Phase II trials to ensure comparability of manuscript complexity. Nonetheless, the procedures employed are applicable to the development of any clinical trials manuscript. The assignment, monitoring, and enforcement of deadlines for all stages of manuscript development have resulted in increased efficiency and timeliness. The positioning and support of manuscript development within the SDC provide a valuable resource to authors in meeting assigned deadlines, accomplishing peer review, and complying with journal requirements.

  3. Contrast-to-noise ratio optimization for a prototype phase-contrast computed tomography scanner.

    PubMed

    Müller, Mark; Yaroshenko, Andre; Velroyen, Astrid; Bech, Martin; Tapfer, Arne; Pauwels, Bart; Bruyndonckx, Peter; Sasov, Alexander; Pfeiffer, Franz

    2015-12-01

    In the field of biomedical X-ray imaging, novel techniques, such as phase-contrast and dark-field imaging, have the potential to enhance the contrast and provide complementary structural information about a specimen. In this paper, a first prototype of a preclinical X-ray phase-contrast CT scanner based on a Talbot-Lau interferometer is characterized. We present a study of the contrast-to-noise ratios for attenuation and phase-contrast images acquired with the prototype scanner. The shown results are based on a series of projection images and tomographic data sets of a plastic phantom in phase and attenuation-contrast recorded with varying acquisition settings. Subsequently, the signal and noise distribution of different regions in the phantom were determined. We present a novel method for estimation of contrast-to-noise ratios for projection images based on the cylindrical geometry of the phantom. Analytical functions, representing the expected signal in phase and attenuation-contrast for a circular object, are fitted to individual line profiles of the projection data. The free parameter of the fit function is used to estimate the contrast and the goodness of the fit is determined to assess the noise in the respective signal. The results depict the dependence of the contrast-to-noise ratios on the applied source voltages, the number of steps of the phase stepping routine, and the exposure times for an individual step. Moreover, the influence of the number of projection angles on the image quality of CT slices is investigated. Finally, the implications for future imaging purposes with the scanner are discussed.
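The profile-fitting idea can be sketched for the attenuation case: the line integral through a uniform cylinder of radius R is proportional to the chord length 2·sqrt(R² − x²), so a one-parameter fit yields the contrast while the fit residuals estimate the noise. The radius, noise level, and function names below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

R = 10.0  # assumed cylinder radius (same units as x)

def cyl_profile(x, amp, x0=0.0):
    """Expected attenuation line profile of a uniform cylinder: amp times the
    chord length 2*sqrt(R^2 - (x - x0)^2), zero outside the cylinder."""
    chord2 = R ** 2 - (x - x0) ** 2
    return amp * 2.0 * np.sqrt(np.clip(chord2, 0.0, None))

def estimate_cnr(x, profile):
    (amp,), _ = curve_fit(cyl_profile, x, profile, p0=[0.1])  # fit amplitude only
    resid = profile - cyl_profile(x, amp)
    contrast = 2.0 * amp * R                  # peak of the fitted profile
    return contrast / resid.std(ddof=1)       # contrast-to-noise ratio
```

The same scheme applies to the phase-contrast signal with the appropriate analytical profile substituted for the chord-length model.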

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Mark, E-mail: mark-mueller@ph.tum.de; Yaroshenko, Andre; Velroyen, Astrid

    In the field of biomedical X-ray imaging, novel techniques, such as phase-contrast and dark-field imaging, have the potential to enhance the contrast and provide complementary structural information about a specimen. In this paper, a first prototype of a preclinical X-ray phase-contrast CT scanner based on a Talbot-Lau interferometer is characterized. We present a study of the contrast-to-noise ratios for attenuation and phase-contrast images acquired with the prototype scanner. The shown results are based on a series of projection images and tomographic data sets of a plastic phantom in phase and attenuation-contrast recorded with varying acquisition settings. Subsequently, the signal and noise distribution of different regions in the phantom were determined. We present a novel method for estimation of contrast-to-noise ratios for projection images based on the cylindrical geometry of the phantom. Analytical functions, representing the expected signal in phase and attenuation-contrast for a circular object, are fitted to individual line profiles of the projection data. The free parameter of the fit function is used to estimate the contrast and the goodness of the fit is determined to assess the noise in the respective signal. The results depict the dependence of the contrast-to-noise ratios on the applied source voltages, the number of steps of the phase stepping routine, and the exposure times for an individual step. Moreover, the influence of the number of projection angles on the image quality of CT slices is investigated. Finally, the implications for future imaging purposes with the scanner are discussed.

  5. Meta-analysis and psychophysiology: A tutorial using depression and action-monitoring event-related potentials.

    PubMed

    Moran, Tim P; Schroder, Hans S; Kneip, Chelsea; Moser, Jason S

    2017-01-01

    Meta-analyses are regularly used to quantitatively integrate the findings of a field, assess the consistency of an effect and make decisions based on extant research. The current article presents an overview and step-by-step tutorial of meta-analysis aimed at psychophysiological researchers. We also describe best-practices and steps that researchers can take to facilitate future meta-analysis in their sub-discipline. Lastly, we illustrate each of the steps by presenting a novel meta-analysis on the relationship between depression and action-monitoring event-related potentials - the error-related negativity (ERN) and the feedback negativity (FN). This meta-analysis found that the literature on depression and the ERN is contaminated by publication bias. With respect to the FN, the meta-analysis found that depression does predict the magnitude of the FN; however, this effect was dependent on the type of task used by the study. Copyright © 2016 Elsevier B.V. All rights reserved.
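As a sketch of the quantitative machinery behind such a tutorial, the following pools correlation coefficients with inverse-variance weights and a DerSimonian-Laird random-effects estimate of between-study variance. This is one common choice; the article's exact procedure and bias corrections may differ:

```python
import numpy as np

def random_effects_meta(r, n):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher's z.
    r: per-study correlations; n: per-study sample sizes."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                  # Fisher z transform
    v = 1.0 / (n - 3.0)                # sampling variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)          # heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(z) - 1)) / C)     # between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.tanh(z_re), se, tau2     # pooled r, its SE (z scale), tau^2
```

With identical study effects the heterogeneity estimate collapses to zero and the result equals the fixed-effect pooled correlation.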

  6. The Satellite Test of the Equivalence Principle (STEP)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  7. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region.
This multi-step approach reinforces the differing relationship species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  8. One-step synthesis of bioactive glass by spray pyrolysis

    NASA Astrophysics Data System (ADS)

    Shih, Shao-Ju; Chou, Yu-Jen; Chien, I.-Chen

    2012-12-01

    Bioactive glasses (BGs) have recently received more attention from biologists and engineers because of their potential applications in bone implants. The sol-gel process is one of the most popular methods for fabricating BGs, and has been used to produce BGs for years. However, the sol-gel process has the disadvantages of discontinuous processing and a long processing time. This study presented a one-step spray pyrolysis (SP) synthesis method to overcome these disadvantages. This SP method has synthesized spherical bioactive glass (SBG) and mesoporous bioactive glass (MBG) particles using Si-, Ca- and P-based precursors. This study used transmission electron microscopy, selected area electron diffraction and X-ray dispersive spectroscopy to characterize the microstructure, crystallographic structure, and chemical composition for the BG particles. In addition, in vitro bioactive tests showed the formation of hydroxyl apatite layers on SBG and MBG particles after immersion in simulated body fluid for 5 h. Experimental results show the SP formation mechanisms of SBG and MBG particles.

  9. Deciding to Decide: How Decisions Are Made and How Some Forces Affect the Process.

    PubMed

    McConnell, Charles R

    There is a decision-making pattern that applies in all situations, large or small, although in small decisions, the steps are not especially evident. The steps are gathering information, analyzing information and creating alternatives, selecting and implementing an alternative, and following up on implementation. The amount of effort applied in any decision situation should be consistent with the potential consequences of the decision. Essentially, all decisions are subject to certain limitations or constraints, forces, or circumstances that limit one's range of choices. Follow-up on implementation is the phase of decision making most often neglected, yet it is frequently the phase that determines success or failure. Risk and uncertainty are always present in a decision situation, and the application of human judgment is always necessary. In addition, there are often emotional forces at work that can at times unwittingly steer one away from that which is best or most workable under the circumstances and toward a suboptimal result based largely on the desires of the decision maker.

  10. One-Step Generation of Multifunctional Polyelectrolyte Microcapsules via Nanoscale Interfacial Complexation in Emulsion (NICE)

    DOE PAGES

    Kim, Miju; Yeo, Seon Ju; Highley, Christopher B.; ...

    2015-07-14

    Polyelectrolyte microcapsules represent versatile stimuli-responsive structures that enable the encapsulation, protection, and release of active agents. Their conventional preparation methods, however, tend to be time-consuming, yield low encapsulation efficiency, and seldom allow for the dual incorporation of hydrophilic and hydrophobic materials, limiting their widespread utilization. In this work, we present a method to fabricate stimuli-responsive polyelectrolyte microcapsules in one step based on nanoscale interfacial complexation in emulsions (NICE) followed by spontaneous droplet hatching. NICE microcapsules can incorporate both hydrophilic and hydrophobic materials and also can be induced to trigger the release of encapsulated materials by changes in the solution pH or ionic strength. We also show that NICE microcapsules can be functionalized with nanomaterials to exhibit useful functionality, such as response to a magnetic field and disassembly in response to light. NICE represents a potentially transformative method to prepare multifunctional nanoengineered polyelectrolyte microcapsules for various applications such as drug delivery and cell mimicry.

  11. A Pilot Study of Gait Function in Farmworkers in Eastern North Carolina.

    PubMed

    Nguyen, Ha T; Kritchevsky, Stephen B; Foxworth, Judy L; Quandt, Sara A; Summers, Phillip; Walker, Francis O; Arcury, Thomas A

    2015-01-01

    Farmworkers endure many job-related hazards, including fall-related work injuries. Gait analysis may be useful in identifying potential fallers. The goal of this pilot study was to explore differences in gait between farmworkers and non-farmworkers. The sample included 16 farmworkers and 24 non-farmworkers. Gait variables were collected using the portable GAITRite system, a 16-foot computerized walkway. Generalized linear regression models were used to examine group differences. All models were adjusted for two established confounders, age and body mass index. There were no significant differences in stride length, step length, double support time, and base of support; but farmworkers had greater irregularity of stride length (P = .01) and step length (P = .08). Farmworkers performed significantly worse on gait velocity (P = .003) and cadence (P < .001) relative to non-farmworkers. We found differences in gait function between farmworkers and non-farmworkers. These findings suggest that measuring gait with a portable walkway system is feasible and informative in farmworkers and may possibly be of use in assessing fall risk.

  12. One-Step Generation of Multifunctional Polyelectrolyte Microcapsules via Nanoscale Interfacial Complexation in Emulsion (NICE).

    PubMed

    Kim, Miju; Yeo, Seon Ju; Highley, Christopher B; Burdick, Jason A; Yoo, Pil J; Doh, Junsang; Lee, Daeyeon

    2015-08-25

    Polyelectrolyte microcapsules represent versatile stimuli-responsive structures that enable the encapsulation, protection, and release of active agents. Their conventional preparation methods, however, tend to be time-consuming, yield low encapsulation efficiency, and seldom allow for the dual incorporation of hydrophilic and hydrophobic materials, limiting their widespread utilization. In this work, we present a method to fabricate stimuli-responsive polyelectrolyte microcapsules in one step based on nanoscale interfacial complexation in emulsions (NICE) followed by spontaneous droplet hatching. NICE microcapsules can incorporate both hydrophilic and hydrophobic materials and also can be induced to trigger the release of encapsulated materials by changes in the solution pH or ionic strength. We also show that NICE microcapsules can be functionalized with nanomaterials to exhibit useful functionality, such as response to a magnetic field and disassembly in response to light. NICE represents a potentially transformative method to prepare multifunctional nanoengineered polyelectrolyte microcapsules for various applications such as drug delivery and cell mimicry.

  13. [Application of virtual instrumentation technique in toxicological studies].

    PubMed

    Moczko, Jerzy A

    2005-01-01

    Research investigations frequently require a direct connection of measuring equipment to the computer. The virtual instrumentation technique considerably facilitates the programming of sophisticated acquisition-and-analysis procedures. In the standard approach, these two steps are performed sequentially with separate software tools: the acquired data are transferred via the export/import procedures of one program to another, which executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987, National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem being solved and omit all syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA-RISC workstations. This flexibility ensures that programs prepared for one particular platform are also appropriate for another. In the presented paper, the basic principles of connecting research equipment to computer systems are described.

  14. One-Step Generation of Multifunctional Polyelectrolyte Microcapsules via Nanoscale Interfacial Complexation in Emulsion (NICE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Miju; Yeo, Seon Ju; Highley, Christopher B.

    Polyelectrolyte microcapsules represent versatile stimuli-responsive structures that enable the encapsulation, protection, and release of active agents. Their conventional preparation methods, however, tend to be time-consuming, yield low encapsulation efficiency, and seldom allow for the dual incorporation of hydrophilic and hydrophobic materials, limiting their widespread utilization. In this work, we present a method to fabricate stimuli-responsive polyelectrolyte microcapsules in one step based on nanoscale interfacial complexation in emulsions (NICE) followed by spontaneous droplet hatching. NICE microcapsules can incorporate both hydrophilic and hydrophobic materials and also can be induced to trigger the release of encapsulated materials by changes in the solution pH or ionic strength. We also show that NICE microcapsules can be functionalized with nanomaterials to exhibit useful functionality, such as response to a magnetic field and disassembly in response to light. NICE represents a potentially transformative method to prepare multifunctional nanoengineered polyelectrolyte microcapsules for various applications such as drug delivery and cell mimicry.

  15. Fast quantification of bovine milk proteins employing external cavity-quantum cascade laser spectroscopy.

    PubMed

    Schwaighofer, Andreas; Kuligowski, Julia; Quintás, Guillermo; Mayer, Helmut K; Lendl, Bernhard

    2018-06-30

    Analysis of proteins in bovine milk is usually tackled by time-consuming analytical approaches involving wet-chemical, multi-step sample clean-up procedures. The use of external cavity-quantum cascade laser (EC-QCL) based IR spectroscopy was evaluated as an alternative screening tool for direct and simultaneous quantification of individual proteins (i.e. casein and β-lactoglobulin) and total protein content in commercial bovine milk samples. Mid-IR spectra of protein standard mixtures were used for building partial least squares (PLS) regression models. A sample set comprising different milk types (pasteurized; differently processed extended shelf life, ESL; ultra-high temperature, UHT) was analysed and results were compared to reference methods. Concentration values of the QCL-IR spectroscopy approach obtained within several minutes are in good agreement with reference methods involving multiple sample preparation steps. The potential application as a fast screening method for estimating the heat load applied to liquid milk is demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Rapid Detection of Pathogenic Bacteria from Fresh Produce by Filtration and Surface-Enhanced Raman Spectroscopy

    NASA Astrophysics Data System (ADS)

    Wu, Xiaomeng; Han, Caiqin; Chen, Jing; Huang, Yao-Wen; Zhao, Yiping

    2016-04-01

    The detection of Salmonella Poona from cantaloupe cubes and E. coli O157:H7 from lettuce has been explored by using a filtration method and surface-enhanced Raman spectroscopy (SERS) based on vancomycin-functionalized silver nanorod array substrates. It is found that with a two-step filtration process, the limit of detection (LOD) of Salmonella Poona from cantaloupe cubes can be as low as 100 CFU/mL in less than 4 h, whereas the chlorophyll in the lettuce causes severe SERS spectral interference. To improve the LOD of lettuce, a three-step filtration method with a hydrophobic filter is proposed. The hydrophobic filter can effectively eliminate the interferences from chlorophyll and achieve a LOD of 1000 CFU/mL detection of E. coli O157:H7 from lettuce samples within 5 h. With the low LODs and rapid detection time, the SERS biosensing platform has demonstrated its potential as a rapid, simple, and inexpensive means for pathogenic bacteria detection from fresh produce.

  17. STEP Experiment Requirements

    NASA Technical Reports Server (NTRS)

    Brumfield, M. L. (Compiler)

    1984-01-01

    A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concept and requirements. A feasibility and preliminary definition study was conducted and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.

  18. A clinical measure of maximal and rapid stepping in older women.

    PubMed

    Medell, J L; Alexander, N B

    2000-08-01

    In older adults, clinical measures have been used to assess fall risk based on the ability to maintain stance or to complete a functional task. However, in an impending fall situation, a stepping response is often used when strategies to maintain stance are inadequate. We examined how maximal and rapid stepping performance might differ among healthy young, healthy older, and balance-impaired older adults, and how this stepping performance related to other measures of balance and fall risk. Young (Y; n = 12; mean age, 21 years), unimpaired older (UO; n = 12; mean age, 69 years), and balance-impaired older women (IO; n = 10; mean age, 77 years) were tested in their ability to take a maximal step (Maximum Step Length or MSL) and in their ability to take rapid steps in three directions (front, side, and back), termed the Rapid Step Test (RST). Time to complete the RST and stepping errors occurring during the RST were noted. The IO group, compared with the Y and UO groups, demonstrated significantly poorer balance and higher fall risk, based on performance on tasks such as unipedal stance. Mean MSL was significantly higher (by 16%) in the Y than in the UO group and in the UO (by 30%) than in the IO group. Mean RST time was significantly faster in the Y group versus the UO group (by 24%) and in the UO group versus the IO group (by 15%). Mean RST errors tended to be higher in the UO than in the Y group, but were significantly higher only in the UO versus the IO group. Both MSL and RST time correlated strongly (0.5 to 0.8) with other measures of balance and fall risk including unipedal stance, tandem walk, leg strength, and the Activities-Specific Balance Confidence (ABC) scale. We found substantial declines in the ability of both unimpaired and balance-impaired older adults to step maximally and to step rapidly.
Stepping performance is closely related to other measures of balance and fall risk and might be considered in future studies as a predictor of falls and fall-related injuries.

  19. Advances in simultaneous atmospheric profile and cloud parameter regression based retrieval from high-spectral resolution radiance measurements

    NASA Astrophysics Data System (ADS)

    Weisz, Elisabeth; Smith, William L.; Smith, Nadia

    2013-06-01

    The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.

  20. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
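The second step, obtaining instantaneous amplitude and frequency from an IMF via the Hilbert transform, can be sketched with SciPy's analytic-signal helper (the IMF here is a stand-in pure tone, not the output of an actual EMD sifting run):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
imf = np.cos(2 * np.pi * 50 * t)               # stand-in for one IMF (50 Hz tone)

analytic = hilbert(imf)                         # analytic signal x + i*H[x]
amplitude = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
```

Plotting `amplitude` (or its square, the local energy) against time and `inst_freq` gives one trace of the energy-frequency-time distribution; summing such traces over all IMFs yields the Hilbert Spectrum described above.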

  1. Development and assessment of Transpirative Deficit Index (D-TDI) for agricultural drought monitoring

    NASA Astrophysics Data System (ADS)

    Borghi, Anna; Rienzner, Michele; Gandolfi, Claudio; Facchi, Arianna

    2017-04-01

Drought is a major cause of crop yield loss, both in rainfed and irrigated agroecosystems. In past decades, many approaches have been developed to assess agricultural drought, usually based on the monitoring or modelling of the soil water content condition. All these indices show weaknesses when applied to real-time drought monitoring and management at the local scale, since they do not explicitly consider crops and soil properties at an adequate spatial resolution. This work describes a newly developed agricultural drought index, called the Transpirative Deficit Index (D-TDI), and assesses the results of its application over a study area of about 210 km2 within the Po River Plain (northern Italy). The index is based on transforming the interannual distribution of the transpirative deficit (potential crop transpiration minus actual transpiration), calculated daily by means of a spatially distributed conceptual hydrological model and cumulated over user-selected time steps, to a standard normal distribution (following the approach proposed by the meteorological SPI - Standardized Precipitation Index). For the application to the study area, a uniform maize crop cover (maize is the most widespread crop in the area) and a 22-year (1993-2014) meteorological data series were considered. Simulation results consist of maps of the index cumulated over 10-day time steps over a mesh with cells of 250 m. A correlation analysis was carried out (1) to study the characteristics and the memory of the D-TDI and to assess its intra- and inter-annual variability, and (2) to assess the response of agricultural drought (i.e., the information provided by the D-TDI) to meteorological drought computed through the SPI over different temporal steps. The D-TDI is positively auto-correlated with a persistence of 30 days, and positively cross-correlated to the SPI with a persistence of 40 days, demonstrating that the D-TDI responds to meteorological forcing. 
Correlation analyses demonstrate that soils characterized by a high available water content (AWC) can more easily compensate for short-term variability in the precipitation pattern, while soils with low AWC are more tightly linked to the SPI variability. Since the D-TDI relies on climate as well as fine-resolution soil and land-cover data, it provides a more reliable measure of the evolution of agricultural drought over the territory than that achieved through meteorological drought indices. Accumulating the index over a 10-day period on a mesh with cells of 250 m makes it possible to capture the response of the territory to drought at time and spatial scales of interest for stakeholders. Modelling efforts utilizing the D-TDI have the potential to shed light on the vulnerability of agricultural areas to drought; future work using the D-TDI as a tool to map drought-prone areas could therefore improve the ability of farmers and irrigation district managers to cope with agricultural droughts and set up adaptation actions. Although the D-TDI was applied in this study to historical data series, the index has the potential to be used for real-time or forecast-based monitoring by incorporating real-time or forecast meteorological data, giving stakeholders the opportunity to cope promptly with droughts.
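    The SPI-style standardization at the core of the D-TDI can be sketched as follows. This is a minimal illustration using empirical (Weibull) plotting positions rather than the fitted parametric distribution an operational SPI/D-TDI implementation would use, and the deficit values are hypothetical.

    ```python
    from statistics import NormalDist

    def standardize_spi_like(deficits):
        """Map a sample of cumulated transpirative deficits onto a standard
        normal variate via empirical (Weibull) plotting positions, mimicking
        the SPI construction described above (the real SPI fits a parametric
        distribution, e.g. a gamma, before the transform)."""
        n = len(deficits)
        # rank each value (1 = smallest deficit)
        order = sorted(range(n), key=lambda i: deficits[i])
        z = [0.0] * n
        inv = NormalDist().inv_cdf
        for rank, i in enumerate(order, start=1):
            p = rank / (n + 1)          # Weibull plotting position in (0, 1)
            z[i] = inv(p)               # standard-normal quantile
        return z

    # 22 hypothetical yearly deficit totals (mm) for one cell and one 10-day window
    deficits = [12, 40, 7, 55, 23, 31, 18, 3, 61, 27, 35,
                14, 49, 9, 44, 20, 38, 25, 52, 16, 29, 33]
    z = standardize_spi_like(deficits)
    ```

    The largest deficit maps to the largest (most drought-like) index value, and the standardized values are symmetric around zero by construction.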

  2. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
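    The two-step structure can be illustrated on simulated data. This sketch is not the paper's estimator: it omits the stacked-score variance correction that is the paper's contribution and uses a toy data-generating process. It only shows step 1 (a logistic mediator model fit by Newton-Raphson) feeding ratio-of-mediator-probability weights into step 2.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_logistic(X, y, iters=25):
        """Step 1: Newton-Raphson MLE for a logistic mediator model."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1.0 - p)
            H = X.T @ (X * W[:, None])            # observed information
            beta += np.linalg.solve(H, X.T @ (y - p))
        return beta

    # toy data: covariate x, randomized treatment t, binary mediator m, outcome y
    n = 2000
    x = rng.normal(size=n)
    t = rng.integers(0, 2, size=n)
    m = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 * x + 1.0 * t)))).astype(float)
    y = 1.0 * t + 2.0 * m + 0.5 * x + rng.normal(size=n)

    # step 1: fit P(M = 1 | T, X)
    X1 = np.column_stack([np.ones(n), x, t])
    beta = fit_logistic(X1, m)

    def pm(tval):
        """Fitted mediator probability under treatment value tval."""
        return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x + beta[2] * tval)))

    # step 2: RMP weights for treated units shift their mediator distribution
    # to its counterfactual under control, so the weighted outcome mean
    # estimates E[Y(1, M(0))]
    treated = t == 1
    w = np.where(m[treated] == 1.0,
                 pm(0)[treated] / pm(1)[treated],
                 (1.0 - pm(0)[treated]) / (1.0 - pm(1)[treated]))
    y1_m0 = np.average(y[treated], weights=w)
    indirect = y[treated].mean() - y1_m0   # E[Y(1, M(1))] - E[Y(1, M(0))]
    ```

    Because the weights in step 2 are functions of the step-1 coefficient estimates, naive standard errors for `indirect` ignore step-1 sampling error; that is exactly the problem the stacked estimating-equations approach addresses.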

  3. One Step Quantum Key Distribution Based on EPR Entanglement

    PubMed Central

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-01-01

A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage-time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves its maneuverability. Moreover, a security analysis is given: a simple type of eavesdropper's attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol, which involves two steps, the proposed protocol does not need to store the qubits and involves only one step. PMID:27357865

  4. Application of an Integrated Methodology for Propulsion and Airframe Control Design to a STOVL Aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane

    1994-01-01

An advanced methodology for integrated flight/propulsion control (IFPC) design for future aircraft, which will use propulsion-system-generated forces and moments for enhanced maneuver capabilities, is briefly described. This methodology has the potential to address in a systematic manner the coupling between the airframe and the propulsion subsystems typical of such enhanced-maneuverability aircraft. Application of the methodology to a short take-off vertical landing (STOVL) aircraft in the landing approach to hover transition flight phase is presented, with a brief description of the various steps in the IFPC design methodology. The details of the individual steps have been described in previous publications, and the objective of this paper is to focus on how the components of the control system designed at each step integrate into the overall IFPC system. The full nonlinear IFPC system was evaluated extensively in nonreal-time simulations as well as piloted simulations. Results from the nonreal-time evaluations are presented in this paper. Lessons learned from this application study are summarized in terms of areas of potential improvement in the STOVL IFPC design as well as identification of technology development areas to enhance the applicability of the proposed design methodology.

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
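    The core idea, picking the subinterval from the fastest mode revealed by modal analysis of a linearized SMIB model, can be sketched as follows; the machine constants are illustrative, not taken from the paper's test system.

    ```python
    import numpy as np

    # Linearized SMIB swing dynamics around an operating point:
    #   M * d2(delta)/dt2 = -D * d(delta)/dt - Ks * delta
    # The constants below are illustrative per-unit values.
    M, D, Ks = 0.026, 0.15, 1.2   # inertia, damping, synchronizing coefficient

    A = np.array([[0.0, 1.0],
                  [-Ks / M, -D / M]])      # state matrix for [delta, delta_dot]

    eigvals = np.linalg.eigvals(A)
    f_max = np.abs(eigvals.imag).max() / (2.0 * np.pi)   # fastest mode (Hz)

    # resolve the fastest identified mode with ~20 integration points per period
    subinterval = 1.0 / (20.0 * f_max)
    ```

    The eigenvalues of the state matrix expose the local electromechanical mode; the subinterval is then sized to resolve that mode rather than chosen by trial and error.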

  6. Understanding molecular motor walking along a microtubule: a thermosensitive asymmetric Brownian motor driven by bubble formation.

    PubMed

    Arai, Noriyoshi; Yasuoka, Kenji; Koishi, Takahiro; Ebisuzaki, Toshikazu; Zeng, Xiao Cheng

    2013-06-12

    The "asymmetric Brownian ratchet model", a variation of Feynman's ratchet and pawl system, is invoked to understand the kinesin walking behavior along a microtubule. The model system, consisting of a motor and a rail, can exhibit two distinct binding states, namely, the random Brownian state and the asymmetric potential state. When the system is transformed back and forth between the two states, the motor can be driven to "walk" in one direction. Previously, we suggested a fundamental mechanism, that is, bubble formation in a nanosized channel surrounded by hydrophobic atoms, to explain the transition between the two states. In this study, we propose a more realistic and viable switching method in our computer simulation of molecular motor walking. Specifically, we propose a thermosensitive polymer model with which the transition between the two states can be controlled by temperature pulses. Based on this new motor system, the stepping size and stepping time of the motor can be recorded. Remarkably, the "walking" behavior observed in the newly proposed model resembles that of the realistic motor protein. The bubble formation based motor not only can be highly efficient but also offers new insights into the physical mechanism of realistic biomolecule motors.
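    The two-state switching described above can be mimicked with a minimal flashing-ratchet Langevin simulation. All parameters are illustrative nondimensional values (friction and kT of order one), not fitted to kinesin, and the temperature pulse is reduced to a simple on/off toggle of the asymmetric potential.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative parameters: sawtooth period L, asymmetry a (steep rise
    # over [0, a]), barrier height U0, thermal energy kT, friction = 1.
    L, a = 1.0, 0.2
    U0, kT, dt = 5.0, 1.0, 1e-4

    def force(x, on):
        """-dU/dx of an asymmetric sawtooth with minima at multiples of L,
        or zero force in the free Brownian state."""
        if not on:
            return np.zeros_like(x)
        s = np.mod(x, L)
        return np.where(s < a, -U0 / a, U0 / (L - a))

    x = np.zeros(400)                    # 400 independent motors in one well
    for cycle in range(20):              # each pulse toggles the binding state
        for on, duration in ((False, 0.03), (True, 0.15)):
            for _ in range(int(round(duration / dt))):
                x += (force(x, on) * dt
                      + np.sqrt(2.0 * kT * dt) * rng.normal(size=x.size))

    mean_step = x.mean() / 20.0          # net displacement per on/off cycle
    ```

    During the free phase the motors diffuse symmetrically, but the short steep side of the sawtooth rectifies that spread when the potential switches back on, producing a net drift in one direction, which is the directed "walking" the abstract describes.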

  7. 3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications

    NASA Astrophysics Data System (ADS)

    Jurjević, L.; Gašparović, M.

    2017-05-01

Development of technology in the area of cameras, computers and algorithms for the 3D reconstruction of objects from images has resulted in the increased popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is the reason why this research can find its application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other sciences.
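    The accuracy check mentioned above, the spatial difference between reference and produced point clouds, can be sketched with a brute-force cloud-to-cloud distance. This is a minimal stand-in for what a KD-tree-backed tool would do on real data; the clouds here are synthetic.

    ```python
    import numpy as np

    def cloud_to_cloud_rmse(produced, reference):
        """Brute-force nearest-neighbour distance from every produced point
        to the reference cloud, returned as an RMSE. (A KD-tree would be
        used instead for clouds of realistic size.)"""
        # squared pairwise distances: |p|^2 + |r|^2 - 2 p.r
        d2 = (np.sum(produced**2, axis=1)[:, None]
              + np.sum(reference**2, axis=1)[None, :]
              - 2.0 * produced @ reference.T)
        nearest = np.sqrt(np.maximum(d2.min(axis=1), 0.0))
        return float(np.sqrt(np.mean(nearest**2)))

    rng = np.random.default_rng(7)
    reference = rng.random((500, 3))                              # "ground truth"
    produced = reference + rng.normal(scale=0.01, size=(500, 3))  # noisy copy
    err = cloud_to_cloud_rmse(produced, reference)
    ```

    A perfect reconstruction yields an RMSE near zero; the injected 1% noise shows up directly in the reported error.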

  8. Two-step electrodeposition to fabricate the p-n heterojunction of a Cu2O/BiVO4 photoanode for the enhancement of photoelectrochemical water splitting.

    PubMed

    Bai, Shouli; Liu, Jingchao; Cui, Meng; Luo, Ruixian; He, Jing; Chen, Aifan

    2018-05-15

    A Cu2O/BiVO4 p-n heterojunction based photoanode in photoelectrochemical (PEC) water splitting is fabricated by a two-step electrodeposition method on an FTO substrate followed by annealing treatment. The structures and properties of the samples are characterized by XRD, FESEM, HRTEM, XPS and UV-visible spectra. The photoelectrochemical activity of the photoanode in water oxidation has been investigated and measured in a three electrode quartz cell system; the obtained maximum photocurrent density of 1.72 mA cm-2 at 1.23 V vs. RHE is 4.5 times higher than that of pristine BiVO4 thin films (∼0.38 mA cm-2). The heterojunction based photoanode also exhibits a tremendous cathodic shift of the onset potential (∼420 mV) and enhancement in the IPCE value by more than 4-fold. The enhanced photoelectrochemical properties of the Cu2O/BiVO4 photoelectrode are attributed to the efficient separation of the photoexcited electron-hole pairs caused by the inner electronic field (IEF) of the p-n heterojunction.

  9. Development and evaluation of probe based real time loop mediated isothermal amplification for Salmonella: A new tool for DNA quantification.

    PubMed

    Mashooq, Mohmad; Kumar, Deepak; Niranjan, Ankush Kiran; Agarwal, Rajesh Kumar; Rathore, Rajesh

    2016-07-01

A one-step, single-tube, accelerated probe-based real-time loop-mediated isothermal amplification (RT-LAMP) assay was developed for detecting the invasion gene (invA) of Salmonella. The probe-based RT-LAMP is a novel method of gene amplification that amplifies nucleic acid with high specificity and rapidity under isothermal conditions with a set of six primers. The whole procedure is very simple and rapid, and amplification can be obtained in 20 min. Detection of gene amplification was accomplished by the amplification curve and turbidity, and addition of a DNA-binding dye at the end of the reaction results in a colour difference that can be visualized under normal daylight and under UV. The sensitivity of the developed assay was found to be 10-fold higher than that of TaqMan-based qPCR. The specificity of the RT-LAMP assay was validated by the absence of any cross-reaction with other members of the family Enterobacteriaceae and other gram-negative bacteria. These results indicate that the probe-based RT-LAMP assay is extremely rapid, cost-effective, highly specific and sensitive, and has potential usefulness for rapid Salmonella surveillance. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  10. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. 
PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
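    The two-stage scheme, an off-line database fit followed by cheap real-time RBFN evaluation, can be sketched in one dimension. The "displacement database" here is a hypothetical analytic function standing in for FEM output, and the network is a plain Gaussian RBF least-squares fit, not the PhyNNeSS implementation.

    ```python
    import numpy as np

    def fit_rbfn(X, y, centers, width):
        """Off-line step: solve for RBFN output weights by least squares on
        a precomputed displacement database (toy 1-D stand-in for FEM data)."""
        G = np.exp(-((X[:, None] - centers[None, :])**2) / (2.0 * width**2))
        w, *_ = np.linalg.lstsq(G, y, rcond=None)
        return w

    def rbfn_predict(x, centers, width, w):
        """Real-time step: cheap weighted sum of radial basis functions."""
        G = np.exp(-((np.atleast_1d(x)[:, None] - centers[None, :])**2)
                   / (2.0 * width**2))
        return G @ w

    # hypothetical database: prescribed "displacements" and a nonlinear response
    X = np.linspace(0.0, 1.0, 200)
    y = np.sin(2.0 * np.pi * X) + 0.5 * X**2

    centers = np.linspace(0.0, 1.0, 25)   # accuracy controlled by neuron count
    w = fit_rbfn(X, y, centers, width=0.08)
    pred = rbfn_predict(X, centers, 0.08, w)
    max_err = float(np.abs(pred - y).max())
    ```

    The prediction step is a single small matrix-vector product, which is what makes kHz-rate haptic updates plausible, and, as the abstract notes, accuracy scales with the number of neurons (here, centers).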

  11. Partition-based discrete-time quantum walks

    NASA Astrophysics Data System (ADS)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the 2-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
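    The "two-step coined model" mentioned above, evolution by the square of the standard coined-walk operator, can be sketched directly. This is a generic Hadamard walk on a cycle, not a reconstruction of the paper's formal equivalence proof.

    ```python
    import numpy as np

    N = 64                                          # nodes on a cycle
    H = np.array([[1.0, 1.0],
                  [1.0, -1.0]]) / np.sqrt(2.0)      # Hadamard coin

    def step(psi):
        """One step of the standard coined walk: coin toss, then shift."""
        psi = psi @ H.T                    # apply the coin to the coin index
        out = np.empty_like(psi)
        out[:, 0] = np.roll(psi[:, 0], 1)  # coin 0 moves right
        out[:, 1] = np.roll(psi[:, 1], -1) # coin 1 moves left
        return out

    # symmetric initial state localized at node 0
    psi = np.zeros((N, 2), dtype=complex)
    psi[0] = [1.0 / np.sqrt(2.0), 1j / np.sqrt(2.0)]

    # the two-step coined model evolves by the square of the one-step operator
    for _ in range(15):
        psi = step(step(psi))

    prob = np.sum(np.abs(psi)**2, axis=1)  # position distribution, 30 steps
    ```

    Unitarity of the squared operator shows up as conservation of total probability, and the distribution exhibits the characteristic ballistic spread of coined walks rather than a classical diffusive peak.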

  12. Reliability and validity of a smartphone-based assessment of gait parameters across walking speed and smartphone locations: Body, bag, belt, hand, and pocket.

    PubMed

    Silsupadol, Patima; Teja, Kunlanan; Lugade, Vipul

    2017-10-01

The assessment of spatiotemporal gait parameters is a useful clinical indicator of health status. Unfortunately, most assessment tools require controlled laboratory environments, which can be expensive and time consuming. As smartphones with embedded sensors are becoming ubiquitous, this technology can provide a cost-effective, easily deployable method for assessing gait. Therefore, the purpose of this study was to assess the reliability and validity of a smartphone-based accelerometer in quantifying spatiotemporal gait parameters when attached to the body or in a bag, belt, hand, and pocket. Thirty-four healthy adults were asked to walk at self-selected comfortable, slow, and fast speeds over a 10-m walkway while carrying a smartphone. Step length, step time, gait velocity, and cadence were computed from smartphone-based accelerometers and validated with GAITRite. Across all walking speeds, smartphone data had excellent reliability (ICC(2,1) ≥ 0.90) for the body and belt locations, with the bag, hand, and pocket locations having good to excellent reliability (ICC(2,1) ≥ 0.69). Correlations between the smartphone-based and GAITRite-based systems were very high for the body (r = 0.89, 0.98, 0.96, and 0.87 for step length, step time, gait velocity, and cadence, respectively). Similarly, Bland-Altman analysis demonstrated that the bias approached zero, particularly in the body, bag, and belt conditions under comfortable and fast speeds. Thus, smartphone-based assessments of gait are most valid when the phone is placed on the body, in a bag, or on a belt. The use of a smartphone to assess gait can provide relevant data to clinicians without encumbering the user and allow for data collection in the free-living environment. Copyright © 2017 Elsevier B.V. All rights reserved.
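    The ICC(2,1) reliability statistic used above can be computed from a subjects x devices matrix as follows. The step-length data here are simulated with hypothetical means and variances, not the study's measurements; the formula is the standard Shrout-Fleiss two-way random-effects, absolute-agreement, single-measurement ICC.

    ```python
    import numpy as np

    def icc_2_1(X):
        """ICC(2,1): two-way random effects, absolute agreement, single
        measurement. X has shape (subjects, raters/devices)."""
        n, k = X.shape
        grand = X.mean()
        row_m = X.mean(axis=1)
        col_m = X.mean(axis=0)
        ss_total = ((X - grand)**2).sum()
        ss_rows = k * ((row_m - grand)**2).sum()   # between subjects
        ss_cols = n * ((col_m - grand)**2).sum()   # between raters/devices
        ss_err = ss_total - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(3)
    subject = rng.normal(60.0, 5.0, size=(34, 1))  # hypothetical step lengths (cm)
    device = np.array([[0.0, 0.3]])                # small systematic device bias
    X = subject + device + rng.normal(0.0, 1.0, size=(34, 2))
    val = icc_2_1(X)
    ```

    With large between-subject variance relative to measurement error, the statistic lands near 1, the "excellent reliability" regime the abstract reports for the body and belt locations.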

  13. A transition from using multi‐step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

Abstract Introduction The Pitié-Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. Results The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561
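    The time-driven activity-based costing comparison can be reproduced arithmetically. Only the procedure times (270 vs. 120 min) and the per-session totals (€1,429.37 vs. €1,264.70) come from the report; the staff rate, the materials/personnel split, and the annual practical capacity below are hypothetical placeholders chosen to make the arithmetic concrete.

    ```python
    # Time-driven activity-based costing (TDABC) sketch for the ECP comparison.
    STAFF_RATE_PER_MIN = 0.90       # hypothetical personnel cost (EUR/min)

    def tdabc_session_cost(materials_eur, minutes):
        """Per-session cost = consumables + capacity cost rate x time used."""
        return materials_eur + STAFF_RATE_PER_MIN * minutes

    # hypothetical materials costs back-calculated to match the reported totals
    multi_step = tdabc_session_cost(materials_eur=1186.37, minutes=270)
    integrated = tdabc_session_cost(materials_eur=1156.70, minutes=120)

    # annual capacity of one machine/team given e.g. 1,600 h of practical time
    sessions_multi = (1600 * 60) // 270
    sessions_integrated = (1600 * 60) // 120
    ```

    Beyond the per-session saving, the capacity calculation makes the second conclusion explicit: the shorter procedure more than doubles the number of treatments a single team can deliver per year.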

  14. A correlation between extensional displacement and architecture of ionic polymer transducers

    NASA Astrophysics Data System (ADS)

    Akle, Barbar J.; Duncan, Andrew; Leo, Donald J.

    2008-03-01

Ionic polymer transducers (IPTs), sometimes referred to as artificial muscles, are known to generate a large bending strain and a moderate stress at low applied voltages (<5 V). Bending actuators have limited engineering applications due to their low forcing capabilities and the need for complicated external devices to convert the bending action into the rotating or linear motion desired in most devices. Recently, Akle and Leo reported extensional actuation in ionic polymer transducers. In this study, extensional IPTs are characterized as a function of transducer architecture: two actuators are built and their extensional displacement response is characterized. The transducers have similar electrodes, while the middle membrane is a Nafion/ionic-liquid membrane in the first and an aluminum oxide/ionic-liquid membrane in the second. The first transducer is characterized for constant-current input, voltage step input, and sweep voltage input. The model prediction is in agreement in both shape and magnitude for the constant-current experiment. The values of α and β used are within the range of values reported by Akle and Leo. Both experiments and model demonstrate that there is a preferred direction of applying the potential so that the transducer exhibits large deformations. In the step response, the model predicted well the negative potential and the early part of the step in the positive potential, but failed to predict the displacement after approximately 180 s had elapsed. The model predicted the sweep response well, and the observed first harmonic in the displacement further confirmed the existence of a quadratic term in the charge response. Finally, the aluminum oxide based transducer is characterized for a step response and compared to the Nafion-based transducer. The second actuator demonstrated an electromechanical extensional response faster than that of the Nafion-based transducer. 
The aluminum oxide based transducer is expected to provide larger forces and hence a larger energy density.

  15. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with the second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each time step are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-method-based schemes are presented. Efficiency gains as high as a factor of 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
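    The nonlinear solve that dominates each implicit step can be illustrated on a scalar stiff ODE: one BDF2 step whose implicit equation is solved by Newton's method. Newton on a scalar problem stands in for the paper's multigrid- and GMRES-accelerated solvers; the ODE and step size are illustrative.

    ```python
    import numpy as np

    lam = 50.0
    f = lambda t, y: -lam * (y - np.cos(t))   # stiff model problem
    fy = lambda t, y: -lam                    # df/dy, used by Newton

    def bdf2_step(t1, y0, ym1, h, newton_iters=8):
        """One BDF2 step: solve y1 = (4*y0 - ym1)/3 + (2h/3) f(t1, y1)
        for y1 by Newton's method."""
        rhs = (4.0 * y0 - ym1) / 3.0
        y1 = y0                               # initial Newton guess
        for _ in range(newton_iters):
            g = y1 - (2.0 * h / 3.0) * f(t1, y1) - rhs
            y1 -= g / (1.0 - (2.0 * h / 3.0) * fy(t1, y1))
        return y1

    h, T = 0.05, 5.0
    ts = np.arange(0.0, T + h / 2.0, h)
    y = np.empty_like(ts)
    y[0] = 1.0
    # bootstrap the two-step method with one backward-Euler step
    y[1] = (y[0] + h * lam * np.cos(ts[1])) / (1.0 + h * lam)
    for n in range(1, len(ts) - 1):
        y[n + 1] = bdf2_step(ts[n + 1], y[n], y[n - 1], h)
    ```

    Even with lam*h = 2.5, well outside an explicit scheme's stability region, the A-stable BDF2/Newton combination tracks the smooth forced solution; in the PDE setting the cost of each such implicit solve is exactly what the multigrid and Krylov machinery is there to reduce.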

  16. Multispectra CWT-based algorithm (MCWT) in mass spectra for peak extraction.

    PubMed

    Hsueh, Huey-Miin; Kuo, Hsun-Chih; Tsai, Chen-An

    2008-01-01

An important objective in mass spectrometry (MS) is to identify, from tens or hundreds of spectra, a set of biomarkers that can potentially distinguish patients between distinct treatments (or conditions). A common two-step approach involving peak extraction and quantification is employed to identify the features of scientific interest. The selected features are then used for further investigation to understand the underlying biological mechanism of individual proteins, or for development of genomic biomarkers for early diagnosis. However, the use of inadequate or ineffective peak detection and peak alignment algorithms in the peak extraction step may lead to a high rate of false positives, so it is crucial to reduce the false positive rate when detecting biomarkers from tens or hundreds of spectra. Here a new procedure is introduced for feature extraction in mass spectrometry data that extends the continuous wavelet transform-based (CWT-based) algorithm to multiple spectra. The proposed multispectra CWT-based algorithm (MCWT) not only performs peak detection for multiple spectra but also carries out peak alignment at the same time. The authors' MCWT algorithm constructs a reference, which integrates information from multiple raw spectra, for feature extraction. The algorithm is applied to a SELDI-TOF mass spectra data set provided by CAMDA 2006 with known polypeptide m/z positions. This new approach is easy to implement and outperforms the existing peak extraction method from the Bioconductor PROcess package.
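    A single-spectrum version of CWT peak picking can be sketched with a Mexican-hat kernel. This toy omits the cross-spectrum reference construction that is the paper's actual contribution, and runs on a synthetic two-peak spectrum.

    ```python
    import numpy as np

    def ricker(points, a):
        """Mexican-hat wavelet, a kernel commonly used in CWT peak picking."""
        t = np.arange(points) - (points - 1) / 2.0
        return (1.0 - (t / a)**2) * np.exp(-t**2 / (2.0 * a**2))

    def cwt_peaks(spectrum, widths, min_snr=3.0):
        """Toy single-spectrum CWT peak detector: correlate with wavelets of
        several widths, sum the responses, then keep strong local maxima."""
        resp = sum(np.convolve(spectrum, ricker(10 * w, w), mode="same")
                   for w in widths)
        noise = np.median(np.abs(resp))           # crude noise floor
        return [i for i in range(1, len(resp) - 1)
                if resp[i] > resp[i - 1] and resp[i] >= resp[i + 1]
                and resp[i] > min_snr * noise]

    # synthetic spectrum: two Gaussian "peptide" peaks plus baseline noise
    rng = np.random.default_rng(5)
    mz = np.arange(1000)
    spectrum = (np.exp(-(mz - 300)**2 / 50.0)
                + 0.7 * np.exp(-(mz - 620)**2 / 80.0)
                + 0.05 * rng.normal(size=mz.size))
    peaks = cwt_peaks(spectrum, widths=(2, 4, 8))
    ```

    Because the zero-mean wavelet suppresses baseline drift while matched widths amplify genuine peaks, both injected peaks survive the SNR threshold; MCWT applies the same idea to a reference built from many spectra so that detection and alignment happen together.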

  17. Quantifying Non-Equilibrium in Hypersonic Flows Using Entropy Generation

    DTIC Science & Technology

    2007-03-01

Fragmentary search excerpts only: ... 1) mirror fabrication, 2) mirror actuation, and 3) control algorithms, with a focus on the potential for future space-based applications. ... electrodes transported via a conducting electrolyte [19]. When placed under a voltage potential, cations in a polymer matrix immediately swell ... has the potential to create an over-damped surface, preventing Wavescope from detecting any strains. The final step in the mirror fabrication is to ...

  18. Report of 111 Consecutive Patients Enrolled in the International Serial Transverse Enteroplasty (STEP) Data Registry: A Retrospective Observational Study

    PubMed Central

    Jones, Brian A; Hull, Melissa A; Potanos, Kristina M; Zurakowski, David; Fitzgibbons, Shimae C; Ching, Y Avery; Duggan, Christopher; Jaksic, Tom; Kim, Heung Bae

    2016-01-01

    Background The International Serial Transverse Enteroplasty (STEP) Data Registry is a voluntary online database created in 2004 to collect information on patients undergoing the STEP procedure. The aim of this study was to identify preoperative factors significantly associated with 1) transplantation or death, or 2) attainment of enteral autonomy following STEP. Study Design Data were collected from September 2004 to January 2010. Univariate and multivariate logistic regression analyses were applied to determine predictors of transplantation/death or enteral autonomy post-STEP. Time to reach full enteral nutrition was estimated using a Kaplan-Meier curve. Results Fourteen of the 111 patients in the Registry were excluded due to inadequate follow-up. Of the remaining 97 patients, 11 patients died, and 5 progressed to intestinal transplantation. On multivariate analysis, higher direct bilirubin and shorter pre-STEP bowel length were independently predictive of progression to transplantation or death (p = .05 and p < .001, respectively). Of the 78 patients who were ≥7 days of age and required parenteral nutrition (PN) at the time of STEP, 37 (47%) achieved enteral autonomy after the first STEP. Longer pre-STEP bowel length was also independently associated with enteral autonomy (p = .002). The median time to reach enteral autonomy based on Kaplan-Meier analysis was 21 months (95% CI: 12-30). Conclusions Overall mortality post-STEP was 11%. Pre-STEP risk factors for progressing to transplantation or death were higher direct bilirubin and shorter bowel length. Among patients who underwent STEP for short bowel syndrome, 47% attained full enteral nutrition post-STEP. Patients with longer pre-STEP bowel length were significantly more likely to achieve enteral autonomy. PMID:23357726
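    The Kaplan-Meier estimate of time to enteral autonomy can be sketched with a small product-limit implementation. The times and censoring flags below are hypothetical, not Registry data.

    ```python
    import numpy as np

    def kaplan_meier(times, events):
        """Product-limit survival estimate; here 'survival' means remaining
        on parenteral nutrition, so 1 - S(t) is the fraction reaching
        enteral autonomy by time t. Assumes distinct event times."""
        order = np.argsort(times)
        times, events = np.asarray(times)[order], np.asarray(events)[order]
        at_risk = len(times)
        t_out, surv, s = [], [], 1.0
        for t, d in zip(times, events):
            if d:                       # event: reached enteral autonomy
                s *= (at_risk - 1) / at_risk
                t_out.append(t)
                surv.append(s)
            at_risk -= 1                # event or censoring leaves the risk set
        return t_out, surv

    # months to enteral autonomy (event=1) or last follow-up (event=0)
    times = [3, 5, 8, 9, 12, 14, 18, 21, 24, 27, 30, 33]
    events = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0]
    t_out, surv = kaplan_meier(times, events)
    ```

    The "median time to enteral autonomy" reported in the abstract corresponds to the first time the survival curve drops to 0.5 or below; censored patients contribute to the risk set until their last follow-up, which is why the product-limit form is needed rather than a simple proportion.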

  19. One-step fabrication of multifunctional micromotors.

    PubMed

    Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y

    2015-09-07

    Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications.

  20. Solar-based navigation for robotic explorers

    NASA Astrophysics Data System (ADS)

    Shillcutt, Kimberly Jo

    2000-12-01

    This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. 
Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency, productivity and lifetime of robotic explorers, along with new solar navigation abilities.
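    The ephemeris-driven shadow mapping described above rests on a standard spherical-astronomy step: converting latitude, solar declination, and local hour angle into the Sun's altitude and azimuth. A minimal sketch follows; the site and declination are hypothetical, and a real implementation would compute declination from the date via the ephemeris algorithm.

    ```python
    import math

    def solar_altitude_azimuth(lat_deg, decl_deg, hour_angle_deg):
        """Altitude and azimuth (degrees, azimuth clockwise from north) of
        the Sun given site latitude, solar declination, and hour angle."""
        lat = math.radians(lat_deg)
        dec = math.radians(decl_deg)
        ha = math.radians(hour_angle_deg)
        sin_alt = (math.sin(lat) * math.sin(dec)
                   + math.cos(lat) * math.cos(dec) * math.cos(ha))
        alt = math.asin(sin_alt)
        az = math.atan2(-math.sin(ha),
                        math.cos(lat) * math.tan(dec)
                        - math.sin(lat) * math.cos(ha))
        return math.degrees(alt), math.degrees(az) % 360.0

    # hypothetical Antarctic summer site: 80 deg S, declination -23 deg
    alt_noon, az_noon = solar_altitude_azimuth(-80.0, -23.0, 0.0)    # local noon
    alt_mid, az_mid = solar_altitude_azimuth(-80.0, -23.0, 180.0)    # midnight
    ```

    At this latitude in midsummer the altitude stays positive around the clock (the midnight sun), which is precisely the regime in which sun-synchronous coverage patterns for a solar-powered Antarctic rover become attractive.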

  1. Inflight Microbial Monitoring-An Alternative Method to Culture Based Detection Currently Used on International Space Station

    NASA Technical Reports Server (NTRS)

    Khodadad, Christina L.; Birmele, Michele N.; Roman, Monsi; Hummerick, Mary E.; Smith, David J.; Wheeler, Raymond M.

    2015-01-01

Previous research has shown that microorganisms and potential human pathogens have been detected on the International Space Station (ISS). The potential to introduce new microorganisms occurs with every exchange of crew or addition of equipment or supplies. Previous research has also shown that microorganisms introduced to the ISS are readily transferred between the crew and subsystems (e.g., ECLSS, the environmental control and life support system) and back. Current microbial characterization methods require enrichment of microorganisms and a 48-hour incubation time, which increases the microbial load while detecting a limited number of microorganisms. The culture-based method detects approximately 1-10% of the total organisms present and provides no identification; to identify and enumerate organisms, ISS samples must be returned to Earth for complete analysis. Therefore, a more expedient, low-cost, in-flight method of microbial detection, identification, and enumeration is warranted. The RAZOR EX, a ruggedized, commercial off-the-shelf, real-time PCR field instrument, was tested for its ability to detect microorganisms at low concentrations within one hour. Escherichia coli, Salmonella enterica Typhimurium, and Pseudomonas aeruginosa were detected at low levels using real-time DNA amplification. Total heterotrophic counts could also be detected using a 16S gene marker that can identify up to 98% of all bacteria. To reflect viable cells found in the samples, RNA was also detectable using a modified, single-step reverse transcription reaction.

  2. Revisiting the relevance of using a constant voltage step to improve electrochemical performances of Li-rich lamellar oxides

    NASA Astrophysics Data System (ADS)

    Pradon, A.; Caldes, M. T.; Petit, P.-E.; La Fontaine, C.; Elkaim, E.; Tessier, C.; Ouvrard, G.; Dumont, E.

    2018-03-01

    A Li-rich lamellar oxide was cycled at high potential, and the relevance of using a constant voltage step (CVS) at the end of the charge, as needed for industrial application, was investigated by electrochemical measurements, X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM). Electrochemical studies at 4.7 and 4.5 V with and without CVS showed that capacity and voltage fading occurred mostly when cells operated at high potential. After cycling, 3D-type defects involving transition metals trapped in the lithium layer were observed by HRTEM in the electrode bulk. These defects are responsible for the voltage fading. The XRD microstrain parameter was used to evaluate the defect rate in aged materials subjected to a CVS, showing more 3D-type defects when cycled at 4.7 V than at 4.5 V. The time spent at high potential at the end of the charge, as well as the value of the upper potential limit, are both relevant parameters for voltage decay. Using a CVS at the end of the charge therefore requires a reduced upper potential window in order to minimize the occurrence of 3D-type defects. Unfortunately, this approach is still not sufficient to prevent voltage fading.

  3. A two-step lyssavirus real-time polymerase chain reaction using degenerate primers with superior sensitivity to the fluorescent antigen test.

    PubMed

    Suin, Vanessa; Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael; Van Gucht, Steven

    2014-01-01

    A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤ 1 50% tissue culture infectious dose. The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.
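
    As an aside on the mechanics of degenerate primers: each degenerate position uses a standard IUPAC ambiguity code that stands for a set of bases, so a primer with such positions matches a whole family of concrete sequences. The sketch below is illustrative only (the function names and the example primers are not from the paper); it enumerates that family and computes the fraction of degenerate positions:

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer):
    """Enumerate all concrete sequences a degenerate primer can match."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

def degeneracy_fraction(primer):
    """Fraction of positions carrying a degenerate (non-ACGT) base."""
    p = primer.upper()
    return sum(1 for b in p if b not in "ACGT") / len(p)
```

    Each `N` multiplies the number of concrete sequences by four, which is why high degeneracy normally risks diluting sensitivity.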

  4. A Two-Step Lyssavirus Real-Time Polymerase Chain Reaction Using Degenerate Primers with Superior Sensitivity to the Fluorescent Antigen Test

    PubMed Central

    Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael

    2014-01-01

    A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 50% tissue culture infectious dose. The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible. PMID:24822188

  5. Pressure optimization of an EC-QCL based cavity ring-down spectroscopy instrument for exhaled NO detection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng; Han, Yanling; Li, Bincheng

    2018-02-01

    Nitric oxide (NO) in exhaled breath has gained increasing interest in recent years, mainly driven by the clinical need to monitor inflammatory status in respiratory disorders, such as asthma and other pulmonary conditions. Mid-infrared cavity ring-down spectroscopy (CRDS) using an external-cavity, widely tunable continuous-wave quantum cascade laser operating at 5.3 µm was employed for NO detection. The detection pressure was reduced in steps to improve the sensitivity, and the optimal pressure was determined to be 15 kPa based on fitting-residual analysis of the measured absorption spectra. A detection limit (1σ, one standard deviation) of 0.41 ppb was experimentally achieved for NO detection in human breath under the optimized conditions, with a total acquisition time of 60 s (2 s per data point). A diurnal measurement session was conducted for exhaled NO. The experimental results indicate that the mid-infrared CRDS technique has great potential for various applications in health diagnosis.
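
    The CRDS principle behind such measurements can be stated compactly: the absorption coefficient follows from the change in ring-down time, alpha = (1/c)(1/tau - 1/tau0), and dividing by an absorption cross section gives a number density. A minimal sketch of these textbook relations (the function names and example values are assumptions, not taken from the instrument described):

```python
C = 2.99792458e10  # speed of light, cm/s

def absorption_coefficient(tau, tau0):
    """CRDS absorption coefficient (cm^-1) from ring-down times (s).

    tau:  ring-down time with the absorber present
    tau0: empty-cavity ring-down time
    """
    return (1.0 / tau - 1.0 / tau0) / C

def number_density(alpha, sigma):
    """Absorber number density (cm^-3) given an absorption
    cross section sigma (cm^2) at the probed transition."""
    return alpha / sigma
```

    Longer ring-down times (higher cavity finesse) make the difference 1/tau - 1/tau0 measurable at ppb-level absorptions.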

  6. Unintended changes in cognition, mood, and behavior arising from cell-based interventions for neurological conditions: ethical challenges.

    PubMed

    Duggan, P S; Siegel, A W; Blass, D M; Bok, H; Coyle, J T; Faden, R; Finkel, J; Gearhart, J D; Greely, H T; Hillis, A; Hoke, A; Johnson, R; Johnston, M; Kahn, J; Kerr, D; King, P; Kurtzberg, J; Liao, S M; McDonald, J W; McKhann, G; Nelson, K B; Rao, M; Regenberg, A; Smith, K; Solter, D; Song, H; Sugarman, J; Traystman, R J; Vescovi, A; Yanofski, J; Young, W; Mathews, D J H

    2009-05-01

    The prospect of using cell-based interventions (CBIs) to treat neurological conditions raises several important ethical and policy questions. In this target article, we focus on issues related to the unique constellation of traits that characterize CBIs targeted at the central nervous system. In particular, there is at least a theoretical prospect that these cells will alter the recipients' cognition, mood, and behavior-brain functions that are central to our concept of the self. The potential for such changes, although perhaps remote, is cause for concern and careful ethical analysis. Both to enable better informed consent in the future and as an end in itself, we argue that early human trials of CBIs for neurological conditions must monitor subjects for changes in cognition, mood, and behavior; further, we recommend concrete steps for that monitoring. Such steps will help better characterize the potential risks and benefits of CBIs as they are tested and potentially used for treatment.

  7. Analysis of Time Filters in Multistep Methods

    NASA Astrophysics Data System (ADS)

    Hurl, Nicholas

    Geophysical flow simulations have evolved sophisticated implicit-explicit time-stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for CNLF with the Robert-Asselin-Williams (RAW) time filter. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than for CNLF, and the CNLF+RAW time step restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard time step condition.
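
    The Robert-Asselin filter itself has a compact standard form: after each leapfrog step, the middle time level is nudged by (nu/2)(u_{n+1} - 2 u_n + u_{n-1}), which damps the leapfrog computational mode. A minimal sketch on the scalar test equation du/dt = lambda*u (the parameter values and function name are illustrative, not from the thesis):

```python
def leapfrog_ra(lam, u0, dt, nsteps, nu=0.2):
    """Leapfrog integration of du/dt = lam*u with the Robert-Asselin
    time filter (filter parameter nu). Returns the final solution value."""
    u_prev = u0
    u_curr = u0 * (1.0 + lam * dt)        # one forward-Euler step to start
    for _ in range(nsteps - 1):
        u_next = u_prev + 2.0 * dt * lam * u_curr   # leapfrog step
        # RA filter: nudge the middle time level toward the average of
        # its neighbours, damping the spurious computational mode
        u_curr = u_curr + 0.5 * nu * (u_next - 2.0 * u_curr + u_prev)
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

    The filter term is a discrete second difference in time, so it acts most strongly on the oscillatory (computational) mode while only weakly damping the physical solution.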

  8. The Development of a Real Time Surface Water Flow Model to Protect Public Water Intakes in West Virginia

    NASA Astrophysics Data System (ADS)

    Zegre, N.; Strager, M.

    2015-12-01

    In January of 2014, West Virginia experienced a chemical spill upstream of a public water intake on the Elk River near Charleston, West Virginia, that made the water unusable for 300,000 people for weeks. In response to this disaster, state officials enacted legislation to protect public water intakes by requiring the delineation of zones of critical concern that extend a five-hour travel time above the intakes. Each zone is defined by the travel time and buffered along the river mainstem and tributaries to identify future potential threats to the water supply. While this approach helps to identify potential problems before they occur, the need remained to respond to a spill with information on the actual travel time of a spill to an intake, given the stream flow at the time of the spill. This study developed a real-time surface flow model to protect public water intakes using both regional and seasonal variables. Bayesian statistical inference enabled confidence levels to be placed on flow estimates and was used to show the probability at each time step as water approached a public water intake. The flow model has been incorporated into both a smartphone app and a web-based tool for better emergency response and management of water resources throughout the state.

  9. Hail frequency estimation across Europe based on a combination of overshooting top detections and the ERA-INTERIM reanalysis

    NASA Astrophysics Data System (ADS)

    Punge, H. J.; Bedka, K. M.; Kunz, M.; Reinbold, A.

    2017-12-01

    This article presents a hail frequency estimation based on the detection of cold overshooting cloud tops (OTs) from the Meteosat Second Generation (MSG) operational weather satellites, in combination with a hail-specific filter derived from the ERA-INTERIM reanalysis. This filter has been designed based on the atmospheric properties in the vicinity of hail reports registered in the European Severe Weather Database (ESWD). These include Convective Available Potential Energy (CAPE), 0-6-km bulk wind shear and freezing level height, evaluated at the nearest time step and interpolated from the reanalysis grid to the location of the hail report. Regions highly exposed to hail events include Northern Italy, followed by South-Eastern Austria and Eastern Spain. Pronounced hail frequency is also found in large parts of Eastern Europe, around the Alps, the Czech Republic, Southern Germany, Southern and Eastern France, and in the Iberian and Apennine mountain ranges.
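
    A filter of this kind reduces to a conjunction of environmental checks applied to the reanalysis fields at the nearest time step. The sketch below shows the structure only; the numeric thresholds are illustrative placeholders, not the calibrated values from the study:

```python
def hail_filter(cape, shear_0_6km, freezing_level):
    """Hail-conducive environment check in the spirit of the
    ERA-INTERIM-based filter. All thresholds below are illustrative
    placeholders, not values from the study."""
    return (cape >= 500.0                          # J/kg
            and shear_0_6km >= 10.0                # m/s
            and 2000.0 <= freezing_level <= 4500.0)  # m
```

    An OT detection would then be counted toward the hail climatology only where this environmental filter evaluates to true.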

  10. Towards a Graphene-Based Low Intensity Photon Counting Photodetector

    PubMed Central

    Williams, Jamie O. D.; Alexander-Webber, Jack A.; Lapington, Jon S.; Roy, Mervyn; Hutchinson, Ian B.; Sagade, Abhay A.; Martin, Marie-Blandine; Braeuninger-Weimer, Philipp; Cabrero-Vilatela, Andrea; Wang, Ruizhi; De Luca, Andrea; Udrea, Florin; Hofmann, Stephan

    2016-01-01

    Graphene is a highly promising material in the development of new photodetector technologies, in particular due to its tunable optoelectronic properties, high mobilities and fast relaxation times, coupled with its atomic thinness and other unique electrical, thermal and mechanical properties. Optoelectronic applications and graphene-based photodetector technology are still in their infancy, but with a range of device integration and manufacturing approaches emerging, this field is progressing quickly. In this review we explore the potential of graphene in the context of existing single photon counting technologies by comparing their performance to simulations of graphene-based single photon counting and low photon intensity photodetection technologies operating in the visible, terahertz and X-ray energy regimes. We highlight the theoretical predictions and current graphene manufacturing processes for these detectors. We show initial experimental implementations and discuss the key challenges and next steps in the development of these technologies. PMID:27563903

  11. Special Education Eligibility: A Step-by-Step Guide for Educators

    ERIC Educational Resources Information Center

    Pierangelo, Roger; Giuliani, George A.

    2007-01-01

    Understanding the criteria and process for determining a student's eligibility for special education services is critical for any educator in an inclusive school environment. Based on the current reauthorization of IDEA 2004, this timely resource offers teachers and administrators an overview of each eligible disability and clear, specific…

  12. The reliability and preliminary validity of game-based fall risk assessment in community-dwelling older adults.

    PubMed

    Yamada, Minoru; Aoyama, Tomoki; Nakamura, Masatoshi; Tanaka, Buichi; Nagai, Koutatsu; Tatematsu, Noriatsu; Uemura, Kazuki; Nakamura, Takashi; Tsuboyama, Tadao; Ichihashi, Noriaki

    2011-01-01

    The purpose of this study was to examine whether the Nintendo Wii Fit program could be used for fall risk assessment in healthy, community-dwelling older adults. Forty-five community-dwelling older women participated in this study. The "Basic Step" and "Ski Slalom" modules were selected from the Wii Fit game program. The following 5 physical performance tests were performed: the 10-m walk test under single- and dual-task conditions, the Timed Up and Go test under single- and dual-task conditions, and the Functional Reach test. Compared with the faller group, the nonfaller group showed a significant difference in the Basic Step (P < .001) and a nonsignificant difference in the Ski Slalom (P = .453). The discriminating criterion between the 2 groups was a score of 111 points on the Basic Step (P < .001). The Basic Step showed statistically significant, moderate correlations with the dual-task lag of walking (r = -.547) and the dual-task lag of the Timed Up and Go test (r = -.688). These results suggest that game-based fall risk assessment using the Basic Step is broadly applicable and useful in community-dwelling older adults. Copyright © 2011 Mosby, Inc. All rights reserved.

  13. Design and Implementation of Foot-Mounted Inertial Sensor Based Wearable Electronic Device for Game Play Application.

    PubMed

    Zhou, Qifan; Zhang, Hai; Lari, Zahra; Liu, Zhenbo; El-Sheimy, Naser

    2016-10-21

    Wearable electronic devices have experienced increasing development with the advances in the semiconductor industry and have received more attention during the last decades. This paper presents the development and implementation of a novel inertial sensor-based foot-mounted wearable electronic device for a brand new application: game playing. The main objective of the introduced system is to monitor and identify the human foot stepping direction in real time, and coordinate these motions to control the player operation in games. This proposed system extends the application field of currently available wearable devices and introduces a convenient and portable medium to perform exercise in a more compelling way in the near future. This paper provides an overview of the previously-developed system platforms, introduces the main idea behind this novel application, and describes the implemented human foot moving direction identification algorithm. Practical experiment results demonstrate that the proposed system is capable of recognizing five foot motions, jump, step left, step right, step forward, and step backward, and achieved over 97% accuracy for different users. The functionality of the system for real-time application has also been verified through practical experiments.

  14. Design and Implementation of Foot-Mounted Inertial Sensor Based Wearable Electronic Device for Game Play Application

    PubMed Central

    Zhou, Qifan; Zhang, Hai; Lari, Zahra; Liu, Zhenbo; El-Sheimy, Naser

    2016-01-01

    Wearable electronic devices have experienced increasing development with the advances in the semiconductor industry and have received more attention during the last decades. This paper presents the development and implementation of a novel inertial sensor-based foot-mounted wearable electronic device for a brand new application: game playing. The main objective of the introduced system is to monitor and identify the human foot stepping direction in real time, and coordinate these motions to control the player operation in games. This proposed system extends the application field of currently available wearable devices and introduces a convenient and portable medium to perform exercise in a more compelling way in the near future. This paper provides an overview of the previously-developed system platforms, introduces the main idea behind this novel application, and describes the implemented human foot moving direction identification algorithm. Practical experiment results demonstrate that the proposed system is capable of recognizing five foot motions, jump, step left, step right, step forward, and step backward, and achieved over 97% accuracy for different users. The functionality of the system for real-time application has also been verified through practical experiments. PMID:27775673

  15. Real-time color/shape-based traffic signs acquisition and recognition system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2013-02-01

    A real-time system is proposed to acquire traffic signs from an automotive fish-eye CMOS camera and provide their automatic recognition on the vehicle network. Unlike the state of the art, this work addresses color detection in the HSI color space, which is robust to lighting changes. Hence the first stage of the processing system implements fish-eye correction and RGB-to-HSI transformation. After color-based detection, a noise-deletion step is applied; then, for classification, a template-based correlation method is adopted to identify potential traffic signs of different shapes in the acquired images. Starting from a segmented image, matching with templates of the searched signs is carried out using a distance transform. The templates are organized hierarchically to reduce the number of operations, easing real-time processing for several types of traffic signs. Finally, for recognition of the specific traffic sign, a technique based on extraction of sign characteristics and thresholding is adopted. Implemented on a DSP platform, the system recognizes traffic signs in less than 150 ms at a distance of about 15 m from 640x480-pixel acquired images. Tests carried out with hundreds of images show a detection and recognition rate of about 93%.
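
    The RGB-to-HSI transformation in the first stage follows a standard geometric conversion. A self-contained sketch of that textbook formula (not the paper's DSP implementation):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalised RGB in [0, 1] to HSI.

    Hue is returned in radians [0, 2*pi); saturation and intensity
    in [0, 1]. HSI separates chromatic content from intensity, which
    is why it is more robust to lighting changes than raw RGB thresholds.
    """
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0           # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        h = 0.0                        # grey: hue undefined
    else:
        h = math.acos(max(-1.0, min(1.0, num / den)))
        if b > g:
            h = 2.0 * math.pi - h
    return h, s, i
```

    A sign detector would then threshold hue and saturation (e.g. for red rims or blue panels) largely independently of scene brightness.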

  16. An empirical method to cluster objective nebulizer adherence data among adults with cystic fibrosis.

    PubMed

    Hoo, Zhe H; Campbell, Michael J; Curley, Rachael; Wildman, Martin J

    2017-01-01

    The purpose of using preventative inhaled treatments in cystic fibrosis is to improve health outcomes. Therefore, understanding the relationship between adherence to treatment and health outcome is crucial. Temporal variability, as well as the absolute magnitude of adherence, affects health outcomes, and there is likely to be a threshold effect in the relationship between adherence and outcomes. We therefore propose a pragmatic algorithm-based clustering method for objective nebulizer adherence data to better understand this relationship and, potentially, to guide clinical decisions. This clustering method consists of three related steps. The first step is to split adherence data for the previous 12 months into four 3-monthly sections. The second step is to calculate the mean adherence for each section and to score the section based on mean adherence. The third step is to aggregate the individual scores to determine the final cluster ("cluster 1" = very low adherence; "cluster 2" = low adherence; "cluster 3" = moderate adherence; "cluster 4" = high adherence), taking into account the adherence trend as represented by the sequential individual scores. The individual scores should be displayed along with the final cluster for clinicians to fully understand the adherence data. We present three cases to illustrate the use of the proposed clustering method. This pragmatic clustering method can deal with adherence data of variable duration (i.e., it can be used even if 12 months' worth of data are unavailable) and can cluster adherence data in real time. Empirical support for some of the clustering parameters is not yet available, but the suggested classifications provide a structure to investigate these parameters in future prospective datasets with accurate measurements of nebulizer adherence and health outcomes.
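
    The three steps can be sketched in code. Everything numeric below is an assumption: the abstract notes that empirical support for some clustering parameters is still pending, so the section-score cut-offs and the simple rounded-mean aggregation here are placeholders for the paper's trend-aware rules:

```python
def section_score(mean_adherence):
    """Score one 3-month section by mean adherence (%).
    The cut-offs are illustrative placeholders, not the
    thresholds from the published algorithm."""
    if mean_adherence >= 80:
        return 4   # high
    if mean_adherence >= 50:
        return 3   # moderate
    if mean_adherence >= 20:
        return 2   # low
    return 1       # very low

def cluster_adherence(daily_pct):
    """Step 1: split up to 12 months of daily adherence into four
    3-monthly sections. Step 2: score each section by its mean.
    Step 3: aggregate the scores (rounded mean here, as a simple
    stand-in for the paper's trend-aware aggregation)."""
    n = len(daily_pct)
    quarter = max(1, n // 4)
    sections = [daily_pct[i:i + quarter] for i in range(0, n, quarter)][:4]
    scores = [section_score(sum(s) / len(s)) for s in sections]
    final = round(sum(scores) / len(scores))
    return scores, final
```

    Returning the per-section scores alongside the final cluster mirrors the recommendation that clinicians see both.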

  17. The comparison of stepping responses following perturbations applied to pelvis during overground and treadmill walking.

    PubMed

    Zadravec, Matjaž; Olenšek, Andrej; Matjačić, Zlatko

    2017-08-09

    Treadmills are used frequently in rehabilitation, enabling neurologically impaired subjects to train walking while being assisted by therapists. Numerous studies have compared walking on a treadmill and overground under unperturbed, but not perturbed, conditions. The objective of this study was to compare stepping responses (step length, step width and step time) during overground and treadmill walking in a group of healthy subjects, where balance assessment robots applied perturbing pushes to the subject's pelvis in the sagittal and frontal planes. During walking in both balance assessment robots (overground and treadmill-based) with applied perturbations, the stepping responses of a group of seven healthy subjects were assessed with a motion tracking camera. The results show a high degree of similarity of stepping responses between overground and treadmill walking for all perturbation directions. Both devices reproduced similar experimental conditions, with relatively small standard deviations in unperturbed as well as perturbed walking. Based on these results we may conclude that stepping responses following perturbations can be studied on an instrumented treadmill, where ground reaction forces can be readily assessed, which is not the case during perturbed overground walking.

  18. Faster and exact implementation of the continuous cellular automaton for anisotropic etching simulations

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-02-01

    The current success of the continuous cellular automaton for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping (CTS) implementation, whose accuracy relative to the exact algorithm, a computationally slow, variable time stepping (VTS) implementation, has not previously been analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts, justifying a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and on a self-balanced binary search tree that stores the PRT values and gives efficient access to them in each time step in order to quickly remove the corresponding surface atom(s). The proposed PRT method reduces the simulation cost of the exact implementation from O(N^(5/3)) to O(N^(3/2) log N) without introducing any model simplifications. This enables more precise simulations (limited only by numerical precision errors) with affordable computational times that are similar to those of the less precise CTS implementation, and even faster for low-reactivity systems.
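
    The key data-structure idea, keeping each surface atom's predicted removal time in an ordered structure so that the next removal is an O(log N) operation, can be sketched as follows. The paper uses a self-balanced binary search tree; this illustration substitutes a binary heap with lazy invalidation, a common alternative that supports the same schedule/extract-min pattern (class and method names are assumptions):

```python
import heapq

class PRTQueue:
    """Predicted-removal-time queue with lazy invalidation.

    schedule() pushes a new (or updated) PRT for an atom; stale heap
    entries are recognised by a version counter and skipped on pop,
    so rescheduling after a neighbour's removal stays O(log N).
    """
    def __init__(self):
        self.heap = []       # entries: (prt, atom_id, version)
        self.version = {}    # atom_id -> latest scheduled version

    def schedule(self, atom_id, prt):
        v = self.version.get(atom_id, 0) + 1
        self.version[atom_id] = v
        heapq.heappush(self.heap, (prt, atom_id, v))

    def pop_next(self):
        """Return (prt, atom_id) of the next valid removal, or None."""
        while self.heap:
            prt, atom_id, v = heapq.heappop(self.heap)
            if self.version.get(atom_id) == v:   # entry still current
                del self.version[atom_id]
                return prt, atom_id
        return None
```

    In the etching loop, each pop advances the simulation clock to the popped PRT, removes that atom, and reschedules the affected neighbours with updated rates.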

  19. Hyperpolarization-activated current (I(h)) in vestibular calyx terminals: characterization and role in shaping postsynaptic events.

    PubMed

    Meredith, Frances L; Benke, Tim A; Rennie, Katherine J

    2012-12-01

    Calyx afferent terminals engulf the basolateral region of type I vestibular hair cells, and synaptic transmission across the vestibular type I hair cell/calyx is not well understood. Calyces express several ionic conductances, which may shape postsynaptic potentials. These include previously described tetrodotoxin-sensitive inward Na(+) currents, voltage-dependent outward K(+) currents and a K(Ca) current. Here, we characterize an inwardly rectifying conductance in gerbil semicircular canal calyx terminals (postnatal days 3-45), sensitive to voltage and to cyclic nucleotides. Using whole-cell patch clamp, we recorded from isolated calyx terminals still attached to their type I hair cells. A slowly activating, noninactivating current (I(h)) was seen with hyperpolarizing voltage steps negative to the resting potential. External Cs(+) (1-5 mM) and ZD7288 (100 μM) blocked the inward current by 97 and 83 %, respectively, confirming that I(h) was carried by hyperpolarization-activated, cyclic nucleotide gated channels. Mean half-activation voltage of I(h) was -123 mV, which shifted to -114 mV in the presence of cAMP. Activation of I(h) was well described with a third order exponential fit to the current (mean time constant of activation, τ, was 190 ms at -139 mV). Activation speeded up significantly (τ=136 and 127 ms, respectively) when intracellular cAMP and cGMP were present, suggesting that in vivo I(h) could be subject to efferent modulation via cyclic nucleotide-dependent mechanisms. In current clamp, hyperpolarizing current steps produced a time-dependent depolarizing sag followed by either a rebound afterdepolarization or an action potential. Spontaneous excitatory postsynaptic potentials (EPSPs) became larger and wider when I(h) was blocked with ZD7288. In a three-dimensional mathematical model of the calyx terminal based on Hodgkin-Huxley type ionic conductances, removal of I(h) similarly increased the EPSP, whereas cAMP slightly decreased simulated EPSP size and width.

  20. Semi-autonomous remote sensing time series generation tool

    NASA Astrophysics Data System (ADS)

    Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher

    2017-10-01

    High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Owing to satellite architecture constraints and frequent cloud cover, the availability of daily data at high spatial resolution is still far from reality. Remote sensing time series generation of high spatial and temporal resolution data by data fusion is a practical alternative. However, it is not an easy process, since it involves multiple steps and requires multiple tools. In this paper, a framework for a Geographic Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionality. Then two main frameworks are created: one to perform all the pre-processing steps on the various satellite data, and the other to perform the data fusion that generates the time series. The two frameworks can be used individually for specific tasks or combined to perform both processes in one go. The tool can handle most of the geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a clean interface and provides many functions to enable further development of remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.

  1. Computerized ionospheric tomography based on geosynchronous SAR

    NASA Astrophysics Data System (ADS)

    Hu, Cheng; Tian, Ye; Dong, Xichao; Wang, Rui; Long, Teng

    2017-02-01

    Computerized ionospheric tomography (CIT) based on spaceborne synthetic aperture radar (SAR) is an emerging technique for constructing three-dimensional (3-D) images of the ionosphere. Current studies are all based on Low Earth Orbit synthetic aperture radar (LEO SAR), which is limited by a long repeat period and small coverage. In this paper, a novel ionospheric 3-D CIT technique based on geosynchronous SAR (GEO SAR) is put forward. First, several influences of the complex atmospheric environment on GEO SAR focusing are analyzed in detail, including background ionosphere and multiple scattering effects (induced by the turbulent ionosphere), tropospheric effects, and random noise. Then the corresponding GEO SAR signal model is constructed with consideration of the temporally varying background ionosphere within the long GEO SAR integration time (typically at the 100 s to 1000 s level). Concurrently, an accurate total electron content (TEC) retrieval method based on GEO SAR data is put forward through subband division in range and subaperture division in azimuth, obtaining TEC values that vary with azimuth time. The processing steps of GEO SAR CIT are given and discussed. Owing to the short repeat period and large coverage area, GEO SAR CIT has the potential to cover a specific region continuously and completely, and consequently offers excellent real-time performance. Finally, TEC retrieval and GEO SAR CIT construction are performed in a numerical study based on meteorological data. The feasibility and correctness of the proposed methods are verified.

  2. SU-F-T-99: Data Visualization From a Treatment Planning Tracking System for Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cline, K; Kabat, C; Li, Y

    2016-06-15

    Purpose: A treatment planning process tracker database with input forms and a TV-viewable display webpage was developed and implemented in our clinic to collect time data points throughout the process. Tracking plan times is important because it directly affects the quality of patient care: simply put, the longer a patient waits after their initial simulation CT for treatment to begin, the more time the cancer has to progress. The tracker helps to drive workflow through the clinic, while the data collected can be used to understand and manage the process to find and eliminate inefficiencies. Methods: The overall process steps tracked are CT-simulation, mark patient, draw normal contours, draw target volumes, create plan, and review/approve plan. Time stamps for task completion were extracted and used to generate a set of clinic metrics, among which are the average time for each step in the process split apart by type of treatment, the average time to completion for plans started in a given week, and the individual overall completion time per plan. Results: Trends have been tracked for fourteen weeks of clinical data (196 plans). On average, drawing normal contours and target volumes takes 2-5 times as long as creating the plan itself. This is potentially an issue because it could mean the process is taking too long initially, forcing the planning step to be done in a short amount of time. We also saw from our graphs that there appears to be no clear trend in the average amount of time per plan week-to-week. Conclusion: A tracker of this type has the potential to provide insight into how time is utilized in our clinic. By equipping our dosimetrists, radiation oncologists, and physicists with individualized metric sets, the tracker can help provide visibility and drive workflow. Funded in part by CPRIT (RP140105).
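
    The per-step metrics described here reduce to differencing completion time stamps between consecutive steps. A minimal sketch, with the step names and ISO timestamp format as assumptions rather than the tracker's actual schema:

```python
from datetime import datetime

# Illustrative step names in process order (not the tracker's schema)
STEPS = ["ct_sim", "mark", "normal_contours", "target_volumes",
         "create_plan", "review_approve"]

def step_durations(timestamps):
    """Hours spent in each step, from completion time stamps.

    timestamps: dict mapping step name -> ISO datetime string of when
    that step was completed, e.g. {"ct_sim": "2016-01-04T09:00", ...}.
    """
    times = [datetime.fromisoformat(timestamps[s]) for s in STEPS]
    return {STEPS[i + 1]: (times[i + 1] - times[i]).total_seconds() / 3600.0
            for i in range(len(times) - 1)}
```

    Averaging these per-plan dictionaries over a week, grouped by treatment type, yields the clinic metrics the record describes.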

  3. Monitoring gait in multiple sclerosis with novel wearable motion sensors

    PubMed Central

    McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.

    2017-01-01

Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing it to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9 ms error for stride time and 6±7 ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy±precision (25±19 ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). 
Conclusion BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288

  4. Effect of a perturbation-based balance training program on compensatory stepping and grasping reactions in older adults: a randomized controlled trial.

    PubMed

    Mansfield, Avril; Peters, Amy L; Liu, Barbara A; Maki, Brian E

    2010-04-01

Compensatory stepping and grasping reactions are prevalent responses to sudden loss of balance and play a critical role in preventing falls. The ability to execute these reactions effectively is impaired in older adults. The purpose of this study was to evaluate a perturbation-based balance training program designed to target specific age-related impairments in compensatory stepping and grasping balance recovery reactions. This was a double-blind randomized controlled trial. The study was conducted at research laboratories in a large urban hospital. Thirty community-dwelling older adults (aged 64-80 years) with a recent history of falls or self-reported instability participated in the study. Participants were randomly assigned to receive either a 6-week perturbation-based (motion platform) balance training program or a 6-week control program involving flexibility and relaxation training. Features of balance reactions targeted by the perturbation-based program were: (1) multi-step reactions, (2) extra lateral steps following anteroposterior perturbations, (3) foot collisions following lateral perturbations, and (4) time to complete grasping reactions. The reactions were evoked during testing by highly unpredictable surface translation and cable pull perturbations, both of which differed from the perturbations used during training. Compared with the control program, the perturbation-based training led to greater reductions in frequency of multi-step reactions and foot collisions that were statistically significant for surface translations but not cable pulls. The perturbation group also showed significantly greater reduction in handrail contact time compared with the control group for cable pulls and a possible trend in this direction for surface translations. Further work is needed to determine whether a maintenance program is needed to retain the training benefits and to assess whether these benefits reduce fall risk in daily life. 
Perturbation-based training shows promise as an effective intervention to improve the ability of older adults to prevent themselves from falling when they lose their balance.

  5. Low NO(x) potential of gas turbine engines

    NASA Technical Reports Server (NTRS)

    Tacina, Robert R.

    1990-01-01

    The purpose is to correlate emission levels of gas turbine engines. The predictions of NO(x) emissions are based on a review of the literature of previous low NO(x) combustor programs and analytical chemical kinetic calculations. Concepts included in the literature review consisted of lean-premixed-prevaporized (LPP), rich burn/quick quench/lean burn (RQL), and direct injection. The NO(x) emissions were found to be an exponential function of adiabatic combustion temperature over a wide range of inlet temperatures, pressures and (lean) fuel-air ratios. A simple correlation of NO(x) formation with time was not found. The LPP and direct injection (using gaseous fuels) concepts have the lowest NO(x) emissions of the three concepts. The RQL data has higher values of NO(x) than the LPP concept, probably due to the stoichiometric temperatures and NO(x) production that occur during the quench step. Improvements in the quick quench step could reduce the NO(x) emissions to the LPP levels. The low NO(x) potential of LPP is offset by the operational disadvantages of its narrow stability limits and its susceptibility to autoignition/flashback. The Rich-Burn/Quick-Quench/Lean-Burn (RQL) and the direct injection concepts have the advantage of wider stability limits comparable to conventional combustors.

  6. Theoretical Insights Into the Excited State Double Proton Transfer Mechanism of Deep Red Pigment Alkannin.

    PubMed

    Zhao, Jinfeng; Dong, Hao; Zheng, Yujun

    2018-02-08

    As the most important component of deep red pigments, alkannin is investigated theoretically in detail based on the time-dependent density functional theory (TDDFT) method. Exploring the dual intramolecular hydrogen bonds (O1-H2···O3 and O4-H5···O6) of alkannin, we confirm that the O1-H2···O3 bond may play a more important role in the first excited state than the O4-H5···O6 one. Infrared (IR) vibrational analyses and subsequent charge redistribution also support this viewpoint. By constructing the S1-state potential energy surface (PES) and searching for transition-state (TS) structures, we show that the excited-state double proton transfer (ESDPT) mechanism of alkannin is a stepwise process that is first launched along the O1-H2···O3 hydrogen-bond wire in the gas phase and in acetonitrile (CH3CN) and cyclohexane (CYH) solvents. We present a novel mechanism in which polar aprotic solvents contribute to the first-step proton transfer (PT) process in the S1 state, and nonpolar solvents play important roles in lowering the potential energy barrier of the second-step PT reaction.

  7. Recognising and referring children exposed to domestic abuse: a multi-professional, proactive systems-based evaluation using a modified Failure Mode and Effects Analysis (FMEA).

    PubMed

    Ashley, Laura; Armitage, Gerry; Taylor, Julie

    2017-03-01

    Failure Modes and Effects Analysis (FMEA) is a prospective quality assurance methodology increasingly used in healthcare, which identifies potential vulnerabilities in complex, high-risk processes and generates remedial actions. We aimed, for the first time, to apply FMEA in a social care context to evaluate the process for recognising and referring children exposed to domestic abuse within one Midlands city safeguarding area in England. A multidisciplinary, multi-agency team of 10 front-line professionals undertook the FMEA, using a modified methodology, over seven group meetings. The FMEA included mapping out the process under evaluation to identify its component steps, identifying failure modes (potential errors) and possible causes for each step and generating corrective actions. In this article, we report the output from the FMEA, including illustrative examples of the failure modes and corrective actions generated. We also present an analysis of feedback from the FMEA team and provide future recommendations for the use of FMEA in appraising social care processes and practice. Although challenging, the FMEA was unequivocally valuable for team members and generated a significant number of corrective actions locally for the safeguarding board to consider in its response to children exposed to domestic abuse. © 2016 John Wiley & Sons Ltd.

  8. Solvent and viscosity effects on the rate-limiting product release step of glucoamylase during maltose hydrolysis.

    PubMed

    Sierks, M R; Sico, C; Zaw, M

    1997-01-01

    Release of product from the active site is the rate-limiting step in a number of enzymatic reactions, including maltose hydrolysis by glucoamylase (GA). With GA, an enzymatic conformational change has been associated with the product release step. Solvent characteristics such as viscosity can strongly influence protein conformational changes. Here we show that the rate-limiting step of GA has a rather complex dependence on solvent characteristics. Seven different cosolvents were added to the GA/maltose reaction solution. Five of the cosolvents, all having an ethylene glycol base, resulted in an increase in activity at low concentration of cosolvent and variable decreases in activity at higher concentrations. The increase in enzyme activity was dependent on polymer length of the cosolvent; the longer the polymer, the lower the concentration needed. The maximum increase in catalytic activity at 45 degrees C (40-45%) was obtained with the three longest polymers (degree of polymerization from 200 to 8000). A further increase in activity to 60-65% was obtained at 60 degrees C. The linear relationship between ln(kcat) and (viscosity)² obtained with all the cosolvents provides further evidence that product release is the rate-limiting step in the GA catalytic mechanism. A substantial increase in the turnover rate of GA by addition of relatively small amounts of a cosolvent has potential applications for the food industry, where high-fructose corn syrup (HFCS) is one of the primary products produced with GA. Since maltodextrin hydrolysis by GA is by far the slowest step in the production of HFCS, increasing the catalytic rate of GA can substantially reduce the process time.
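    The linear dependence of ln(kcat) on viscosity squared that the abstract reports is an ordinary least-squares relation; the sketch below fits such a line to made-up illustrative values (not the study's measurements) and checks linearity via R²:

    ```python
    # Hypothetical (synthetic) data illustrating the reported linear relation
    # ln(kcat) = a + b * viscosity^2; values are illustrative, not from the paper.
    viscosity = [1.0, 1.2, 1.5, 1.8, 2.0]      # relative viscosity
    ln_kcat = [4.00, 3.89, 3.56, 3.28, 3.00]   # ln of turnover rate

    x = [v ** 2 for v in viscosity]            # regress against viscosity squared
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(ln_kcat) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, ln_kcat)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x

    # Coefficient of determination as a linearity check
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, ln_kcat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in ln_kcat)
    r2 = 1 - ss_res / ss_tot
    print(round(b, 3), round(r2, 3))
    ```

    A negative slope b with R² close to 1 is what the paper's "linear relationship between ln(kcat) and (viscosity)²" claim amounts to for a given cosolvent.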

  9. Biological-based and physical-based optimization for biological evaluation of prostate patient's plans

    NASA Astrophysics Data System (ADS)

    Sukhikh, E.; Sheino, I.; Vertinsky, A.

    2017-09-01

    Modern modalities of radiation treatment therapy allow irradiation of the tumor to high dose values and irradiation of organs at risk (OARs) to low dose values at the same time. In this paper we study optimal radiation treatment plans made in the Monaco system. The first aim of this study was to evaluate dosimetric features of the Monaco treatment planning system using biological versus dose-based cost functions for the OARs and irradiation targets (namely tumors) when the full potential of built-in biological cost functions is utilized. The second aim was to develop criteria for the evaluation of radiation dosimetry plans for patients based on the macroscopic radiobiological criteria TCP/NTCP (tumor control probability / normal tissue complication probability). In the framework of the study four dosimetric plans were created utilizing the full extent of biological and physical cost functions using dose calculation-based treatment planning for IMRT Step-and-Shoot delivery of stereotactic body radiation therapy (SBRT) in a prostate case (5 fractions of 7 Gy).

  10. Comprehensive Multiplex One-Step Real-Time TaqMan qRT-PCR Assays for Detection and Quantification of Hemorrhagic Fever Viruses

    PubMed Central

    Li, Jiandong; Qu, Jing; He, Chengcheng; Zhang, Shuo; Li, Chuan; Zhang, Quanfu; Liang, Mifang; Li, Dexin

    2014-01-01

    Background Viral hemorrhagic fevers (VHFs) are a group of animal and human illnesses that are mostly caused by several distinct families of viruses including bunyaviruses, flaviviruses, filoviruses and arenaviruses. Although specific signs and symptoms vary by the type of VHF, initial signs and symptoms are very similar. Therefore, rapid immunologic and molecular tools for differential diagnosis of hemorrhagic fever viruses (HFVs) are important for effective case management and control of the spread of VHFs. The real-time quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR) assay is one of the reliable and desirable methods for specific detection and quantification of virus load. Multiplex PCR assays have the potential to produce considerable savings in time and resources in laboratory detection. Results Primers/probe sets were designed based on appropriate specific genes for each of 28 HFVs, which nearly covered all the HFVs, and were validated with good specificity and sensitivity using monoplex assays. Seven groups of multiplex one-step real-time qRT-PCR assays in a universal experimental system were then developed by combining all primers/probe sets into 4-plex reactions and evaluated with serial dilutions of synthesized viral RNAs. For all the multiplex assays, no cross-reactivity with other HFVs was observed, and the limits of detection were mainly between 45 and 150 copies/PCR. The reproducibility was satisfactory, since the coefficients of variation of Ct values were all less than 5% in each dilution of synthesized viral RNAs for both intra-assays and inter-assays. Evaluation of the method with available clinical serum samples collected from HFRS patients, SFTS patients and Dengue fever patients showed high sensitivity and specificity of the related multiplex assays on the clinical specimens. 
Conclusions Overall, the comprehensive multiplex one-step real-time qRT-PCR assays were established in this study, and proved to be specific, sensitive, stable and easy to serve as a useful tool for rapid detection of HFVs. PMID:24752452

  11. DVD-COOP: Innovative Conjunction Prediction Using Voronoi-filter based on the Dynamic Voronoi Diagram of 3D Spheres

    NASA Astrophysics Data System (ADS)

    Cha, J.; Ryu, J.; Lee, M.; Song, C.; Cho, Y.; Schumacher, P.; Mah, M.; Kim, D.

    Conjunction prediction is one of the critical operations in space situational awareness (SSA). For geospace objects, common algorithms for conjunction prediction are usually based on all-pairwise checks, spatial hashing, or kd-trees. Computational load is usually reduced through filters; however, filtering leaves a real chance of missing potential collisions between space objects. We present a novel algorithm which both guarantees no missed conjunctions and efficiently answers a variety of spatial queries, including pairwise conjunction prediction. The algorithm takes only O(k log N) time in the worst case for N objects, where k is a factor proportional to the prediction time span. The proposed algorithm, named DVD-COOP (Dynamic Voronoi Diagram-based Conjunctive Orbital Object Predictor), is based on the dynamic Voronoi diagram of moving spherical balls in 3D space. The algorithm has a preprocessing stage consisting of two steps: the construction of an initial Voronoi diagram (taking O(N) time on average) and the construction of a priority queue for the events of topology changes in the Voronoi diagram (taking O(N log N) time in the worst case). The scalability of the proposed algorithm is also discussed. We hope that the proposed Voronoi approach will change the computational paradigm in spatial reasoning among space objects.
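    The priority-queue-of-events idea behind the O(k log N) claim can be illustrated schematically. The sketch below is not the Voronoi-diagram machinery itself; it only shows the generic kinetic event-loop pattern (build the queue once, then pop events in time order at O(log N) each), with hypothetical event times and object names:

    ```python
    import heapq

    # A minimal event-queue sketch of the event-driven approach the abstract
    # describes; the event times and pairs here are hypothetical placeholders,
    # not actual Voronoi topology-change events from DVD-COOP.
    def predict_conjunctions(initial_events, horizon):
        """Pop events in time order until the prediction horizon is reached.

        initial_events: list of (time, pair) tuples, one per candidate event.
        Returns the (time, pair) events that occur within the horizon.
        """
        queue = list(initial_events)
        heapq.heapify(queue)                      # O(N) initial construction
        conjunctions = []
        while queue and queue[0][0] <= horizon:   # each pop costs O(log N)
            t, pair = heapq.heappop(queue)
            conjunctions.append((t, pair))
        return conjunctions

    events = [(5.0, ("sat-A", "sat-B")), (12.0, ("sat-C", "deb-7")), (30.0, ("sat-A", "deb-9"))]
    print(predict_conjunctions(events, horizon=15.0))
    ```

    In the real algorithm, processing an event would also update the Voronoi diagram and may schedule new events; the loop structure, however, is the same.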

  12. Systematic uncertainties in RF-based measurement of superconducting cavity quality factors

    DOE PAGES

    Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; ...

    2016-05-10

    Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Here, instrumental factors that can systematically bias RF-based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.

  13. Surrogate Analysis and Index Developer (SAID) tool

    USGS Publications Warehouse

    Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N.

    2015-10-01

    The regression models created in SAID can be used in utilities that have been developed to work with the USGS National Water Information System (NWIS) and for the USGS National Real-Time Water Quality (NRTWQ) Web site. The real-time dissemination of predicted SSC and prediction intervals for each time step has substantial potential to improve understanding of sediment-related water quality and associated engineering and ecological management decisions.

  14. From cheek swabs to consensus sequences: an A to Z protocol for high-throughput DNA sequencing of complete human mitochondrial genomes

    PubMed Central

    2014-01-01

    Background Next-generation DNA sequencing (NGS) technologies have made huge impacts in many fields of biological research, but especially in evolutionary biology. One area where NGS has shown potential is for high-throughput sequencing of complete mtDNA genomes (of humans and other animals). Despite the increasing use of NGS technologies and a better appreciation of their importance in answering biological questions, there remain significant obstacles to the successful implementation of NGS-based projects, especially for new users. Results Here we present an ‘A to Z’ protocol for obtaining complete human mitochondrial (mtDNA) genomes – from DNA extraction to consensus sequence. Although designed for use on humans, this protocol could also be used to sequence small, organellar genomes from other species, and also nuclear loci. This protocol includes DNA extraction, PCR amplification, fragmentation of PCR products, barcoding of fragments, sequencing using the 454 GS FLX platform, and a complete bioinformatics pipeline (primer removal, reference-based mapping, output of coverage plots and SNP calling). Conclusions All steps in this protocol are designed to be straightforward to implement, especially for researchers who are undertaking next-generation sequencing for the first time. The molecular steps are scalable to large numbers (hundreds) of individuals and all steps post-DNA extraction can be carried out in 96-well plate format. Also, the protocol has been assembled so that individual ‘modules’ can be swapped out to suit available resources. PMID:24460871

  15. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with an SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
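    The SIR (Sequential Importance Resampling) building block the abstract refers to can be sketched briefly. Systematic resampling is shown here as one common SIR scheme; it is an assumption for illustration, not necessarily the exact resampler used in the study:

    ```python
    import random

    # A minimal sequential-importance-resampling (SIR) step: particles are
    # redrawn in proportion to their weights using evenly spaced pointers.
    def systematic_resample(particles, weights, rng=random):
        n = len(particles)
        total = sum(weights)
        cum, acc = [], 0.0
        for w in weights:                  # normalized cumulative weights
            acc += w / total
            cum.append(acc)
        start = rng.random() / n           # single random offset
        resampled, j = [], 0
        for i in range(n):
            u = start + i / n              # evenly spaced pointers in [0, 1)
            while j < n - 1 and cum[j] < u:
                j += 1
            resampled.append(particles[j])
        return resampled

    random.seed(0)
    parts = ["p0", "p1", "p2", "p3"]
    print(systematic_resample(parts, [0.1, 0.6, 0.2, 0.1]))
    ```

    In the smoother described above, the weights would additionally accumulate observation likelihoods over a whole time window before states and parameters are resampled, rather than at each single time step.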

  16. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    PubMed

    Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda

    2011-03-15

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
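    A toy version of the second step (choosing ranks that minimize network-wide ranking errors) can be written as a brute-force search over orderings. The win matrix below is hypothetical, and counting dominance observations that point "up" the ranking is a deliberate simplification of the paper's ranking potentials:

    ```python
    from itertools import permutations

    # wins[a][b] = observed conflicts a won against b (hypothetical data)
    wins = {
        "A": {"B": 5, "C": 4},
        "B": {"A": 1, "C": 3},
        "C": {"A": 0, "B": 1},
    }

    def ranking_errors(order):
        """Count dominance observations inconsistent with the proposed order."""
        errors = 0
        for hi, a in enumerate(order):
            for b in order[hi + 1:]:
                errors += wins[b].get(a, 0)  # lower-ranked beating higher-ranked
        return errors

    # Exhaustive search is fine for a toy society; the paper's approach scales
    # to real groups by minimizing errors in inferred ranking potentials instead.
    best = min(permutations(wins), key=ranking_errors)
    print(list(best), ranking_errors(best))
    ```

    The residual error count is the quantity the third step would place confidence bounds on; transitivity of dominance is what makes a single linear order a sensible target at all.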

  17. RiseTx: testing the feasibility of a web application for reducing sedentary behavior among prostate cancer survivors receiving androgen deprivation therapy.

    PubMed

    Trinh, Linda; Arbour-Nicitopoulos, Kelly P; Sabiston, Catherine M; Berry, Scott R; Loblaw, Andrew; Alibhai, Shabbir M H; Jones, Jennifer M; Faulkner, Guy E

    2018-06-07

    Given the high levels of sedentary time and treatment-related side effects in prostate cancer survivors (PCS), interventions targeting sedentary behavior (SED) may be more sustainable compared to physical activity (PA). To examine the feasibility of a web-based intervention (RiseTx) for reducing SED and increasing moderate-to-vigorous physical activity (MVPA) among PCS undergoing androgen deprivation therapy (ADT). Secondary outcomes include changes in SED, MVPA, light intensity PA, and quality of life. Forty-six PCS were recruited from two cancer centres in Toronto, Ontario, Canada between July 2015 and October 2016. PCS were given an activity tracker (Jawbone), access to the RiseTx website program, and provided with a goal of increasing walking by 3000 daily steps above baseline levels over a 12-week period. A range of support tools were progressively released to reduce SED time (e.g., self-monitoring of steps) during the five-phase program. Objective measures of SED, MVPA, and daily steps were compared across the 12-week intervention using linear mixed models. Of the 46 PCS enrolled in the study, 42 completed the SED intervention, representing a 9% attrition rate. Measurement completion rates were 97% and 65% at immediately post-intervention and 12-week follow-up for all measures, respectively. Overall adherence was 64% for total number of logins (i.e., >3 visits each week). Sample mean age was 73.2 ± 7.3 years, mean BMI was 28.0 ± 3.0 kg/m², mean number of months since diagnosis was 93.6 ± 71.2, and 72% had ADT administered continuously. A significant reduction of 455.4 weekly minutes of SED time was observed at post-intervention (p = .005). A significant increase of +44.1 weekly minutes of MVPA was observed at immediately post-intervention (p = .010). There was a significant increase in step counts of +1535 steps from baseline to post-intervention (p < .001). RiseTx was successful in reducing SED and increasing MVPA in PCS. PCS were satisfied with the intervention and its components. 
Additional strategies may be needed though for maintenance of behavior change. The next step for RiseTx is to replicate these findings in a larger, randomized controlled trial that will have the potential for reducing sedentary time among PCS. NCT03321149 (ClinicalTrials.gov Identifier).

  18. Method and allocation device for allocating pending requests for data packet transmission at a number of inputs to a number of outputs of a packet switching device in successive time slots

    DOEpatents

    Abel, Francois [Rueschlikon, CH; Iliadis, Ilias [Rueschlikon, CH; Minkenberg, Cyriel J. A. [Adliswil, CH

    2009-02-03

    A method for allocating pending requests for data packet transmission at a number of inputs to a number of outputs of a switching system in successive time slots, including a matching method including the steps of providing a first request information in a first time slot indicating data packets at the inputs requesting transmission to the outputs of the switching system, performing a first step in the first time slot depending on the first request information to obtain a first matching information, providing a last request information in a last time slot successive to the first time slot, performing a last step in the last time slot depending on the last request information and depending on the first matching information to obtain a final matching information, and assigning the pending data packets at the number of inputs to the number of outputs based on the final matching information.
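    The slot-by-slot refinement described in the claim can be caricatured with a greedy matcher: a partial matching computed in the first time slot is carried forward and completed in a later slot as new requests arrive, and only the final matching drives the assignment of packets to outputs. This sketch is illustrative only; real crossbar schedulers are considerably more elaborate:

    ```python
    # A highly simplified sketch of the multi-slot matching idea: compute a
    # partial input->output matching in slot 1, then refine it in slot 2
    # without disturbing the already-matched pairs.
    def greedy_match(requests, fixed=None):
        """requests: set of (input, output) pairs; fixed: matching to preserve."""
        match = dict(fixed or {})             # input -> output
        used_out = set(match.values())
        for inp, out in sorted(requests):
            if inp not in match and out not in used_out:
                match[inp] = out
                used_out.add(out)
        return match

    slot1 = {(0, 1), (1, 1)}                  # two inputs contend for output 1
    first = greedy_match(slot1)               # first matching information
    slot2 = slot1 | {(1, 2)}                  # a new request arrives in slot 2
    final = greedy_match(slot2, fixed=first)  # final matching information
    print(final)
    ```

    Here input 1 loses the contest for output 1 in the first slot but is matched to output 2 in the final slot, mirroring how the patented method combines first and last matching information before assigning packets.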

  19. Convergent Validity of the Arab Teens Lifestyle Study (ATLS) Physical Activity Questionnaire

    PubMed Central

    Al-Hazzaa, Hazzaa M.; Al-Sobayel, Hana I.; Musaiger, Abdulrahman O.

    2011-01-01

    The Arab Teens Lifestyle Study (ATLS) is a multicenter project for assessing the lifestyle habits of Arab adolescents. This study reports on the convergent validity of the physical activity questionnaire used in ATLS against an electronic pedometer. Participants were 39 males and 36 females randomly selected from secondary schools, with a mean age of 16.1 ± 1.1 years. The ATLS self-report questionnaire was validated against the electronic pedometer for three consecutive weekdays. Mean step counts were 6,866 ± 3,854 steps/day with no significant gender difference observed. Questionnaire results showed no significant gender differences in time spent on total or moderate-intensity activities. However, males spent significantly more time than females on vigorous-intensity activity. The correlation of step counts with total time spent on all activities by the questionnaire was 0.369. The relationship of step counts was higher with vigorous-intensity (r = 0.338) than with moderate-intensity activity (r = 0.265). Pedometer step counts showed higher correlations with time spent on walking (r = 0.350) and jogging (r = 0.383) than with the time spent on other activities. Active participants, based on pedometer assessment, were also most active by the questionnaire. It appears that the ATLS questionnaire is a valid instrument for assessing habitual physical activity among Arab adolescents. PMID:22016718

  20. Convergent validity of the Arab Teens Lifestyle Study (ATLS) physical activity questionnaire.

    PubMed

    Al-Hazzaa, Hazzaa M; Al-Sobayel, Hana I; Musaiger, Abdulrahman O

    2011-09-01

    The Arab Teens Lifestyle Study (ATLS) is a multicenter project for assessing the lifestyle habits of Arab adolescents. This study reports on the convergent validity of the physical activity questionnaire used in ATLS against an electronic pedometer. Participants were 39 males and 36 females randomly selected from secondary schools, with a mean age of 16.1 ± 1.1 years. The ATLS self-report questionnaire was validated against the electronic pedometer for three consecutive weekdays. Mean step counts were 6,866 ± 3,854 steps/day with no significant gender difference observed. Questionnaire results showed no significant gender differences in time spent on total or moderate-intensity activities. However, males spent significantly more time than females on vigorous-intensity activity. The correlation of step counts with total time spent on all activities by the questionnaire was 0.369. The relationship of step counts was higher with vigorous-intensity (r = 0.338) than with moderate-intensity activity (r = 0.265). Pedometer step counts showed higher correlations with time spent on walking (r = 0.350) and jogging (r = 0.383) than with the time spent on other activities. Active participants, based on pedometer assessment, were also most active by the questionnaire. It appears that the ATLS questionnaire is a valid instrument for assessing habitual physical activity among Arab adolescents.
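    The convergent-validity figures quoted in this record are plain Pearson correlations between pedometer and questionnaire measures. A minimal computation on hypothetical paired data (not the study's values) looks like this:

    ```python
    import math

    # Pearson correlation between pedometer step counts and questionnaire
    # activity minutes; the paired values below are hypothetical, not study data.
    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    steps = [4200, 6800, 9100, 5500, 12000]       # pedometer steps/day
    active_minutes = [30, 55, 70, 40, 95]         # self-reported minutes/day
    print(round(pearson_r(steps, active_minutes), 3))
    ```

    Values like r = 0.369 in the abstract indicate a modest positive association, which is typical when validating self-report instruments against objective monitors.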
