Sample records for time step limit

  1. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
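
    A toy scalar analogue makes the SI/FI distinction concrete. The sketch below (Python, with an illustrative cubic coefficient k(T) = T^3 standing in for the temperature dependence; not the Fokker-Planck system itself) freezes the coefficient at the old temperature in the SI step and iterates it to the new temperature in the FI step:

    ```python
    def k(T):
        return T**3          # illustrative temperature-dependent coefficient

    def step_si(T, Teq, dt):
        # Semi-Implicit: backward Euler with k frozen at its
        # beginning-of-time-step value
        return (T + dt * k(T) * Teq) / (1.0 + dt * k(T))

    def step_fi(T, Teq, dt, iters=50):
        # Fully Implicit: k evaluated at the end-of-time-step value,
        # here via fixed-point iteration
        Tn = T
        for _ in range(iters):
            Tn = (T + dt * k(Tn) * Teq) / (1.0 + dt * k(Tn))
        return Tn

    T_si = T_fi = 2.0
    for _ in range(20):
        T_si = step_si(T_si, Teq=1.0, dt=5.0)
        T_fi = step_fi(T_fi, Teq=1.0, dt=5.0)
    print(T_si, T_fi)   # both relax toward Teq; the gap shows the effect of
                        # freezing the coefficient at the old time level
    ```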

  2. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  3. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
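
    The resonance-free integrators build on the standard multiple-time-step (r-RESPA) pattern: slow, expensive forces at the outer step, fast forces inside. A bare sketch of that pattern, omitting the isokinetic constraints and Nosé-Hoover/Langevin couplings that the paper adds to lift the resonance ceiling (function names are illustrative):

    ```python
    def respa_step(x, v, f_fast, f_slow, dt_slow, n_inner, m=1.0):
        """One r-RESPA step with velocity-Verlet on the fast forces."""
        v = v + 0.5 * dt_slow * f_slow(x) / m      # slow half-kick
        dt_fast = dt_slow / n_inner
        for _ in range(n_inner):                   # inner loop on fast forces
            v = v + 0.5 * dt_fast * f_fast(x) / m
            x = x + dt_fast * v
            v = v + 0.5 * dt_fast * f_fast(x) / m
        v = v + 0.5 * dt_slow * f_slow(x) / m      # slow half-kick
        return x, v
    ```

    In this bare form, resonance caps dt_slow at roughly half the period of the fastest motion; the isokinetic variants cited above are what remove that ceiling.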

  4. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  5. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, like Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.
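
    For comparison with the relaxed bound derived in the paper, the conventional Courant limit for fully explicit FDTD on a nonuniform grid is commonly taken from the smallest cell. A minimal sketch of that conservative, sufficient bound:

    ```python
    import numpy as np

    def courant_dt(dx, dy, dz, c=299792458.0, safety=0.99):
        # conventional (conservative) explicit-FDTD limit using the smallest
        # spacing in each direction of the nonuniform grid
        inv_sq = 1.0/np.min(dx)**2 + 1.0/np.min(dy)**2 + 1.0/np.min(dz)**2
        return safety / (c * np.sqrt(inv_sq))
    ```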

  6. Accurate Monotonicity-Preserving Schemes with Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving and uniformly high-order accurate. Numerical experiments for advection as well as the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
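
    The idea of limiting an interface value so that it cooperates with Runge-Kutta time stepping can be sketched with a classical minmod limiter and a two-stage SSP scheme; the paper's schemes are higher order and relax the limiter near extrema, so this is only a simplified analogue:

    ```python
    import numpy as np

    def minmod(a, b):
        # zero when the slopes disagree in sign, else the smaller magnitude
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def rhs(u, c=1.0, dx=1.0):
        # upwind flux built from a limited interface value at i+1/2 (periodic)
        du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
        flux = c * (u + 0.5 * du)
        return -(flux - np.roll(flux, 1)) / dx

    def ssp_rk2(u, dt):
        # strong-stability-preserving RK2 keeps the limiting effective in time
        u1 = u + dt * rhs(u)
        return 0.5 * (u + u1 + dt * rhs(u1))
    ```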

  7. Mass imbalances in EPANET water-quality simulations

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    2018-04-01

    EPANET is widely employed to simulate water quality in water distribution systems. However, in general, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results only for short water-quality time steps. Overly long time steps can yield errors in concentration estimates and can result in situations in which constituent mass is not conserved. The use of a time step that is sufficiently short to avoid these problems may not always be feasible. The absence of EPANET errors or warnings does not ensure conservation of mass. This paper provides examples illustrating mass imbalances and explains how such imbalances can occur because of fundamental limitations in the water-quality routing algorithm used in EPANET. In general, these limitations cannot be overcome by the use of improved water-quality modeling practices. This paper also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, toward those obtained using the preliminary event-driven approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations. The results presented in this paper should be of value to those who perform water-quality simulations using EPANET or use the results of such simulations, including utility managers and engineers.
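
    A mass-balance audit of the kind implied above is easy to bolt onto any simulation output. This hypothetical helper (not part of EPANET's API) reports the fractional imbalance for one constituent over the simulation period:

    ```python
    def mass_imbalance(m_in, m_out, m_stored_initial, m_stored_final,
                       m_reacted=0.0):
        # expected final storage from conservation; a result away from zero
        # signals the routing artefacts discussed above
        expected = m_stored_initial + m_in - m_out - m_reacted
        scale = max(m_in + m_stored_initial, 1e-12)
        return (m_stored_final - expected) / scale
    ```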

  8. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    NASA Astrophysics Data System (ADS)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.

  9. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting.

    PubMed

    Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement in the Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged from 0.33 to 0.59 and from 0.28 to 0.48, respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
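
    The agreement statistics quoted above follow from the standard Bland-Altman definitions; a minimal sketch on log10-transformed viral loads:

    ```python
    import numpy as np

    def bland_altman(log_one_step, log_two_step):
        # mean difference (bias) and 95% limits of agreement between assays
        d = np.asarray(log_one_step) - np.asarray(log_two_step)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)
    ```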

  10. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting

    PubMed Central

    Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement in the Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged from 0.33 to 0.59 and from 0.28 to 0.48, respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. How to cite this article: Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15. PMID:29201678

  11. Feasibility of robotic exoskeleton ambulation in a C4 person with incomplete spinal cord injury: a case report.

    PubMed

    Lester, Robert M; Gorgey, Ashraf S

    2018-01-01

    To determine whether an individual with C4 incomplete spinal cord injury (SCI) and limited hand function can effectively operate a powered exoskeleton (Ekso) to improve parameters of physical activity as determined by swing-time, up-time, walk-time, and total number of steps. A 21-year-old male with chronic (>1 year postinjury) incomplete C4 SCI participated in a clinical exoskeleton program to determine the feasibility of standing up and walking with limited hand function. The participant was invited to attend 3 sessions, including fitting, familiarization and gait training, separated by one-week intervals. Walk-time, up-time and total number of steps were measured during each training session. A complete body composition assessment using dual-energy X-ray absorptiometry (DXA) of the spine, knees and hips was conducted before training. Using a platform walker and with both hands cuffed, the participant managed to stand up and ambulate successfully using the exoskeleton. Over the course of 2 weeks, maximum walk-time increased from 7 to 17 min and the number of steps increased from 83 to 589. The total up-time increased from 19 to 31 min. Exoskeleton training may be a safe and feasible approach for persons with higher levels of SCI after effectively providing a supportive assistive device for weight shifting. The current case study demonstrates the use of a powered exoskeleton by an individual with high-level tetraplegia (C4 and above) and limited hand function.

  12. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; Krzyżewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study the unstable step motion on vicinal crystal surfaces we devise vicinal Cellular Automata. Each cell in the colony has a value equal to its height in the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule defines that each adatom at the right-nearest-neighbor position to a (multi-)step attaches to it. The whole colony is updated at once, and then time is incremented. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed: biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of iiSE. Additional distinction is possible based on the time-scaling exponents of the multi-step sizes Nmulti; these are 0.36-0.4 (for biased diffusion) and 1/4 (iiSE).
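
    A loose one-dimensional toy of the update cycle (attachment when the right nearest neighbor is a step, replenishment of consumed particles, then a biased diffusion update) might look as follows; the study itself uses a two-dimensional vicinal colony, so this sketch only illustrates the rule structure:

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    L, n_steps, n_ad = 200, 10, 40
    h = np.repeat(np.arange(n_steps), L // n_steps)   # regularly spaced steps
    ad = rng.choice(L, size=n_ad, replace=False)      # random adatoms

    def update(h, ad, bias=0.8):
        # growth rule: an adatom whose right nearest neighbor is a
        # (multi-)step attaches to it, raising the local height
        attach = [i for i, x in enumerate(ad) if h[(x + 1) % L] > h[x]]
        for i in attach:
            h[ad[i]] += 1
        ad = np.delete(ad, attach)
        ad = np.append(ad, rng.choice(L, size=len(attach)))  # replenish
        move = np.where(rng.random(ad.size) < bias, 1, -1)   # biased diffusion
        return h, (ad + move) % L

    for _ in range(2000):
        h, ad = update(h, ad)
    # bunch and multi-step sizes can then be read off the height profile h
    ```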

  13. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  14. 14 CFR 23.529 - Hull and main float landing conditions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... landing. For symmetrical step, bow, and stern landings, the limit water reaction load factors are those....25 tan β times the resultant load in the corresponding symmetrical landing condition; and (2) The... at one float times the step landing load reached under § 23.527. The side load is directed inboard...

  15. 14 CFR 23.529 - Hull and main float landing conditions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... landing. For symmetrical step, bow, and stern landings, the limit water reaction load factors are those....25 tan β times the resultant load in the corresponding symmetrical landing condition; and (2) The... at one float times the step landing load reached under § 23.527. The side load is directed inboard...

  16. Steps wandering on the lysozyme and KDP crystals during growth in solution

    NASA Astrophysics Data System (ADS)

    Rashkovich, L. N.; Chernevich, T. G.; Gvozdev, N. V.; Shustin, O. A.; Yaminsky, I. V.

    2001-10-01

    We have applied atomic force microscopy to study, in solution, the time evolution of step roughness on crystal faces with a high (potassium dihydrogen phosphate, KDP) and a low (lysozyme) density of kinks. It was found that the roughness increases with time as t^(1/4). Step velocity does not depend on the distance between steps, which is why the experimental data were interpreted on the basis of the Voronkov theory, which assumes that the attachment and detachment of building units at the kinks is the major limitation for crystal growth. In the frame of this theoretical model, the calculation of material parameters is performed.

  17. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; Sturnieks, Daina L

    2017-05-01

    Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into forward step training (FT), lateral plus forward step training (FLT), or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066.

  18. A Scale-Invariant "Discrete-Time" Balitsky-Kovchegov Equation

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Peschanski, R.

    2005-06-01

    We consider a version of QCD dipole cascading corresponding to a finite number n of discrete ΔY steps of branching in rapidity. Using the discretization scheme preserving the holomorphic factorizability and scale invariance in position space of the dipole splitting function, we derive an exact recurrence formula from step to step which plays the role of a "discrete-time" Balitsky-Kovchegov equation. The BK solutions are recovered in the limit n = ∞ and ΔY = 0.

  19. Cross-platform evaluation of commercial real-time SYBR green RT-PCR kits for sensitive and rapid detection of European bat Lyssavirus type 1.

    PubMed

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

    This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria analysed included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs, practicability and thermocycling times of the kits. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, and intra- and interreproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the Quantitect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes.

  20. Cross-Platform Evaluation of Commercial Real-Time SYBR Green RT-PCR Kits for Sensitive and Rapid Detection of European Bat Lyssavirus Type 1

    PubMed Central

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

    This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria analysed included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs, practicability and thermocycling times of the kits. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, and intra- and interreproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the Quantitect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes. PMID:25785274

  1. Step-by-step design of a single phase 3.3 kV/200 A resistive type superconducting fault current limiter (R-SFCL) and cryostat

    NASA Astrophysics Data System (ADS)

    Kar, Soumen; Rao, V. V.

    2018-07-01

    In our first attempt to design a single phase R-SFCL in India, we have chosen the typical rating of a medium voltage level (3.3 kVrms, 200 Arms, 1Φ) R-SFCL. The step-by-step design procedure for the R-SFCL involves conductor selection, time-dependent electro-thermal simulations and recovery time optimization after fault removal. In the numerical analysis, effective limitation of a 5 kA fault current by the medium voltage level R-SFCL is simulated. The maximum normal state resistance and the maximum temperature rise in the SFCL coil during current limitation are estimated using a one-dimensional energy balance equation. Further, a cryogenic system is conceptually designed for the aforesaid MV level R-SFCL, considering the inner and outer vessel materials, wall thickness and thermal insulation to be used for the R-SFCL system. Finally, the total thermal load is calculated for the designed R-SFCL cryostat to select a suitable cryo-refrigerator for LN2 re-condensation.

  2. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1987-01-01

    The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusive flux ('smoothing') to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation. Switching to an implicit scheme removed the time-step limitation.

  3. Minimum Performance on Clinical Tests of Physical Function to Predict Walking 6,000 Steps/Day in Knee Osteoarthritis: An Observational Study.

    PubMed

    Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K

    2018-07-01

    Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) to predict walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High-specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, 1.13 to 1.26 meters/second for walking speed, or 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function.
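
    Given raw data, thresholds like these come from reading a quantile off the group that does not meet the walking benchmark. A hypothetical helper (names illustrative, not from the study) for a timed test where lower is better:

    ```python
    import numpy as np

    def cutoff_at_specificity(test_times, walks_6k, spec=0.90):
        # predict ">= 6,000 steps/day" when time <= cutoff; specificity is
        # the fraction of non-walkers with time above the cutoff, so the
        # cutoff is the (1 - spec) quantile of the non-walker group
        times = np.asarray(test_times, dtype=float)
        neg = times[~np.asarray(walks_6k, dtype=bool)]
        return np.quantile(neg, 1.0 - spec)
    ```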

  4. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    PubMed

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate-limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
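
    In the simplest case of n identical rate-limiting steps with rate constant k, the all-or-none time course is the gamma (Erlang) CDF, which is where the gamma and incomplete gamma functions enter and why n can float as a continuous parameter. A minimal sketch under that equal-rates assumption:

    ```python
    from scipy.special import gammainc   # regularized lower incomplete gamma

    def fraction_unwound(t, n, k):
        # f_ss(t) = P(n, k*t); n need not be an integer, so both n and the
        # kinetic step size m = L / n can float in least-squares fitting
        return gammainc(n, k * t)
    ```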

  5. Kinematic and behavioral analyses of protective stepping strategies and risk for falls among community living older adults.

    PubMed

    Bair, Woei-Nan; Prettyman, Michelle G; Beamer, Brock A; Rogers, Mark W

    2016-07-01

    Protective stepping evoked by externally applied lateral perturbations reveals balance deficits underlying falls. However, a lack of comprehensive information about the control of different stepping strategies in relation to the magnitude of perturbation limits understanding of balance control in relation to age and fall status. The aim of this study was to investigate different protective stepping strategies and their kinematic and behavioral control characteristics in response to different magnitudes of lateral waist-pulls between older fallers and non-fallers. Fifty-two community-dwelling older adults (16 fallers) reacted naturally to maintain balance in response to five magnitudes of lateral waist-pulls. The balance tolerance limit (BTL, waist-pull magnitude where protective steps transitioned from single to multiple steps), first step control characteristics (stepping frequency and counts, spatial-temporal kinematic, and trunk position at landing) of four naturally selected protective step types were compared between fallers and non-fallers at- and above-BTL. Fallers took medial-steps most frequently while non-fallers most often took crossover-back-steps. Only non-fallers varied their step count and first step control parameters by step type at the instants of step initiation (onset time) and termination (trunk position), while both groups modulated step execution parameters (single stance duration and step length) by step type. Group differences were generally better demonstrated above-BTL. Fallers primarily used a biomechanically less effective medial-stepping strategy that may be partially explained by reduced somato-sensation. Fallers did not modulate their step parameters by step type at first step initiation and termination, instances particularly vulnerable to instability, reflecting their limitations in balance control during protective stepping.

  6. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1986-01-01

    The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusion flux (smoothing) to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation (Δt < Δx²). Switching to an implicit scheme removed the time-step limitation.

  7. On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.

    PubMed

    Louarroudi, E; Sanchez, B

    2017-02-01

    When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while accurately measuring non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.

  8. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
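
    The pseudo-time subiteration idea (converge an inner relaxation within each physical step so that the physical step size is set by accuracy rather than stability) has this generic shape; a simplified stand-in for the schemes used in these codes, with BDF2 in physical time:

    ```python
    def dual_time_step(u_n, u_nm1, residual, dt, dtau, n_sub=50):
        # one physical time step advanced by explicit pseudo-time relaxation
        u = u_n.copy()
        for _ in range(n_sub):
            unsteady = (1.5 * u - 2.0 * u_n + 0.5 * u_nm1) / dt  # BDF2 term
            u = u - dtau * (unsteady + residual(u))              # pseudo-step
        return u
    ```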

  9. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    PubMed

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  10. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics

    PubMed Central

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-01-01

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876

  11. 14 CFR 25.529 - Hull and main float landing conditions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... stern landings, the limit water reaction load factors are those computed under § 25.527. In addition— (1... upward component and a side component equal, respectively, to 0.75 and 0.25 tan β times the resultant... upward load at the step of each float of 0.75 and a side load of 0.25 tan β at one float times the step...

  12. 14 CFR 25.529 - Hull and main float landing conditions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... stern landings, the limit water reaction load factors are those computed under § 25.527. In addition— (1... upward component and a side component equal, respectively, to 0.75 and 0.25 tan β times the resultant... upward load at the step of each float of 0.75 and a side load of 0.25 tan β at one float times the step...

  13. Melatonin: a universal time messenger.

    PubMed

    Erren, Thomas C; Reiter, Russel J

    2015-01-01

    Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachiasmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messaging should follow a five-step cascade. While abundant evidence suggests how steps 1 through 4 work, step 5, "how is central time information transmitted throughout the body?", awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: The ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGCs] and rods and cones; Step 3: Via the retinohypothalamic tract, external time information reaches the light-dependent master clock in the brain, viz. the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network. Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow the temporal organization which is orchestrated by the SCN. Key reasons why we expect melatonin to have such a role include: First, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; Second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemical conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.

  14. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  15. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  16. Assessment of power step performances of variable speed pump-turbine unit by means of hydro-electrical system simulation

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Nicolet, C.; Hell, J.; Moreira, C.

    2017-04-01

    The paper explores the improvement in ancillary services that variable speed technologies can provide for the case of an existing 2x210 MVA pumped storage power plant whose conversion from fixed speed to variable speed is investigated, with a focus on the power step performances of the units. First, two motor-generator variable speed technologies are introduced, namely the Doubly Fed Induction Machine (DFIM) and the Full Scale Frequency Converter (FSFC). Then a detailed numerical simulation model of the investigated power plant, used to simulate the power step response and comprising the waterways, the pump-turbine unit, the motor-generator, the grid connection and the control systems, is presented. Hydroelectric system time domain simulations are performed in order to determine the shortest response time achievable, taking into account the constraints from the maximum penstock pressure and from the rotational speed limits. It is shown that the maximum instantaneous power step response up and down depends on the hydro-mechanical characteristics of the pump-turbine unit and on the motor-generator speed limits. As a result, for the investigated test case, the FSFC solution offers the best power step response performances.

  17. Cut-off values for step count and TV viewing time as discriminators of hyperglycaemia in Brazilian children and adolescents.

    PubMed

    Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge

    2016-09-01

    The use of step count and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. This study aimed to establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52 to 0.61 for step count and from 0.49 to 0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8%; specificity = 51.8%) for male children, and 12 371 (sensitivity = 55.6%; specificity = 55.5%) and 11 292 (sensitivity = 57.7%; specificity = 48.6%) for female children and adolescents, respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8%; specificity = 48.6-53.2%). This study represents a first step toward the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.

  18. A scale-free network with limiting on vertices

    NASA Astrophysics Data System (ADS)

    Tang, Lian; Wang, Bin

    2010-05-01

    We propose and analyze a random graph model which explains a phenomenon in the economic company network in which a company may not expand its business at some time due to limits on money and capacity. The random graph process is defined as follows: at any time-step t, (i) with probability α(k), and independently of other time-steps, each vertex vi (i ≤ t-1) becomes inactive, meaning it cannot be connected by more edges, where k is the degree of vi at time-step t; (ii) a new vertex vt is added along with m edges incident to vt, whose neighbors are chosen in the manner of preferential attachment. We prove that the degree distribution P(k) of this random graph process satisfies P(k) ∝ C1k if α(·) is a constant α0, and P(k) ∝ C2k^(-3) if α(ℓ) ↓ 0 as ℓ ↑ ∞, where C1, C2 are two positive constants. The analytical result is found to be in good agreement with that obtained by numerical simulations. Furthermore, we obtain the degree distributions in this model with m-varying functions by simulation.
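
    A toy realization of the process for the constant-probability case α(·) = α0 can be simulated directly; parameter values and the small seed graph here are illustrative:

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def simulate(T=20000, m=2, alpha0=0.02):
        deg = np.zeros(T, dtype=np.int64)
        active = np.zeros(T, dtype=bool)
        deg[0], active[0] = m, True                        # seed vertex
        for t in range(1, T):
            idx = np.flatnonzero(active)
            active[idx] = rng.random(idx.size) >= alpha0   # (i) inactivation
            idx = np.flatnonzero(active)
            if idx.size:                                   # (ii) preferential attachment
                chosen = rng.choice(idx, size=min(m, idx.size), replace=False,
                                    p=deg[idx] / deg[idx].sum())
                deg[chosen] += 1
            deg[t], active[t] = m, True        # new vertex vt with m edges
        return deg

    deg = simulate()
    # a histogram of deg gives the empirical degree distribution P(k)
    ```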

  19. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
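
    A concrete minimal example: explicit Euler applied to the logistic ODE u' = u(1 - u) becomes a logistic map, whose period-doubled orbits are spurious numerical asymptotes rather than the true steady state u = 1. This standard illustration (not code from the report) shows both the convergent and the spurious regimes:

    ```python
    def euler_orbit(dt, u0=0.1, n=2000, keep=8):
        u = u0
        for _ in range(n):
            u = u + dt * u * (1.0 - u)     # explicit Euler = logistic map
        orbit = []
        for _ in range(keep):
            u = u + dt * u * (1.0 - u)
            orbit.append(round(u, 6))
        return sorted(set(orbit))

    for dt in (0.5, 1.5, 2.2, 2.6):
        # one value: converged to u = 1; several values: spurious asymptote
        print(dt, euler_orbit(dt))
    ```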

  20. Fault current limiter with shield and adjacent cores

    DOEpatents

    Darmann, Francis Anthony; Moriconi, Franco; Hodge, Eoin Patrick

    2013-10-22

    In a fault current limiter (FCL) of a saturated core type having at least one coil wound around a high permeability material, a method of suppressing the time derivative of the fault current at the zero current point includes the following step: utilizing an electromagnetic screen or shield around the AC coil to suppress the time derivative current levels during zero current conditions.

  1. Dynamic Pathfinders: Leveraging Your OPAC to Create Resource Guides

    ERIC Educational Resources Information Center

    Hunter, Ben

    2008-01-01

    Library pathfinders are a time-tested method of leading library users to important resources. However, paper-based pathfinders suffer from space limitations, and both paper-based and Web-based pathfinders require frequent updates to keep up with new library acquisitions. This article details a step-by-step method to create an online dynamic…

  2. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  3. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    NASA Astrophysics Data System (ADS)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected so as to satisfy the dynamic fitting problem between the plant output and the predicted output, so that the estimator tracks the plant's dynamical behavior. Because the prediction horizon is limited to a single time-step, the execution time of the algorithm is completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
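
    The core idea is easy to prototype. The sketch below uses a made-up scalar first-order plant in place of the induction machine model (all names and numbers are illustrative, not the authors' implementation): a one-step prediction map is tabulated over candidate speeds, and each step selects the candidate that best fits the measured output.

```python
import numpy as np

# Hypothetical toy plant: a first-order response whose drive depends on the
# unknown rotor speed w (a stand-in for the machine model).
a, b, dt = 2.0, 1.0, 1e-3

def plant_step(y, w_true):                    # "real" plant, one time-step
    return y + dt * (-a * y + b * w_true)

def predict(y, w):                            # tabulated one-step prediction map
    return y + dt * (-a * y + b * w)

w_grid = np.linspace(0.0, 200.0, 2001)        # candidate rotor speeds

# On-line: at each step pick the candidate speed whose predicted output best
# matches the measured plant output (the "dynamic fitting problem").
y, w_true = 0.0, 120.0
for _ in range(200):
    y_next = plant_step(y, w_true)
    w_hat = w_grid[np.argmin(np.abs(predict(y, w_grid) - y_next))]
    y = y_next
print(f"estimated speed ~ {w_hat:.1f} (true {w_true})")
```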

  4. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days) whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as for MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
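
    As a schematic of the globalized Newton ingredient (a generic sketch only; the actual OGS#PETSc implementation assembles the coupled hydrothermal residual and lets PETSc solve the linearized systems in parallel):

```python
import numpy as np

def newton_line_search(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson with a simple backtracking line search.

    `residual` and `jacobian` stand in for the assembled coupled
    hydrothermal equations of the monolithic system.
    """
    x = x0.copy()
    for it in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, it
        dx = np.linalg.solve(jacobian(x), -r)   # PETSc would solve this system
        # Backtracking: damp the update until the residual norm decreases.
        alpha = 1.0
        while alpha > 1e-4 and \
                np.linalg.norm(residual(x + alpha * dx)) >= np.linalg.norm(r):
            alpha *= 0.5
        x = x + alpha * dx
    return x, max_iter

# Toy strongly nonlinear 2x2 system as a stand-in for the coupled problem.
f = lambda x: np.array([x[0]**3 - x[1], np.exp(x[1]) - 2.0 - x[0]])
J = lambda x: np.array([[3.0 * x[0]**2, -1.0], [-1.0, np.exp(x[1])]])
sol, its = newton_line_search(f, J, np.array([1.0, 1.0]))
print(sol, "converged in", its, "iterations")
```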

  5. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time- or frequency-domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The difference in convergence reduction between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
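
    A single-channel sketch of the error-driven adaptive step size idea is shown below (QPSK through a toy FIR channel; the MIMO bookkeeping and the lookup-table implementation of the step size are omitted, and all parameters are illustrative):

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy single-channel stand-in for one MIMO tributary: QPSK through a short
# FIR channel plus noise, equalized by LMS with an error-driven step size.
n, taps = 20000, 7
sym = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
chan = np.array([0.1, 0.9, 0.25])
noise = 0.03 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = np.convolve(sym, chan, mode="same") + noise

w = np.zeros(taps, complex)
w[taps // 2] = 1.0                     # center-spike initialization
mu_max, mu_min = 5e-2, 5e-3
mse = []
for k in range(taps - 1, n):
    x = rx[k - taps + 1:k + 1][::-1]   # equalizer input vector
    e = sym[k - taps // 2] - w @ x     # training-aided error
    # Adaptive step: large while the error is large (fast convergence),
    # small once it has decayed (low steady-state misadjustment).
    mu = np.clip(abs(e), mu_min, mu_max)
    w += mu * np.conj(x) * e           # LMS update
    mse.append(abs(e) ** 2)
print("MSE first/last 1000 symbols:", np.mean(mse[:1000]), np.mean(mse[-1000:]))
```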

  6. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.

  7. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    NASA Astrophysics Data System (ADS)

    Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.

    2016-02-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools, caused either by cutting tool failure or by the cutting tool reaching its life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines with the cutting tool life taken into account. In practice, a cutting tool can fail before its expected life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for the case in which a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule; the second step determines the cutting tool failure time; the third step determines the system status at the cutting tool failure time; and the fourth step reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules differ from the initial schedule in the starting and completion times of each operation.

  8. Anomalous diffusion with linear reaction dynamics: from continuous time random walks to fractional reaction-diffusion equations.

    PubMed

    Henry, B I; Langlands, T A M; Wearne, S L

    2006-09-01

    We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
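
    A Monte Carlo sketch of the first model variant (a constant proportion of walkers removed instantaneously at the start of each step, with heavy-tailed waiting times) may help; the waiting-time law and parameters below are illustrative only:

```python
import numpy as np
rng = np.random.default_rng(1)

# CTRW with heavy-tailed waiting times (Pareto-type tail exponent alpha < 1
# gives subdiffusion) and linear reaction: each walker survives a step with
# probability p, i.e. a constant proportion is removed at the start of a step.
alpha, p, t_max, n_walkers = 0.7, 0.98, 1e3, 20000

positions = []
for _ in range(n_walkers):
    t, x = 0.0, 0.0
    while True:
        if rng.random() > p:          # removed by the linear reaction
            break
        t += rng.pareto(alpha) + 1.0  # heavy-tailed waiting time
        if t > t_max:
            positions.append(x)       # survived until the observation time
            break
        x += rng.choice([-1.0, 1.0])  # unbiased jump
print("surviving walkers:", len(positions))
if positions:
    print("mean square displacement:", np.mean(np.square(positions)))
```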

  9. A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)

    EPA Science Inventory

    Abstract

    Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...

  10. A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

    Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criteria. To overcome this limitation, in this work we introduce the application of a multiple grid al...

  11. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order accurate finite volume reconstruction technique. Consequently, if the number N_s is sufficiently large (N_s ≥ N + 1), the subscale resolution capability of the DG scheme is fully maintained, while preserving at the same time an essentially non-oscillatory behavior of the solution at discontinuities. Many standard DG limiters only adjust the discrete solution in troubled cells, based on the limiting of higher order moments or by applying a nonlinear WENO/HWENO reconstruction on the data at the new time t^(n+1). Instead, our new DG limiter entirely recomputes the troubled cells by solving the governing PDE system again starting from valid data at the old time level t^n, but using this time a more robust scheme on the sub-grid level. In other words, the piecewise polynomials produced by the new limiter are the result of a more robust solution of the PDE system itself, while most standard DG limiters are simply based on a mere nonlinear data post-processing of the discrete solution. Technically speaking, the new method corresponds to an element-wise checkpointing and restarting of the solver, using a lower order scheme on the sub-grid. As a result, the present DG limiter is even able to cure floating point errors like NaN values that have occurred after divisions by zero or after the computation of roots from negative numbers. This is a unique feature of our new algorithm among existing DG limiters.
The new a posteriori sub-cell stabilization approach is developed within a high order accurate one-step ADER-DG framework on multidimensional unstructured meshes for hyperbolic systems of conservation laws as well as for hyperbolic PDE with non-conservative products. The method is applied to the Euler equations of compressible gas dynamics, to the ideal magneto-hydrodynamics equations (MHD) as well as to the seven-equation Baer-Nunziato model of compressible multi-phase flows. A large set of standard test problems is solved in order to assess the accuracy and robustness of the new limiter.
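
    The detect-and-recompute logic translates almost directly into code. Below is a deliberately reduced 1D finite-difference analogue (my own stand-in: Lax-Wendroff as the "unlimited high-order" update, first-order upwind as the robust fallback, and a discrete maximum principle as the detector); the sub-cell averaging and conservative reconstruction of the actual ADER-DG scheme are omitted:

```python
import numpy as np

def step_unlimited(u, c):        # high-order candidate (Lax-Wendroff), c = CFL
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

def step_robust(u, c):           # robust low-order fallback (upwind, c > 0)
    return u - c * (u - np.roll(u, 1))

def a_posteriori_step(u, c, eps=1e-12):
    cand = step_unlimited(u, c)
    # Detection: discrete maximum principle w.r.t. the old neighborhood,
    # plus a check for floating point garbage.
    lo = np.minimum(np.minimum(u, np.roll(u, 1)), np.roll(u, -1)) - eps
    hi = np.maximum(np.maximum(u, np.roll(u, 1)), np.roll(u, -1)) + eps
    troubled = (cand < lo) | (cand > hi) | ~np.isfinite(cand)
    # Troubled cells discard the candidate and are recomputed from the OLD
    # solution with the robust scheme (cell-wise checkpoint and restart).
    cand[troubled] = step_robust(u, c)[troubled]
    return cand, int(troubled.sum())

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # discontinuous profile
for _ in range(100):
    u, n_troubled = a_posteriori_step(u, c=0.5)
print("troubled cells in last step:", n_troubled, " min/max:", u.min(), u.max())
```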

  12. Dynamical continuous time random Lévy flights

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Chen, Xiaosong

    2016-03-01

    The diffusive behavior of Lévy flights is studied within the framework of the dynamical continuous time random walk (DCTRW) method, with a nonlinear friction introduced in each step. In the DCTRW method, the Lévy random walker in each step moves according to Newton's second law, with the nonlinear friction f(v) = -γ0v - γ2v³ considered instead of Stokes friction. It is shown that after introducing the nonlinear friction the superdiffusive Lévy flight converges, exhibiting localization in the long-time limit, whereas for the Lévy index μ = 2 the motion remains Brownian.
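
    A sketch of one way to simulate such a walk (illustrative parameters; the intra-flight dynamics dv/dt = -γ0 v - γ2 v³ is a Bernoulli ODE and is integrated here via its exact solution, which keeps large Lévy kicks numerically stable):

```python
import numpy as np
rng = np.random.default_rng(2)

gamma0, gamma2, mu = 0.1, 0.05, 1.5     # friction coefficients, Levy index

def levy_kick():
    # Symmetric heavy-tailed initial velocity with tail exponent mu
    # (Pareto-based proxy for a Levy-stable draw).
    return rng.choice([-1.0, 1.0]) * (rng.pareto(mu) + 1.0)

def v_exact(v0, t):
    # Exact solution of dv/dt = -gamma0*v - gamma2*v**3 (Bernoulli equation).
    e = np.exp(-2.0 * gamma0 * t)
    return np.sign(v0) * np.sqrt(
        v0**2 * e / (1.0 + (gamma2 / gamma0) * v0**2 * (1.0 - e)))

def walk(n_steps, flight_time=1.0, n_sub=101):
    ts = np.linspace(0.0, flight_time, n_sub)
    x, xs = 0.0, []
    for _ in range(n_steps):
        v = v_exact(levy_kick(), ts)                        # velocity in flight
        x += np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(ts))   # trapezoid rule
        xs.append(x)
    return np.array(xs)

ens = np.array([walk(200) for _ in range(100)])
msd = (ens ** 2).mean(axis=0)
print("MSD at steps 50/100/200:", msd[49], msd[99], msd[199])
# The cubic friction effectively caps each flight's displacement (v forgets a
# large v0 quickly), so the superdiffusion of free Levy flights is suppressed.
```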

  13. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.

  14. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled, time-asymptotic alternating-direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, the static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
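
    For readers unfamiliar with the ADI building block, the following sketch applies the Peaceman-Rachford ADI splitting to the 2D heat equation (a deliberately simpler stand-in for the Navier-Stokes system of the report); each half-step is implicit in one direction only, so only tridiagonal systems need to be solved, and the scheme tolerates time steps far beyond the explicit diffusion limit:

```python
import numpy as np

n, nu, dt = 64, 1.0, 1e-2            # grid cells, diffusivity, time step
h = 1.0 / (n + 1)
r = nu * dt / (2.0 * h * h)          # half-step diffusion number

# 1D second-difference operator with homogeneous Dirichlet boundaries.
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))
A = np.eye(n) - r * D2               # implicit factor (tridiagonal)

u = np.zeros((n, n))
u[n // 4:3 * n // 4, n // 4:3 * n // 4] = 1.0   # initial hot square

# Note: dt here is ~170x the explicit limit h**2/(4*nu), yet the scheme is stable.
for _ in range(100):
    rhs = u + r * (u @ D2)                  # (I + r*D2_y) u^n, explicit in y
    u_half = np.linalg.solve(A, rhs)        # (I - r*D2_x) u* = rhs, implicit in x
    rhs = u_half + r * (D2 @ u_half)        # (I + r*D2_x) u*, explicit in x
    u = np.linalg.solve(A, rhs.T).T         # (I - r*D2_y) u^(n+1), implicit in y
print("max temperature after 100 ADI steps:", u.max())
```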

  15. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to carry out their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature, which would inform operators of the time available to execute procedure steps and warn them if they reach the time limit. Furthermore, such a feature would increase the awareness of operators about their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and an influence propagation rule. The predicted action time for each step is acquired based on similar accident cases and Support Vector Regression. The derived time is then processed and displayed on a CBP user interface.

  16. Photopigment quenching is Ca2+ dependent and controls response duration in salamander L-cone photoreceptors

    PubMed Central

    2010-01-01

    The time scale of the photoresponse in photoreceptor cells is set by the slowest of the steps that quench the light-induced activity of the phototransduction cascade. In vertebrate photoreceptor cells, this rate-limiting reaction is thought to be either shutoff of catalytic activity in the photopigment or shutoff of the pigment's effector, the transducin-GTP–phosphodiesterase complex. In suction pipette recordings from isolated salamander L-cones, we found that preventing changes in internal [Ca2+] delayed the recovery of the light response and prolonged the dominant time constant for recovery. Evidence that the Ca2+-sensitive step involved the pigment itself was provided by the observation that removal of Cl− from the pigment's anion-binding site accelerated the dominant time constant for response recovery. Collectively, these observations indicate that in L-cones, unlike amphibian rods where the dominant time constant is insensitive to [Ca2+], pigment quenching rate limits recovery and provides an additional mechanism for modulating the cone response during light adaptation. PMID:20231373

  17. A high-order positivity-preserving single-stage single-step method for the ideal magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Christlieb, Andrew J.; Feng, Xiao; Seal, David C.; Tang, Qi

    2016-07-01

    We propose a high-order finite difference weighted ENO (WENO) method for the ideal magnetohydrodynamics (MHD) equations. The proposed method is single-stage (i.e., it has no internal stages to store), single-step (i.e., it has no time history that needs to be stored), maintains a discrete divergence-free condition on the magnetic field, and has the capacity to preserve the positivity of the density and pressure. To accomplish this, we use a Taylor discretization of the Picard integral formulation (PIF) of the finite difference WENO method proposed in Christlieb et al. (2015) [23], where the focus is on a high-order discretization of the fluxes (as opposed to the conserved variables). We use the version where fluxes are expanded to third-order accuracy in time, and for the fluid variables space is discretized using the classical fifth-order finite difference WENO discretization. We use constrained transport in order to obtain divergence-free magnetic fields, which means that we simultaneously evolve the magnetohydrodynamic (that has an evolution equation for the magnetic field) and magnetic potential equations alongside each other, and set the magnetic field to be the (discrete) curl of the magnetic potential after each time step. In this work, we compute these derivatives to fourth-order accuracy. In order to retain a single-stage, single-step method, we develop a novel Lax-Wendroff discretization for the evolution of the magnetic potential, where we start with technology used for Hamilton-Jacobi equations in order to construct a non-oscillatory magnetic field. The end result is an algorithm that is similar to our previous work Christlieb et al. (2014) [8], but this time the time stepping is replaced through a Taylor method with the addition of a positivity-preserving limiter. Finally, positivity preservation is realized by introducing a parameterized flux limiter that considers a linear combination of high and low-order numerical fluxes. The choice of the free parameter is then given in such a way that the fluxes are limited towards the low-order solver until positivity is attained. Given the lack of additional degrees of freedom in the system, this positivity limiter lacks energy conservation where the limiter turns on. However, this ingredient can be dropped for problems where the pressure does not become negative. We present two and three dimensional numerical results for several standard test problems including a smooth Alfvén wave (to verify formal order of accuracy), shock tube problems (to test the shock-capturing ability of the scheme), Orszag-Tang, and cloud shock interactions. These results assert the robustness and verify the high-order of accuracy of the proposed scheme.
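
    The parameterized flux limiter at the end can be sketched compactly in 1D for a scalar conservation law (my own reduced stand-in: a fourth-order central flux blended toward Lax-Friedrichs, with a single global blending parameter found by bisection; the actual scheme limits per interface and works on the MHD system):

```python
import numpy as np

a = 1.0                                      # advection speed, f(u) = a*u

def lf_flux(u):                              # low-order Lax-Friedrichs flux at i+1/2
    uR = np.roll(u, -1)
    return 0.5 * a * (u + uR) - 0.5 * abs(a) * (uR - u)

def high_flux(u):                            # unlimited 4th-order central flux at i+1/2
    return a * (-np.roll(u, 1) + 7.0 * u
                + 7.0 * np.roll(u, -1) - np.roll(u, -2)) / 12.0

def update(u, F, c):                         # conservative update, c = a*dt/dx
    return u - c * (F - np.roll(F, 1))

def positivity_limited_step(u, c=0.4, eps=1e-13):
    Fl, Fh = lf_flux(u), high_flux(u)
    # Blend F = Fl + theta*(Fh - Fl) and shrink theta until the updated cell
    # averages are nonnegative; theta = 0 recovers the positivity-safe
    # low-order scheme, so the loop always terminates.
    theta = 1.0
    for _ in range(50):
        if update(u, Fl + theta * (Fh - Fl), c).min() >= -eps:
            break
        theta *= 0.5
    return update(u, Fl + theta * (Fh - Fl), c), theta

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where((x > 0.45) & (x < 0.55), 1.0, 0.0)     # nonnegative pulse
for _ in range(50):
    u, theta = positivity_limited_step(u)
print("min(u) =", u.min(), " final blending theta =", theta)
```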

  18. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
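
    To illustrate the flavor of super time-stepping, here is a sketch of the first-order relative RKL1 (with the standard Runge-Kutta-Legendre recurrence coefficients) applied to 1D diffusion; RKL2, used in the paper, adds terms that restore second-order accuracy but follows the same s-stage pattern:

```python
import numpy as np

def L_op(u, nu, h):              # parabolic operator: nu * u_xx, periodic
    return nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h**2

def rkl1_step(u, dt, s, nu, h):
    """One s-stage RKL1 super time-step; stable for dt up to
    (s**2 + s)/2 times the explicit diffusion limit."""
    w1 = 2.0 / (s * s + s)
    yjm2 = u
    yjm1 = u + w1 * dt * L_op(u, nu, h)              # stage j = 1
    for j in range(2, s + 1):
        mu_j, nu_j = (2.0 * j - 1.0) / j, (1.0 - j) / j
        yj = mu_j * yjm1 + nu_j * yjm2 + mu_j * w1 * dt * L_op(yjm1, nu, h)
        yjm2, yjm1 = yjm1, yj
    return yjm1

n, nu = 256, 1.0
h = 1.0 / n
u = np.sin(2.0 * np.pi * np.arange(n) * h)
dt_expl = 0.5 * h * h / nu                           # explicit stability limit
s = 9
dt = 0.8 * 0.5 * (s * s + s) * dt_expl               # one step = 36 explicit steps
u_new = rkl1_step(u, dt, s, nu, h)
print("numerical vs exact mode decay:",
      u_new.max(), np.exp(-nu * (2.0 * np.pi) ** 2 * dt))
```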

  19. 42 CFR 438.406 - Handling of grievances and appeals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... appeals are individuals— (i) Who were not involved in any previous level of review or decision-making; and... in writing. (The MCO or PIHP must inform the enrollee of the limited time available for this in the... and taking other procedural steps. This includes, but is not limited to, providing interpreter...

  20. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
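
    The counting argument behind the speedup is simple: if the observation step is a rational fraction dt = (p/q)·T of the Hamiltonian's period T, then t_k mod T takes only q distinct values, so only q distinct step propagators ever occur and they can be computed once and reused. The toy model below (a piecewise-constant periodic Hamiltonian, all parameters illustrative) shows the caching pattern only, not the paper's specific pattern-matching or time-shifting routines:

```python
import numpy as np
rng = np.random.default_rng(3)

# Toy periodic Hamiltonian: period T, piecewise constant over n_seg segments.
T, n_seg, dim = 1.0, 12, 4
Hs = [0.5 * (H + H.T) for H in
      (rng.standard_normal((dim, dim)) for _ in range(n_seg))]

def seg_prop(H, tau):                       # exp(-i*H*tau) via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * tau)) @ V.conj().T

def U(t0, t1):
    """Brute-force propagator from t0 to t1 (many matrix multiplications)."""
    seg = T / n_seg
    Uacc, t = np.eye(dim, dtype=complex), t0
    while t < t1 - 1e-12:
        k = int(np.floor(t / seg)) % n_seg  # active Hamiltonian segment
        t_next = min((np.floor(t / seg) + 1.0) * seg, t1)
        Uacc = seg_prop(Hs[k], t_next - t) @ Uacc
        t = t_next
    return Uacc

# Observation step dt = (p/q)*T: only q distinct step propagators exist.
p, q = 3, 7
dt = (p / q) * T
cache = [U((k * dt) % T, (k * dt) % T + dt) for k in range(q)]

psi = np.zeros(dim, complex); psi[0] = 1.0
for k in range(3 * q):                      # time evolution via cached lookups only
    psi = cache[k % q] @ psi
print("norm preserved:", abs(np.vdot(psi, psi)))
```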

  1. Enhanced sensitivity of self-assembled-monolayer-based SPR immunosensor for detection of benzaldehyde using a single-step multi-sandwich immunoassay.

    PubMed

    Gobi, K Vengatajalabathy; Matsumoto, Kiyoshi; Toko, Kiyoshi; Ikezaki, Hidekazu; Miura, Norio

    2007-04-01

    This paper describes the fabrication and sensing characteristics of a self-assembled monolayer (SAM)-based surface plasmon resonance (SPR) immunosensor for detection of benzaldehyde (BZ). The functional sensing surface was fabricated by the immobilization of a benzaldehyde-ovalbumin conjugate (BZ-OVA) on Au-thiolate SAMs containing carboxyl end groups. Covalent binding of BZ-OVA on the SAM was found to depend on the composition of the base SAM, and it improved considerably with the use of a mixed-monolayer strategy. Based on SPR angle measurements, the functional sensor surface is established as a compact monolayer of BZ-OVA bound on the mixed SAM. The BZ-OVA-bound sensor surface undergoes immunoaffinity binding with anti-benzaldehyde antibody (BZ-Ab) selectively. An indirect inhibition immunoassay principle has been applied, in which the analyte benzaldehyde solution was incubated with an optimal concentration of BZ-Ab for 5 min and injected over the sensor chip. Analyte benzaldehyde undergoes immunoreaction with BZ-Ab and makes it inactive for binding to BZ-OVA on the sensor chip. As a result, the SPR angle response decreases with an increase in the concentration of benzaldehyde. The fabricated immunosensor demonstrates a low detection limit (LDL) of 50 ppt (pg mL(-1)) with a response time of 5 min. Antibodies bound to the sensor chip during an immunoassay could be detached by a brief exposure to acidic pepsin. With this surface regeneration, reusability of the same sensor chip for as many as 30 determination cycles has been established. Sensitivity was enhanced further with the application of an additional single-step multi-sandwich immunoassay step, in which the BZ-Ab bound to the sensor chip was treated with a mixture of biotin-labeled secondary antibody, streptavidin and biotin-bovine serum albumin (Bio-BSA) conjugate. With this approach, the SPR sensor signal increased by ca. 12 times and the low detection limit improved to 5 ppt with a total response time of no more than ca. 10 min. (Figure caption: A single-step multi-sandwich immunoassay step increases the SPR sensor signal by ca. 12 times, affording a low detection limit for benzaldehyde of 5 ppt.)

  2. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates in dependence of IBM model parameter settings.
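
    One time step of such an IBM is short to write down. The sketch below (1D lattice, illustrative rates; the published model is more careful about the order and coupling of events) shows the binomial character of the fluctuations directly: every event count is a binomial draw, which is what drives the small-population effects described above.

```python
import numpy as np
rng = np.random.default_rng(4)

L_sites, K = 100, 50                 # lattice sites, local carrying capacity
pb, pd, pm = 0.08, 0.02, 0.25        # per-capita birth, death, move probabilities

def step(n):
    # Independent, simultaneous binomial events: density-dependent birth,
    # death, and unbiased nearest-neighbor movement (periodic boundaries).
    births = rng.binomial(n, pb * np.clip(1.0 - n / K, 0.0, 1.0))
    deaths = rng.binomial(n, pd)
    movers = rng.binomial(n - deaths, pm)
    left = rng.binomial(movers, 0.5)
    right = movers - left
    n = n + births - deaths - movers
    return n + np.roll(left, -1) + np.roll(right, 1)

n = np.zeros(L_sites, dtype=np.int64)
n[L_sites // 2] = 10                 # founder population at the center
for t in range(200):
    n = step(n)
print("total population:", n.sum(), " occupied sites:", int((n > 0).sum()))
```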

  3. Wafer-scale Thermodynamically Stable GaN Nanorods via Two-Step Self-Limiting Epitaxy for Optoelectronic Applications

    NASA Astrophysics Data System (ADS)

    Kum, Hyun; Seong, Han-Kyu; Lim, Wantae; Chun, Daemyung; Kim, Young-Il; Park, Youngsoo; Yoo, Geonwook

    2017-01-01

    We present a method of epitaxially growing thermodynamically stable gallium nitride (GaN) nanorods via metal-organic chemical vapor deposition (MOCVD) by invoking a two-step self-limited growth (TSSLG) mechanism. This allows for growth of nanorods with excellent geometrical uniformity and no visible extended defects over a 100 mm sapphire (Al2O3) wafer. An ex-situ study of the growth morphology as a function of growth time for the two self-limiting steps elucidates the growth dynamics, which show that the formation of an Ehrlich-Schwoebel barrier and preferential growth in the c-plane direction govern the growth process. This process allows monolithic formation of dimensionally uniform nanowires on templates with varying filling matrix patterns for a variety of novel electronic and optoelectronic applications. A color-tunable phosphor-free white-light LED with a coaxial architecture is fabricated as a demonstration of the applicability of these nanorods grown by TSSLG.

  4. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1×10^-20 moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations; it gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). These simple correlations can then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.

  5. One Step Quantum Key Distribution Based on EPR Entanglement.

    PubMed

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-06-30

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves maneuverability. Moreover, a security analysis is given: a simple type of eavesdropping attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol, which involves two steps, the proposed protocol does not need to store the qubit and involves only one step.

  6. Step training improves reaction time, gait and balance and reduces falls in older people: a systematic review and meta-analysis.

    PubMed

    Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R

    2017-04-01

    To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65, p<0.0001, I²=0%) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68, p<0.0001, I²=0%). Subgroup analyses stratified by reactive and volitional stepping interventions revealed a similar efficacy for rate of falls and proportion of fallers. A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, and timed up and go performance (p<0.05), but not measures of strength. The findings indicate that both reactive and volitional stepping interventions reduce falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery, but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. Registration: CRD42015017357.

  7. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  8. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential.
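
    The step-size rule itself is compact. The sketch below applies the quadratic criterion and a growth cap (the spirit of the tsr technique) to a FitzHugh-Nagumo oscillator, a simple stand-in for the cardiac models used in the paper; tolerances and parameters are illustrative, and the el-function refinement near extrema is omitted:

```python
import numpy as np

def solve_dt(dV_max, d1, d2, dt_min=1e-4, dt_max=0.5):
    """Smallest positive dt with |V'|*dt + 0.5*|V''|*dt**2 = dV_max."""
    a, b = 0.5 * abs(d2), abs(d1)
    if a > 0.0:
        dt = 2.0 * dV_max / (b + np.sqrt(b * b + 4.0 * a * dV_max))
    else:
        dt = dV_max / b if b > 0.0 else dt_max
    return float(np.clip(dt, dt_min, dt_max))

def f(y):                           # FitzHugh-Nagumo stand-in for an AP model
    v, w = y
    return np.array([v - v**3 / 3.0 - w + 0.5, 0.08 * (v + 0.7 - 0.8 * w)])

y, t, dt = np.array([-1.0, 1.0]), 0.0, 1e-3
d_prev = f(y)
n_steps = 0
while t < 100.0:
    d = f(y)
    d2 = (d[0] - d_prev[0]) / dt    # crude finite-difference estimate of V''
    dt = min(solve_dt(0.02, d[0], d2), 1.5 * dt)   # tsr-style growth cap
    y, d_prev, t = y + dt * d, d, t + dt           # explicit update
    n_steps += 1
print("adaptive steps taken:", n_steps, " final state:", np.round(y, 3))
```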

  9. Limit theorems for Lévy walks in d dimensions: rare and bulk fluctuations

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Denisov, Sergey; Zaburdaev, Vasily; Barkai, Eli

    2017-04-01

    We consider super-diffusive Lévy walks in d ≥ 2 dimensions when the duration of a single step, i.e. a ballistic motion performed by a walker, is governed by a power-law tailed distribution of infinite variance and finite mean. We demonstrate that the probability density function (PDF) of the coordinate of the random walker has two different scaling limits at large times. One limit describes the bulk of the PDF. It is the d-dimensional generalization of the one-dimensional Lévy distribution and is the counterpart of the central limit theorem (CLT) for random walks with finite dispersion. In contrast with the one-dimensional Lévy distribution and the CLT, this distribution does not have a universal shape. The PDF reflects the anisotropy of the single-step statistics however large the time is. The other scaling limit, the so-called ‘infinite density’, describes the tail of the PDF which determines the second (dispersion) and higher moments of the PDF. This limit repeats the angular structure of the PDF of velocity in one step. A typical realization of the walk consists of anomalous diffusive motion (described by the anisotropic d-dimensional Lévy distribution) interspersed with long ballistic flights (described by the infinite density). The long flights are rare but due to them the coordinate increases so much that their contribution determines the dispersion. We illustrate the concept by considering two types of Lévy walks, with isotropic and anisotropic distributions of velocities. Furthermore, we show that for isotropic but otherwise arbitrary velocity distributions the d-dimensional process can be reduced to a one-dimensional Lévy walk. We briefly discuss the consequences of non-universality for the d > 1 dimensional fractional diffusion equation, in particular the non-uniqueness of the fractional Laplacian.

  10. Solvent and viscosity effects on the rate-limiting product release step of glucoamylase during maltose hydrolysis.

    PubMed

    Sierks, M R; Sico, C; Zaw, M

    1997-01-01

    Release of product from the active site is the rate-limiting step in a number of enzymatic reactions, including maltose hydrolysis by glucoamylase (GA). With GA, an enzymatic conformational change has been associated with the product release step. Solvent characteristics such as viscosity can strongly influence protein conformational changes. Here we show that the rate-limiting step of GA has a rather complex dependence on solvent characteristics. Seven different cosolvents were added to the GA/maltose reaction solution. Five of the cosolvents, all having an ethylene glycol base, resulted in an increase in activity at low concentration of cosolvent and variable decreases in activity at higher concentrations. The increase in enzyme activity was dependent on the polymer length of the cosolvent; the longer the polymer, the lower the concentration needed. The maximum increase in catalytic activity at 45 degrees C (40-45%) was obtained with the three longest polymers (degree of polymerization from 200 to 8000). A further increase in activity to 60-65% was obtained at 60 degrees C. The linear relationship between ln(k_cat) and (viscosity)² obtained with all the cosolvents provides further evidence that product release is the rate-limiting step in the GA catalytic mechanism. A substantial increase in the turnover rate of GA by addition of relatively small amounts of a cosolvent has potential applications for the food industry, where high-fructose corn syrup (HFCS) is one of the primary products produced with GA. Since maltodextrin hydrolysis by GA is by far the slowest step in the production of HFCS, increasing the catalytic rate of GA can substantially reduce the process time.

  11. Reliability of fitness tests using methods and time periods common in sport and occupational management.

    PubMed

    Burnstein, Bryan D; Steele, Russell J; Shrier, Ian

    2011-01-01

    Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for the 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.

  12. Step climbing capacity in patients with pulmonary hypertension.

    PubMed

    Fox, Benjamin Daniel; Langleben, David; Hirsch, Andrew; Boutet, Kim; Shimony, Avi

    2013-01-01

    Patients with pulmonary hypertension (PH) typically have exercise intolerance and limitation in climbing steps. Our aim was to explore the exercise physiology of step climbing in PH patients using a laboratory-based step test. We built a step oximetry system from an 'aerobics' step equipped with pressure sensors and a pulse oximeter linked to a computer. Subjects mounted and dismounted from the step until their maximal exercise capacity or 200 steps was reached. Step count, SpO(2) and heart rate were monitored throughout exercise and recovery, and from these we derived indices of exercise performance, desaturation and heart rate. A 6-min walk test and serum NT-proBrain Natriuretic Peptide (BNP) level were also measured, and lung function tests and hemodynamic parameters were extracted from the medical record. Eighty-six subjects [52 pulmonary arterial hypertension (PAH), 14 chronic thromboembolic PH (CTEPH), 20 controls] were recruited. Exercise performance (climbing time, height gained, velocity, energy expenditure, work rate and climbing index) on the step test was significantly worse with PH and/or worsening WHO functional class (ANOVA, p < 0.001). There was a good correlation between exercise performance on the step test (climb index) and the 6-min walking distance (r = -0.77, p < 0.0001). The saturation deviation (mean of SpO(2) values <95%) on the step test correlated with the diffusion capacity of the lung (ρ = -0.49, p = 0.001). No correlations were found between the step test indices and other lung function tests, hemodynamic parameters or NT-proBNP levels. Patients with PAH/CTEPH have significant limitation in step climbing ability that correlates with functional class and 6-min walking distance; this is a significant impediment to their daily activities.

  13. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit; that is, they show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
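
    Steps 2-4 can be made concrete with a standard individuals (XmR) control chart; the sketch below uses made-up monthly HAI rates, estimates sigma from the average moving range (the usual d2 = 1.128 constant), and only declares the data benchmark-ready when no point shows special-cause variation:

```python
import numpy as np

# Illustrative monthly HAI rates (per 1000 patient-days); not real data.
rates = np.array([3.1, 2.8, 3.4, 2.9, 3.3, 3.0, 2.7, 3.2, 2.9, 3.1, 3.0, 2.8])

mean = rates.mean()
mr = np.abs(np.diff(rates)).mean()           # average moving range
sigma = mr / 1.128                           # d2 constant for subgroups of 2
ucl, lcl = mean + 3.0 * sigma, max(mean - 3.0 * sigma, 0.0)

special_cause = (rates > ucl) | (rates < lcl)
if special_cause.any():
    # Step 3 fails: control variation with an action plan before benchmarking.
    print("special-cause variation at months:", np.nonzero(special_cause)[0] + 1)
else:
    benchmark = 2.5                          # external benchmark (illustrative)
    print(f"process stable (mean {mean:.2f}, limits [{lcl:.2f}, {ucl:.2f}]);"
          f" ready to benchmark against {benchmark}")
```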

  14. Step Permeability on the Pt(111) Surface

    NASA Astrophysics Data System (ADS)

    Altman, Michael

    2005-03-01

    Surface morphology will be affected, or even dictated, by kinetic limitations that may be present during growth. Asymmetric step attachment is recognized to be an important and possibly common cause of morphological growth instabilities. However, the impact of this kinetic limitation on growth morphology may be hindered by other factors such as the rate limiting step and step permeability. This strongly motivates experimental measurements of these quantities in real systems. Using low energy electron microscopy, we have measured step flow velocities in growth on the Pt(111) surface. The dependence of step velocity upon adjacent terrace width clearly shows evidence of asymmetric step attachment and step permeability. Step velocity is modeled by solving the diffusion equation simultaneously on several adjacent terraces subject to boundary conditions at intervening steps that include asymmetric step attachment and step permeability. This analysis allows a quantitative evaluation of step permeability and the kinetic length, which characterizes the rate limiting step continuously between diffusion and attachment-detachment limited regimes. This work provides information that is greatly needed to set physical bounds on the parameters that are used in theoretical treatments of growth. The observation that steps are permeable even on a simple metal surface should also stimulate more experimental measurements and theoretical treatments of this effect.

  15. Peripheral neuropathy, decreased muscle strength and obesity are strongly associated with walking in persons with type 2 diabetes without manifest mobility limitations.

    PubMed

    van Sloten, Thomas T; Savelberg, Hans H C M; Duimel-Peeters, Inge G P; Meijer, Kenneth; Henry, Ronald M A; Stehouwer, Coen D A; Schaper, Nicolaas C

    2011-01-01

    We evaluated the associations of diabetic complications and underlying pathology with daily walking activity in type 2 diabetic patients without manifest mobility limitations. 100 persons with type 2 diabetes (mean age 64.5 ± 9.4 years) were studied; persons with manifest mobility limitations were excluded. Possible determinants measured: peripheral neuropathy, neuropathic pain, peripheral arterial disease, cardiovascular disease, decreased muscle strength (handgrip strength), BMI, depression, falls and fear of falling. Walking activity was measured during one week with a pedometer. Functional capacity was measured with the 6 min walk test, the timed "up and go" test and a stair climbing test. The prevalence of neuropathy (40%) and obesity (53%) was high. Persons took a median of 6429 steps/day. In multivariate regression analysis, adjusted for age and sex, neuropathy was associated with a reduction of 1967 steps/day, decreased muscle strength with 1782 steps/day, and an increase in BMI of 1 kg/m(2) with a decrease of 210 steps/day (all p<0.05). Decreased muscle strength and BMI, but not neuropathy, were associated with the outcomes of functional capacity tests in multiple regression analysis. Peripheral neuropathy, decreased muscle strength and obesity are strongly associated with walking in persons with type 2 diabetes without manifest mobility limitations.

  16. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
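
    A minimal sketch of the backstepping idea on a synthetic reservoir (the hydrograph, rating curve and storage geometry below are invented for illustration and are not the paper's test cases): the RK4 step is accepted only if every stage and the result stay inside the domain of the rating curve, and is otherwise retried with half the step.

```python
import numpy as np

h_min, h_max = 0.0, 5.0                       # rating-curve domain [m]

def inflow(t):                                # synthetic flood hydrograph [m3/s]
    return 50.0 * np.exp(-((t - 3600.0) / 900.0) ** 2)

def outflow(h):                               # weir-type rating curve [m3/s]
    if not (h_min <= h <= h_max):
        raise ValueError("level left the rating-curve domain")
    return 12.0 * h ** 1.5

def rhs(t, h):                                # dh/dt = (I - Q) / A(h)
    area = 1e4 * (1.0 + 0.2 * h)              # level-dependent surface area [m2]
    return (inflow(t) - outflow(h)) / area

def rk4_backstep(t, h, dt):
    # Retry with dt/2 whenever a stage or the result leaves the valid domain.
    while True:
        try:
            k1 = rhs(t, h)
            k2 = rhs(t + dt / 2.0, h + dt / 2.0 * k1)
            k3 = rhs(t + dt / 2.0, h + dt / 2.0 * k2)
            k4 = rhs(t + dt, h + dt * k3)
            h_new = h + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
            if h_min <= h_new <= h_max:
                return t + dt, h_new
        except ValueError:
            pass
        dt *= 0.5                             # backstep: halve and retry

t, h = 0.0, 0.1
peak = h
while t < 7200.0:
    t, h = rk4_backstep(t, h, 600.0)          # nominal 10-minute step
    peak = max(peak, h)
print(f"peak level {peak:.2f} m, final level {h:.2f} m")
```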

  17. Four decades of implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B.

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential for maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  18. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential for maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  19. Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets.

    PubMed

    Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C

    2016-05-01

    A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. The aim was to evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e., approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young adults, the older adults slowed significantly in the no target/obstacle trials relative to the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced the velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Accelerated simulations of aromatic polymers: application to polyether ether ketone (PEEK)

    NASA Astrophysics Data System (ADS)

    Broadbent, Richard J.; Spencer, James S.; Mostofi, Arash A.; Sutton, Adrian P.

    2014-10-01

    For aromatic polymers, the out-of-plane oscillations of aromatic groups limit the maximum accessible time step in a molecular dynamics simulation. We present a systematic approach to removing such high-frequency oscillations from planar groups along aromatic polymer backbones, while preserving the dynamical properties of the system. We consider, as an example, the industrially important polymer, polyether ether ketone (PEEK), and show that this coarse graining technique maintains excellent agreement with the fully flexible all-atom and all-atom rigid bond models whilst allowing the time step to increase fivefold to 5 fs.

  1. Automatic localization of cochlear implant electrodes in CTs with a limited intensity range

    NASA Astrophysics Data System (ADS)

    Zhao, Yiyuan; Dawant, Benoit M.; Noble, Jack H.

    2017-02-01

    Cochlear implants (CIs) are neural prosthetics for treating severe-to-profound hearing loss. Our group has developed an image-guided cochlear implant programming (IGCIP) system that uses image analysis techniques to recommend patient-specific CI processor settings to improve hearing outcomes. One crucial step in IGCIP is the localization of CI electrodes in post-implantation CTs. Manual localization of electrodes requires time and expertise. To automate this process, our group has proposed automatic techniques that have been validated on CTs acquired with scanners that produce images with an extended range of intensity values. However, many clinical CTs are acquired with a limited intensity range, which complicates the electrode localization process. In this work, we present a pre-processing step for CTs with a limited intensity range and extend the methods we proposed for full intensity range CTs to localize CI electrodes in CTs with a limited intensity range. We evaluate our method on CTs of 20 subjects implanted with CI arrays produced by different manufacturers. Our method achieves a mean localization error of 0.21 mm. This indicates our method is robust for automatic localization of CI electrodes in different types of CTs, which represents a crucial step for translating IGCIP from the research laboratory to clinical use.

  2. Langevin dynamics in inhomogeneous media: Re-examining the Itô-Stratonovich dilemma

    NASA Astrophysics Data System (ADS)

    Farago, Oded; Grønbech-Jensen, Niels

    2014-01-01

    The diffusive dynamics of a particle in a medium with space-dependent friction coefficient is studied within the framework of the inertial Langevin equation. In this description, the ambiguous interpretation of the stochastic integral, known as the Itô-Stratonovich dilemma, is avoided since all interpretations converge to the same solution in the limit of small time steps. We use a newly developed method for Langevin simulations to measure the probability distribution of a particle diffusing in a flat potential. Our results reveal that both the Itô and Stratonovich interpretations converge very slowly to the uniform equilibrium distribution for vanishing time step sizes. Three other conventions exhibit significantly improved accuracy: (i) the "isothermal" (Hänggi) convention, (ii) the Stratonovich convention corrected by a drift term, and (iii) a newly proposed convention employing two different effective friction coefficients representing two different averages of the friction function during the time step. We argue that the most physically accurate dynamical description is provided by the third convention, in which the particle experiences a drift originating from the dissipation instead of the fluctuation term. This feature is directly related to the fact that the drift is a result of an inertial effect that cannot be well understood in the Brownian, overdamped limit of the Langevin equation.
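
    The crux of the dilemma is where within a discrete step the space-dependent friction is evaluated. The sketch below is a hypothetical illustration (not the authors' integrator) contrasting start-of-step (Itô-like) and midpoint (Stratonovich-like) evaluation in a plain Euler-Maruyama discretization of the inertial Langevin equation; as the abstract notes, in the underdamped description all such conventions converge to the same dynamics as the time step vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, v, dt, m, kT, alpha, convention="ito"):
    """One Euler-Maruyama step of the inertial Langevin equation
    m dv = -alpha(x) v dt + sqrt(2 kT alpha(x)) dW,  dx = v dt,
    with the space-dependent friction alpha(x) evaluated according to
    the chosen convention -- the essence of the Ito-Stratonovich dilemma."""
    if convention == "ito":
        a = alpha(x)                  # start-of-step evaluation
    elif convention == "stratonovich":
        a = alpha(x + 0.5 * v * dt)   # midpoint evaluation
    else:
        raise ValueError(convention)
    noise = np.sqrt(2.0 * kT * a * dt) * rng.standard_normal()
    v = v + (-a * v * dt + noise) / m
    x = x + v * dt
    return x, v

# Hypothetical friction profile increasing away from the origin
x, v = 0.0, 0.0
for _ in range(1000):
    x, v = langevin_step(x, v, dt=1e-3, m=1.0, kT=1.0,
                         alpha=lambda y: 1.0 + 0.5 * abs(y))
```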

  3. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
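
    For reference, the agreement statistics used in this record are straightforward to compute. The snippet below is a generic illustration (with made-up numbers, not the study's data) of the Bland-Altman bias and 95% limits of agreement between two measurement systems.

```python
import numpy as np

def bland_altman(system_a, system_b):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements from two systems (e.g., per-trial walking speeds)."""
    diff = np.asarray(system_a) - np.asarray(system_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical walking speeds (m/s) from a Kinect set-up and a reference system
bias, (lo, hi) = bland_altman([1.21, 1.35, 1.18, 1.42],
                              [1.19, 1.36, 1.17, 1.40])
```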

  4. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments

    PubMed Central

    Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498

  5. A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction

    DTIC Science & Technology

    2010-08-09

    more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit, if one...converges, one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several

  6. Multigrid for hypersonic viscous two- and three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.

    1991-01-01

    The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.
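
    The implicit residual smoothing referred to here is, in its standard constant-coefficient form, a tridiagonal solve applied to the Runge-Kutta residuals. The sketch below shows only that basic 1D form; the paper's contribution is variable coefficients, which this illustration does not attempt to reproduce.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_residual_smoothing(res, eps):
    """Constant-coefficient implicit residual smoothing:
    -eps*Rs[i-1] + (1 + 2*eps)*Rs[i] - eps*Rs[i+1] = R[i],
    solved as a tridiagonal system. Smoothing the residuals relaxes the
    explicit CFL limit of the Runge-Kutta scheme."""
    n = len(res)
    ab = np.zeros((3, n))
    ab[0, 1:] = -eps            # superdiagonal
    ab[1, :] = 1.0 + 2.0 * eps  # main diagonal
    ab[2, :-1] = -eps           # subdiagonal
    return solve_banded((1, 1), ab, res)

smoothed = implicit_residual_smoothing(np.random.rand(64), eps=0.6)
```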

  7. One Step Quantum Key Distribution Based on EPR Entanglement

    PubMed Central

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-01-01

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves maneuverability. Moreover, a security analysis is given: a simple type of eavesdropping attack would introduce an error rate of at least 46.875%. Compared with the “Ping-pong” protocol involving two steps, the proposed protocol does not need to store the qubit and only involves one step. PMID:27357865

  8. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2013-01-08

    Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
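
    The measurement idea in this record, excite with a limited-duration sum-of-sines and read off magnitude and phase at each embedded frequency, can be illustrated generically. The snippet below is a toy sketch under stated assumptions (octave-spaced frequencies, a record one period of the lowest frequency long, single-bin correlation), not the patented apparatus.

```python
import numpy as np

def sum_of_sines(freqs, fs, duration):
    """Limited-duration sum-of-sines excitation signal."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return t, sum(np.sin(2 * np.pi * f * t) for f in freqs)

def magnitude_phase(signal, t, f):
    """Single-frequency correlation (one DFT bin) recovering the
    magnitude and phase of the signal component at frequency f."""
    s = np.sin(2 * np.pi * f * t)
    c = np.cos(2 * np.pi * f * t)
    a = 2.0 * np.mean(signal * c)
    b = 2.0 * np.mean(signal * s)
    return np.hypot(a, b), np.arctan2(a, b)

f0 = 0.1                                   # lowest frequency (Hz)
freqs = [f0 * 2 ** k for k in range(6)]    # octave steps
t, u = sum_of_sines(freqs, fs=100.0, duration=1.0 / f0)  # one period of f0
mag, phase = magnitude_phase(u, t, freqs[2])
```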

  9. Overlapping MALDI-Mass Spectrometry Imaging for In-Parallel MS and MS/MS Data Acquisition without Sacrificing Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Hansen, Rebecca L.; Lee, Young Jin

    2017-09-01

    Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution. [Figure not available: see fulltext.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; Nichols III, A L

    The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% with a Prout-Tompkins cookoff model.
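
    For orientation, the Prout-Tompkins model named here is an autocatalytic rate law, in its extended form dα/dt = Z e^(-E/RT) (1-α)^n α^m. The sketch below integrates it at constant temperature with made-up parameters; it is illustrative only and unrelated to the calibrated ALE3D kinetics in the record.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

def prout_tompkins(t, y, Z, E, T, n, m):
    """Extended Prout-Tompkins (autocatalytic) rate law:
    d(alpha)/dt = Z * exp(-E/(R*T)) * (1 - alpha)**n * alpha**m."""
    alpha = y[0]
    return [Z * np.exp(-E / (R * T)) * (1.0 - alpha) ** n * alpha ** m]

# Hypothetical parameters; a small seed alpha0 > 0 is needed because the
# autocatalytic term vanishes identically at alpha = 0.
sol = solve_ivp(prout_tompkins, (0.0, 3600.0), [1e-6],
                args=(1e12, 1.5e5, 500.0, 1.0, 0.5), method="LSODA")
```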

  11. Elderly Fallers Enhance Dynamic Stability Through Anticipatory Postural Adjustments during a Choice Stepping Reaction Time

    PubMed Central

    Tisserand, Romain; Robert, Thomas; Chabaud, Pascal; Bonnefoy, Marc; Chèze, Laurence

    2016-01-01

    In the case of disequilibrium, the capacity to step quickly is critical to avoid falling in elderly. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of anticipated postural adjustments (APA) that elderly F develop in a stepping context and their consequences on the dynamic stability. Forty-four community-dwelling elderly subjects (20 F and 24 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions instead of the objective of the task. Such a choice in balance strategy probably comes from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of fall. PMID:27965561

  12. Time limited field of regard search

    NASA Astrophysics Data System (ADS)

    Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho

    2005-05-01

    Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.

  13. A parallelization method for time periodic steady state in simulation of radio frequency sheath dynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun

    2017-10-01

    In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high density plasma discharges. Also, the semi-implicit method is a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low temperature plasma discharges has remained a considerable challenge. In particular, parallelization in time has been difficult for time periodic steady state problems, such as capacitively coupled plasma discharges and rf sheath dynamics, because values of plasma parameters from the previous time step are used to calculate new values at each time step. Therefore, we present a parallelization method for time periodic steady state problems that uses period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted to describe rf sheath dynamics. The result shows that speedup can be achieved by using a multithreading method.

  14. New Reduced Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2004-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes being developed at Glenn. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two); the switch between the two is based on a water concentration threshold of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure, and temperature (T4). These simple correlations can then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure, and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.

  15. Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.

    PubMed

    Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R

    2018-01-05

    This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. En masse/miniscrew combination showed a statistically significant standard mean difference regarding anchorage preservation - 2.55 mm (95% CI - 2.99 to - 2.11) and the amount of upper incisor retraction - 0.38 mm (95% CI - 0.70 to - 0.06) when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.

  16. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
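
    The super-time-stepping scheme cited in this record (Alexiades et al., 1996) replaces N equal explicit steps with N Chebyshev-derived substeps whose sum can approach N^2 times the explicit diffusion stability limit. The sketch below uses the commonly cited form of the substep formula; treat it as an assumption to be checked against the original reference.

```python
import numpy as np

def sts_substeps(dt_expl, n_sub, nu):
    """Super-time-stepping substep lengths (after Alexiades et al. 1996):
    tau_j = dt_expl / ((nu - 1)*cos((2j - 1)*pi/(2N)) + nu + 1), j = 1..N.
    Stability is enforced over the whole superstep rather than each
    substep; as nu -> 0 the superstep sum approaches N**2 * dt_expl."""
    j = np.arange(1, n_sub + 1)
    theta = (2.0 * j - 1.0) * np.pi / (2.0 * n_sub)
    return dt_expl / ((nu - 1.0) * np.cos(theta) + nu + 1.0)

taus = sts_substeps(dt_expl=1e-3, n_sub=10, nu=0.01)
superstep = taus.sum()  # ~48 explicit steps covered here, vs 10 equal steps
```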

  17. Aqueous solvation from the water perspective.

    PubMed

    Ahmed, Saima; Pasti, Andrea; Fernández-Terán, Ricardo J; Ciardi, Gustavo; Shalit, Andrey; Hamm, Peter

    2018-06-21

    The response of water re-solvating a charge-transfer dye (deprotonated Coumarin 343) after photoexcitation has been measured by means of transient THz spectroscopy. Two steps of increasing THz absorption are observed: a first ∼10 ps step on the time scale of Debye relaxation of bulk water, and a much slower step on a 3.9 ns time scale, the latter reflecting heating of the bulk solution upon electronic relaxation of the dye molecules from the S1 back into the S0 state. As an additional reference experiment, the hydroxyl vibration of water has been excited directly by a short IR pulse, establishing that the THz signal measures an elevated temperature within ∼1 ps. This result shows that the first step upon dye excitation (10 ps) is not limited by the response time of the THz signal; rather, it reflects the reorientation of water molecules in the solvation layer. The apparent discrepancy between the relatively slow reorientation time and the general notion that water is among the fastest solvents, with a solvation time in the sub-picosecond regime, is discussed. Furthermore, non-equilibrium molecular dynamics simulations have been performed, revealing close-to-quantitative agreement with experiment, which allows one to disentangle the contribution of heating to the overall THz response from that of water orientation.

  18. MRO DKF Post-Processing Tool

    NASA Technical Reports Server (NTRS)

    Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat

    2008-01-01

    This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling resulting in incorrect DKF generation that is then cleaned up in post-processing.

  19. Cadence Feedback With ECE PEDO to Monitor Physical Activity Intensity: A Pilot Study.

    PubMed

    Ardic, Fusun; Göcer, Esra

    2016-03-01

    The purpose of this study was to examine the monitoring capabilities of the equipment for clever exercise pedometer (ECE PEDO), which provides audible feedback when the person exceeds the upper and lower limits of the target step numbers per minute, and to compare step counts with the Yamax SW-200 (YX200) as the criterion pedometer. A total of 30 adult volunteers (15 males and 15 females) were classified as normal weight (n = 10), overweight (n = 10), and obese (n = 10). After a submaximal exercise test on a treadmill, the moderate intensity for walking was determined by using the YX200 pedometer, and then the number of steps taken in a minute was measured. Lower and upper limits of steps per minute (cadence) were recorded in the ECE PEDO, which provides audible feedback when the person's walking speed goes out of these limits. Volunteers walked for 30 minutes in the individual step count range with the ECE PEDO and YX200 pedometer attached on both sides of a waist belt in the same session. Step counts of the volunteers were recorded. Wilcoxon, Spearman correlation, and Bland-Altman analyses were performed to show the relationship and agreement between the results of the 2 devices. Subjects took an average of 3511 ± 426 and 3493 ± 399 steps during 30 minutes with the ECE PEDO and the criterion pedometer, respectively. The approximately 3500 steps taken with the ECE PEDO indicate that this pedometer is capable of identifying the steps per minute needed to meet a moderate intensity of physical activity. There was a strong correlation between the step counts of both devices (P < 0.001, r = 0.96). Correlations across all three BMI categories and both sexes remained consistently high, ranging from 0.92 to 0.95. There was a high level of agreement between the ECE PEDO and the YX200 pedometer in the Bland-Altman analysis. Although both devices showed a strong similarity in counting steps, the ECE PEDO provides monitoring of intensity such that a person can walk in a specified time with a desired speed.

  20. Cadence Feedback With ECE PEDO to Monitor Physical Activity Intensity

    PubMed Central

    Ardic, Fusun; Göcer, Esra

    2016-01-01

    The purpose of this study was to examine the monitoring capabilities of the equipment for clever exercise pedometer (ECE PEDO), which provides audible feedback when the person exceeds the upper and lower limits of the target step numbers per minute, and to compare step counts with the Yamax SW-200 (YX200) as the criterion pedometer. A total of 30 adult volunteers (15 males and 15 females) were classified as normal weight (n = 10), overweight (n = 10), and obese (n = 10). After a submaximal exercise test on a treadmill, the moderate intensity for walking was determined by using the YX200 pedometer, and then the number of steps taken in a minute was measured. Lower and upper limits of steps per minute (cadence) were recorded in the ECE PEDO, which provides audible feedback when the person's walking speed goes out of these limits. Volunteers walked for 30 minutes in the individual step count range with the ECE PEDO and YX200 pedometer attached on both sides of a waist belt in the same session. Step counts of the volunteers were recorded. Wilcoxon, Spearman correlation, and Bland-Altman analyses were performed to show the relationship and agreement between the results of the 2 devices. Subjects took an average of 3511 ± 426 and 3493 ± 399 steps during 30 minutes with the ECE PEDO and the criterion pedometer, respectively. The approximately 3500 steps taken with the ECE PEDO indicate that this pedometer is capable of identifying the steps per minute needed to meet a moderate intensity of physical activity. There was a strong correlation between the step counts of both devices (P < 0.001, r = 0.96). Correlations across all three BMI categories and both sexes remained consistently high, ranging from 0.92 to 0.95. There was a high level of agreement between the ECE PEDO and the YX200 pedometer in the Bland-Altman analysis. Although both devices showed a strong similarity in counting steps, the ECE PEDO provides monitoring of intensity such that a person can walk in a specified time with a desired speed. PMID:26962822

  1. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  2. Towards a Passive Low-Cost In-Home Gait Assessment System for Older Adults

    PubMed Central

    Wang, Fang; Stone, Erik; Skubic, Marjorie; Keller, James M.; Abbott, Carmen; Rantz, Marilyn

    2013-01-01

    In this paper, we propose a webcam-based system for in-home gait assessment of older adults. A methodology has been developed to extract gait parameters including walking speed, step time and step length from a three-dimensional voxel reconstruction, which is built from two calibrated webcam views. The gait parameters are validated with a GAITRite mat and a Vicon motion capture system in the lab with 13 participants and 44 tests, and again with GAITRite for 8 older adults in senior housing. An excellent agreement with intra-class correlation coefficients of 0.99 and repeatability coefficients between 0.7% and 6.6% was found for walking speed, step time and step length given the limitation of frame rate and voxel resolution. The system was further tested with 10 seniors in a scripted scenario representing everyday activities in an unstructured environment. The system results demonstrate the capability of being used as a daily gait assessment tool for fall risk assessment and other medical applications. Furthermore, we found that residents displayed different gait patterns during their clinical GAITRite tests compared to the realistic scenario, namely a mean increase of 21% in walking speed, a mean decrease of 12% in step time, and a mean increase of 6% in step length. These findings provide support for continuous gait assessment in the home for capturing habitual gait. PMID:24235111

  3. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
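
    The conclusion that the highest-frequency mode sets the maximum stable time step can be made concrete: for an explicit central-difference-type integrator the bound is roughly dt <= 2/omega_max. The sketch below (a hypothetical lumped shaft model, not the paper's finite element rotor) extracts omega_max from the generalized eigenproblem and applies that bound; Runge-Kutta, Milne, and Adams schemes differ from it by O(1) factors.

```python
import numpy as np
from scipy.linalg import eigh

def max_stable_dt(K, M, safety=0.9):
    """Explicit-integration time-step bound from the highest natural
    frequency of the discretized model: dt_max ~ 2/omega_max
    (central-difference stability limit)."""
    omega_sq = eigh(K, M, eigvals_only=True)  # generalized eigenvalues K x = w^2 M x
    omega_max = np.sqrt(omega_sq.max())
    return safety * 2.0 / omega_max

# Hypothetical 3-mass shaft model: lumped masses and a stiffness chain
M = np.diag([2.0, 2.0, 2.0])
K = np.array([[ 2e6, -1e6,  0.0],
              [-1e6,  2e6, -1e6],
              [ 0.0, -1e6,  2e6]])
dt = max_stable_dt(K, M)
```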

  4. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
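
    The standard multiple time step construction underlying such schemes is reversible RESPA: slow forces are applied as impulses at the outer step while fast forces are integrated on an inner velocity-Verlet loop. A generic RESPA step is sketched below; the fragment-decomposition and range-separation splittings of the ab initio potential described in the abstract are not reproduced here.

```python
def respa_step(x, v, m, dt_outer, n_inner, f_slow, f_fast):
    """One r-RESPA step: the slow force is applied as a half-kick at the
    outer time step, while the fast force is integrated with n_inner
    velocity-Verlet substeps of size dt_outer / n_inner."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m   # slow half-kick
    for _ in range(n_inner):                 # fast inner loop
        v = v + 0.5 * dt_inner * f_fast(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m   # slow half-kick
    return x, v
```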

  5. Evaluation of a continuous-rotation, high-speed scanning protocol for micro-computed tomography.

    PubMed

    Kerl, Hans Ulrich; Isaza, Cristina T; Boll, Hanne; Schambach, Sebastian J; Nolte, Ingo S; Groden, Christoph; Brockmann, Marc A

    2011-01-01

    Micro-computed tomography is used frequently in preclinical in vivo research. Limiting factors are radiation dose and long scan times. The purpose of the study was to compare a standard step-and-shoot to a continuous-rotation, high-speed scanning protocol. Micro-computed tomography of a lead grid phantom and a rat femur was performed using a step-and-shoot and a continuous-rotation protocol. Detail discriminability and image quality were assessed by 3 radiologists. The signal-to-noise ratio and the modulation transfer function were calculated, and volumetric analyses of the femur were performed. The radiation dose of the scan protocols was measured using thermoluminescence dosimeters. The 40-second continuous-rotation protocol allowed a detail discriminability comparable to the step-and-shoot protocol at significantly lower radiation doses. No marked differences in volumetric or qualitative analyses were observed. Continuous-rotation micro-computed tomography significantly reduces scanning time and radiation dose without relevantly reducing image quality compared with a normal step-and-shoot protocol.

  6. Super-sensitive time-resolved fluoroimmunoassay for thyroid-stimulating hormone utilizing europium(III) nanoparticle labels achieved by protein corona stabilization, short binding time, and serum preprocessing.

    PubMed

    Näreoja, Tuomas; Rosenholm, Jessica M; Lamminmäki, Urpo; Hänninen, Pekka E

    2017-05-01

    Thyrotropin or thyroid-stimulating hormone (TSH) is used as a marker for thyroid function. More precise and more sensitive immunoassays are needed to facilitate continuous monitoring of thyroid dysfunctions and to assess the efficacy of the selected therapy and dosage of medication. Moreover, most thyroid diseases are autoimmune diseases, making TSH assays very prone to immunoassay interferences due to autoantibodies in the sample matrix. We have developed a super-sensitive TSH immunoassay utilizing nanoparticle labels, with a detection limit of 60 nU L(-1) in preprocessed serum samples, achieved by reducing nonspecific binding. The developed preprocessing step by affinity purification removed interfering compounds and improved the recovery of spiked TSH from serum. The sensitivity enhancement was achieved by stabilization of the protein corona of the nanoparticle bioconjugates and a spot-coated configuration of the active solid phase that reduced sedimentation of the nanoparticle bioconjugates and their contact time with the antibody-coated solid phase, thus making use of the higher association rate of specific binding due to high-avidity nanoparticle bioconjugates. We were able to decrease the lowest limit of detection and increase the sensitivity of the TSH immunoassay using Eu(III) nanoparticles. The improvement was achieved by decreasing the binding time of the nanoparticle bioconjugates with a small capture area and fast circular rotation. We also applied a step to stabilize the protein corona of the nanoparticles and a serum-preprocessing step with a structurally related antibody.

  7. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which none of the quantizers in the network is ever saturated. The convergence rate of consensus is explicitly characterized; it is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain, and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.

  8. Can quantum transition state theory be defined as an exact t = 0+ limit?

    NASA Astrophysics Data System (ADS)

    Jang, Seogjoo; Voth, Gregory A.

    2016-02-01

    The definition of the classical transition state theory (TST) as a t → 0+ limit of the flux-side time correlation function relies on the assumption that simultaneous measurement of population and flux is a well defined physical process. However, the noncommutativity of the two measurements in quantum mechanics makes the extension of such a concept to the quantum regime impossible. For this reason, quantum TST (QTST) has been generally accepted as any kind of quantum rate theory reproducing the TST in the classical limit, and there has been a broad consensus that no unique QTST retaining all the properties of TST can be defined. Contrary to this widely held view, Hele and Althorpe (HA) [J. Chem. Phys. 138, 084108 (2013)] recently suggested that a true QTST can be defined as the exact t → 0+ limit of a certain kind of quantum flux-side time correlation function and that it is equivalent to the ring polymer molecular dynamics (RPMD) TST. This work seeks to question and clarify certain assumptions underlying these suggestions and their implications. First, the time correlation function used by HA as a starting expression is not related to the kinetic rate constant by virtue of linear response theory, which is the first important step in relating a t = 0+ limit to a physically measurable rate. Second, a theoretical analysis calls into question a key step in HA's proof which appears not to rely on an exact quantum mechanical identity. The correction of this makes the true t = 0+ limit of HA's QTST different from the RPMD-TST rate expression, but rather equal to the well-known path integral quantum transition state theory rate expression for the case of centroid dividing surface. An alternative quantum rate expression is then formulated starting from the linear response theory and by applying a recently developed formalism of real time dynamics of imaginary time path integrals [S. Jang, A. V. Sinitskiy, and G. A. Voth, J. Chem. Phys. 140, 154103 (2014)]. It is shown that the t → 0+ limit of the new rate expression vanishes in the exact quantum limit.

  9. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem in thermophoresis simulation is the computation time, which is proportional to the collision frequency. Note that the time step interval becomes very small for simulations considering the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with a particle in one collision event. A large time step interval is adopted by considering the collision weight factor; it is about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm. Therefore, the computation time becomes about one-millionth. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and shape of the particle are 1 μm and a sphere, respectively. The particle-particle collision is ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.

  10. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    PubMed Central

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
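
    The gamma-function form of the time course mentioned here is convenient in practice: for n identical sequential steps of rate k, the "all or none" signal is the regularized lower incomplete gamma function P(n, kt), which SciPy exposes for non-integer n, so n can be floated in a least-squares fit. A minimal sketch with hypothetical parameters:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def fraction_unwound(t, n, k):
    """'All or none' time course for a sequential n-step mechanism with
    identical rate constants k: f_ss(t) = P(n, k t). Because n may be
    non-integer, it can be floated in a fit; the kinetic step size is
    then m = L / n for a duplex of L basepairs."""
    return gammainc(n, k * np.asarray(t))

t = np.linspace(0.0, 10.0, 200)
f = fraction_unwound(t, n=4.2, k=1.5)  # hypothetical fit parameters
```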

  11. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  12. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  13. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that the RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than the RE_RNN using the regularized RNN on short simulated time series. When the networks derived by the RE_RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.

  14. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
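
    Because Wiener increments over disjoint intervals are independent Gaussians with variance proportional to the interval length, a trajectory can be generated directly on a grid of times uniformly distributed in log t, with no fixed time step. A minimal sketch of that idea (not the authors' MVM machinery) follows.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_time_random_walk(t_min, t_max, n_points, D=1.0):
    """Brownian trajectory sampled at times uniformly distributed in
    log(t): successive Wiener increments are independent Gaussians with
    variance 2*D*dt, so the walk is generated directly on the log-spaced
    grid without a fixed time step."""
    t = np.sort(rng.uniform(np.log(t_min), np.log(t_max), n_points))
    t = np.exp(t)
    dt = np.diff(t, prepend=0.0)            # first increment starts at t = 0
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt))
    return t, np.cumsum(steps)

t, x = log_time_random_walk(1e-6, 1e6, 4096)  # 12 decades in time
```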

  15. A bill to extend the pay limitation for Members of Congress and Federal employees.

    THOMAS, 112th Congress

    Sen. Heller, Dean [R-NV]

    2012-02-07

    Senate - 02/09/2012 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 318. Tracker: This bill has the status Introduced.

  16. 20 CFR 404.822 - Correction of the record of your earnings after the time limit ends.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... step leading to a decision on a question about the earnings record, for example, an investigation is... without going beyond any of the pertinent SSA records. (3) Fraud. We may change any entry which was...

  17. Congressional Replacement of President Obama's Energy-Restricting and Job-Limiting Offshore Drilling Plan

    THOMAS, 112th Congress

    Rep. Hastings, Doc [R-WA-4]

    2012-07-09

    Senate - 07/30/2012 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 474. Tracker: This bill has the status Passed House.

  18. 20 CFR 404.822 - Correction of the record of your earnings after the time limit ends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... step leading to a decision on a question about the earnings record, for example, an investigation is... without going beyond any of the pertinent SSA records. (3) Fraud. We may change any entry which was...

  19. Reliability of Fitness Tests Using Methods and Time Periods Common in Sport and Occupational Management

    PubMed Central

    Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian

    2011-01-01

    Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138

  20. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
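    To make the efficiency claim concrete, here is a back-of-envelope comparison of the two time-step limits for a 1-D grid; the numbers are illustrative, not from the paper, and the gain scales roughly as (u + c)/u, i.e. about 1/M at low Mach number M.

        u = 10.0       # convective velocity, m/s
        c = 340.0      # sound speed, m/s
        dx = 1.0e-3    # grid spacing, m
        cfl = 0.9

        dt_acoustic   = cfl * dx / (abs(u) + c)  # acoustic CFL limit of a fully explicit scheme
        dt_convective = cfl * dx / abs(u)        # limit of the semi-implicit method
        print(dt_convective / dt_acoustic)       # ~35x larger time step in this example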

  1. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140 element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with error settling time of 80 μs for a large 3 μm defocus step, and settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.

  2. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
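    The detect-and-recompute cycle can be sketched as below, assuming hypothetical helpers update(u, degree, dt) (which advances the previous-step solution using reconstructions of the given polynomial degree) and troubled(u) (which flags cells with NaNs, negativity or oscillations); the real P2 reconstruction and Runge-Kutta stages are omitted.

        import numpy as np

        def limited_step(u, dt, update, troubled):
            candidate = update(u, degree=2, dt=dt)     # third-order candidate solution
            bad = troubled(candidate)                  # boolean mask of problematic cells
            if bad.any():
                fallback = update(u, degree=0, dt=dt)  # first-order, more dissipative
                candidate[bad] = fallback[bad]         # recompute only the troubled cells
            return candidate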

  3. ICASE Semiannual Report. April 1, 1993 through September 30, 1993

    DTIC Science & Technology

    1993-12-01

    scientists from universities and industry who have resident appointments for limited periods of time as well as by visiting and resident consultants... time integration. One of these is the time advancement of systems of hyperbolic partial differential equations via high order Runge-Kutta algorithms... Typically, if the R-K method is of, say, fourth order accuracy, then there will be four intermediate steps between time level t = nδ and t + δ = (n + 1)δ
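    For reference, the four intermediate stages of the classic fourth-order Runge-Kutta step for du/dt = f(t, u) look as follows (a standard textbook form, not code from the report):

        def rk4_step(f, t, u, delta):
            k1 = f(t, u)                                   # stage 1: slope at the start
            k2 = f(t + 0.5 * delta, u + 0.5 * delta * k1)  # stage 2: midpoint, using k1
            k3 = f(t + 0.5 * delta, u + 0.5 * delta * k2)  # stage 3: midpoint, using k2
            k4 = f(t + delta, u + delta * k3)              # stage 4: slope at the end
            return u + (delta / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)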

  4. Development of a time-dependent hurricane evacuation model for the New Orleans area : research project capsule.

    DOT National Transportation Integrated Search

    2008-08-01

    Current hurricane evacuation transportation modeling uses an approach fashioned after the : traditional four-step procedure applied in urban transportation planning. One of the limiting : features of this approach is that it models traffic in a stati...

  5. A bill to suspend the fiscal year 2013 sequester and establish limits on war-related spending.

    THOMAS, 113th Congress

    Sen. Reid, Harry [D-NV]

    2013-04-23

    Senate - 04/24/2013 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 64. Tracker: This bill has the status Introduced.

  6. Family Reunification Project.

    ERIC Educational Resources Information Center

    Administration for Children, Youth, and Families (DHHS), Washington, DC.

    Utah's Department of Human Services' Family Reunification Project was initiated to demonstrate that intensive, time-limited, home-based services would enable children in foster care to return to their natural families more rapidly than regular foster care management permits. The following steps were taken in project development: (1) sites were…

  7. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation procedures, considering a uniform or the nonparametric maximum likelihood estimate (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR.
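    The single-imputation sensitivity analysis is easy to state in code. In the sketch below (plain Python rather than the SAS/R of the paper), each progression time is known only to lie in an interval (low, high], and is assigned at the midpoint, the upper limit (the standard analysis, dating the event at the first radiologic exam showing progression) or the lower limit:

        def assign_progression(intervals, rule="midpoint"):
            """Assign a progression time inside each censoring interval (low, high]."""
            out = []
            for low, high in intervals:      # high = first exam showing progression
                if rule == "midpoint":
                    out.append(0.5 * (low + high))
                elif rule == "upper":        # reproduces the standard analysis
                    out.append(high)
                elif rule == "lower":
                    out.append(low)
            return out

        print(assign_progression([(2.0, 4.0), (6.0, 9.0)]))  # [3.0, 7.5]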

  8. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites with, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
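    In outline, the iterative scheme reads like the loop below; propose_site (a Monte Carlo-type search over adsorption sites) and relax (a short MD run of cluster plus solvent) are hypothetical stand-ins, since real force fields and MD engines are beyond a sketch like this.

        def grow_cluster(cluster, ions, propose_site, relax, n_growth_steps):
            for ion in ions[:n_growth_steps]:
                site = propose_site(cluster, ion)  # Monte Carlo identification of the adsorption site
                cluster.append((ion, site))        # attach the new ion at the chosen site
                cluster = relax(cluster)           # full structural relaxation between growth steps
            return cluster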

  9. Familial resemblance and shared latent familial variance in recurrent fall risk in older women

    PubMed Central

    Cauley, Jane A.; Roth, Stephen M.; Kammerer, Candace; Stone, Katie; Hillier, Teresa A.; Ensrud, Kristine E.; Hochberg, Marc; Nevitt, Michael C.; Zmuda, Joseph M.

    2010-01-01

    Background: A possible familial component to fracture risk may be mediated through a genetic liability to fall recurrently. Methods: Our analysis sample included 186 female sibling-ships (n = 401) of mean age 71.9 yr (SD = 5.0). Using variance component models, we estimated residual upper-limit heritabilities in fall-risk mobility phenotypes (e.g., chair-stand time, rapid step-ups, and usual-paced walking speed) and in recurrent falls. We also estimated familial and environmental (unmeasured) correlations between pairs of fall-risk mobility phenotypes. All models were adjusted for age, height, body mass index, and medical and environmental factors. Results: Residual upper-limit heritabilities were all moderate (P < 0.05), ranging from 0.27 for usual-paced walking speed to 0.58 for recurrent falls. A strong familial correlation between usual-paced walking speed and rapid step-ups of 0.65 (P < 0.01) was identified. Familial correlations between usual-paced walking speed and chair-stand time (−0.02) and between chair-stand time and rapid step-ups (−0.27) were both nonsignificant (P > 0.05). Environmental correlations ranged from 0.35 to 0.58 (absolute values), P < 0.05 for all. Conclusions: There exists moderate familial resemblance in fall-risk mobility phenotypes and recurrent falls among older female siblings, which we expect is primarily genetic given that adult siblings live separate lives. All fall-risk mobility phenotypes may be coinfluenced at least to a small degree by shared latent familial or environmental factors; however, up to approximately one-half of the covariation between usual-paced walking speed and rapid step-ups may be due to a common set of genes. PMID:20167680

  10. Validity and reliability of the Fitbit Zip as a measure of preschool children’s step count

    PubMed Central

    Sharp, Catherine A; Mackintosh, Kelly A; Erjavec, Mihela; Pascoe, Duncan M; Horne, Pauline J

    2017-01-01

    Objectives Validation of physical activity measurement tools is essential to determine the relationship between physical activity and health in preschool children, but research to date has not focused on this priority. The aims of this study were to ascertain inter-rater reliability of observer step count, and interdevice reliability and validity of Fitbit Zip accelerometer step counts in preschool children. Methods Fifty-six children aged 3–4 years (29 girls) recruited from 10 nurseries in North Wales, UK, wore two Fitbit Zip accelerometers while performing a timed walking task in their childcare settings. Accelerometers were worn in secure pockets inside a custom-made tabard. Video recordings enabled two observers to independently code the number of steps performed in 3 min by each child during the walking task. Intraclass correlations (ICCs), concordance correlation coefficients, Bland-Altman plots and absolute per cent error were calculated to assess the reliability and validity of the consumer-grade device. Results An excellent ICC was found between the two observer codings (ICC=1.00) and the two Fitbit Zips (ICC=0.91). Concordance between the Fitbit Zips and observer counts was also high (r=0.77), with an acceptable absolute per cent error (6%–7%). Bland-Altman analyses identified a bias for Fitbit 1 of 22.8±19.1 steps with limits of agreement between −14.7 and 60.2 steps, and a bias for Fitbit 2 of 25.2±23.2 steps with limits of agreement between −20.2 and 70.5 steps. Conclusions Fitbit Zip accelerometers are a reliable and valid method of recording preschool children’s step count in a childcare setting. PMID:29081984
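    The reported Bland-Altman quantities (bias and 95% limits of agreement) can be reproduced with a few lines; the step counts below are invented for illustration and are not the study data.

        import numpy as np

        device = np.array([260.0, 245.0, 300.0, 280.0])     # Fitbit Zip step counts
        observer = np.array([240.0, 220.0, 275.0, 255.0])   # video-coded step counts

        diff = device - observer
        bias = diff.mean()                          # systematic over-count of the device
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
        print(bias, loa)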

  11. Robotics in the Construction Industry

    DTIC Science & Technology

    1990-06-01

    accomplished through reprogramming and the attachment of different end effectors. 2.1.2.3 Manipulator. This is the mechanism for moving objects in the... other 3 types of robots), limited repeatability (ability to "hit" the same point in space time after time without reprogramming or adjustment by the... Reprogramming for a different sequence of steps is generally difficult and time-consuming, as the stops must be relocated and calibrated for the new sequence

  12. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
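    The Jacobian-free Newton-Krylov ingredient can be illustrated on a toy implicit update: the sketch below takes one backward-Euler step of du/dt = -u^3 with SciPy's newton_krylov, converging the residual tightly without ever forming a Jacobian. The actual coupled particle-field residual of the paper is far richer; this only shows the solver pattern.

        import numpy as np
        from scipy.optimize import newton_krylov

        u_old = np.ones(100)
        dt = 10.0                                  # deliberately large implicit time step

        def residual(u_new):
            return u_new - u_old + dt * u_new**3   # R(u) = 0 defines the implicit step

        u_new = newton_krylov(residual, u_old, f_tol=1e-10)  # matrix-free Newton-Krylov
        print(np.abs(residual(u_new)).max())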

  13. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  14. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  15. Overcoming the detection bandwidth limit in precision spectroscopy: The analytical apparatus function for a stepped frequency scan

    NASA Astrophysics Data System (ADS)

    Rohart, François

    2017-01-01

    In a previous paper [Rohart et al., Phys Rev A 2014;90(042506)], the influence of detection-bandwidth properties on observed line-shapes in precision spectroscopy was theoretically modeled for the first time using the basic model of a continuous sweeping of the laser frequency. Specific experiments confirmed general theoretical trends but also revealed several insufficiencies of the model in the case of stepped frequency scans. Consequently, inasmuch as up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time domain apparatus function gets a simple analytical form that can be easily implemented in line-shape fitting codes without any significant increase of computation durations. This new model is then considered in detail for detection systems characterized by 1st and 2nd order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a 2nd order Butterworth filter is emphasized.
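    A minimal numerical sketch of the effect, assuming a 1st-order detection filter of time constant tau applied to a stepped scan: the signal sampled at the end of each frequency dwell lags the ideal value by an amount controlled by the ratio of tau to the step duration. All values are illustrative.

        import numpy as np

        tau = 0.3e-3            # detector time constant, s
        t_step = 1.0e-3         # dwell time per frequency step, s
        n_sub = 1000            # integration sub-samples per dwell
        dt = t_step / n_sub

        levels = np.linspace(0.0, 1.0, 50)  # ideal detector signal at each step
        y, sampled = 0.0, []
        for x in levels:
            for _ in range(n_sub):
                y += dt * (x - y) / tau     # first-order low-pass response
            sampled.append(y)               # sample once at the end of each dwell

        print(np.max(levels - np.array(sampled)))  # systematic lag of the sampled scan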

  16. Cingi Steps for preoperative computer-assisted image editing before reduction rhinoplasty.

    PubMed

    Cingi, Can Cemal; Cingi, Cemal; Bayar Muluk, Nuray

    2014-04-01

    The aim of this work is to provide a stepwise systematic guide for a preoperative photo-editing procedure for rhinoplasty cases involving the cooperation of a graphic artist and a surgeon. One hundred female subjects who planned to undergo a reduction rhinoplasty operation were included in this study. The Cingi Steps for Preoperative Computer Imaging (CS-PCI) program, a stepwise systematic guide for image editing using Adobe PhotoShop's "liquify" effect, was applied to the rhinoplasty candidates. The stages of CS-PCI are as follows: (1) lowering the hump; (2) shortening the nose; (3) adjusting the tip projection; (4) perfecting the nasal dorsum; (5) creating a supratip break; and (6) exaggerating the tip projection and/or dorsal slope. Performing the Cingi Steps allows the patient to see what will happen during the operation and observe the final appearance of his or her nose. After the application of the described steps, 71 patients (71%) accepted step 4, and 21 (21%) of them accepted step 5. Only 10 patients (10%) wanted to make additional changes to their operation plans. The main benefits of using this method are that it decreases the time needed by the surgeon to perform a graphic analysis, and it reduces the time required for the patient to reach a decision about the procedure. It is an easy and reliable method that will provide improved physician-patient communication, increased patient confidence, and enhanced surgical planning while limiting the time needed for planning. © 2014 ARS-AAOA, LLC.

  17. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    NASA Astrophysics Data System (ADS)

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend the prior work with a focus on the examination of vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both time and time-frequency domains. Minimal order non-linear models of the experiment are successfully constructed, using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double-sided impacts, and time-varying periods suggest softening trends under the step-down torque. Non-linear models are experimentally validated by comparing results with new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
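    A 1-DOF analogue of such models is easy to sketch: piecewise linear stiffness with a clearance (dead zone) of half-width b, Coulomb friction, and a step-down torque, integrated with SciPy. Parameter values are illustrative, not those of the experiment.

        import numpy as np
        from scipy.integrate import solve_ivp

        J, k, b, Tf = 1.0, 100.0, 0.1, 0.05   # inertia, stiffness, clearance, friction torque

        def spring(theta):                    # piecewise linear stiffness element
            if theta > b:
                return k * (theta - b)
            if theta < -b:
                return k * (theta + b)
            return 0.0                        # inside the clearance: no restoring torque

        def rhs(t, y):
            theta, omega = y
            T = 1.0 if t < 0.5 else 0.2       # step-down external torque
            return [omega, (T - spring(theta) - Tf * np.sign(omega)) / J]

        sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)  # angular response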

  18. Modelling Limit Order Execution Times from Market Data

    NASA Astrophysics Data System (ADS)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means an ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. The limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. On the other hand, the market order is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions -- execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution in which the fatness of the tail can be controlled, to model limit order execution times.
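    For concreteness, one common parameterization of the q-Weibull density (an assumption; the talk does not spell out its convention) is f(t) = (2-q)(beta/eta)(t/eta)^(beta-1) e_q(-(t/eta)^beta), with the q-exponential e_q(u) = [1+(1-q)u]_+^(1/(1-q)); q -> 1 recovers the ordinary Weibull, and q > 1 fattens the tail.

        import numpy as np

        def q_exp(u, q):
            base = np.maximum(1.0 + (1.0 - q) * u, 0.0)   # q-exponential, cut off at zero
            return base ** (1.0 / (1.0 - q))

        def q_weibull_pdf(t, q, beta, eta):
            t = np.asarray(t, dtype=float)
            return (2.0 - q) * (beta / eta) * (t / eta) ** (beta - 1.0) * q_exp(-(t / eta) ** beta, q)

        print(q_weibull_pdf([0.5, 1.0, 5.0], q=1.3, beta=0.9, eta=1.0))  # fat-tailed for q > 1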

  19. Faster and exact implementation of the continuous cellular automaton for anisotropic etching simulations

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-02-01

    The current success of the continuous cellular automata for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping implementation (CTS), whose accuracy against the exact algorithm—a computationally slow, variable time stepping implementation (VTS)—has not been previously analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts, thus justifying the presentation of a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and the use of a self-balanced binary search tree that enables storage and efficient access to the PRT values in each time step in order to quickly remove the corresponding surface atom/s. The proposed PRT method reduces the simulation cost of the exact implementation from O(N^(5/3)) to O(N^(3/2) log N) without introducing any model simplifications. This enables more precise simulations (only limited by numerical precision errors) with affordable computational times that are similar to the less precise CTS implementation and even faster for low reactivity systems.
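    The PRT bookkeeping can be sketched with a binary heap standing in for the paper's self-balanced search tree (both give logarithmic access to the earliest removal time); predict_removal_time and new_neighbors are hypothetical stand-ins for the cellular-automaton rate model.

        import heapq

        def etch(surface_atoms, predict_removal_time, new_neighbors, t_end):
            heap = [(predict_removal_time(a, 0.0), a) for a in surface_atoms]
            heapq.heapify(heap)                  # orders atoms by predicted removal time
            t = 0.0
            while heap and t < t_end:
                t, atom = heapq.heappop(heap)    # exact, variable time stepping
                for nb in new_neighbors(atom):   # atoms newly exposed by the removal
                    heapq.heappush(heap, (predict_removal_time(nb, t), nb))
            return t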

  20. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (> 1 × 10^-20 moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
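    The way a correlated kinetic time is used can be sketched as follows; the exponential-fit form and constants below are made-up stand-ins for the paper's regressions, and the conversion then follows the simple first-order expression.

        import numpy as np

        def tau_chem(phi, T, P, a=1e-8, b=9000.0, n=-0.7):
            # hypothetical exponential fit: tau = a * phi^n * P^n * exp(b / T)
            return a * phi**n * P**n * np.exp(b / T)

        def conversion(t_res, phi, T, P):
            # simple first-order reaction over the residence time t_res
            return 1.0 - np.exp(-t_res / tau_chem(phi, T, P))

        print(conversion(t_res=1e-3, phi=0.5, T=1500.0, P=10.0))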

  1. Summary of Simplified Two Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Marek, C. John; Molnar, Melissa

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1 × 10^-20 moles per cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/Air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald Martin

    The research studied one-step and two-step Isotope Separation on Line (ISOL) targets for future radioactive beam facilities with high driver-beam power through advanced computer simulations. As a target material, uranium carbide in the form of foils was used because of increasing demand for actinide targets in rare-isotope beam facilities and because such material was under development in ISAC at TRIUMF when this project started. Simulations of effusion were performed for one-step and two-step targets and the effects of target dimensions and foil matrix were studied. Diffusion simulations were limited by the availability of diffusion parameters for UCx material at reduced density; however, the viability of the combined diffusion-effusion simulation methodology was demonstrated and could be used to extract physical parameters such as diffusion coefficients and effusion delay times from experimental isotope release curves. Dissipation of the heat from the isotope-producing targets is the limiting factor for high-power beam operation both for the direct and two-step targets. Detailed target models were used to simulate proton beam interactions with the targets to obtain the fission rates and power deposition distributions, which were then applied in the heat transfer calculations to study the performance of the targets. Results indicate that a direct target, with specifications matching the ISAC TRIUMF target, could operate in a 500-MeV proton beam at beam powers up to ~40 kW, producing ~8 × 10^13 fissions/s with maximum temperature in UCx below 2200 °C. Targets with larger radius allow higher beam powers and fission rates. For a target radius in the range 9 mm to 30 mm, the achievable fission rate increases almost linearly with target radius; however, the effusion delay time also increases linearly with target radius.

  3. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant sub-steps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
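    The substep saving comes from reuse of matrix factorizations: each CRAM pole theta_i requires a solve with (h*A - theta_i*I), and identical substeps share h and A, so each sparse LU is formed once. The sketch below uses a toy 2x2 decay matrix and made-up stand-ins for the CRAM poles and residues (the real method uses a fixed tabulated set).

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        A = sp.csc_matrix([[-1.0, 0.0], [1.0, -0.1]])  # toy decay/depletion matrix
        h = 0.5                                        # substep length
        theta = [2.0 + 1.0j]                           # stand-in CRAM pole(s)
        alpha0, alpha = 1.0e-3, [0.5 - 0.2j]           # stand-in residues

        I = sp.identity(2, format="csc", dtype=complex)
        lus = [splu(A * h - th * I) for th in theta]   # factor once, reuse every substep

        n = np.array([1.0, 0.0], dtype=complex)
        for _ in range(10):                            # ten identical substeps
            acc = alpha0 * n
            for lu, al in zip(lus, alpha):
                acc = acc + 2.0 * (al * lu.solve(n)).real  # conjugate-pair poles: twice the real part
            n = acc
        print(n.real)                                  # end-of-step nuclide concentrations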

  4. Use of Visual and Proprioceptive Feedback to Improve Gait Speed and Spatiotemporal Symmetry Following Chronic Stroke: A Case Series

    PubMed Central

    Feasel, Jeff; Wentz, Erin; Brooks, Frederick P.; Whitton, Mary C.

    2012-01-01

    Background and Purpose Persistent deficits in gait speed and spatiotemporal symmetry are prevalent following stroke and can limit the achievement of community mobility goals. Rehabilitation can improve gait speed, but has shown limited ability to improve spatiotemporal symmetry. The incorporation of combined visual and proprioceptive feedback regarding spatiotemporal symmetry has the potential to be effective at improving gait. Case Description A 60-year-old man (18 months poststroke) and a 53-year-old woman (21 months poststroke) each participated in gait training to improve gait speed and spatiotemporal symmetry. Each patient performed 18 sessions (6 weeks) of combined treadmill-based gait training followed by overground practice. To assist with relearning spatiotemporal symmetry, treadmill-based training for both patients was augmented with continuous, real-time visual and proprioceptive feedback from an immersive virtual environment and a dual belt treadmill, respectively. Outcomes Both patients improved gait speed (patient 1: 0.35 m/s improvement; patient 2: 0.26 m/s improvement) and spatiotemporal symmetry. Patient 1, who trained with step-length symmetry feedback, improved his step-length symmetry ratio, but not his stance-time symmetry ratio. Patient 2, who trained with stance-time symmetry feedback, improved her stance-time symmetry ratio. She had no step-length asymmetry before training. Discussion Both patients made improvements in gait speed and spatiotemporal symmetry that exceeded those reported in the literature. Further work is needed to ascertain the role of combined visual and proprioceptive feedback for improving gait speed and spatiotemporal symmetry after chronic stroke. PMID:22228605

  5. Measuring physical activity in young people with cerebral palsy: validity and reliability of the ActivPAL™ monitor.

    PubMed

    Bania, Theofani

    2014-09-01

    We determined the criterion validity and the retest reliability of the ActivPAL™ monitor in young people with diplegic cerebral palsy (CP). Activity monitor data were compared with the criterion of video recording for 10 participants. For the retest reliability, activity monitor data were collected from 24 participants on two occasions. Participants had to have diplegic CP and be between 14 and 22 years of age. They also had to be of Gross Motor Function Classification System level II or III. Outcomes were time spent in standing, number of steps (physical activity) and time spent in sitting (sedentary behaviour). For criterion validity, coefficients of determination were all high (r² ≥ 0.96), and limits of group agreement were relatively narrow, but limits of agreement for individuals were narrow only for number of steps (≥5.5%). Relative reliability was high for number of steps (intraclass correlation coefficient = 0.87) and moderate for time spent in sitting and lying, and time spent in standing (intraclass correlation coefficients = 0.60-0.66). For groups, changes of up to 7% could be due to measurement error with 95% confidence, but for individuals, changes as high as 68% could be due to measurement error. The results support the criterion validity and the retest reliability of the ActivPAL™ to measure physical activity and sedentary behaviour in groups of young people with diplegic CP but not in individuals. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Angular distribution of Pigment epithelium central limit-Inner limit of the retina Minimal Distance (PIMD), in the young not pathological optic nerve head imaged by OCT

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Sandberg-Melin, Camilla

    2018-02-01

    The present study aimed to elucidate the angular distribution of the Pigment epithelium central limit-Inner limit of the retina Minimal Distance measured over 2π radians in the frontal plane (PIMD-2π) in young healthy eyes. Both healthy eyes of 16 subjects aged [20;30[ years were included. In each eye, a volume of the optic nerve head (ONH) was captured three times with a TOPCON DRI OCT Triton (Japan). Each volume renders a representation of the ONH 2.8 mm along the sagittal axis resolved in 993 steps, 6 mm along the frontal axis resolved in 512 steps, and 6 mm along the longitudinal axis resolved in 256 steps. The captured volumes were transferred to custom-made software for semiautomatic segmentation of PIMD around the circumference of the ONH. The phases of iterated volumes were calibrated with cross-correlation. It was found that PIMD-2π expresses a double hump, with a small maximum superiorly, a larger maximum inferiorly, and minima in between. The measurements indicated no difference in PIMD-2π between genders, nor between the dominant and non-dominant eye within subject. The variation between eyes within subject is of the same order as the variation among subjects. The variation among volumes within eye is substantially lower.

  7. Modeling of Hall Thruster Lifetime and Erosion Mechanisms (Preprint)

    DTIC Science & Technology

    2007-09-01

    Hall thruster plasma discharge has been upgraded to simulate the erosion of the thruster acceleration channel, the degradation of which is the main life-limiting factor of the propulsion system. Evolution of the thruster geometry as a result of material removal due to sputtering is modeled by calculating wall erosion rates, stepping the grid boundary by a chosen time step and altering the computational mesh between simulation runs. The code is first tuned to predict the nose cone erosion of a 200 W Busek Hall thruster, the BHT-200. Simulated erosion

  8. Analytical design of a parasitic-loading digital speed controller for a 400-hertz turbine driven alternator

    NASA Technical Reports Server (NTRS)

    Ingle, B. D.; Ryan, J. P.

    1972-01-01

    A design for a solid-state parasitic speed controller using digital logic was analyzed. Parasitic speed controllers are used in space power electrical generating systems to control the speed of turbine-driven alternators within specified limits. The analysis included the performance characteristics of the speed controller and the generation of timing functions. The speed controller using digital logic applies step loads to the alternator. The step loads conduct for a full half wave starting at either zero or 180 electrical degrees.

  9. Experimental observations of Lagrangian sand grain kinematics under bedload transport: statistical description of the step and rest regimes

    NASA Astrophysics Data System (ADS)

    Guala, M.; Liu, M.

    2017-12-01

    The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.

  10. The one-step electrodeposition of superhydrophobic surface on AZ31 magnesium alloy and its time-dependent corrosion resistance in NaCl solution

    NASA Astrophysics Data System (ADS)

    Zhong, Yuxing; Hu, Jin; Zhang, Yufen; Tang, Shawei

    2018-01-01

    A calcium myristate superhydrophobic coating with a hierarchical micro-nanostructure was fabricated on AZ31 magnesium alloy by one-step electrodeposition. The effects of deposition time on the coating structure, such as the morphology, thickness, wettability and phase composition of the coating, were studied. The corrosion behavior of the coated samples in 3.5% NaCl solution was also investigated and the corrosion mechanism was discussed. It was found that the deposition time has a visible effect on the morphology, thickness and wettability, which distinctly affects the corrosion resistance of the coatings. The corrosion resistance of the coating gradually decreases with increasing immersion time due to the disappearance of the air layer that exists on the coating surface. The superhydrophobic surfaces thus present temporal limitations to the corrosion resistance of AZ31 magnesium alloy.

  11. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
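    For orientation, the explicit baseline that ADI- and LOD-FDTD reformulate is the Yee update; a minimal 1-D version in normalized units (c = 1) is sketched below, with the time step chosen inside the explicit Courant limit that the implicit variants are designed to escape. Grid and source are illustrative.

        import numpy as np

        nx, nt = 200, 400
        dx = 1.0
        dt = 0.5 * dx                 # explicit update: dt limited by the Courant condition
        Ez = np.zeros(nx)
        Hy = np.zeros(nx - 1)

        for n in range(nt):
            Hy += (dt / dx) * (Ez[1:] - Ez[:-1])            # update H from the curl of E
            Ez[1:-1] += (dt / dx) * (Hy[1:] - Hy[:-1])      # update E from the curl of H
            Ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source at midpoint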

  12. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 and -3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective at eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to confirm directly that there was no cross contamination with virus between column runs. To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process: a solvent/detergent step based on 1% Triton X-100, which rapidly inactivated a range of enveloped viruses (>6 log within 1 min of a 60 min treatment time), and a virus filtration step, which was confirmed to be effective for viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.

  13. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  14. A bill to limit the moratorium on certain permitting and drilling activities issued by the Secretary of the Interior, and for other purposes.

    THOMAS, 111th Congress

    Sen. Vitter, David [R-LA]

    2010-07-14

    Senate - 07/15/2010 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 463. Tracker: This bill has the status Introduced.

  15. Gas Chromatographic Determination of Fatty Acid Compositions.

    ERIC Educational Resources Information Center

    Heinzen, Horacio; And Others

    1985-01-01

    Describes an experiment that: (1) has a derivation step using readily available reagents; (2) requires limited manipulative skills, centering attention on methodology; (3) can be completed within the time constraints of a normal laboratory period; and (4) investigates materials that are easy to acquire and are of great technical/biological…

  16. Elementary ELA/Social Studies Integration: Challenges and Limitations

    ERIC Educational Resources Information Center

    Heafner, Tina L.

    2018-01-01

    Adding instructional time and holding teachers accountable for teaching social studies are touted as practical, logical steps toward reforming the age-old tradition of marginalization. This qualitative case study of an urban elementary school, examines how nine teachers and one administrator enacted district reforms that added 45 minutes to the…

  17. Barriers and Facilitators to Initiating and Completing Time-Limited Trials in Critical Care.

    PubMed

    Bruce, Courtenay R; Liang, Cecilia; Blumenthal-Barby, Jennifer S; Zimmerman, Janice; Downey, Andrea; Pham, Linda; Theriot, Lisette; Delgado, Estevan D; White, Douglas

    2015-12-01

    A time-limited trial is an agreement between clinicians and patients or surrogate decision makers to use medical therapies over a defined period of time to see if the patient improves or deteriorates according to agreed-upon clinical milestones. Although time-limited trials are broadly advocated, there is little empirical evidence of the benefits and risks of time-limited trials, when they are initiated, when and why they succeed or fail, and what facilitates completion of them. Our study objectives were to 1) identify the purposes for which clinicians use time-limited trials and 2) identify barriers and facilitators to initiating and completing time-limited trials. Semistructured interviews: We analyzed interviews using qualitative description with constant comparative techniques. Nine hundred-bed, academic, tertiary hospital in Houston, Texas. Interviewees were from open medical, surgical, neurosurgical, and cardiovascular ICUs. Thirty healthcare professionals were interviewed (nine surgeons, 16 intensivists, three nurse practitioners, and two "other" clinicians). None. Interviewees reported initiating time-limited trials for three different purposes: to prepare surrogates and clinicians for discussion and possible shifts toward comfort-care only therapies, build consensus, and refine prognostic information. The main barriers to initiating time-limited trials involve clinicians' or surrogate decision makers' disagreement on setting a time limit. Barriers to completing time-limited trials include 1) requesting more time; 2) communication breakdowns because of rotating call schedules; and 3) changes in clinical course. Finally, facilitators to completing time-limited trials include 1) having defined goals about what could be achieved during an ICU stay, either framed in narrow, numeric terms or broad goals focusing on achievable activities of daily living; 2) applying time-limited trials in certain types of cases; and 3) taking ownership to ensure completion of the trial. An understanding of barriers and facilitators to initiating and completing time-limited trials is an essential first step toward appropriate utilization of time-limited trials in the ICUs, as well as developing educational or communication interventions with clinicians to facilitate time-limited trial use. We provide practical suggestions on patient populations in whom time-limited trials may be successful, the setting, and clinicians likely to benefit from educational interventions, allowing clinicians to have a fuller sense of when and how to use time-limited trials.

  18. Polymer microchip CE of proteins either off- or on-chip labeled with chameleon dye for simplified analysis.

    PubMed

    Yu, Ming; Wang, Hsiang-Yu; Woolley, Adam T

    2009-12-01

    Microchip CE of proteins labeled either off- or on-chip with the "chameleon" CE dye 503, using poly(methyl methacrylate) microchips, is presented. A simple dynamic coating using the cationic surfactant CTAB prevented nonspecific adsorption of protein and dye to the channel walls. The labeling reactions for both off- and on-chip labeling proceeded at room temperature without requiring heating steps. In off-chip labeling, a 9 ng/mL concentration detection limit for BSA, corresponding to an approximately 7 fg (100 zmol) mass detection limit, was obtained. In on-chip tagging, the free dye and protein were placed in different reservoirs of the microchip, and an extra incubation step was not needed. A 1 microg/mL concentration detection limit for BSA, corresponding to an approximately 700 fg (10 amol) mass detection limit, was obtained with this protocol. The earlier elution time of the BSA peak in on-chip labeling resulted from fewer total labels on each protein molecule. Our on-chip labeling method is an important part of automation in miniaturized devices.

  19. Extravascular transport in normal and tumor tissues.

    PubMed

    Jain, R K; Gerlowski, L E

    1986-01-01

    The transport characteristics of the normal and tumor tissue extravascular space provide the basis for the determination of the optimal dosage and schedule regimes of various pharmacological agents in detection and treatment of cancer. In order for the drug to reach the cellular space where most therapeutic action takes place, several transport steps must first occur: (1) tissue perfusion; (2) permeation across the capillary wall; (3) transport through interstitial space; and (4) transport across the cell membrane. Any of these steps, including intracellular events such as metabolism, can be the rate-limiting step to uptake of the drug, and these rate-limiting steps may be different in normal and tumor tissues. This review examines these transport limitations, first from an experimental point of view and then from a modeling point of view. Various types of experimental tumor models which have been used in animals to represent human tumors are discussed. Then, mathematical models of extravascular transport are discussed from the perspective of two approaches: compartmental and distributed. Compartmental models lump one or more sections of a tissue or body into a "compartment" to describe the time course of disposition of a substance. These models contain "effective" parameters which represent the entire compartment. Distributed models consider the structural and morphological aspects of the tissue to determine the transport properties of that tissue. These distributed models describe both the temporal and spatial distribution of a substance in tissues. Each of these modeling techniques is described in detail with applications for cancer detection and treatment in mind.

  20. Reduced-order aeroelastic model for limit-cycle oscillations in vortex-dominated unsteady airfoil flows

    NASA Astrophysics Data System (ADS)

    Suresh Babu, Arun Vishnu; Ramesh, Kiran; Gopalarathnam, Ashok

    2017-11-01

    In previous research, Ramesh et al. (JFM, 2014) developed a low-order discrete vortex method for modeling unsteady airfoil flows with intermittent leading edge vortex (LEV) shedding, governed by a leading edge suction parameter (LESP). LEV shedding is initiated using discrete vortices (DVs) whenever the LESP exceeds a critical value. In subsequent research, the method was successfully employed by Ramesh et al. (JFS, 2015) to predict aeroelastic limit-cycle oscillations in airfoil flows dominated by intermittent LEV shedding. When applied to flows that require a large number of time steps, the computational cost increases due to the growing vortex count. In this research, we apply an amalgamation strategy to actively control the DV count and thereby reduce simulation time. One pair each of LEVs and trailing edge vortices (TEVs) is amalgamated at every time step. The ideal pairs for amalgamation are identified based on the requirement that the flowfield in the vicinity of the airfoil is least affected (Spalart, 1988). Instead of placing the amalgamated vortex at the centroid, we place it at an optimal location to ensure that the leading-edge suction and the airfoil bound circulation are conserved. Results of the initial study are promising.
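
    As an illustration of the amalgamation step, a minimal sketch in Python (assuming complex point-vortex positions; the classical circulation-weighted centroid rule is used for placement, whereas the paper instead solves for an optimal position conserving leading-edge suction and bound circulation):

        import numpy as np

        def amalgamate(z1, g1, z2, g2):
            """Merge two point vortices at complex positions z1, z2 with
            circulations g1, g2 into a single vortex. Total circulation is
            conserved exactly; the position here is the circulation-weighted
            centroid (first moment), which is only one possible placement."""
            g = g1 + g2
            if abs(g) < 1e-14:              # near-cancelling pair: centroid undefined
                return 0.5 * (z1 + z2), g
            return (g1 * z1 + g2 * z2) / g, g

        z, g = amalgamate(0.10 + 0.20j, 0.8, 0.15 + 0.18j, 0.5)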

  1. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Similar to the DSMC method, the downside of that approach is statistical noise. To address this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they permit a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
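
    A minimal sketch of the Langevin-type velocity update that underlies Fokker-Planck collision models (an exact Ornstein-Uhlenbeck step with relaxation time tau; illustrative of the general approach, not the specific D-IP scheme of this record):

        import numpy as np

        def fp_velocity_step(v, u_mean, tau, kT_over_m, dt, rng):
            """Exact Ornstein-Uhlenbeck update of particle velocities v relaxing
            toward the local mean flow u_mean; the velocity distribution obeys a
            Fokker-Planck equation. Because the update is exact in dt, the time
            step need not resolve the mean collision time tau."""
            a = np.exp(-dt / tau)                       # deterministic decay factor
            sigma = np.sqrt(kT_over_m * (1.0 - a * a))  # exact noise amplitude
            return u_mean + a * (v - u_mean) + sigma * rng.standard_normal(v.shape)

        rng = np.random.default_rng(0)
        v = rng.standard_normal(10_000)                 # dimensionless velocities
        v = fp_velocity_step(v, u_mean=0.0, tau=1.0, kT_over_m=1.0, dt=5.0, rng=rng)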

  2. Two-step liquid phase microextraction combined with capillary electrophoresis: a new approach to simultaneous determination of basic and zwitterionic compounds.

    PubMed

    Nojavan, Saeed; Moharami, Arezoo; Fakhari, Ali Reza

    2012-08-01

    In this work, a two-step hollow-fiber-based liquid-phase microextraction procedure was evaluated for extraction of the zwitterionic cetirizine (CTZ) and the basic hydroxyzine (HZ) from human plasma. In the first step of extraction, the pH of the sample was adjusted to 5.0 in order to promote liquid-phase microextraction of the zwitterionic CTZ. In the second step, the pH of the sample was increased to 11.0 for extraction of the basic HZ. In this procedure, the extraction times for the first and second steps were 30 and 20 min, respectively. Owing to the high ratio between the volumes of the donor phase and the acceptor phase, CTZ and HZ were enriched by factors of 280 and 355, respectively. The linearity of the analytical method was investigated for both compounds in the range of 10-500 ng mL⁻¹ (R² > 0.999). The limit of quantification (S/N = 10) for CTZ and HZ was 10 ng mL⁻¹, while the limit of detection (S/N = 3) was 3 ng mL⁻¹ for both compounds. Intraday and interday relative standard deviations (RSDs, n = 6) were in the range of 6.5-16.2%. This procedure enabled CTZ and HZ to be analyzed simultaneously by capillary electrophoresis. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Internal Wave Impact on the Performance of a Hypothetical Mine Hunting Sonar

    DTIC Science & Technology

    2014-10-01

    …time steps) to simulate the propagation of the internal wave field through the mine field. Again the transmission loss and acoustic signal strength … dependent internal wave perturbed sound speed profile was evaluated by calculating the temporal variability of the signal excess (SE) of acoustic … internal wave perturbation of the sound speed profile, was calculated for a limited sound speed field time section. Acoustic signals were projected…

  4. Quantifying Surface Water Dynamics at 30 Meter Spatial Resolution in the North American High Northern Latitudes 1991-2011

    NASA Technical Reports Server (NTRS)

    Carroll, Mark; Wooten, Margaret; DiMiceli, Charlene; Sohlberg, Robert; Kelly, Maureen

    2016-01-01

    The availability of a dense time series of satellite observations at moderate (30 m) spatial resolution is enabling unprecedented opportunities for understanding ecosystems around the world. A time series of data from Landsat was used to generate a series of three maps at a decadal time step to show how surface water changed from 1991 to 2011 in the high northern latitudes of North America. Previous attempts to characterize the change in surface water in this region have been limited in either spatial or temporal resolution, or both. This series of maps was generated for the NASA Arctic and Boreal Vulnerability Experiment (ABoVE), which began in fall 2015. These maps show a nominal extent of surface water by using multiple observations to make a single map for each time step. This increases the confidence that any detected changes are related to climate or ecosystem changes and are not simply caused by short-duration weather events such as flood or drought. The methods and a comparison to other contemporary maps of the region are presented here. Initial verification results indicate 96% producer accuracy and 54% user accuracy when compared to 2-m resolution WorldView-2 data. All water bodies that were omitted were one Landsat pixel or smaller, hence below the detection limits of the instrument.

  5. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in this step, since GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be repeated several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are performed during the second and third steps of the evaluation. The feasibility of these steps can therefore determine the extent of the evaluation: low feasibility could lead to choosing not to evaluate uncertainty, or to limiting the number of scenario comparisons. Several computer models have been developed over time to evaluate flood risk. GIS software is widely used in flood risk analysis: it is used to combine and process different types of data and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" running the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be reused at any time in future to support territorial decision making; and the possibility of adding information over time to update the dataset and run further analyses. However, these analyses require specialised personnel and time. The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient both in GIS software and in flood damage analysis (itself a multidisciplinary field). Great effort is necessary to correctly evaluate flood damages, and updating and improving the evaluation over time becomes a difficult task. Automating this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) present the entire process of automating the second and third steps of flood damage evaluations; and (2) analyse the resulting gains in terms of the time and expertise needed for the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be realised more easily.
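
    A minimal sketch of the second and third steps, using a hypothetical depth-damage curve and asset values (real evaluations use official damage functions and GIS layers):

        import numpy as np

        # Hypothetical depth-damage curve: fraction of building value lost
        # as a function of inundation depth in metres (illustrative values).
        DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # m
        DAMAGE_FRACTION = np.array([0.0, 0.15, 0.35, 0.6, 0.9])

        def direct_damage(depth_m, building_value):
            """Step 3: interpolate the damage fraction at the given water depth
            and scale by the exposed asset value."""
            frac = np.interp(depth_m, DEPTHS, DAMAGE_FRACTION)
            return frac * building_value

        # Step 2 (combining hazard and vulnerability) reduces, per building,
        # to looking up the modelled depth at the building's location.
        depths = np.array([0.2, 1.4, 3.1])          # modelled depths per building (m)
        values = np.array([250e3, 180e3, 400e3])    # exposed values (EUR)
        print(f"scenario damage: {direct_damage(depths, values).sum():,.0f} EUR")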

  6. Monte Carlo Sampling in Fractal Landscapes

    NASA Astrophysics Data System (ADS)

    Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.

    2013-05-01

    We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
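
    For orientation, a minimal Wang-Landau sketch on a discrete toy system whose density of states is known (binomial); the paper's contribution, a step length tied to the largest Lyapunov exponent on continuous fractal landscapes, is not implemented here:

        import numpy as np

        rng = np.random.default_rng(0)
        n_spins = 12                      # toy system: energy = number of up spins,
        n_bins = n_spins + 1              # so the true density of states is C(12, E)
        log_g = np.zeros(n_bins)          # running estimate of log density of states
        hist = np.zeros(n_bins)
        f = 1.0                           # log modification factor
        state = rng.integers(0, 2, n_spins)

        def energy(s):
            return int(s.sum())

        while f > 1e-4:
            for _ in range(20_000):
                flip = rng.integers(n_spins)
                e_old = energy(state)
                state[flip] ^= 1
                e_new = energy(state)
                # Accept with min(1, g(E_old)/g(E_new)); otherwise flip back.
                if rng.random() >= np.exp(min(0.0, log_g[e_old] - log_g[e_new])):
                    state[flip] ^= 1
                e = energy(state)
                log_g[e] += f             # penalize the visited level
                hist[e] += 1
            if hist.min() > 0.8 * hist.mean():    # histogram flat enough?
                f /= 2.0                  # refine and start a new stage
                hist[:] = 0.0

        # Up to an additive constant, log_g now approximates log C(n_spins, E).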

  7. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus have difficulty accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives a secular increase in energy error for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
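
    A minimal sketch of the trapezoidal rule (time-reversible but not symplectic) applied to a pendulum, with the implicit stage solved by fixed-point iteration; monitoring the energy over long runs exposes the secular drift discussed above:

        import numpy as np

        def rhs(y):                        # pendulum with g/L = 1: y = (theta, omega)
            return np.array([y[1], -np.sin(y[0])])

        def trapezoidal_step(y, dt, iters=20):
            """y_{n+1} = y_n + dt/2 (f(y_n) + f(y_{n+1})), fixed-point iterated
            from an explicit Euler predictor (converges for small dt)."""
            fn = rhs(y)
            y_new = y + dt * fn
            for _ in range(iters):
                y_new = y + 0.5 * dt * (fn + rhs(y_new))
            return y_new

        energy = lambda y: 0.5 * y[1] ** 2 - np.cos(y[0])
        y = np.array([2.0, 0.0])           # large-amplitude initial condition
        e0 = energy(y)
        for _ in range(100_000):
            y = trapezoidal_step(y, dt=0.1)
        print("relative energy error:", (energy(y) - e0) / abs(e0))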

  8. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, than has typically been reported for this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
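
    The CFL restriction can be summarized in a one-line estimate; the 1/p^2 dependence on polynomial order used below is a common heuristic for spectral/hp discretisations, assumed here rather than taken from the paper:

        def max_stable_dt(cfl, dx, wave_speed, poly_order):
            """Largest stable explicit time step for linear advection:
            dt <= CFL * dx / (a * p^2), with the usual p^-2 heuristic
            penalty for a degree-p spectral/hp element."""
            return cfl * dx / (wave_speed * poly_order ** 2)

        # RK4 admits a larger usable CFL number than AB2 but costs four
        # right-hand-side evaluations per step instead of one.
        print(max_stable_dt(cfl=1.0, dx=0.01, wave_speed=1.0, poly_order=8))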

  9. Diffractive optics fabricated by direct write methods with an electron beam

    NASA Technical Reports Server (NTRS)

    Kress, Bernard; Zaleta, David; Daschner, Walter; Urquhart, Kris; Stein, Robert; Lee, Sing H.

    1993-01-01

    State-of-the-art diffractive optics are fabricated using e-beam lithography and dry etching techniques to achieve multilevel phase elements with very high diffraction efficiencies. One of the major challenges encountered in fabricating diffractive optics is the small feature size (e.g., for diffractive lenses with small f-number). It is not only the e-beam system that dictates the feature size limitations, but also the alignment systems (mask aligner) and the materials (e-beam and photo resists). In order to allow diffractive optics to be used in new optoelectronic systems, it is necessary not only to fabricate elements with small feature sizes but also to do so in an economical fashion. Since the price of a multilevel diffractive optical element is closely related to the e-beam writing time and the number of etching steps, we need to decrease the writing time and the number of etching steps without affecting the quality of the element. To do this one has to utilize the full potential of the e-beam writing system. In this paper, we will present three diffractive optics fabrication techniques which will reduce the number of process steps, the writing time, and the overall fabrication time for multilevel phase diffractive optics.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savelyev, Evgeny; Boll, Rebecca; Bomme, Cedric

    In pump-probe experiments employing a free-electron laser (FEL) in combination with a synchronized optical femtosecond laser, the arrival-time jitter between the FEL pulse and the optical laser pulse often severely limits the temporal resolution that can be achieved. Here, we present a pump-probe experiment on the UV-induced dissociation of 2,6-difluoroiodobenzene (C6H3F2I) molecules performed at the FLASH FEL that takes advantage of recent upgrades of the FLASH timing and synchronization system to obtain high-quality data that are not limited by the FEL arrival-time jitter. We discuss in detail the necessary data analysis steps and describe the origin of the time-dependent effects in the yields and kinetic energies of the fragment ions that we observe in the experiment.

  11. Screen Time and Sleep among School-Aged Children and Adolescents: A Systematic Literature Review

    PubMed Central

    Hale, Lauren; Guan, Stanford

    2015-01-01

    We systematically examined and updated the scientific literature on the association between screen time (e.g., television, computers, video games, and mobile devices) and sleep outcomes among school-aged children and adolescents. We reviewed 67 studies published from 1999 to early 2014. We found that screen time is adversely associated with sleep outcomes (primarily shortened duration and delayed timing) in 90% of studies. Some of the results varied by type of screen exposure, age of participant, gender, and day of the week. While the evidence regarding the association between screen time and sleep is consistent, we discuss limitations of the current studies: 1) a causal association has not been confirmed; 2) measurement error (of both screen time exposure and sleep measures); and 3) limited data on simultaneous use of multiple screens and on the characteristics and content of the screens used. Youth should be advised to limit or reduce screen time exposure, especially before or during bedtime hours, to minimize any harmful effects of screen time on sleep and well-being. Future research should better account for the methodological limitations of the extant studies, and seek to better understand the magnitude and mechanisms of the association. These steps will help the development and implementation of policies or interventions related to screen time among youth. PMID:25193149

  12. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency, and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
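
    The Lorenz 1984 system is compact enough to state directly; below it is integrated with a standard explicit scheme for reference (RK4 with a small time step; the GWRM itself instead expands the solution in Chebyshev polynomials over far larger time intervals):

        import numpy as np

        def lorenz84(s, a=0.25, b=4.0, F=8.0, G=1.0):
            x, y, z = s
            return np.array([-y * y - z * z - a * x + a * F,
                             x * y - b * x * z - y + G,
                             b * x * y + x * z - z])

        def rk4_step(f, y, dt):
            k1 = f(y); k2 = f(y + 0.5 * dt * k1)
            k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
            return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        y = np.array([1.0, 1.0, 1.0])
        for _ in range(10_000):               # explicit stepping needs small dt;
            y = rk4_step(lorenz84, y, 0.01)   # GWRM intervals can be far larger
        print(y)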

  13. Kinetics of protein–ligand unbinding: Predicting pathways, rates, and rate-limiting steps

    PubMed Central

    Tiwary, Pratyush; Limongelli, Vittorio; Salvalaglio, Matteo; Parrinello, Michele

    2015-01-01

    The ability to predict the mechanisms and the associated rate constants of protein–ligand unbinding is of great practical importance in drug design. In this work we demonstrate how a recently introduced metadynamics-based approach allows exploration of the unbinding pathways, estimation of the rates, and determination of the rate-limiting steps in the paradigmatic case of the trypsin–benzamidine system. Protein, ligand, and solvent are described with full atomic resolution. Using metadynamics, multiple unbinding trajectories that start with the ligand in the crystallographic binding pose and end with the ligand in the fully solvated state are generated. The unbinding rate koff is computed from the mean residence time of the ligand. Using our previously computed binding affinity we also obtain the binding rate kon. Both rates are in agreement with reported experimental values. We uncover the complex pathways of unbinding trajectories and describe the critical rate-limiting steps with unprecedented detail. Our findings illuminate the role played by the coupling between subtle protein backbone fluctuations and the solvation by water molecules that enter the binding pocket and assist in the breaking of the shielded hydrogen bonds. We expect our approach to be useful in calculating rates for general protein–ligand systems and a valid support for drug design. PMID:25605901
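
    In the same spirit, a small sketch of turning a set of unbinding events into a rate; the residence times are hypothetical, and the metadynamics acceleration-factor reweighting is assumed to have been applied already:

        import numpy as np

        # Hypothetical reweighted residence times (s) from independent runs.
        residence_times = np.array([2.1e-2, 9.5e-3, 4.4e-2, 1.8e-2, 3.0e-2])

        k_off = 1.0 / residence_times.mean()      # rate from mean residence time
        # Rough standard error propagated from the spread of the times:
        se = residence_times.std(ddof=1) / np.sqrt(residence_times.size) * k_off ** 2
        print(f"k_off = {k_off:.2e} +/- {se:.1e} s^-1")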

  14. Semiannual Report, October 1, 1989 through March 31, 1990 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-06-01

    synchronization. We consider the performance of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on time … from universities and from industry, who have resident appointments for limited periods of time, and by consultants. Members of NASA's research staff … convergence to steady state is also being studied together with D. Gottlieb. The idea is to generalize the concept of local-time stepping by minimizing the

  15. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs): two sets of brake PZTs and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence to within 10 nanometers for small- and medium-sized reference positions (less than 200 microns) can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.

  16. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to that of a clinical system and, backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
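
    As a flavor of the pipeline's first stage, a minimal delay-and-sum beamformer for a single scanline (nearest-sample delays, no apodization; SUPRA's GPU implementation is far more elaborate):

        import numpy as np

        def delay_and_sum(rf, elem_x, z_points, c=1540.0, fs=40e6):
            """Beamform one scanline at lateral position x = 0 from raw RF
            channel data rf[element, sample]: for each depth z, sum channel
            samples at the round-trip delay t = (z + sqrt(z^2 + x_e^2)) / c."""
            n_elem, n_samp = rf.shape
            line = np.zeros(len(z_points))
            for i, z in enumerate(z_points):
                t = (z + np.sqrt(z * z + elem_x ** 2)) / c   # per-element delay (s)
                idx = np.round(t * fs).astype(int)           # nearest RF sample
                ok = idx < n_samp
                line[i] = rf[np.flatnonzero(ok), idx[ok]].sum()
            return line

        rng = np.random.default_rng(1)
        rf = rng.standard_normal((64, 4096))                 # stand-in channel data
        elem_x = (np.arange(64) - 31.5) * 0.3e-3             # 0.3 mm element pitch
        print(delay_and_sum(rf, elem_x, np.linspace(5e-3, 40e-3, 256)).shape)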

  17. Model for estimating the penetration depth limit of the time-reversed ultrasonically encoded optical focusing technique

    PubMed Central

    Jang, Mooseok; Ruan, Haowen; Judkewitz, Benjamin; Yang, Changhuei

    2014-01-01

    The time-reversed ultrasonically encoded (TRUE) optical focusing technique is a method that is capable of focusing light deep within a scattering medium. This theoretical study aims to explore the depth limits of the TRUE technique for biological tissues in the context of two primary constraints – the safety limit of the incident light fluence and a limited TRUE recording time (assumed to be 1 ms), as dynamic scatterer movements in a living sample can break the time-reversal scattering symmetry. Our numerical simulation indicates that TRUE has the potential to render an optical focus with a peak-to-background ratio of ~2 at a depth of ~103 mm at a wavelength of 800 nm in a phantom with tissue scattering characteristics. This study sheds light on the allocation of the photon budget in each step of the TRUE technique, the impact of low signal on the phase measurement error, and the eventual impact of the phase measurement error on the strength of the TRUE optical focus. PMID:24663917

  18. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul; Einstein, T. L.; Margetis, Dionisios

    2011-03-01

    We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.

  19. Transfrontal orbitotomy in the dog: an adaptable three-step approach to the orbit.

    PubMed

    Håkansson, Nils Wallin; Håkansson, Berit Wallin

    2010-11-01

    To describe an adaptable and extensive method for orbitotomy in the dog. An adaptable three-step technique for orbitotomy was developed and applied in nine consecutive cases. The steps are zygomatic arch resection laterally, temporalis muscle elevation medially and zygomatic process osteotomy anteriorly-dorsally. The entire orbit is accessed with excellent exposure and room for surgical manipulation. Facial nerve, lacrimal nerve and lacrimal gland function are preserved. The procedure can easily be converted into an orbital exenteration. Exposure of the orbit was excellent in all cases and anatomically correct closure was achieved. Signs of postoperative discomfort were limited, with moderate, reversible swelling in two cases and mild in seven. Wound infection or emphysema did not occur, nor did any other complication attributable to the operative procedure. Blinking ability and lacrimal function were preserved over follow-up times ranging from 1 to 4 years. Transfrontal orbitotomy in the dog offers excellent exposure and room for manipulation. Anatomically correct closure is easily accomplished, postoperative discomfort is limited and complications are mild and temporary. © 2010 American College of Veterinary Ophthalmologists.

  20. Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System

    PubMed Central

    Yang, Che-Chang; Hsu, Yeh-Liang; Shih, Kao-Shang; Lu, Jun-Ming

    2011-01-01

    This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device that measures trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity, and step symmetry, can be estimated in real time by using an autocorrelation procedure. For validation purposes, five Parkinson's disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters of the two subject groups of different mobility can be quantified and distinguished by the system. Practical considerations and limitations for implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to future attempts at real-time detection of disabling gaits, such as festinating gait or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment, and personal telecare for people with gait disorders are also possible applications. PMID:22164019
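
    A sketch of the autocorrelation procedure for trunk acceleration, in the spirit of the commonly used Moe-Nilssen formulation (peaks near one step and one stride of lag give step and stride regularity, and their ratio a step-symmetry index); this is illustrative, not the authors' exact implementation:

        import numpy as np

        def unbiased_autocorr(a, max_lag):
            """Unbiased autocorrelation of an acceleration signal, normalized
            so that lag 0 equals 1."""
            a = a - a.mean()
            n = a.size
            ac = np.array([np.dot(a[: n - k], a[k:]) / (n - k) for k in range(max_lag)])
            return ac / ac[0]

        def gait_parameters(acc_vertical, fs):
            """Cadence, step/stride regularity, and step symmetry from the first
            two dominant autocorrelation peaks (assumes steady walking and a
            step period between 0.3 s and 1.0 s)."""
            ac = unbiased_autocorr(acc_vertical, max_lag=int(3 * fs))
            lo = int(0.3 * fs)                    # ignore lags shorter than 0.3 s
            step_lag = lo + int(np.argmax(ac[lo : int(1.0 * fs)]))
            lo2 = step_lag + lo
            stride_lag = lo2 + int(np.argmax(ac[lo2 : int(2.5 * fs)]))
            return {"cadence_steps_per_min": 60.0 * fs / step_lag,
                    "step_regularity": ac[step_lag],
                    "stride_regularity": ac[stride_lag],
                    "step_symmetry": ac[step_lag] / ac[stride_lag]}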

  1. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  2. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
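
    A sketch of one RKL1 super-step as described above: the coefficients follow the Legendre three-term recurrence, and s stages extend the stable parabolic time step by the factor (s^2 + s)/2. This is written from the paper's description, so treat the details as an approximation:

        import numpy as np

        def rkl1_superstep(u, L, dt, s):
            """Advance u by one RKL1 super-step of size dt using s stages,
            where L(u) is the discrete parabolic operator."""
            w1 = 2.0 / (s * s + s)
            y_prev2 = u                                  # Y_0
            y_prev1 = u + w1 * dt * L(u)                 # Y_1
            for j in range(2, s + 1):
                mu = (2.0 * j - 1.0) / j                 # Legendre recurrence
                nu = -(j - 1.0) / j
                y = mu * y_prev1 + nu * y_prev2 + mu * w1 * dt * L(y_prev1)
                y_prev2, y_prev1 = y_prev1, y
            return y_prev1

        # 1D heat equation u_t = u_xx on a periodic grid.
        nx = 200
        dx = 1.0 / nx
        lap = lambda u: (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-((x - 0.5) / 0.05) ** 2)
        dt_expl = 0.4 * dx ** 2                          # a bit below the Euler limit dx^2/2
        s = 9                                            # super-step = 45x explicit step
        for _ in range(100):
            u = rkl1_superstep(u, lap, dt_expl * (s * s + s) / 2.0, s)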

  3. Implementation of Time-Resolved Step-Scan Fourier Transform Infrared (FT-IR) Spectroscopy Using a kHz Repetition Rate Pump Laser

    PubMed Central

    MAGANA, DONNY; PARUL, DZMITRY; DYER, R. BRIAN; SHREVE, ANDREW P.

    2011-01-01

    Time-resolved step-scan Fourier transform infrared (FT-IR) spectroscopy has been shown to be invaluable for studying excited-state structures and dynamics in both biological and inorganic systems. Despite the established utility of this method, technical challenges continue to limit the data quality and more wide-ranging applications. A critical problem has been the low laser repetition rate and interferometer stepping rate (both typically 10 Hz) used for data acquisition. Here we demonstrate significant improvement in the quality of time-resolved spectra through the use of a kHz repetition rate laser to achieve kHz excitation and data collection rates while stepping the spectrometer at 200 Hz. We have studied the metal-to-ligand charge transfer excited state of Ru(bipyridine)3Cl2 in deuterated acetonitrile to test and optimize high repetition rate data collection. Comparison of different interferometer stepping rates reveals an optimum rate of 200 Hz due to minimization of long-term baseline drift. With the improved collection efficiency and signal-to-noise ratio, better assignments of the MLCT excited-state bands can be made. Using optimized parameters, carbonmonoxy myoglobin in deuterated buffer is also studied by observing the infrared signatures of carbon monoxide photolysis upon excitation of the heme. We conclude from these studies that a substantial increase in performance of ss-FT-IR instrumentation is achieved by coupling commercial infrared benches with kHz repetition rate lasers. PMID:21513597

  4. Space shuttle rudder/speedbrake subsystem analysis

    NASA Technical Reports Server (NTRS)

    Duke, H. G.

    1975-01-01

    The Continuous System Modeling Program (CSMP) is described with its uses, its limitations, and its application to the rudder/speedbrake (R/SB) subsystem. The space shuttle R/SB is analyzed using the CSMP. Areas of analysis emphasized include: step response, ramp response, and the delay time or deadspace observed in system response. Results are presented and discussed.

  5. Technical Performance Measurement, Earned Value, and Risk Management: An Integrated Diagnostic Tool for Program Management

    DTIC Science & Technology

    2002-06-01

    time, the monkey would eventually produce the collected works of Shakespeare. Unfortunately for the analogist, systems, even live ones, do not work … limited his simulated computer monkey to producing, in a single random step, the sentence uttered by Polonius in the play Hamlet: "Methinks it is

  6. Revision of the documentation for a model for calculating effects of liquid waste disposal in deep saline aquifers

    USGS Publications Warehouse

    INTERA Environmental Consultants, Inc.

    1979-01-01

    The major limitation of the model arises from using a second-order-correct (central-difference) finite-difference approximation in space. To avoid numerical oscillations in the solution, the user must restrict grid block and time step sizes depending upon the magnitude of the dispersivity.
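
    The canonical criteria behind this restriction are a grid Peclet number dx/alpha_L <= 2 for oscillation-free central differencing and a Courant number v*dt/dx <= 1; these textbook limits are an assumption here, since the report itself only says the limits depend on the dispersivity:

        def check_grid(dx, dt, velocity, dispersivity_L):
            """Check the usual restrictions for oscillation-free central
            differencing of advection-dispersion transport (textbook values,
            not numbers taken from the report)."""
            pe = dx / dispersivity_L            # grid Peclet number
            cr = velocity * dt / dx             # Courant number
            return {"grid_peclet": pe, "peclet_ok": pe <= 2.0,
                    "courant": cr, "courant_ok": cr <= 1.0}

        print(check_grid(dx=10.0, dt=5.0, velocity=1.2, dispersivity_L=6.0))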

  7. Calibrating Urgency: Triage Decision-Making in a Pediatric Emergency Department

    ERIC Educational Resources Information Center

    Patel, Vimla L.; Gutnik, Lily A.; Karlin, Daniel R.; Pusic, Martin

    2008-01-01

    Triage, the first step in the assessment of emergency department patients, occurs in a highly dynamic environment that functions under constraints of time, physical space, and patient needs that may exceed available resources. Through triage, patients are placed into one of a limited number of categories using a subset of diagnostic information.…

  8. Effect of resource constraints on intersimilar coupled networks.

    PubMed

    Shai, S; Dobson, S

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We find that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.
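
    A sketch of the constrained update, with a per-step contact budget k_max and neighbor lists pooled over both coupled networks; this is illustrative of the model class, not the authors' exact formulation:

        import numpy as np

        def constrained_sir_step(adj, status, beta, gamma, k_max, rng):
            """One time step of an SIR process in which each infected node may
            contact at most k_max neighbors. status: 0=S, 1=I, 2=R; adj[v] is
            the array of v's neighbors pooled over the two coupled networks."""
            new_status = status.copy()
            for node in np.flatnonzero(status == 1):
                nbrs = adj[node]
                if nbrs.size > k_max:                    # contention: sample a budget
                    nbrs = rng.choice(nbrs, size=k_max, replace=False)
                for v in nbrs:
                    if status[v] == 0 and rng.random() < beta:
                        new_status[v] = 1                # infection attempt
                if rng.random() < gamma:
                    new_status[node] = 2                 # recovery
            return new_status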

  9. Investigating the use of a rational Runge Kutta method for transport modelling

    NASA Astrophysics Data System (ADS)

    Dougherty, David E.

    An unconditionally stable explicit time integrator has recently been developed for parabolic systems of equations. This rational Runge-Kutta (RRK) method, proposed by Wambecq [1] and Hairer [2], has been applied by Liu et al. [3] to linear heat conduction problems in a time-partitioned solution context. An important practical question is whether the method is also applicable to the solution of (nearly) hyperbolic equations. In this paper the RRK method is applied to a nonlinear heat conduction problem, the advection-diffusion equation, and the hyperbolic Buckley-Leverett problem. The method is, indeed, found to be unconditionally stable for the linear heat conduction problem and performs satisfactorily for the nonlinear heat flow case. A heuristic limitation on the utility of RRK for the advection-diffusion equation arises in the Courant number: for the second-order accurate, one-step, two-stage RRK method, a limiting Courant number of 2 applies. First-order upwinding is not as effective when used with RRK as with Euler one-step methods. The method is found to perform poorly for the Buckley-Leverett problem.

  10. Effect of resource constraints on intersimilar coupled networks

    NASA Astrophysics Data System (ADS)

    Shai, S.; Dobson, S.

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We find that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.

  11. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of the potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and a temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach: a synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-nearest-neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or the losses associated with a certain probability of occurrence can be estimated for the entire study area.
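
    A sketch of the k-nearest-neighbour disaggregation step: each synthetic day borrows the rescaled hourly profile of one of its k closest observed days. This illustrates the general method, not the authors' exact implementation:

        import numpy as np

        def knn_disaggregate(daily_synth, daily_obs, hourly_obs, k, rng):
            """Disaggregate synthetic daily rainfall totals to hourly values.
            daily_obs[i] is an observed daily total whose 24 hourly values are
            hourly_obs[i]."""
            out = np.zeros((daily_synth.size, 24))
            for i, d in enumerate(daily_synth):
                nn = np.argsort(np.abs(daily_obs - d))[:k]   # k closest daily totals
                profile = hourly_obs[rng.choice(nn)]
                if profile.sum() > 0:
                    out[i] = profile * (d / profile.sum())   # rescale to match d
            return out

        rng = np.random.default_rng(42)
        daily_obs = rng.gamma(2.0, 5.0, size=1000)           # toy observed record (mm)
        hourly_obs = rng.dirichlet(np.ones(24), size=1000) * daily_obs[:, None]
        print(knn_disaggregate(np.array([12.0, 3.5]), daily_obs, hourly_obs, 10, rng))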

  12. How many steps/day are enough? For older adults and special populations

    PubMed Central

    2011-01-01

    Older adults and special populations (living with disability and/or chronic illness that may limit mobility and/or physical endurance) can benefit from practicing a more physically active lifestyle, typically by increasing ambulatory activity. Step counting devices (accelerometers and pedometers) offer an opportunity to monitor daily ambulatory activity; however, an appropriate translation of public health guidelines in terms of steps/day is unknown. Therefore this review was conducted to translate public health recommendations in terms of steps/day. Normative data indicate that 1) healthy older adults average 2,000-9,000 steps/day, and 2) special populations average 1,200-8,800 steps/day. Pedometer-based interventions in older adults and special populations elicit a weighted increase of approximately 775 steps/day (or an effect size of 0.26) and 2,215 steps/day (or an effect size of 0.67), respectively. There is no evidence to inform a moderate intensity cadence (i.e., steps/minute) in older adults at this time. However, using the adult cadence of 100 steps/minute to demark the lower end of an absolutely defined moderate intensity (i.e., 3 METs), and multiplying this by 30 minutes, produces a reasonable heuristic (i.e., guiding) value of 3,000 steps. This cadence may nevertheless be unattainable in some frail/diseased populations. Regardless, to truly translate public health guidelines, these steps should be taken over and above activities performed in the course of daily living, be of at least moderate intensity accumulated in bouts of at least 10 minutes, and add up to at least 150 minutes over the week. Considering a daily background of 5,000 steps/day (which may actually be too high for some older adults and/or special populations), a computed translation approximates 8,000 steps on days that include a target of achieving 30 minutes of moderate-to-vigorous physical activity (MVPA), and approximately 7,100 steps/day if averaged over a week. Measured directly and including these background activities, the evidence suggests that 30 minutes of daily MVPA accumulated in addition to habitual daily activities in healthy older adults is equivalent to taking approximately 7,000-10,000 steps/day. Those living with disability and/or chronic illness (that limits mobility and/or physical endurance) display lower levels of background daily activity, and this will affect whole-day estimates of recommended physical activity. PMID:21798044

  13. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly, by dissolution of clusters and subsequent chain-addition of monomers to stable clusters (Ostwald ripening), in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se²⁻ (S²⁻), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  14. Preparing to take the USMLE Step 1: a survey on medical students' self-reported study habits.

    PubMed

    Kumar, Andre D; Shah, Monisha K; Maley, Jason H; Evron, Joshua; Gyftopoulos, Alex; Miller, Chad

    2015-05-01

    The United States Medical Licensing Examination (USMLE) Step 1 is a computerised multiple-choice examination that tests the basic biomedical sciences. It is administered after the second year in a traditional four-year MD programme. Most Step 1 scores fall between 140 and 260, with a mean (SD) of 227 (22). Step 1 scores are an important selection criterion for residency choice, yet little is known about which study habits are associated with a higher score. To identify which self-reported study habits correlate with a higher Step 1 score, a survey regarding Step 1 study habits was sent to third-year medical students at Tulane University School of Medicine every year between 2009 and 2011, approximately 3 months after the examination. 256 out of 475 students (54%) responded. The mean (SD) Step 1 score was 229.5 (22.1). Students who estimated studying more than 8-11 h per day had higher scores (p<0.05), but there was no added benefit from additional study time. Those who reported studying for fewer than 40 days achieved higher scores (p<0.05). Those who estimated completing >2000 practice questions also obtained higher scores (p<0.01). Students who reported studying in a group, spending the majority of study time on practice questions, or taking >40 preparation days did not achieve higher scores. Certain self-reported study habits may correlate with a higher Step 1 score compared with others. Given the importance of a high Step 1 score for residency choice, it is important to further identify which characteristics may lead to a higher score. Published by the BMJ Publishing Group Limited.

  15. Flexible nano- and microliter injections on a single liquid chromatography-mass spectrometry system: Minimizing sample preparation and maximizing linear dynamic range.

    PubMed

    Lubin, Arnaud; Sheng, Sheng; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip

    2017-11-17

    Lack of knowledge on the expected concentration range, or an insufficient linear dynamic range of the analytical method applied, is a common challenge for the analytical scientist. Samples that are above the upper limit of quantification are typically diluted and reanalyzed. The analysis of undiluted, highly concentrated samples can cause contamination of the system, while the dilution step is time consuming and, as is the case for any sample preparation step, also potentially leads to precipitation, adsorption, or degradation of the analytes.

  16. Microbiological Load of Edible Insects Found in Belgium.

    PubMed

    Caparros Megido, Rudy; Desmedt, Sandrine; Blecker, Christophe; Béra, François; Haubruge, Éric; Alabi, Taofic; Francis, Frédéric

    2017-01-13

    Edible insects are gaining more and more attention as a sustainable source of animal protein for food and feed in the future. In Belgium, some insect products can be found on the market, and consumers are sourcing fresh insects from fishing stores or turning to traditional markets for exotic insects that are illegal and not sanitarily controlled. From this perspective, this study aims to characterize the microbial load of edible insects found in Belgium (i.e., fresh mealworms and house crickets from European farms and smoked termites and caterpillars from a traditional Congolese market) and to evaluate the efficiency of different processing methods (blanching for all species; freeze-drying and sterilization for European species) in reducing microorganism counts. All untreated insect samples had a total aerobic count higher than the limit for fresh minced meat (6.7 log cfu/g). Nevertheless, a species-dependent blanching step reduced the total aerobic count below this limit, except for one caterpillar species. Freeze-drying and sterilization treatments of European species were also effective in reducing the total aerobic count. Yeast and mold counts for untreated insects were above the Good Manufacturing Practice limits for raw meat, but all treatments reduced these microorganisms below this limit. These results confirm that fresh insects, but also smoked insects from non-European trades, need a cooking step (at least comprising an initial blanching step) before consumption. Blanching times for each studied insect species are therefore proposed and discussed.

  17. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error-control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
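
    A minimal sketch of the snapshot-selection idea (not the authors' code): a snapshot is kept only when the current POD basis reproduces it worse than a tolerance. For clarity the basis is recomputed here with a thin SVD, whereas the paper uses a single-pass incremental SVD precisely to avoid that memory cost.

    ```python
    import numpy as np

    def select_snapshots(snapshots, tol):
        """Greedy, error-controlled snapshot selection: keep a snapshot only
        if the current POD basis reproduces it with relative error > tol."""
        kept, basis = [], None
        for u in snapshots:                  # u: state vector at one time step
            if basis is not None:
                err = np.linalg.norm(u - basis @ (basis.T @ u)) / np.linalg.norm(u)
                if err <= tol:
                    continue                 # basis already represents this state
            kept.append(u)
            # illustrative recomputation; an incremental SVD avoids this cost
            basis, _, _ = np.linalg.svd(np.column_stack(kept), full_matrices=False)
        return kept, basis
    ```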

  18. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the geant4 Monte Carlo code

    PubMed Central

    Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2015-01-01

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt, and dose-averaged LET, LETd) using geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in geant4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the geant4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
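
    The two averages at the heart of the study can be stated compactly. The sketch below (our notation, not geant4 code) computes LETt and LETd from per-step energy deposits and step lengths; because LETd weights each step by its deposited energy, it is far more sensitive to fluctuations in energy deposition per step, and hence to the step-size limit.

    ```python
    import numpy as np

    def average_let(de, dl):
        """Track- and dose-averaged LET from per-step energy deposits de
        and step lengths dl (illustrative definitions, not geant4 code)."""
        de = np.asarray(de, dtype=float)
        dl = np.asarray(dl, dtype=float)
        let = de / dl                              # energy loss per unit path length
        let_t = let.mean()                         # LET_t: every step weighted equally
        let_d = np.sum(de * let) / np.sum(de)      # LET_d: steps weighted by dose
        return let_t, let_d
    ```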

  19. A two-step lyssavirus real-time polymerase chain reaction using degenerate primers with superior sensitivity to the fluorescent antigen test.

    PubMed

    Suin, Vanessa; Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael; Van Gucht, Steven

    2014-01-01

    A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 TCID50 (50% tissue culture infectious dose). The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.

  1. Step-by-step guideline for disease-specific costing studies in low- and middle-income countries: a mixed methodology

    PubMed Central

    Hendriks, Marleen E.; Kundu, Piyali; Boers, Alexander C.; Bolarinwa, Oladimeji A.; te Pas, Mark J.; Akande, Tanimola M.; Agbede, Kayode; Gomez, Gabriella B.; Redekop, William K.; Schultsz, Constance; Tan, Siok Swan

    2014-01-01

    Background Disease-specific costing studies can be used as input into cost-effectiveness analyses and provide important information for efficient resource allocation. However, limited data availability and limited expertise constrain such studies in low- and middle-income countries (LMICs). Objective To describe a step-by-step guideline for conducting disease-specific costing studies in LMICs where data availability is limited and to illustrate how the guideline was applied in a costing study of cardiovascular disease prevention care in rural Nigeria. Design The step-by-step guideline provides practical recommendations on methods and data requirements for six sequential steps: 1) definition of the study perspective, 2) characterization of the unit of analysis, 3) identification of cost items, 4) measurement of cost items, 5) valuation of cost items, and 6) uncertainty analyses. Results We discuss the necessary tradeoffs between the accuracy of estimates and data availability constraints at each step and illustrate how a mixed methodology of accurate bottom-up micro-costing and more feasible approaches can be used to make optimal use of all available data. An illustrative example from Nigeria is provided. Conclusions An innovative, user-friendly guideline for disease-specific costing in LMICs is presented, using a mixed methodology to account for limited data availability. The illustrative example showed that the step-by-step guideline can be used by healthcare professionals in LMICs to conduct feasible and accurate disease-specific cost analyses. PMID:24685170

  2. A 2-DOF model of an elastic rocket structure excited by a follower force

    NASA Astrophysics Data System (ADS)

    Brejão, Leandro F.; da Fonseca Brasil, Reyolando Manoel L. R.

    2017-10-01

    We present a two-degree-of-freedom model of an elastic rocket structure excited by the follower force given by the motor thrust, which is assumed to be always in the direction of the tangent to the deformed shape of the device at its lower tip. The model comprises two massless rigid pinned bars, initially in vertical position, connected by rotational springs. Lumped masses and dampers are considered at the connections. The generalized coordinates are the angular displacements of the bars with respect to the vertical. We derive the equations of motion via Lagrange's equations and simulate their time evolution using a fourth-order Runge-Kutta step-by-step numerical integration algorithm. Results indicate the possible occurrence of stable and unstable vibrations, such as limit cycles.
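
    For reference, the classical fourth-order Runge-Kutta update used in such step-by-step integration is sketched below. The 2-DOF right-hand side is a hypothetical linearized stand-in (names and parameters ours), not the equations of motion derived in the paper.

    ```python
    import numpy as np

    def rk4_step(f, t, y, dt):
        """Classical fourth-order Runge-Kutta step for y' = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def f(t, y, k=1.0, c=0.1, m=1.0, P=0.5):
        """Toy linearized 2-DOF dynamics, for illustration only.
        State y = [th1, th2, w1, w2]; P mimics a follower-force term."""
        th1, th2, w1, w2 = y
        a1 = (-k * (2 * th1 - th2) - c * w1 + P * (th2 - th1)) / m
        a2 = (-k * (th2 - th1) - c * w2) / m
        return np.array([w1, w2, a1, a2])
    ```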

  3. Multidimensional FEM-FCT schemes for arbitrary time stepping

    NASA Astrophysics Data System (ADS)

    Kuzmin, D.; Möller, M.; Turek, S.

    2003-05-01

    The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
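
    The flux-corrected-transport idea is easiest to see in a one-dimensional finite-difference setting. The sketch below illustrates the FCT paradigm with Zalesak's limiter (not the paper's FEM-FCT scheme): a low-order upwind flux is blended with a high-order Lax-Wendroff flux, and the antidiffusive correction is limited so the result stays within local bounds.

    ```python
    import numpy as np

    def fct_advect_step(u, a, dx, dt):
        """One FCT step for du/dt + a*du/dx = 0 (a > 0, periodic grid)."""
        c = a * dt / dx
        up = np.roll(u, -1)                                 # u_{i+1}
        fl = a * u                                          # upwind flux at i+1/2
        fh = 0.5 * a * (u + up) - 0.5 * c * a * (up - u)    # Lax-Wendroff flux
        A = fh - fl                                         # antidiffusive flux
        utd = u - c * (u - np.roll(u, 1))                   # low-order update

        # local bounds from old and transported solutions
        umax = np.maximum(u, utd)
        umax = np.maximum(np.roll(umax, 1), np.maximum(umax, np.roll(umax, -1)))
        umin = np.minimum(u, utd)
        umin = np.minimum(np.roll(umin, 1), np.minimum(umin, np.roll(umin, -1)))

        Am = np.roll(A, 1)                                  # flux at i-1/2
        Pp = np.maximum(Am, 0) - np.minimum(A, 0)           # total inflow into cell i
        Pm = np.maximum(A, 0) - np.minimum(Am, 0)           # total outflow from cell i
        Qp = (umax - utd) * dx / dt
        Qm = (utd - umin) * dx / dt
        Rp = np.where(Pp > 0, np.minimum(1.0, Qp / np.maximum(Pp, 1e-30)), 0.0)
        Rm = np.where(Pm > 0, np.minimum(1.0, Qm / np.maximum(Pm, 1e-30)), 0.0)

        # Zalesak limiter at face i+1/2
        C = np.where(A >= 0,
                     np.minimum(np.roll(Rp, -1), Rm),
                     np.minimum(Rp, np.roll(Rm, -1)))
        return utd - (dt / dx) * (C * A - np.roll(C * A, 1))
    ```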

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  5. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
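
    The IMEX structure can be illustrated with its simplest member, a first-order forward/backward Euler pair: the non-stiff dynamics are advanced explicitly while a stiff linear term (standing in for, e.g., vertically propagating acoustics) is solved implicitly. A minimal sketch of the idea only, not one of the ARK schemes evaluated in the paper.

    ```python
    import numpy as np

    def imex_euler_step(u, dt, f_explicit, L):
        """One first-order IMEX step for du/dt = f_explicit(u) + L @ u,
        with the stiff linear part L treated implicitly (backward Euler)
        and the non-stiff part explicitly (forward Euler)."""
        rhs = u + dt * f_explicit(u)                 # explicit (non-stiff) part
        I = np.eye(len(u))
        return np.linalg.solve(I - dt * L, rhs)      # implicit (stiff) solve
    ```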

  6. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data, and unknown boundary conditions are the main factors that limit the modeling and simulation of groundwater level (GL) time series. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall, and runoff, considering various input scenarios to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both calibration and validation steps.
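
    Threshold-based wavelet de-noising of the kind applied to the GL series can be sketched with the PyWavelets library; the wavelet family, decomposition level, and universal threshold below are illustrative choices, not necessarily the paper's exact configuration.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=3):
        """Soft-threshold the detail coefficients of a wavelet decomposition
        and reconstruct; sigma is estimated from the finest-scale details."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]
    ```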

  8. Objective assessment of physical activity and sedentary behaviour in knee osteoarthritis patients - beyond daily steps and total sedentary time.

    PubMed

    Sliepen, Maik; Mauricio, Elsa; Lipperts, Matthijs; Grimm, Bernd; Rosenbaum, Dieter

    2018-02-23

    Knee osteoarthritis patients may become physically inactive due to pain and functional limitations. Whether physical activity exerts a protective or harmful effect depends on its frequency, intensity, time, and type (F.I.T.T.). The F.I.T.T. dimensions should therefore be assessed during daily life, which so far has hardly been feasible. Furthermore, physical activity should be assessed within subgroups of patients, as they might experience different activity limitations. Therefore, this study aimed to objectively describe physical activity, by assessing the F.I.T.T. dimensions, and sedentary behaviour of knee osteoarthritis patients during daily life. An additional goal was to determine whether activity events, based on different types and durations of physical activity, were able to discriminate between subgroups of KOA patients based on risk factors. Clinically diagnosed knee osteoarthritis patients (according to American College of Rheumatology criteria) were monitored for 1 week with a tri-axial accelerometer. Furthermore, they performed three functional tests and completed the Knee Osteoarthritis Outcome Score. Physical activity levels were described for knee osteoarthritis patients and compared between subgroups. The sixty-one patients performed on average 7,303 level steps, 319 ascending steps, 312 descending steps, and 601 bicycle crank revolutions per day. Most waking hours were spent sedentary (61%), with 4.6 bouts of long duration (>30 min). Specific events, particularly ascending and descending stairs/slopes, brief walking and sedentary bouts, and prolonged walking bouts, varied between subgroups. In this sample of KOA patients, the most common form of activity was level walking, although cycling and stair-climbing activities occurred frequently, highlighting the relevance of distinguishing between these types of PA. Active time encompassed only a small portion of waking hours, as patients spent most of their time sedentary, which was exacerbated by frequently occurring prolonged bouts. Event-based parameters, such as stair climbing or short bouts of walking or sedentary time, were found more capable of discriminating between subgroups of KOA patients than overall levels of PA and sedentary time. Thereby, subtle limitations in the physical behaviour of KOA subgroups were revealed, which might ultimately be targeted in rehabilitation programs. German Clinical Trials Registry: DRKS00008735, registered 02.12.2015.

  9. Enhanced capillary electrophoretic screening of Alzheimer based on direct apolipoprotein E genotyping and one-step multiplex PCR.

    PubMed

    Woo, Nain; Kim, Su-Kang; Sun, Yucheng; Kang, Seong Ho

    2018-01-01

    Human apolipoprotein E (ApoE) is associated with high cholesterol levels, coronary artery disease, and especially Alzheimer's disease. In this study, we developed an ApoE genotyping and one-step multiplex polymerase chain reaction (PCR) based capillary electrophoresis (CE) method for the enhanced diagnosis of Alzheimer's. The primer mixture for the ApoE genes enabled direct one-step multiplex PCR from whole blood without DNA purification. The combination of direct ApoE genotyping and one-step multiplex PCR minimized the risk of DNA loss or contamination arising from the DNA purification process. All amplified PCR products with different DNA lengths (112-, 253-, 308-, 444-, and 514-bp DNA) of the ApoE genes were analyzed within 2 min by an extended voltage programming (VP)-based CE under the optimal conditions. The extended VP-based CE method was at least 120-180 times faster than conventional slab gel electrophoresis methods. In particular, all amplified DNA fragments were detected in fewer than 10 PCR cycles using a laser-induced fluorescence detector. The detection limits for the ApoE genes were 6.4-62.0 pM, approximately 100-100,000 times more sensitive than previous Alzheimer's diagnosis methods. In addition, the combined one-step multiplex PCR and extended VP-based CE method was also successfully applied to the analysis of ApoE genotypes in Alzheimer's patients and normal samples and confirmed the distribution probability of allele frequencies. This combination of direct one-step multiplex PCR and an extended VP-based CE method should increase the diagnostic reliability of Alzheimer's with high sensitivity and short analysis time, even with direct use of whole blood.

  10. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. Furthermore, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
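
    The distinction between the two projections can be made concrete for a single backward-Euler step with a reduced basis V: Galerkin drives the reduced residual V^T r to zero, while LSPG minimizes ||r|| of the time-discrete residual itself. A toy sketch under these assumptions, with SciPy solvers chosen for convenience:

    ```python
    import numpy as np
    from scipy.optimize import fsolve, least_squares

    def rom_step(un, dt, f, V, lspg=True):
        """One backward-Euler step of a reduced-order model u ~ V @ q."""
        def residual(q):
            u = V @ q
            return u - un - dt * f(u)              # time-discrete residual r(q)
        q0 = V.T @ un                              # initial reduced coordinates
        if lspg:
            q = least_squares(residual, q0).x      # LSPG: minimize ||r(q)||
        else:
            q = fsolve(lambda q: V.T @ residual(q), q0)   # Galerkin: V^T r(q) = 0
        return V @ q
    ```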

  12. Vibration control by limiting the maximum axial forces in space trusses

    NASA Technical Reports Server (NTRS)

    Chawla, Vikas; Utku, Senol; Wada, Ben K.

    1993-01-01

    Proposed here is a method of vibration control based on limiting the maximum axial forces in the active members of an adaptive truss. The actuators simulate elastic rigid-plastic behavior and consume the vibrational energy as work. The method is applicable to both statically determinate as well as indeterminate truss structures. However, for energy-efficient control of statically indeterminate trusses, extra actuators may be provided on the redundant bars. An energy formulation relating the various control parameters is derived to get an estimate of the control time. Since the simulation of elastic rigid-plastic behavior requires a piecewise linear control law, a general analytical solution is not possible. Numerical simulation by step-by-step integration is performed to simulate the control of an example truss structure. The problems of application to statically indeterminate trusses and optimal actuator placement are identified for future work.

  13. In Situ Observation of Dissolution of Oxide Inclusions in Steelmaking Slags

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Mu, Wangzhong; Dogan, Neslihan

    2018-05-01

    Better understanding of removal of non-metallic inclusions is of importance in the steelmaking process to control the cleanliness of steel. In this study, the dissolution rate of Al2O3 and Al2TiO5 inclusions in a liquid CaO-SiO2-Al2O3 slag was measured using high-temperature confocal scanning laser microscopy (HT-CSLM) at 1550°C. The dissolution rate of inclusions is expressed as a function of the rate of decrease of the radius of solid particles with time. It is found that Al2O3 inclusions have a slower dissolution rate than that of Al2TiO5 inclusions at 1550°C. The rate-limiting steps are investigated in terms of a shrinking core model. It is shown that the rate-limiting step for dissolution of both inclusion types is mass transfer in the slag at 1550°C.

  14. Cancer imaging using Surface-Enhanced Resonance Raman Scattering (SERRS) nanoparticles

    PubMed Central

    Harmsen, Stefan; Wall, Matthew A.; Huang, Ruimin

    2017-01-01

    The unique spectral signatures and biologically inert compositions of surface-enhanced (resonance) Raman scattering (SE(R)RS) nanoparticles make them promising contrast agents for in vivo cancer imaging. Subtle aspects of their preparation can shift their limit of detection by orders of magnitude. In this protocol, we present the optimized, step-by-step procedure for generating reproducible SERRS nanoparticles with femtomolar (10⁻¹⁵ M) limits of detection. We introduce several applications of these nanoprobes for biomedical research, with a focus on intraoperative cancer imaging via Raman imaging. A detailed account is provided for successful intravenous administration of SERRS nanoparticles such that delineation of cancerous lesions may be achieved without the need for specific biomarker targeting. The time estimate for this straightforward, yet comprehensive protocol from initial de novo gold nanoparticle synthesis to SE(R)RS nanoparticle contrast-enhanced preclinical Raman imaging in animal models is ~96 h. PMID:28686581

  15. An electrochemical sensing platform based on local repression of electrolyte diffusion for single-step, reagentless, sensitive detection of a sequence-specific DNA-binding protein.

    PubMed

    Zhang, Yun; Liu, Fang; Nie, Jinfang; Jiang, Fuyang; Zhou, Caibin; Yang, Jiani; Fan, Jinlong; Li, Jianping

    2014-05-07

    In this paper, we report for the first time an electrochemical biosensor for single-step, reagentless, and picomolar detection of a sequence-specific DNA-binding protein using a double-stranded, electrode-bound DNA probe terminally modified with a redox active label close to the electrode surface. This new methodology is based upon local repression of electrolyte diffusion associated with protein-DNA binding that leads to reduction of the electrochemical response of the label. In the proof-of-concept study, the resulting electrochemical biosensor was quantitatively sensitive to the concentrations of the TATA binding protein (TBP, a model analyte) ranging from 40 pM to 25.4 nM with an estimated detection limit of ∼10.6 pM (∼80 to 400-fold improvement on the detection limit over previous electrochemical analytical systems).

  16. Polymer microchip capillary electrophoresis of proteins either off- or on-chip labeled with chameleon dye for simplified analysis

    PubMed Central

    Yu, Ming; Wang, Hsiang-Yu; Woolley, Adam

    2009-01-01

    Microchip capillary electrophoresis of proteins labeled either off- or on-chip with the “chameleon” CE dye 503 using poly(methyl methacrylate) microchips is presented. A simple dynamic coating using the cationic surfactant cetyltrimethyl ammonium bromide prevented nonspecific adsorption of protein and dye to the channel walls. The labeling reactions for both off- and on-chip labeling proceeded at room temperature without requiring heating steps. In off-chip labeling, a 9 ng/mL concentration detection limit for bovine serum albumin (BSA), corresponding to a ~7 fg (100 zmol) mass detection limit, was obtained. In on-chip tagging, the free dye and protein were placed in different reservoirs of the microchip, and an extra incubation step was not needed. A 1 μg/mL concentration detection limit for BSA, corresponding to a ~700 fg (10 amol) mass detection limit, was obtained from this protocol. The earlier elution time of the BSA peak in on-chip labeling resulted from fewer total labels on each protein molecule. Our on-chip labeling method is an important part of automation in miniaturized devices. PMID:19924700

  17. A step-defined sedentary lifestyle index: <5000 steps/day.

    PubMed

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of using a reoccurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. Adults taking <5000 steps/day are more likely to have a lower household income and be female, older, of African-American vs. European-American heritage, a current vs. never smoker, and (or) living with chronic disease and (or) disability. Little is known about how contextual factors (e.g., built environment) foster such low levels of step-defined physical activity. Unfavorable indicators of body composition and cardiometabolic risk have been consistently associated with taking <5000 steps/day. The acute transition (3-14 days) of healthy active young people from higher (>10 000) to lower (<5000 or as low as 1500) daily step counts induces reduced insulin sensitivity and glycemic control, increased adiposity, and other negative changes in health parameters. Although few alternative values have been considered, the continued use of <5000 steps/day as a step-defined sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  18. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
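
    A toy unconstrained instance shows the structure of such methods: a prediction step that compensates the known drift of the optimizer, followed by one or more gradient-correction steps at the new time. The tracked objective below is a simple quadratic of our own choosing; the paper's methods additionally handle constraints.

    ```python
    import numpy as np

    def track(b, bdot, T, h, x0, gamma=0.5, n_corr=1):
        """Prediction-correction tracking of x*(t) = argmin 0.5*||x - b(t)||^2.
        Here the Hessian is the identity, so the prediction step
        -H^{-1} grad_tx f * h reduces to + bdot(t) * h."""
        xs, x = [], np.asarray(x0, dtype=float)
        for k in range(T):
            t = k * h
            x = x + h * bdot(t)                  # prediction: follow the drift
            for _ in range(n_corr):              # correction at time t + h
                x = x - gamma * (x - b(t + h))   # gradient step on f(., t+h)
            xs.append(x.copy())
        return np.array(xs)
    ```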

  19. Operational Demands of AAC Mobile Technology Applications on Programming Vocabulary and Engagement During Professional and Child Interactions.

    PubMed

    Caron, Jessica; Light, Janice; Drager, Kathryn

    2016-01-01

    Typically, the vocabulary in augmentative and alternative communication (AAC) technologies is pre-programmed by manufacturers or by parents and professionals outside of daily interactions. Because vocabulary needs are difficult to predict, young children who use aided AAC often do not have access to vocabulary concepts as the need and interest arises in their daily interactions, limiting their vocabulary acquisition and use. Ideally, parents and professionals would be able to add vocabulary to AAC technologies "just-in-time" as required during daily interactions. This study compared the effects of two AAC applications for mobile technologies, GoTalk Now (which required more programming steps) and EasyVSD (which required fewer programming steps), on the number of visual scene displays (VSDs) and hotspots created in 10-min interactions between eight professionals and preschool-aged children with typical development. All of the professionals were able to create VSDs and add vocabulary during interactions with the children, but they created more VSDs and hotspots with the app that required fewer programming steps. Child engagement and programming-participation levels were high with both apps, though both variables were higher with the app that required fewer programming steps. These results suggest that apps with fewer programming steps may reduce operational demands and better support professionals to (a) respond to the child's input, (b) use just-in-time programming during interactions, (c) provide access to more vocabulary, and (d) increase participation.

  20. Predictor - Predictive Reaction Design via Informatics, Computation and Theories of Reactivity

    DTIC Science & Technology

    2017-10-10

    into more complex and valuable molecules, but are limited by: 1. The extensive time it takes to design and optimize a synthesis 2. Multi-step...system. As it is fully compatible with the industry-standard SQL, designing a server-based system at a later time will be trivial. Producing a JAVA front...Report: PREDICTOR - Predictive REaction Design via Informatics, Computation and Theories of Reactivity. The goal of this program was to create a cyber

  1. Toward a Healthy Community (Organizing Events for Community Health Promotion).

    ERIC Educational Resources Information Center

    Public Health Service (DHHS), Rockville, MD. Office of Disease Prevention and Health Promotion.

    This booklet suggests the first steps communities can take in assessing their needs and resources and mobilizing public interest and support for health promotion. It is based on an approach to health education and community organization that recognizes the value of a highly visible, time-limited event, such as a health fair, a marathon, or an…

  2. Detection of viable Cryptosporidium parvum in soil by reverse transcription real-time PCR targeting hsp70 mRNA

    EPA Science Inventory

    Extraction of high-quality mRNA from Cryptosporidium parvum is a key step in PCR detection of viable oocysts in environmental samples. Current methods for monitoring oocysts are limited to water samples; therefore, the goal of this study was to develop a rapid and sensitive proce...

  3. Effect of a lateral step-up exercise protocol on quadriceps and lower extremity performance.

    PubMed

    Worrell, T W; Borchert, B; Erner, K; Fritz, J; Leerar, P

    1993-12-01

    Closed kinetic chain exercises have been promoted as more functional and more appropriate than open kinetic chain exercises. Limited research exists demonstrating the effect of closed kinetic chain exercise on quadriceps and lower extremity performance. The purpose of this study was to determine the effect of a lateral step-up exercise protocol on isokinetic quadriceps peak torque and the following lower extremity activities: 1) leg press, 2) maximal step-up repetitions with body weight plus 25%, 3) hop for distance, and 4) 6-m timed hop. Twenty subjects participated in a 4-week training period, and 18 subjects served as controls. For the experimental group, a repeated measures ANOVA comparing pretest and posttest values revealed significant improvements in the leg press (p ≤ .05), step-ups (p ≤ .05), hop for distance (p ≤ .05), and hop for time (p ≤ .05) and no significant increase in isokinetic quadriceps peak torque (p ≥ .05). Over the course of the training period, weight used for the step-up exercise increased (p ≤ .05), repetitions decreased (p ≤ .05), and step-up work did not change (p ≥ .05). For the control group, no significant change (p ≥ .05) occurred in any variable. The inability of the isokinetic dynamometer to detect increases in quadriceps performance is important because the isokinetic values are frequently used as criteria for return to functional activities. We conclude that closed kinetic chain testing and exercise provide additional means to assess and rehabilitate the lower extremity.

  4. A diffusive information preservation method for small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2013-06-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To address this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ≈ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make possible using a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
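
    The class of stochastic collision models referred to above replaces pairwise collisions with an Ornstein-Uhlenbeck-type velocity update, which can be integrated exactly over a time step. A hedged sketch of that model (not the D-IP scheme itself, whose IP quantities evolve separately):

    ```python
    import numpy as np

    def fp_velocity_update(v, dt, tau, kT_over_m, rng=None):
        """Exact one-step integration of the Langevin/Fokker-Planck velocity
        model: relaxation toward equilibrium plus Gaussian thermal noise.
        tau is a relaxation time; kT_over_m sets the thermal speed squared."""
        rng = np.random.default_rng() if rng is None else rng
        decay = np.exp(-dt / tau)
        sigma = np.sqrt(kT_over_m * (1.0 - decay**2))
        return v * decay + sigma * rng.standard_normal(v.shape)
    ```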

  5. Ultrafast learning in a hard-limited neural network pattern recognizer

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1996-03-01

    As we published in the last five years, supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (as it should be for most practical pattern recognition applications), the connection matrix required to meet this mapping can be obtained noniteratively in one step. Generally, there exist infinitely many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns becomes optimally robust in the recognition mode. The learning speed is very fast and close to real-time because the learning process is noniterative and one-step. This paper reports the theoretical analysis and the design of a practical character recognition system for recognizing hand-written alphabets. The experimental result is recorded in real time on an unedited video tape for demonstration purposes. This real-time movie shows that the recognition of the untrained hand-written alphabets is invariant to size, location, orientation, and writing sequence, even though the training is done with standard size, standard orientation, central location, and standard writing sequence.
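
    A crude illustration of one-step, noniterative learning (a pseudoinverse stand-in for the PLI-based construction described above, with hypothetical shapes): the connection matrix comes from a single linear solve rather than from iteration.

    ```python
    import numpy as np

    def one_step_train(X, Y):
        """Noniterative training of a hard-limited perceptron.
        X: (n_features, n_patterns); Y: (n_outputs, n_patterns), entries +/-1.
        Solves W X ~ Y in the least-squares sense in one step."""
        return Y @ np.linalg.pinv(X)

    def recognize(W, x):
        """Hard-limited output for an input pattern x."""
        return np.sign(W @ x)
    ```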

  6. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ETa) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ETa over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ETc). This comment reformulates those ETa equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ETa on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ETa is discussed, besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ETa is more relevant to the final crop yield.

  7. A Cascaded Approach for Correcting Ionospheric Contamination with Large Amplitude in HF Skywave Radars

    PubMed Central

    Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong

    2014-01-01

    Large-amplitude ionospheric phase perturbations broaden the sea clutter's Bragg peaks until they overlap, and traditional decontamination methods based on filtering the Bragg peaks perform poorly, which greatly limits the detection performance of HF skywave radars. For ionospheric phase perturbations with large amplitude, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. The approach consists of two correction steps. In the first step, a time-frequency distribution method based on the improved S-method is adopted, and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. In the second correction step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference; ionospheric phase perturbations with large amplitude can be corrected at low signal-to-noise ratio (SNR); and the cascaded correction method performs well. PMID:24578656

  8. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
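
    The operator-split structure can be caricatured on a 2-D grid with a straight field along x: one perpendicular diffusion step followed by one parallel step. The sketch below shows only the splitting; the paper's parallel step is a Lagrangian field-line integral (which is what removes the perpendicular pollution), not the explicit stencil used here.

    ```python
    import numpy as np

    def split_step(T, chi_par, chi_perp, dx, dy, dt):
        """One Lie-split step of anisotropic diffusion with the field along x.
        Toy explicit version for illustration (assumes dt small enough for
        stability of both sub-steps); T[i, j] indexes (x, y)."""
        r_perp = chi_perp * dt / dy**2
        T[:, 1:-1] += r_perp * (T[:, 2:] - 2 * T[:, 1:-1] + T[:, :-2])  # perp step
        r_par = chi_par * dt / dx**2
        T[1:-1, :] += r_par * (T[2:, :] - 2 * T[1:-1, :] + T[:-2, :])   # parallel step
        return T
    ```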

  9. Musculoskeletal ultrasound: how to treat calcific tendinitis of the rotator cuff by ultrasound-guided single-needle lavage technique.

    PubMed

    Lee, Kenneth S; Rosas, Humberto G

    2010-09-01

    The purpose of this video article is to illustrate the ultrasound appearance of calcium deposition in the rotator cuff and to provide a detailed step-by-step protocol for performing the ultrasound-guided single-needle lavage technique for the treatment of calcific tendinitis, with emphasis on patient positioning, necessary supplies, real-time lavage technique, and steroid injection into the subacromial-subdeltoid bursa. Musculoskeletal ultrasound is well established as a safe, cost-effective imaging tool for diagnosing and treating common musculoskeletal disorders. Calcific tendinitis of the rotator cuff is a common, disabling cause of shoulder pain. Although most cases are self-limiting, a subset of patients is refractory to conservative therapy and requires treatment intervention. Ultrasound-guided lavage is an effective and safe, minimally invasive alternative to surgery that is not readily offered in the United States, perhaps because of the limited prevalence of musculoskeletal ultrasound programs and limited training. On completion of this video article, the participant should be able to develop an appropriate diagnostic and therapeutic algorithm for the treatment of calcific tendinitis of the rotator cuff using ultrasound.

  10. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, including overshooting the step resulting from GPR in order to obtain a higher degree of interpolation versus extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
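
    The surrogate idea can be sketched in one dimension with scikit-learn: fit a GPR with a twice-differentiable Matérn-5/2 kernel to the points evaluated so far, then step to the surrogate's minimizer. This illustrates only the GPR-as-surrogate loop, not the DL-FIND optimizer's overshooting and convergence machinery.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def gpr_minimize(f, x0, span=1.0, n_iter=20):
        """1-D GPR-assisted minimization sketch: repeatedly fit a surrogate
        to the sampled points and evaluate f at the surrogate's minimizer."""
        X, y = [x0], [f(x0)]
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        for _ in range(n_iter):
            gp.fit(np.array(X).reshape(-1, 1), y)
            grid = np.linspace(min(X) - span, max(X) + span, 400)
            xn = grid[np.argmin(gp.predict(grid.reshape(-1, 1)))]
            X.append(float(xn)); y.append(f(xn))
        return X[int(np.argmin(y))]        # best point found
    ```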

  11. Semi-Automatic Building Models and Façade Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time, and the need for it has recently grown rapidly owing to improved geo-web services and the popularity of smart devices. Nowadays 3D urban models provided by, for example, Google Earth use aerial photos for 3D urban modelling, but there are some limitations: immediate updates for changes to building models are difficult, many buildings lack a 3D model and texture, and large resources are needed for maintenance and updating. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera-geometry estimation step, an image-matching step, and a façade-mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing edge-length ratios; the results showed an average length-ratio error of 5.8%. Through this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  12. The importance of daily physical activity for improved exercise tolerance in heart failure patients with limited access to centre-based cardiac rehabilitation.

    PubMed

    Sato, Noriaki; Origuchi, Hideki; Yamamoto, Umpei; Takanaga, Yasuhiro; Mohri, Masahiro

    2012-09-01

    Supervised cardiac rehabilitation provided at dedicated centres ameliorates exercise intolerance in patients with chronic heart failure. The aim was to correlate the amount of physical activity outside the hospital with improved exercise tolerance in patients with limited access to centre-based programs. Forty patients (median age 69 years) with stable heart failure due to systolic left ventricular dysfunction participated in cardiac rehabilitation once per week for five months. Using a validated single-axial accelerometer, the number of steps and physical activity-related energy expenditures on nonrehabilitation days were determined. Median peak oxygen consumption increased from 14.4 mL/kg/min (interquartile range 12.9 to 17.8 mL/kg/min) to 16.4 mL/kg/min (interquartile range 13.9 to 19.1 mL/kg/min); P<0.0001, in association with a decreased slope of the minute ventilation to carbon dioxide production plot (34.2 [interquartile range 31.3 to 38.1] versus 32.7 [interquartile range 30.3 to 36.5]; P<0.0001). Changes in peak oxygen consumption were correlated with the daily number of steps (P<0.01) and physical activity-related energy expenditures (P<0.05). Furthermore, these changes were significantly correlated with total exercise time per day and with time spent in light (≤3 metabolic equivalents) exercise, but not with time spent in moderate/vigorous (>3 metabolic equivalents) exercise. The number of steps and energy expenditures outside the hospital were correlated with improved exercise capacity. An accelerometer may be useful for guiding home-based cardiac rehabilitation.

  13. Jitter-correction for IR/UV-XUV pump-probe experiments at the FLASH free-electron laser

    DOE PAGES

    Savelyev, Evgeny; Boll, Rebecca; Bomme, Cedric; ...

    2017-04-10

    In pump-probe experiments employing a free-electron laser (FEL) in combination with a synchronized optical femtosecond laser, the arrival-time jitter between the FEL pulse and the optical laser pulse often severely limits the temporal resolution that can be achieved. Here, we present a pump-probe experiment on the UV-induced dissociation of 2,6-difluoroiodobenzene (C6H3F2I) molecules performed at the FLASH FEL that takes advantage of recent upgrades of the FLASH timing and synchronization system to obtain high-quality data that are not limited by the FEL arrival-time jitter. We discuss in detail the necessary data analysis steps and describe the origin of the time-dependent effects in the yields and kinetic energies of the fragment ions that we observe in the experiment.

  14. TRUMP. Transient & Steady-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.

  16. Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors

    DTIC Science & Technology

    2008-03-13

    A strong incentive exists to reduce airfoil count in aircraft engines ... (Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried ... (based on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iterations at each time step.

  17. Falling coupled oscillators and trigonometric sums

    NASA Astrophysics Data System (ADS)

    Holcombe, S. R.

    2018-02-01

    A method for evaluating finite trigonometric summations is applied to a system of N coupled oscillators under acceleration. The initial motion of the nth particle is shown to be of order T^{2n+2} for small time T, and the end particle in the continuum limit is shown to initially remain stationary for the time it takes a wavefront to reach it. The average velocities of particles at the ends of the system are shown to take discrete values in a step-like manner.

  18. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

    Common objectives of model predictive control (MPC) design are a large initial feasible region, a low online computational burden, and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  19. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.

  20. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or the Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because (1) the high-dimensional Hessian matrix is dense and costs too much memory, and (2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence-based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
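
    As background for the speed comparison, a minimal NumPy sketch of the MUR baseline that MFGD and L-FGD accelerate is given below, written for plain squared-Euclidean NMF; the graph-regularization term of GNMF and the L-BFGS step-size search of L-FGD are omitted, and all constants are illustrative.

    import numpy as np

    def nmf_mur(X, r, n_iter=200, eps=1e-9):
        """Plain multiplicative-update NMF: X (m x n) ~ W (m x r) @ H (r x n).

        This is the slow-converging baseline the abstract describes; GNMF adds
        a graph-regularization term, and L-FGD replaces the fixed rescaled
        gradient step with an L-BFGS-searched optimal step size.
        """
        m, n = X.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, r))
        H = rng.random((r, n))
        for _ in range(n_iter):
            # Each update moves along the rescaled negative gradient with a
            # built-in (non-optimal) step size -- hence MUR's slow convergence.
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H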

  1. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential to realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones using energy-efficient primitives. A model of the Cornell Ranger (a passive-dynamics-inspired robot) is used to illustrate the technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the locations of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy, as sketched below.
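
    A toy dynamic-programming sketch of that final planning stage follows, assuming a precomputed lookup function tcot(L) built from the optimization table; the stone positions and the simplified cost model are illustrative, not taken from the paper.

    import numpy as np

    def plan_steps(stones, tcot):
        """Pick a stepping-stone sequence minimizing total energy.

        stones: sorted 1-D array of stone positions; tcot(L) returns the cost
        of transport for a step of length L (the table produced by the
        trajectory optimization). Step energy = TCOT * weight * distance;
        weight is constant, so it is dropped here.
        """
        n = len(stones)
        cost = np.full(n, np.inf)
        prev = np.full(n, -1)
        cost[0] = 0.0
        for j in range(1, n):
            for i in range(j):
                L = stones[j] - stones[i]
                c = cost[i] + tcot(L) * L   # energy-per-weight of this step
                if c < cost[j]:
                    cost[j], prev[j] = c, i
        path, j = [], n - 1                 # reconstruct the footstep sequence
        while j >= 0:
            path.append(stones[j])
            j = prev[j]
        return path[::-1], cost[-1]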

  2. Engineering more stable, selectable marker-free autoluminescent mycobacteria by one step.

    PubMed

    Yang, Feng; Njire, Moses M; Liu, Jia; Wu, Tian; Wang, Bangxing; Liu, Tianzhou; Cao, Yuanyuan; Liu, Zhiyong; Wan, Junting; Tu, Zhengchao; Tan, Yaoju; Tan, Shouyong; Zhang, Tianyu

    2015-01-01

    In our previous study, we demonstrated that the use of autoluminescent Mycobacterium tuberculosis as a reporter strain had the potential to drastically reduce the time, effort, animals, and costs consumed in evaluating the activities of drugs and vaccines in live mice. However, the strains were relatively unstable and lost the reporter over time in the absence of selection. The kanamycin selection marker used was not the best choice, as it provides resistance to aminoglycosides, an important class of second-line drugs used in tuberculosis treatment. In addition, the marker could limit the utility of the strains for screening new potential drugs or evaluating drug combinations for tuberculosis treatment. The limited number of selection marker genes available for mycobacterial genetic manipulation is a major drawback for such marker-containing strains in many research fields. Therefore, selectable marker-free, more stable autoluminescent mycobacteria are highly needed. After trying several strategies, we successfully created such mycobacterial strains by using an integrative vector and removing both the resistance marker and integrase genes by Xer site-specific recombination in one step. The corresponding plasmid vectors developed in this study could be very convenient for constructing other selectable marker-free, more stable reporter mycobacteria with diverse applications.

  3. Inhibition of Insulin Amyloid Fibrillation by a Novel Amphipathic Heptapeptide

    PubMed Central

    Ratha, Bhisma N.; Ghosh, Anirban; Brender, Jeffrey R.; Gayen, Nilanjan; Ilyas, Humaira; Neeraja, Chilukoti; Das, Kali P.; Mandal, Atin K.; Bhunia, Anirban

    2016-01-01

    The aggregation of insulin into amyloid fibers has been a limiting factor in the development of fast-acting insulin analogues, creating a demand for excipients that limit aggregation. Despite this potential demand, inhibitors specifically targeting insulin have been few in number. Here we report a non-toxic and serum-stable designed heptapeptide, KR7 (KPWWPRR-NH2), that differs significantly from the primarily hydrophobic sequences previously used to interfere with insulin amyloid fibrillation. Thioflavin T fluorescence assays, circular dichroism spectroscopy, and one-dimensional proton NMR experiments suggest that KR7 primarily targets the fiber elongation step, with little effect on the early oligomerization steps in the lag period. From confocal fluorescence and atomic force microscopy experiments, the net result appears to be the arrest of aggregation in an early, non-fibrillar stage. This mechanism is noticeably different from that of previous peptide-based inhibitors, which have primarily shifted the lag time with little effect on the later stages of aggregation. As insulin is an important model system for understanding protein aggregation, the new peptide may be an important tool for understanding peptide-based inhibition of amyloid formation. PMID:27679488

  4. Q-Sample Construction: A Critical Step for a Q-Methodological Study.

    PubMed

    Paige, Jane B; Morin, Karen H

    2016-01-01

    Q-sample construction is a critical step in Q-methodological studies. Prior to conducting Q-studies, researchers start with a population of opinion statements (the concourse) on a particular topic of interest, from which a sample is drawn. These sampled statements are known as the Q-sample. Although literature exists on the methodological processes of Q-methodological studies, limited guidance exists on the practical steps for reducing the population of statements to a Q-sample. A case exemplar illustrates the steps taken to construct a Q-sample in preparation for a study that explored the perspectives nurse educators and nursing students hold about simulation design. Experts in simulation and Q-methodology evaluated the Q-sample for readability, clarity, and representativeness of the opinions contained within the concourse. The Q-sample was piloted, and feedback resulted in statement refinement. Researchers, especially those undertaking Q-method studies for the first time, may benefit from the practical considerations for constructing a Q-sample offered in this article. © The Author(s) 2014.

  5. Magnetic resonance imaging-transectal ultrasound image-fusion biopsies accurately characterize the index tumor: correlation with step-sectioned radical prostatectomy specimens in 135 patients.

    PubMed

    Baco, Eduard; Ukimura, Osamu; Rud, Erik; Vlatkovic, Ljiljana; Svindland, Aud; Aron, Manju; Palmer, Suzanne; Matsugasumi, Toru; Marien, Arnaud; Bernhard, Jean-Christophe; Rewcastle, John C; Eggesbø, Heidi B; Gill, Inderbir S

    2015-04-01

    Prostate biopsies targeted by elastic fusion of magnetic resonance (MR) and three-dimensional (3D) transrectal ultrasound (TRUS) images may allow accurate identification of the index tumor (IT), defined as the lesion with the highest Gleason score or the largest volume or extraprostatic extension. To determine the accuracy of MR-TRUS image-fusion biopsy in characterizing ITs, as confirmed by correlation with step-sectioned radical prostatectomy (RP) specimens. Retrospective analysis of 135 consecutive patients who sequentially underwent pre-biopsy MR, MR-TRUS image-fusion biopsy, and robotic RP at two centers between January 2010 and September 2013. Image-guided biopsies of MR-suspected IT lesions were performed with tracking via real-time 3D TRUS. The largest geographically distinct cancer focus (IT lesion) was independently registered on step-sectioned RP specimens. A validated schema comprising 27 regions of interest was used to identify the IT center location on MR images and in RP specimens, as well as the location of the midpoint of the biopsy trajectory, and variables were correlated. The concordance between IT location on biopsy and RP specimens was 95% (128/135). The coefficient for correlation between IT volume on MRI and histology was r=0.663 (p<0.001). The maximum cancer core length on biopsy was weakly correlated with RP tumor volume (r=0.466, p<0.001). The concordance of primary Gleason pattern between targeted biopsy and RP specimens was 90% (115/128; κ=0.76). The study limitations include retrospective evaluation of a selected patient population, which limits the generalizability of the results. Use of MR-TRUS image fusion to guide prostate biopsies reliably identified the location and primary Gleason pattern of the IT lesion in >90% of patients, but showed limited ability to predict cancer volume, as confirmed by step-sectioned RP specimens. Biopsies targeted using magnetic resonance images combined with real-time three-dimensional transrectal ultrasound allowed us to reliably identify the spatial location of the most important tumor in prostate cancer and characterize its aggressiveness. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  6. Intra-individual variability in day-to-day and month-to-month measurements of physical activity and sedentary behaviour at work and in leisure-time among Danish adults.

    PubMed

    Pedersen, E S L; Danquah, I H; Petersen, C B; Tolstrup, J S

    2016-12-03

    Accelerometers can obtain precise measurements of movement during the day. However, individual activity patterns vary from day to day, and there is limited evidence on the number of measurement days needed to obtain sufficient reliability. The aim of this study was to examine variability in accelerometer-derived data on sedentary behaviour and physical activity at work and in leisure time on weekdays among Danish office employees. We included control participants (n = 135) from the Take a Stand! intervention, a cluster randomized controlled trial conducted in 19 offices. Sitting time and physical activity were measured using an ActiGraph GT3X+ fixed on the thigh, and data were processed using Acti4 software. Variability was examined for sitting time, standing time, steps, and time spent in moderate-to-vigorous physical activity (MVPA) per day by multilevel mixed linear regression modelling. The results showed that the number of days needed to obtain a reliability of 80% when measuring sitting time was 4.7 days for work and 5.5 days for leisure time. For physical activity at work, 4.0 days and 4.2 days were required to measure steps and MVPA, respectively. During leisure time, more monitoring time was needed to reliably estimate physical activity (6.8 days for steps and 5.8 days for MVPA). The number of measurement days needed to reliably estimate activity patterns was greater for leisure time than for work time. This domain-specific variability is of great importance to researchers and health promotion workers planning to use objective measures of sedentary behaviour and physical activity. ClinicalTrials.gov: NCT01996176.

  7. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model, by which step motion is driven by the diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By adding Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby the widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in the step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
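
    As a rough illustration of the kind of system being solved, the following Euler-Maruyama sketch integrates a linearized, noise-driven model of coupled terrace widths; the purely linear nearest-neighbor coupling and all constants are illustrative simplifications of the model in the abstract, not the paper's equations.

    import numpy as np

    def terrace_widths(n_steps=100, n_t=10000, dt=1e-3, K=1.0, sigma=0.1, seed=1):
        """Euler-Maruyama integration of a linearized, noise-driven BCF-type
        model: dw_i = K (w_{i+1} - 2 w_i + w_{i-1}) dt + sigma dW_i, with
        periodic boundaries."""
        rng = np.random.default_rng(seed)
        w = np.ones(n_steps)                 # start from a uniform step train
        for _ in range(n_t):
            lap = np.roll(w, -1) - 2 * w + np.roll(w, 1)
            w += K * lap * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
        return w   # sample the terrace-width distribution (TWD) over many runs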

  8. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one - for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ)G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but, rather, a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
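
    A minimal NumPy sketch of this extension for first-order upwind advection on a periodic grid is given below: the update for Courant number c > 1 is an exact shift by the integer part N followed by the ordinary scheme applied with the fractional remainder Δc = c - N. The grid and initial profile are illustrative.

    import numpy as np

    def upwind_large_ct(q, c):
        """One update of first-order upwind advection (u > 0) for an arbitrary
        Courant number c > 0 on a periodic grid: shift by the integer part N of
        c, then apply the scheme with the fractional remainder dc = c - N < 1."""
        N = int(np.floor(c))
        dc = c - N
        q = np.roll(q, N)                    # exact translation by N cells
        return q - dc * (q - np.roll(q, 1))  # ordinary upwind update with dc

    # Example: updates at Courant number 3.7 remain stable.
    q = np.exp(-0.5 * ((np.arange(200) - 50.0) / 5.0) ** 2)
    for _ in range(10):
        q = upwind_large_ct(q, 3.7)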

  9. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Fada; Peeler, Christopher; Taleei, Reza

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code, and to provide a recommendation for selecting an appropriate LET quantity from GEANT4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT4 for different tracking step size limits. A step size limit refers to the maximum allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy deposition per step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy deposition per step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm used to determine fluctuations in energy deposition along the tracking step in GEANT4. The incorrect LET_d values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET_t in the dose plateau region and LET_d around the Bragg peak. For a large step limit, i.e., 500 μm, LET_d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET_d to LET_t becomes positive.
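
    For reference, the two averages at issue can be computed from per-step energy deposits as in the sketch below; this follows the standard track-length-weighted and dose-weighted definitions and is not GEANT4 scoring code.

    import numpy as np

    def average_lets(dE, dx):
        """Track- and dose-averaged LET from per-step energy deposits.

        dE: energy deposited in each tracking step (e.g., keV);
        dx: the corresponding step lengths (e.g., um)."""
        dE, dx = np.asarray(dE, float), np.asarray(dx, float)
        let = dE / dx                          # LET of each individual step
        let_t = np.sum(dx * let) / np.sum(dx)  # track-averaged (length-weighted)
        let_d = np.sum(dE * let) / np.sum(dE)  # dose-averaged (energy-weighted)
        return let_t, let_d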

  10. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms, including self-consistent leap-frog, self-consistent velocity Verlet, and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with changes in the friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
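
    As a sketch of the Green-Kubo post-processing described here, the diffusion coefficient can be obtained from the velocity autocorrelation function as follows; the array shapes and the lag cutoff are illustrative choices.

    import numpy as np

    def green_kubo_diffusion(v, dt):
        """Diffusion coefficient via Green-Kubo: D = (1/3) * integral of
        <v(0).v(t)> dt.

        v: (n_frames, n_particles, 3) velocities sampled every dt."""
        n = v.shape[0]
        vacf = np.array([
            np.mean(np.sum(v[:n - lag] * v[lag:], axis=-1))  # <v(0).v(t)>
            for lag in range(n // 2)
        ])
        return np.trapz(vacf, dx=dt) / 3.0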

  11. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second method involves a novel way of sonifying, in real time, the swing phase of gait, using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported that show positive results for both techniques in reducing step length variability. Potential future directions for how these sound-based approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  12. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.

  13. Real-time dedispersion for fast radio transient surveys, using auto tuning on many-core accelerators

    NASA Astrophysics Data System (ADS)

    Sclocco, A.; van Leeuwen, J.; Bal, H. E.; van Nieuwpoort, R. V.

    2016-01-01

    Dedispersion, the removal of the deleterious smearing of impulsive signals by the interstellar matter, is one of the most intensive processing steps in any radio survey for pulsars and fast transients. Here we present a study of the parallelization of this algorithm on many-core accelerators, including GPUs from AMD and NVIDIA and the Intel Xeon Phi. We find that dedispersion is inherently memory-bound: even in a perfect scenario, hardware limitations keep the arithmetic intensity low, thus limiting performance. We then exploit auto-tuning to adapt dedispersion to different accelerators, observations, and even telescopes. We demonstrate that the optimal settings differ between observational setups and that auto-tuning significantly improves performance. This impacts time-domain surveys from Apertif to SKA.
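
    For concreteness, a minimal sketch of incoherent dedispersion of a filterbank block is given below, using the standard cold-plasma delay formula; the channel layout and the rounding of delays to whole samples are illustrative, not the tuned kernels studied in the paper.

    import numpy as np

    def dedisperse(data, freqs, dm, dt):
        """Incoherent dedispersion: shift each frequency channel by its
        dispersion delay relative to the highest frequency.

        data: (n_chan, n_time) filterbank block; freqs: channel centers (MHz);
        dm: dispersion measure (pc cm^-3); dt: sample time (s). Delay model:
        t = 4.15e3 s * DM * (f_MHz^-2 - f_ref^-2)."""
        freqs = np.asarray(freqs, float)
        f_ref = freqs.max()
        delays = 4.15e3 * dm * (freqs ** -2 - f_ref ** -2)   # seconds
        shifts = np.round(delays / dt).astype(int)
        out = np.empty_like(data)
        for i, s in enumerate(shifts):
            out[i] = np.roll(data[i], -s)    # advance the delayed channels
        return out.sum(axis=0)               # frequency-summed time series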

  14. Fast and scalable purification of a therapeutic full-length antibody based on process crystallization.

    PubMed

    Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch

    2013-09-01

    The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. Direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled up from the mL scale to the 1 L scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved, and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for its high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies. Copyright © 2013 Wiley Periodicals, Inc.

  15. Performance evaluation of the time delay digital tanlock loop architectures

    NASA Astrophysics Data System (ADS)

    Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh; Ponnapalli, Prasad

    2016-01-01

    This article presents the architectures, theoretical analyses, and testing results of modified time delay digital tanlock loop (TDTL) systems. The modifications to the original TDTL architecture were introduced to overcome some of its limitations and to enhance overall system performance. The limitations addressed in this article include the non-linearity of the phase detector, the restricted width of the locking range, and the overall system acquisition speed. Each of the modified architectures was tested by subjecting the system to sudden positive and negative frequency steps and comparing its response with that of the original TDTL. In addition, the performance of all the architectures was evaluated in both noise-free and noisy environments. Extensive simulation results using MATLAB/SIMULINK demonstrate that the new architectures overcome the limitations they address, and the overall results confirm significant improvements in performance compared with the conventional TDTL system.

  16. Biofeedback in Partial Weight Bearing: Validity of 3 Different Devices.

    PubMed

    van Lieshout, Remko; Stukstette, Mirelle J; de Bie, Rob A; Vanwanseele, Benedicte; Pisters, Martijn F

    2016-11-01

    Study Design Controlled laboratory study to assess criterion-related validity, with a cross-sectional within-subject design. Background Patients with orthopaedic conditions have difficulties complying with partial weight-bearing instructions. Technological advances have resulted in biofeedback devices that offer real-time feedback. However, the accuracy of these devices is mostly unknown. Inaccurate feedback can result in incorrect lower-limb loading and may lead to delayed healing. Objectives To investigate the validity of peak force measurements obtained using 3 different biofeedback devices under varying levels of partial weight bearing. Methods The validity of 3 biofeedback devices (OpenGo science, SmartStep, and SensiStep) was assessed. Healthy participants were instructed to walk at a self-selected speed with crutches under 3 different weight-bearing conditions, categorized as a percentage range of body weight: 1% to 20%, greater than 20% to 50%, and greater than 50% to 75%. Peak force data from the biofeedback devices were compared with the peak vertical ground reaction force measured with a force plate. Criterion validity was estimated using simple and regression-based Bland-Altman 95% limits of agreement and weighted kappas. Results Fifty-five healthy adults (58% male) participated. Agreement with the gold standard was substantial for the SmartStep, moderate for the OpenGo science, and slight for the SensiStep (weighted κ = 0.76, 0.58, and 0.19, respectively). For the 1% to 20% and greater than 20% to 50% weight-bearing categories, both the OpenGo science and SmartStep had acceptable limits of agreement. For the weight-bearing category greater than 50% to 75%, none of the devices had acceptable agreement. Conclusion The OpenGo science and SmartStep provided valid feedback in the lower weight-bearing categories, whereas the SensiStep showed poor validity of feedback in all weight-bearing categories. J Orthop Sports Phys Ther 2016;46(11):-1. Epub 12 Oct 2016. doi:10.2519/jospt.2016.6625.

  17. Pharmacokinetics of piperaquine and safety profile of dihydroartemisinin-piperaquine co-administered with antiretroviral therapy in malaria-uninfected HIV-positive Malawian adults.

    PubMed

    Banda, Clifford G; Dzinjalamala, Fraction; Mukaka, Mavuto; Mallewa, Jane; Maiden, Victor; Terlouw, Dianne J; Lalloo, David G; Khoo, Saye H; Mwapasa, Victor

    2018-05-21

    There are limited data on the pharmacokinetic and safety profiles of dihydroartemisinin-piperaquine (DHA-PQ) among human immunodeficiency virus-infected (HIV+) individuals taking antiretroviral therapy (ART). In a two-step (parallel-group) pharmacokinetic trial with intensive blood sampling, we compared the area under the concentration-time curve (AUC(0-28 days)) and safety outcomes of piperaquine among malaria-uninfected HIV+ adults. In step 1, half the adult dose of DHA-PQ was administered for three days as an initial safety check in four groups (n = 6/group) of HIV+ adults (age ≥ 18 years): (i) antiretroviral-naïve, (ii) on nevirapine-based ART, (iii) on efavirenz-based ART, and (iv) on ritonavir-boosted lopinavir-based ART. In step 2, a full adult treatment course of DHA-PQ was administered to a different cohort of participants in three groups: (i) antiretroviral-naïve, (ii) on efavirenz-based ART, and (iii) on nevirapine-based ART (n = 10-15/group). The ritonavir-boosted lopinavir-based ART group was dropped in step 2 owing to the limited number of participants who were on this second-line ART and eligible for recruitment. Piperaquine's AUC(0-28 days) in both steps was 43% lower among participants on efavirenz-based ART than among ART-naïve participants. There were no significant differences in AUC(0-28 days) between the other ART groups and the ART-naïve group in either step. Furthermore, no differences in treatment-emergent clinical and laboratory adverse events were observed across the groups in steps 1 and 2. Although well tolerated at half and full standard adult treatment courses, the efavirenz-based antiretroviral regimen was associated with reduced piperaquine exposure, which may compromise dihydroartemisinin-piperaquine's effectiveness in programmatic settings. Copyright © 2018 Banda et al.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
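
    A toy version of the prediction-correction idea is sketched below; the prediction here simply extrapolates from the last two solutions (a crude stand-in for the paper's Hessian-free first-order prediction), the constrained/projected variants are omitted, and all step sizes are illustrative.

    import numpy as np

    def track(grad, x0, times, alpha=0.1, n_corr=5):
        """Toy prediction-correction tracking of min_x f(x; t).

        grad(x, t): user-supplied gradient of the time-varying cost."""
        x_prev = np.asarray(x0, float).copy()
        x = x_prev.copy()
        traj = []
        for t in times:
            x_pred = x + (x - x_prev)        # prediction: linear extrapolation
            x_prev = x
            x = x_pred
            for _ in range(n_corr):          # correction: a few gradient steps
                x = x - alpha * grad(x, t)   # on the cost revealed at time t
            traj.append(x.copy())
        return traj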

  19. Parental control, nurturance, self-efficacy, and screen viewing among 5- to 6-year-old children: a cross-sectional mediation analysis to inform potential behavior change strategies.

    PubMed

    Jago, Russell; Wood, Lesley; Zahra, Jesmond; Thompson, Janice L; Sebire, Simon J

    2015-04-01

    Children's screen viewing (SV) is associated with higher levels of childhood obesity. Many children exceed the American Academy of Pediatrics guideline of 2 hours of television (TV) per day. There is limited information about how parenting styles and parental self-efficacy to limit child screen time are associated with children's SV. This study examined whether parenting styles were associated with the SV of young children and whether any effects were mediated by parental self-efficacy to limit screen time. Data were from a cross-sectional survey conducted in 2013. Child and parent SV were reported by a parent, who also provided information about their parenting practices and self-efficacy to restrict SV. A four-step regression method examined whether parenting styles were associated with the SV of young children. Mediation by parental self-efficacy to limit screen time was examined using indirect effects. On a weekday, 90% of children watched TV for <2 hours per day, decreasing to 55% for boys and 58% for girls at weekends. At the weekend, 75% of children used a personal computer at home, compared with 61% during the week. Self-reported parental control, but not nurturance, was associated with children's TV viewing. Parental self-efficacy to limit screen time was independently associated with child weekday TV viewing and mediated associations between parental control and SV. Parental control was associated with lower levels of SV among 5- to 6-year-old children. This association was partially mediated by parental self-efficacy to limit screen time. The development of strategies to increase parental self-efficacy to limit screen-time may be useful.

  1. Coupling a Reactive Transport Code with a Global Land Surface Model for Mechanistic Biogeochemistry Representation: 1. Addressing the Challenge of Nonnegativity

    DOE PAGES

    Tang, Guoping; Yuan, Fengming; Bisht, Gautam; ...

    2016-01-01

    Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, respectively, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentrations, which are not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions, and test the implementation at arctic, temperate, and tropical sites. We examine the use of scaling back the update during each iteration (SU), log transformation (LT), and downregulation of the reaction rate to account for reactant availability limitations, each as a way to enforce nonnegativity. Both SU and LT guarantee nonnegativity, but with implications. When a very small scaling factor occurs, due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentrations. When the concentration is close to machine precision or zero, a small positive update stops all reactions under SU, and LT can fail due to a singular Jacobian matrix. The consumption rate therefore has to be downregulated so that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For a zero-order rate, or when the reaction rate is not a function of a reactant, representing the availability limitation of each reactant with a Monod substrate limiting function provides a smooth transition between a zero-order rate when the reactant is abundant and a first-order rate when the reactant becomes limiting, as sketched below. When the half saturation is small, marching through the transition may require small time step sizes to resolve the sharp change within a small range of concentration values. Our results from simple tests and CLM-PFLOTRAN simulations caution against the use of SU and indicate that accurate, stable, and relatively efficient solutions can be achieved with LT and with downregulation using a Monod substrate limiting function and a residual concentration.
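
    A minimal sketch of the Monod-style downregulation with a residual concentration follows; the half-saturation and residual constants are illustrative, not CLM-PFLOTRAN values.

    def downregulated_rate(k_max, substrates, half_saturation=1e-10,
                           residual=1e-15):
        """Downregulate a consumption rate by reactant availability.

        Multiplies the maximum rate k_max by a Monod factor S/(K + S) for
        each consumed reactant S, so the rate falls smoothly from zero-order
        behavior (S >> K) to first-order behavior (S ~ K) as the reactant
        becomes limiting; the residual concentration keeps the solution
        strictly positive."""
        rate = k_max
        for s in substrates:
            s_eff = max(s - residual, 0.0)
            rate *= s_eff / (half_saturation + s_eff)
        return rate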

  2. Dynamic subcellular partitioning of the nucleolar transcription factor TIF-IA under ribotoxic stress.

    PubMed

    Szymański, Jedrzej; Mayer, Christine; Hoffmann-Rohrer, Urs; Kalla, Claudia; Grummt, Ingrid; Weiss, Matthias

    2009-07-01

    TIF-IA is a basal transcription factor of RNA polymerase I (Pol I) and a major target of the JNK2 signaling pathway in response to ribotoxic stress. Using advanced fluorescence microscopy and kinetic modeling, we elucidated the subcellular localization of TIF-IA and its exchange dynamics between the nucleolus, nucleoplasm, and cytoplasm upon ribotoxic stress. In steady state, the majority of (GFP-tagged) TIF-IA was in the cytoplasm and the nucleus, with a minor portion (7%) localizing to the nucleoli. We observed rapid shuttling of GFP-TIF-IA between the different cellular compartments, with a mean residence time of approximately 130 s in the nucleus and only approximately 30 s in the nucleoli. The import rate from the cytoplasm to the nucleus was approximately 3-fold larger than the export rate, suggesting an importin/exportin-mediated transport rather than passive diffusion. Upon ribotoxic stress, GFP-TIF-IA was released from the nucleoli with a half-time of approximately 24 min. Oxidative stress and inhibition of protein synthesis led to a relocation of GFP-TIF-IA with slower kinetics, while osmotic stress had no effect. The observed relocation was much slower than the nucleo-cytoplasmic and nucleus-nucleolus exchange rates of GFP-TIF-IA, indicating a time-limiting step upstream of the JNK2 pathway. In support of this, time-course experiments on the activity of JNK2 revealed the activation of the JNK kinase as the rate-limiting step.

  3. Force Limited Random Vibration Test of TESS Camera Mass Model

    NASA Technical Reports Server (NTRS)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions, including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force-limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
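
    A minimal sketch of the semi-empirical force-limit construction is given below; the C2 value, the breakpoint frequency, and the roll-off exponent are placeholders to be set from the mass ratio and a system survey, not TESS values.

    import numpy as np

    def force_limit_spec(freq, saa, m0, c2=2.0, f0=80.0):
        """Semi-empirical force-limit spectrum for a random vibration test.

        S_FF(f) = C2 * m0^2 * S_AA(f) for f <= f0, rolled off as (f0/f)^2
        above the breakpoint f0. saa: input acceleration spectral density
        (g^2/Hz); m0: total test-article mass."""
        freq = np.asarray(freq, float)
        saa = np.asarray(saa, float)
        sff = c2 * m0 ** 2 * saa
        roll = np.where(freq > f0, (f0 / freq) ** 2, 1.0)
        return sff * roll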

  4. Multiphysics modelling of the separation of suspended particles via frequency ramping of ultrasonic standing waves.

    PubMed

    Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai

    2013-03-01

    A model was developed to determine the local changes in the concentration of particles and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency-ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles; this was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physics phenomena of piezoelectricity, acoustic fields, and particle diffusion were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field, assuming that the pressure field achieves a pseudo-steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase in diffusivity as the concentration approaches saturation. However, it was found that this steep increase creates numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo-one-dimensional case due to computational power limitations. The particle distribution predicted by the model is in good agreement with the experimental data, as it accurately follows the movement of the bands in the centre of the chamber. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  5. Continuous bind-and-elute protein A capture chromatography: Optimization under process scale column constraints and comparison to batch operation.

    PubMed

    Kaltenbrunner, Oliver; Diaz, Luis; Hu, Xiaochun; Shearer, Michael

    2016-07-08

    Recently, continuous downstream processing has become a topic of discussion and analysis at conferences, although no industrial applications of continuous downstream processing for biopharmaceutical manufacturing have been reported. There is significant potential to increase the productivity of a Protein A capture step by converting the operation to simulated moving bed (SMB) mode. In this mode, shorter columns are operated at higher process flow rates and correspondingly short residence times. The ability to significantly shorten the product residence time during loading without appreciable capacity loss can dramatically increase the productivity of the capture step and consequently reduce the amount of Protein A resin required in the process. Previous studies have not considered the physical limitations on how short columns can be packed or the flow rate limitations due to the pressure drop of stacked columns. In this study, we evaluate the process behavior of a continuous Protein A capture column cycling operation under the known pressure drop constraints of a compressible medium. The results are compared to the same resin operated under traditional batch operating conditions. We analyze the optimum system design point for a range of feed concentrations, bed heights, and load residence times and determine the achievable productivity for any feed concentration and any column bed height. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:938-948, 2016.

  6. Kinetic characterization of thermophilic and mesophilic anaerobic digestion for coffee grounds and waste activated sludge.

    PubMed

    Li, Qian; Qiao, Wei; Wang, Xiaochang; Takayanagi, Kazuyuki; Shofie, Mohammad; Li, Yu-You

    2015-02-01

    This study was conducted to characterize the kinetics of the anaerobic process (hydrolysis, acetogenesis, acidogenesis, and methanogenesis) under thermophilic (55 °C) and mesophilic (35 °C) conditions with coffee grounds and waste activated sludge (WAS) as the substrates. Special focus was given to the kinetics of propionic acid degradation to elucidate the accumulation of VFAs. Under the thermophilic condition, the methane production rate of all substrates (WAS, ground coffee, and raw coffee) was about 1.5 times higher than under the mesophilic condition. However, the effects of the thermophilic condition on the methane production of each substrate differed: WAS increased by 35.8-48.2%, raw coffee decreased by 76.3-64.5%, and ground coffee decreased by 74.0-57.9%. Based on the maximum reaction rate (Rmax) of each anaerobic stage obtained from the modified Gompertz model (given below), acetogenesis was found to be the rate-limiting step for coffee grounds and WAS. This can be explained by the kinetics of propionate degradation under the thermophilic condition, in which a long lag phase (more than 18 days) was observed even though the propionate concentration was only 500 mg/L. Under the mesophilic condition, acidogenesis and hydrolysis were found to be the rate-limiting steps for coffee grounds and WAS, respectively. Although reducing the particle size accelerated the methane production rate of coffee grounds, it did not change the rate-limiting step: acetogenesis under thermophilic and acidogenesis under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
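
    The modified Gompertz model referred to here is commonly written in the form below (this is the standard literature form, not reproduced from the paper), where M(t) is the cumulative methane production at time t, P the methane production potential, R_max the maximum production rate, λ the lag-phase duration, and e Euler's number:

      M(t) = P \exp\left\{ -\exp\left[ \frac{R_{\max}\, e}{P} (\lambda - t) + 1 \right] \right\}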

  7. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators.

    PubMed

    Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba

    2017-01-01

    The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
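
    A minimal sketch of such a screening regression follows, with random placeholder data standing in for the study's cohort; the two predictors mirror model 1, and the flagging threshold is illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Regress Step 2 CK on preclinical mean course exam score and Step 1 score.
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.normal(80, 5, 218),     # mean course exam score
                         rng.normal(225, 15, 218)])  # Step 1 score
    y = 0.5 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 8, 218)  # placeholder

    model = LinearRegression().fit(X, y)
    predicted = model.predict(X)
    print("R^2 =", model.score(X, y))   # the abstract reports ~0.56 for model 1
    # Flag students whose predicted score falls below a support threshold.
    at_risk = predicted < np.percentile(predicted, 10)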

  8. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.
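
    The role of potential smoothing in avoiding energy drift can be illustrated with a switching function; this is a generic smoothstep applied to a Lennard-Jones pair potential, not the specific smoothing used in the paper, and the cutoff radii are arbitrary.

      import numpy as np

      def lj(r, eps=1.0, sigma=1.0):
          """Plain Lennard-Jones pair potential."""
          sr6 = (sigma / r) ** 6
          return 4.0 * eps * (sr6 ** 2 - sr6)

      def switch(r, r_on, r_cut):
          """C1-continuous factor taking the potential smoothly to zero."""
          s = np.clip((r_cut - r) / (r_cut - r_on), 0.0, 1.0)
          return s * s * (3.0 - 2.0 * s)      # smoothstep

      r = np.linspace(0.9, 3.0, 200)
      u_smooth = lj(r) * switch(r, r_on=2.0, r_cut=2.5)
      # u_smooth and its derivative vanish at r_cut, so no force discontinuity
      # remains to feed energy drift at large time steps.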

  9. Self-similar space-time evolution of an initial density discontinuity

    NASA Astrophysics Data System (ADS)

    Rekaa, V. L.; Pécseli, H. L.; Trulsen, J. K.

    2013-07-01

    The space-time evolution of an initial step-like plasma density variation is studied. We give particular attention to formulating the problem in a way that opens the possibility of realizing the conditions experimentally. After a short transient time interval of the order of the electron plasma period, the solution is self-similar, as illustrated by a video where the space-time evolution reduces to a function of the ratio x/t. Solutions of this form are usually found for problems without characteristic length and time scales, in our case the quasi-neutral limit. By introducing ion collisions with neutrals into the numerical analysis, we introduce a length scale, the collisional mean free path. We study the breakdown of the self-similarity of the solution as the mean free path is made shorter than the system length. Analytical results are presented for charge-exchange collisions, demonstrating a short-time collisionless evolution with an ensuing long-time diffusive relaxation of the initial perturbation. For large times, we find a diffusion equation as the limiting analytical form for a charge-exchange collisional plasma, with a diffusion coefficient defined as the square of the ion sound speed divided by the (constant) ion collision frequency. The ion-neutral collision frequency acts as a parameter that allows a collisionless result to be obtained in one limit, while the solution of a diffusion equation is recovered in the opposite limit of large collision frequencies.
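
    A minimal sketch of the quoted diffusive limit: an explicit finite-difference relaxation of a step-like density with D = cs²/ν. The grid, sound speed and collision frequency are illustrative; the stability bound on the time step (dt ≤ dx²/2D) is the standard explicit-diffusion limit.

      import numpy as np

      cs, nu = 1.0, 10.0          # ion sound speed, ion-neutral collision frequency
      D = cs ** 2 / nu            # diffusion coefficient in the collisional limit

      nx, L = 200, 20.0
      dx = L / nx
      dt = 0.4 * dx ** 2 / D      # safely below the explicit limit dx^2 / (2 D)

      x = np.linspace(-L / 2, L / 2, nx)
      n = np.where(x < 0, 1.0, 0.1)       # step-like initial density
      for _ in range(2000):
          n[1:-1] += D * dt / dx ** 2 * (n[2:] - 2.0 * n[1:-1] + n[:-2])
          n[0], n[-1] = n[1], n[-2]       # zero-flux boundaries
      # The step relaxes diffusively; with nu -> 0 the same setup instead
      # stays in the collisionless, self-similar x/t regime.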

  10. A conjugate heat transfer procedure for gas turbine blades.

    PubMed

    Croce, G

    2001-05-01

    A conjugate heat transfer procedure, allowing for the use of different solvers on the solid and fluid domain(s), is presented. Information exchange between solid and fluid solution is limited to boundary condition values, and this exchange is carried out at any pseudo-time step. Global convergence rate of the procedure is, thus, of the same order of magnitude of stand-alone computations.
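
    A toy sketch of the coupling pattern described, not the paper's solvers: two algebraic stand-ins exchange only boundary values once per pseudo-time step, with under-relaxation for stability. All constants (h, k_eff, the 0.2 relaxation factor) are invented.

      # Only the wall temperature T_w and wall heat flux q cross the interface.
      T_fluid, T_solid_core, h, k_eff = 500.0, 300.0, 120.0, 40.0

      T_w = 400.0                              # initial guess for wall temperature
      for step in range(100):                  # pseudo-time iterations
          q = h * (T_fluid - T_w)              # "fluid solve": convective flux
          T_w_new = T_solid_core + q / k_eff   # "solid solve": conduction balance
          if abs(T_w_new - T_w) < 1e-6:
              break
          # 0.2 keeps this fixed-point iteration contractive for these constants.
          T_w = 0.8 * T_w + 0.2 * T_w_new
      print(step, T_w)                         # converges to the coupled wall state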

  11. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  12. Exploring the role of wood waste landfills in early detection of non-native alien wood-boring beetles

    Treesearch

    Davide Rassati; Massimo Faccoli; Lorenzo Marini; Robert A. Haack; Andrea Battisti; Edoardo Petrucco Toffolo

    2015-01-01

    Non-native wood-boring beetles (Coleoptera) represent one of the most commonly intercepted groups of insects at ports worldwide. The development of early detection methods is a crucial step when implementing rapid response programs so that non-native wood-boring beetles can be quickly detected and a timely action plan can be produced. However, due to the limited...

  13. A bill to require that the United States Government prioritize all obligations on the debt held by the public, Social Security benefits, and military pay in the event that the debt limit is reached, and for other purposes.

    THOMAS, 112th Congress

    Sen. Toomey, Pat [R-PA]

    2011-07-26

    Senate - 07/27/2011 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 112. Status: Introduced.

  14. Synthesis of Various Metal/TiO2 Core/shell Nanorod Arrays

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Guan-zhong; Hong, Xun; Shen, Xiao-shuang

    2011-02-01

    We present a general approach to fabricate metal/TiO2 core/shell nanorod structures by two-step electrodeposition. First, TiO2 nanotubes with uniform wall thickness are prepared in anodic aluminum oxide (AAO) membranes by electrodeposition. The wall thickness of the nanotubes can be easily controlled by modulating the deposition time, and their outer diameter and length are limited only by the channel diameter and the thickness of the AAO membranes, respectively. The tops of the nanotubes prepared by this method are open, while the bottoms are connected directly to the Au film at the back of the AAO membranes. Second, Pd, Cu, and Fe are filled into the TiO2 nanotubes to form core/shell structures. The core/shell nanorods prepared by this two-step process are high-density and free-standing, and their length depends on the deposition time.

  15. Speeding up biomolecular interactions by molecular sledding

    DOE PAGES

    Turkin, Alexander; Zhang, Lei; Marcozzi, Alessio; ...

    2015-10-07

    In numerous biological processes, association of a protein with its binding partner is preceded by a diffusion-mediated search that brings the two partners together. Often hindered by crowding in biologically relevant environments, three-dimensional diffusion can be slow and result in long bimolecular association times. Moreover, the initial association step between two binding partners often represents a rate-limiting step in biotechnologically relevant reactions. Here we demonstrate the practical use of an 11-a.a. DNA-interacting peptide derived from adenovirus to reduce the dimensionality of diffusional search processes and speed up associations between biological macromolecules. We functionalize binding partners with the peptide and demonstrate that the ability of the peptide to diffuse one-dimensionally along DNA results in a 20-fold reduction in reaction time. We also show that modifying PCR primers with the peptide sled enables significant acceleration of standard PCR reactions.

  16. Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.

    PubMed

    Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro

    2018-06-13

    The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.

  17. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
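
    For the unbounded, symmetric single-step case, the discrete solution referred to above can be written with modified Bessel functions of the first kind: P(n, t) = exp(-t) I_n(t) for net repeat-count change n after scaled time t. A sketch follows; the reflecting-boundary correction the paper adds for minimum microsatellite length is not reproduced here.

      import numpy as np
      from scipy.special import iv   # modified Bessel function of the first kind

      def smm_prob(n, t):
          """P(net repeat change = n at scaled time t), unbounded symmetric SMM."""
          return np.exp(-t) * iv(n, t)

      t = 5.0
      n = np.arange(-15, 16)
      p = smm_prob(n, t)
      print(p.sum())                 # ~1: the distribution is normalized
      print(smm_prob(0, t))          # probability of no net length change
      # The Gaussian continuum model exp(-n^2 / (2 t)) / sqrt(2 pi t) is only
      # the approximate limit; the Bessel form is the exact discrete answer.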

  18. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  19. Some considerations in the combustion of AP/composite propellants

    NASA Technical Reports Server (NTRS)

    Kumar, R. N.

    1972-01-01

    Theoretical studies are presented on the time-independent and oscillatory combustion of nonmetallized AP/composite propellants. Three hypotheses are introduced: (1) the extent of propellant degradation at the vaporization step has to be specified through a scientific criterion; (2) the condensed-phase degradation reaction of ammonium perchlorate to a vaporizable state is the overall rate-limiting step; and (3) the gas-phase combustion rate is controlled by the mixing rate of fuel and oxidizer vapors. In the treatment of oscillatory combustion, the assumption of quasi-steady fluctuations in the gas phase is used to supplement these hypotheses. In comparison with experimental data, this study predicts several of the observations, including a few that had remained inconsistent with previous theoretical results.

  20. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
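
    A minimal illustration of the cdf idea for one nonlinear process, infiltration-excess runoff: with an assumed exponential within-day intensity distribution, a daily model that resolves the intensity cdf predicts runoff where a daily-average model predicts none. All numbers are illustrative, and the exponential form is an assumption, not the paper's fitted distribution.

      import numpy as np

      def daily_runoff(rain_mm, mean_intensity, f_cap, n=100_000):
          """Infiltration-excess runoff for one day, resolving sub-daily
          intensity with an assumed exponential cdf instead of a daily mean."""
          rng = np.random.default_rng(2)
          i = rng.exponential(mean_intensity, n)    # sampled intensities (mm/h)
          excess_frac = np.maximum(i - f_cap, 0.0).mean() / i.mean()
          return rain_mm * excess_frac

      R, m, f = 30.0, 4.0, 6.0        # daily rain, mean intensity, infil. capacity
      print(daily_runoff(R, m, f))    # ~ R * exp(-f/m) = 30*exp(-1.5) ≈ 6.7 mm
      # A model using the daily-average intensity predicts zero runoff here,
      # since m < f, even though within-day peaks exceed the capacity.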

  1. 11S Storage globulin from pumpkin seeds: regularities of proteolysis by papain.

    PubMed

    Rudakova, A S; Rudakov, S V; Kakhovskaya, I A; Shutov, A D

    2014-08-01

    Limited proteolysis of the α- and β-chains and deep cleavage of the αβ-subunits by the cooperative (one-by-one) mechanism was observed in the course of papain hydrolysis of cucurbitin, an 11S storage globulin from seeds of the pumpkin Cucurbita maxima. An independent analysis of the kinetics of the limited and cooperative proteolyses revealed that the reaction occurs in two successive steps. In the first step, limited proteolysis consisting of detachments of short terminal peptides from the α- and β-chains was observed. The cooperative proteolysis, which occurs as a pseudo-first order reaction, started at the second step. Therefore, the limited proteolysis at the first step plays a regulatory role, impacting the rate of deep degradation of cucurbitin molecules by the cooperative mechanism. Structural alterations of cucurbitin induced by limited proteolysis are suggested to generate its susceptibility to cooperative proteolysis. These alterations are tentatively discussed on the basis of the tertiary structure of the cucurbitin subunit pdb|2EVX in comparison with previously obtained data on features of degradation of soybean 11S globulin hydrolyzed by papain.

  2. Deciding to Decide: How Decisions Are Made and How Some Forces Affect the Process.

    PubMed

    McConnell, Charles R

    There is a decision-making pattern that applies in all situations, large or small, although in small decisions, the steps are not especially evident. The steps are gathering information, analyzing information and creating alternatives, selecting and implementing an alternative, and following up on implementation. The amount of effort applied in any decision situation should be consistent with the potential consequences of the decision. Essentially, all decisions are subject to certain limitations or constraints, forces, or circumstances that limit one's range of choices. Follow-up on implementation is the phase of decision making most often neglected, yet it is frequently the phase that determines success or failure. Risk and uncertainty are always present in a decision situation, and the application of human judgment is always necessary. In addition, there are often emotional forces at work that can at times unwittingly steer one away from that which is best or most workable under the circumstances and toward a suboptimal result based largely on the desires of the decision maker.

  3. The role of shock induced trailing-edge separation in limit cycle oscillations

    NASA Technical Reports Server (NTRS)

    Cunningham, Atlee M., Jr.

    1989-01-01

    The potential role of shock-induced trailing-edge separation (SITES) in limit cycle oscillations (LCO) was established. It was shown that the flip-flop characteristics of transition to and from SITES, as well as its hysteresis, could couple with wing modes having torsional motion and low damping. This connection led to the formulation of a very simple nonlinear math model using the linear equations of motion with a nonlinear step forcing function with hysteresis. A finite-difference solution in time was developed, and calculations made for the F-111 TACT were used to determine the step forcing function due to SITES transition. Since no data were available for the hysteresis, a parameter study was conducted allowing the hysteresis effect to vary. Very small hysteresis effects, which were within expected bounds, were required to obtain reasonable response levels that essentially agreed with flight test results. Also in agreement with wind tunnel tests, LCO calculations for the 1/6-scale F-111 model showed that the model should not have experienced LCO.
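
    A sketch of the kind of model described: a lightly damped linear mode driven by a step forcing function that flips with hysteresis, integrated by simple finite differences. The frequency, thresholds and force level are invented for illustration, not F-111 values.

      import numpy as np

      wn, zeta, F0 = 2 * np.pi * 5.0, 0.01, 1.0   # natural freq, low damping, step size
      x_on, x_off = 5e-4, -5e-4                   # assumed hysteresis band

      dt, nsteps = 1e-4, 200_000
      x, v, sites, amp = 1e-3, 0.0, False, 0.0
      for _ in range(nsteps):
          if not sites and x > x_on:              # transition into SITES
              sites = True
          elif sites and x < x_off:               # transition back
              sites = False
          F = -F0 if sites else F0                # step forcing with hysteresis
          a = F - 2 * zeta * wn * v - wn ** 2 * x
          v += a * dt                             # semi-implicit Euler step
          x += v * dt
          amp = max(amp, abs(x))
      # The relay-like forcing pumps energy until damping balances it, so the
      # response settles into a bounded limit cycle rather than diverging.
      print(f"peak response ≈ {amp:.4f}")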

  4. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former, and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.

  5. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, 14 of them on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
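
    A sketch of turning estimated variance components into RSD(r), RSD(R) and a decision limit; the component values are invented placeholders, and the one-sided 1.645 factor is a standard normal approximation rather than the paper's exact procedure.

      import numpy as np

      # Hypothetical REML-style variance components for one GMO method
      # (in (%)^2 units of the measured GM ratio); values are illustrative.
      var_pcr = 0.0012       # between-replicate PCR variance (repeatability)
      var_dna_iso = 0.0020   # between-DNA-isolation variance
      var_pcr_day = 0.0015   # between-day variance
      mean_ratio = 0.9       # measured GM ratio (%)

      rsd_r = np.sqrt(var_pcr) / mean_ratio * 100                      # repeatability
      rsd_R = np.sqrt(var_pcr + var_dna_iso + var_pcr_day) / mean_ratio * 100
      print(f"RSD(r) = {rsd_r:.1f}%, RSD(R) = {rsd_R:.1f}%")

      # One-sided ~95% decision limit for declaring non-conformance (>0.9%):
      decision_limit = 0.9 + 1.645 * np.sqrt(var_pcr + var_dna_iso + var_pcr_day)
      print(f"decision limit ≈ {decision_limit:.2f}%")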

  6. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
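
    A sketch of the described update loop; the projected-area step here is a simple secant-style estimate standing in for the patent's non-linear projection function, and the network's pressure response is faked with a linear relation.

      def regulate(area, area_prev, p_down, p_prev, p_target, rate=0.5):
          """One pseudo-time step of the flow-through-area update: project an
          area from the current/previous areas and pressures, then move a
          user-controlled fraction of the way toward it."""
          dp = p_down - p_prev
          dA = area - area_prev
          slope = dA / dp if abs(dp) > 1e-12 else 0.0
          area_proj = area + slope * (p_target - p_down)   # projected area
          return area + rate * (area_proj - area)          # rate-controlled step

      # Toy loop: downstream pressure responds (here, linearly) to the area.
      area, area_prev, p_prev = 1.0, 0.9, 80.0
      for _ in range(40):
          p_down = 60.0 + 30.0 * area          # fake network response
          if abs(p_down - 75.0) < 1e-6:
              break
          new_area = regulate(area, area_prev, p_down, p_prev, p_target=75.0)
          area_prev, p_prev, area = area, p_down, new_area
      print(area, p_down)                      # converges to the target pressure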

  7. Modelling uveal melanoma

    PubMed Central

    Foss, A.; Cree, I.; Dolin, P.; Hungerford, J.

    1999-01-01

    BACKGROUND/AIM—There has been no consistent pattern reported on how mortality for uveal melanoma varies with age. This information can be useful to model the complexity of the disease. The authors have examined ocular cancer trends, as an indirect measure for uveal melanoma mortality, to see how rates vary with age and to compare the results with their other studies on predicting metastatic disease.
METHODS—Age specific mortality was examined for England and Wales, the USA, and Canada. A log-log model was fitted to the data. The slopes of the log-log plots were used as a measure of disease complexity and compared with the results of previous work on predicting metastatic disease.
RESULTS—The log-log model provided a good fit for the US and Canadian data, but the observed rates deviated for England and Wales among people over the age of 65 years. The log-log model for mortality data suggests that the underlying process depends upon four rate limiting steps, while a similar model for the incidence data suggests between three and four rate limiting steps. Further analysis of previous data on predicting metastatic disease on the basis of tumour size and blood vessel density would indicate a single rate limiting step between developing the primary tumour and developing metastatic disease.
CONCLUSIONS—There is significant underreporting or underdiagnosis of ocular melanoma for England and Wales in those over the age of 65 years. In those under the age of 65, a model is presented for ocular melanoma oncogenesis requiring three rate limiting steps to develop the primary tumour and a fourth rate limiting step to develop metastatic disease. The three steps in the generation of the primary tumour involve two key processes—namely, growth and angiogenesis within the primary tumour. The step from development of the primary to development of metastatic disease is likely to involve a single rate limiting process.

 PMID:10216060
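
    The slope-counting step can be sketched as follows: under an Armitage-Doll-type log-log model, rate ∝ age^(k−1), so the fitted log-log slope plus one estimates the number of rate-limiting steps. The data below are synthetic, chosen to give a slope of about three (hence four steps, as for the mortality data above).

      import numpy as np

      # Illustrative age-specific mortality rates rising with age;
      # synthetic values following a cubic law, not the paper's data.
      age = np.array([30, 40, 50, 60, 70], dtype=float)
      rate = 1e-2 * age ** 3           # per 100,000, synthetic

      slope, intercept = np.polyfit(np.log(age), np.log(rate), 1)
      n_steps = slope + 1              # Armitage-Doll: rate ∝ age^(k-1)
      print(f"log-log slope = {slope:.2f} -> ~{n_steps:.0f} rate-limiting steps")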

  8. Development of a One-Step Duplex RT-PCR Method for the Simultaneous Detection of VP3/VP1 and VP1/P2B Regions of the Hepatitis A Virus.

    PubMed

    Kim, Mi-Ju; Lee, Shin-Young; Kim, Hyun-Joong; Lee, Jeong Su; Joo, In Sun; Kwak, Hyo Sun; Kim, Hae-Yeong

    2016-08-28

    The simultaneous detection and accurate identification of hepatitis A virus (HAV) is critical in food safety and epidemiological studies to prevent the spread of HAV outbreaks. Towards this goal, a one-step duplex reverse-transcription (RT)-PCR method was developed targeting the VP1/P2B and VP3/VP1 regions of the HAV genome for the qualitative detection of HAV. An HAV RT-qPCR standard curve was produced for the quantification of HAV RNA. The detection limit of the duplex RT-PCR method was 2.8 × 10¹ copies of HAV. The PCR products enabled HAV genotyping analysis through DNA sequencing, which can be applied for epidemiological investigations. The ability of this duplex RT-PCR method to detect HAV was evaluated with HAV-spiked samples of fresh lettuce, frozen strawberries, and oysters. The limit of detection of the one-step duplex RT-PCR for each food model was 9.4 × 10² copies/20 g fresh lettuce, 9.7 × 10³ copies/20 g frozen strawberries, and 4.1 × 10³ copies/1.5 g oysters. Use of a one-step duplex RT-PCR method has advantages such as shorter time, decreased cost, and decreased labor owing to the single amplification reaction instead of the four amplifications necessary for nested RT-PCR.

  9. Ag-Modified In2O3/ZnO Nanobundles with High Formaldehyde Gas-Sensing Performance

    PubMed Central

    Fang, Fang; Bai, Lu; Song, Dongsheng; Yang, Hongping; Sun, Xiaoming; Sun, Hongyu; Zhu, Jing

    2015-01-01

    Ag-modified In2O3/ZnO bundles with micro/nano porous structures have been designed and synthesized by a hydrothermal method followed by a dehydration process. Each bundle consists of nanoparticles separated by nanogaps of 10–30 nm, yielding a porous structure. This porous structure provides a high surface area and fast gas diffusion, enhancing the gas sensitivity. The HCHO gas-sensing performance of the Ag-modified In2O3/ZnO bundles has been tested: at 300 °C, the formaldehyde detection limit is 100 ppb (parts per billion) with response and recovery times as short as 6 s and 3 s, respectively; at 100 °C, the detection limit is 100 ppb with a response time of 12 s and a recovery time of 6 s. This detection limit meets the indoor-air health standard for formaldehyde concentration. Moreover, the synthesis of the nanobundles requires just two heating steps and is easy to scale up. Therefore, the Ag-modified In2O3/ZnO bundles are ready for industrialization and practical applications. PMID:26287205

  10. Design-for-manufacture of gradient-index optical systems using time-varying boundary condition diffusion

    NASA Astrophysics Data System (ADS)

    Harkrider, Curtis Jason

    2000-08-01

    The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for- manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time- varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that is used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design- for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufactureable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of multiple-step diffusion process lies in its versatility. An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
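
    A sketch of the multiple-step TVBC idea in one dimension: alternate fixed-surface-concentration (salt bath) diffusion steps with no-flux (anneal) steps, letting the step times shape the profile. The diffusivity, grid and schedule are illustrative, not fitted glass parameters, and melt poisoning is not modeled.

      import numpy as np

      D, dx = 1e-3, 0.01
      dt = 0.4 * dx ** 2 / D                   # below the explicit stability limit
      c = np.zeros(100)                        # normalized ion concentration

      def diffuse(c, steps, bath):
          for _ in range(steps):
              c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
              c[0] = 1.0 if bath else c[1]     # bath: fixed surface; anneal: no flux
              c[-1] = c[-2]                    # deep side: no flux
          return c

      for exchange, anneal in [(2000, 3000), (1000, 4000)]:   # two-step schedule
          c = diffuse(c, exchange, bath=True)
          c = diffuse(c, anneal, bath=False)
      # The resulting profile is smoother and deeper than a single exchange of
      # the same total bath time, which is how step times control the profile.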

  11. Feasibility of Focused Stepping Practice During Inpatient Rehabilitation Poststroke and Potential Contributions to Mobility Outcomes.

    PubMed

    Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot

    2015-01-01

    Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.

  12. Quantum state conversion in opto-electro-mechanical systems via shortcut to adiabaticity

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Liu, Bao-Jie; Shao, L.-B.; Zhang, Xin-Ding; Xue, Zheng-Yuan

    2017-09-01

    Adiabatic processes have found many important applications in modern physics, their distinct merit being that accurate control over process timing is not required. However, such processes are slow, which limits their application in quantum computation, given the limited coherence times of typical quantum systems. Here, we propose a scheme to implement quantum state conversion in opto-electro-mechanical systems via a shortcut to adiabaticity, where the process can be greatly sped up while precise timing control is still not necessary. In our scheme, by modifying only the coupling strength, we can achieve fast quantum state conversion with high fidelity, without the adiabatic condition needing to be met. In addition, the population of the unwanted intermediate state can be further suppressed. Therefore, our protocol presents an important step towards practical state conversion between optical and microwave photons, and thus may find many important applications in hybrid quantum information processing.

  13. Dynamic Scaling and Island Growth Kinetics in Pulsed Laser Deposition of SrTiO 3

    DOE PAGES

    Eres, Gyula; Tischler, J. Z.; Rouleau, C. M.; ...

    2016-11-11

    We use real-time diffuse surface x-ray diffraction to probe the evolution of island size distributions and its effects on surface smoothing in pulsed laser deposition (PLD) of SrTiO3. In this study, we show that the island size evolution obeys dynamic scaling and two distinct regimes of island growth kinetics. Our data show that PLD film growth can persist without roughening despite thermally driven Ostwald ripening, the main mechanism for surface smoothing, being shut down. The absence of roughening is concomitant with decreasing island density, contradicting the prevailing view that increasing island density is the key to surface smoothing in PLD. We also report a previously unobserved crossover from diffusion-limited to attachment-limited island growth that reveals the influence of nonequilibrium atomic-level surface transport processes on the growth modes in PLD. We show by direct measurements that attachment-limited island growth is the dominant process in PLD that creates step-flow-like behavior, or quasi-step flow, as PLD “self-organizes” local step flow on a length scale consistent with the substrate temperature and PLD parameters.

  14. Dynamic Scaling and Island Growth Kinetics in Pulsed Laser Deposition of SrTiO 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eres, Gyula; Tischler, J. Z.; Rouleau, C. M.

    We use real-time diffuse surface x-ray diffraction to probe the evolution of island size distributions and its effects on surface smoothing in pulsed laser deposition (PLD) of SrTiO3. In this study, we show that the island size evolution obeys dynamic scaling and two distinct regimes of island growth kinetics. Our data show that PLD film growth can persist without roughening despite thermally driven Ostwald ripening, the main mechanism for surface smoothing, being shut down. The absence of roughening is concomitant with decreasing island density, contradicting the prevailing view that increasing island density is the key to surface smoothing in PLD. We also report a previously unobserved crossover from diffusion-limited to attachment-limited island growth that reveals the influence of nonequilibrium atomic-level surface transport processes on the growth modes in PLD. We show by direct measurements that attachment-limited island growth is the dominant process in PLD that creates step-flow-like behavior, or quasi-step flow, as PLD “self-organizes” local step flow on a length scale consistent with the substrate temperature and PLD parameters.

  15. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that covers four toxicity effects: mutagenic, tumorigenic, irritant and reproductive. The proposed model consists of three phases. In the first phase, rough-set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique (SMOTE) are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes; in the second step, a data-cleaning method removes the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
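
    ITS itself is the paper's method and is not reproduced here; as a rough analogue of its sample-then-clean structure, a sketch using SMOTE oversampling followed by Tomek-link cleaning from the imbalanced-learn library, then a Bagging classifier, on synthetic data.

      from collections import Counter

      from imblearn.over_sampling import SMOTE
      from imblearn.under_sampling import TomekLinks
      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier

      X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
      print("before:", Counter(y))

      X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # balance classes
      X_res, y_res = TomekLinks().fit_resample(X_res, y_res)    # remove overlap
      print("after:", Counter(y_res))

      clf = BaggingClassifier(random_state=0).fit(X_res, y_res)  # final classifier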

  16. Fully chip-embedded automation of a multi-step lab-on-a-chip process using a modularized timer circuit.

    PubMed

    Kang, Junsu; Lee, Donghyeon; Heo, Young Jin; Chung, Wan Kyun

    2017-11-07

    For highly-integrated microfluidic systems, an actuation system is necessary to control the flow; however, the bulk of actuation devices including pumps or valves has impeded the broad application of integrated microfluidic systems. Here, we suggest a microfluidic process control method based on built-in microfluidic circuits. The circuit is composed of a fluidic timer circuit and a pneumatic logic circuit. The fluidic timer circuit is a serial connection of modularized timer units, which sequentially pass high pressure to the pneumatic logic circuit. The pneumatic logic circuit is a NOR gate array designed to control the liquid-controlling process. By using the timer circuit as a built-in signal generator, multi-step processes could be done totally inside the microchip without any external controller. The timer circuit uses only two valves per unit, and the number of process steps can be extended without limitation by adding timer units. As a demonstration, an automation chip has been designed for a six-step droplet treatment, which entails 1) loading, 2) separation, 3) reagent injection, 4) incubation, 5) clearing and 6) unloading. Each process was successfully performed for a pre-defined step-time without any external control device.

  17. Stair ascent with an innovative microprocessor-controlled exoprosthetic knee joint.

    PubMed

    Bellmann, Malte; Schmalz, Thomas; Ludwigs, Eva; Blumentritt, Siegmar

    2012-12-01

    Climbing stairs can pose a major challenge for above-knee amputees as a result of compromised motor performance and limitations to prosthetic design. A new, innovative microprocessor-controlled prosthetic knee joint, the Genium, incorporates a function that allows an above-knee amputee to climb stairs step over step. To execute this function, a number of different sensors and complex switching algorithms were integrated into the prosthetic knee joint. The function is intuitive for the user. A biomechanical study was conducted to assess objective gait measurements and calculate joint kinematics and kinetics as subjects ascended stairs. Results demonstrated that climbing stairs step over step is more biomechanically efficient for an amputee using the Genium prosthetic knee than the previously possible conventional method where the extended prosthesis is trailed as the amputee executes one or two steps at a time. There is a natural amount of stress on the residual musculoskeletal system, and it has been shown that the healthy contralateral side supports the movements of the amputated side. The mechanical power that the healthy contralateral knee joint needs to generate during the extension phase is also reduced. Similarly, there is near normal loading of the hip joint on the amputated side.

  18. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute a step only after the previous step has completed at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can be executing different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications and the simulation of natural phenomena, in which the execution times can be different and variable due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern; a small timing model is sketched below. The performance and scalability have been evaluated through a Petri net model, as this model is very useful for representing the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
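
    The claimed advantage can be reproduced with a small timing model: per-node, per-step costs are drawn at random, and completion times are compared between a global barrier and neighbor-only synchronization on a 1D ring of regions. The cost distribution is illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      n_nodes, n_steps = 16, 200
      # Heterogeneous, variable execution times for each (node, step).
      cost = rng.exponential(1.0, (n_nodes, n_steps))

      # Global barrier: every step waits for the slowest node.
      t_sync = cost.max(axis=0).sum()

      # Neighbor-only synchronization: node i may start step s once both ring
      # neighbors (and itself) have finished step s-1.
      finish = np.zeros((n_nodes, n_steps))
      for s in range(n_steps):
          for i in range(n_nodes):
              prev = 0.0 if s == 0 else max(finish[i - 1, s - 1],   # -1 wraps: ring
                                            finish[i, s - 1],
                                            finish[(i + 1) % n_nodes, s - 1])
              finish[i, s] = prev + cost[i, s]
      t_async = finish[:, -1].max()
      print(f"barrier: {t_sync:.0f}, neighbor-sync: {t_async:.0f}")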

  19. A model for integrating clinical care and basic science research, and pitfalls of performing complex research projects for addressing a clinical challenge.

    PubMed

    Steck, R; Epari, D R; Schuetz, M A

    2010-07-01

    The collaboration of clinicians with basic science researchers is crucial for addressing clinically relevant research questions. In order to initiate such mutually beneficial relationships, we propose a model where early career clinicians spend a designated time embedded in established basic science research groups, in order to pursue a postgraduate qualification. During this time, clinicians become integral members of the research team, fostering long term relationships and opening up opportunities for continuing collaboration. However, for these collaborations to be successful there are pitfalls to be avoided. Limited time and funding can lead to attempts to answer clinical challenges with highly complex research projects characterised by a large number of "clinical" factors being introduced in the hope that the research outcomes will be more clinically relevant. As a result, the complexity of such studies and variability of its outcomes may lead to difficulties in drawing scientifically justified and clinically useful conclusions. Consequently, we stress that it is the basic science researcher and the clinician's obligation to be mindful of the limitations and challenges of such multi-factorial research projects. A systematic step-by-step approach to address clinical research questions with limited, but highly targeted and well defined research projects provides the solid foundation which may lead to the development of a longer term research program for addressing more challenging clinical problems. Ultimately, we believe that it is such models, encouraging the vital collaboration between clinicians and researchers for the work on targeted, well defined research projects, which will result in answers to the important clinical challenges of today. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  20. Long time stability of small-amplitude Breathers in a mixed FPU-KG model

    NASA Astrophysics Data System (ADS)

    Paleari, Simone; Penati, Tiziano

    2016-12-01

    In the limit of small couplings in the nearest neighbor interaction, and small total energy, we apply the resonant normal form result of a previous paper of ours to a finite but arbitrarily large mixed Fermi-Pasta-Ulam Klein-Gordon chain, i.e., with both linear and nonlinear terms in both the on-site and interaction potential, with periodic boundary conditions. An existence and orbital stability result for Breathers of such a normal form, which turns out to be a generalized discrete nonlinear Schrödinger model with exponentially decaying all neighbor interactions, is first proved. Exploiting such a result as an intermediate step, a long time stability theorem for the true Breathers of the KG and FPU-KG models, in the anti-continuous limit, is proven.

  1. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  2. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID]; Kotter, Dale Kent [Shelley, ID]; Rohrbaugh, David Thomas [Idaho Falls, ID]; Spencer, David Frazer [Idaho Falls, ID]

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
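
    A sketch of the claimed processing chain on a synthetic gradiometer trace: find the peak gradient and when it occurs, form the time ratio, and convert the record to a frequency representation with an FFT. The signal shapes and rates are invented for illustration.

      import numpy as np

      fs, T = 500.0, 4.0                        # sample rate (Hz), record length (s)
      t = np.arange(0, T, 1 / fs)

      # Synthetic magnetic field gradient: slow environmental drift plus the
      # brief signature of a ferromagnetic object passing near t = 2 s.
      grad = 0.05 * np.sin(2 * np.pi * 0.3 * t)
      grad += 2.0 * np.exp(-((t - 2.0) / 0.05) ** 2)

      peak_idx = np.argmax(np.abs(grad))        # peak value and when it occurs
      print("peak:", grad[peak_idx], "at t =", t[peak_idx], "s")

      above = np.abs(grad) > 0.5 * np.abs(grad[peak_idx])
      print("portion of time / period:", above.sum() / grad.size)   # the ratio

      spec = np.abs(np.fft.rfft(grad))          # data as a function of frequency
      freqs = np.fft.rfftfreq(grad.size, 1 / fs)
      # The transient signature spreads into higher-frequency bins than the
      # slow drift, the basis for frequency-domain screening.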

  3. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been extensively used to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrich-Levy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, making the FDTD method inefficient for simulating electromagnetic problems that contain very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction with fine structures to avoid the constraint of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems that have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described, and some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
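
    The CFL restriction and the benefit of treating one fine direction implicitly can be made concrete; the HIE-type bound below (set by the two explicit directions only) is the commonly quoted weakly-conditional limit, used here for illustration with arbitrary mesh sizes.

      import numpy as np

      c = 3.0e8                                   # wave speed (m/s)

      def cfl_dt(*steps):
          """Courant limit on dt for explicit FDTD with the given mesh steps."""
          return 1.0 / (c * np.sqrt(sum(1.0 / h ** 2 for h in steps)))

      coarse = cfl_dt(1e-3, 1e-3, 1e-3)           # uniform coarse mesh
      fine = cfl_dt(1e-3, 1e-3, 1e-6)             # fine structure along z only
      print(fine / coarse)                        # ~1.7e-3: one fine axis throttles dt

      # HIE-FDTD treats z implicitly, so the stable step is set by x and y only:
      hie = cfl_dt(1e-3, 1e-3)
      print(hie / coarse)                         # ~1.22: back to the coarse scale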

  4. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
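
    A minimal IMEX illustration on a split scalar test problem: the stiff ("acoustic") rate is integrated implicitly and the slow rate explicitly, allowing a time step far beyond the fully explicit limit. The rates and step size are illustrative and unrelated to Tempest or ARKode.

      # First-order IMEX Euler on y' = lam_fast*y + lam_slow*y.
      lam_fast, lam_slow = -1.0e4, -1.0       # stiff and non-stiff rates
      dt, nsteps = 1.0e-2, 500                # dt far beyond the explicit limit

      y = 1.0
      for _ in range(nsteps):
          # explicit update of the slow term, implicit solve for the fast term:
          y = (y + dt * lam_slow * y) / (1.0 - dt * lam_fast)
      print(y)                                # stable, smooth decay

      # Fully explicit Euler at this dt would need |1 + dt*lam_fast| <= 1,
      # i.e. dt <= 2e-4; at dt = 1e-2 it blows up immediately.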

  5. The Structure of Walking Activity in People After Stroke Compared With Older Adults Without Disability: A Cross-Sectional Study

    PubMed Central

    Roos, Margaret A.; Rudolph, Katherine S.

    2012-01-01

    Background People with stroke have reduced walking activity. It is not known whether this deficit is due to a reduction in all aspects of walking activity or only in specific areas. Understanding specific walking activity deficits is necessary for the development of interventions that maximize improvements in activity after stroke. Objective The purpose of this study was to examine walking activity in people poststroke compared with older adults without disability. Design A cross-sectional study was conducted. Methods Fifty-four participants poststroke and 18 older adults without disability wore a step activity monitor for 3 days. The descriptors of walking activity calculated included steps per day (SPD), bouts per day (BPD), steps per bout (SPB), total time walking per day (TTW), percentage of time walking per day (PTW), and frequency of short, medium, and long walking bouts. Results Individuals classified as household and limited community ambulators (n=29) did not differ on any measure and were grouped (HHA-LCA group) for comparison with unlimited community ambulators (UCA group) (n=22) and with older adults without disability (n=14). The SPD, TTW, PTW, and BPD measurements were greatest in older adults and lowest in the HHA-LCA group. Seventy-two percent to 74% of all walking bouts were short, and this finding did not differ across groups. Walking in all categories (short, medium, and long) was lowest in the HHA-LCA group, greater in the UCA group, and greatest in older adults without disability. Limitations Three days of walking activity were captured. Conclusions The specific descriptors of walking activity presented provide insight into walking deficits after stroke that cannot be ascertained by looking at steps per day alone. The deficits that were revealed could be addressed through appropriate exercise prescription, underscoring the need to analyze the structure of walking activity. PMID:22677293

  6. Civil & Military Operations: Evolutionary Prep Steps to Pass Smart Power Current Limitations

    DTIC Science & Technology

    2011-06-01

    and outcomes – Identifying the best time, place, and method for action – Reduced ambiguity for action application, reduced side effects – Find the...improvements to arrive at increased accuracy, precision, and reduction of unintended effects. The examples of these streams will demonstrate the...DIME – Diplomatic, Intelligence, Military, and Economic; EBO – Effects Based Operations. 16th International Command and Control Research and

  7. Predicting Protein Structure Using Parallel Genetic Algorithms.

    DTIC Science & Technology

    1994-12-01

    Molecular dynamics attempts to simulate the protein folding process. However, the time steps required for this simulation are on the order of one...harmonics. These two factors have limited molecular dynamics simulations to less than a few nanoseconds (10⁻⁹ s), even on today's fastest supercomputers...

  8. Effects of different excitation waveforms on detection and characterisation of delamination in PV modules by active infrared thermography

    NASA Astrophysics Data System (ADS)

    Sinha, Archana; Gupta, Rajesh

    2017-10-01

    Delamination significantly affects the performance and reliability of photovoltaic (PV) modules. Recently, an active infrared thermography approach using step heating has been exploited for the detection and characterisation of delamination in PV modules. However, step heating requires longer observation times and can cause overheating problems. This paper presents the effects of different thermal excitation waveforms, namely rectangular, half-sine and short pulse, on the detection and characterisation of delamination in PV modules by experiments and simulations. For simulation, a 3-dimensional electro-thermal model of heat conduction, based on a resistance-capacitance network approach, has been exploited to study the variation in maximum thermal contrast and peak contrast time with the delamination thickness and heating parameters. Results show that the rectangular waveform provides better detection of delamination due to higher absolute contrast, while the half-sine waveform allows better characterisation of delamination in PV modules with a low-cost, low-power heat source. The high-energy short pulse enables quick visualisation of delamination but has limited practical applicability. The advantages and limitations of each waveform are highlighted to inform the appropriate choice for non-destructive thermographic inspection of delamination in PV modules at manufacturing units or in outdoor fields.
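
    A lumped first-order sketch of why the excitation waveform matters: with equal peak heating power, a rectangular pulse deposits more energy and produces a larger peak temperature response than a half-sine of the same duration. The time constant is illustrative; the paper's 3D RC-network model is not reproduced here.

      import numpy as np

      tau, dt = 5.0, 0.01                       # assumed thermal time constant (s)
      t = np.arange(0.0, 30.0, dt)
      t_heat = 10.0                             # heating duration (s)

      rect = np.where(t < t_heat, 1.0, 0.0)     # both waveforms have unit peak power
      half_sine = np.where(t < t_heat, np.sin(np.pi * t / t_heat), 0.0)

      def respond(p):
          """Explicit Euler on the lumped model T' = (p - T) / tau."""
          T = np.zeros_like(p)
          for i in range(1, p.size):
              T[i] = T[i - 1] + dt * (p[i - 1] - T[i - 1]) / tau
          return T

      for name, p in (("rectangular", rect), ("half-sine", half_sine)):
          T = respond(p)
          print(name, "peak:", round(T.max(), 3), "at t =", round(t[T.argmax()], 2))
      # Rectangular: peak ~0.86 at the end of heating; half-sine: ~0.63 from
      # the same peak power, i.e. lower absolute thermal contrast.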

  9. LentiPro26: novel stable cell lines for constitutive lentiviral vector production.

    PubMed

    Tomás, H A; Rodrigues, A F; Carrondo, M J T; Coroadinha, A S

    2018-03-27

    Lentiviral vectors (LVs) are excellent tools to promote gene transfer and stable gene expression. Their potential has already been demonstrated in gene therapy clinical trials for the treatment of diverse disorders. For large-scale LV production, a stable producer system is desirable since it allows scalable and cost-effective viral production with increased reproducibility and safety. However, the development of stable systems has been challenging and time-consuming, the main limitations being the selection of cells presenting high expression levels of the Gag-Pro-Pol polyprotein and the cytotoxicity associated with some viral components. Described here is the establishment of a new LV producer cell line using a mutated, less active viral protease to overcome potential cytotoxic limitations. The stable transfection of bicistronic expression cassettes with re-initiation of translation enabled the generation of LentiPro26 packaging populations supporting high titers. Additionally, by skipping intermediate clone screening steps and performing only one final clone screening, it was possible to save time and generate the LentiPro26-A59 cell line, which constitutively produces titers above 10⁶ TU·mL⁻¹·day⁻¹, in less than six months. This work constitutes a step forward towards the development of improved LV producer cell lines, aiming to efficiently supply the expanding clinical gene therapy applications.

  10. Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand

    PubMed Central

    DeLuca, Samuel; Khar, Karen; Meiler, Jens

    2015-01-01

    RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand), making it infeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium-sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full-atom refinement step. PMID:26207742
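    The 'single transformation step' idea lends itself to a short sketch: one Monte Carlo-style move that applies a random rotation and a random translation to the ligand at once. This is an illustrative reconstruction under assumed step sizes, not RosettaLigand's actual code; `transform_step`, `random_rotation`, and the placeholder coordinates are invented here.

```python
# Hypothetical sketch of a combined rigid-body move (rotation + translation
# in one step), loosely modeled on the idea above; not RosettaLigand code.
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(max_angle_deg):
    """Rotation by a random angle (up to max_angle_deg) about a random axis."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    # Rodrigues' rotation formula
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def transform_step(coords, max_trans=0.5, max_angle_deg=10.0):
    """Rotate about the centroid and translate, as a single combined move."""
    center = coords.mean(axis=0)
    R = random_rotation(max_angle_deg)
    t = rng.uniform(-max_trans, max_trans, size=3)
    return (coords - center) @ R.T + center + t

ligand = rng.normal(size=(20, 3))   # placeholder ligand coordinates
ligand = transform_step(ligand)     # one low-resolution placement move
```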

  11. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, high reliability as well as high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz, and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
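    As a rough illustration of the three-stage structure above, the skeleton below wires an area-limitation mask into candidate location and validation. Every function body is a crude stand-in invented here (a 'red pixel' heuristic, one bounding box, an area test), not the authors' algorithm.

```python
# Three-stage pipeline skeleton; all stage internals are placeholder stubs.
import numpy as np

def limit_area(image):
    """Stage 1: cheap color mask restricting where later stages look."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (r > 1.5 * g) & (r > 1.5 * b)        # crude 'red sign' heuristic

def locate_candidates(mask):
    """Stage 2: bounding box(es) of masked pixels (stub: one global box)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return []
    return [(xs.min(), ys.min(), xs.max(), ys.max())]

def validate(boxes):
    """Stage 3: reject implausible candidates (stub: minimum-area test)."""
    return [b for b in boxes if (b[2] - b[0]) * (b[3] - b[1]) > 25]

image = np.random.rand(600, 800, 3)             # stand-in for a database image
signs = validate(locate_candidates(limit_area(image)))
```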

  12. Mobile magnetic particles as solid-supports for rapid surface-based bioanalysis in continuous flow.

    PubMed

    Peyman, Sally A; Iles, Alexander; Pamme, Nicole

    2009-11-07

    An extremely versatile microfluidic device is demonstrated in which multi-step (bio)chemical procedures can be performed in continuous flow. The system operates by generating several co-laminar flow streams, which contain reagents for specific (bio)reactions across a rectangular reaction chamber. Functionalized magnetic microparticles are employed as mobile solid-supports and are pulled from one side of the reaction chamber to the other by use of an external magnetic field. As the particles traverse the co-laminar reagent streams, binding and washing steps are performed on their surface in one operation in continuous flow. The applicability of the platform was first demonstrated by performing a proof-of-principle binding assay between streptavidin-coated magnetic particles and biotin in free solution with a limit of detection of 20 ng mL⁻¹ of free biotin. The system was then applied to a mouse IgG sandwich immunoassay as a first example of a process involving two binding steps and two washing steps, all performed within 60 s, a fraction of the time required for conventional testing.

  13. An Automated Method of Scanning Probe Microscopy (SPM) Data Analysis and Reactive Site Tracking for Mineral-Water Interface Reactions Observed at the Nanometer Scale

    NASA Astrophysics Data System (ADS)

    Campbell, B. D.; Higgins, S. R.

    2008-12-01

    Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than manual analysis, with the two approaches being statistically similar. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.
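    The edge-detection-plus-Hough analysis described above can be sketched with scikit-image. The naive sorted-distance pairing in `step_speed` stands in for the paper's statistical analysis, and the function names and test data are invented here.

```python
# Sketch: detect near-linear step edges, then track their displacement
# between sequential frames via the Hough-line distances.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def step_edge_positions(height_image):
    """Return (angle, distance) parameters of prominent straight edges."""
    edges = canny(height_image)                  # standard edge detection
    hspace, angles, dists = hough_line(edges)    # Hough transform of edge map
    _, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists)
    return list(zip(peak_angles, peak_dists))

def step_speed(lines_t0, lines_t1, dt):
    """Mean edge displacement per unit time (naive pairing by sorted distance)."""
    d0 = np.sort([d for _, d in lines_t0])
    d1 = np.sort([d for _, d in lines_t1])
    n = min(d0.size, d1.size)
    if n == 0:
        return 0.0
    return float(np.mean(np.abs(d1[:n] - d0[:n]))) / dt

# usage on two sequential SPM frames (random stand-ins here):
f0, f1 = np.random.rand(128, 128), np.random.rand(128, 128)
speed = step_speed(step_edge_positions(f0), step_edge_positions(f1), dt=10.0)
```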

  14. Transition path time distributions for Lévy flights

    NASA Astrophysics Data System (ADS)

    Janakiraman, Deepika

    2018-07-01

    This paper presents a study of transition path time distributions for Lévy noise-induced barrier crossing. Transition paths are short segments of the reactive trajectories that span the barrier region of the potential without spilling into the reactant/product wells. The time taken to traverse this segment is referred to as the transition path time. Since the transition path is devoid of excursions in the minimum, the corresponding time gives the exclusive barrier crossing time, unlike the mean first-passage time. This work explores the distribution of transition path times for superdiffusive barrier crossing analytically. This is made possible by approximating the barrier by an inverted parabola. Using this approximation, the distributions are evaluated in both the over- and under-damped limits of friction. The short-time behaviour of the distributions provides analytical evidence for single-step transition events, a feature of Lévy barrier crossing observed in prior simulation studies. The average transition path time is calculated as a function of the Lévy index (α), and the optimal value of α leading to the minimum average transition path time is discussed in both limits of friction. Langevin dynamics simulations corroborating the analytical results are also presented.

  15. High-speed time-reversed ultrasonically encoded (TRUE) optical focusing inside dynamic scattering media at 793 nm

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Lai, Puxiang; Ma, Cheng; Xu, Xiao; Suzuki, Yuta; Grabar, Alexander A.; Wang, Lihong V.

    2014-03-01

    Time-reversed ultrasonically encoded (TRUE) optical focusing is an emerging technique that focuses light deep into scattering media by phase-conjugating ultrasonically encoded diffuse light. In previous work, the speed of TRUE focusing was limited to no faster than 1 Hz by the response time of the photorefractive phase conjugate mirror, or the data acquisition and streaming speed of the digital camera; photorefractive-crystal-based TRUE focusing was also limited to the visible spectral range. These time-consuming schemes prevent this technique from being applied in vivo, since living biological tissue has a speckle decorrelation time on the order of a millisecond. In this work, using a Te-doped Sn2P2S6 photorefractive crystal at a near-infrared wavelength of 793 nm, we achieved TRUE focusing inside dynamic scattering media having a speckle decorrelation time as short as 7.7 ms. As the achieved speed approaches the tissue decorrelation rate, this work is an important step forward toward in vivo applications of TRUE focusing in deep tissue imaging, photodynamic therapy, and optical manipulation.

  16. 29 CFR 1926.1053 - Ladders.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Stairways and Ladders § 1926.1053 Ladders. Link to an... structural defects, such as, but not limited to, broken or missing rungs, cleats, or steps, broken or split..., such as, but not limited to, broken or missing rungs, cleats, or steps, broken or split rails, or...

  17. 16 CFR 701.3 - Written warranty terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... in compliance with part 703 of this subchapter; (7) Any limitations on the duration of implied... warranty duration; (5) A step-by-step explanation of the procedure which the consumer should follow in... following statement: Some States do not allow limitations on how long an implied warranty lasts, so the...

  18. Modeling the heterogeneous catalytic activity of a single nanoparticle using a first passage time distribution formalism

    NASA Astrophysics Data System (ADS)

    Das, Anusheela; Chaudhury, Srabanti

    2015-11-01

    Metal nanoparticles are heterogeneous catalysts and have a multitude of non-equivalent, catalytic sites on the nanoparticle surface. The product dissociation step in such reaction schemes can follow multiple pathways. Proposed here for the first time is a completely analytical theoretical framework, based on the first passage time distribution, that incorporates the effect of heterogeneity in nanoparticle catalysis explicitly by considering multiple, non-equivalent catalytic sites on the nanoparticle surface. Our results show that in nanoparticle catalysis, the effect of dynamic disorder is manifested even at limiting substrate concentrations in contrast to an enzyme that has only one well-defined active site.

  19. Serial transverse enteroplasty to facilitate enteral autonomy in selected children with short bowel syndrome.

    PubMed

    Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E

    2014-09-01

    Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There were no deaths at a median follow-up of 37·2 (range 3·0-87·5) months, and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.

  20. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  1. How many steps/day are enough? for children and adolescents

    PubMed Central

    2011-01-01

    Worldwide, public health physical activity guidelines include special emphasis on populations of children (typically 6-11 years) and adolescents (typically 12-19 years). Existing guidelines are commonly expressed in terms of frequency, time, and intensity of behaviour. However, the simple step output from both accelerometers and pedometers is gaining increased credibility in research and practice as a reasonable approximation of daily ambulatory physical activity volume. Therefore, the purpose of this article is to review existing child and adolescent objectively monitored step-defined physical activity literature to provide researchers, practitioners, and lay people who use accelerometers and pedometers with evidence-based translations of these public health guidelines in terms of steps/day. In terms of normative data (i.e., expected values), the updated international literature indicates that we can expect 1) among children, boys to average 12,000 to 16,000 steps/day and girls to average 10,000 to 13,000 steps/day; and, 2) adolescents to steadily decrease steps/day until approximately 8,000-9,000 steps/day are observed in 18-year-olds. Controlled studies of cadence show that continuous MVPA walking produces 3,300-3,500 steps in 30 minutes or 6,600-7,000 steps in 60 minutes in 10-15-year-olds. Limited evidence suggests that a total daily physical activity volume of 10,000-14,000 steps/day is associated with 60-100 minutes of MVPA in preschool children (approximately 4-6 years of age). Across studies, 60 minutes of MVPA in primary/elementary school children appears to be achieved, on average, within a total volume of 13,000 to 15,000 steps/day in boys and 11,000 to 12,000 steps/day in girls. For adolescents (both boys and girls), 10,000 to 11,700 steps/day may be associated with 60 minutes of MVPA. Translations of time- and intensity-based guidelines may be higher than existing normative data (e.g., in adolescents) and therefore will be more difficult to achieve (but not impossible nor contraindicated). Recommendations are preliminary and further research is needed to confirm and extend values for measured cadences, associated speeds, and MET values in young people; continue to accumulate normative data (expected values) for both steps/day and MVPA across ages and populations; and, conduct longitudinal and intervention studies in children and adolescents required to inform the shape of step-defined physical activity dose-response curves associated with various health parameters. PMID:21798014

  2. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recording of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory for interpreting quantitative imaging. To fulfill this need, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset proposes (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method for the image used as the denominator of the ratio to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving signals from both small and large ensembles over time. For example, we followed and quantified, live, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile, and efficient toolset for automated, reproducible, and meaningful image ratio analysis.
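    A minimal numpy rendering of steps (1), (3), (4), and (5) above; the alignment of step (2) is omitted. BRET-Analyzer itself is a Fiji/ImageJ toolset, so the threshold rule and function name below are illustrative assumptions only.

```python
# Sketch of a ratiometric pipeline: background subtraction, denominator
# thresholding to delimit the specimen, masked pixel-by-pixel division,
# then mean/SD of the ratio over the retained pixels.
import numpy as np

def ratio_image(numerator, denominator, background, rel_threshold=0.1):
    num = numerator.astype(float) - background     # (1) background subtraction
    den = denominator.astype(float) - background
    mask = den > rel_threshold * den.max()         # (3) threshold the denominator
    ratio = np.full(num.shape, np.nan)
    ratio[mask] = num[mask] / den[mask]            # (4) pixel-by-pixel division
    return ratio, np.nanmean(ratio), np.nanstd(ratio)   # (5) mean and SD

num = np.random.rand(64, 64) + 0.2                 # stand-in acceptor image
den = np.random.rand(64, 64) + 0.2                 # stand-in donor image
ratio, mean_ratio, sd_ratio = ratio_image(num, den, background=0.1)
```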

  3. Solar System Chaos and Orbital Solutions for Paleoclimate Studies: Limits and New Results

    NASA Astrophysics Data System (ADS)

    Zeebe, R. E.

    2017-12-01

    I report results from accurate numerical integrations of Solar System orbits over the past 100 Myr. The simulations used different integrator algorithms, step sizes, and initial conditions (NASA, INPOP), and included effects from general relativity, different models of the Moon, the Sun's quadrupole moment, and up to ten asteroids. In one simulation, I probed the potential effect of a hypothetical Planet 9 on the dynamics of the system. The most expensive integration required 4 months wall-clock time (Bulirsch-Stoer algorithm) and showed a maximum relative energy error < 2.5 × 10⁻¹³ over the past 100 Myr. The difference in Earth's eccentricity (DeE) was used to track the difference between two solutions, which were considered to diverge at time tau when DeE irreversibly crossed 10% of Earth's mean eccentricity (0.028 × 0.1). My results indicate that finding a unique orbital solution is limited by initial conditions from current ephemerides to 54 Myr. Bizarrely, the 4-month Bulirsch-Stoer integration and a different integration scheme that required only 5 hours wall-clock time (symplectic, 12-day time step, Moon as a simple quadrupole perturbation) agree to 63 Myr. Solutions including 3 and 10 asteroids diverge at tau ≈ 48 Myr. The effect of a hypothetical Planet 9 on DeE becomes discernible at 66 Myr. Using tau as a criterion, the current state-of-the-art solutions all differ from previously published results beyond 50 Myr. The current study provides new orbital solutions for application in geological studies. I will also comment on the prospect of constraining astronomical solutions by geologic data.

  4. Accuracy of the Garmin 920 XT HRM to perform HRV analysis.

    PubMed

    Cassirame, Johan; Vanhaesebrouck, Romain; Chevrolat, Simon; Mourot, Laurent

    2017-12-01

    Heart rate variability (HRV) analysis is widely used to investigate autonomic cardiac drive. This method requires measurement of the RR-interval time series, which can be obtained from an electrocardiogram (ECG) or from a heart rate monitor (HRM), e.g. the Garmin 920 XT device. The purpose of this investigation was to assess the accuracy of RR time series measurements from a Garmin 920 XT HRM as compared to a standard ECG, and to verify whether the measurements thus obtained are suitable for HRV analysis. RR time series were collected simultaneously with an ECG (Powerlab system, AD Instruments, Castle Hill, Australia) and a Garmin 920 XT in 11 healthy subjects under three conditions, namely the supine position, the standing position, and moderate exercise. In a first step, we compared the RR time series obtained with both tools using the Bland and Altman method to obtain the limits of agreement in all three conditions. In a second step, we compared the results of HRV analysis between the ECG RR time series and the Garmin 920 XT series. Results show that the accuracy of this system is in accordance with the literature in terms of the limits of agreement: in the supine position, the bias was 0.01 ms with limits of agreement of -2.24 and +2.26 ms; in the standing position, -0.01 ms (-3.12, +3.11 ms); and during exercise, -0.01 ms (-4.43, +4.40 ms). Regarding HRV analysis, we did not find any difference in the supine position, but the standing and exercise conditions both showed small modifications.
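    The Bland and Altman computation behind these numbers reduces to a few lines; this sketch assumes paired RR series in milliseconds and the conventional bias ± 1.96·SD limits of agreement.

```python
# Bland-Altman bias and 95% limits of agreement for paired RR series.
import numpy as np

def bland_altman(rr_ecg, rr_hrm):
    diff = np.asarray(rr_hrm, float) - np.asarray(rr_ecg, float)  # paired diffs (ms)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits
    return bias, bias - half_width, bias + half_width

bias, lower, upper = bland_altman([800, 810, 795, 820], [801, 808, 796, 821])
```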

  5. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  6. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  7. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  8. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  9. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
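    A toy version of rank-based throttling in a rejection-free KMC step is sketched below. The speed-rank definition and the one-decade-per-rank throttle are simplifications invented for illustration; the actual SQERTSS staggering and quasi-equilibrium checks are more elaborate.

```python
# Toy rank-based rate throttling ahead of a standard n-fold-way KMC step.
import numpy as np

rng = np.random.default_rng(1)

def throttle(rates, decades_per_rank=1.0):
    """Scale each process's rate down according to its speed rank (0 = slowest)."""
    ranks = np.floor(np.log10(rates / rates.min()))
    return rates / 10.0 ** (decades_per_rank * ranks)

def kmc_step(rates):
    """Pick one event and a time increment (rejection-free KMC)."""
    total = rates.sum()
    event = rng.choice(rates.size, p=rates / total)
    dt = -np.log(rng.random()) / total
    return event, dt

raw = np.array([1e9, 1e6, 1.0])       # fast frivolous vs. rate-limiting processes
event, dt = kmc_step(throttle(raw))   # slow events are now far likelier to fire
```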

  10. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.

  11. Preparation of next-generation sequencing libraries using Nextera™ technology: simultaneous DNA fragmentation and adaptor tagging by in vitro transposition.

    PubMed

    Caruccio, Nicholas

    2011-01-01

    DNA library preparation is a common entry point and bottleneck for next-generation sequencing. Current methods generally consist of distinct steps that often involve significant sample loss and hands-on time: DNA fragmentation, end-polishing, and adaptor-ligation. In vitro transposition with Nextera™ Transposomes simultaneously fragments and covalently tags the target DNA, thereby combining these three distinct steps into a single reaction. Platform-specific sequencing adaptors can be added, and the sample can be enriched and bar-coded using limited-cycle PCR to prepare di-tagged DNA fragment libraries. Nextera technology offers a streamlined, efficient, and high-throughput method for generating bar-coded libraries compatible with multiple next-generation sequencing platforms.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
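    For intuition, here is a hedged sketch of prediction-correction tracking on a toy problem f(x; t) = 0.5*||x - c(t)||^2 with a box constraint. For simplicity the prediction uses a finite-difference Taylor prediction of the gradient rather than the paper's Hessian-based step; c(t), the step sizes, and the horizon are all invented.

```python
# Prediction-correction tracking of a time-varying constrained optimum (toy).
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    """Projection onto the box constraint set [lo, hi]^n."""
    return np.clip(x, lo, hi)

def track(c, x0, h=0.1, alpha=0.5, steps=50):
    x, t = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        grad = x - c(t)                          # gradient at the current time
        dgrad_dt = -(c(t + h) - c(t)) / h        # finite-difference time drift
        x = project(x - alpha * (grad + h * dgrad_dt))   # prediction step
        t += h
        x = project(x - alpha * (x - c(t)))      # correction at the new time
    return x

x_track = track(lambda t: np.array([np.sin(t), np.cos(t)]), [0.0, 0.0])
```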

  13. Detection limits for nanoparticles in solution with classical turbidity spectra

    NASA Astrophysics Data System (ADS)

    Le Blevennec, G.

    2013-09-01

    Detection of nanoparticles in solution is required to manage safety and environmental problems. The spectral transmission turbidity method has been known for a long time. It is derived from Mie theory and can be applied to any number of spheres, randomly distributed and separated by distances large compared to the wavelength. Here, we describe a method for determining the size, distribution, and concentration of nanoparticles in solution using UV-Vis transmission measurements. The method combines Mie and Beer-Lambert computations integrated in a best-fit approximation. In a first step, the approach is validated on a silver nanoparticle solution. The results are verified with Transmission Electron Microscopy measurements for size distribution and Inductively Coupled Plasma Mass Spectrometry for concentration. In view of the good agreement obtained, a second step of the work focuses on how to choose the concentration so as to be most accurate on the size distribution. These efficient conditions are determined by simple computation. As we are dealing with nanoparticles, one of the key points is to know what size limits are reachable with this kind of approach, which is based on classical electromagnetism. Taking into account the accuracy limit of the transmission spectrometer, we determine, for several types of materials (metals, dielectrics, semiconductors), the particle size limit detectable by such a turbidity method. These surprising results lie at the frontier with quantum physics.
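    The forward model implied above, Mie extinction fed into Beer-Lambert attenuation, is compact. In this sketch the extinction efficiency `q_ext` is a stand-in number rather than an actual Mie computation, and all parameter values are illustrative.

```python
# Beer-Lambert transmission from an (assumed) Mie extinction efficiency.
import numpy as np

def transmission(diameter_nm, number_density_m3, path_m, q_ext):
    """T = exp(-N * C_ext * L), with C_ext = Q_ext * geometric cross-section."""
    radius_m = 0.5e-9 * diameter_nm
    c_ext = q_ext * np.pi * radius_m**2          # extinction cross-section (m^2)
    return np.exp(-number_density_m3 * c_ext * path_m)

# 50 nm particles, 1e17 m^-3, 1 cm path, stand-in Q_ext at one wavelength:
T = transmission(50.0, 1e17, 0.01, q_ext=0.5)    # about 0.37 here
```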

  14. Smoke and mirrors: Ultra-rapid-scan FT-IR spectrometry

    NASA Astrophysics Data System (ADS)

    Manning, C. J.

    1998-06-01

    Fourier transform-infrared spectrometers have dominated the marketplace and the experimental literature of vibrational spectroscopy for almost three decades. These versatile instruments have been applied to a wide variety of measurements in both industrial and research settings. There has been, however, an ongoing need for enhanced time resolution. Limitations of time resolution in FT-IR measurements arise from the modulation frequencies intrinsic to the spectral multiplexing. Events which are slower than the minimum scan time, about 40 milliseconds at 4 cm⁻¹ resolution, can be readily monitored with conventional instrumentation. For shorter transients, various step-scan, stroboscopic and asynchronous methods have been demonstrated to provide excellent time resolution, down to nanoseconds, but these approaches are limited to events which can be repeated many times with minimal variations. Some of these methods are also susceptible to low-frequency noise sources. The intrinsic scan time of conventional FT-IR spectrometers is limited by the force that can be applied to the moving mirror. In commercial systems the moving mirror is invariably driven by a voice coil linear motor. The maximum force that can be exerted by the voice coil is sharply limited to a few Newtons. It is desirable to decrease the scan time by a large factor, but the required force scales as the square of the scan rate, while the voltage applied to the coil must scale as the cube of the rate. A more suitable approach to very-rapid-scan FT-IR spectrometry may be the use of rotating optical components which do not have to turn around at the end of travel. There is, however, an apparent symmetry mismatch between rotating elements and the nominally planar wavefronts in a Michelson interferometer. In spite of the mismatch, numerous interferometer designs based on rotating elements have been proposed and demonstrated. Some of these designs are suitable for operation with scan times from tens of milliseconds to milliseconds, and perhaps faster, at 4 cm⁻¹ resolution. A novel interferometer design utilizing a single-sided precessing disk mirror allows a complete interferogram to be measured in 1 millisecond or less. A prototype instrument of this design has been constructed and tested. One application reported here is the measurement of a transient combustion event. While combustion reactions can be conveniently repeated under some circumstances, such as with gas-phase reactants, the shot-to-shot variation is unacceptably large for step-scan measurements. Preliminary data, illustrating operation and performance of the system, are presented. It is thought that the high modulation frequencies have resulted in superior rejection of multiplicative noise.

  15. Glass frit nebulizer for atomic spectrometry

    USGS Publications Warehouse

    Layman, L.R.

    1982-01-01

    The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how this new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 µm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.

  16. Delayed photolysis of liposomes: a strategy for the precision timing of bolus drug release using ex-vivo photochemical sensitization

    NASA Astrophysics Data System (ADS)

    Kozikowski, Raymond T.; Sorg, Brian S.

    2012-03-01

    Chemotherapy is a standard treatment for metastatic cancer. However, drug toxicity limits the dosage that can safely be used, thus reducing treatment efficacy. Drug carrier particles, like liposomes, can help reduce toxicity by shielding normal tissue from drug and selectively depositing drug in tumors. Over years of development, liposomes have been optimized to avoid uptake by the Reticuloendothelial System (RES) as well as to effectively retain their drug content during circulation. As a result, liposomes release drug passively, by slow leakage, but this uncontrolled drug release can limit treatment efficacy as it can be difficult to achieve therapeutic concentrations of drug at tumor sites even with tumor-specific accumulation of the carriers. Lipid membranes can be photochemically lysed by both Type I (photosensitizer-substrate) and Type II (photosensitizer-oxygen) reactions. It has been demonstrated in red blood cells (RBCs) in vitro that these photolysis reactions can occur in two distinct steps: a light-initiated reaction followed by a thermally-initiated reaction. These separable activation steps allow for the delay of photohemolysis in a controlled manner using the irradiation energy, temperature, and photosensitizer concentration. In this work we have translated this technique from RBCs to liposomal nanoparticles. To that end, we present in vitro data demonstrating this delayed bolus release from liposomes, as well as the ability to control the timing of this event. Further, we demonstrate for the first time the improved delivery of bioavailable cargo selectively to target sites in vivo.

  17. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  18. Upper Limit of Weights in TAI Computation

    NASA Technical Reports Server (NTRS)

    Thomas, Claudine; Azoubib, Jacques

    1996-01-01

    The international reference time scale International Atomic Time (TAI) computed by the Bureau International des Poids et Mesures (BIPM) relies on a weighted average of data from a large number of atomic clocks. In it, the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing this to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improvement of the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
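    The fixed upper limit of relative weight discussed above can be illustrated by capping and renormalizing clock weights iteratively; the cap value, the iteration scheme, and the function name are illustrative assumptions here, not the BIPM's actual algorithm.

```python
# Cap each clock's relative contribution, redistributing the excess weight.
import numpy as np

def capped_weights(raw, cap=0.01):
    """Normalize weights so that no single clock exceeds `cap` of the total."""
    w = np.asarray(raw, dtype=float)
    w /= w.sum()
    for _ in range(100):             # iterate: capping changes the other shares
        over = w > cap
        if not over.any():
            break
        w[over] = cap
        free = ~over
        w[free] *= (1.0 - cap * over.sum()) / w[free].sum()
    return w

w = capped_weights([5.0, 1.0, 1.0, 1.0, 0.5], cap=0.30)
# w -> [0.3, 0.2, 0.2, 0.2, 0.1]; the ensemble time is then np.dot(w, readings)
```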

  19. One-step multiplex real-time RT-PCR assay for detecting and genotyping wild-type group A rotavirus strains and vaccine strains (Rotarix® and RotaTeq®) in stool samples.

    PubMed

    Gautam, Rashi; Mijatovic-Rustempasic, Slavica; Esona, Mathew D; Tam, Ka Ian; Quaye, Osbourne; Bowen, Michael D

    2016-01-01

    Background. Group A rotavirus (RVA) infection is the major cause of acute gastroenteritis (AGE) in young children worldwide. Introduction of two live-attenuated rotavirus vaccines, RotaTeq® and Rotarix®, has dramatically reduced RVA-associated AGE and mortality in developed as well as in many developing countries. High-throughput methods are needed to genotype rotavirus wild-type strains and to identify vaccine strains in stool samples. Quantitative RT-PCR assays (qRT-PCR) offer several advantages including increased sensitivity, higher throughput, and faster turnaround time. Methods. In this study, a one-step multiplex qRT-PCR assay was developed to detect and genotype wild-type strains and vaccine (Rotarix® and RotaTeq®) rotavirus strains along with an internal processing control (Xeno or MS2 RNA). Real-time RT-PCR assays were designed for VP7 (G1, G2, G3, G4, G9, G12) and VP4 (P[4], P[6] and P[8]) genotypes. The multiplex qRT-PCR assay also included the previously published NSP3 qRT-PCR for rotavirus detection, and Rotarix® NSP2 and RotaTeq® VP6 qRT-PCRs for detection of Rotarix® and RotaTeq® vaccine strains, respectively. The multiplex qRT-PCR assay was validated using 853 sequence-confirmed stool samples and 24 lab-cultured strains of different rotavirus genotypes. By using a thermostable rTth polymerase enzyme, the dsRNA denaturation, reverse transcription (RT), and amplification (PCR) steps were performed in a single tube with an uninterrupted thermocycling profile, reducing the chance of sample cross-contamination and allowing rapid generation of results. For quantification, standard curves were generated using dsRNA transcripts derived from RVA gene segments. Results. The VP7 qRT-PCRs exhibited 98.8-100% sensitivity, 99.7-100% specificity, 85-95% efficiency, and a limit of detection of 4-60 copies per singleplex reaction. The VP7 qRT-PCRs exhibited 81-92% efficiency and a limit of detection of 150-600 copies in multiplex reactions. The VP4 qRT-PCRs exhibited 98.8-100% sensitivity, 100% specificity, 86-89% efficiency, and a limit of detection of 12-400 copies per singleplex reaction. The VP4 qRT-PCRs exhibited 82-90% efficiency and a limit of detection of 120-4000 copies in multiplex reactions. Discussion. The one-step multiplex qRT-PCR assay will facilitate high-throughput rotavirus genotype characterization for monitoring circulating rotavirus wild-type strains causing rotavirus infections, determining the frequency of Rotarix® and RotaTeq® vaccine strains and vaccine-derived reassortants associated with AGE, and help to identify novel rotavirus strains derived by reassortment between vaccine and wild-type strains.

  20. One-step multiplex real-time RT-PCR assay for detecting and genotyping wild-type group A rotavirus strains and vaccine strains (Rotarix® and RotaTeq®) in stool samples

    PubMed Central

    Mijatovic-Rustempasic, Slavica; Esona, Mathew D.; Tam, Ka Ian; Quaye, Osbourne; Bowen, Michael D.

    2016-01-01

    Background. Group A rotavirus (RVA) infection is the major cause of acute gastroenteritis (AGE) in young children worldwide. Introduction of two live-attenuated rotavirus vaccines, RotaTeq® and Rotarix®, has dramatically reduced RVA-associated AGE and mortality in developed as well as in many developing countries. High-throughput methods are needed to genotype rotavirus wild-type strains and to identify vaccine strains in stool samples. Quantitative RT-PCR assays (qRT-PCR) offer several advantages including increased sensitivity, higher throughput, and faster turnaround time. Methods. In this study, a one-step multiplex qRT-PCR assay was developed to detect and genotype wild-type strains and vaccine (Rotarix® and RotaTeq®) rotavirus strains along with an internal processing control (Xeno or MS2 RNA). Real-time RT-PCR assays were designed for VP7 (G1, G2, G3, G4, G9, G12) and VP4 (P[4], P[6] and P[8]) genotypes. The multiplex qRT-PCR assay also included the previously published NSP3 qRT-PCR for rotavirus detection, and Rotarix® NSP2 and RotaTeq® VP6 qRT-PCRs for detection of Rotarix® and RotaTeq® vaccine strains, respectively. The multiplex qRT-PCR assay was validated using 853 sequence-confirmed stool samples and 24 lab-cultured strains of different rotavirus genotypes. By using a thermostable rTth polymerase enzyme, the dsRNA denaturation, reverse transcription (RT), and amplification (PCR) steps were performed in a single tube with an uninterrupted thermocycling profile, reducing the chance of sample cross-contamination and allowing rapid generation of results. For quantification, standard curves were generated using dsRNA transcripts derived from RVA gene segments. Results. The VP7 qRT-PCRs exhibited 98.8–100% sensitivity, 99.7–100% specificity, 85–95% efficiency, and a limit of detection of 4–60 copies per singleplex reaction. The VP7 qRT-PCRs exhibited 81–92% efficiency and a limit of detection of 150–600 copies in multiplex reactions. The VP4 qRT-PCRs exhibited 98.8–100% sensitivity, 100% specificity, 86–89% efficiency, and a limit of detection of 12–400 copies per singleplex reaction. The VP4 qRT-PCRs exhibited 82–90% efficiency and a limit of detection of 120–4000 copies in multiplex reactions. Discussion. The one-step multiplex qRT-PCR assay will facilitate high-throughput rotavirus genotype characterization for monitoring circulating rotavirus wild-type strains causing rotavirus infections, determining the frequency of Rotarix® and RotaTeq® vaccine strains and vaccine-derived reassortants associated with AGE, and help to identify novel rotavirus strains derived by reassortment between vaccine and wild-type strains. PMID:26839745

  1. Notes on the ExactPack Implementation of the DSD Rate Stick Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaul, Ann

    It has been shown above that the discretization scheme implemented in the ExactPack solver for the DSD Rate Stick equation is consistent with the Rate Stick PDE. In addition, a stability analysis has provided a CFL condition for a stable time step. Together, consistency and stability imply convergence of the scheme, which is expected to be close to first-order in time and second-order in space. It is understood that the nonlinearity of the underlying PDE will affect this rate somewhat. In the solver I implemented in ExactPack, I used the one-sided boundary condition described above at the outer boundary. In addition, I used 80% of the time step calculated in the stability analysis above. By making these two changes, I was able to implement a solver that calculates the solution without any arbitrary limits placed on the values of the curvature at the boundary. Thus, the calculation is driven directly by the conditions at the boundary as formulated in the DSD theory. The chosen scheme is completely coherent and defensible from a mathematical standpoint.
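    The '80% of the calculated time step' choice reads as a plain safety factor applied to the CFL bound. As a generic illustration only (the actual Rate Stick bound depends on the scheme; a diffusion-like bound dt <= dx^2/(2*D) is assumed here):

```python
# Stable time step from an assumed diffusion-like CFL bound, with the 80%
# safety factor mentioned above; constants are illustrative, not ExactPack's.
def stable_dt(dx, D, safety=0.8):
    """Return safety * dx^2 / (2*D)."""
    return safety * dx * dx / (2.0 * D)

dt = stable_dt(dx=0.01, D=1.0)   # 4.0e-05 for these illustrative values
```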

  2. Mechanism of Nitrogenase H 2 Formation by Metal-Hydride Protonation Probed by Mediated Electrocatalysis and H/D Isotope Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khadka, Nimesh; Milton, Ross D.; Shaw, Sudipta

    Nitrogenase catalyzes the reduction of dinitrogen (N2) to ammonia (NH3) with obligatory reduction of protons (H+) to dihydrogen (H2) through a mechanism involving reductive elimination of two [Fe-H-Fe] bridging hydrides at its active site FeMo-cofactor. The overall rate-limiting step is associated with ATP-driven electron delivery from Fe protein, precluding isotope effect measurements on substrate reduction steps. Here, we use mediated bioelectrocatalysis to drive electron delivery to MoFe protein without Fe protein and ATP hydrolysis, thereby eliminating the normal rate-limiting step. The ratio of catalytic current in mixtures of H2O and D2O, the proton inventory, changes linearly with the D2O/H2O ratio, revealing that a single H/D is involved in the rate-limiting step. Kinetic models, along with measurements that vary the electron/proton delivery rate and use different substrates, reveal that the rate-limiting step under these conditions is the H2 formation reaction. Altering the chemical environment around the active site FeMo-cofactor in the MoFe protein, either by substituting nearby amino acids or by transferring the isolated FeMo-cofactor into a different peptide matrix, changes the net isotope effect, but the proton inventory plot remains linear, consistent with an unchanging rate-limiting step. Density functional theory predicts a transition state for H2 formation where the proton from S-H+ moves to the hydride in Fe-H-, predicting the number and magnitude of the observed H/D isotope effect. This study not only reveals the mechanism of H2 formation, but also illustrates a strategy for mechanistic study that can be applied to other enzymes and to biomimetic complexes.

  3. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    NASA Astrophysics Data System (ADS)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method which is physically consistent with the hydrologic processes that can capture short-term characteristics of daily hydrograph as well as the correlation of streamflow in temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin are required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
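    Step 1 above (a two-state Markov chain with Gamma-distributed rising-limb increments and exponential recession) is easy to sketch; all parameter values and transition probabilities below are invented for illustration, and the step-2 KNN bootstrap is not shown.

```python
# Step-1 sketch: two-state (rise/fall) Markov chain daily flow generator.
import numpy as np

rng = np.random.default_rng(42)

def generate_flow(n_days, p_stay_rise=0.4, p_to_rise=0.2,
                  shape=2.0, scale=5.0, k_rec=0.15, q0=10.0):
    q, state = [q0], "rise"
    for _ in range(n_days - 1):
        if state == "rise":
            q.append(q[-1] + rng.gamma(shape, scale))   # Gamma rising increment
        else:
            q.append(q[-1] * np.exp(-k_rec))            # exponential recession
        p_rise = p_stay_rise if state == "rise" else p_to_rise
        state = "rise" if rng.random() < p_rise else "fall"
    return np.array(q)

daily_q = generate_flow(365)   # step 2 (KNN bootstrap) would resample this
```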

  4. Promoting ADL independence in vulnerable, community-dwelling older adults: a pilot RCT comparing 3-Step Workout for Life versus resistance exercise

    PubMed Central

    Liu, Chiung-ju; Xu, Huiping; Keith, NiCole R; Clark, Daniel O

    2017-01-01

    Background Resistance exercise is effective at increasing muscle strength in older adults; however, its effect on the outcome of activities of daily living is often limited. The purpose of this study was to examine whether 3-Step Workout for Life (which combines resistance exercise, functional exercise, and activities of daily living exercise) would be more beneficial than resistance exercise alone. Methods A single-blind randomized controlled trial was conducted. Fifty-two inactive, community-dwelling older adults (mean age =73 years) with muscle weakness and difficulty in activities of daily living were randomized to receive 3-Step Workout for Life or resistance exercise only. Participants in the 3-Step Workout for Life Group performed functional movements and selected activities of daily living at home in addition to resistance exercise. Participants in the Resistance Exercise Only Group performed resistance exercise only. Both groups were comparable in exercise intensity (moderate), duration (50–60 minutes each time for 10 weeks), and frequency (three times a week). Assessment of Motor and Process Skills, a standard performance test on activities of daily living, was administered at baseline, postintervention, and 6 months after intervention completion. Results At postintervention, the 3-Step Workout for Life Group showed improvement on the outcome measure (mean change from baseline =0.29, P=0.02), but the improvement was not greater than the Resistance Exercise Only Group (group mean difference =0.24, P=0.13). However, the Resistance Exercise Only Group showed a significant decline (mean change from baseline =−0.25, P=0.01) 6 months after the intervention completion. Meanwhile, the superior effect of 3-Step Workout for Life was observed (group mean difference =0.37, P<0.01). Conclusion Compared to resistance exercise alone, 3-Step Workout for Life improves the performance of activities of daily living and attenuates the disablement process in older adults. PMID:28769559

  5. Theoretical analysis of Lumry-Eyring models in differential scanning calorimetry

    PubMed Central

    Sanchez-Ruiz, Jose M.

    1992-01-01

    A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible, but the rate-limited, irreversible step takes place with a significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) Same as C, but in this case the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in the recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim. PMID:19431826
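
    For orientation, the simplest Lumry-Eyring scheme couples reversible two-state unfolding to a rate-limited irreversible step (rate-constant labels assumed here, not taken from the paper):

      \mathrm{N} \underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}} \mathrm{U} \xrightarrow{\;k_{2}\;} \mathrm{F}

    Situations A, B, C and C′ then correspond to different relative magnitudes of k_2 and the unfolding/refolding rates over the temperature range of the transition.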

  6. Stability and delay sensitivity of neutral fractional-delay systems.

    PubMed

    Xu, Qi; Shi, Min; Wang, Zaihua

    2016-08-01

    This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise forms, one parameter-dependent and one parameter-independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given demonstrating the proposed method as well as the delay sensitivity.

  7. Response of the Cardiovascular System to Vibration and Combined Stresses

    DTIC Science & Technology

    1980-11-01

    flow meter (Zepeda Instruments) and our dimension meter (Schussler and Associates) resulted in two suggestions: 1) an outline of possible steps to take... tionally, the flowmeter gate was not adjustable, further limiting our timing ability. Given the features of the Zepeda flowmeter in design (square-wave... dimension meter clock pulse (divided down) as the flow oscillator, rather than capturing the flow oscillator as was necessary with the Zepeda meter. This

  8. Simple, rapid and green one-step strategy to synthesis of graphene/carbon nanotubes/chitosan hybrid as solid-phase extraction for square-wave voltammetric detection of methyl parathion.

    PubMed

    Liu, Yan; Yang, Shanli; Niu, Weifen

    2013-08-01

    A simple, rapid, green, one-step electrodeposition strategy is proposed for the first time for the synthesis of a graphene/carbon nanotubes/chitosan (GR/CNTs/CS) hybrid. The one-step electrodeposition approach to the construction of the GR-based hybrid is environmentally benign: it does not involve chemical reduction of graphene oxide (GO) and therefore introduces no further contamination. The whole procedure is simple and requires only a few minutes. Combining the advantages of GR (large surface area, high conductivity and good adsorption ability), CNTs (high surface area, high enrichment capability and good adsorption ability) and CS (good adsorption and excellent film-forming ability), the obtained GR/CNTs/CS composite efficiently captures organophosphate pesticides (OPs) and can be used for solid-phase extraction (SPE). The GR/CNTs/CS sensor is used for enzymeless detection of OPs, with methyl parathion (MP) as a model analyte. A significant redox response of MP on the GR/CNTs/CS sensor is demonstrated. The linear range is wide, from 2.0 ng mL(-1) to 500 ng mL(-1), with a detection limit of 0.5 ng mL(-1). The detection limit of the proposed sensor is much lower than those of enzyme-based sensors and many other enzymeless sensors. Moreover, the proposed sensor exhibits high reproducibility, long-term storage stability and satisfactory anti-interference ability. This work provides a green, one-step route for the preparation of GR-based hybrids, and also offers a new promising protocol for OPs analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Role of initial correlation in coarsening of a ferromagnet

    NASA Astrophysics Data System (ADS)

    Chakraborty, Saikat; Das, Subir K.

    2015-06-01

    We study the dynamics of ordering in ferromagnets via Monte Carlo simulations of the Ising model, employing the Glauber spin-flip mechanism, in space dimensions d = 2 and 3, on square and simple cubic lattices. Results for the persistence probability and the domain growth are discussed for quenches to various temperatures (Tf) below the critical one (Tc), from different initial temperatures Ti ≥ Tc. In the long-time limit, for Ti > Tc, the persistence probability exhibits power-law decay with exponents θ ≃ 0.22 and ≃ 0.18 in d = 2 and 3, respectively. For finite Ti, the early-time behavior is a different power law whose lifetime diverges and whose exponent decreases as Ti → Tc. The two steps are connected via a power law in the domain length, and the crossover to the second step occurs when this characteristic length exceeds the equilibrium correlation length at T = Ti. Ti = Tc is expected to provide a new universality class, for which we obtain θ ≡ θc ≃ 0.035 in d = 2 and ≃ 0.105 in d = 3. The time dependence of the average domain size ℓ, however, is observed to be rather insensitive to the choice of Ti.
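
    A minimal Python sketch of the Glauber spin-flip dynamics described here (parameters illustrative; the actual study also equilibrates the lattice at Ti ≥ Tc before the quench):

      import numpy as np

      rng = np.random.default_rng(1)

      def glauber_sweep(spins, T, flipped):
          """One Monte Carlo sweep of Glauber single-spin-flip dynamics
          on a 2D Ising lattice (J = 1, k_B = 1), periodic boundaries."""
          L = spins.shape[0]
          for _ in range(L * L):
              i, j = rng.integers(0, L, size=2)
              nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              dE = 2.0 * spins[i, j] * nn
              if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):  # Glauber rate
                  spins[i, j] = -spins[i, j]
                  flipped[i, j] = True

      L, Tf = 64, 1.0                                     # quench to Tf < Tc (~2.269)
      spins = rng.choice(np.array([-1, 1]), size=(L, L))  # random start ~ Ti = infinity
      flipped = np.zeros((L, L), dtype=bool)
      persistence = []
      for t in range(200):
          glauber_sweep(spins, Tf, flipped)
          persistence.append(1.0 - flipped.mean())  # P(t): never-flipped fraction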

  10. An unconditionally stable Runge-Kutta method for unsteady flows

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Chima, Rodrick V.

    1988-01-01

    A quasi-three-dimensional analysis was developed for unsteady rotor-stator interaction in turbomachinery. The analysis solves the unsteady Euler or thin-layer Navier-Stokes equations in a body-fitted coordinate system. It accounts for the effects of rotation, radius change, and stream surface thickness. The Baldwin-Lomax eddy viscosity model is used for turbulent flows. The equations are integrated in time using a four-stage Runge-Kutta scheme with a constant time step. Implicit residual smoothing was employed to accelerate the solution of the time-accurate computations. The scheme is described and accuracy analyses are given. Results are shown for a supersonic through-flow fan designed for NASA Lewis. The rotor:stator blade ratio was taken as 1:1. Results are also shown for the first stage of the Space Shuttle Main Engine high pressure fuel turbopump. Here the blade ratio is 2:3. Implicit residual smoothing was used to increase the time step limit of the unsmoothed scheme by a factor of six with negligible differences in the unsteady results. It is felt that the implicitly smoothed Runge-Kutta scheme is easily competitive with implicit schemes for unsteady flows while retaining the simplicity of an explicit scheme.
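
    A sketch of the standard 1D implicit residual smoothing operator underlying this technique (coefficient and boundary treatment illustrative, not taken from the paper); applying it to the residual of each Runge-Kutta stage is what permits time steps above the explicit stability limit:

      import numpy as np
      from scipy.linalg import solve_banded

      def smooth_residual(R, eps=0.6):
          """Solve (I - eps * delta^2) Rbar = R, i.e. the tridiagonal system
          -eps*Rbar[i-1] + (1 + 2*eps)*Rbar[i] - eps*Rbar[i+1] = R[i]."""
          n = len(R)
          ab = np.zeros((3, n))
          ab[0, 1:] = -eps            # superdiagonal
          ab[1, :] = 1.0 + 2.0 * eps  # main diagonal
          ab[2, :-1] = -eps           # subdiagonal
          return solve_banded((1, 1), ab, R)

      Rbar = smooth_residual(np.random.rand(100))  # smoothed stage residual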

  11. Initial condition of stochastic self-assembly

    NASA Astrophysics Data System (ADS)

    Davis, Jason K.; Sindi, Suzanne S.

    2016-02-01

    The formation of a stable protein aggregate is regarded as the rate-limiting step in the establishment of prion diseases. In these systems, once aggregates reach a critical size the growth process accelerates, and thus the waiting time until the appearance of the first critically sized aggregate is a key determinant of disease onset. Beyond prion diseases, aggregation and nucleation are central steps in many physical, chemical, and biological processes. Previous studies have examined the first-arrival time at a critical nucleus size during homogeneous self-assembly under the assumption that at time t = 0 the system was in the all-monomer state. However, in order to compare to in vivo biological experiments, where protein constituents inherited by a newly born cell likely contain intermediate aggregates, other possibilities must be considered. We consider one such possibility by conditioning the unique ergodic size distribution on subcritical aggregate sizes; this least-informed distribution is then used as an initial condition. We make the claim that this initial condition carries fewer assumptions than an all-monomer one and verify that it can yield significantly different averaged waiting times relative to the all-monomer condition under various models of assembly.
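
    A toy Python caricature of the first-arrival question posed here, using a single-aggregate birth-death chain (rates and critical size are illustrative; the conditioned ergodic initial distribution is replaced by a fixed subcritical starting size for brevity):

      import numpy as np

      rng = np.random.default_rng(2)
      k_on, k_off, n_crit = 1.0, 1.2, 20  # attachment/detachment rates

      def first_passage_time(n0=1):
          """Gillespie simulation of the waiting time to first reach n_crit."""
          t, n = 0.0, n0
          while n < n_crit:
              total = k_on + (k_off if n > 1 else 0.0)
              t += rng.exponential(1.0 / total)
              n += 1 if rng.random() < k_on / total else -1
          return t

      t_all_monomer = np.mean([first_passage_time(1) for _ in range(200)])
      t_subcritical = np.mean([first_passage_time(10) for _ in range(200)])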

  12. Step-by-step management of refractory gastroesophageal reflux disease.

    PubMed

    Hershcovici, T; Fass, R

    2013-01-01

    Up to a third of patients who receive a proton pump inhibitor (PPI) once daily will demonstrate a lack of response, or only a partial response, to treatment. Various mechanisms contribute to PPI failure, including residual acid reflux, weakly acidic and weakly alkaline reflux, esophageal hypersensitivity, and psychological comorbidity, among others. Some of these underlying mechanisms may coincide in the same patient. Evaluation of compliance and of adequate PPI dosing time should be the first management step, before ordering invasive diagnostic tests. Doubling the PPI dose or switching to another PPI is the second step of management. Upper endoscopy and pH testing appear to have limited diagnostic value in patients who have failed PPI treatment. In contrast, esophageal impedance with pH testing (multichannel intraluminal impedance, MII-pH) on therapy appears to provide the most insightful information for the subsequent management of these patients (step 3). In step 4, treatment should be tailored to the specific underlying mechanism of the patient's PPI failure. For those who demonstrate weakly acidic or weakly alkaline reflux as the underlying cause of their residual symptoms, transient lower esophageal sphincter relaxation reducers, endoscopic treatment, antireflux surgery and pain modulators should be considered. In those with functional heartburn, pain modulators are the cornerstone of therapy. © 2012 Copyright the Authors. Journal compilation © 2012, Wiley Periodicals, Inc. and the International Society for Diseases of the Esophagus.

  13. Turnover of cyclic 2,3-diphosphoglycerate in Methanobacterium thermoautotrophicum. Phosphate flux in Pi- and H2-limited chemostat cultures.

    PubMed

    Krueger, R D; Campbell, J W; Fahrney, D E

    1986-09-15

    The archaebacterium Methanobacterium thermoautotrophicum was grown at 65 degrees C in H2- and Pi-limited chemostat cultures at dilution rates corresponding to 3- and 4-h doubling times, respectively. Under these conditions the steady state concentration of cyclic 2,3-diphosphoglycerate was 44 mM in the H2-limited cells and 13 mM in the cells grown under Pi limitation. Flux of Pi into the cyclic pyrophosphate pool was estimated by two 32P-labeling procedures: approach to isotopic equilibrium and replacement of prelabeled cyclic diphosphoglycerate with unlabeled compound. The results unequivocally demonstrate turnover of the phosphoryl groups; either both phosphoryl groups of the cyclic pyrophosphate leave together or the second leaves at a faster rate. The half-life of the rate-determining step for loss of the phosphoryl groups was approximately equal to the culture doubling time. The Pi flowing into the cyclic diphosphoglycerate pool accounted for 19% of the total Pi flux into Pi-limited cells and 43% of the total for H2-limited cells. The high phosphate flux through the large cyclic diphosphoglycerate pool suggests that this molecule plays an important role in the phosphorus metabolism of this methanogen.

  14. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to an improvement in the temporal resolution by a factor of 2-20. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  15. Seakeeping with the semi-Lagrangian particle finite element method

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio

    2017-07-01

    The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.

  16. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  17. Examples of Linking Codes Within GeoFramework

    NASA Astrophysics Data System (ADS)

    Tan, E.; Choi, E.; Thoutireddy, P.; Aivazis, M.; Lavier, L.; Quenette, S.; Gurnis, M.

    2003-12-01

    Geological processes usually encompass a broad spectrum of length and time scales. Traditionally, a modeling code (solver) is written to solve a problem with specific length and time scales in mind. The utility of the solver beyond its designated purpose is usually limited. Furthermore, two distinct solvers, even if each can solve complementary parts of a new problem, are difficult to link together to solve the problem as a whole. For example, a Lagrangian deformation model with a visco-elastoplastic crust is used to study deformation near a plate boundary. Ideally, the driving force of the deformation should be derived from the underlying mantle convection, which requires linking the Lagrangian deformation model with a Eulerian mantle convection model. As our understanding of geological processes evolves, the need for integrated modeling codes, which should reuse existing codes as much as possible, begins to surface. The GeoFramework project addresses this need by developing a suite of reusable and re-combinable tools for the Earth science community. GeoFramework is based on and extends Pyre, a Python-based modeling framework recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. Under the framework, a solver is aware of the existence of other solvers and they can interact with each other by exchanging information across adjacent boundaries. A solver needs to conform to a standard interface and provide its own implementation for exchanging boundary information. The framework also provides facilities to control the coordination between interacting solvers. We will show an example of linking two solvers within GeoFramework. CitcomS is a finite element code which solves for thermal convection within a 3D spherical shell. CitcomS can solve problems either within a full spherical (global) domain or within a restricted (regional) domain of a full sphere by using different meshers. We can embed a regional CitcomS solver within a global CitcomS solver. We note that linking instances of the same solver is conceptually equivalent to linking two different solvers. The global solver has a coarser grid and a longer stable time step than the regional solver. Therefore, a global-solver time step consists of several regional-solver time steps. The time-marching scheme is described below. First, the global solver is advanced one global-solver time step. Then, the regional solver is advanced for several regional-solver time steps until it catches up with the global solver. Within each regional-solver time step, the velocity field of the global solver is interpolated in time and imposed on the regional solver as boundary conditions. Finally, the temperature field of the regional solver is extrapolated in space and fed back to the global solver. The two solvers are linked and synchronized by this time-marching scheme. An effort to embed a visco-elastoplastic representation of the crust within viscous mantle flow is underway.
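
    An illustrative Python sketch of the time-marching coupling described above; the solver interface here is hypothetical and stands in for the actual Pyre/CitcomS API:

      class ToySolver:
          """Stand-in for a CitcomS-like solver (interface hypothetical)."""
          def __init__(self, dt):
              self.dt, self.t, self.v_bc = dt, 0.0, 0.0
          def advance(self, dt):
              self.t += dt                  # placeholder for real physics
          def boundary_velocity(self):
              return 1.0 + 0.1 * self.t     # placeholder boundary field
          def set_boundary_velocity(self, v):
              self.v_bc = v

      def couple_one_global_step(glob, regional, n_sub):
          """Advance the coarse global solver one step, then sub-cycle the
          regional solver with boundary velocities interpolated in time."""
          v_old = glob.boundary_velocity()
          glob.advance(glob.dt)
          v_new = glob.boundary_velocity()
          for k in range(1, n_sub + 1):
              w = k / n_sub                 # linear interpolation weight
              regional.set_boundary_velocity((1.0 - w) * v_old + w * v_new)
              regional.advance(glob.dt / n_sub)
          # temperature feedback from regional to global would happen here

      glob, regional = ToySolver(dt=1.0), ToySolver(dt=0.25)
      couple_one_global_step(glob, regional, n_sub=4)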

  18. High-throughput screening of chromatographic separations: IV. Ion-exchange.

    PubMed

    Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon

    2008-08-01

    Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.

  19. Mechanistic kinetic models of enzymatic cellulose hydrolysis-A review.

    PubMed

    Jeoh, Tina; Cardona, Maria J; Karuna, Nardrapee; Mudinoor, Akshata R; Nill, Jennifer

    2017-07-01

    Bioconversion of lignocellulose forms the basis for renewable, advanced biofuels, and bioproducts. Mechanisms of hydrolysis of cellulose by cellulases have been actively studied for nearly 70 years with significant gains in understanding of the cellulolytic enzymes. Yet, a full mechanistic understanding of the hydrolysis reaction has been elusive. We present a review to highlight new insights gained since the most recent comprehensive review of cellulose hydrolysis kinetic models by Bansal et al. (2009) Biotechnol Adv 27:833-848. Recent models have taken a two-pronged approach to tackle the challenge of modeling the complex heterogeneous reaction-an enzyme-centric modeling approach centered on the molecularity of the cellulase-cellulose interactions to examine rate limiting elementary steps and a substrate-centric modeling approach aimed at capturing the limiting property of the insoluble cellulose substrate. Collectively, modeling results suggest that at the molecular-scale, how rapidly cellulases can bind productively (complexation) and release from cellulose (decomplexation) is limiting, while the overall hydrolysis rate is largely insensitive to the catalytic rate constant. The surface area of the insoluble substrate and the degrees of polymerization of the cellulose molecules in the reaction both limit initial hydrolysis rates only. Neither enzyme-centric models nor substrate-centric models can consistently capture hydrolysis time course at extended reaction times. Thus, questions of the true reaction limiting factors at extended reaction times and the role of complexation and decomplexation in rate limitation remain unresolved. Biotechnol. Bioeng. 2017;114: 1369-1385. © 2017 Wiley Periodicals, Inc.

  20. Crystal structure of norcoclaurine-6-O-methyltransferase, a key rate-limiting step in the synthesis of benzylisoquinoline alkaloids.

    PubMed

    Robin, Adeline Y; Giustini, Cécile; Graindorge, Matthieu; Matringe, Michel; Dumas, Renaud

    2016-09-01

    Growing pharmaceutical interest in benzylisoquinoline alkaloids (BIA), coupled with their chemical complexity, makes metabolic engineering of microbes to create alternative platforms of production an increasingly attractive proposition. However, precise knowledge of rate-limiting enzymes and of negative feedback inhibition by end-products of BIA metabolism is of paramount importance for this emerging field of synthetic biology. In this work we report the structural characterization of (S)-norcoclaurine-6-O-methyltransferase (6OMT), a key rate-limiting enzyme involved in the synthesis of reticuline, the final intermediate shared between the different end-products of BIA metabolism, such as morphine, papaverine, berberine and sanguinarine. Four different crystal structures of the enzyme from Thalictrum flavum (Tf 6OMT) were solved: the apoenzyme, the complex with S-adenosyl-l-homocysteine (SAH), the complex with SAH and the substrate, and the complex with SAH and a feedback inhibitor, sanguinarine. The Tf 6OMT structural study provides a molecular understanding of its substrate specificity, active site structure and reaction mechanism. This study also clarifies the inhibition of Tf 6OMT by previously suggested feedback inhibitors. It reveals its high and time-dependent sensitivity toward sanguinarine. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  1. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, from whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  2. Terminal-Area Aircraft Intent Inference Approach Based on Online Trajectory Clustering.

    PubMed

    Yang, Yang; Zhang, Jun; Cai, Kai-quan

    2015-01-01

    Terminal-area aircraft intent inference (T-AII) is a prerequisite to detecting and avoiding potential aircraft conflicts in terminal airspace. T-AII challenges state-of-the-art AII approaches due to the uncertainties of the air traffic situation, in particular the undefined flight routes and frequent maneuvers. In this paper, a novel T-AII approach is introduced that addresses these limitations by solving the problem in two steps: intent modeling and intent inference. In the modeling step, an online trajectory clustering procedure is designed to recognize the routes available in real time, in place of the missing planned routes. In the inference step, we then present a probabilistic T-AII approach based on multiple flight attributes to improve the inference performance in maneuvering scenarios. The proposed approach is validated with 34 days of real radar trajectory and flight attribute data collected from the Chengdu terminal area in China. Preliminary results show the efficacy of the presented approach.

  3. Gold Nanorod-based Photo-PCR System for One-Step, Rapid Detection of Bacteria

    PubMed Central

    Kim, Jinjoo; Kim, Hansol; Park, Ji Ho; Jon, Sangyong

    2017-01-01

    The polymerase chain reaction (PCR) has been an essential tool for the diagnosis of infectious diseases, but conventional PCR still has some limitations with respect to applications in point-of-care (POC) diagnostic systems that require rapid detection and miniaturization. Here we report a light-based PCR method, termed photo-PCR, which enables rapid detection of bacteria in a single step. In the photo-PCR system, poly(ethylene glycol)-modified gold nanorods (PEG-GNRs), used as a heat generator, are added to the PCR mixture, which is then periodically irradiated with an 808-nm laser to create thermal cycling. Photo-PCR was able to significantly reduce the overall thermal cycling time by integrating bacterial cell lysis and DNA amplification into a single step. Furthermore, when combined with KAPA2G fast polymerase and a cooling system, the entire process of bacterial genomic DNA extraction and amplification was further shortened, highlighting the potential of photo-PCR for use in a portable POC diagnostic system. PMID:29071186

  4. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations. The present investigation of three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and, more generally, in problems with stiff nonlinear (homogeneous) source terms and discontinuities.
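
    For reference, the advective CFL constraint discussed here takes the familiar form sketched below (a minimal illustration; as the paper stresses, satisfying this limit alone does not guarantee a correct solution when stiff source terms are present):

      import numpy as np

      def cfl_time_step(wave_speeds, dx, cfl=0.9):
          """Largest explicit time step satisfying the advective CFL
          condition dt <= cfl * dx / max|characteristic speed|."""
          return cfl * dx / np.max(np.abs(wave_speeds))

      dt = cfl_time_step(np.array([1.0, 2.5, -3.0, 0.5]), dx=0.01)  # 0.003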

  5. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
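
    A minimal Python sketch of the growth model as described (t and delta illustrative), with a union-find pass to estimate the largest component:

      import random
      from collections import Counter

      random.seed(0)

      def grow_graph(t, delta):
          """At each time step add a vertex; then, with probability delta,
          join two uniformly chosen distinct vertices by an edge."""
          edges, n = [], 0
          for _ in range(t):
              n += 1
              if n >= 2 and random.random() < delta:
                  u, v = random.sample(range(n), 2)
                  edges.append((u, v))
          return n, edges

      def largest_component_fraction(n, edges):
          parent = list(range(n))
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path halving
                  x = parent[x]
              return x
          for u, v in edges:
              parent[find(u)] = find(v)
          return max(Counter(find(x) for x in range(n)).values()) / n

      n, edges = grow_graph(t=200_000, delta=0.25)   # above delta = 1/8
      print(largest_component_fraction(n, edges))    # giant component present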

  6. Development of a multiplex probe combination-based one-step real-time reverse transcription-PCR for NA subtype typing of avian influenza virus.

    PubMed

    Sun, Zhihao; Qin, Tao; Meng, Feifei; Chen, Sujuan; Peng, Daxin; Liu, Xiufan

    2017-10-18

    Nine influenza virus neuraminidase (NA) subtypes have been identified in poultry and wild birds. Few methods are available for rapid and simple NA subtyping. Here we developed a multiplex probe combination-based one-step real-time reverse transcriptase PCR (rRT-PCR) to detect the nine avian influenza virus NA subtypes. Nine primer-probe pairs were assigned to three groups based on the different fluorescent dyes of the probes (FAM, HEX, or Texas Red). Each probe detected only one NA subtype, without cross-reactivity. The detection limit was less than 100 EID50 or 100 copies of cDNA per reaction. Data obtained using this method with allantoic fluid samples isolated from live bird markets and from H9N2-infected chickens correlated well with data obtained using virus isolation and sequencing, and the new method was more sensitive. This new method provides a specific and sensitive alternative to conventional NA-subtyping methods.

  7. Compressible, multiphase semi-implicit method with moment of fluid interface representation

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Arienti, Marco

    2014-09-16

    A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials; this enables the large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large "impedance mismatch."

  8. Now hiring! Empirically testing a three-step intervention to increase faculty gender diversity in STEM

    USGS Publications Warehouse

    Smith, Jessi L.; Handley, Ian M.; Zale, Alexander V.; Rushing, Sara; Potvin, Martha A.

    2015-01-01

    Workforce homogeneity limits creativity, discovery, and job satisfaction; nonetheless, the vast majority of university faculty in science, technology, engineering, and mathematics (STEM) fields are men. We conducted a randomized and controlled three-step faculty search intervention based in self-determination theory aimed at increasing the number of women faculty in STEM at one US university where increasing diversity had historically proved elusive. Results show that the numbers of women candidates considered for and offered tenure-track positions were significantly higher in the intervention groups compared with those in controls. Searches in the intervention were 6.3 times more likely to make an offer to a woman candidate, and women who were made an offer were 5.8 times more likely to accept the offer from an intervention search. Although the focus was on increasing women faculty within STEM, the intervention can be adapted to other scientific and academic communities to advance diversity along any dimension.

  9. Now Hiring! Empirically Testing a Three-Step Intervention to Increase Faculty Gender Diversity in STEM

    PubMed Central

    Smith, Jessi L.; Handley, Ian M.; Zale, Alexander V.; Rushing, Sara; Potvin, Martha A.

    2015-01-01

    Workforce homogeneity limits creativity, discovery, and job satisfaction; nonetheless, the vast majority of university faculty in science, technology, engineering, and mathematics (STEM) fields are men. We conducted a randomized and controlled three-step faculty search intervention based in self-determination theory aimed at increasing the number of women faculty in STEM at one US university where increasing diversity had historically proved elusive. Results show that the numbers of women candidates considered for and offered tenure-track positions were significantly higher in the intervention groups compared with those in controls. Searches in the intervention were 6.3 times more likely to make an offer to a woman candidate, and women who were made an offer were 5.8 times more likely to accept the offer from an intervention search. Although the focus was on increasing women faculty within STEM, the intervention can be adapted to other scientific and academic communities to advance diversity along any dimension. PMID:26955075

  10. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also reduces to that for master equations as the discretization step becomes smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
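
    The two limiting descriptions referred to here have the standard forms (notation assumed, not taken from the paper): the master equation for discrete states and the Fokker-Planck equation for a continuous state variable,

      \frac{\mathrm{d}p_i}{\mathrm{d}t} = \sum_{j \neq i}\left[ W_{ji}\,p_j - W_{ij}\,p_i \right],
      \qquad
      \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[ A(x)\,p \right] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\left[ B(x)\,p \right].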

  11. Optimal execution in high-frequency trading with Bayesian learning

    NASA Astrophysics Data System (ADS)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model inactive trading with no limit orders in the market; the dealer simply holds dollars and shares of stocks until the terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. We also give numerical simulations of the value function and optimal quotes in the last part of the article.

  12. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  13. Management of primary and metastasized melanoma in Germany in the time period 1976-2005: an analysis of the Central Malignant Melanoma Registry of the German Dermatological Society.

    PubMed

    Schwager, Silke S; Leiter, Ulrike; Buettner, Petra G; Voit, Christiane; Marsch, Wolfgang; Gutzmer, Ralf; Näher, Helmut; Gollnick, Harald; Bröcker, Eva Bettina; Garbe, Claus

    2008-04-01

    This study analysed the changes of excision margins in correlation with tumour thickness as recorded over the last three decades in Germany. The study also evaluated surgical management in different geographical regions and treatment options for metastasized melanoma. A total of 42 625 patients with invasive primary cutaneous melanoma, recorded by the German Central Malignant Melanoma Registry between 1976 and 2005 were included. Multiple linear regression analysis was used to investigate time trends of excision margins adjusted for tumour thickness. Excision margins of 5.0 cm were widely used in the late 1970s but since then have been replaced by smaller margins that are dependent on tumour thickness. In the case of primary melanoma, one-step surgery dominated until 1985 and was mostly replaced by two-step excisions since the early 1990s. In eastern Germany, one-step management remained common until the late 1990s. During the last three decades loco-regional metastases were predominantly treated by surgery (up to 80%), whereas systemic therapy decreased. The primary treatment of distant metastases has consistently been systemic chemotherapy. This descriptive retrospective study revealed a significant decrease in excision margins to a maximum of 2.00 cm. A significant trend towards two-step excisions in primary cutaneous melanoma was observed throughout Germany. Management of metastasized melanoma showed a tendency towards surgical procedures in limited disease and an ongoing trend to systemic treatment in advanced disease.

  14. Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states

    NASA Astrophysics Data System (ADS)

    Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.

    1999-02-01

    The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
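
    A minimal Python sketch of the Green-Kubo estimate for self-diffusion from a stored velocity trajectory (the array shape is an assumption of this sketch, not of the paper):

      import numpy as np

      def green_kubo_diffusion(v, dt):
          """D = (1/3) * integral of <v(0) . v(t)> dt, estimated from a
          velocity trajectory v of shape (n_steps, n_atoms, 3)."""
          n_steps = v.shape[0]
          n_lag = n_steps // 2
          vacf = np.empty(n_lag)
          for lag in range(n_lag):
              # average the dot product over time origins and atoms
              vacf[lag] = np.mean(np.sum(v[:n_steps - lag] * v[lag:], axis=-1))
          return np.trapz(vacf, dx=dt) / 3.0

      v = np.random.normal(size=(2000, 64, 3))   # stand-in trajectory data
      D = green_kubo_diffusion(v, dt=2.0e-15)    # dt in seconds (2 fs)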

  15. Performance analysis and kernel size study of the Lynx real-time operating system

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

    This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page-swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size and total kernel size from each step mentioned above are listed and analyzed.

  16. Time-dependent rheological behavior of natural polysaccharide xanthan gum solutions in interrupted shear and step-incremental/reductional shear flow fields

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Seok; Song, Ki-Won

    2015-11-01

    The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with various combinations of different shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching the maximum stress at an initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time increases in both start-up shear flow fields. The shear stress decreases suddenly immediately after the imposed shear rate is stopped, and then decays slowly during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state with increasing shearing time in each step shear flow region. The time needed to reach the maximum stress value shortens as the step-increased shear rate increases. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth towards an equilibrium state with increasing shearing time in each step shear flow region. The time needed to reach the minimum stress value lengthens as the step-decreased shear rate decreases.

  17. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.

  18. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the amount of data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated its performance in terms of real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.

  19. Proceedings of the International Conference on Stiff Computation, April 12-14, 1982, Park City, Utah. Volume II.

    DTIC Science & Technology

    1982-01-01

    concepts. Fatunla (1981) proposed symmetric hybrid schemes well suited to periodic initial value problems. A generalization of this idea is proposed... one time step to another was kept below a prescribed value. Obviously this limits the truncation error only in some vague, general sense. The schemes... STIFFLY STABLE LINEAR MULTISTEP METHODS. S.O. FATUNLA, Trinity College, Dublin: P-STABLE HYBRID SCHEMES FOR INITIAL VALUE PROBLEMS, APRIL 13, 1982

  20. Development of 3D Oxide Fuel Mechanics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, B. W.; Casagranda, A.; Pitts, S. A.

    This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
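
    A hypothetical sketch of a material-response-based time step limit in the spirit described above (criterion and names are illustrative, not BISON's actual API): the step is cut back so that no material point accumulates more than a target inelastic strain increment per step.

      def limit_time_step(dt_proposed, creep_rate_max,
                          d_strain_target=1.0e-4, dt_min=1.0e-6):
          """Return a time step bounded by a material-response criterion:
          dt * max creep rate <= target strain increment per step."""
          if creep_rate_max <= 0.0:
              return dt_proposed
          dt_material = d_strain_target / creep_rate_max
          return max(dt_min, min(dt_proposed, dt_material))

      dt = limit_time_step(dt_proposed=1.0, creep_rate_max=5.0e-3)  # -> 0.02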

  1. Adding the third dimension on adaptive optics retina imager thanks to full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Blavier, Marie; Blanco, Leonardo; Glanc, Marie; Pouplard, Florence; Tick, Sarah; Maksimovic, Ivan; Mugnier, Laurent; Chènegros, Guillaume; Rousset, Gérard; Lacombe, François; Pâques, Michel; Le Gargasson, Jean-François; Sahel, José-Alain

    2009-02-01

    Retinal pathologies, like ARMD or glaucoma, need to be detected early, requiring imaging instruments with resolution at a cellular scale. However, in vivo retinal cell studies and early diagnoses are severely limited by the lack of resolution of eye-fundus images from classical ophthalmologic instruments. We built a 2D retina imager using Adaptive Optics to improve lateral resolution. This imager is currently used in a clinical environment. We are currently developing a time-domain full-field optical coherence tomograph. The first step was to design the image reconstruction algorithms; validation was performed on non-biological samples. Ex vivo retinas are currently being imaged. The final step will consist of coupling both setups to acquire high-resolution retina cross-sections.

  2. Rapid oxidation/stabilization technique for carbon foams, carbon fibers and C/C composites

    DOEpatents

    Tan, Seng; Tan, Cher-Dip

    2004-05-11

    An enhanced method for the post-processing, i.e. oxidation or stabilization, of carbon materials including, but not limited to, carbon foams, carbon fibers, dense carbon-carbon composites, and carbon/ceramic and carbon/metal composites, which requires very short and more effective processing steps. The introduction of an "oxygen spill-over catalyst" into the carbon precursor, by blending with the carbon starting material or by exposure of the carbon precursor to such a material, supplies the required oxygen at the atomic level and permits oxidation/stabilization of carbon materials in a fraction of the time and with a fraction of the energy normally required to accomplish such carbon processing steps. Carbon-based foams, solids, composites and fiber products made utilizing this method are also described.

  3. Simultaneous multielement atomic absorption spectrometry with graphite furnace atomization

    NASA Astrophysics Data System (ADS)

    Harnly, James M.; Miller-Ihli, Nancy J.; O'Haver, Thomas C.

    The extended analytical range capability of a simultaneous multielement atomic absorption continuum source spectrometer (SIMAAC) was tested for furnace atomization with respect to the signal measurement mode (peak height and area), the atomization mode (from the wall or from a platform), and the temperature program mode (stepped or ramped atomization). These parameters were evaluated with respect to the shapes of the analytical curves, the detection limits, carry-over contamination and accuracy. Peak area measurements gave more linear calibration curves. Methods for slowing the atomization step heating rate, i.e. the use of a ramped temperature program or a platform, produced similar calibration curves and longer linear ranges than atomization with a stepped temperature program. Peak height detection limits were best using stepped atomization from the wall. Peak area detection limits for all atomization modes were similar. Carry-over contamination was worse for peak area than peak height, worse for ramped atomization than stepped atomization, and worse for atomization from a platform than from the wall. Accurate determinations (100 ± 12%) for Ca, Cu, Fe, Mn, and Zn in National Bureau of Standards' Standard Reference Materials Bovine Liver 1577 and Rice Flour 1568 were obtained using peak area measurements with ramped atomization from the wall and stepped atomization from a platform. Only stepped atomization from a platform gave accurate recoveries for K. Accurate recoveries, 100 ± 10%, with precisions ranging from 1 to 36% (standard deviation), were obtained for the determination of Al, Co, Cr, Fe, Mn, Mo, Ni, Pb, V and Zn in Acidified Waters (NBS SRM 1643 and 1643a) using stepped atomization from a platform.

  4. Reliability of wireless monitoring using a wearable patch sensor in high-risk surgical patients at a step-down unit in the Netherlands: a clinical validation study.

    PubMed

    Breteler, Martine J M; Huizinga, Erik; van Loon, Kim; Leenen, Luke P H; Dohmen, Daan A J; Kalkman, Cor J; Blokhuis, Taco J

    2018-02-27

    Intermittent vital signs measurements are the current standard on hospital wards, typically recorded once every 8 hours. Early signs of deterioration may therefore be missed. Recent innovations have resulted in 'wearable' sensors, which may capture patient deterioration at an earlier stage. The objective of this study was to determine whether a wireless 'patch' sensor is able to reliably measure respiratory and heart rate continuously in high-risk surgical patients. The secondary objective was to explore the potential of the wireless sensor to serve as a safety monitor. In an observational method-comparison study, patients were measured with both the wireless sensor and the routine bedside standard for at least 24 hours. University teaching hospital, single centre. Twenty-five postoperative surgical patients admitted to a step-down unit. Primary outcome measures were limits of agreement and bias of heart rate and respiratory rate. Secondary outcome measures were sensor reliability, defined as time until first occurrence of data loss. In total, 1568 hours of vital signs data were analysed. Bias and 95% limits of agreement for heart rate were -1.1 (-8.8 to 6.5) beats per minute. For respiration rate, bias was -2.3 breaths per minute with wide limits of agreement (-15.8 to 11.2 breaths per minute). Median filtering over a 15 min period improved the limits of agreement of both respiration and heart rate. 63% of the measurements were performed without data loss greater than 2 min, and overall data loss was limited (6% of the time). The wireless sensor is capable of accurately measuring heart rate, but its accuracy for respiratory rate was outside acceptable limits. Remote monitoring has the potential to contribute to early recognition of physiological decline in high-risk patients. Future studies should focus on the ability to detect patient deterioration in low-care environments and at home after discharge. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
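
    For readers unfamiliar with the agreement statistics reported here, the sketch below (Python, not the study's code) shows how bias and 95% limits of agreement are computed from paired measurements; the heart-rate values are hypothetical, and a full analysis with repeated measures per patient would need a more elaborate model.

    import numpy as np

    def bland_altman(reference, sensor):
        # Paired differences between the two measurement methods
        diff = np.asarray(sensor, dtype=float) - np.asarray(reference, dtype=float)
        bias = diff.mean()              # systematic offset between methods
        sd = diff.std(ddof=1)           # spread of the differences
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Hypothetical paired heart-rate readings (beats per minute)
    hr_bedside = [72, 80, 65, 90, 77, 84]
    hr_patch = [70, 79, 66, 88, 75, 82]
    bias, lo, hi = bland_altman(hr_bedside, hr_patch)
    print(f"bias = {bias:.1f} bpm, 95% LoA = ({lo:.1f}, {hi:.1f}) bpm")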

  5. Antibodies and Selection of Monoclonal Antibodies.

    PubMed

    Hanack, Katja; Messerschmidt, Katrin; Listek, Martin

    Monoclonal antibodies are universal binding molecules with a high specificity for their target and are indispensable tools in research, diagnostics and therapy. The biotechnological generation of monoclonal antibodies was enabled by the hybridoma technology published in 1975 by Köhler and Milstein. Today monoclonal antibodies are used in a variety of applications such as flow cytometry, magnetic cell sorting, immunoassays and therapeutic approaches. The first step of the generation process is the immunization of the organism with an appropriate antigen. After a positive immune response the spleen cells are isolated and fused with myeloma cells in order to generate stable, long-living antibody-producing cell lines - hybridoma cells. In the subsequent identification step the culture supernatants of all hybridoma cells are screened weekly for the production of the antibody of interest. Hybridoma cells producing the antibody of interest are cloned by limiting dilution until a monoclonal hybridoma is found. This is a very time-consuming and laborious process, and therefore different selection strategies have been developed since 1975 in order to facilitate the generation of monoclonal antibodies. Apart from common automation of pipetting processes and ELISA testing, there are some promising approaches to select the right monoclonal antibody very early in the process to reduce the time and effort of generation. In this chapter different selection strategies for antibody-producing hybridoma cells are presented and analysed with regard to their benefits compared to the conventional limiting dilution technology.

  6. A Statistical Weather-Driven Streamflow Model: Enabling future flow predictions in data-scarce headwater streams

    NASA Astrophysics Data System (ADS)

    Rosner, A.; Letcher, B. H.; Vogel, R. M.

    2014-12-01

    Predicting streamflow in headwaters and over a broad spatial scale poses unique challenges due to limited data availability. Flow observation gages for headwater streams are less common than for larger rivers, and gages with record lengths of ten years or more are even scarcer. Thus, there is a great need for estimating streamflows in ungaged or sparsely gaged headwaters. Further, there is often insufficient basin information to develop rainfall-runoff models that could be used to predict future flows under various climate scenarios. Headwaters in the northeastern U.S. are of particular concern to aquatic biologists, as these streams serve as essential habitat for native coldwater fish. In order to understand fish response to past or future environmental drivers, estimates of seasonal streamflow are needed. While there is limited flow data, there is a wealth of data on historic weather conditions: observed data have been modeled to interpolate a spatially continuous historic weather dataset (Maurer et al., 2002). We present a statistical model developed by pairing streamflow observations with precipitation and temperature information for the same and preceding time-steps, and demonstrate its use to predict flow metrics at the seasonal time-step. While not a physical model, this statistical model represents the weather drivers. Since the model can predict flows not directly tied to reference gages, we can generate flow estimates for historic as well as potential future conditions.
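
    A minimal sketch of the kind of statistical model described, with seasonal flow regressed on same-season and previous-season weather; the data, coefficients and column names are hypothetical stand-ins, not the authors' model.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical seasonal records for one headwater site
    rng = np.random.default_rng(0)
    n = 40  # number of seasons
    df = pd.DataFrame({
        "precip": rng.gamma(4.0, 50.0, n),     # seasonal precipitation (mm)
        "temp": rng.normal(10.0, 5.0, n),      # seasonal mean temperature (C)
    })
    # Previous-season predictors carry the catchment's hydrological memory
    df["precip_lag1"] = df["precip"].shift(1)
    df["temp_lag1"] = df["temp"].shift(1)
    # Synthetic "observed" flow, for illustration only
    df["flow"] = 0.5 * df["precip"] + 0.2 * df["precip_lag1"].fillna(0) \
                 - 3.0 * df["temp"] + rng.normal(0, 10, n)

    train = df.dropna()
    X = train[["precip", "temp", "precip_lag1", "temp_lag1"]]
    model = LinearRegression().fit(X, train["flow"])
    print(model.coef_, model.intercept_)
    # The fitted model can now be driven by any weather scenario, past or future.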

  7. Concurrent validity of the Microsoft Kinect for Windows v2 for measuring spatiotemporal gait parameters.

    PubMed

    Dolatabadi, Elham; Taati, Babak; Mihailidis, Alex

    2016-09-01

    This paper presents a study to evaluate the concurrent validity of the Microsoft Kinect for Windows v2 for measuring the spatiotemporal parameters of gait. Twenty healthy adults performed several sequences of walks across a GAITRite mat under three different conditions: usual pace, fast pace, and dual task. Each walking sequence was simultaneously captured with two Kinect for Windows v2 sensors and the GAITRite system. An automated algorithm was employed to extract various spatiotemporal features including stance time, step length, step time and gait velocity from the recorded Kinect v2 sequences. Accuracy in terms of reliability, concurrent validity and limits of agreement was examined for each gait feature under different walking conditions. The 95% Bland-Altman limits of agreement were narrow enough for the Kinect v2 to be a valid tool for measuring all reported spatiotemporal parameters of gait in all three conditions. An excellent intraclass correlation coefficient (ICC(2,1)) ranging from 0.9 to 0.98 was observed for all gait measures across different walking conditions. The inter-trial reliability of all gait parameters was shown to be strong for all walking types (ICC(3,1) > 0.73). The results of this study suggest that the Kinect for Windows v2 has the capacity to measure selected spatiotemporal gait parameters for healthy adults. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
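
    For reference, a self-contained sketch of the ICC(2,1) statistic used above (two-way random effects, absolute agreement, single measurement), computed from classical ANOVA mean squares; the data matrix is hypothetical.

    import numpy as np

    def icc_2_1(x):
        # ICC(2,1): rows are subjects/trials, columns are the devices compared
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        rows = x.mean(axis=1)
        cols = x.mean(axis=0)
        msr = k * ((rows - grand) ** 2).sum() / (n - 1)      # between-subjects
        msc = n * ((cols - grand) ** 2).sum() / (k - 1)      # between-devices
        sse = ((x - rows[:, None] - cols[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))                      # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical step lengths (cm): column 0 = GAITRite, column 1 = Kinect v2
    steps = np.array([[62.0, 61.5], [58.3, 59.0], [65.1, 64.2],
                      [70.4, 69.8], [55.9, 56.5]])
    print(f"ICC(2,1) = {icc_2_1(steps):.3f}")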

  8. A robot and control algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait training following neurologic injury.

    PubMed

    Aoyagi, Daisuke; Ichinose, Wade E; Harkema, Susan J; Reinkensmeyer, David J; Bobrow, James E

    2007-09-01

    Locomotor training using body weight support on a treadmill and manual assistance is a promising rehabilitation technique following neurological injuries, such as spinal cord injury (SCI) and stroke. Previous robots that automate this technique impose constraints on naturalistic walking due to their kinematic structure, and are typically operated in a stiff mode, limiting the ability of the patient or human trainer to influence the stepping pattern. We developed a pneumatic gait training robot that allows for a full range of natural motion of the legs and pelvis during treadmill walking, and provides compliant assistance. However, we observed an unexpected consequence of the device's compliance: unimpaired and SCI individuals invariably began walking out-of-phase with the device. Thus, the robot perturbed rather than assisted stepping. To address this problem, we developed a novel algorithm that synchronizes the device in real-time to the actual motion of the individual by sensing the state error and adjusting the replay timing to reduce this error. This paper describes data from experiments with individuals with SCI that demonstrate the effectiveness of the synchronization algorithm, and the potential of the device for relieving the trainers of strenuous work while maintaining naturalistic stepping.
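
    The synchronization algorithm is described only at a high level in the abstract; the toy sketch below illustrates the general idea, estimating the subject's gait phase from the joint state and advancing or retarding the replay rate in proportion to the phase error. The gains, the sinusoidal gait model and the signals are assumptions, not the authors' implementation.

    import numpy as np

    def wrap(p):
        # Wrap a phase difference into [-0.5, 0.5) cycles
        return (p + 0.5) % 1.0 - 0.5

    def synchronized_replay(angle_of, velocity_of, amp=20.0, f_nom=1.0,
                            k_sync=0.5, dt=0.01, t_end=10.0):
        # Phase-locked trajectory replay: estimate the subject's gait phase from
        # joint angle/velocity, then speed up or slow down the device's replay so
        # the state error shrinks instead of perturbing the subject.
        phase = 0.0
        for t in np.arange(0.0, t_end, dt):
            subj_phase = (np.arctan2(angle_of(t) / amp,
                                     velocity_of(t) / (amp * 2 * np.pi * f_nom))
                          / (2 * np.pi)) % 1.0
            err = wrap(subj_phase - phase)          # positive: subject is ahead
            phase = (phase + f_nom * (1.0 + k_sync * err) * dt) % 1.0
        return phase

    # Hypothetical subject walking at 0.9 Hz while the device nominally replays 1 Hz
    angle = lambda t: 20.0 * np.sin(2 * np.pi * 0.9 * t)
    velocity = lambda t: 20.0 * 2 * np.pi * 0.9 * np.cos(2 * np.pi * 0.9 * t)
    print("replay phase after 10 s:", round(synchronized_replay(angle, velocity), 3))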

  9. Non-Invasive Transcranial Brain Therapy Guided by CT Scans: an In Vivo Monkey Study

    NASA Astrophysics Data System (ADS)

    Marquet, F.; Pernot, M.; Aubry, J.-F.; Montaldo, G.; Tanter, M.; Boch, A.-L.; Kujas, M.; Seilhean, D.; Fink, M.

    2007-05-01

    Brain therapy using focused ultrasound remains very limited due to the strong aberrations induced by the skull. A minimally invasive technique using time reversal was recently validated in vivo on 20 sheep, but it requires a hydrophone at the focal point for the first step of the time-reversal procedure. A completely noninvasive therapy requires a reliable model of the acoustic properties of the skull in order to simulate this first step. 3-D simulations based on high-resolution CT images of a skull have been successfully performed with a finite-difference code developed in our laboratory. From the skull porosity, directly extracted from the CT images, we reconstructed acoustic speed, density and absorption maps and performed the computation. Computed wavefronts are in good agreement with experimental wavefronts acquired through the same part of the skull, and this technique was validated in vitro in the laboratory. A stereotactic frame has been designed and built in order to perform noninvasive transcranial focusing in vivo. Here we describe all the steps of our new protocol, from the CT scans to the therapy treatment, and present the first in vivo results on a monkey. This protocol is based on protocols already existing in radiotherapy.

  10. Magnetization-induced dynamics of a Josephson junction coupled to a nanomagnet

    NASA Astrophysics Data System (ADS)

    Ghosh, Roopayan; Maiti, Moitri; Shukrinov, Yury M.; Sengupta, K.

    2017-11-01

    We study the superconducting current of a Josephson junction (JJ) coupled to an external nanomagnet driven by a time-dependent magnetic field both without and in the presence of an external ac drive. We provide an analytic, albeit perturbative, solution for the Landau-Lifshitz (LL) equations governing the coupled JJ-nanomagnet system in the presence of a magnetic field with arbitrary time dependence oriented along the easy axis of the nanomagnet's magnetization and in the limit of weak dimensionless coupling ɛ0 between the JJ and the nanomagnet. We show the existence of Shapiro-type steps in the I -V characteristics of the JJ subjected to a voltage bias for a constant or periodically varying magnetic field and explore the effect of rotation of the magnetic field and the presence of an external ac drive on these steps. We support our analytic results with exact numerical solution of the LL equations. We also extend our results to dissipative nanomagnets by providing a perturbative solution to the Landau-Lifshitz-Gilbert (LLG) equations for weak dissipation. We study the fate of magnetization-induced Shapiro steps in the presence of dissipation both from our analytical results and via numerical solution of the coupled LLG equations. We discuss experiments which can test our theory.
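
    As a concrete illustration of the numerics that the analytic results are checked against, here is a minimal macrospin integrator for the explicit Landau-Lifshitz form of the LLG equation (RK4, easy-axis anisotropy plus a periodic drive). The parameters are arbitrary, and the Josephson coupling of the actual paper is omitted.

    import numpy as np

    GAMMA, ALPHA = 1.0, 0.05   # gyromagnetic ratio and Gilbert damping (dimensionless)

    def h_eff(m, t):
        # Effective field: easy-axis anisotropy along z plus a weak periodic drive
        return np.array([0.0, 0.0, 2.0 * m[2] + 0.5 * np.sin(0.3 * t)])

    def llg_rhs(m, t):
        # Explicit Landau-Lifshitz form of the LLG equation:
        # dm/dt = -gamma/(1+alpha^2) [ m x H + alpha m x (m x H) ]
        h = h_eff(m, t)
        return -GAMMA / (1 + ALPHA**2) * (np.cross(m, h)
                                          + ALPHA * np.cross(m, np.cross(m, h)))

    def rk4_step(m, t, dt):
        k1 = llg_rhs(m, t)
        k2 = llg_rhs(m + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = llg_rhs(m + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = llg_rhs(m + dt * k3, t + dt)
        m = m + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        return m / np.linalg.norm(m)      # keep |m| = 1

    m, t, dt = np.array([1.0, 0.0, 0.1]), 0.0, 0.01
    m /= np.linalg.norm(m)
    for _ in range(20000):
        m = rk4_step(m, t, dt)
        t += dt
    print("magnetization after the run:", np.round(m, 3))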

  11. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    NASA Astrophysics Data System (ADS)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. This requires prevention of combustion instabilities, which, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The method permits not only the frequencies of unstable acoustic oscillations to be computed, but also their amplitudes. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
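
    The third step can be illustrated with a toy single-mode oscillator driven by time-lagged heat release whose gain saturates at large amplitude, which is the mechanism that selects a finite limit-cycle amplitude in such low-order models. All parameters and the saturation law below are assumed for illustration and are unrelated to the paper's network model.

    import numpy as np

    omega = 2 * np.pi * 150.0     # acoustic mode angular frequency (rad/s)
    zeta = 0.003                  # acoustic damping ratio
    beta = 50.0                   # flame gain
    tau = 5.0e-3                  # flame time lag; omega*tau ~ 3*pi/2 destabilizes
    dt = 1.0e-5
    n_delay = int(round(tau / dt))

    def heat_release(u):
        # Linear for small amplitudes, saturating for large ones:
        # the saturation is what fixes a finite limit-cycle amplitude.
        return u / (1.0 + u * u)

    p, v = 1e-3, 0.0                     # small initial pressure perturbation
    history = [0.0] * n_delay            # circular buffer holding p(t - tau)
    peaks = []
    for i in range(40000):
        p_lag = history[i % n_delay]
        a = -2 * zeta * omega * v - omega**2 * p + beta * omega * heat_release(p_lag)
        v_new = v + a * dt
        p_new = p + v_new * dt           # semi-implicit Euler
        if v > 0.0 >= v_new:             # velocity zero-crossing = pressure peak
            peaks.append(p_new)
        history[i % n_delay] = p_new
        p, v = p_new, v_new
    print("limit-cycle amplitude ~", round(float(np.mean(peaks[-20:])), 2))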

  12. Pathway and rate-limiting step of glyphosate degradation by Aspergillus oryzae A-F02.

    PubMed

    Fu, Gui-Ming; Chen, Yan; Li, Ru-Yi; Yuan, Xiao-Qiang; Liu, Cheng-Mei; Li, Bin; Wan, Yin

    2017-09-14

    Aspergillus oryzae A-F02, a glyphosate-degrading fungus, was isolated from an aeration tank in a pesticide factory. The pathway and rate-limiting step of glyphosate (GP) degradation were investigated through metabolite analysis. GP, aminomethylphosphonic acid (AMPA), and methylamine were detected in the fermentation liquid of A. oryzae A-F02, whereas sarcosine and glycine were not. The pathway of GP degradation in A. oryzae A-F02 was revealed: GP was first degraded into AMPA, which was then degraded into methylamine. Finally, methylamine was further degraded into other products. Investigating the effects of the exogenous addition of substrates and metabolites showed that the degradation of GP to AMPA is the rate-limiting step of GP degradation by A. oryzae A-F02. In addition, the accumulation of AMPA and methylamine did not cause feedback inhibition in GP degradation. Results showed that degrading GP to AMPA was a crucial step in the degradation of GP, which determines the degradation rate of GP by A. oryzae A-F02.

  13. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model, and the time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can not only resolve the steady-state solution but also capture the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long-time simulations.
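
    The abstract does not spell out the adaptivity formula; a common choice in this line of work, and the one assumed in the sketch below, shrinks the step when the energy is changing rapidly: dt = max(dt_min, dt_max / sqrt(1 + alpha |E'(t)|^2)).

    import numpy as np

    def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
        # Small steps while the energy changes fast; large steps near steady state
        return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt ** 2))

    # Toy usage with a stand-in energy that decays quickly and then flattens out
    E = lambda t: np.exp(-5.0 * t)
    t, dt = 0.0, 1e-3
    energy_prev = E(t - dt)
    while t < 10.0:
        energy = E(t)
        dE_dt = (energy - energy_prev) / dt     # finite-difference estimate of E'(t)
        dt = adaptive_dt(dE_dt)
        energy_prev = energy
        t += dt
    print("reached t =", round(t, 2))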

  14. Oxidation-driven surface dynamics on NiAl(100)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Hailang; Chen, Xidong; Li, Liang

    Atomic steps, a defect common to all crystal surfaces, can play an important role in many physical and chemical processes. However, attempts to predict surface dynamics under nonequilibrium conditions are usually frustrated by poor knowledge of the atomic processes of surface motion arising from mass transport from/to surface steps. Using low-energy electron microscopy that spatially and temporally resolves oxide film growth during the oxidation of NiAl(100), we demonstrate that surface steps are impermeable to oxide film growth. The advancement of the oxide occurs exclusively on the same terrace and requires the coordinated migration of surface steps. The resulting piling up of surface steps ahead of the oxide growth front progressively impedes the oxide growth. This process is reversed during oxide decomposition. The migration of the substrate steps is found to be a surface-step version of the well-known Hele-Shaw problem, governed by detachment (attachment) of Al atoms at step edges induced by the oxide growth (decomposition). By comparing with the oxidation of NiAl(110), which exhibits unimpeded oxide film growth over substrate steps, we suggest that whenever steps are the source of atoms used for oxide growth they limit the oxidation process; when atoms are supplied from the bulk, the oxidation rate is not limited by the motion of surface steps.

  15. Oxidation-driven surface dynamics on NiAl(100)

    DOE PAGES

    Qin, Hailang; Chen, Xidong; Li, Liang; ...

    2014-12-29

    Atomic steps, a defect common to all crystal surfaces, can play an important role in many physical and chemical processes. However, attempts to predict surface dynamics under nonequilibrium conditions are usually frustrated by poor knowledge of the atomic processes of surface motion arising from mass transport from/to surface steps. Using low-energy electron microscopy that spatially and temporally resolves oxide film growth during the oxidation of NiAl(100), we demonstrate that surface steps are impermeable to oxide film growth. The advancement of the oxide occurs exclusively on the same terrace and requires the coordinated migration of surface steps. The resulting piling up of surface steps ahead of the oxide growth front progressively impedes the oxide growth. This process is reversed during oxide decomposition. The migration of the substrate steps is found to be a surface-step version of the well-known Hele-Shaw problem, governed by detachment (attachment) of Al atoms at step edges induced by the oxide growth (decomposition). By comparing with the oxidation of NiAl(110), which exhibits unimpeded oxide film growth over substrate steps, we suggest that whenever steps are the source of atoms used for oxide growth they limit the oxidation process; when atoms are supplied from the bulk, the oxidation rate is not limited by the motion of surface steps.

  16. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascades from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection-permitting simulations at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  17. Application of global kinetic models to HMX beta-delta transition and cookoff processes.

    PubMed

    Wemhoff, Aaron P; Burnham, Alan K; Nichols, Albert L

    2007-03-08

    The reduction of the number of reactions in kinetic models for both the HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia instrumented thermal ignition (SITI) and scaled thermal explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on one-dimensional time to explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multistep Arrhenius model and can contain up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from differential scanning calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% of chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multistep Arrhenius approach, and up to 11% using a Prout-Tompkins cookoff model.
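
    For orientation, a Prout-Tompkins (autocatalytic) rate law with Arrhenius temperature dependence can be integrated in a few lines; the sketch below uses illustrative parameters, not the calibrated HMX values, but reproduces the qualitative behavior that hotter soaks react far sooner.

    import numpy as np

    R = 8.314  # J/(mol K)

    def pt_rate(alpha, T, A=1e12, Ea=1.2e5, n=1.0, m=1.0, q=1e-4):
        # Extended Prout-Tompkins rate law with Arrhenius k(T); the small seed q
        # lets the autocatalytic reaction start from alpha = 0
        k = A * np.exp(-Ea / (R * T))
        return k * (1.0 - alpha) ** n * (alpha + q) ** m

    def time_to_fraction(T, target=0.5, dt=1e-3):
        # Integrate at constant temperature until `target` fraction has reacted
        alpha, t = 0.0, 0.0
        while alpha < target:
            alpha += pt_rate(alpha, T) * dt
            t += dt
        return t

    for T in (500.0, 520.0, 540.0):   # K
        print(f"T = {T:.0f} K -> t(50% reacted) ~ {time_to_fraction(T):.1f} s")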

  18. Design and Processing of a Novel Chaos-Based Stepped Frequency Synthesized Wideband Radar Signal.

    PubMed

    Zeng, Tao; Chang, Shaoqiang; Fan, Huayu; Liu, Quanhua

    2018-03-26

    The linear stepped frequency and linear frequency shift keying (FSK) signal has been widely used in radar systems. However, such linear modulation signals suffer from the range-Doppler coupling that degrades radar multi-target resolution. Moreover, the fixed frequency-hopping or frequency-coded sequence can be easily predicted by the interception receiver in the electronic countermeasures (ECM) environments, which limits radar anti-jamming performance. In addition, the single FSK modulation reduces the radar low probability of intercept (LPI) performance, for it cannot achieve a large time-bandwidth product. To solve such problems, we propose a novel chaos-based stepped frequency (CSF) synthesized wideband signal in this paper. The signal introduces chaotic frequency hopping between the coherent stepped frequency pulses, and adopts a chaotic frequency shift keying (CFSK) and phase shift keying (PSK) composited coded modulation in a subpulse, called CSF-CFSK/PSK. Correspondingly, the processing method for the signal has been proposed. According to our theoretical analyses and the simulations, the proposed signal and processing method achieve better multi-target resolution and LPI performance. Furthermore, flexible modulation is able to increase the robustness against identification of the interception receiver and improve the anti-jamming performance of the radar.
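
    The abstract does not give the particular chaotic map; the sketch below shows the generic recipe with a logistic map, where ranking the chaotic samples yields a key-dependent permutation of the frequency steps, so the full band is covered in a hard-to-predict order. All synthesis parameters are assumed.

    import numpy as np

    def chaotic_permutation(n, x0=0.37, r=3.99):
        # Iterate a logistic map (r = 3.99 is deep in the chaotic regime) and rank
        # the samples: the ranking is a seed-dependent permutation of 0..n-1, so
        # every frequency step is visited exactly once but in an irregular order.
        x, samples = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            samples.append(x)
        return np.argsort(samples)

    # Assumed synthesis parameters, for illustration
    f0 = 9.0e9      # start frequency (Hz)
    df = 1.0e6      # frequency step (Hz)
    n = 64          # number of subpulses
    freqs = [f0 + int(k) * df for k in chaotic_permutation(n)]
    print(freqs[:8])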

  19. Detonation Diffraction in a Multi-Step Channel

    DTIC Science & Technology

    2010-12-01

    openings. This allowed the detonation wave diffraction transmission limits to be determined for hydrogen/air mixtures and to better understand...imaging systems to provide shock wave detail and velocity information. The images were observed through a newly designed explosive proof optical section...stepped openings.

  20. Prospective Optimization with Limited Resources

    PubMed Central

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-01-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309
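
    The ideal-actor comparison can be made concrete with a toy depth-limited brute-force planner on a triangular grid, parameterized by the same two quantities the study estimates for humans: the depth of computation and the recalculation period. The grid geometry and rewards below are hypothetical.

    import random

    random.seed(1)
    ROWS = 12
    # Triangular grid of disk rewards: row r holds r + 1 disks (hypothetical values)
    grid = [[random.randint(1, 9) for _ in range(r + 1)] for r in range(ROWS)]

    def plan(r, c, depth):
        # Best (value, moves) over all paths from (r, c), looking `depth` rows ahead
        if depth == 0 or r == ROWS - 1:
            return 0.0, []
        best = None
        for move, nc in ((0, c), (1, c + 1)):        # step up-left or up-right
            val, moves = plan(r + 1, nc, depth - 1)
            val += grid[r + 1][nc]
            if best is None or val > best[0]:
                best = (val, [move] + moves)
        return best

    depth, recalc = 4, 2          # depth of computation and recalculation period
    r = c = 0
    total = grid[0][0]
    while r < ROWS - 1:
        _, moves = plan(r, c, depth)
        for move in moves[:recalc]:          # commit `recalc` moves, then replan
            c += move
            r += 1
            total += grid[r][c]
            if r == ROWS - 1:
                break
    print("collected reward:", total)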

  1. ATP-mediated intrinsic peroxidase-like activity of Fe3O4-based nanozyme: One step detection of blood glucose at physiological pH.

    PubMed

    Vallabani, N V Srikanth; Karakoti, Ajay S; Singh, Sanjay

    2017-05-01

    Fe3O4 nanoparticles (Fe3O4 NPs), which demonstrate peroxidase-like activity, have garnered attention for the detection of several biomolecules and have therefore emerged as excellent nano-biosensing agents. The intrinsic peroxidase-like activity of Fe3O4 NPs at acidic pH is the fundamental action driving the oxidation of substrates like TMB, resulting in the formation of a colorimetric product used in the detection of biomolecules. Hence, the detection sensitivity essentially depends on the oxidation ability of Fe3O4 NPs in the presence of H2O2. However, limited sensitivity and the acidic pH constraint have been identified as the major drawbacks in the detection of biomolecules at physiological pH. Herein, we report overcoming the fundamental limitation of acidic pH by tuning the peroxidase-like activity of Fe3O4 NPs at physiological pH using ATP. In the presence of ATP, Fe3O4 NPs exhibited enhanced peroxidase-like activity over a wide range of pH and temperatures. Mechanistically, it was found that the ability of ATP to participate in a single-electron-transfer reaction, through complexation with Fe3O4 NPs, results in the generation of hydroxyl radicals, which are responsible for the enhanced peroxidase activity at physiological pH. We utilized this ATP-mediated enhanced peroxidase-like activity of Fe3O4 NPs for single-step detection of glucose with a colorimetric detection limit of 50 μM. Further, we extended this single-step detection method to monitor the glucose level in human blood serum and detected it in a time span of <5 min at pH 7.4. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Prospective Optimization with Limited Resources.

    PubMed

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-09-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation.

  3. Design, synthesis and in vitro kinetic study of tranexamic acid prodrugs for the treatment of bleeding conditions

    NASA Astrophysics Data System (ADS)

    Karaman, Rafik; Ghareeb, Hiba; Dajani, Khuloud Kamal; Scrano, Laura; Hallak, Hussein; Abu-Lafi, Saleh; Mecca, Gennaro; Bufo, Sabino A.

    2013-07-01

    Based on density functional theory (DFT) calculations of the acid-catalyzed hydrolysis of several maleamic acid amide derivatives, four tranexamic acid prodrugs were designed. The DFT results on the acid-catalyzed hydrolysis revealed that the rate-limiting step of the reaction is determined by the nature of the amine leaving group. When the amine leaving group was a primary amine or a tranexamic acid moiety, the collapse of the tetrahedral intermediate was the rate-limiting step, whereas in the cases in which the amine leaving group was aciclovir or cefuroxime, the rate-limiting step was the formation of the tetrahedral intermediate. The linear correlation between the calculated DFT and experimental rates for N-methylmaleamic acids 1-7 provided a credible basis for designing tranexamic acid prodrugs that have the potential to release the parent drug in a sustained-release fashion. For example, based on the calculated B3LYP/6-31G(d,p) rates, the predicted t1/2 (the time needed for 50% of the prodrug to be converted into drug) values for tranexamic acid prodrugs ProD 1-ProD 4 at pH 2 were 556 h [50.5 h as calculated by B3LYP/6-311+G(d,p) and 6.2 h as calculated by GGA:MPW1K], 253 h, 70 s and 1.7 h, respectively. A kinetic study of the interconversion of the newly synthesized tranexamic acid prodrug ProD 1 revealed that the t1/2 for its conversion to the parent drug was largely affected by the pH of the medium. The experimental t1/2 values in 1 N HCl, buffer pH 2 and buffer pH 5 were 54 min, 23.9 h and 270 h, respectively.
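
    The quoted t1/2 values follow from (pseudo-)first-order kinetics; as a reminder of the standard relation being used (not specific to this paper):

    % Pseudo-first-order conversion of prodrug P into the parent drug:
    %   d[P]/dt = -k_obs [P]   =>   [P](t) = [P]_0 exp(-k_obs t)
    % Setting [P](t_{1/2}) = [P]_0 / 2 gives
    \[
      t_{1/2} = \frac{\ln 2}{k_{\mathrm{obs}}}
    \]
    % e.g. the reported t_{1/2} = 270 h at pH 5 corresponds to
    % k_obs = 0.693 / 270 h ~ 2.6e-3 h^{-1}.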

  4. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  5. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Kamesh

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Achieving performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms that exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps; as a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application, independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step, and it is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine-learning-based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though it should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution, improving the performance of the application at a future time step based on observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver good aggregate performance. We used these optimization techniques and anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach of using an anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.

  6. Engines with ideal efficiency and nonzero power for sublinear transport laws

    NASA Astrophysics Data System (ADS)

    Koning, Jesper; Indekeu, Joseph O.

    2016-11-01

    It is known that an engine with ideal efficiency (η = 1 for a chemical engine and e = eCarnot for a thermal one) has zero power because a reversible cycle takes an infinite time. However, at least from a theoretical point of view, it is possible to conceive (irreversible) engines with nonzero power that can reach ideal efficiency. Here this is achieved by replacing the usual linear transport law by a sublinear one and taking the step-function limit for the particle current (chemical engine) or heat current (thermal engine) versus the applied force. It is shown that in taking this limit exact thermodynamic inequalities relating the currents to the entropy production are not violated.
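
    A worked form of the argument, with notation assumed for illustration:

    % Sublinear current-force law:
    \[
      J(F) \;=\; J_0 \,\operatorname{sgn}(F)\, \left| \frac{F}{F_0} \right|^{a},
      \qquad 0 < a \le 1 .
    \]
    % In the step-function limit a -> 0+ the current saturates, J -> J_0 sgn(F):
    % a finite current (hence nonzero power) survives as the applied force F -> 0,
    % while the entropy production rate sigma = J F -> 0, so ideal efficiency is
    % approached without violating sigma >= 0.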

  7. Robust levitation control for maglev systems with guaranteed bounded airgap.

    PubMed

    Xu, Jinquan; Chen, Ye-Hwa; Guo, Hong

    2015-11-01

    The robust control design problem for the levitation control of a nonlinear uncertain maglev system is considered. The uncertainty is (possibly) fast time-varying. The system has magnitude limitation on the airgap between the suspended chassis and the guideway in order to prevent undesirable contact. Furthermore, the (global) matching condition is not satisfied. After a three-step state transformation, a robust control scheme for the maglev vehicle is proposed, which is able to guarantee the uniform boundedness and uniform ultimate boundedness of the system, regardless of the uncertainty. The magnitude limitation of the airgap is guaranteed, regardless of the uncertainty. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Voluntary stepping behavior under single- and dual-task conditions in chronic stroke survivors: A comparison between the involved and uninvolved legs.

    PubMed

    Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit

    2010-12-01

    If balance is lost, quick step execution can prevent falls. Research has shown that the speed of voluntary stepping can predict future falls in older adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time parameters of the involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured, including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than the uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. Under dual-task compared with single-task conditions, the stepping time increased significantly due to an increase in the duration of step initiation. In general, the force-time parameters were significantly different in both legs of stroke survivors compared to healthy controls, with no significant effect of dual- versus single-task conditions in either group. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Reactive nanolaminate pulsed-laser ignition mechanism: Modeling and experimental evidence of diffusion limited reactions

    DOE PAGES

    Yarrington, C. D.; Abere, M. J.; Adams, D. P.; ...

    2017-04-03

    We irradiated Al/Pt nanolaminates having a bilayer thickness (tb, the width of an Al/Pt pair-layer) of 164 nm with single laser pulses with durations of 10 ms and 0.5 ms at 189 W/cm2 and 1189 W/cm2, respectively. The time to ignition was measured for each pulse, and shorter ignition times were observed for the higher power/shorter pulse width. While the shorter pulse shows uniform brightness, videographic images of the irradiated area shortly after ignition show a non-uniform radial brightness for the longer pulse. A diffusion-limited single-step reaction mechanism was implemented in a finite element package to model the progress from reactants to products at both pulse widths. The model captures well both the observed ignition delay and qualitative observations regarding the non-uniform radial temperature.
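
    A diffusion-limited single-step mechanism of this kind can be caricatured by a parabolic growth law, Arrhenius in temperature but throttled by the product layer already formed. The sketch below uses invented parameters solely to show why a faster temperature ramp ignites sooner; it is not the paper's finite element model.

    import numpy as np

    R = 8.314  # J/(mol K)

    def time_to_ignite(ramp, A=1e-2, Ea=6.0e4, x0=1e-9, x_ign=1e-4,
                       dt=1e-6, t_end=5e-3):
        # Grow the reacted-layer thickness x with dx/dt = k(T)/x: Arrhenius
        # kinetics throttled by the thickening product layer. `ramp` is the
        # linear heating rate (K/s); return the time when x reaches x_ign.
        x, t = x0, 0.0
        while x < x_ign and t < t_end:
            T = 300.0 + ramp * t
            x += (A * np.exp(-Ea / (R * T)) / x) * dt
            t += dt
        return t

    print("slow ramp :", time_to_ignite(2.0e5), "s")   # lower power, longer pulse
    print("fast ramp :", time_to_ignite(8.0e5), "s")   # higher power, shorter pulse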

  10. Online fault adaptive control for efficient resource management in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Abdelwahed, Sherif; Wu, Jian; Biswas, Gautam; Ramirez, John; Manders, Eric-J

    2005-01-01

    This article presents the design and implementation of a controller scheme for efficient resource management in Advanced Life Support Systems. In the proposed approach, a switching hybrid system model is used to represent the dynamics of the system components and their interactions. The operational specifications for the controller are represented by utility functions, and the corresponding resource management problem is formulated as a safety control problem. The controller is designed as a limited-horizon online supervisory controller that performs a limited forward search on the state-space of the system at each time step, and uses the utility functions to decide on the best action. The feasibility and accuracy of the online algorithm can be assessed at design time. We demonstrate the effectiveness of the scheme by running a set of experiments on the Reverse Osmosis (RO) subsystem of the Water Recovery System (WRS).
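
    A minimal sketch of a limited-horizon supervisory controller in the spirit described: exhaustive forward search over short action sequences, scored by a utility function and constrained to safe states. The water-tank dynamics, bounds and gains are invented stand-ins for the RO subsystem.

    import itertools

    def limited_horizon_control(state, actions, step, utility, is_safe, horizon=3):
        # Exhaustive forward search over all action sequences of length `horizon`;
        # return the first action of the best sequence that stays safe throughout.
        best_action, best_value = None, float("-inf")
        for seq in itertools.product(actions, repeat=horizon):
            s, value, safe = state, 0.0, True
            for a in seq:
                s = step(s, a)
                if not is_safe(s):
                    safe = False
                    break
                value += utility(s)
            if safe and value > best_value:
                best_action, best_value = seq[0], value
        return best_action          # None if no safe sequence exists

    # Toy water-tank stand-in: the action switches a pump on/off, and the
    # level must stay within safe bounds while tracking a preferred level.
    step = lambda level, pump: level + (1.5 if pump else 0.0) - 0.8   # inflow - demand
    utility = lambda level: -abs(level - 6.0)                         # prefer mid level
    is_safe = lambda level: 2.0 <= level <= 10.0

    level = 3.0
    for _ in range(20):
        action = limited_horizon_control(level, [False, True], step, utility, is_safe)
        if action is None:
            break
        level = step(level, action)
    print("final level:", round(level, 2))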

  11. The role of thermal and lubricant boundary layers in the transient thermal analysis of spur gears

    NASA Technical Reports Server (NTRS)

    El-Bayoumy, L. E.; Akin, L. S.; Townsend, D. P.; Choy, F. C.

    1989-01-01

    An improved convection heat-transfer model has been developed for the prediction of the transient tooth surface temperature of spur gears. The dissipative quality of the lubricating fluid is shown to be limited to the capacity extent of the thermal boundary layer. This phenomenon can be of significance in the determination of the thermal limit of gears accelerating to the point where gear scoring occurs. Steady-state temperature prediction is improved considerably through the use of a variable integration time step that substantially reduces computer time. Computer-generated plots of temperature contours enable the user to animate the propagation of the thermal wave as the gears come into and out of contact, thus contributing to better understanding of this complex problem. This model has a much better capability at predicting gear-tooth temperatures than previous models.

  12. Online fault adaptive control for efficient resource management in Advanced Life Support Systems.

    PubMed

    Abdelwahed, Sherif; Wu, Jian; Biswas, Gautam; Ramirez, John; Manders, Eric-J

    2005-01-01

    This article presents the design and implementation of a controller scheme for efficient resource management in Advanced Life Support Systems. In the proposed approach, a switching hybrid system model is used to represent the dynamics of the system components and their interactions. The operational specifications for the controller are represented by utility functions, and the corresponding resource management problem is formulated as a safety control problem. The controller is designed as a limited-horizon online supervisory controller that performs a limited forward search on the state-space of the system at each time step, and uses the utility functions to decide on the best action. The feasibility and accuracy of the online algorithm can be assessed at design time. We demonstrate the effectiveness of the scheme by running a set of experiments on the Reverse Osmosis (RO) subsystem of the Water Recovery System (WRS).

  13. Sensitivity of monthly streamflow forecasts to the quality of rainfall forcing: When do dynamical climate forecasts outperform the Ensemble Streamflow Prediction (ESP) method?

    NASA Astrophysics Data System (ADS)

    Tanguy, M.; Prudhomme, C.; Harrigan, S.; Smith, K. A.; Parry, S.

    2017-12-01

    Forecasting hydrological extremes is challenging, especially at lead times over 1 month for catchments with limited hydrological memory and variable climates. One simple way to derive monthly or seasonal hydrological forecasts is to use historical climate data to drive hydrological models using the Ensemble Streamflow Prediction (ESP) method. This gives a range of possible future streamflows given known initial hydrologic conditions alone. The degree of skill of ESP depends highly on the forecast initialisation month and catchment type. Using dynamic rainfall forecasts as driving data instead of historical data could potentially improve streamflow predictions. Considerable effort is being invested within the meteorological community to improve these forecasts. However, while recent progress shows promise (e.g. NAO in winter), the skill of these forecasts at monthly to seasonal timescales is generally still limited, and the extent to which they might lead to improved hydrological forecasts is an area of active research. Additionally, these meteorological forecasts are currently produced at monthly or seasonal time-steps in the UK, whereas hydrological models require forcings at daily or sub-daily time-steps. Keeping in mind these limitations of available rainfall forecasts, the objectives of this study are to find out (i) how accurate monthly dynamical rainfall forecasts need to be to outperform ESP, and (ii) how the method used to disaggregate monthly rainfall forecasts into daily rainfall time series affects results. For the first objective, synthetic rainfall time series were created by increasingly degrading observed data (a proxy for a 'perfect forecast') from 0% to ±50% error. For the second objective, three different methods were used to disaggregate monthly rainfall data into daily time series. These were used to force a simple lumped hydrological model (GR4J) to generate streamflow predictions at a one-month lead time for over 300 catchments representative of the range of the UK's hydro-climatic conditions. These forecasts were then benchmarked against the traditional ESP method. It is hoped that the results of this work will help the meteorological community to identify where to focus their efforts in order to increase the usefulness of their forecasts within hydrological forecasting systems.
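
    The degradation experiment in objective (i) is easy to sketch; the code below perturbs a hypothetical observed rainfall series by increasing error levels, after which each degraded series would drive the hydrological model. The data and error model are assumptions, not the study's code.

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical monthly rainfall observations (mm): the stand-in for a
    # "perfect forecast" that gets progressively degraded
    observed = rng.gamma(3.0, 30.0, size=240)

    def degrade(rainfall, error_level, rng):
        # Perturb each monthly total by a random error in [-error_level, +error_level]
        noise = rng.uniform(-error_level, error_level, size=rainfall.shape)
        return np.clip(rainfall * (1.0 + noise), 0.0, None)

    for err in (0.0, 0.1, 0.25, 0.5):          # 0% to +/-50% error
        forecast = degrade(observed, err, rng)
        rmse = np.sqrt(np.mean((forecast - observed) ** 2))
        print(f"+/-{err:.0%} error -> rainfall RMSE {rmse:6.1f} mm")
    # Each degraded series would then drive the hydrological model (GR4J in the
    # study), and the resulting flow forecasts would be benchmarked against ESP.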

  14. Validation of thigh-based accelerometer estimates of postural allocation in 5-12 year-olds.

    PubMed

    van Loo, Christiana M T; Okely, Anthony D; Batterham, Marijka J; Hinkley, Trina; Ekelund, Ulf; Brage, Søren; Reilly, John J; Jones, Rachel A; Janssen, Xanne; Cliff, Dylan P

    2017-03-01

    To validate activPAL3™ (AP3) for classifying postural allocation, estimating time spent in postures and examining the number of breaks in sedentary behaviour (SB) in 5-12 year-olds. Laboratory-based validation study. Fifty-seven children completed 15 sedentary, light- and moderate-to-vigorous intensity activities. Direct observation (DO) was used as the criterion measure. The accuracy of AP3 was examined using a confusion matrix, equivalence testing, Bland-Altman procedures and a paired t-test for 5-8y and 9-12y. Sensitivity of AP3 was 86.8%, 82.5% and 85.3% for sitting/lying, standing, and stepping, respectively, in 5-8y and 95.3%, 81.5% and 85.1%, respectively, in 9-12y. Time estimates of AP3 were equivalent to DO for sitting/lying in 9-12y and stepping in all ages, but not for sitting/lying in 5-12y and standing in all ages. Underestimation of sitting/lying time was smaller in 9-12y (1.4%, limits of agreement [LoA]: -13.8 to 11.1%) compared to 5-8y (12.6%, LoA: -39.8 to 14.7%). Underestimation for stepping time was small (5-8y: 6.5%, LoA: -18.3 to 5.3%; 9-12y: 7.6%, LoA: -16.8 to 1.6%). Considerable overestimation was found for standing (5-8y: 36.8%, LoA: -16.3 to 89.8%; 9-12y: 19.3%, LoA: -1.6 to 36.9%). SB breaks were significantly overestimated (5-8y: 53.2%, 9-12y: 28.3%, p<0.001). AP3 showed acceptable accuracy for classifying postures, however estimates of time spent standing were consistently overestimated and individual error was considerable. Estimates of sitting/lying were more accurate for 9-12y. Stepping time was accurately estimated for all ages. SB breaks were significantly overestimated, although the absolute difference was larger in 5-8y. Surveillance applications of AP3 would be acceptable, however, individual level applications might be less accurate. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  15. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speedup for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion of the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
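
    For readers unfamiliar with the implicit scheme being benchmarked, a minimal BDF2 integrator for a scalar stiff ODE, with the implicit equation solved by Newton's method, looks like this (an illustration of the scheme, not CAM-SE code):

    import numpy as np

    def bdf2_solve(f, dfdy, y0, h, n_steps):
        # Integrate y' = f(y) with BDF2; the startup step is backward Euler.
        # BDF2: y_{n+1} = (4 y_n - y_{n-1}) / 3 + (2/3) h f(y_{n+1})
        def newton(res, jac, y, tol=1e-12, max_iter=50):
            for _ in range(max_iter):
                r = res(y)
                if abs(r) < tol:
                    break
                y -= r / jac(y)
            return y

        ys = [y0]
        ys.append(newton(lambda y: y - y0 - h * f(y),
                         lambda y: 1.0 - h * dfdy(y), y0))
        for _ in range(n_steps - 1):
            ynm1, yn = ys[-2], ys[-1]
            ys.append(newton(lambda y: y - (4.0 * yn - ynm1) / 3.0
                                       - (2.0 / 3.0) * h * f(y),
                             lambda y: 1.0 - (2.0 / 3.0) * h * dfdy(y), yn))
        return np.array(ys)

    # Stiff linear test problem y' = -50 y: |lambda| h = 5 would blow up
    # forward Euler, but BDF2 is A-stable and handles it
    lam = -50.0
    sol = bdf2_solve(lambda y: lam * y, lambda y: lam, y0=1.0, h=0.1, n_steps=20)
    print("y(2) =", sol[-1], "(exact:", np.exp(lam * 2.0), ")")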

  16. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plans. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.
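
    The second step can be approximated very simply: order the anchor beams with a nearest-neighbor tour in gantry/couch angle space and bound delivery time by the trajectory length divided by the axis speed. A toy sketch with assumed angles and speed; the paper itself uses a combinatorial optimization model.

    def angle_dist(a, b):
        # Travel between two (gantry, couch) anchors in degrees; both axes move
        # simultaneously, so the slower (longer-travel) axis sets the time
        d = lambda x, y: min(abs(x - y), 360.0 - abs(x - y))   # shortest arc
        return max(d(a[0], b[0]), d(a[1], b[1]))

    def order_anchors(anchors, start=(0.0, 0.0)):
        # Greedy nearest-neighbor tour through the anchor beams
        remaining, tour, cur = list(anchors), [], start
        while remaining:
            nxt = min(remaining, key=lambda a: angle_dist(cur, a))
            remaining.remove(nxt)
            tour.append(nxt)
            cur = nxt
        return tour

    # Hypothetical anchor beams: (gantry angle, couch angle) in degrees
    anchors = [(200, 10), (30, 0), (330, 350), (90, 20), (150, 340)]
    tour = order_anchors(anchors)
    length = sum(angle_dist(a, b) for a, b in zip([(0.0, 0.0)] + tour, tour))
    max_speed = 6.0   # deg/s axis speed limit (assumed)
    print("visit order:", tour, "| trajectory >= %.0f s" % (length / max_speed))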

  17. Extracapsular cataract extraction training: junior ophthalmology residents' self-reported satisfaction level with their proficiency and initial learning barrier.

    PubMed

    Ting, Daniel Shu Wei; Tan, Sarah; Lee, Shu Yen; Rosman, Mohamad; Aw, Ai Tee; Yeo, Ian Yew San

    2015-07-01

    To investigate residents' self-reported satisfaction level with their proficiency in extracapsular cataract extraction (ECCE) surgery and the initial barriers to learning the procedure. This is a single-centre prospective descriptive case series involving eight first-year ophthalmology residents in Singapore National Eye Center. We recorded the demographics, frequency of review by the residents of their own surgical videos and their satisfaction level with their proficiency at each of the ECCE steps using a 5-point Likert scale. All ECCE surgical videos between October 2013 and May 2014 were collected and analysed for the overall time taken for the surgery and the time taken to perform the individual steps of the procedure. The mean age of the residents was 27.6 ± 1.5 years and 62.5% (5/8) were women. More than half (62.5%, 5/8) reviewed their own surgical videos while 37.5% (3/8) discussed the surgical videos with their peers or supervisors. Of the ECCE steps, the residents were most dissatisfied with their proficiency in performing irrigation and aspiration (87.5%, 7/8), followed by suturing (62.5%, 5/8), intraocular lens insertion (62.5%, 5/8) and tin can capsulotomy (62.5%, 5/8). The average time taken for each ECCE case was 55.0 ± 12.2 min and, of all the steps, most time was spent on suturing (20.5 ± 6.8 min), followed by irrigation and aspiration (5.5 ± 3.6 min) and tin can capsulotomy (3.3 ± 1.8 min). The first-year ophthalmology residents were most dissatisfied with their proficiency in irrigation/aspiration, suturing and tin can capsulotomy. More training needs to be directed to these areas during teaching sessions in the operating room, wet laboratory or cataract simulation training sessions.

  18. Kinematic, muscular, and metabolic responses during exoskeletal-, elliptical-, or therapist-assisted stepping in people with incomplete spinal cord injury.

    PubMed

    Hornby, T George; Kinnaird, Catherine R; Holleran, Carey L; Rafferty, Miriam R; Rodriguez, Kelly S; Cain, Julie B

    2012-10-01

    Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. A prospective, cross-sectional, repeated-measures design was used. Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics. Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices.

  19. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik

    The criticality condition of the reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is based on a day-step analysis, varied from 10 days up to 800 days, and the cycle operation is varied from 1 cycle up to 8 cycles of reactor operation. In addition, the calculation efficiency as a function of the number of computer processors used to run the analysis (time efficiency of the calculation) has been investigated. An optimization method for reactor design analysis, using a large fast breeder reactor as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the calculated criticality becomes higher for smaller burnup steps (in days), while the breeding ratio becomes lower for smaller burnup steps. Some nuclides contribute to a better criticality estimate at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional work required for more detailed step calculations, although the time consumed is not directly proportional to the number of subdivisions of the burnup time step.

  20. An online-coupled NWP/ACT model with conserved Lagrangian levels

    NASA Astrophysics Data System (ADS)

    Sørensen, B.; Kaas, E.; Lauritzen, P. H.

    2012-04-01

    Numerical weather and climate modelling is under constant development. Semi-implicit semi-Lagrangian (SISL) models have proven to be numerically efficient in both short-range weather forecasts and climate models, due to their ability to use long time steps. Chemical/aerosol feedback mechanisms are becoming more and more relevant in NWP as well as climate models, since biogenic and anthropogenic emissions can have a direct effect on the dynamics and radiative properties of the atmosphere. To include chemical feedback mechanisms in NWP models, on-line coupling is crucial. In 3D semi-Lagrangian schemes with quasi-Lagrangian vertical coordinates, the Lagrangian levels are remapped to Eulerian model levels each time step. This remapping introduces an undesirable tendency to smooth sharp gradients and creates unphysical numerical diffusion in the vertical distribution. A semi-Lagrangian advection method is introduced that combines an inherently mass-conserving 2D semi-Lagrangian scheme with a SISL scheme employing both hybrid vertical coordinates and a fully Lagrangian vertical coordinate. This minimizes the vertical diffusion and thus potentially improves the simulation of the vertical profiles of moisture, clouds, and chemical constituents. Since the Lagrangian levels suffer from the traditional Lagrangian limitations caused by convergence and divergence of the flow, remappings to the Eulerian model levels are generally still required - but these need only be applied after a number of time steps, unless dynamic remapping methods are used. To this end, several different remapping methods have been implemented. The combined scheme is mass conserving, consistent, and multi-tracer efficient.
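
    The key operation in such a scheme is a conservative remap of tracer mass from the deformed Lagrangian levels back onto the fixed Eulerian levels. A minimal piecewise-constant version of a 1D conservative remap is sketched below; the level edges and values are made up, and a production scheme would use a higher-order subcell reconstruction to reduce the smoothing the abstract describes.

        import numpy as np

        def remap_1d(src_edges, src_vals, dst_edges):
            """Conservatively remap cell-mean values between 1-D grids."""
            dst_vals = np.zeros(len(dst_edges) - 1)
            for i in range(len(dst_vals)):
                lo, hi = dst_edges[i], dst_edges[i + 1]
                mass = 0.0
                for j in range(len(src_vals)):
                    # mass contributed by the overlap with source cell j
                    overlap = max(0.0, min(hi, src_edges[j + 1]) - max(lo, src_edges[j]))
                    mass += src_vals[j] * overlap
                dst_vals[i] = mass / (hi - lo)
            return dst_vals

        src_edges = np.array([0.0, 1.2, 2.1, 3.4, 5.0])  # deformed Lagrangian levels
        src_vals = np.array([1.0, 3.0, 2.0, 0.5])
        dst_edges = np.linspace(0.0, 5.0, 6)             # fixed Eulerian levels
        out = remap_1d(src_edges, src_vals, dst_edges)
        # total tracer mass is unchanged by the remap:
        assert np.isclose((src_vals * np.diff(src_edges)).sum(),
                          (out * np.diff(dst_edges)).sum())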

  1. Reliability and convergent validity of the five-step test in people with chronic stroke.

    PubMed

    Ng, Shamay S M; Tse, Mimi M Y; Tam, Eric W C; Lai, Cynthia Y Y

    2018-01-10

    (i) To estimate the intra-rater, inter-rater and test-retest reliabilities of the Five-Step Test (FST), as well as the minimum detectable change in FST completion times in people with stroke. (ii) To estimate the convergent validity of the FST with other measures of stroke-specific impairments. (iii) To identify the best cut-off times for distinguishing FST performance in people with stroke from that of healthy older adults. A cross-sectional study. University-based rehabilitation centre. Forty-eight people with stroke and 39 healthy controls. None. The FST, along with (for the stroke survivors only) scores on the Fugl-Meyer Lower Extremity Assessment (FMA-LE), the Berg Balance Scale (BBS), Limits of Stability (LOS) tests, and Activities-specific Balance Confidence (ABC) scale were tested. The FST showed excellent intra-rater (intra-class correlation coefficient; ICC = 0.866-0.905), inter-rater (ICC = 0.998), and test-retest (ICC = 0.838-0.842) reliabilities. A minimum detectable change of 9.16 s was found for the FST in people with stroke. The FST correlated significantly with the FMA-LE, BBS, and LOS results in the forward and sideways directions (r = -0.411 to -0.716, p < 0.004). The FST completion time of 13.35 s was shown to discriminate reliably between people with stroke and healthy older adults. The FST is a reliable, easy-to-administer clinical test for assessing stroke survivors' ability to negotiate steps and stairs.

  2. A Global Magnetohydrodynamic Model of Jovian Magnetosphere

    NASA Technical Reports Server (NTRS)

    Walker, Raymond J.; Sharber, James (Technical Monitor)

    2001-01-01

    The goal of this project was to develop a new global magnetohydrodynamic model of the interaction of the Jovian magnetosphere with the solar wind. Observations from 28 orbits of Jupiter by Galileo, along with those from previous spacecraft at Jupiter (Pioneer 10 and 11, Voyager 1 and 2, and Ulysses), have revealed that the Jovian magnetosphere is a vast, complicated system. The Jovian aurora also has been monitored for several years. Like auroral observations at Earth, these measurements provide us with a global picture of magnetospheric dynamics. Despite this wide range of observations, we have limited quantitative understanding of the Jovian magnetosphere and how it interacts with the solar wind. For the past several years we have been working toward a quantitative understanding of the Jovian magnetosphere and its interaction with the solar wind by employing global magnetohydrodynamic simulations to model the magnetosphere. Our model has been an explicit MHD code, previously used to model the Earth's magnetosphere, adapted to study Jupiter's magnetosphere. We continue to obtain important insights with this code, but it suffers from some severe limitations. In particular, with this code we are limited to considering the region outside of 15 R_J, with cell sizes of about 1.5 R_J. The problem arises because of the presence of widely separated time scales throughout the magnetosphere. The numerical stability criterion for explicit MHD codes is the CFL limit, C_max Δt/Δx < 1, where C_max is the maximum group velocity in a given cell, Δx is the grid spacing, and Δt is the time step. If the maximum wave velocity is C_w and the flow speed is C_f, then C_max = C_w + C_f. Near Jupiter the Alfven wave speed becomes very large (it approaches the speed of light at one Jovian radius). Operating with the resulting time step makes the calculation essentially intractable. Therefore, under this funding we have been designing a new MHD model that will be able to compute solutions in the wide parameter regime of the Jovian magnetosphere.
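
    To make the severity of this restriction concrete, the snippet below evaluates the CFL bound quoted above, Δt ≤ Δx / C_max with C_max = C_w + C_f, for grid spacings and wave speeds of the order discussed in the abstract; all of the numbers are purely illustrative.

        # CFL time-step bound for an explicit MHD code (illustrative numbers)
        R_J = 7.1492e7                # Jovian radius [m]
        dx = 1.5 * R_J                # grid spacing of ~1.5 R_J [m]
        c_w = 3.0e7                   # assumed Alfven speed in an inner cell [m/s]
        c_f = 4.0e5                   # assumed flow speed [m/s]
        dt_max = dx / (c_w + c_f)     # largest stable explicit time step
        print(f"dt_max ~ {dt_max:.1f} s")  # shrinks further as c_w -> c near Jupiter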

  3. Detection of Only Viable Bacterial Spores Using a Live/Dead Indicator in Mixed Populations

    NASA Technical Reports Server (NTRS)

    Behar, Alberto E.; Stam, Christina N.; Smiley, Ronald

    2013-01-01

    This method uses a photoaffinity label that recognizes DNA and can be used to distinguish populations of bacterial cells from bacterial spores without the use of heat shocking during conventional culture, and live from dead bacterial spores using molecular-based methods. Biological validation of commercial sterility using traditional and alternative technologies remains challenging. Recovery of viable spores is cumbersome, as the process requires substantial incubation time, and the extended time to results limits the ability to quickly evaluate the efficacy of existing technologies. Nucleic acid amplification approaches such as PCR (polymerase chain reaction) have shown promise for improving time to detection for a wide range of applications. Recent real-time PCR methods are particularly promising, as these methods can be made at least semi-quantitative by correspondence to a standard curve. Nonetheless, PCR-based methods are rarely used for process validation, largely because the DNA from dead bacterial cells is highly stable and hence, DNA-based amplification methods fail to discriminate between live and inactivated microorganisms. Currently, no published method has been shown to effectively distinguish between live and dead bacterial spores. This technology uses a DNA-binding photoaffinity label that can be used to distinguish between live and dead bacterial spores with detection limits ranging from 10^9 to 10^2 spores/mL. An environmental sample suspected of containing a mixture of live and dead vegetative cells and bacterial endospores is treated with a photoaffinity label. This step will eliminate any vegetative cells (live or dead) and dead endospores present in the sample. To further determine the bacterial spore viability, DNA is extracted from the spores and the total population is quantified by real-time PCR. The current NASA standard assay takes 72 hours for results. Part of this procedure requires a heat shock step at 80 °C for 15 minutes before the sample can be plated. Using a photoaffinity label would remove this step from the current assay as the label readily penetrates both live and dead bacterial cells. Secondly, the photoaffinity label can only penetrate dead bacterial spores, leaving behind the viable spore population. This would allow for rapid bacterial spore detection in a matter of hours compared to the several days that it takes for the NASA standard assay.

  4. Balance and postural skills in normal-weight and overweight prepubertal boys.

    PubMed

    Deforche, Benedicte I; Hills, Andrew P; Worringham, Charles J; Davies, Peter S W; Murphy, Alexia J; Bouckaert, Jacques J; De Bourdeaudhuij, Ilse M

    2009-01-01

    This study investigated differences in balance and postural skills in normal-weight versus overweight prepubertal boys. Fifty-seven 8-10-year-old boys were categorized as overweight (N = 25) or normal-weight (N = 32) according to the International Obesity Task Force cut-off points for overweight in children. The Balance Master, a computerized pressure plate system, was used to objectively measure six balance skills: sit-to-stand, walk, step up/over, tandem walk (walking on a line), unilateral stance and limits of stability. In addition, three standardized field tests were employed: standing on one leg on a balance beam, walking heel-to-toe along the beam and the multiple sit-to-stand test. Overweight boys showed poorer performance on several items assessed on the Balance Master. Overweight boys had slower weight transfer (p < 0.05), a lower rising index (p < 0.05) and greater sway velocity (p < 0.001) in the sit-to-stand test, greater step width while walking (p < 0.05) and lower speed when walking on a line (p < 0.01) compared with normal-weight counterparts. Performance on the step up/over test, the unilateral stance and the limits of stability was comparable between the two groups. On the balance beam, overweight boys could not hold their balance on one leg as long (p < 0.001) and had fewer correct steps in the heel-to-toe test (p < 0.001) than normal-weight boys. Finally, overweight boys were slower in standing up and sitting down five times in the multiple sit-to-stand task (p < 0.01). This study demonstrates that, when categorised by body mass index (BMI) level, overweight prepubertal boys displayed lower capacity on several static and dynamic balance and postural skills.

  5. a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.

    2018-04-01

    Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change in time-series data from different sensors. First, the overall difference image of the time-series PolSAR stack is calculated using an omnibus test statistic. Second, difference images between any two acquisition times are obtained using the R_j test statistic. Finally, a generalized Gaussian mixture model (GGMM) is used to obtain the time-series change detection maps. To verify the effectiveness of the proposed method, we carried out change detection experiments using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. The results show that the proposed method can detect time-series change using data from different sensors.

  6. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of the numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed the global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analyses and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient results. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
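
    The two recommended integrators are easy to state concretely. The sketch below implements a midpoint step and a third-order Runge-Kutta step for the kinematic trajectory equation dx/dt = v(x, t), using a toy solid-body-rotation wind in place of the interpolated ECMWF fields; the initial position and rotation rate are invented for illustration.

        import numpy as np

        def v(x, t):
            # toy solid-body-rotation wind; an LPDM interpolates gridded winds here
            return 1.0e-4 * np.array([-x[1], x[0]])

        def midpoint_step(x, t, dt):
            return x + dt * v(x + 0.5 * dt * v(x, t), t + 0.5 * dt)

        def rk3_step(x, t, dt):                  # third-order Runge-Kutta
            k1 = v(x, t)
            k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
            k3 = v(x - dt * k1 + 2.0 * dt * k2, t + dt)
            return x + dt * (k1 + 4.0 * k2 + k3) / 6.0

        x, t, dt = np.array([1.0e6, 0.0]), 0.0, 100.0   # 100 s midpoint step
        for _ in range(864):                            # one simulated day
            x = midpoint_step(x, t, dt)
            t += dt
        print(np.hypot(x[0], x[1]))  # drift from the exact radius 1.0e6 m
                                     # measures the truncation error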

  7. High-resolution broadband spectroscopy using externally dispersed interferometry at the Hale telescope: part 2, photon noise theory

    NASA Astrophysics Data System (ADS)

    Erskine, David J.; Edelstein, Jerry; Wishnow, Edward; Sirk, Martin; Muirhead, Philip S.; Muterspaugh, Matthew W.; Lloyd, James P.

    2016-10-01

    High-resolution broadband spectroscopy at near-infrared (NIR) wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar, with the TEDI interferometer mounted within the central hole of the 200-in. primary mirror in series with the comounted TripleSpec NIR echelle spectrograph. These are the first multidelay EDI demonstrations on starlight. We demonstrated very high (10×) resolution boost and dramatic (20× or more) robustness to point spread function wavelength drifts in the native spectrograph. Data analysis, results, and instrument noise are described in a companion paper (part 1). This part 2 describes theoretical photon limited and readout noise limited behaviors, using simulated spectra and instrument model with noise added at the detector. We show that a single interferometer delay can be used to reduce the high frequency noise at the original resolution (1× boost case), and that except for delays much smaller than the native response peak half width, the fringing and nonfringing noises act uncorrelated and add in quadrature. This is due to the frequency shifting of the noise due to the heterodyning effect. We find a sum rule for the noise variance for multiple delays. The multiple delay EDI using a Gaussian distribution of exposure times has noise-to-signal ratio for photon-limited noise similar to a classical spectrograph with reduced slitwidth and reduced flux, proportional to the square root of resolution boost achieved, but without the focal spot limitation and pixel spacing Nyquist limitations. At low boost (˜1×) EDI has ˜1.4× smaller noise than conventional, and at >10× boost, EDI has ˜1.4× larger noise than conventional. Readout noise is minimized by the use of three or four steps instead of 10 of TEDI. Net noise grows as step phases change from symmetrical arrangement with wavenumber across the band. For three (or four) steps, we calculate a multiplicative bandwidth of 1.8:1 (2.3:1), sufficient to handle the visible band (400 to 700 nm, 1.8:1) and most of TripleSpec (2.6:1).

  8. High-resolution broadband spectroscopy using externally dispersed interferometry at the Hale telescope: Part 2, photon noise theory

    DOE PAGES

    Erskine, David J.; Edelstein, Jerry; Wishnow, Edward; ...

    2016-10-01

    High-resolution broadband spectroscopy at near-infrared (NIR) wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar, with the TEDI interferometer mounted within the central hole of the 200-in. primary mirror in series with the comounted TripleSpec NIR echelle spectrograph. These are the first multidelay EDI demonstrations on starlight. We demonstrated very high (10×) resolution boost and dramatic (20× or more) robustness to point spread function wavelength drifts in the native spectrograph. Data analysis, results, and instrument noise are described in a companion paper (part 1). This part 2 describes theoretical photon limited and readout noise limited behaviors, using simulated spectra and instrument model with noise added at the detector. We show that a single interferometer delay can be used to reduce the high frequency noise at the original resolution (1× boost case), and that except for delays much smaller than the native response peak half width, the fringing and nonfringing noises act uncorrelated and add in quadrature. This is due to the frequency shifting of the noise due to the heterodyning effect. We find a sum rule for the noise variance for multiple delays. The multiple delay EDI using a Gaussian distribution of exposure times has noise-to-signal ratio for photon-limited noise similar to a classical spectrograph with reduced slitwidth and reduced flux, proportional to the square root of resolution boost achieved, but without the focal spot limitation and pixel spacing Nyquist limitations. At low boost (~1×) EDI has ~1.4× smaller noise than conventional, and at >10× boost, EDI has ~1.4× larger noise than conventional. Readout noise is minimized by the use of three or four steps instead of 10 of TEDI. Net noise grows as step phases change from symmetrical arrangement with wavenumber across the band. As a result, for three (or four) steps, we calculate a multiplicative bandwidth of 1.8:1 (2.3:1), sufficient to handle the visible band (400 to 700 nm, 1.8:1) and most of TripleSpec (2.6:1).

  9. High-resolution broadband spectroscopy using externally dispersed interferometry at the Hale telescope: Part 2, photon noise theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erskine, David J.; Edelstein, Jerry; Wishnow, Edward

    High-resolution broadband spectroscopy at near-infrared (NIR) wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar, with the TEDI interferometer mounted within the central hole of the 200-in. primary mirror in series with the comounted TripleSpec NIR echelle spectrograph. These are the first multidelay EDI demonstrations on starlight. We demonstrated very high (10×) resolution boost and dramatic (20× or more) robustness to point spread function wavelength drifts in the native spectrograph. Data analysis, results, and instrument noise are described in a companion paper (part 1). This part 2 describes theoretical photon limited and readout noise limited behaviors, using simulated spectra and instrument model with noise added at the detector. We show that a single interferometer delay can be used to reduce the high frequency noise at the original resolution (1× boost case), and that except for delays much smaller than the native response peak half width, the fringing and nonfringing noises act uncorrelated and add in quadrature. This is due to the frequency shifting of the noise due to the heterodyning effect. We find a sum rule for the noise variance for multiple delays. The multiple delay EDI using a Gaussian distribution of exposure times has noise-to-signal ratio for photon-limited noise similar to a classical spectrograph with reduced slitwidth and reduced flux, proportional to the square root of resolution boost achieved, but without the focal spot limitation and pixel spacing Nyquist limitations. At low boost (~1×) EDI has ~1.4× smaller noise than conventional, and at >10× boost, EDI has ~1.4× larger noise than conventional. Readout noise is minimized by the use of three or four steps instead of 10 of TEDI. Net noise grows as step phases change from symmetrical arrangement with wavenumber across the band. As a result, for three (or four) steps, we calculate a multiplicative bandwidth of 1.8:1 (2.3:1), sufficient to handle the visible band (400 to 700 nm, 1.8:1) and most of TripleSpec (2.6:1).

  10. The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.

    PubMed

    Greenfield, Brenna L; Tonigan, J Scott

    2013-09-01

    Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step work have received minimal attention, and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on a face-valid, direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR, comprising behavioral step work and spiritual step work. Behavioral step work did not change over time but was predicted by having a sponsor, while spiritual step work decreased over time, with increases predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes.

  11. Two-step chlorination: A new approach to disinfection of a primary sewage effluent.

    PubMed

    Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan

    2017-01-01

    Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. Achieving a given disinfection goal while reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how two-step chlorination can be optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%.

  12. Coulomb fission in dielectric dication clusters: experiment and theory on steps that may underpin the electrospray mechanism.

    PubMed

    Chen, Xiaojing; Bichoutskaia, Elena; Stace, Anthony J

    2013-05-16

    A series of five molecular dication clusters, (H2O)_n^2+, (NH3)_n^2+, (CH3CN)_n^2+, (C5H5N)_n^2+, and (C6H6)_n^2+, have been studied for the purpose of identifying patterns of behavior close to the Rayleigh instability limit where the clusters might be expected to exhibit Coulomb fission. Experiments show that the instability limit for each dication covers a range of sizes and that on a time scale of 10^-4 s ions close to the limit can undergo either Coulomb fission or neutral evaporation. The observed fission pathways exhibit considerable asymmetry in the sizes of the charged fragments, and are associated with kinetic (ejection) energies of ~0.9 eV. Coulomb fission has been modeled using a theory recently formulated to describe how charged particles of dielectric materials interact with one another (Bichoutskaia et al. J. Chem. Phys. 2010, 133, 024105). The calculated electrostatic interaction energy between separating fragments accounts for the observed asymmetric fragmentation and for the magnitudes of the measured ejection energies. The close match between theory and experiment suggests that a significant fraction of excess charge resides on the surfaces of the fragment ions. The experiments provided support for a fundamental step in the electrospray ionization (ESI) mechanism, namely the ejection from droplets of small solvated charge carriers. At the same time, the theory shows how water and acetonitrile may behave slightly differently as ESI solvents. However, the theory also reveals deficiencies in the point-charge image-charge model that has previously been used to quantify Coulomb fission in the electrospray process.

  13. Application of statistical experimental design to the optimisation of microextraction by packed sorbent for the analysis of nonsteroidal anti-inflammatory drugs in human urine by ultra-high pressure liquid chromatography.

    PubMed

    Magiera, Sylwia; Gülmez, Şefika; Michalik, Aleksandra; Baranowska, Irena

    2013-08-23

    A new approach based on microextraction by packed sorbent (MEPS) and a reversed-phase ultra-high pressure liquid chromatography (UHPLC) method was developed and validated for the determination and quantification of nonsteroidal anti-inflammatory drugs (NSAIDs) (acetylsalicylic acid, ketoprofen, diclofenac, naproxen and ibuprofen) in human urine. The important factors that could influence the extraction were previously screened using the Plackett-Burman design approach. The optimal MEPS extraction conditions were obtained using a C18 phase as sorbent, a small sample volume (20 μL) and a short time period (approximately 5 min) for the entire sample preparation step. The analytes were separated on a core-shell column (Poroshell 120 EC-C18; 100 mm × 3.0 mm; 2.7 μm) using a binary mobile phase composed of aqueous 0.1% trifluoroacetic acid and acetonitrile in gradient elution mode (4.5 min of analysis time). The analytical method was fully validated based on linearity, limits of detection (LOD), limits of quantification (LOQ), inter- and intra-day precision and accuracy, and extraction yield. Under optimised conditions, excellent linearity (R² > 0.9991), limits of detection (1.07-16.2 ng/mL) and precision (0.503-9.15% RSD) were observed for the target drugs. The average absolute recoveries of the analysed compounds extracted from the urine samples were 89.4-107%. The proposed method was also applied to the analysis of NSAIDs in human urine. The new approach offers an attractive alternative for the analysis of selected drugs from urine samples, providing several advantages including fewer sample preparation steps, faster sample throughput and ease of performance compared to traditional methodologies.

  14. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
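
    The idea of advancing fast and slow terms with different time steps can be illustrated with a generic split-explicit sketch (not the EMTSS algorithm itself): a stiff linear "gravity wave" term is subcycled with a small step inside one large step for the slow term. All rates and step sizes below are toy values, not the UCLA-model setup.

        # du/dt = F_fast(u) + F_slow(u); subcycle the fast term only
        omega_fast, alpha_slow = 500.0, 0.1

        def step(u, dt_slow, n_sub):
            du_slow = -alpha_slow * u      # slow tendency, evaluated once
            dt_fast = dt_slow / n_sub      # inner step meets the fast stability limit
            for _ in range(n_sub):
                u = u + dt_fast * (-omega_fast * u + du_slow)
            return u

        u = 1.0
        for _ in range(100):
            # taking dt_slow in a single explicit step would violate
            # dt < 2 / omega_fast and blow up; the subcycling keeps it stable
            u = step(u, dt_slow=0.01, n_sub=10)
        print(u)   # decays toward zero, as the exact solution does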

  15. Numerical System Solver Developed for the National Cycle Program

    NASA Technical Reports Server (NTRS)

    Binder, Michael P.

    1999-01-01

    As part of the National Cycle Program (NCP), a powerful new numerical solver has been developed to support the simulation of aeropropulsion systems. This software uses a hierarchical object-oriented design. It can provide steady-state and time-dependent solutions to nonlinear and even discontinuous problems typically encountered when aircraft and spacecraft propulsion systems are simulated. It can also handle constrained solutions, in which one or more factors may limit the behavior of the engine system. Time-dependent simulation capabilities include adaptive time-stepping and synchronization with digital control elements. The NCP solver is playing an important role in making the NCP a flexible, powerful, and reliable simulation package.

  16. Toward practical 3D radiography of pipeline girth welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wassink, Casper; Hol, Martijn; Flikweert, Arjan

    2015-03-31

    Digital radiography has made its way into in-the-field girth weld testing. With recent generations of detectors and x-ray tubes it is possible to reach the image quality required by standards as well as the speed of inspection needed to be competitive with film radiography and automated ultrasonic testing. This paper will show the application of these technologies in the RTD Rayscan system. The method for achieving an image quality that complies with or even exceeds prevailing industrial standards will be presented, as well as the application to pipeline girth welds with CRA layers. A next step in development will be to also achieve a measurement of weld flaw height to allow for performing an Engineering Critical Assessment on the weld. This will allow for similar acceptance limits as currently used with Automated Ultrasonic Testing of pipeline girth welds. Although a sufficient sizing accuracy was already demonstrated and qualified in the TomoCAR system, testing in some applications is restricted by time limits. The paper will present some experiments that were performed to achieve flaw height approximation within these time limits.

  17. Transient Structures and Possible Limits of Data Recording in Phase-Change Materials.

    PubMed

    Hu, Jianbo; Vanacore, Giovanni M; Yang, Zhe; Miao, Xiangshui; Zewail, Ahmed H

    2015-07-28

    Phase-change materials (PCMs) represent the leading candidates for universal data storage devices, which exploit the large difference in the physical properties of their transitional lattice structures. On a nanoscale, it is fundamental to determine their performance, which is ultimately controlled by the speed limit of transformation among the different structures involved. Here, we report observation with atomic-scale resolution of transient structures of nanofilms of crystalline germanium telluride, a prototypical PCM, using ultrafast electron crystallography. A nonthermal transformation from the initial rhombohedral phase to the cubic structure was found to occur in 12 ps. On a much longer time scale, hundreds of picoseconds, equilibrium heating of the nanofilm is reached, driving the system toward amorphization, provided that high excitation energy is invoked. These results elucidate the elementary steps defining the structural pathway in the transformation of crystalline-to-amorphous phase transitions and describe the essential atomic motions involved when driven by an ultrafast excitation. The establishment of the time scales of the different transient structures, as reported here, permits determination of the possible limit of performance, which is crucial for high-speed recording applications of PCMs.

  18. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented, and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions, performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
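
    The scheduling logic behind hierarchical (block) time steps can be sketched compactly: each particle is assigned dt_i = dt_max / 2^k_i and is advanced only when the global clock reaches a multiple of its step. The snippet below shows only this bookkeeping, with invented acceleration proxies; the force evaluation (the tree walk) that GOTHIC accelerates on the GPU is omitted.

        import numpy as np

        dt_max, n_levels = 1.0, 4
        acc = np.array([0.1, 0.4, 1.7, 9.0])   # made-up |acceleration| per particle
        # deeper level (smaller step) for more strongly accelerated particles
        levels = np.clip(np.ceil(0.5 * np.log2(acc / acc.min())),
                         0, n_levels - 1).astype(int)

        dt_min = dt_max / 2 ** (n_levels - 1)
        t, t_end, step = 0.0, 4.0, 0
        while t < t_end:
            # a level-k particle updates every 2**(n_levels - 1 - k) minimum steps
            active = step % (2 ** (n_levels - 1 - levels)) == 0
            # ... kick/drift only the active particles here ...
            t += dt_min
            step += 1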

  19. Deep Neural Network Emulation of a High-Order, WENO-Limited, Space-Time Reconstruction

    NASA Astrophysics Data System (ADS)

    Norman, M. R.; Hall, D. M.

    2017-12-01

    Deep Neural Networks (DNNs) have been used to emulate a number of processes in atmospheric models, including radiation and even so-called super-parameterization of moist convection. In each scenario, the DNN provides a good representation of the process even for inputs that have not been encountered before. More notably, they provide an emulation at a fraction of the cost of the original routine, giving speed-ups of 30× and even up to 200× compared to the runtime costs of the original routines. However, to our knowledge there has not been an investigation into using DNNs to emulate the dynamics. The most likely reason for this is that dynamics operators are typically both linear and low cost, meaning they cannot be sped up by a non-linear DNN emulation. However, there exist high-cost non-linear space-time dynamics operators that significantly reduce the number of parallel data transfers necessary to complete an atmospheric simulation. The WENO-limited Finite-Volume method with ADER-DT time integration is a prime example of this - needing only two parallel communications per large, fully limited time step. However, it comes at a high cost in terms of computation, which is why many would hesitate to use it. This talk investigates DNN emulation of the WENO-limited space-time finite-volume reconstruction procedure - the most expensive portion of this method, which densely clusters a large amount of non-linear computation. Different training techniques and network architectures are tested, and the accuracy and speed-up of each is given.
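
    As a hedged illustration of the emulation idea, the sketch below trains a small multilayer perceptron to reproduce a nonlinear, limited reconstruction map from a five-cell stencil of cell averages to a pair of interface values. The "truth" function here is a simple minmod-limited reconstruction standing in for the far more elaborate WENO-limited ADER-DT operator; the network size and training setup are arbitrary choices.

        import torch
        import torch.nn as nn

        def minmod_faces(u):               # (N, 5) stencils -> (N, 2) face values
            d1, d2 = u[:, 2] - u[:, 1], u[:, 3] - u[:, 2]
            slope = torch.where(d1 * d2 > 0,
                                torch.sign(d1) * torch.minimum(d1.abs(), d2.abs()),
                                torch.zeros_like(d1))
            return torch.stack([u[:, 2] - 0.5 * slope, u[:, 2] + 0.5 * slope], dim=1)

        net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, 2))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for it in range(2000):
            x = torch.randn(1024, 5)       # random cell-average stencils
            loss = nn.functional.mse_loss(net(x), minmod_faces(x))
            opt.zero_grad()
            loss.backward()
            opt.step()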

  20. Method of Lines Transpose: An Implicit Vlasov-Maxwell Solver for Plasmas

    DTIC Science & Technology

    2015-04-17

    ... boundary crossings should be rare. Numerical results for the Bennett pinch are given in Figure 9. In order to resolve large gradients near the center of the ... contributing to the large error at the center of the beam due to large gradients there ... and with the finite beam cut-off radius and the outflow boundary ... The usable time step size can be limited by the numerical accuracy of the method when there are large gradients (high-frequency content) in the solution.

  1. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, can fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis of the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and is second-order accurate over a wide range of time step sizes. For the extinction and ignition processes, both the balanced splitting and the midpoint method yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. In contrast, the midpoint method, which solves reaction and diffusion together, matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For both sustained and decaying oscillatory combustion from cool flames, the Strang splitting and the midpoint method successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
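
    The structure of the schemes being compared can be seen in a few lines for the model PSR problem dT/dt = w(T) + m(T), with an Arrhenius-type reaction source w and a linear mixing term m. The coefficients below are invented for illustration; the point is only the contrast between a Strang-split update and an unsplit (coupled) solve.

        import numpy as np
        from scipy.integrate import solve_ivp

        def w(T):                          # Arrhenius-like reaction source (toy)
            return 5.0 * np.exp(-2.0 / np.maximum(T, 1e-8)) * (2.0 - T)

        def m(T):                          # linear mixing toward an inflow state
            return 1.0 - T

        def strang_step(T, dt):            # mix dt/2, react dt, mix dt/2
            T = solve_ivp(lambda t, y: m(y), (0, dt / 2), [T], rtol=1e-8).y[0, -1]
            T = solve_ivp(lambda t, y: w(y), (0, dt), [T], rtol=1e-8).y[0, -1]
            return solve_ivp(lambda t, y: m(y), (0, dt / 2), [T], rtol=1e-8).y[0, -1]

        def coupled_step(T, dt):           # reaction and mixing solved together
            return solve_ivp(lambda t, y: w(y) + m(y), (0, dt), [T], rtol=1e-8).y[0, -1]

        T_split = T_full = 1.0
        for _ in range(50):
            T_split, T_full = strang_step(T_split, 0.5), coupled_step(T_full, 0.5)
        print(T_split, T_full)             # the gap is the splitting error,
                                           # which grows with the step size dt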

  2. Analysis of operator splitting errors for near-limit flame simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhen; Zhou, Hua; Li, Shan

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, can fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis of the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and is second-order accurate over a wide range of time step sizes. For the extinction and ignition processes, both the balanced splitting and the midpoint method yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. In contrast, the midpoint method, which solves reaction and diffusion together, matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For both sustained and decaying oscillatory combustion from cool flames, the Strang splitting and the midpoint method successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.

  3. Ultra High Strain Rate Nanoindentation Testing.

    PubMed

    Sudharshan Phani, Pardhasaradhi; Oliver, Warren Carl

    2017-06-17

    Strain rate dependence of indentation hardness has been widely used to study time-dependent plasticity. However, currently available techniques limit the range of strain rates that can be achieved during indentation testing. Recent advances in electronics have enabled nanomechanical measurements with very low noise levels (sub-nanometer) at fast time constants (20 µs) and high data acquisition rates (100 kHz). These capabilities open the door for a wide range of ultra-fast nanomechanical testing, for instance, indentation testing at very high strain rates. With an accurate dynamic model and an instrument with fast time constants, step-load tests can be performed that enable access to indentation strain rates approaching ballistic levels (i.e., 4000 1/s). A novel indentation-based testing technique involving a combination of step-load and constant-load-and-hold tests, which enables measurement of the strain rate dependence of hardness spanning over seven orders of magnitude in strain rate, is presented. A simple analysis is used to calculate the equivalent uniaxial response from indentation data, which is compared to conventional uniaxial data for commercial-purity aluminum. Excellent agreement is found between the indentation and uniaxial data over several orders of magnitude of strain rate.
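
    For reference, the indentation strain rate in such tests is conventionally taken as (1/h)(dh/dt), where h is the indentation depth. The snippet below evaluates it on a synthetic depth-time trace (the h ~ t^1/2 creep response is made up) to show how a single step-load test spans several decades of strain rate.

        import numpy as np

        t = np.linspace(2e-5, 1e-2, 2000)      # s; from the ~20 us time constant upward
        h = 50e-9 * (t / t[0]) ** 0.5          # m; synthetic creep response h ~ t^1/2
        strain_rate = np.gradient(h, t) / h    # (1/h) dh/dt, in 1/s
        print(strain_rate[0], strain_rate[-1]) # ~2.5e4 1/s down to ~50 1/s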

  4. Generalized reference fields and source interpolation for the difference formulation of radiation transport

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham

    2010-03-01

    In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time- and space-derivative source terms that cannot be determined until the end of the time step. The space-derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step or, in cases where an alternative temperature better describes the radiation field, relative to that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space-derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.

  5. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatment of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to those of the higher-order implicit schemes derived from Rosenbrock's and Gear's methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of species concentrations in the aqueous phase in addition to gas-phase chemistry.
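
    A one-step linearized implicit update with step-size control, in the general spirit of what the abstract describes (though not the ASIS code itself), can be sketched as follows; the two-species mechanism, rate constants, tolerance, and step-adjustment factors are all invented.

        import numpy as np

        def f(c):                              # production minus loss (toy rates)
            k1, k2 = 10.0, 1.0
            return np.array([-k1 * c[0] + k2 * c[1], k1 * c[0] - k2 * c[1]])

        def jac(c):                            # Jacobian of the chemical fluxes
            k1, k2 = 10.0, 1.0
            return np.array([[-k1, k2], [k1, -k2]])

        def semi_implicit_step(c, dt):
            # one-step linearized implicit update: (I - dt J) dc = dt f(c);
            # columns of J sum to zero here, so sum(c) (mass) is conserved
            A = np.eye(len(c)) - dt * jac(c)
            return c + np.linalg.solve(A, dt * f(c))

        c, t, dt = np.array([1.0, 0.0]), 0.0, 1e-3
        while t < 1.0:
            c1 = semi_implicit_step(c, dt)                        # one full step
            c2 = semi_implicit_step(semi_implicit_step(c, dt / 2), dt / 2)
            err = np.max(np.abs(c1 - c2) / (np.abs(c2) + 1e-12))  # step doubling
            if err < 1e-4:
                c, t, dt = c2, t + dt, dt * 1.3                   # accept, grow step
            else:
                dt *= 0.5                                         # reject, shrink step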

  6. The prevalence of upright non-stepping time in comparison to stepping time in 11-13 year old school children across seasons.

    PubMed

    McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W

    2012-11-01

    Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping, or its variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence of and seasonal changes in time spent upright and not stepping (UNSt_time) as well as time spent upright and stepping (USt_time), and their contribution to overall upright time (U_time). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt_time contributed 60% of daily U_time in winter (mean = 196 min) and 53% in summer (mean = 171 min), a significant seasonal effect, p < 0.001. USt_time was significantly greater in summer than in winter (153 min versus 131 min, p < 0.001). The effects in UNSt_time could be explained by significant seasonal differences during school hours (09:00-16:00), whereas the effects in USt_time could be explained by significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt_time and USt_time provide important information for behaviour change intervention programs.

  7. Rapid and specific detection of Salmonella in water samples using real-time PCR and High Resolution Melt (HRM) curve analysis.

    PubMed

    van Blerk, G N; Leibach, L; Mabunda, A; Chapman, A; Louw, D

    2011-01-01

    A real-time PCR assay combined with a pre-enrichment step for the specific and rapid detection of Salmonella in water samples is described. Following amplification of the invA gene target, High Resolution Melt (HRM) curve analysis was used to discriminate between the products formed and to positively identify invA amplification. The real-time PCR assay was evaluated for specificity and sensitivity. The assay displayed 100% specificity for Salmonella and, combined with a 16-18 h non-selective pre-enrichment step, proved to be highly sensitive, with a detection limit of 1.0 CFU/mL for surface water samples. The detection assay also demonstrated high intra-run and inter-run repeatability, with very little variation in invA amplicon melting temperature. When applied to water samples received routinely by the laboratory, the assay showed the presence of Salmonella particularly in surface water and treated effluent samples. Using the HRM-based assay, the time required for Salmonella detection was drastically shortened to less than 24 h, compared to several days when using standard culturing methods. This assay provides a useful tool for routine water quality monitoring as well as for quick screening during disease outbreaks.

  8. Immediate Effects of Clock-Turn Strategy on the Pattern and Performance of Narrow Turning in Persons With Parkinson Disease.

    PubMed

    Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa

    2016-10-01

    Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy intended to improve turning performance in people with PD, although its effects are unverified. This study therefore investigated the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without a concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed it in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing-of-gait episodes were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster, with fewer freezing-of-gait episodes, than the usual-turn group. Dual task increased step time variability and step time asymmetry in both groups but did not affect turning performance or freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).

  9. On the limits of numerical astronomical solutions used in paleoclimate studies

    NASA Astrophysics Data System (ADS)

    Zeebe, Richard E.

    2017-04-01

    Numerical solutions of the equations of the Solar System estimate Earth's orbital parameters in the past and represent the backbone of cyclostratigraphy and astrochronology, now widely applied in geology and paleoclimatology. Given one numerical realization of a Solar System model (i.e., obtained using one code or integrator package), various parameters determine the properties of the solution and usually limit its validity to a certain time period. Such limitations are denoted here as "internal" and include limitations due to (i) the underlying physics/physical model and (ii) numerics. The physics include initial coordinates and velocities of Solar System bodies, treatment of the Moon and asteroids, the Sun's quadrupole moment, and the intrinsic dynamics of the Solar System itself, i.e., its chaotic nature. Numerical issues include solver algorithm, numerical accuracy (e.g., time step), and round-off errors. At present, internal limitations seem to restrict the validity of astronomical solutions to perhaps the past 50 or 60 myr. However, little is currently known about "external" limitations, that is, how do different numerical realizations compare, say, between different investigators using different codes and integrators? Hitherto only two solutions for Earth's eccentricity appear to be used in paleoclimate studies, provided by two different groups that integrated the full Solar System equations over the past >100 myr (Laskar and coworkers and Varadi et al. 2003). In this contribution, I will present results from new Solar System integrations for Earth's eccentricity obtained using the integrator package HNBody (Rauch and Hamilton 2002). I will discuss the various internal limitations listed above within the framework of the present simulations. I will also compare the results to the existing solutions, the details of which are still being sorted out as several simulations are still running at the time of writing.

  10. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    PubMed

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30-m sprints in female soccer players. Seventeen subjects performed seven 30-m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and a laser gun, and running times with an electronic timing device. The main findings were that sprint times increased over the repeated sprint ability test. The main changes in kinematics across the repeated sprints were increased contact time and decreased step frequency, while no change in step length was observed. Within each sprint, step velocity increased with almost every step until the 14th step, which occurred at around 22 m; after this, the velocity was stable until the last step, when it decreased. This within-sprint increase in step velocity was mainly caused by increasing step length and decreasing contact times. It was concluded that the fatigue induced by repeated 30-m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach, in combination with a laser gun and an infrared mat over 30 m, makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to provide more detailed feedback and to better target these changes in kinematics to enhance repeated sprint performance.

  11. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution was computed with an explicit second-order Runge-Kutta scheme for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey only temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for setting the numerical time-integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.
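
    The pseudo-time iteration described above can be sketched in a few lines: within each physical time step, an inner iteration marches the Crank-Nicolson residual to zero. This is a generic sketch, not the paper's solver; the scalar test equation, pseudo-time step, and fixed inner iteration count are illustrative choices.

```python
import numpy as np

def crank_nicolson_step(f, y_old, dt, dtau=0.1, n_inner=200):
    """Solve the implicit Crank-Nicolson equation for one physical time step
    by marching the residual to zero in pseudo-time."""
    y = float(y_old)
    for _ in range(n_inner):
        r = (y - y_old) / dt - 0.5 * (f(y) + f(y_old))  # CN residual
        y -= dtau * r                                   # pseudo-time update
    return y

f = lambda y: -2.0 * y            # linear decay test equation dy/dt = -2y
y, dt = 1.0, 0.5
for n in range(1, 5):
    y = crank_nicolson_step(f, y, dt)
    print(n * dt, y, np.exp(-2.0 * n * dt))  # converged iterate vs exact
```

    When the inner iteration converges, the result satisfies the implicit Crank-Nicolson equation exactly, so the physical time step is limited only by accuracy, not by explicit stability.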

  12. Energy Data Management Manual for the Wastewater Treatment Sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemar, Paul; De Fontaine, Andre

    Energy efficiency has become a higher priority within the wastewater treatment sector, with facility operators and state and local governments ramping up efforts to reduce energy costs and improve environmental performance. Across the country, municipal wastewater treatment plants are estimated to consume more than 30 terawatt hours per year of electricity, which equates to about $2 billion in annual electric costs. Electricity alone can constitute 25% to 40% of a wastewater treatment plant’s annual operating budget and make up a significant portion of a given municipality’s total energy bill. These energy needs are expected to grow over time, driven by population growth and increasingly stringent water quality requirements. The purpose of this document is to describe the benefits of energy data management, explain how it can help drive savings when linked to a strong energy management program, and provide clear, step-by-step guidance to wastewater treatment plants on how to appropriately track energy performance. It covers the basics of energy data management and related concepts and describes different options for key steps, recognizing that a single approach may not work for all agencies. Wherever possible, the document calls out simpler, less time-intensive approaches to help smaller plants with more limited resources measure and track energy performance. Reviews of key, publicly available energy-tracking tools are provided to help organizations select a tool that makes the most sense for them. Finally, this document describes additional steps wastewater treatment plant operators can take to build on their energy data management systems and further accelerate energy savings.

  13. Single-step affinity purification of enzyme biotherapeutics: a platform methodology for accelerated process development.

    PubMed

    Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean

    2014-01-01

    Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.

  14. Endoclip Magnetic Resonance Imaging Screening: A Local Practice Review.

    PubMed

    Accorsi, Fabio; Lalonde, Alain; Leswick, David A

    2018-05-01

    Not all endoscopically placed clips (endoclips) are magnetic resonance imaging (MRI) compatible. At many institutions, endoclip screening is part of the pre-MRI screening process. Our objective was to determine the contribution of each step of this endoclip screening protocol to determining a patient's endoclip status at our institution. A retrospective review of patients' endoscopic histories on general MRI screening forms, for patients scanned during a 40-day period, was performed to assess the percentage of patients that require endoclip screening at our institution. Following this, a prospective evaluation of 614 patients' endoclip screening determined the percentage of these patients ultimately exposed to each step in the protocol (exposure), and the percentage of patients whose endoclip status was determined with reasonable certainty by each step (determination). Exposure and determination values for each step were as follows (exposure, determination): verbal interview (100%, 86%), review of past available imaging (14%, 36%), review of endoscopy report (9%, 57%), and new abdominal radiograph (4%, 96%) or CT (0.2%, 100%) for evaluation of potential endoclips. Only 1 patient did not receive MRI because of screening (an in situ gastrointestinal endoclip was identified). Verbal interview is invaluable to endoclip screening, clearing 86% of patients with minimal monetary and time investment. Conversely, the limited availability of endoscopy reports and relevant past imaging somewhat restricts the determination rates of these steps. New imaging (radiograph or computed tomography) is required <5% of the time and, although costly and associated with patient irradiation, has excellent determination rates (above 96%) when needed. Copyright © 2017 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  15. Method and apparatus for automated assembly

    DOEpatents

    Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.

    1999-01-01

    A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.

  16. Limits of acceptable change and natural resources planning: when is LAC useful, when is it not?

    Treesearch

    David N. Cole; Stephen F. McCool

    1997-01-01

    There are ways to improve the LAC process and its implementational procedures. One significant procedural modification is the addition of a new step. This step — which becomes the first step in the process — involves more explicitly defining goals and desired conditions. For other steps in the process, clarifications of concept and terminology are advanced, as are...

  17. Agreement between pedometer and accelerometer in measuring physical activity in overweight and obese pregnant women.

    PubMed

    Kinnunen, Tarja I; Tennant, Peter W G; McParlin, Catherine; Poston, Lucilla; Robson, Stephen C; Bell, Ruth

    2011-06-27

    Inexpensive, reliable, objective methods are needed to measure physical activity (PA) in large-scale trials. This study compared pedometer step counts with accelerometer data from pregnant women in free-living conditions to assess agreement between these measures. Pregnant women (n = 58) with body mass index ≥25 kg/m(2) at median 13 weeks' gestation wore a GT1M Actigraph accelerometer and a Yamax Digi-Walker CW-701 pedometer for four consecutive days. Spearman rank correlation coefficients were determined between pedometer step counts and various accelerometer measures of PA. Total agreement between accelerometer and pedometer step counts was evaluated by determining the 95% limits of agreement, estimated using a regression-based method. Agreement between the monitors in categorising participants as active or inactive was assessed by determining Kappa. Pedometer step counts correlated moderately (r = 0.36 to 0.54) with most accelerometer measures of PA. Overall step counts recorded by the pedometer and the accelerometer were not significantly different (medians 5961 vs. 5687 steps/day, p = 0.37). However, the 95% limits of agreement ranged from -2690 to 2656 steps/day at the mean step count value (6026 steps/day) and changed substantially over the range of values. Agreement between the monitors in categorising participants as active or inactive varied from moderate to good depending on the criteria adopted. Despite statistically significant correlations and similar median step counts, the overall agreement between pedometer and accelerometer step counts was poor and varied with activity level. Pedometer and accelerometer steps cannot be used interchangeably in overweight and obese pregnant women.
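
    For readers reproducing this kind of analysis, the sketch below computes classic Bland-Altman 95% limits of agreement together with a regression-based variant in which bias and spread vary with the mean, in the spirit of the method used in the study (the synthetic step counts are invented for illustration):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Classic Bland-Altman 95% limits: bias +/- 1.96 SD of differences."""
    d = a - b
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

def regression_based_loa(a, b):
    """Limits that vary with the mean: regress differences on means, then
    model the residual spread as a function of the mean."""
    d, m = a - b, (a + b) / 2.0
    b1, b0 = np.polyfit(m, d, 1)                 # bias as a function of mean
    res = d - (b0 + b1 * m)
    c1, c0 = np.polyfit(m, np.abs(res), 1)       # spread as a function of mean
    sd_m = np.sqrt(np.pi / 2.0) * (c0 + c1 * m)  # E|res| -> SD (normal errors)
    return (b0 + b1 * m) - 1.96 * sd_m, (b0 + b1 * m) + 1.96 * sd_m

rng = np.random.default_rng(0)
acc = rng.uniform(3000, 10000, 58)           # accelerometer steps/day
ped = acc + rng.normal(0, 300 + 0.1 * acc)   # pedometer, level-dependent error
print(limits_of_agreement(ped, acc))
print(regression_based_loa(ped, acc)[0][:3]) # mean-dependent lower limits
```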

  18. Where we stand, where we are moving: Surveying computational techniques for identifying miRNA genes and uncovering their regulatory role.

    PubMed

    Kleftogiannis, Dimitrios; Korfiati, Aigli; Theofilatos, Konstantinos; Likothanassis, Spiros; Tsakalidis, Athanasios; Mavroudi, Seferina

    2013-06-01

    Traditional biology was forced to restate some of its principles when microRNA (miRNA) genes and their regulatory role were first discovered. Typically, miRNAs are small non-coding RNA molecules which have the ability to bind to the 3' untranslated region (UTR) of their mRNA target genes for cleavage or translational repression. Existing experimental techniques for their identification and for the prediction of their target genes share some important limitations, such as low coverage, time-consuming experiments and high-cost reagents. Hence, many computational methods have been proposed for these tasks to overcome these limitations. Recently, many researchers have emphasized the development of computational approaches to predict the participation of miRNA genes in regulatory networks and to analyze their transcription mechanisms. All these approaches have certain advantages and disadvantages, which are described in the present survey. Our work is differentiated from existing review papers by updating the list of methodologies and emphasizing the computational issues that arise from miRNA data analysis. Furthermore, in the present survey, the various miRNA data analysis steps are treated as an integrated procedure whose aim is to uncover the regulatory role and mechanisms of miRNA genes. This integrated view of the miRNA data analysis steps may be extremely useful for all researchers, even if they work on just a single step. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Redefining the lower statistical limit in x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Marschner, M.; Birnbacher, L.; Willner, M.; Chabior, M.; Fehringer, A.; Herzen, J.; Noël, P. B.; Pfeiffer, F.

    2015-03-01

    Phase-contrast x-ray computed tomography (PCCT) is currently being investigated and developed as a potentially very interesting extension of conventional CT, because it promises to provide high soft-tissue contrast for weakly absorbing samples. For data acquisition, several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for lower radiation dose, the photon counts at a single stepping position are very low. In this case, the currently used phase retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. New measurement procedures rely on a linear approximation of the sinusoidal phase-stepping curve around the zero crossings. In this case, only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values; however, typically nearly all pixels are within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. This means that the phase signal can be retrieved even with very low photon counts and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
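
    To make the two-image idea concrete (a generic reading of the linear approximation; the mean intensity I_0, visibility V, and grating period p are introduced here for illustration rather than taken from the paper):

```latex
% Phase-stepping curve at grating position x_g (I_0: mean intensity,
% V: visibility, p: grating period, \varphi: differential phase):
I(x_g) = I_0\left[1 + V\sin\!\left(\tfrac{2\pi x_g}{p} + \varphi\right)\right]
% Two samples at the opposite zero crossings of the reference curve:
I_{1,2} = I_0\,(1 \pm V\sin\varphi)
% hence, in the small-phase regime valid for nearly all pixels:
\varphi \approx \frac{I_1 - I_2}{V\,(I_1 + I_2)}, \qquad |\varphi| \ll 1
```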

  20. Gold Nanoparticle Labels and Heterogeneous Immunoassays: The Case for the Inverted Substrate.

    PubMed

    Crawford, Alexis C; Young, Colin C; Porter, Marc D

    2018-06-15

    This paper examines how the spatial orientation of the capture substrate influences the analytical sensitivity and limits of detection for immunoassays that use gold nanoparticle labels (AuNPs) and rely on diffusion in quiescent solution in the antigen capture and labeling steps. Ideally, the accumulation of both reactants should follow a dependence governed by the rate at which diffusion delivers reactants to the capture surface. In other words, the accumulation of reactants should increase with the square root of the incubation time, i.e., as t^(1/2). The work herein shows, however, that this expectation is only obeyed when the capture substrate is oriented to direct the gravity-induced sedimentation of the AuNP labels away from the substrate. Using an assay for human IgG, the results show that circumventing the sedimentation of the gold nanoparticle labels by substrate inversion preserves the diffusion-limited dependence of the labeling step, reduces nonspecific label adsorption, and improves the estimated detection limit by ~30×. High-density maps of the signal across the two types of substrates also demonstrate that inversion in the labeling step results in a more uniform distribution of AuNP labels across the surface, which translates to greater measurement reproducibility. These results, which are supported by model simulations via the Mason-Weaver sedimentation-diffusion equation, and their potential implications for other nanoparticle labels and related materials in diagnostic tests and other applications, are briefly discussed.
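
    Two standard results underpin the abstract's quantitative claims (textbook forms, restated here for context rather than quoted from the paper): the diffusion-limited accumulation on an absorbing plane grows as t^(1/2), while the competing sedimentation of labels obeys the Mason-Weaver equation:

```latex
% Diffusion-limited accumulation on an absorbing plane (c_0: bulk concentration):
\Gamma(t) = 2\,c_0\sqrt{\frac{D\,t}{\pi}} \;\propto\; t^{1/2}
% Mason-Weaver sedimentation-diffusion equation (z: height, s: sedimentation
% coefficient, g: gravitational acceleration, D: diffusion coefficient):
\frac{\partial c}{\partial t} = D\,\frac{\partial^{2} c}{\partial z^{2}}
  + s\,g\,\frac{\partial c}{\partial z}
```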

  1. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm. [considering transonic flow

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution becomes independent of the time step, and much larger time steps can be used stably. To accelerate iterative convergence, large time steps and a cyclic sequence of time steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.

  2. Determination of chloropropanols in foods by one-step extraction and derivatization using pressurized liquid extraction and gas chromatography-mass spectrometry.

    PubMed

    Racamonde, I; González, P; Lorenzo, R A; Carro, A M

    2011-09-28

    3-Chloropropane-1,2-diol (3-MCPD) and 1,3-dichloro-2-propanol (1,3-DCP) were determined for the first time in bakery foods using pressurized liquid extraction (PLE) combined with in situ derivatization and GC-MS analysis. This one-step protocol uses N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) as the silylation reagent. Initially, a screening experimental design was applied to evaluate the effects of the variables potentially affecting the extraction process, namely extraction time (min) and temperature (°C), number of cycles, dispersant reagent (diatomaceous earth in powder form and as particulate matter with high pore volume, Extrelut NT) and percent flush volume of ethyl acetate (%). To reduce the time of analysis and improve sensitivity, derivatization of the compounds was performed in the extraction cell. Conditions for the in situ derivatization of analytes using PLE, such as the volume of BSTFA, temperature and time, were optimized by a screening design followed by a Doehlert response surface design. The effect of the in-cell dispersants/adsorbents diatomaceous earth, Florisil and anhydrous sodium sulfate was investigated using a Box-Behnken design. Under the final best conditions, 1 g of sample dispersed with 0.1 g of anhydrous sodium sulfate and 2.5 g of diatomaceous earth was extracted with ethyl acetate, using 1 g of Florisil as clean-up adsorbent and 70 μL of BSTFA, for 3 min at 70 °C. Under the optimum conditions, the calibration curves showed good linearity (R(2) > 0.9994) and precision (relative standard deviation, RSD ≤ 2.4%) within the tested ranges. The limits of quantification for 1,3-DCP and 3-MCPD, 1.6 and 1.7 μg kg(-1), respectively, are far below the limits established in European and American legislation. The accuracy, precision, linearity, and limits of quantification make this analytical method suitable for routine control. The method was applied to the analysis of several toasted bread, snack, cookie and cereal samples, none of which contained chloropropanols at concentrations above the legislated levels. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Abundance of ammonia oxidizing bacteria and archaea under long-term maize cropping systems.

    USDA-ARS?s Scientific Manuscript database

    Nitrification involves the oxidation of ammonium and is an important component of the overall N cycle. Nitrification occurs in two steps; first by oxidizing ammonium to nitrite, and then to nitrate. The first step is often the rate limiting step. Until recently ammonia-oxidizing bacteria were though...

  4. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    PubMed

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, and contact and flight times of each step, were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered at each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step differed between 'fast' and 'slow' sub-groups (η² ≥ 0.22); nevertheless, overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective conditions. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities for direct feedback.

  5. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
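
    The coupling between step execution time and recovery effort can be illustrated with an even simpler calculation (a toy inverted pendulum with invented parameters, not the paper's pendulum-spring model): the longer the step takes, the further the body has fallen by foot contact, and the more demanding recovery becomes.

```python
import math

# Toy inverted pendulum: theta'' = (g/L) sin(theta), integrated with a
# small explicit Euler step; the angle at foot contact grows with step time.
g, L = 9.81, 1.0                              # m/s^2, m (illustrative)

def angle_at_contact(t_step, omega0, theta0=0.05, dt=1e-3):
    th, om = theta0, omega0
    for _ in range(int(t_step / dt)):
        om += (g / L) * math.sin(th) * dt
        th += om * dt
    return th

for omega0 in (0.2, 0.4, 0.6):                # perturbation strength, rad/s
    print(omega0, [round(angle_at_contact(t, omega0), 3)
                   for t in (0.3, 0.5, 0.7)]) # larger angle for longer steps
```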

  6. The General Alcoholics Anonymous Tools of Recovery: The Adoption of 12-Step Practices and Beliefs

    PubMed Central

    Greenfield, Brenna L.; Tonigan, J. Scott

    2013-01-01

    Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step-work have received minimal attention and even less is known about how step-work predicts later substance use. The current study (1) compared endorsements of step-work on a face-valid, or direct, measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step-work, the General Alcoholics Anonymous Tools of Recovery (GAATOR), (2) evaluated the underlying factor structure of the GAATOR and changes in step-work over time, (3) examined changes in the endorsement of step-work over time, and (4) investigated how, if at all, 12-step-work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step-work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR, comprising Behavioral Step-Work and Spiritual Step-Work. Behavioral Step-Work did not change over time but was predicted by having a sponsor, while Spiritual Step-Work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral Step-Work did not prospectively predict substance use. In contrast, Spiritual Step-Work predicted percent days abstinent, an effect that is consistent with recent work on the mediating effects of spiritual growth, AA, and increased abstinence. Behavioral and Spiritual Step-Work appear to be conceptually distinct components of step-work that have distinct predictors and unique impacts on outcomes. PMID:22867293

  7. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    DOT National Transportation Integrated Search

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  8. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
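
    The mechanism is easy to reproduce outside EPANET. In the toy plug-flow calculation below (invented numbers; not EPANET's actual routing algorithm), outlet mass is estimated by sampling the outlet concentration once per water-quality step, and the estimate converges to the true transported mass only as the step shrinks:

```python
import numpy as np

# A 100 s pulse travels a pipe with a 300 s travel time; ideal plug flow.
travel, pulse_len, flow = 300.0, 100.0, 1.0             # s, s, L/s (hypothetical)
c_in = lambda t: 1.0 if 0.0 <= t < pulse_len else 0.0   # inlet conc., mg/L
c_out = lambda t: c_in(t - travel)                      # outlet conc., mg/L

def outlet_mass(dt, t_end=1000.0):
    """Time-driven estimate: outlet concentration sampled once per WQ step."""
    return sum(c_out(t) * flow * dt for t in np.arange(0.0, t_end, dt))

true_mass = pulse_len * flow * 1.0                # 100 mg actually transported
for dt in (10.0, 60.0, 300.0):
    print(dt, outlet_mass(dt), true_mass)         # 100, 120, 300 vs 100 mg
```

    With the longest step the pulse is counted three times over; an event-driven scheme that integrates every parcel actually crossing the outlet would recover the true mass regardless of the step.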

  9. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
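
    The double-sweep procedure for the small tridiagonal systems is the classical Thomas algorithm; a generic sketch follows (the standard algorithm, not code from the model, with illustrative diffusion coefficients):

```python
import numpy as np

def thomas(a, b, c, d):
    """Double-sweep solve of a tridiagonal system. a: sub-diagonal (a[0]
    unused), b: main diagonal, c: super-diagonal (c[-1] unused), d: RHS."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back-substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Implicit vertical diffusion on 5 layers (illustrative coefficients).
n, r = 5, 0.5
a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
print(thomas(a, b, c, np.ones(n)))
```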

  10. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and second-order central finite differences in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term, incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the globally coupled linear system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, based on the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation has to be solved, and preserves second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.

  11. Viking Afterbody Heating Computations and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.

    2006-01-01

    Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm2 for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm2, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8- species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.

  13. An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.

    PubMed

    Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas

    2018-01-01

    The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real time for noninvasive ICP (nICP) estimation, based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm, along with the required preprocessing steps, was implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.

  14. Chemical reactions at aqueous interfaces

    NASA Astrophysics Data System (ADS)

    Vecitis, Chad David

    2009-12-01

    Interfaces or phase boundaries are a unique chemical environment relative to individual gas, liquid, or solid phases. Interfacial reaction mechanisms and kinetics are often at variance with homogeneous chemistry due to mass transfer, molecular orientation, and catalytic effects. Aqueous interfaces are a common subject of environmental science and engineering research, and three environmentally relevant aqueous interfaces are investigated in this thesis: 1) fluorochemical sonochemistry (bubble-water), 2) aqueous aerosol ozonation (gas-water droplet), and 3) electrolytic hydrogen production and simultaneous organic oxidation (water-metal/semiconductor). Direct interfacial analysis under environmentally relevant conditions is difficult, since most surface-specific techniques require relatively `extreme' conditions. Thus, the experimental investigations here focus on the development of chemical reactors and analytical techniques for the completion of time/concentration-dependent measurements of reactants and their products. Kinetic modeling, estimations, and/or correlations were used to extract information on interfacially relevant processes. We found that interfacial chemistry was determined to be the rate-limiting step to a subsequent series of relatively fast homogeneous reactions, for example: 1) Pyrolytic cleavage of the ionic headgroup of perfluorooctanesulfonate (PFOS) and perfluorooctanoate (PFOA) adsorbed to cavitating bubble-water interfaces during sonolysis was the rate-determining step in transformation to their inorganic constituents carbon monoxide, carbon dioxide, and fluoride; 2) ozone oxidation of aqueous iodide to hypoiodous acid at the aerosol-gas interface is the rate-determining step in the oxidation of bromide and chloride to dihalogens; 3) Electrolytic oxidation of anodic titanol surface groups is rate-limiting for the overall oxidation of organics by the dichloride radical. We also found chemistry unique to the interface, for example: 1) Adsorption of dilute PFOS(aq) and PFOA(aq) to acoustically cavitating bubble interfaces was greater than equilibrium expectations due to high-velocity bubble radial oscillations; 2) Relative ozone oxidation kinetics of aqueous iodide, sulfite, and thiosulfate were at variance with previously reported bulk aqueous kinetics; 3) Organics that directly chelated with the anode surface were oxidized by direct electron transfer, resulting in immediate carbon dioxide production but slower overall oxidation kinetics. Chemical reactions at aqueous interfaces can be the rate-limiting step of a reaction network and often display novel mechanisms and kinetics as compared to homogeneous chemistry.

  15. Physical Activity Assessment Between Consumer- and Research-Grade Accelerometers: A Comparative Study in Free-Living Conditions.

    PubMed

    Dominick, Gregory M; Winfree, Kyle N; Pohlig, Ryan T; Papas, Mia A

    2016-09-19

    Wearable activity monitors such as Fitbit enable users to track various attributes of their physical activity (PA) over time and have the potential to be used in research to promote and measure PA behavior. However, the measurement accuracy of Fitbit in absolute free-living conditions is largely unknown. To examine the measurement congruence between Fitbit Flex and ActiGraph GT3X for quantifying steps, metabolic equivalent tasks (METs), and proportion of time in sedentary activity and light-, moderate-, and vigorous-intensity PA in healthy adults in free-living conditions. A convenience sample of 19 participants (4 men and 15 women), aged 18-37 years, concurrently wore the Fitbit Flex (wrist) and ActiGraph GT3X (waist) for 1- or 2-week observation periods (n=3 and n=16, respectively) that included self-reported bouts of daily exercise. Data were examined for daily activity, averaged over 14 days and for minutes of reported exercise. Average day-level data included steps, METs, and proportion of time in different intensity levels. Minute-level data included steps, METs, and mean intensity score (0 = sedentary, 3 = vigorous) for overall reported exercise bouts (N=120) and by exercise type (walking, n=16; run or sports, n=44; cardio machine, n=20). Measures of steps were similar between devices for average day- and minute-level observations (all P values > .05). Fitbit significantly overestimated METs for average daily activity, for overall minutes of reported exercise bouts, and for walking and run or sports exercises (mean difference 0.70, 1.80, 3.16, and 2.00 METs, respectively; all P values < .001). For average daily activity, Fitbit significantly underestimated the proportion of time in sedentary and light intensity by 20% and 34%, respectively, and overestimated time by 3% in both moderate and vigorous intensity (all P values < .001). Mean intensity scores were not different for overall minutes of exercise or for run or sports and cardio-machine exercises (all P values > .05). Fitbit Flex provides accurate measures of steps for daily activity and minutes of reported exercise, regardless of exercise type. Although the proportion of time in different intensity levels varied between devices, examining the mean intensity score for minute-level bouts across different exercise types enabled interdevice comparisons that revealed similar measures of exercise intensity. Fitbit Flex is shown to have measurement limitations that may affect its potential utility and validity for measuring PA attributes in free-living conditions.

  16. Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices

    NASA Astrophysics Data System (ADS)

    Vishnepolsky, Rachel

    A random walk on a discrete group satisfies a local limit theorem with power-law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C·ρ^n·n^(-α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power-law exponent. Given two groups that obey universal local limit theorems, it is not known whether their cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the cartesian product of a nonamenable group whose Cayley graph is a tree, and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.
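
    For intuition, the simplest concrete instance is the simple random walk on the integer lattice Z, where ρ = 1 and α = 1/2; the power law can be checked directly from the exact return probabilities (a generic illustration, not one of the walks treated in the paper):

```python
import numpy as np
from math import lgamma, log

# Simple random walk on Z: P(S_{2n} = 0) = C(2n, n) / 4^n ~ (pi n)^(-1/2),
# i.e. rho = 1 and alpha = 1/2. Work with log-probabilities to avoid overflow.
def log_return_prob(n):
    return lgamma(2 * n + 1) - 2 * lgamma(n + 1) - 2 * n * log(2.0)

ns = np.array([10, 100, 1000, 10000])
logp = np.array([log_return_prob(n) for n in ns])
slope = np.polyfit(np.log(ns), logp, 1)[0]
print(slope)   # close to -0.5: the power-law exponent alpha for Z
```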

  17. Robust estimation-free prescribed performance back-stepping control of air-breathing hypersonic vehicles without affine models

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi

    2016-11-01

    This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing prescribed performance for the tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate. By prescribed performance, we mean that the tracking error is limited to a predefined, arbitrarily small residual set, with a convergence rate no less than a certain constant and a maximum overshoot less than a given value. Unlike traditional back-stepping designs, no affine model is needed in this paper. Moreover, both the tedious analytic and numerical computations of the time derivatives of the virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller has much lower computational cost, while successfully eliminating the potential problem of parameter drifting. Owing to its independence from an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.
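
    In the standard prescribed-performance construction (the generic formulation; the exact functions used in the paper may differ), each tracking error e(t) is confined inside a decaying envelope:

```latex
% Performance function (rho_0: initial bound, rho_inf: residual-set size,
% l: lower bound on the convergence rate):
\rho(t) = (\rho_0 - \rho_\infty)\,e^{-l t} + \rho_\infty
% Prescribed-performance constraint (delta in [0,1] caps the overshoot):
-\delta\,\rho(t) \;<\; e(t) \;<\; \rho(t), \qquad t \ge 0
```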

  18. Forecasting seasonal outbreaks of influenza.

    PubMed

    Shaman, Jeffrey; Karspeck, Alicia

    2012-12-11

    Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
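
    The assimilation step at the heart of such a framework can be sketched generically (a perturbed-observation ensemble Kalman update for a single observed variable; the paper's filter, state variables and error model may differ):

```python
import numpy as np

def ensemble_update(x_ens, y, r, rng):
    """Perturbed-observation ensemble Kalman update of an observed variable.
    x_ens: ensemble of modeled infection rates; y: web-based estimate;
    r: observation error variance."""
    var = x_ens.var(ddof=1)
    gain = var / (var + r)                        # scalar Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(r), x_ens.size)
    return x_ens + gain * (y_pert - x_ens)

rng = np.random.default_rng(7)
ens = rng.normal(0.02, 0.01, 100)   # prior ensemble of weekly infection rates
post = ensemble_update(ens, 0.035, 0.005**2, rng)
print(ens.mean(), post.mean())      # ensemble mean pulled toward the estimate
```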

  20. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    NASA Astrophysics Data System (ADS)

    Molnar, P.

    2012-04-01

    Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse lead to complex nonlinear threshold-driven dynamics in sediment transport, which have been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics of sediment input. The hypothesis is that this state, in which sediment-transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focusing on the magnification of intermittency and variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and in the number of grains stored in the system at any given time is quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus of the discussion will be to demonstrate how, even in such a simple model, the processes of grain blocking and step collapse may affect sediment transport rates to the point that certain changes in input are no longer visible, along the lines of the "shredding of the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
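
    A conceptual sketch of such a 1-d limited non-local sandpile (illustrative rules and parameters, not the authors' exact model) shows how a steady grain input becomes intermittent, pulsed output:

```python
import numpy as np

rng = np.random.default_rng(1)

def step_pool_sandpile(n_cells=100, n_grains=20000, h_crit=3):
    """1-d limited non-local sandpile: grains enter at cell 0; a cell whose
    height exceeds h_crit 'collapses' and sends one grain a random distance
    downstream; grains passing the last cell leave the reach."""
    h = np.zeros(n_cells, dtype=int)
    out = np.zeros(n_grains, dtype=int)      # grains exiting per injection
    for g in range(n_grains):
        h[0] += 1                            # steady input: one grain per step
        active = [0]
        while active:
            i = active.pop()
            while h[i] > h_crit:             # threshold rule: step collapse
                h[i] -= 1
                j = i + rng.geometric(0.5)   # random travel distance >= 1
                if j >= n_cells:
                    out[g] += 1              # grain exits at the outlet
                else:
                    h[j] += 1
                    active.append(j)
    return out

out = step_pool_sandpile()
print(out.mean(), out.max())  # mean output ~1 grain per step, but in bursts
```

    Even with perfectly steady input, the simulated output series is intermittent and pulsed, which is the sense in which storage and release by steps can shred an input signal.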

  1. Use of lean and six sigma methodology to improve operating room efficiency in a high-volume tertiary-care academic medical center.

    PubMed

    Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude

    2011-07-01

    Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases; efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process, from the decision for surgery to discharge. Each process step was analyzed in 3 domains, i.e., personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and a reduction in the number of cases running past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin per OR per day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Are Pressure Time Integral and Cumulative Plantar Stress Related to First Metatarsophalangeal Joint Pain? Results From a Community-Based Study.

    PubMed

    Rao, Smita; Douglas Gross, K; Niu, Jingbo; Nevitt, Michael C; Lewis, Cora E; Torner, James C; Hietpas, Jean; Felson, David; Hillstrom, Howard J

    2016-09-01

    To examine the relationship between plantar stress over a step, cumulative plantar stress over a day, and first metatarsophalangeal (MTP) joint pain among older adults. Plantar stress and first MTP pain were assessed within the Multicenter Osteoarthritis Study. All included participants were asked if they had pain, aching, or stiffness at the first MTP joint on most days for the past 30 days. Pressure time integral (PTI) was quantified as participants walked on a pedobarograph, and mean steps per day were obtained using an accelerometer. Cumulative plantar stress was calculated as the product of regional PTI and mean steps per day. Quintiles of hallucal and second metatarsal PTI and cumulative plantar stress were generated. The relationship between predictors and the odds ratio of first MTP pain was assessed using a logistic regression model. Feet in the quintile with the lowest hallux PTI had 2.14 times increased odds of first MTP pain (95% confidence interval [95% CI] 1.42-3.25, P < 0.01). Feet in the quintile with the lowest second metatarsal PTI had 1.50 times increased odds of first MTP pain (95% CI 1.01-2.23, P = 0.042). Cumulative plantar stress was unassociated with first MTP pain. Lower PTI was modestly associated with increased prevalence of frequent first MTP pain at both the hallux and second metatarsal. Lower plantar loading may indicate the presence of an antalgic gait strategy and may reflect an attempt at pain avoidance. The lack of association with cumulative plantar stress may suggest that patients do not limit their walking as a pain-avoidance mechanism. © 2016, American College of Rheumatology.
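
    A small sketch of the exposure construction and model described above (the input file and column names are hypothetical; the abstract does not specify this exact coding):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("plantar_data.csv")   # hypothetical per-foot dataset

      # Cumulative plantar stress = regional pressure-time integral per step
      # multiplied by mean steps per day, as defined in the abstract.
      df["cum_stress_hallux"] = df["pti_hallux"] * df["steps_per_day"]

      # Quintiles of PTI and of cumulative stress as categorical predictors.
      df["pti_q"] = pd.qcut(df["pti_hallux"], 5, labels=False)
      df["cum_q"] = pd.qcut(df["cum_stress_hallux"], 5, labels=False)

      # Logistic regression for the odds of first-MTP pain by quintile.
      fit = smf.logit("mtp_pain ~ C(pti_q) + C(cum_q)", data=df).fit()
      print(np.exp(fit.params))              # odds ratios relative to baseline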

  3. Are Pressure Time Integral and Cumulative Plantar Stress Related to First Metatarsophalangeal Joint Pain? Results From a Community-Based Study

    PubMed Central

    RAO, SMITA; GROSS, K. DOUGLAS; NIU, JINGBO; NEVITT, MICHAEL C.; LEWIS, CORA E.; TORNER, JAMES C.; HIETPAS, JEAN; FELSON, DAVID; HILLSTROM, HOWARD J.

    2017-01-01

    Objective To examine the relationship between plantar stress over a step, cumulative plantar stress over a day, and first metatarsophalangeal (MTP) joint pain among older adults. Methods Plantar stress and first MTP pain were assessed within the Multicenter Osteoarthritis Study. All included participants were asked if they had pain, aching, or stiffness at the first MTP joint on most days for the past 30 days. Pressure time integral (PTI) was quantified as participants walked on a pedobarograph, and mean steps per day were obtained using an accelerometer. Cumulative plantar stress was calculated as the product of regional PTI and mean steps per day. Quintiles of hallucal and second metatarsal PTI and cumulative plantar stress were generated. The relationship between predictors and the odds ratio of first MTP pain was assessed using a logistic regression model. Results Feet in the quintile with the lowest hallux PTI had 2.14 times increased odds of first MTP pain (95% confidence interval [95% CI] 1.42–3.25, P < 0.01). Feet in the quintile with the lowest second metatarsal PTI had 1.50 times increased odds of first MTP pain (95% CI 1.01–2.23, P = 0.042). Cumulative plantar stress was unassociated with first MTP pain. Conclusion Lower PTI was modestly associated with increased prevalence of frequent first MTP pain at both the hallux and second metatarsal. Lower plantar loading may indicate the presence of an antalgic gait strategy and may reflect an attempt at pain avoidance. The lack of association with cumulative plantar stress may suggest that patients do not limit their walking as a pain-avoidance mechanism. PMID:26713755

  4. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
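
    For context, here is a minimal sketch of the finite-state matrix method the abstract contrasts with its Laplace-convolution approach (a linear BDP with assumed rates; truncating the state space is exactly what the paper's technique avoids):

      import numpy as np
      from scipy.linalg import expm

      N = 50                      # truncated state space {0, ..., N}
      lam, mu = 0.5, 0.3          # per-particle birth and death rates (assumed)

      # Tridiagonal generator Q of the birth-death chain.
      Q = np.zeros((N + 1, N + 1))
      for n in range(N + 1):
          birth = lam * n if n < N else 0.0
          death = mu * n
          Q[n, n] = -(birth + death)
          if n < N:
              Q[n, n + 1] = birth
          if n > 0:
              Q[n, n - 1] = death

      # Transition probabilities over an observation interval of length t:
      t = 2.0
      P = expm(Q * t)             # P[i, j] = Pr(X_t = j | X_0 = i)
      print(P[5, :10])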

  5. Development of a highly sensitive real-time nested RT-PCR assay in a single closed tube for detection of enterovirus 71 in hand, foot, and mouth disease.

    PubMed

    Niu, Peihua; Qi, Shunxiang; Yu, Benzhang; Zhang, Chen; Wang, Ji; Li, Qi; Ma, Xuejun

    2016-11-01

    Enterovirus 71 (EV71) is one of the major causative agents of outbreaks of hand, foot, and mouth disease (HFMD). A commercial TaqMan probe-based real-time PCR assay has been widely used for the differential detection of EV71 despite its relatively high cost and failure to detect samples with a low viral load (Ct value > 35). In this study, a highly sensitive real-time nested RT-PCR (RTN RT-PCR) assay in a single closed tube for detection of EV71 in HFMD was developed. The sensitivity and specificity of this assay were evaluated using a reference EV71 stock and a panel of controls consisting of coxsackievirus A16 (CVA16) and common respiratory viruses, respectively. The clinical performance of this assay was evaluated and compared with those of a commercial TaqMan probe-based real-time PCR (qRT-PCR) assay and a traditional two-step nested RT-PCR assay. The limit of detection for the RTN RT-PCR assay was 0.01 TCID50/ml, with a Ct value of 38.3, which was the same as that of the traditional two-step nested RT-PCR assay and approximately tenfold lower than that of the qRT-PCR assay. When testing the reference strain EV71, this assay showed favorable detection reproducibility and no obvious cross-reactivity. The testing results of 100 clinical throat swabs from HFMD-suspected patients revealed that 41 samples were positive for EV71 by both RTN RT-PCR and traditional two-step nested RT-PCR assays, whereas only 29 were EV71 positive by qRT-PCR assay.

  6. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.

  7. Real-time PCR and its application to mumps rapid diagnosis.

    PubMed

    Jin, L; Feng, Y; Parry, R; Cui, A; Lu, Y

    2007-11-01

    A real-time polymerase chain reaction assay was initially developed in China to detect the mumps genome. The primers and TaqMan-MGB probe were selected from regions of the hemagglutinin gene of mumps virus. The primers and probe for the real-time PCR were evaluated by laboratories in both China and the UK using three different instruments, the LightCycler (Roche), the MJ DNA Engine Opticon 2 (BIO-RAD) and the TaqMan (ABI Prism), on different samples. The reaction was performed with either a one-step (China) or two-step (UK) process. The sensitivity (10 copies) was estimated using a serial dilution of constructed mumps-plasmid DNA, and a linear standard curve was obtained between 10 and 10⁷ DNA copies/reaction, which can be used to quantify viral loads. The detection limit on cell culture-grown virus was approximately 2 pfu/ml with a two-step assay on the TaqMan, which was equivalent to the sensitivity of the nested PCR routinely used in the UK. The specificity was proved by testing a range of respiratory viruses and several genotypes of mumps strains. The concentrations of primers and probe are 22 pmol and 6.25 or 7 pmol, respectively, for a 25 µl reaction. The assay took 3 hr from viral RNA extraction to completed detection using any of the three instruments. Three hundred forty-one clinical specimens (35 in China and 306 in the UK) were tested; the results show that this real-time PCR assay is suitable for rapid and accurate detection of mumps virus RNA in various types of clinical specimens. (c) 2007 Wiley-Liss, Inc.
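
    A small sketch of how such a standard curve is used for quantification (the Ct values, slope, and intercept below are invented for illustration; only the 10 to 10⁷ copies/reaction range comes from the abstract):

      import numpy as np

      # Serial dilution of the mumps plasmid standard.
      copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
      ct = np.array([36.1, 32.8, 29.4, 26.0, 22.7, 19.3, 16.0])  # hypothetical

      # Linear standard curve: Ct = slope * log10(copies) + intercept.
      slope, intercept = np.polyfit(np.log10(copies), ct, 1)
      efficiency = 10 ** (-1 / slope) - 1        # amplification efficiency

      def copies_from_ct(ct_obs):
          """Invert the standard curve to estimate copies/reaction."""
          return 10 ** ((ct_obs - intercept) / slope)

      print(f"efficiency = {efficiency:.1%}; Ct 30 -> {copies_from_ct(30):.0f} copies")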

  8. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.
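
    A toy model of the fly-scan measurement described above: each recorded pattern is an incoherent sum of far-field intensities from the probe positions swept during one exposure (the Gaussian probe, toy phase object, and sweep length are illustrative assumptions):

      import numpy as np

      n = 128
      y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
      probe = np.exp(-(x**2 + y**2) / (2 * 6.0**2))        # assumed Gaussian probe
      obj = np.exp(1j * 0.5 * np.sin(2 * np.pi * x / 17))  # toy phase object

      def farfield_intensity(exit_wave):
          return np.abs(np.fft.fftshift(np.fft.fft2(exit_wave)))**2

      # One fly-scan exposure: intensities integrate while the sample moves
      # through several sub-positions (here, pixel shifts along x).
      fly_pattern = np.zeros((n, n))
      for dx in range(0, 10, 2):
          fly_pattern += farfield_intensity(probe * np.roll(obj, dx, axis=1))

    Reconstructing such blurred exposures with several mutually incoherent probe modes is the multiple-probe-mode strategy the abstract refers to.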

  9. Stress history and fracture pattern in fault-related folds based on limit analysis: application to the Sub-Andean thrust belt of Bolivia

    NASA Astrophysics Data System (ADS)

    Barbe, Charlotte; Leroy, Yves; Ben Miloud, Camille

    2017-04-01

    A methodology is proposed to construct the stress history of a complex fault-related fold in which the deformation mechanisms are essentially frictional. To illustrate the approach, four steps of the deformation of an initially horizontally layered sand/silicone laboratory experiment (Driehaus et al., J. of Struc. Geol., 65, 2014) are analysed with the kinematic approach of limit analysis (LA). The stress, conjugate to the virtual velocity gradient in the sense of mechanical power, is a proxy for the true statically admissible stress field which prevailed over the structure. The material properties, friction angles and cohesion, including their time evolution, are selected such that the deformation pattern predicted by the LA is consistent with the two main thrusting events, the first forward and the second backward once the layers have sufficiently rotated. The fractures associated with the stress field determined at each step are convected onto the present-day configuration to define the complete pattern which should be observed. The end results are presented along virtual vertical wells and could be used within the oil industry at an early phase of exploration to prepare drilling operations.

  10. Rapid analysis of charge variants of monoclonal antibodies using non-linear salt gradient in cation-exchange high performance liquid chromatography.

    PubMed

    Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S

    2015-08-07

    A method is proposed for rapid development of a short, analytical cation-exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include pH, shape of the elution gradient, and length of the column. It is found that the most important parameter for development of a shorter method is the choice of the shape of the elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear sigmoidal-shape gradient for analysis of charge heterogeneity in two different monoclonal antibody products. The use of this gradient not only decreases the run time of the method to 4 min, against the conventional method that takes more than 40 min, but also retains the resolution. Superiority of the phosphate gradient over the sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool has been suggested. Copyright © 2015 Elsevier B.V. All rights reserved.
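
    A short sketch contrasting a linear gradient with a non-linear sigmoidal gradient of the kind advocated above (the logistic form, midpoint, and steepness are illustrative assumptions; the abstract does not give the exact gradient equation):

      import numpy as np

      t = np.linspace(0.0, 4.0, 200)      # run time, min (the 4-min method)
      B_final = 100.0                     # final % high-salt buffer B

      linear = B_final * t / t[-1]
      t_mid, k = 1.5, 4.0                 # assumed sigmoid midpoint and steepness
      sigmoidal = B_final / (1.0 + np.exp(-k * (t - t_mid)))

      # The steep rise around t_mid sweeps closely spaced charge variants
      # through quickly, while the flat segments preserve resolution.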

  11. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE PAGES

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si; ...

    2015-02-23

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  12. In Situ Observation of Calcium Aluminate Inclusions Dissolution into Steelmaking Slag

    NASA Astrophysics Data System (ADS)

    Miao, Keyan; Haas, Alyssa; Sharma, Mukesh; Mu, Wangzhong; Dogan, Neslihan

    2018-06-01

    The dissolution rate of calcium aluminate inclusions in CaO-SiO2-Al2O3 slags has been studied using confocal scanning laser microscopy (CSLM) at elevated temperatures: 1773 K, 1823 K, and 1873 K (1500 °C, 1550 °C, and 1600 °C). The inclusion particles used in this experimental work were produced in our laboratory and their production technique is explained in detail. Even though the particles had irregular shapes, there was no rotation observed. Further, the total dissolution time decreased with increasing temperature and decreasing SiO2 content in the slag. The rate-limiting steps are discussed in terms of shrinking core models and diffusion into a stagnant fluid model. It is shown that the rate-limiting step for dissolution is mass transfer in the slag at 1823 K and 1873 K (1550 °C and 1600 °C). Further investigations are required to determine the dissolution mechanism at 1773 K (1500 °C). The calculated diffusion coefficients were inversely proportional to the slag viscosity and the obtained values for the systems studied ranged between 5.64 × 10⁻¹² and 5.8 × 10⁻¹⁰ m²/s.
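
    A back-of-the-envelope sketch of the mass-transfer-limited shrinking-particle estimate behind the stated rate-limiting step (all property values below are assumed for illustration, not taken from the paper):

      r0 = 25e-6     # initial inclusion radius, m (assumed)
      rho = 3.0e3    # inclusion density, kg/m^3 (assumed)
      k_m = 1.0e-6   # mass-transfer coefficient in the slag, m/s (assumed)
      dC = 50.0      # concentration driving force in the slag, kg/m^3 (assumed)

      # With mass transfer in the slag rate-limiting, dr/dt = -k_m * dC / rho,
      # so the radius shrinks linearly in time and the total dissolution time is:
      t_total = rho * r0 / (k_m * dC)
      print(f"total dissolution time ~ {t_total:.0f} s")

      # A larger k_m (hotter, less viscous slag) shortens t_total, consistent
      # with the reported temperature and SiO2 trends.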

  13. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    2015-01-01

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  14. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging.

    PubMed

    Deng, Junjing; Nashed, Youssef S G; Chen, Si; Phillips, Nicholas W; Peterka, Tom; Ross, Rob; Vogt, Stefan; Jacobsen, Chris; Vine, David J

    2015-03-09

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  15. Phase-field modeling of two-dimensional crystal growth with anisotropic diffusion.

    PubMed

    Meca, Esteban; Shenoy, Vivek B; Lowengrub, John

    2013-11-01

    In the present article, we introduce a phase-field model for thin-film growth with anisotropic step energy, attachment kinetics, and diffusion, with second-order (thin-interface) corrections. We are mainly interested in the limit in which kinetic anisotropy dominates, and hence we study how the expected shape of a crystallite, which in the long-time limit is the kinetic Wulff shape, is modified by anisotropic diffusion. We present results that prove that anisotropic diffusion plays an important, counterintuitive role in the evolving crystal shape, and we add second-order corrections to the model that provide a significant increase in accuracy for small supersaturations. We also study the effect of different crystal symmetries and discuss the influence of the deposition rate.
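
    As a toy illustration of anisotropic diffusion of the kind such phase-field models must integrate, here is an explicit 2-d update with its stability-limited time step (the grid, coefficients, and initial field are assumptions; this is not the paper's model):

      import numpy as np

      n, dx = 128, 1.0
      Dx, Dy = 1.0, 0.2                     # assumed anisotropic diffusivities
      dt = 0.9 * dx**2 / (2 * (Dx + Dy))    # explicit stability limit on dt
      c = np.random.default_rng(1).random((n, n))   # initial field

      def step(c):
          """One forward-Euler step of anisotropic diffusion, periodic BCs."""
          d2x = (np.roll(c, 1, axis=1) - 2 * c + np.roll(c, -1, axis=1)) / dx**2
          d2y = (np.roll(c, 1, axis=0) - 2 * c + np.roll(c, -1, axis=0)) / dx**2
          return c + dt * (Dx * d2x + Dy * d2y)

      for _ in range(500):
          c = step(c)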

  16. A Novel Selective Deep Eutectic Solvent Extraction Method for Versatile Determination of Copper in Sediment Samples by ICP-OES.

    PubMed

    Bağda, Esra; Altundağ, Huseyin; Tüzen, Mustafa; Soylak, Mustafa

    2017-08-01

    In the present study, a simple, single-step deep eutectic solvent (DES) extraction was developed for selective extraction of copper from sediment samples. The optimization of all experimental parameters, e.g., DES type, sample/DES ratio, contact time, and temperature, was performed using BCR-280 R (lake sediment certified reference material). The limit of detection (LOD) and the limit of quantification (LOQ) were found to be 1.2 and 3.97 µg L⁻¹, respectively. The RSD of the procedure was 7.5%. The proposed extraction method was applied to river and lake sediments sampled from Serpincik, Çeltek, and Kızılırmak (Fadl and Tecer regions of the river), Sivas, Turkey.
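
    A brief sketch of the calibration-based LOD/LOQ calculation (the 3.3σ/S and 10σ/S convention is an assumption about how such figures are obtained; the calibration data below are invented):

      import numpy as np

      conc = np.array([5, 10, 25, 50, 100.0])          # standards, µg/L (hypothetical)
      signal = np.array([1.1, 2.0, 5.2, 10.1, 20.3])   # ICP-OES intensity (hypothetical)

      slope, intercept = np.polyfit(conc, signal, 1)
      resid = signal - (slope * conc + intercept)
      sigma = resid.std(ddof=2)            # residual standard deviation (n - 2 dof)

      lod = 3.3 * sigma / slope            # limit of detection
      loq = 10.0 * sigma / slope           # limit of quantification
      print(f"LOD ~ {lod:.2f} µg/L, LOQ ~ {loq:.2f} µg/L")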

  17. Scanning near-field optical microscopy.

    PubMed

    Vobornik, Dusan; Vobornik, Slavenka

    2008-02-01

    An average human eye can see details down to 0.07 mm in size. The ability to see smaller details of matter is correlated with the development of science and the comprehension of nature. Today's science needs eyes for the nano-world. Examples are easily found in biology and the medical sciences. There is a great need to determine the shape, size, chemical composition, molecular structure, and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral, and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. Conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution.
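
    For scale, the diffraction limit that SNOM circumvents can be put in numbers via the Abbe criterion (the wavelength and numerical aperture below are assumed values):

      # Abbe diffraction limit: d = lambda / (2 * NA)
      wavelength_nm = 500.0   # green light (assumed)
      NA = 1.4                # oil-immersion objective (assumed)

      d_farfield = wavelength_nm / (2 * NA)   # ~180 nm far-field limit
      d_snom = d_farfield / 70                # "up to 70 times better" per the abstract
      print(f"far-field ~ {d_farfield:.0f} nm; SNOM ~ {d_snom:.1f} nm")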

  18. Integrating Clinical Services for HIV, Tuberculosis, and Cryptococcal Disease in the Developing World: A Step Forward with 2 Novel Diagnostic Tests

    PubMed Central

    Vijayan, Tara; Klausner, Jeffrey D.

    2014-01-01

    The success of antiretroviral therapy (ART) programs in the developing world is limited by the lack of adequate diagnostic tests to screen for life-threatening opportunistic infections such as tuberculosis (TB) and cryptococcal disease. Furthermore, there is an increasing need for implementation research in measuring the effectiveness of currently available rapid diagnostic tests. The recently developed lateral flow assays for both cryptococcal disease and TB have the potential to improve care and greatly reduce the time to initiation of ART among individuals who need it the most. However, we caution that the data on feasibility and effectiveness of these assays are limited and such research agendas must be prioritized. PMID:24065780

  19. One-step volumetric additive manufacturing of complex polymer structures

    PubMed Central

    Shusteff, Maxim; Browar, Allison E. M.; Kelly, Brett E.; Henriksson, Johannes; Weisgraber, Todd H.; Panas, Robert M.; Fang, Nicholas X.; Spadaccini, Christopher M.

    2017-01-01

    Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s. PMID:29230437

  20. One-step volumetric additive manufacturing of complex polymer structures.

    PubMed

    Shusteff, Maxim; Browar, Allison E M; Kelly, Brett E; Henriksson, Johannes; Weisgraber, Todd H; Panas, Robert M; Fang, Nicholas X; Spadaccini, Christopher M

    2017-12-01

    Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s.
