Sample records for multiple time steps

  1. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
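
    The standard r-RESPA splitting that these resonance-free integrators build on can be sketched in a few lines; the one-dimensional system, force split, and step sizes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One r-RESPA multiple time-step update (Tuckerman-style splitting).

    f_fast: cheap, rapidly varying force; integrated with dt_outer / n_inner.
    f_slow: expensive, slowly varying force; applied only at the outer step.
    """
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m      # opening half kick, slow force
    for _ in range(n_inner):                    # inner loop: fast force only
        v = v + 0.5 * dt_inner * f_fast(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m      # closing half kick, slow force
    return x, v
```

    In this sketch the outer step is still resonance-limited to a fraction of the fast period; the isokinetic Nosé-Hoover constraints described in the abstract are what lift that limit.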

  3. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  4. Validity of the Instrumented Push and Release Test to Quantify Postural Responses in Persons With Multiple Sclerosis.

    PubMed

    El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M

    2017-07-01

    To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  5. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
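
    The pattern-matching idea of the abstract — when the observation step is a rational fraction p/q of the period, only q distinct stride propagators ever occur — can be sketched as below; the segment matrices and function names are hypothetical.

```python
import numpy as np

def stride_propagators(segments, p):
    """Precompute the q distinct propagators that advance the system by p
    segments (one observation stride) starting from each possible phase of
    the period; indices wrap because the Hamiltonian is periodic."""
    q = len(segments)
    dim = segments[0].shape[0]
    strides = []
    for s in range(q):
        u = np.eye(dim, dtype=complex)
        for j in range(p):
            u = segments[(s + j) % q] @ u   # left-multiply in time order
        strides.append(u)
    return strides

def evolve(strides, p, n_obs):
    """Total propagator at observation times k*(p/q)*T, reusing only the
    q cached stride products instead of p matrix multiplies per point."""
    q = len(strides)
    u = np.eye(strides[0].shape[0], dtype=complex)
    out = []
    for k in range(n_obs):
        u = strides[(k * p) % q] @ u        # one cached multiply per observation
        out.append(u)
    return out
```

    After the q stride products are built once, each further observation costs a single matrix multiply, which is the source of the speedup the abstract describes.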

  6. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  7. Two-step relaxation mode analysis with multiple evolution times applied to all-atom molecular dynamics protein simulation.

    PubMed

    Karasawa, N; Mitsutake, A; Takano, H

    2017-12-01

    Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.

  9. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
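
    A minimal sketch of the hybrid MD-MC correction, assuming a one-dimensional system with a surrogate force `f_cheap` and reference potential `u_ref`: the trajectory is generated with the inexpensive Hamiltonian, and a Metropolis test on the reference total energy enforces the Boltzmann distribution. This is the generic surrogate-HMC pattern, not the paper's full DHMTS recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_mdmc_step(x, beta, dt, n_md, f_cheap, u_ref, m=1.0):
    """One hybrid MD-MC step: a short velocity-Verlet trajectory driven by
    the inexpensive force, accepted or rejected by a Metropolis test on the
    total energy of the expensive reference Hamiltonian."""
    v = rng.normal(0.0, 1.0 / np.sqrt(beta * m))   # resample momentum each step
    x0, v0 = x, v
    for _ in range(n_md):                           # proposal: cheap dynamics only
        v = v + 0.5 * dt * f_cheap(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_cheap(x) / m
    # the energy change under the *reference* Hamiltonian decides acceptance,
    # removing the bias of having propagated with the cheap one
    d_e = (u_ref(x) + 0.5 * m * v**2) - (u_ref(x0) + 0.5 * m * v0**2)
    if rng.random() < np.exp(-beta * d_e):
        return x, True
    return x0, False
```

    Because the cheap dynamics nearly conserves its own Hamiltonian, the acceptance test only pays for the slowly varying difference between the two Hamiltonians, which is the central assumption stated in the abstract.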

  10. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by a trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  11. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
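
    The benefit of adapting the step size can be illustrated with a scalar LMS equalizer; the decaying schedule `mu0 / (1 + decay*n)` and the normalized update are illustrative assumptions, not the MMSE TDE/FDE of the paper.

```python
import numpy as np

def lms_equalize(rx, desired, n_taps, mu0, decay):
    """LMS equalizer with a decaying (adaptive) step size: start near the
    stability limit for fast convergence, then shrink mu to reduce
    steady-state misadjustment."""
    w = np.zeros(n_taps)
    err = []
    for n in range(n_taps, len(rx)):
        u = rx[n - n_taps + 1:n + 1][::-1]   # tap-delay line, newest sample first
        e = desired[n] - w @ u
        mu = mu0 / (1.0 + decay * n)         # hypothetical step-size schedule
        w += mu * e * u / (u @ u + 1e-12)    # normalized update for stability
        err.append(e)
    return w, np.array(err)
```

    The large initial step shortens the convergence phase, mirroring the ~50%/30% reductions reported for the TDE and FDE, while the shrinking step keeps the converged error low.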

  12. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.

  13. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separate groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternately executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
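
    The E-step/M-step alternation described above can be sketched on a toy problem — a two-component 1-D Gaussian mixture — where the E-step plays the role of signal-to-source association and the M-step the ML parameter update; all names and initializations below are hypothetical.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Toy two-component EM: the E-step assigns soft responsibilities, the
    M-step re-estimates ML parameters -- the same alternation the paper
    uses for signal-to-source association and DOA estimation."""
    mu = np.array([x.min(), x.max()])     # crude initial means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability that each sample came from each source
        # (the 1/sqrt(2*pi) constant cancels in the normalization)
        like = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = like / like.sum(axis=1, keepdims=True)
        # M-step: ML parameter updates given the soft assignments
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi
```

    Each iteration provably does not decrease the likelihood, which is why the alternating scheme converges to a (local) ML estimate.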

  14. A comparison of the effects of visual deprivation and regular body weight support treadmill training on improving over-ground walking of stroke patients: a multiple baseline single subject design.

    PubMed

    Kim, Jeong-Soo; Kang, Sun-Young; Jeon, Hye-Seon

    2015-01-01

    The body-weight-support treadmill (BWST) is commonly used for gait rehabilitation, but other forms of BWST are in development, such as visual-deprivation BWST (VDBWST). In this study, we compare the effect of VDBWST training and conventional BWST training on spatiotemporal gait parameters for three individuals who had hemiparetic strokes. We used a single-subject experimental design, alternating multiple baselines across the individuals. We recruited three individuals with hemiparesis resulting from stroke: two with left-side and one with right-side involvement. For the main outcome measures we assessed spatiotemporal gait parameters using GAITRite, including: gait velocity; cadence; step time of the affected side (STA); step time of the non-affected side (STN); step length of the affected side (SLA); step length of the non-affected side (SLN); step-time asymmetry (ST-asymmetry); and step-length asymmetry (SL-asymmetry). Gait velocity, cadence, SLA, and SLN increased from baseline after both interventions, whereas STA, ST-asymmetry, and SL-asymmetry decreased from baseline after the interventions. The VDBWST was significantly more effective than the BWST for increasing gait velocity and cadence and for decreasing ST-asymmetry. VDBWST is more effective than BWST for improving gait performance during rehabilitation for over-ground walking.

  15. Three Steps to Mastering Multiplication Facts

    ERIC Educational Resources Information Center

    Kling, Gina; Bay-Williams, Jennifer M.

    2015-01-01

    "That was the day I decided I was bad at math." Countless times, preservice and in-service teachers make statements such as this after sharing vivid memories of learning multiplication facts. Timed tests; public competitive games, such as Around the World; and visible displays of who has and has not mastered groups of facts still…

  16. Design-for-manufacture of gradient-index optical systems using time-varying boundary condition diffusion

    NASA Astrophysics Data System (ADS)

    Harkrider, Curtis Jason

    2000-08-01

    The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that is used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility. An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
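
    The alternation between exchange and anneal phases amounts to diffusion with a time-varying boundary condition, which can be sketched with an explicit 1-D finite-difference scheme; the geometry, stability bound, and schedule format are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

def tvbc_diffusion(c, d, dx, dt, schedule):
    """Explicit 1-D diffusion with a time-varying boundary condition.

    schedule: list of (n_steps, surface) phases; surface is the fixed
    surface concentration during an ion-exchange phase, or None for an
    anneal phase with a zero-flux (insulated) surface.
    """
    r = d * dt / dx**2
    assert r <= 0.5, "explicit FTCS stability requires d*dt/dx**2 <= 0.5"
    c = np.asarray(c, dtype=float).copy()
    for n_steps, surface in schedule:
        for _ in range(n_steps):
            c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
            c[0] = surface if surface is not None else c[1]  # melt contact / no flux
            c[-1] = c[-2]                                    # deep side: no flux
    return c
```

    The duration of each phase in `schedule` is the design variable, mirroring how the time of each diffusion step controls the index profile in the abstract.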

  17. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
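
    The distinguishing recursion — carrying the gradient of each prediction with respect to the preceding prediction — can be shown on a one-weight linear recurrence; the toy model, initial weight, and learning rate are assumptions for illustration, not the RMLP itself.

```python
def train_recursive_predictor(x, lr=0.002, epochs=3000):
    """Fit a one-weight recurrent predictor yhat[t] = w * yhat[t-1] by
    gradient descent, carrying the recursive gradient
        g[t] = d yhat[t]/dw = yhat[t-1] + w * g[t-1]
    through the prediction feedback, analogous to the RMLP training rule."""
    w = 0.5                       # hypothetical initial weight
    for _ in range(epochs):
        yhat, g = x[0], 0.0
        grad = 0.0
        for t in range(1, len(x)):
            g = yhat + w * g      # recursion through the previous prediction
            yhat = w * yhat       # free-running multi-step prediction
            grad += 2.0 * (yhat - x[t]) * g
        w -= lr * grad
    return w
```

    Dropping the `w * g` term would treat each prediction as independent of the weights' earlier effect; keeping it is what lets the model learn the dynamics rather than a one-step map.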

  18. Evaluation of atomic pressure in the multiple time-step integration algorithm.

    PubMed

    Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu

    2017-04-15

    In molecular dynamics (MD) calculations, reduction in calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, the RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), enables reductions in calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints. It is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the Velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value with the STS, whereas pressures calculated using the conventional ad hoc equations deviated from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
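
    In the single-time-step limit these equations reduce to the usual kinetic-plus-virial estimator, sketched below; the MTS-specific bookkeeping of which intermediate and constraint forces enter the virial is the paper's contribution and is not reproduced here.

```python
import numpy as np

def atomic_pressure(vel, masses, virial, volume):
    """Instantaneous atomic pressure from the kinetic term and the virial.

    virial = sum_i r_i . f_i (or the equivalent pair form sum_{i<j} r_ij . f_ij);
    in an MTS scheme each force component contributes its own virial, and it
    must be evaluated with the same forces the integrator actually applied.
    """
    kinetic = 0.5 * np.sum(masses[:, None] * vel**2)   # sum over particles and dims
    return (2.0 * kinetic + virial) / (3.0 * volume)   # P = (2K + W) / (3V)
```

    With zero virial this reduces to the ideal-gas law, since 2K = 3*N*kB*T at equipartition.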

  19. Solution of the Average-Passage Equations for the Incompressible Flow through Multiple-Blade-Row Turbomachinery

    DTIC Science & Technology

    1994-02-01

    An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate solution of the average-passage equations for incompressible flow through multiple-blade-row turbomachinery. The report covers cell-centered finite-volume discretization in space, artificial dissipation, time integration, and convergence.

  20. One-step formation of multiple Pickering emulsions stabilized by self-assembled poly(dodecyl acrylate-co-acrylic acid) nanoparticles.

    PubMed

    Zhu, Ye; Sun, Jianhua; Yi, Chenglin; Wei, Wei; Liu, Xiaoya

    2016-09-13

    In this study, a one-step generation of stable multiple Pickering emulsions using pH-responsive polymeric nanoparticles as the only emulsifier was reported. The polymeric nanoparticles were self-assembled from an amphiphilic random copolymer poly(dodecyl acrylate-co-acrylic acid) (PDAA), and the effect of the copolymer content on the size and morphology of PDAA nanoparticles was determined by dynamic light scattering (DLS) and transmission electron microscopy (TEM). The emulsification study of PDAA nanoparticles revealed that multiple Pickering emulsions could be generated through a one-step phase inversion process by using PDAA nanoparticles as the stabilizer. Moreover, the emulsification performance of PDAA nanoparticles at different pH values demonstrated that multiple emulsions with long-time stability could only be stabilized by PDAA nanoparticles at pH 5.5, indicating that the surface wettability of PDAA nanoparticles plays a crucial role in determining the type and stability of the prepared Pickering emulsions. Additionally, the polarity of oil does not affect the emulsification performance of PDAA nanoparticles, and a wide range of oils could be used as the oil phase to prepare multiple emulsions. These results demonstrated that multiple Pickering emulsions could be generated via the one-step emulsification process using self-assembled polymeric nanoparticles as the stabilizer, and the prepared multiple emulsions have promising potential to be applied in the cosmetic, medical, and food industries.

  1. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    NASA Astrophysics Data System (ADS)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3-D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.

  2. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code applies adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC on GTX TITAN X is 0.30 s or 0.44 s with 2^24 = 16,777,216 particles when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
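    The hierarchical (block) time-step idea behind codes like GOTHIC can be illustrated with a minimal sketch: particles are binned into power-of-two step levels, and at each global sub-step only the levels whose step divides the current time are advanced. The function names and the level-assignment rule below are illustrative, not taken from the GOTHIC source.

    ```python
    import numpy as np

    def assign_levels(dt_required, dt_max, n_levels=8):
        """Bin each particle into a power-of-two time-step level:
        level L uses dt = dt_max / 2**L, the largest such step that
        does not exceed the particle's required (accuracy-limited) step."""
        levels = np.ceil(np.log2(dt_max / dt_required)).astype(int)
        return np.clip(levels, 0, n_levels - 1)

    def active_levels(step_index, n_levels):
        """Levels to advance at global sub-step `step_index`: level L
        steps every 2**(n_levels - 1 - L) sub-steps, so the finest
        level advances at every sub-step and level 0 only rarely."""
        return [L for L in range(n_levels)
                if step_index % 2 ** (n_levels - 1 - L) == 0]
    ```

    In a shared-time-step scheme every particle would be forced onto the smallest required step; the speedup reported in the abstract comes from most particles sitting on coarser levels.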

  3. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In the first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with a stiff BGK source term. This kinetic equation is then integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation, and the shallow water and Euler equations) in one and two spatial dimensions.
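    The inner/outer structure described above can be sketched in a few lines. This is a generic first-order projective forward Euler step in the spirit of the paper's framework, not the authors' code; the step sizes and the test ODE are placeholders.

    ```python
    def projective_forward_euler(f, u, dt_inner, n_inner, dt_outer):
        """One projective integration step: take n_inner >= 1 small
        forward Euler steps of size dt_inner to damp stiff components,
        estimate the slow time derivative from the last inner chord,
        then extrapolate over the remainder of the outer step dt_outer."""
        for _ in range(n_inner):
            u_prev = u
            u = u + dt_inner * f(u)
        dudt = (u - u_prev) / dt_inner          # slow-derivative estimate
        return u + (dt_outer - n_inner * dt_inner) * dudt
    ```

    Replacing the final extrapolation with a Runge-Kutta method whose stage derivatives are themselves estimated this way yields the higher-order variants discussed in the paper.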

  4. A stochastic event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variability on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new approach coupling the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscaling rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in north-eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
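    The disaggregation step can be illustrated with a stripped-down microcanonical cascade. This sketch uses a single constant probability for all-or-nothing splits and uniform weights otherwise; the actual model parameterizes these probabilities with sigmoid functions of event duration, as described above, and the names and parameter values here are illustrative.

    ```python
    import random

    def cascade_disaggregate(total_depth, n_levels, p01=0.2, rng=None):
        """Disaggregate an event depth into 2**n_levels boxes with a
        microcanonical random cascade: at each branching the parent's
        depth is split between two children with weights (w, 1 - w),
        so mass is conserved at every level. With probability p01 all
        mass goes to one child (a 0/1 split); otherwise w ~ U(0, 1)."""
        rng = rng or random.Random()
        boxes = [total_depth]
        for _ in range(n_levels):
            children = []
            for depth in boxes:
                if rng.random() < p01:       # all-or-nothing split
                    w = rng.choice([0.0, 1.0])
                else:                        # x / (1 - x) split
                    w = rng.random()
                children.extend([depth * w, depth * (1.0 - w)])
            boxes = children
        return boxes
    ```

    Because each branching redistributes exactly the parent's depth, the total event depth is conserved at every cascade level, which is the "microcanonical" property.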

  5. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    PubMed

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  6. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics

    PubMed Central

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-01-01

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876

  7. Contribution of lower limb eccentric work and different step responses to balance recovery among older adults.

    PubMed

    Nagano, Hanatsu; Levinger, Pazit; Downie, Calum; Hayes, Alan; Begg, Rezaul

    2015-09-01

    Falls during walking reflect susceptibility to balance loss and the individual's capacity to recover stability. Balance can be recovered using either one step or multiple steps, but both responses are impaired with ageing. To investigate older adults' (n=15, 72.5±4.8 yrs) recovery step control, a tether-release procedure was devised to induce unanticipated forward balance loss. Three-dimensional position-time data combined with foot-ground reaction forces were used to measure balance recovery. Dependent variables were: margin of stability (MoS) and available response time (ART) as spatial and temporal balance measures in the transverse and sagittal planes; lower limb joint angles and joint negative/positive work; and spatio-temporal gait parameters. Relative to multi-step responses, single-step recovery was more effective in maintaining balance, indicated by a greater MoS and longer ART. MoS in the sagittal plane and ART in the transverse plane distinguished single-step responses from multiple-step responses. When MoS and ART were negative (<0), balance was not secured and additional steps were required to establish a new base of support for balance recovery. Single-step responses demonstrated greater step length and velocity and, when the recovery foot landed, greater centre of mass downward velocity. Single-step strategies also showed greater ankle dorsiflexion, increased maximum knee flexion and more negative work at the ankle and knee. Collectively these findings suggest that single-step responses are more effective in forward balance recovery by directing falling momentum downward to be absorbed as lower limb eccentric work. Copyright © 2015 Elsevier B.V. All rights reserved.
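    The margin of stability used in gait studies like this one is conventionally computed from the "extrapolated centre of mass" in Hof's formulation. The sketch below assumes that standard definition (the abstract does not give the authors' exact implementation), and all argument names are hypothetical.

    ```python
    import math

    def margin_of_stability(bos_edge, com_pos, com_vel, leg_length, g=9.81):
        """Margin of stability in one direction (Hof-style definition):
        the base-of-support boundary minus the extrapolated centre of mass,
            XcoM = x_com + v_com / omega0,  omega0 = sqrt(g / l),
        where l is an inverted-pendulum length (roughly leg length).
        A positive MoS means the XcoM is still inside the base of support;
        a negative MoS implies further steps are needed to recover."""
        omega0 = math.sqrt(g / leg_length)
        xcom = com_pos + com_vel / omega0
        return bos_edge - xcom
    ```

    With this convention, the negative MoS values mentioned in the abstract correspond to the extrapolated centre of mass passing beyond the recovery foot's base of support.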

  8. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  9. A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

    Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criteria. To overcome this limitation, in this work we introduce the application of a multiple grid al...

  10. Modeling fatigue.

    PubMed

    Sumner, Walton; Xu, Jin Zhong

    2002-01-01

    The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.

  11. Enriching the biological space of natural products and charting drug metabolites, through real time biotransformation monitoring: The NMR tube bioreactor.

    PubMed

    Chatzikonstantinou, Alexandra V; Chatziathanasiadou, Maria V; Ravera, Enrico; Fragai, Marco; Parigi, Giacomo; Gerothanassis, Ioannis P; Luchinat, Claudio; Stamatis, Haralambos; Tzakos, Andreas G

    2018-01-01

    Natural products offer a wide range of biological activities, but they are not easily integrated into the drug discovery pipeline because of their inherent scaffold intricacy and the associated complexity of their synthetic chemistry. Enzymes may be used to perform regioselective and stereoselective incorporation of functional groups into the natural product core, avoiding harsh reaction conditions and several protection/deprotection and purification steps. Herein, we developed a three-step protocol carried out inside an NMR tube. 1st step: STD-NMR was used to predict: i) the capacity of natural products as enzyme substrates and ii) the possible regioselectivity of the biotransformations. 2nd step: The real-time formation of multiple biotransformation products in the NMR-tube bioreactor was monitored in situ. 3rd step: STD-NMR was applied to the mixture of biotransformed products to screen ligands for protein targets. This simple and time-effective process, the "NMR-tube bioreactor", is able to: (i) predict which component of a mixture of natural products can be enzymatically transformed, (ii) monitor in situ the transformation efficacy and regioselectivity in crude extracts and multiple-substrate biotransformations without fractionation, and (iii) simultaneously screen for interactions of the biotransformation products with pharmaceutical protein targets. We have thus developed a green, time- and cost-effective process that provides a simple route from natural products to lead compounds for drug discovery. This process can speed up the most crucial steps in early drug discovery and reduce the chemical manipulations usually involved in the pipeline, improving environmental compatibility. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Semi-autonomous remote sensing time series generation tool

    NASA Astrophysics Data System (ADS)

    Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher

    2017-10-01

    High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Due to limitations of satellite architecture and frequent cloud cover, daily data of high spatial resolution are still far from reality. Remote sensing time series generation of high spatial and temporal resolution data by data fusion is a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a framework for a Geographic Information System (GIS) based tool for semi-autonomous time series generation is presented. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. Firstly, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Then two main frameworks are created: one to perform all the pre-processing steps on various satellite data and the other to perform data fusion to generate the time series. The two frameworks can be used individually to perform specific tasks or combined to perform both processes in one go. The tool can handle most known geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a good interface and provides many functionalities to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.

  13. Influence of phase inversion on the formation and stability of one-step multiple emulsions.

    PubMed

    Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J

    2009-07-21

    A novel method for the preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60 day test period. Previously reported multiple emulsions made by the one-step method were abnormal systems that formed at the inversion point of a simple emulsion (where there is an incompatibility between the Ostwald and Bancroft theories; typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point in multiple emulsion formation by the one-step phase inversion process. Although the significance of ultralow surface tension has been speculated upon previously, to the best of our knowledge this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature but also upon the component ratios; therefore both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not a result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are a result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.

  14. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
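    A minimal sketch of the two ingredients: a partial top-k sort (here via a heap rather than the paper's quicksort variation) and a simple largest-knee criterion on the sorted curve. The cascading idea is then to run knee detection on the top-k prefix and enlarge k only if the knee falls at the prefix boundary; the function names and the knee criterion below are illustrative, not the paper's exact formulation.

    ```python
    import heapq

    def top_k_desc(values, k):
        """Partial sort: the k largest values in descending order,
        without fully sorting the input (O(n log k) via a heap)."""
        return heapq.nlargest(k, values)

    def knee_index(sorted_desc):
        """Largest-knee heuristic on a descending curve: the index
        that lies farthest below the straight line joining the
        first and last points of the curve."""
        n = len(sorted_desc)
        y0, y1 = sorted_desc[0], sorted_desc[-1]
        best_i, best_d = 0, -1.0
        for i, y in enumerate(sorted_desc):
            chord = y0 + (y1 - y0) * i / (n - 1)  # line height at i
            if chord - y > best_d:
                best_i, best_d = i, chord - y
        return best_i
    ```

    Cascading over growing k avoids sorting the whole set when the prior says the knee is likely to sit near the top of the curve.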

  15. A facile single-step synthesis of ternary multicore magneto-plasmonic nanoparticles.

    PubMed

    Benelmekki, Maria; Bohra, Murtaza; Kim, Jeong-Hwan; Diaz, Rosa E; Vernieres, Jerome; Grammatikopoulos, Panagiotis; Sowwan, Mukhles

    2014-04-07

    We report a facile single-step synthesis of ternary hybrid nanoparticles (NPs) composed of multiple dumbbell-like iron-silver (FeAg) cores encapsulated by a silicon (Si) shell using a versatile co-sputter gas-condensation technique. In comparison to previously reported binary magneto-plasmonic NPs, the advantage conferred by the Si shell is that it binds the multiple magneto-plasmonic (FeAg) cores together while preventing their aggregation. Further, we demonstrate that the size of the NPs and the number of cores in each NP can be modulated over a wide range by tuning the experimental parameters.

  16. Neural correlates of gait variability in people with multiple sclerosis with fall history.

    PubMed

    Kalron, Alon; Allali, Gilles; Achiron, Anat

    2018-05-28

    We investigated the association between step time variability and related brain structures in accordance with fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. Whole-brain MRI was performed on a high-resolution 3.0-Tesla MR scanner, with volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) or "non-fallers". One hundred and five PwMS were defined as fallers and had a greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Using linear regression analysis, no association was found between gait variability and related brain structures in the total cohort or the non-faller group. However, the analysis found an association of the left hippocampus and left putamen volumes with step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting that brain structural characteristics may be related to falls and increased gait variability in PwMS. This article is protected by copyright. All rights reserved.
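    Step time variability in gait studies is typically reported as a coefficient of variation, which matches the percentage values quoted above; the sketch assumes that common definition (the abstract does not spell out the walkway software's exact formula).

    ```python
    import statistics

    def step_time_variability(step_times):
        """Step time variability as a coefficient of variation (%):
        100 * sample SD / mean of the individual step durations (s)."""
        mean = statistics.fmean(step_times)
        sd = statistics.stdev(step_times)
        return 100.0 * sd / mean
    ```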

  17. Matrix-Dominated Time-Dependent Deformation and Damage of Graphite Epoxy Composite -- Experimental Data under Multiple-Step Relaxation.

    DTIC Science & Technology

    1983-05-01

    [Abstract not recoverable: the record body is OCR residue from tables of multiple-step relaxation measurements for graphite-epoxy specimens (filtered data, e.g. a step/relaxation test of Specimen No. 2, Step No. 5, at TIME = T - 183.798 hrs).]

  18. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of a higher memory requirement and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change of the dependent variables in two consecutive time steps had fallen below 10^-5.

  19. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.

  20. Predicting falls in older adults using the four square step test.

    PubMed

    Cleary, Kimberly; Skornyakov, Elena

    2017-10-01

    The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.

  1. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or iteratively reweighted least squares (IRWLS) procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at a 168-hour lead time. The results obtained are documented and compared with results based on the Box and Jenkins method.
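    The core of an IRWLS fit for the nonseasonal AR part (step two above) can be sketched as follows. The 1/|r| weighting shown here is one common robust choice, not necessarily the weighting used by the authors, and the function name is illustrative.

    ```python
    import numpy as np

    def fit_ar_irwls(x, p, n_iter=10, delta=1e-4):
        """Estimate AR(p) coefficients by iteratively reweighted least
        squares: start from ordinary least squares, then down-weight
        observations with large residuals (1/|r| weights, an L1-like
        robust choice) and re-solve until the weights settle."""
        # design matrix of lagged values: x[t] ~ sum_j a_j * x[t-j]
        X = np.column_stack([x[p - j:-j] for j in range(1, p + 1)])
        y = x[p:]
        w = np.ones(len(y))
        for _ in range(n_iter):
            sw = np.sqrt(w)
            a, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            r = y - X @ a
            w = 1.0 / np.maximum(np.abs(r), delta)  # floor avoids 1/0
        return a
    ```

    The seasonal AR parameters would be estimated in the same way on the back-forecast intermediate series (steps three and four above).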

  2. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one degree of freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivided the original large-scale control problem into smaller subproblems so that the optimal control outputs for each actuator can be derived at each control time step, and applied our proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator by dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  3. Validity of four approaches of using repeaters' MCAT scores in medical school admissions to predict USMLE Step 1 total scores.

    PubMed

    Zhao, Xiaohui; Oppler, Scott; Dunleavy, Dana; Kroopnick, Marc

    2010-10-01

    This study investigated the validity of four approaches (the average, most recent, highest-within-administration, and highest-across-administration approaches) of using repeaters' Medical College Admission Test (MCAT) scores to predict Step 1 scores. Using the differential prediction method, this study investigated the magnitude of differences in the expected Step 1 total scores between MCAT nonrepeaters and three repeater groups (two-time, three-time, and four-time test takers) for the four scoring approaches. For the average score approach, matriculants with the same MCAT average are expected to achieve similar Step 1 total scores regardless of whether the individual attempted the MCAT exam one or multiple times. For the other three approaches, repeaters are expected to achieve lower Step 1 scores than nonrepeaters; for a given MCAT score, as the number of attempts increases, the expected Step 1 score decreases. The effect was strongest for the highest-across-administration approach, followed by the highest-within-administration approach, and then the most recent approach. Using the average score is the best approach for considering repeaters' MCAT scores in medical school admission decisions.

  4. Experimental studies of systematic multiple-energy operation at HIMAC synchrotron

    NASA Astrophysics Data System (ADS)

    Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.

    2014-07-01

    Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.

  5. MIMO nonlinear ultrasonic tomography by propagation and backpropagation method.

    PubMed

    Dong, Chengdong; Jin, Yuanwei

    2013-03-01

    This paper develops a fast ultrasonic tomographic imaging method in a multiple-input multiple-output (MIMO) configuration using the propagation and backpropagation (PBP) method. By this method, ultrasonic excitation signals from multiple sources are transmitted simultaneously to probe the objects immersed in the medium. The scattering signals are recorded by multiple receivers. Utilizing the nonlinear ultrasonic wave propagation equation and the received time domain scattered signals, the objects are to be reconstructed iteratively in three steps. First, the propagation step calculates the predicted acoustic potential data at the receivers using an initial guess. Second, the difference signal between the predicted value and the measured data is calculated. Third, the backpropagation step computes updated acoustical potential data by backpropagating the difference signal to the same medium computationally. Unlike the conventional PBP method for tomographic imaging where each source takes turns to excite the acoustical field until all the sources are used, the developed MIMO-PBP method achieves faster image reconstruction by utilizing multiple source simultaneous excitation. Furthermore, we develop an orthogonal waveform signaling method using a waveform delay scheme to reduce the impact of speckle patterns in the reconstructed images. By numerical experiments we demonstrate that the proposed MIMO-PBP tomographic imaging method results in faster convergence and achieves superior imaging quality.
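The propagate/difference/backpropagate loop described above has the same shape as a Landweber iteration for a linear forward model, which can serve as a toy stand-in for the nonlinear ultrasonic propagation operator. The sketch below works under that linearizing assumption; the operator `A`, the problem sizes, and the step size are illustrative only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward operator A standing in for the (nonlinear)
# propagation step; x_true is the unknown object, y the measured data.
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
y = A @ x_true

x = np.zeros(10)                        # initial guess
step = 1.0 / np.linalg.norm(A, 2) ** 2  # ensures convergence for Landweber
for _ in range(1000):
    y_pred = A @ x                      # 1) propagation: predict receiver data
    r = y - y_pred                      # 2) difference signal
    x = x + step * (A.T @ r)            # 3) backpropagation: update the object

err = np.linalg.norm(x - x_true)
```

With simultaneous (MIMO) excitation, all receiver channels contribute to `y` in a single iteration, which is the intuition for the faster convergence the abstract reports.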

  6. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
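A minimal instance of the prediction-correction idea can be sketched on a scalar time-varying quadratic, where the prediction uses the known time derivative of the gradient and the correction applies a few gradient steps on the newly sampled objective. This is a toy illustration of the scheme's structure, not the paper's GTT/NTT algorithms.

```python
import math

# Toy time-varying objective f(x, t) = 0.5 * (x - sin(t))**2,
# whose optimizer trajectory is x*(t) = sin(t).
h = 0.1          # sampling interval
x = 0.0
t = 0.0
errs = []
for k in range(200):
    # Prediction: x <- x - h * (d2f/dx2)^(-1) * d/dt(df/dx).
    # Here d2f/dx2 = 1 and d/dt(df/dx) = -cos(t).
    x = x + h * math.cos(t)
    t += h
    # Correction: a few gradient steps on the objective sampled at the new t.
    for _ in range(3):
        x = x - 0.5 * (x - math.sin(t))
    errs.append(abs(x - math.sin(t)))

max_tail_err = max(errs[50:])   # tracking error after the transient
```

Dropping the prediction line recovers a correction-only method, whose steady tracking error grows with `h` much faster, which is the gap the $O(h^2)$ vs. $O(h)$ bounds quantify.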

  7. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    PubMed

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26, p < 0.001). In regression analysis, CSRT was best explained by sway, time to complete the 9-Hole Peg test, knee extension strength of the weaker leg, proprioception and the time to complete the Trails B test (multiple R² = 0.449, p < 0.001). Conclusions A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  8. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
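The O(N log N) ingredient here is a spatial tree query: with a k-d tree, each robot's closest neighbor (the input to a distance-and-bearing control law) is found in O(log N) on average instead of O(N). Below is a minimal sketch of that idea; the robot positions are hypothetical and the tree is a textbook k-d tree, not necessarily the hierarchical structure used in the paper.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_kdtree(points, depth=0):
    """Recursively build a 2-D k-d tree: O(N log N) construction."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
        "axis": axis,
    }

def nearest(node, target, best=None):
    """Nearest-neighbor query, O(log N) on average: prune any subtree that
    cannot contain a point closer than the current best."""
    if node is None:
        return best
    p = node["point"]
    if p != target:  # a robot is not its own neighbor
        if best is None or dist(p, target) < dist(best, target):
            best = p
    diff = target[node["axis"]] - p[node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if best is None or abs(diff) < dist(best, target):
        best = nearest(far, target, best)
    return best

# Hypothetical robot positions; each robot would query its own nearest
# neighbor once per time step, O(N log N) in total.
robots = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (1.2, 0.1)]
tree = build_kdtree(robots)
nn = nearest(tree, (1.0, 0.0))
# nn == (1.2, 0.1), the closest other robot to (1.0, 0.0)
```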

  9. Imaging workflow and calibration for CT-guided time-domain fluorescence tomography

    PubMed Central

    Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.

    2011-01-01

    In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264

  10. A multiple-block multigrid method for the solution of the three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold

    1991-01-01

    A multiple block multigrid method for the solution of the three dimensional Euler and Navier-Stokes equations is presented. The basic flow solver is a cell vertex method which employs central difference spatial approximations and Runge-Kutta time stepping. The use of local time stepping, implicit residual smoothing, multigrid techniques, and variable-coefficient numerical dissipation, which together yield an efficient and robust scheme, is discussed. The multiblock strategy places the block loop within the Runge-Kutta loop such that accuracy and convergence are not affected by block boundaries. This has been verified by comparing the results of one- and two-block calculations in which the two-block grid is generated by splitting the one-block grid. Results are presented for both Euler and Navier-Stokes computations of wing/fuselage combinations.
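The multi-stage Runge-Kutta pseudo-time stepping that drives such steady-state solvers can be sketched on a scalar model problem: each stage re-evaluates the residual and blends it with the start-of-step state. The stage coefficients and the toy residual below are illustrative only, not the paper's scheme.

```python
def residual(u):
    """Toy scalar "residual" R(u) = u - 2, standing in for the discretized
    flow equations; the steady state is R(u) = 0, i.e. u = 2."""
    return u - 2.0

def rk_step(u, dt, alphas=(0.25, 0.333, 0.5, 1.0)):
    """Low-storage multi-stage Runge-Kutta update: every stage restarts
    from the step's initial state u0 and uses the latest residual."""
    u0 = u
    for a in alphas:
        u = u0 - a * dt * residual(u)
    return u

u = 0.0
for _ in range(100):
    u = rk_step(u, dt=0.9)   # march in pseudo-time toward the steady state
# u converges to the steady state 2.0
```

In the multiblock setting the abstract describes, the loop over grid blocks would sit inside `rk_step`, so every stage sees residuals from all blocks and block boundaries do not degrade convergence.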

  11. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer recording simultaneously spectra of thirty to a hundred thousand points on each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses independently the raw time-of-flight data through an adaptive baseline removal routine. The next step consists of a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step includes integrating the two polarities for a final identification determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, which is a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs it is possible to achieve a processing speed of up to a thousand particles per second, while maintaining the recognition rate observed on a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput and therefore its sensitivity while maintaining a large dynamic range (number of channels and two polarities), thus maintaining the system's specificity for bio-detection.
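The first preprocessing stage, adaptive baseline removal, can be illustrated with a toy moving-minimum subtraction: estimate the slowly varying baseline in a sliding window and subtract it, leaving the peaks. This is a sketch of the general technique only; the LLNL routine itself is not described in enough detail here to reproduce.

```python
def remove_baseline(signal, window=5):
    """Toy adaptive baseline removal: subtract a moving-minimum estimate of
    the slowly varying baseline from each sample (not the LLNL algorithm)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - window)
        hi = min(n, i + window + 1)
        baseline = min(signal[lo:hi])   # local baseline estimate
        out.append(signal[i] - baseline)
    return out

# A flat offset of 3 with one peak of height 5 at index 4:
spectrum = [3, 3, 3, 3, 8, 3, 3, 3, 3]
cleaned = remove_baseline(spectrum)
# cleaned == [0, 0, 0, 0, 5, 0, 0, 0, 0]
```

Because each output sample depends only on a local window, this kind of preprocessing parallelizes naturally across channels and polarities, which is what makes the multi-DSP pipelining described above practical.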

  12. Changes in Geologic Time Understanding in a Class for Preservice Teachers

    ERIC Educational Resources Information Center

    Teed, Rebecca; Slattery, William

    2011-01-01

    The paradigm of geologic time is built on complex concepts, and students master it in multiple steps. Concepts in Geology is an inquiry-based geology class for preservice teachers at Wright State University. The instructors used the Geoscience Concept Inventory (GCI) to determine if students' understanding of key ideas about geologic time and…

  13. Anisotropic Dispersion and Partial Localization of Acoustic Surface Plasmons on an Atomically Stepped Surface: Au(788)

    NASA Astrophysics Data System (ADS)

    Smerieri, M.; Vattuone, L.; Savio, L.; Langer, T.; Tegenkamp, C.; Pfnür, H.; Silkin, V. M.; Rocca, M.

    2014-10-01

    Understanding acoustic surface plasmons (ASPs) in the presence of nanosized gratings is necessary for the development of future devices that couple light with ASPs. We show here by experiment and theory that two ASPs exist on Au(788), a vicinal surface with an ordered array of monoatomic steps. The ASPs propagate across the steps as long as their wavelength exceeds the terrace width, thereafter becoming localized. Our investigation identifies, for the first time, ASPs coupled with intersubband transitions involving multiple surface-state subbands.

  14. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.

  15. Simulating fire and forest dynamics for a coordinated landscape fuel treatment project in the Sierra Nevada

    Treesearch

    Brandon M. Collins; Scott L. Stephens; Gary B. Roller; John Battles

    2011-01-01

    We evaluate an actual landscape fuel treatment project that was designed by local U. S. Forest Service managers in the northern Sierra Nevada. We model the effects of this project at reducing landscape-level fire behavior at multiple time steps, up to nearly 30 yr beyond treatment implementation. Additionally, we modeled planned treatments under multiple diameter-...

  16. Building the Bridge between Home and School: One Rural School's Steps to Interrogate and Celebrate Multiple Literacies

    ERIC Educational Resources Information Center

    Hansen, Faith Beyer

    2009-01-01

    This paper examines one rural school's efforts to recognize and celebrate the multiple literacies of its students, by hosting a LINK Up family night that highlights the varied funds of knowledge of school, community and home. In dialogue with excerpts from Sherman Alexie's novel, "The Absolute True Diaries of a Part Time Indian," the paper…

  17. Promoting step responses of children with multiple disabilities through a walker device and microswitches with contingent stimuli.

    PubMed

    Lancioni, G E; De Pace, C; Singh, N N; O'Reilly, M F; Sigafoos, J; Didden, R

    2008-08-01

    Children with severe or profound intellectual and motor disabilities often present problems of balance and locomotion and spend much of their time sitting or lying, with negative consequences for their development and social image. This study provides a replication of recent (pilot) studies using a walker (support) device and microswitches with preferred stimuli to promote locomotion in two children with multiple disabilities. One child used an ABAB design; the other only an AB sequence. Both succeeded in increasing their frequencies of step responses during the B (intervention) phase(s). These findings support the positive evidence already available on the effectiveness of this intervention in motivating and promoting children's locomotion.

  18. Efficacy and Safety of a Novel Three-Step Medial Release Technique in Varus Total Knee Arthroplasty.

    PubMed

    Kim, Min Woo; Koh, In Jun; Kim, Ju Hwan; Jung, Jae Jong; In, Yong

    2015-09-01

    We investigated the efficacy and safety of our novel three-step medial release technique in varus total knee arthroplasty (TKA) over time. Two hundred sixty-seven consecutive varus TKAs were performed by applying the algorithmic release technique which consisted of sequential release of the deep medial collateral ligament (step 1), the semimembranosus (step 2), and multiple needle puncturing of the superficial medial collateral ligament (step 3). One hundred seventeen, 114, and 36 knees were balanced after step 1, 2, and 3 releases, respectively. There were no significant differences in changes of medial and lateral laxities between groups over a year of follow-up. Our novel stepwise medial release technique was efficacious and safe in balancing varus knees during TKA. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Orthogonal tandem catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lohr, Tracy L.; Marks, Tobin J.

    2015-05-20

    Tandem catalysis is a growing field that is beginning to yield important scientific and technological advances toward new and more efficient catalytic processes. 'One-pot' tandem reactions, where multiple catalysts and reagents combined in a single reaction vessel undergo a sequence of precisely staged catalytic steps, are highly attractive from the standpoint of reducing both waste and time. Orthogonal tandem catalysis is a subset of one-pot reactions in which more than one catalyst is used to promote two or more mechanistically distinct reaction steps. This Perspective summarizes and analyses some of the recent developments and successes in orthogonal tandem catalysis, with particular focus on recent strategies to address catalyst incompatibility. We also highlight the concept of thermodynamic leveraging by coupling multiple catalyst cycles to effect challenging transformations not observed in single-step processes, and to encourage application of this technique to energetically unfavourable or demanding reactions.

  20. Soil pretreatment and fast cell lysis for direct polymerase chain reaction from forest soils for terminal restriction fragment length polymorphism analysis of fungal communities

    Treesearch

    Fei Cheng; Lin Hou; Keith Woeste; Zhengchun Shang; Xiaobang Peng; Peng Zhao; Shuoxin Zhang

    2016-01-01

    Humic substances in soil DNA samples can influence the assessment of microbial diversity and community composition. Using multiple steps during or after cell lysis adds expenses, is time-consuming, and causes DNA loss. A pretreatment of soil samples and a single step DNA extraction may improve experimental results. In order to optimize a protocol for obtaining high...

  1. Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.

    PubMed

    Schoene, Daniel; Delbaere, Kim; Lord, Stephen R

    2017-08-01

    Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls over this period, with a relative risk per standard deviation for the reaction time (iCSRT-RT) of 1.23 (95% CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go stepping test paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  2. An easy-to-use calculating machine to simulate steady state and non-steady-state preparative separations by multiple dual mode counter-current chromatography with semi-continuous loading of feed mixtures.

    PubMed

    Kostanyan, Artak E; Shishilov, Oleg N

    2018-06-01

    Multiple dual mode counter-current chromatography (MDM CCC) separation processes with semi-continuous large sample loading consist of a succession of two counter-current steps: with "x" phase (first step) and "y" phase (second step) flow periods. A feed mixture dissolved in the "x" phase is continuously loaded into a CCC machine at the beginning of the first step of each cycle over a constant time with the volumetric rate equal to the flow rate of the pure "x" phase. An easy-to-use calculating machine is developed to simulate the chromatograms and the amounts of solutes eluted with the phases at each cycle for steady-state (the duration of the flow periods of the phases is kept constant for all the cycles) and non-steady-state (with variable duration of alternating phase elution steps) separations. Using the calculating machine, the separation of mixtures containing up to five components can be simulated and designed. Examples of the application of the calculating machine for the simulation of MDM CCC processes are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
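The core of voltage stepping is to advance the membrane potential in fixed voltage increments dV and accumulate the corresponding time increments dt = dV / f(V), rather than stepping time directly. For the quadratic integrate-and-fire model the time to threshold is known in closed form, which makes the approximation easy to check. The sketch below shows only this basic idea, not the authors' full discrete-event framework with adaptation and synapses.

```python
import math

def qif_rhs(v, i_ext=1.0):
    """Quadratic integrate-and-fire dynamics: dV/dt = V**2 + I."""
    return v * v + i_ext

def spike_time_voltage_stepping(v0=0.0, v_th=10.0, dv=0.01, i_ext=1.0):
    """Approximate the time to reach threshold by stepping the *voltage*
    in fixed increments dv and accumulating dt = dv / f(V)."""
    v, t = v0, 0.0
    while v < v_th:
        t += dv / qif_rhs(v, i_ext)
        v += dv
    return t

# Exact spike time for dV/dt = V**2 + I from v0 to v_th (with I = 1):
# t = atan(v_th) - atan(v0)
t_exact = math.atan(10.0) - math.atan(0.0)
t_approx = spike_time_voltage_stepping()
```

Note the time step adapts automatically: where f(V) is large (near a spike) the scheme takes many short time steps, which is exactly where fixed-step time-stepping schemes lose accuracy.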

  4. A high-throughput semi-automated preparation for filtered synaptoneurosomes.

    PubMed

    Murphy, Kathryn M; Balsor, Justin; Beshara, Simon; Siu, Caitlin; Pinto, Joshua G A

    2014-09-30

    Synaptoneurosomes have become an important tool for studying synaptic proteins. The filtered synaptoneurosomes preparation originally developed by Hollingsworth et al. (1985) is widely used and is an easy method to prepare synaptoneurosomes. The hand processing steps in that preparation, however, are labor intensive and have become a bottleneck for current proteomic studies using synaptoneurosomes. For this reason, we developed new steps for tissue homogenization and filtration that transform the preparation of synaptoneurosomes to a high-throughput, semi-automated process. We implemented a standardized protocol with easy to follow steps for homogenizing multiple samples simultaneously using a FastPrep tissue homogenizer (MP Biomedicals, LLC) and then filtering all of the samples in centrifugal filter units (EMD Millipore, Corp). The new steps dramatically reduce the time to prepare synaptoneurosomes from hours to minutes, increase sample recovery, and nearly double enrichment for synaptic proteins. These steps are also compatible with biosafety requirements for working with pathogen infected brain tissue. The new high-throughput semi-automated steps to prepare synaptoneurosomes are timely technical advances for studies of low abundance synaptic proteins in valuable tissue samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Multi-channel time-reversal receivers for multi and 1-bit implementations

    DOEpatents

    Candy, James V.; Chambers, David H.; Guidry, Brian L.; Poggio, Andrew J.; Robbins, Christopher L.

    2008-12-09

    A communication system for transmitting a signal through a channel medium comprising digitizing the signal, time-reversing the digitized signal, and transmitting the signal through the channel medium. In one embodiment a transmitter is adapted to transmit the signal, a multiplicity of receivers are adapted to receive the signal, a digitizer digitizes the signal, and a time-reversal signal processor is adapted to time-reverse the digitized signal. An embodiment of the present invention includes multi-bit implementations. Another embodiment of the present invention includes 1-bit implementations. Another embodiment of the present invention includes a multiplicity of receivers used in the step of transmitting the signal through the channel medium.

  6. Modeling the heterogeneous catalytic activity of a single nanoparticle using a first passage time distribution formalism

    NASA Astrophysics Data System (ADS)

    Das, Anusheela; Chaudhury, Srabanti

    2015-11-01

    Metal nanoparticles are heterogeneous catalysts and have a multitude of non-equivalent, catalytic sites on the nanoparticle surface. The product dissociation step in such reaction schemes can follow multiple pathways. Proposed here for the first time is a completely analytical theoretical framework, based on the first passage time distribution, that incorporates the effect of heterogeneity in nanoparticle catalysis explicitly by considering multiple, non-equivalent catalytic sites on the nanoparticle surface. Our results show that in nanoparticle catalysis, the effect of dynamic disorder is manifested even at limiting substrate concentrations in contrast to an enzyme that has only one well-defined active site.
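The multiple-pathway picture above can be checked with a small Monte Carlo: when product dissociation can proceed through two independent exponential channels, the first-passage time is exponential with the summed rate, and each channel "wins" in proportion to its rate. The rates below are hypothetical and the two-channel setup is a deliberate simplification of the paper's multi-site formalism.

```python
import random

random.seed(42)

# Hypothetical two-pathway dissociation: the product leaves the surface
# via channel A (rate kA) or channel B (rate kB), whichever fires first.
# Theory: first-passage time ~ Exp(kA + kB), so mean = 1 / (kA + kB),
# and channel A wins with probability kA / (kA + kB).
kA, kB = 2.0, 1.0
n = 200_000
total_time, a_wins = 0.0, 0
for _ in range(n):
    tA = random.expovariate(kA)
    tB = random.expovariate(kB)
    total_time += min(tA, tB)   # first passage = earlier of the two channels
    a_wins += tA < tB

mean_fpt = total_time / n   # expect 1 / 3
p_a = a_wins / n            # expect 2 / 3
```

Non-equivalent catalytic sites would correspond to drawing (kA, kB) from a distribution per site, which broadens the first-passage time distribution beyond a single exponential.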

  7. Semantic Structures of One-Step Word Problems Involving Multiplication or Division.

    ERIC Educational Resources Information Center

    Schmidt, Siegbert; Weiser, Werner

    1995-01-01

    Proposes a four-category classification of semantic structures of one-step word problems involving multiplication and division: forming the n-th multiple of measures, combinatorial multiplication, composition of operators, and multiplication by formula. This classification is compatible with semantic structures of addition and subtraction word…

  8. One-step formation of w/o/w multiple emulsions stabilized by single amphiphilic block copolymers.

    PubMed

    Hong, Liangzhi; Sun, Guanqing; Cai, Jinge; Ngai, To

    2012-02-07

    Multiple emulsions are complex polydispersed systems in which both oil-in-water (O/W) and water-in-oil (W/O) emulsions exist simultaneously. They are often prepared according to a two-step process and commonly stabilized using a combination of hydrophilic and hydrophobic surfactants. Recently, some reports have shown that multiple emulsions can also be produced through a one-step method with simultaneous occurrence of catastrophic and transitional phase inversions. However, these reported multiple emulsions need surfactant blends and are usually described as transitory or temporary systems. Herein, we report a one-step phase inversion process to produce water-in-oil-in-water (W/O/W) multiple emulsions stabilized solely by a synthetic diblock copolymer. Unlike the use of small molecule surfactant combinations, block copolymer stabilized multiple emulsions are remarkably stable and show the ability to separately encapsulate both polar and nonpolar cargos. The importance of the conformation of the copolymer surfactant at the interfaces with regards to the stability of the multiple emulsions using the one-step method is discussed.

  9. PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis

    PubMed Central

    2014-01-01

    Background High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat) where we take up a modular approach by breaking TopHat’s serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. Results We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during ‘spliced alignment’ and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. Conclusions PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single end read dataset. 
Further, PVT designed for paired end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud which implements PVT pipeline in cloud computing system. PMID:24894600
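The stage-pipelining idea behind PVT can be sketched generically: two stages run in separate threads connected by a bounded queue, so chunk k+1 enters stage 1 while chunk k is still in stage 2. The stage bodies below are trivial stand-ins, not TopHat's actual preprocessing and alignment steps.

```python
import queue
import threading

SENTINEL = None  # end-of-stream marker passed down the pipeline

def stage1(chunks, out_q):
    for c in chunks:
        out_q.put(c * 2)          # stand-in for e.g. read preprocessing
    out_q.put(SENTINEL)

def stage2(in_q, results):
    while True:
        item = in_q.get()
        if item is SENTINEL:
            break
        results.append(item + 1)  # stand-in for e.g. alignment

q = queue.Queue(maxsize=4)        # bounded queue provides backpressure
results = []
t1 = threading.Thread(target=stage1, args=(range(5), q))
t2 = threading.Thread(target=stage2, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
# results == [1, 3, 5, 7, 9], produced with both stages overlapped in time
```

With S balanced stages the wall-clock time approaches 1/S of the serial time once the pipeline fills, which is the kind of speedup the PVT abstracts report.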

  10. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    PubMed

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat) where we take up a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single end read dataset. 
Further, PVT designed for paired end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud which implements PVT pipeline in cloud computing system.
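
The core idea of PVT, letting a later stage work on one chunk of reads while an earlier stage already processes the next chunk, can be sketched as a generic thread pipeline. The stage functions and data below are toy placeholders, not TopHat's actual steps:

```python
import threading
import queue

def pipeline(stages, inputs):
    """Run single-argument stage functions as a thread pipeline linked by
    FIFO queues, so stage i+1 can process chunk n while stage i is already
    working on chunk n+1 (PVT's idea, reduced to a toy). A None sentinel
    signals end-of-stream and is propagated down the pipeline."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, qin, qout):
        while True:
            item = qin.get()
            if item is None:          # end-of-stream: pass sentinel on
                qout.put(None)
                return
            qout.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for x in inputs:                  # feed chunks into the first stage
        qs[0].put(x)
    qs[0].put(None)

    out = []
    while (item := qs[-1].get()) is not None:
        out.append(item)
    for t in threads:
        t.join()
    return out

# Toy two-stage pipeline: (x + 1) then (* 2), applied chunk by chunk.
result = pipeline([lambda x: x + 1, lambda x: x * 2], [1, 2, 3])  # [4, 6, 8]
```

Order is preserved because each stage is a single thread reading from a FIFO queue; real gains come when the stages do genuinely concurrent work (e.g. I/O versus CPU).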

  11. Influence of the Supramolecular Micro-Assembly of Multiple Emulsions on their Biopharmaceutical Features and In vivo Therapeutic Response.

    PubMed

    Cilurzo, Felisa; Cristiano, Maria Chiara; Di Marzio, Luisa; Cosco, Donato; Carafa, Maria; Ventura, Cinzia Anna; Fresta, Massimo; Paolino, Donatella

    2015-01-01

The ability of some surfactants to self-assemble in a water/oil two-phase environment, forming supramolecular structures that lead to w/o/w multiple emulsions, was investigated. The w/o/w multiple emulsions obtained by self-assembly (one-step preparation method) were compared with those prepared following the traditional two-step procedure. Methyl nicotinate was used as a hydrophilic model drug. The formation of the multiple emulsion structure was evidenced by optical microscopy, which showed a mean size of the inner oil droplets of 6 μm and 10 μm for one-step and two-step multiple emulsions, respectively. The in vitro biopharmaceutical features of the various w/o/w multiple emulsion formulations were evaluated by means of viscosimetry studies, drug release and in vitro percutaneous permeation experiments through human stratum corneum and viable epidermis membranes. The self-assembled multiple emulsions allowed a more gradual percutaneous permeation (a zero-order permeation rate) than the two-step ones. The in vivo topical carrier properties of the two different multiple emulsions were evaluated on healthy human volunteers by using reflectance spectrophotometry, an in vivo non-invasive method. These multiple emulsion systems were also compared with conventional emulsion formulations. Our findings demonstrated that the multiple emulsions obtained by self-assembly were able to provide a more sustained drug delivery into the skin, and hence a longer therapeutic action, than two-step multiple emulsions and conventional emulsion formulations. Finally, our findings showed that the supramolecular micro-assembly of multiple emulsions was able to influence not only the biopharmaceutical characteristics but also the potential in vivo therapeutic response.

  12. Multiple-Beam Detection of Fast Transient Radio Sources

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.

    2011-01-01

A method has been designed for using multiple independent stations to discriminate fast transient radio sources from local anomalies, such as antenna noise or radio frequency interference (RFI). This can improve the sensitivity of incoherent detection for geographically separated stations such as the Very Long Baseline Array (VLBA), the future Square Kilometre Array (SKA), or any other coincident observations by multiple separated receivers. The transients are short, broadband pulses of radio energy, often just a few milliseconds long, emitted by a variety of exotic astronomical phenomena. They generally represent rare, high-energy events, making them of great scientific value. For RFI-robust adaptive detection of transients using multiple stations, a family of algorithms has been developed. The technique exploits the fact that the separated stations constitute statistically independent samples of the target. This can be used to adaptively ignore RFI events for superior sensitivity. If the antenna signals are independent and identically distributed (IID), then RFI events are simply outlier data points that can be removed through robust estimation such as a trimmed or Winsorized estimator. The "trimmed" estimator excises the strongest n signals from the list of short-beamed intensities. Because local RFI is independent at each antenna, this interference is unlikely to occur at many antennas on the same time step. Trimming the strongest signals provides robustness to RFI that can theoretically outperform even the detection performance of the same number of antennas at a single site. This algorithm requires sorting the signals at each time step and dispersion measure, an operation that is computationally tractable for existing array sizes. An alternative uses the various stations to form an ensemble estimate of the cumulative distribution function (CDF) evaluated at each time step. Both methods outperform standard detection strategies on a test sequence of VLBA data, and both are efficient enough for deployment in real-time, online transient detection applications.
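
The trimmed incoherent detection statistic described above can be sketched as follows; the station count, trim depth and intensity values are illustrative, not taken from the VLBA data:

```python
import numpy as np

def trimmed_detection_statistic(intensities, n_trim):
    """Incoherent multi-station detection statistic, robust to local RFI.

    intensities: array of shape (n_stations, n_timesteps); each row is one
    station's de-dispersed intensity time series. At each time step the
    n_trim strongest station signals are excised (trimmed) before summing,
    so RFI local to a few antennas is ignored, while a genuine pulse seen
    by all stations survives the trim."""
    s = np.sort(intensities, axis=0)       # sort stations per time step
    kept = s[:len(s) - n_trim, :]          # drop the n_trim strongest
    return kept.sum(axis=0)                # incoherent sum of the rest

# Hypothetical example: 5 stations, 4 time steps; station 2 has a strong
# local RFI spike at t=1, while a true broadband pulse at t=3 appears at
# every station simultaneously.
x = np.ones((5, 4))
x[2, 1] += 50.0    # RFI at a single station
x[:, 3] += 5.0     # astronomical pulse seen by all stations
stat = trimmed_detection_statistic(x, n_trim=1)
```

After trimming the single strongest signal per time step, the RFI spike vanishes from the statistic while the common pulse remains prominent.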

  13. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions.

    PubMed

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2017-06-01

    Stepped-wedge design (SWD) cluster-randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in an SWD trial. We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster-randomized trial: concurrent, replacement, supplementation, and factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. In the concurrent SWD, each cluster receives only one intervention, unlike the other variants. The replacement SWD supports two interventions that will not or cannot be used at the same time. The supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
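
Two of these layouts can be sketched as cluster-by-step treatment matrices (0 = control; 1 and 2 = the two interventions). The cluster and step counts are arbitrary, and the alternating assignment in the concurrent variant is one possible randomization, not the paper's prescription:

```python
import numpy as np

def stepped_wedge(n_clusters, n_steps):
    """0/1 treatment matrix for a traditional single-intervention SWD:
    clusters cross over to the intervention one at a time, one per step."""
    m = np.zeros((n_clusters, n_steps), dtype=int)
    for c in range(n_clusters):
        m[c, c + 1:] = 1        # cluster c starts intervention at step c+1
    return m

def concurrent_swd(n_clusters, n_steps):
    """Concurrent variant: each cluster receives only ONE of the two
    interventions. Here, even-indexed clusters get intervention 1 and
    odd-indexed clusters get intervention 2, each on its own wedge."""
    base = stepped_wedge(n_clusters, n_steps)
    labels = np.array([1 if c % 2 == 0 else 2 for c in range(n_clusters)])
    return base * labels[:, None]
```

In the replacement, supplementation and factorial variants the matrix cells would instead carry both intervention states per cluster-period; the same matrix view makes the trade-offs between steps, clusters and concurrency explicit.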

  14. Risk of falls in older people during fast-walking--the TASCOG study.

    PubMed

    Callisaya, M L; Blizzard, L; McGinley, J L; Srikanth, V K

    2012-07-01

    To investigate the relationship between fast-walking and falls in older people. Individuals aged 60-86 years were randomly selected from the electoral roll (n=176). Gait speed, step length, cadence and a walk ratio were recorded during preferred- and fast-walking using an instrumented walkway. Falls were recorded prospectively over 12 months. Log multinomial regression was used to estimate the relative risk of single and multiple falls associated with gait variables during fast-walking and change between preferred- and fast-walking. Covariates included age, sex, mood, physical activity, sensorimotor and cognitive measures. The risk of multiple falls was increased for those with a smaller walk ratio (shorter steps, faster cadence) during fast-walking (RR 0.92, CI 0.87, 0.97) and greater reduction in the walk ratio (smaller increase in step length, larger increase in cadence) when changing to fast-walking (RR 0.73, CI 0.63, 0.85). These gait patterns were associated with poorer physiological and cognitive function (p<0.05). A higher risk of multiple falls was also seen for those in the fastest quarter of gait speed (p=0.01) at fast-walking. A trend for better reaction time, balance, memory and physical activity for higher categories of gait speed was stronger for fallers than non-fallers (p<0.05). Tests of fast-walking may be useful in identifying older individuals at risk of multiple falls. There may be two distinct groups at risk--the frail person with short shuffling steps, and the healthy person exposed to greater risk. Copyright © 2012 Elsevier B.V. All rights reserved.
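
The walk ratio above is commonly defined as step length divided by cadence; a one-line sketch (units assumed to be cm and steps/min):

```python
def walk_ratio(step_length_cm, cadence_steps_per_min):
    """Walk ratio: step length divided by cadence. Shorter steps taken at
    a faster cadence give a smaller ratio, the pattern associated above
    with a higher risk of multiple falls."""
    return step_length_cm / cadence_steps_per_min

# A short-stepping, fast-cadence gait yields the smaller ratio:
conservative = walk_ratio(50.0, 130.0)   # short, quick steps
typical = walk_ratio(65.0, 110.0)
```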

  15. Steady state preparative multiple dual mode counter-current chromatography: Productivity and selectivity. Theory and experimental verification.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A

    2015-08-07

In steady state (SS) multiple dual mode (MDM) counter-current chromatography (CCC), at the beginning of the first step of every cycle the sample, dissolved in one of the phases, is continuously fed into a CCC device over a constant time not exceeding the run time of the first step. After a certain number of cycles, the steady state regime is achieved, in which concentrations vary over time during each cycle but the concentration profiles of solutes eluted with both phases remain constant in all subsequent cycles. The objective of this work was to develop analytical expressions describing the SS MDM CCC separation processes, which can be helpful for simulating and designing these processes and for selecting a suitable compromise between productivity and selectivity in preparative and production CCC separations. Experiments carried out using model mixtures of compounds from the GUESSmix with the solvent system hexane/ethyl acetate/methanol/water demonstrated a reasonable agreement between the predictions of the theory and the experimental results. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets.

    PubMed

    Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C

    2016-05-01

A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. To evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e., approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young adults, the older adults slowed significantly in the no target/obstacle trials relative to the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced the velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple length scales problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time periodic solution. To ensure an accurate solution, high quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous, outflow boundary conditions are required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also, at the same time, allow the reflected acoustic waves and entropy waves, if present, to exit the computation domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high quality axis boundary treatment.

  18. One-step production of multiple emulsions: microfluidic, polymer-stabilized and particle-stabilized approaches.

    PubMed

    Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J

    2016-01-28

Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps, and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches; these rely on phase separation, the role of electrolyte, and the trapping of solvent with particles, respectively.

  19. Temporal rainfall disaggregation using a multiplicative cascade model for spatial application in urban hydrology

    NASA Astrophysics Data System (ADS)

    Müller, H.; Haberlandt, U.

    2018-01-01

Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal disaggregation of daily rainfall data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and at Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, which result if a branching number of 2 is applied throughout, can be treated as Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variants outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. 
The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in hydrological simulations and outperforms the 1280 min approach.
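
The best-performing chain described above (branching number 3 in the first step, branching number 2 thereafter down to 7.5 min, then linear interpolation onto the 5 min grid) can be sketched as follows. The uniform Dirichlet weights are a placeholder for the cascade's calibrated weight probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def split(amount, b):
    """Distribute a rainfall amount onto b sub-intervals with random
    weights summing to 1 (mass-conserving multiplicative cascade step).
    A real cascade draws weights, including zeros, from calibrated
    probabilities; uniform Dirichlet weights are a stand-in here."""
    return amount * rng.dirichlet(np.ones(b))

def disaggregate(daily_mm):
    """Daily value -> 7.5 min resolution: one branching-3 step
    (1440 -> 480 min), then six branching-2 steps (480 -> 7.5 min)."""
    series = np.array([daily_mm])
    series = np.concatenate([split(v, 3) for v in series])   # 480 min
    for _ in range(6):                                       # 240 .. 7.5 min
        series = np.concatenate([split(v, 2) for v in series])
    return series                    # 192 intervals of 7.5 min each

def to_5min(series_75):
    """Map the 7.5 min series onto the 5 min grid by linear interpolation
    of the cumulative rainfall, which preserves the daily total."""
    edges_75 = np.arange(len(series_75) + 1) * 7.5
    cum = np.concatenate([[0.0], np.cumsum(series_75)])
    edges_5 = np.arange(0.0, edges_75[-1] + 5.0, 5.0)
    cum_5 = np.interp(edges_5, edges_75, cum)
    return np.diff(cum_5)            # 288 intervals of 5 min each
```

Interpolating the cumulative curve rather than the intensities keeps every 5 min value non-negative and conserves the daily sum exactly.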

  20. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to calculate the linear minimum mean square error estimate of the received pilots. Finally, block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique enhances the performance of car detection in a traffic image considerably.
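
The greedy sparse-recovery machinery behind the procedure's first step can be illustrated with plain orthogonal matching pursuit; the paper's block OMP and block CoSaMP operate on groups of channel taps, which this scalar sketch omits:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily recover a sparse x with
    y ~ A x. Each iteration picks the column most correlated with the
    residual, then re-fits all selected columns by least squares."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.conj().T @ r)))   # best-matching atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ sol                  # update residual
    x[support] = sol
    return x
```

With a well-conditioned sensing matrix and a sufficiently sparse channel, the selected support matches the true nonzero taps and the least-squares re-fit reproduces the measurements.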

  1. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  2. Theory and Performance of AIMS for Active Interrogation

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza

    2014-06-01

A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-materials), has been written that outputs the gamma current for a source-detector assembly scanning across the cargo using the pre-calculated values, and it takes significantly less time than a reference MCNP5 calculation.

  3. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    PubMed

    Zhang, J; Nawata, K

    2018-05-01

Influenza results in approximately 3-5 million annual cases of severe illness and 250 000-500 000 deaths. We urgently need accurate multi-step-ahead time-series forecasting models to help hospitals dynamically assign beds to influenza patients over the annually varying influenza season, and to help pharmaceutical companies formulate flexible manufacturing plans for the annually changing influenza vaccine. In this study, we utilised four different multi-step prediction algorithms based on the long short-term memory (LSTM) network. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction of US influenza-like illness rates were all <15%, averaging 12.930%. To the best of our knowledge, this is the first time that LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and thereby help prevent and control influenza worldwide.
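
The "multiple single-output" strategy, one separate predictor trained per forecast horizon, can be sketched independently of the network architecture. An ordinary least-squares model stands in for the paper's six-layer LSTM so the example stays self-contained:

```python
import numpy as np

def make_supervised(series, n_lags, horizon):
    """Lag-matrix X and target y for predicting `horizon` steps ahead."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return np.array(X), np.array(y)

def fit_direct_models(series, n_lags, max_horizon):
    """Direct multi-step strategy: one model per horizon h = 1..max_horizon,
    each mapping the same lag window straight to the h-step-ahead value
    (no recursive feeding of predictions back in)."""
    models = {}
    for h in range(1, max_horizon + 1):
        X, y = make_supervised(series, n_lags, h)
        A = np.column_stack([X, np.ones(len(X))])     # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        models[h] = coef
    return models

def predict(models, last_window):
    """h-step-ahead forecasts from the most recent lag window."""
    x = np.concatenate([last_window, [1.0]])
    return {h: float(coef @ x) for h, coef in models.items()}
```

Unlike recursive forecasting, errors do not compound across horizons; the cost is training max_horizon separate models, exactly as with one LSTM output per horizon.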

  4. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
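
The split-step structure, a deterministic rate-equation update followed by a stochastic Monte Carlo collision step, can be sketched generically. The exponential carrier generation and pairwise energy-averaging models below are illustrative stand-ins for the paper's extended multi-rate equation and full electron–electron scattering kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

def deterministic_step(n, dt, ionization_rate):
    """First split step: deterministic rate-equation update, reduced
    here to simple exponential carrier generation as a placeholder."""
    return n * np.exp(ionization_rate * dt)

def mc_collision_step(energies, dt, collision_rate):
    """Second split step: stochastic electron-electron collisions.
    Each electron collides with probability 1 - exp(-rate * dt);
    colliding electrons are paired at random and share their energy
    equally, driving the distribution toward equilibrium while
    conserving total energy."""
    e = energies.copy()
    collide = rng.random(len(e)) < 1.0 - np.exp(-collision_rate * dt)
    idx = np.flatnonzero(collide)
    rng.shuffle(idx)                       # random pairing of colliders
    for a, b in zip(idx[::2], idx[1::2]):
        mean = 0.5 * (e[a] + e[b])
        e[a] = e[b] = mean
    return e
```

Alternating the two steps over many small dt gives the split-step evolution; the pair-averaging collision step narrows the energy distribution each pass, mimicking femtosecond-scale thermalisation.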

  5. HITEMP Material and Structural Optimization Technology Transfer

    NASA Technical Reports Server (NTRS)

    Collier, Craig S.; Arnold, Steve (Technical Monitor)

    2001-01-01

The feasibility of adding viscoelasticity and the Generalized Method of Cells (GMC) for micromechanical viscoelastic behavior into the commercial HyperSizer structural analysis and optimization code was investigated. The viscoelasticity methodology was developed in four steps. First, a simplified algorithm was devised to test the iterative time stepping method for simple one-dimensional multiple-ply structures. Second, the GMC code was made into a callable subroutine and incorporated into the one-dimensional code to test the accuracy and usability of the code. Third, the viscoelastic time-stepping and iterative scheme was incorporated into HyperSizer for homogeneous, isotropic viscoelastic materials. Finally, the GMC was included in a version of HyperSizer. MS Windows executable files implementing each of these steps are delivered with this report, as well as source code. The findings of this research are that both viscoelasticity and GMC are feasible and valuable additions to HyperSizer and that the door is open for more advanced nonlinear capability, such as viscoplasticity.

  6. Study Behaviors and USMLE Step 1 Performance: Implications of a Student Self-Directed Parallel Curriculum.

    PubMed

    Burk-Rafel, Jesse; Santen, Sally A; Purkiss, Joel

    2017-11-01

To determine medical students' study behaviors when preparing for the United States Medical Licensing Examination (USMLE) Step 1, and how these behaviors are associated with Step 1 scores when controlling for likely covariates. The authors distributed a study-behaviors survey in 2014 and 2015 at their institution to two cohorts of medical students who had recently taken Step 1. Demographic and academic data were linked to responses. Descriptive statistics, bivariate correlations, and multiple linear regression analyses were performed. Of 332 medical students, 274 (82.5%) participated. Most students (n = 211; 77.0%) began studying for Step 1 during their preclinical curriculum, increasing their intensity during a protected study period during which they averaged 11.0 hours studying per day (standard deviation [SD] 2.1) over a period of 35.3 days (SD 6.2). Students used numerous third-party resources, including reading an exam-specific 700-page review book on average 2.1 times (SD 0.8) and completing an average of 3,597 practice multiple-choice questions (SD 1,611). Initiating study prior to the designated study period, increased review book usage, and attempting more practice questions were all associated with higher Step 1 scores, even when controlling for Medical College Admission Test scores, preclinical exam performance, and self-identified score goal (adjusted R² = 0.56, P < .001). Medical students at one public institution engaged in a self-directed, "parallel" Step 1 curriculum using third-party study resources. Several study behaviors were associated with improved USMLE Step 1 performance, informing both institutional- and student-directed preparation for this high-stakes exam.

  7. Accessory stimulus modulates executive function during stepping task

    PubMed Central

    Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo

    2015-01-01

    When multiple sensory modalities are simultaneously presented, reaction time can be reduced while interference enlarges. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli simultaneously presented with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after division of trials according to whether anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without anticipatory postural adjustment errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering decision threshold and exclusively under spatial incompatibility facilitate automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321

  8. Impact of SCBA size and fatigue from different firefighting work cycles on firefighter gait.

    PubMed

    Kesler, Richard M; Bradley, Faith F; Deetjen, Grace S; Angelini, Michael J; Petrucci, Matthew N; Rosengren, Karl S; Horn, Gavin P; Hsiao-Wecksler, Elizabeth T

    2018-04-04

Risk of slips, trips and falls in firefighters may be influenced by the firefighter's equipment and the duration of firefighting. This study examined the impact of four self-contained breathing apparatus (SCBA) configurations (three SCBA of increasing size and a prototype design) and three work cycles (one bout (1B), two bouts with a five-minute break (2B), and two bouts back-to-back (BB)) on gait in 30 firefighters. Five gait parameters (double support time, single support time, stride length, step width and stride velocity) were examined pre- and post-firefighting activity. The two largest SCBA resulted in longer double support times relative to the smallest SCBA. Multiple bouts of firefighting activity resulted in increased single and double support time and decreased stride length, step width and stride velocity. These results suggest that with larger SCBA or longer durations of activity, firefighters may adopt more conservative gait patterns to minimise fall risk. Practitioner Summary: The effects of four self-contained breathing apparatus (SCBA) and three work cycles on five gait parameters were examined pre- and post-firefighting activity. Both SCBA size and work cycle affected gait. The two largest SCBA resulted in longer double support times. Multiple bouts of activity resulted in more conservative gait patterns.

  9. Reliability and Concurrent Validity of the Narrow Path Walking Test in Persons With Multiple Sclerosis.

    PubMed

    Rosenblum, Uri; Melzer, Itshak

    2017-01-01

About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC(2,1)) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from the standard error of measurement (SEM) and the smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. ICCs for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. 
Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
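
The reliability statistics reported above follow standard formulas; a sketch of ICC(2,1) with SEM and SRD, using made-up test-retest numbers rather than the study's data:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. `data` has shape (n_subjects, k_sessions)."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_srd(data, icc):
    """Standard error of measurement and smallest real difference
    (95% level): SEM = SD * sqrt(1 - ICC), SRD = 1.96 * sqrt(2) * SEM."""
    sd = data.std(ddof=1)             # SD pooled over all measurements
    sem = sd * np.sqrt(1.0 - icc)
    return sem, 1.96 * np.sqrt(2.0) * sem
```

Identical columns give ICC = 1 and SEM = 0; real test-retest noise lowers the ICC and inflates the SEM and SRD, which is what the SEM% figures above express relative to the mean.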

  10. The Effects of Multiple-Step and Single-Step Directions on Fourth and Fifth Grade Students' Grammar Assessment Performance

    ERIC Educational Resources Information Center

    Mazerik, Matthew B.

    2006-01-01

    The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110), across the teacher-administered Grammar Skills Test, were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…

  11. A new paper-based platform technology for point-of-care diagnostics.

    PubMed

    Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad

    2014-10-21

    Current lateral flow immunoassays (LFIAs) cannot perform complex multi-step immunodetection tests because they are unable to introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure forming the fluidic network. Unlike existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device that autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based enzyme-linked immunosorbent assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.

  12. Spike-like solitary waves in incompressible boundary layers driven by a travelling wave.

    PubMed

    Feng, Peihua; Zhang, Jiazhong; Wang, Wei

    2016-06-01

    Nonlinear waves produced in an incompressible boundary layer driven by a travelling wave are investigated, with damping also considered. As one typical nonlinear wave, the spike-like wave is governed by the driven-damped Benjamin-Ono equation. As the amplitude of the driving wave is increased continuously, the wave field enters a completely irregular state beyond a critical time. At the same time, the number of spikes of the solitary waves increases through multiplication of the wave pattern. The wave energy grows in a sequence of sharp steps, and hysteresis loops are found in the system. The wave energy jumps to different levels with multiplication of the wave, which is described by winding-number bifurcation of the phase trajectories. The same multiplication and hysteresis steps are found when the speed of the driving wave is varied. Moreover, the change of wave pattern and its energy originates in a loss of stability of the wave caused by a saddle-node bifurcation.

  13. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    PubMed Central

    Kerr, Deborah A.; Wright, Janine L.; Dhaliwal, Satvinder S.; Boushey, Carol J.

    2015-01-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess whether adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years who lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. Four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in matching for the time and occasion step (p < 0.01), detailed description (p < 0.05), and portion size (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy in recalling food items declined with a second 24-h recall repeated over two non-consecutive days. PMID:25984743

  14. Classification and Clustering Methods for Multiple Environmental Factors in Gene-Environment Interaction: Application to the Multi-Ethnic Study of Atherosclerosis.

    PubMed

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A; Kardia, Sharon L R; Allison, Matthew; Diez Roux, Ana V

    2016-11-01

    There has been increased interest in identifying gene-environment interactions (G × E) in the context of multiple environmental exposures. Most G × E studies analyze one exposure at a time, yet in reality we are exposed to many factors at once. Efficient analysis strategies for complex G × E with multiple environmental factors in a single model are still lacking. Using data from the Multi-Ethnic Study of Atherosclerosis, we illustrate a two-step approach for modeling G × E with multiple environmental factors. First, we utilize common clustering and classification strategies (e.g., k-means, latent class analysis, classification and regression trees, Bayesian clustering using Dirichlet processes) to define subgroups corresponding to distinct environmental exposure profiles. Second, we illustrate the use of an additive main effects and multiplicative interaction model, instead of the conventional saturated interaction model using product terms of factors, to study G × E with the data-driven exposure subgroups defined in the first step. We demonstrate useful analytical approaches to translate multiple environmental exposures into one summary class. These tools not only allow researchers to consider several environmental exposures in a G × E analysis but also provide insight into how genes modify the effect of a comprehensive exposure profile, instead of examining effect modification for each exposure in isolation.
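
    The first step of the two-step approach, clustering exposure profiles into subgroups, can be sketched with a minimal k-means. The two-exposure profiles below are hypothetical, and the implementation is deliberately bare-bones rather than the analysis code used in the study:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: cluster exposure profiles (lists of floats) into k subgroups."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(x) / len(members) for x in zip(*members)]
    return assign, centers

# Hypothetical two-exposure profiles (e.g. standardized diet and pollution scores)
profiles = [[0.1, 0.2], [0.0, 0.1], [2.0, 2.1], [2.2, 1.9]]
labels, _ = kmeans(profiles, k=2)
print(labels)   # first two profiles share one label, last two share the other
```

    The resulting subgroup label is then the "summary exposure class" carried into the second-step interaction model.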

  15. An Alternative Explanation for "Step-Like" Early VLF Event

    NASA Astrophysics Data System (ADS)

    Moore, R. C.

    2016-12-01

    A newly-deployed array of VLF receivers along the East Coast of the United States is ideally suited for detecting VLF scattering from lightning-induced disturbances to the lower ionosphere. The array was deployed in May 2016, and one VLF receiver was deployed only 20 km from the NAA transmitter (24.0 kHz) in Cutler, Maine. The phase of the NAA signal at this closest site varies significantly with time, due simply to the impedance match of the transmitter varying with time. Additionally, both the amplitude and phase exhibit periods of rapid shifts that could possibly explain at least some "step-like" VLF scattering events. Here, we distinguish between "step-like" VLF scattering events and other events in that "step-like" events are typically not closely associated with a detected causative lightning flash and also tend to exhibit little or no recovery to ambient conditions after the event onset. We present an analysis of VLF observations from the East Coast array that demonstrates interesting examples of step-like VLF events far from the transmitter that are associated with step-like events very close to the transmitter. We conclude that step-like VLF events should be treated with caution, unless definitively associated with a causative lightning flash and/or detected using observations of multiple transmitter signals.

  16. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method was proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because (1) the high-dimensional Hessian matrix is dense and costs too much memory, and (2) the Hessian inverse operation and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient when searching for the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets (ORL and PIE) and two text corpora (Reuters and TDT2). The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
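
    The core idea of L-FGD, approximating the product of the inverse Hessian with the gradient without ever forming the Hessian, rests on the standard L-BFGS two-loop recursion. A generic sketch of that recursion (not the authors' implementation):

```python
def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximate H^{-1} @ grad from recent
    (s, y) = (x_{k+1} - x_k, g_{k+1} - g_k) pairs, without forming the Hessian."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    q = list(grad)
    alphas = []
    # first loop: newest pair to oldest
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / dot(y, s)
        a = rho * dot(s, q)
        alphas.append((a, rho, s, y))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # initial Hessian scaling gamma = s.y / y.y from the most recent pair
    if s_list:
        gamma = dot(s_list[-1], y_list[-1]) / dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    # second loop: oldest pair to newest
    for a, rho, s, y in reversed(alphas):
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r  # approximates H^{-1} grad

# Quadratic sanity check: with a curvature pair (s, y = 2s) the direction
# recovers H^{-1} g along that coordinate.
print(lbfgs_direction([2.0, 0.0], [[1.0, 0.0]], [[2.0, 0.0]]))  # [1.0, 0.0]
```

    Because only a few (s, y) vector pairs are stored, memory grows linearly with the problem dimension instead of quadratically, which is exactly the saving the abstract claims over MFGD's dense Hessian.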

  17. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    Multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of the alternating phase elution steps is developed and validated. MDM separation processes with variable duration of the phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of MDM separation are analyzed: (1) one-step solute elution, in which the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, after which each individual component elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of the individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared with the predictions of the theory, and good agreement between theory and experiment is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of the numerical techniques allows long-term simulations with total times of millions of years. This technique makes it possible to follow the deformation process in detail during an entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and demonstrate that, contrary to conventional ideas, postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations such deformation is interpreted as afterslip, while in our model it is caused by viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. 
We will also present results of modelling the deformation of the upper plate during multiple earthquake cycles at time scales of hundreds of thousands to millions of years, and discuss the effect of great earthquakes in changing the long-term stress field in the upper plate.
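
    The adaptive time-step rule described above, collapse to the minimum step when an instability is detected and then gradually coarsen, can be sketched as a toy controller. The 40 s and 5 yr limits come from the abstract, while the velocity threshold and growth factor below are illustrative assumptions:

```python
def next_time_step(dt, max_velocity, dt_min=40.0, dt_max=5 * 365.25 * 24 * 3600.0,
                   v_seismic=1e-3, grow=1.5):
    """Toy adaptive time-step rule in the spirit of the paper: collapse dt to
    dt_min when slip velocities indicate an earthquake, otherwise grow it
    gradually back toward dt_max. v_seismic and grow are illustrative values."""
    if max_velocity > v_seismic:       # instability detected -> seismic resolution
        return dt_min
    return min(dt * grow, dt_max)      # postseismic relaxation -> coarsen slowly

dt = 40.0
for v in [1.0, 1e-4, 1e-4, 1e-4]:     # m/s: one coseismic step, then relaxation
    dt = next_time_step(dt, v)
    print(dt)                          # 40.0, 60.0, 90.0, 135.0
```

    A real implementation would also tie the growth rate to error estimates of the implicit solver, but the on/off switching between seismic and interseismic resolution is the essential mechanism.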

  19. Optimization of magnetic flux density measurement using multiple RF receiver coils and multi-echo in MREIT.

    PubMed

    Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je

    2014-09-07

    Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. Noise in the measured Bz complicates recovery of magnetic flux density maps, resulting in lower-quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from the multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz, and the third step is to remove a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal-to-noise ratio ('defects') verified the proposed method. Experimental results using a real phantom that included three different kinds of anomalies demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. 
The quality of the conductivity maps recovered using the denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by up to 3-4 times in each anomaly region compared to the conventional method.
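
    The weighted combination in the first step can be illustrated with a generic inverse-variance estimator; the paper's actual weights also involve the coil sensitivity maps and echo decay, so this is only a sketch:

```python
def combine_coils(values, noise_sd):
    """Inverse-variance weighted combination of the same quantity measured by
    several receiver coils: lower-noise channels get proportionally more weight.
    (A generic estimator, not the paper's full sensitivity-aware weighting.)"""
    weights = [1.0 / s ** 2 for s in noise_sd]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Three coils measure the same Bz-gradient value with different noise levels;
# the noisy third coil barely moves the combined estimate.
print(combine_coils([1.02, 0.98, 1.10], [0.1, 0.1, 0.5]))   # ≈ 1.002
```

    Inverse-variance weighting is the minimum-variance unbiased way to pool independent measurements of one quantity, which is why some weighting of this kind appears whenever multi-coil data are merged.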

  20. Stretching single atom contacts at multiple subatomic step-length.

    PubMed

    Wei, Yi-Min; Liang, Jing-Hong; Chen, Zhao-Bin; Zhou, Xiao-Shun; Mao, Bing-Wei; Oviedo, Oscar A; Leiva, Ezequiel P M

    2013-08-14

    This work describes jump-to-contact STM break-junction experiments that lead to a novel statistical distribution of the last-step length associated with the conductance of a single-atom contact. The last-step length histograms show up to five peaks for Fe and three for Cu, located at integer multiples of roughly 0.075 nm, a subatomic distance. A model is proposed in terms of gliding from an fcc hollow site to an hcp hollow site of adjacent atomic planes, at 1/3 of the regular layer spacing, together with tip stretching, to account for the multiple subatomic step-length behavior.

  1. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH requires only a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
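
    A dominant color histogram can be approximated by keeping only the most frequent coarse color bins; the paper selects dominant colors with a distance measure, so the frequency-based rule below is a simplification with hypothetical parameters:

```python
from collections import Counter

def dominant_color_histogram(pixels, levels=8, coverage=0.9):
    """Sketch of a dominant-color histogram: quantize RGB pixels into coarse
    bins, then keep only the most frequent bins covering `coverage` of the
    pixels. (The paper's DCH selects dominant colors via a distance measure;
    this frequency cut-off is a simplification.)"""
    step = 256 // levels
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    hist, covered = {}, 0
    for bin_, n in counts.most_common():
        hist[bin_] = n / total
        covered += n
        if covered >= coverage * total:
            break
    return hist

# A mostly-red patch with small green/blue noise collapses to one dominant bin
pixels = [(250, 10, 10)] * 90 + [(10, 250, 10)] * 8 + [(10, 10, 250)] * 2
h = dominant_color_histogram(pixels)
print(len(h))   # 1
```

    Keeping a handful of dominant bins instead of a full histogram is what makes per-frame matching against many candidate objects cheap enough for real-time tracking.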

  2. Two Simple and Efficient Algorithms to Compute the SP-Score Objective Function of a Multiple Sequence Alignment.

    PubMed

    Ranwez, Vincent

    2016-01-01

    Multiple sequence alignment (MSA) is a crucial step in many molecular analyses, and many MSA tools have been developed. Most of them use a greedy approach to construct a first alignment that is then refined by optimizing the sum-of-pairs score (SP-score). SP-score estimation is thus a bottleneck for most MSA tools, since it is repeatedly required and is time consuming. Given an alignment of n sequences and L sites, I introduce here optimized solutions that reach O(nL) time complexity for affine gap costs, instead of O(n²L), and that are easy to implement.
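
    For simple (non-affine) pair costs, the O(nL) idea is to score each column from residue counts rather than enumerating all O(n²) pairs; the affine-gap case handled in the paper needs extra bookkeeping that this sketch omits:

```python
from collections import Counter

def sp_score(alignment, match=1, mismatch=-1, gap=-2):
    """Column-wise SP-score in O(nL) for simple (non-affine) costs: count the
    residues in each column instead of enumerating all O(n^2) pairs.
    (The paper's algorithms also handle affine gap costs; this sketch does not.)"""
    score = 0
    for col in zip(*alignment):
        counts = Counter(c for c in col if c != '-')
        n_res = sum(counts.values())
        n_gap = len(col) - n_res
        same = sum(v * (v - 1) // 2 for v in counts.values())       # matching pairs
        diff = n_res * (n_res - 1) // 2 - same                      # mismatching pairs
        score += same * match + diff * mismatch + n_res * n_gap * gap
    return score

aln = ["AC-T", "A-GT", "ACGT"]
print(sp_score(aln))   # 0
```

    With a fixed alphabet, each column costs O(n) to count and O(1) to score, so repeated SP-score evaluations during refinement no longer dominate the run time.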

  3. The effect of some heat treatment parameters on the dimensional stability of AISI D2

    NASA Astrophysics Data System (ADS)

    Surberg, Cord Henrik; Stratton, Paul; Lingenhöle, Klaus

    2008-01-01

    The tool steel AISI D2 is usually processed by vacuum hardening followed by multiple tempering cycles. It has been suggested that a deep cold treatment between the hardening and tempering processes could reduce processing time and improve the final properties and dimensional stability. Hardened blocks were subjected to various combinations of single and multiple tempering steps (520 and 540 °C) and deep cold treatments (-90, -120 and -150 °C). The greatest dimensional stability was achieved with deep cold treatment at the lowest temperature used, and was independent of the deep cold treatment time.

  4. 40 CFR 86.1232-96 - Vehicle preconditioning.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... awaiting testing, to prevent unusual loading of the canisters. During this time care must be taken to... vehicles with multiple canisters in a series configuration, the set of canisters must be preconditioned as... designed for vapor load or purge steps, the service port shall be used during testing to precondition the...

  5. A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)

    EPA Science Inventory

    Abstract

    Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...
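
    The CFL restriction mentioned above bounds the explicit time step by dt ≤ C·Δx/|λmax|, with Courant number C ≤ 1 for typical explicit schemes:

```python
def cfl_time_step(dx, max_wave_speed, courant=0.9):
    """Largest stable explicit time step under the CFL condition
    dt <= C * dx / |lambda_max|, with Courant number C <= 1 for
    typical explicit schemes."""
    return courant * dx / max_wave_speed

# Open-channel example: 1 m grid cells, ~10 m/s fastest characteristic speed
print(cfl_time_step(1.0, 10.0))   # 0.09 s
```

    Because the fastest characteristic speed grows near shocks, the stable dt shrinks there, which is exactly the limitation that motivates the multiple-grid approach of the abstract.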

  6. Neural Correlates of Temporal Credit Assignment in the Parietal Lobe

    PubMed Central

    Eisenberg, Ian; Gottlieb, Jacqueline

    2014-01-01

    Empirical studies of decision making have typically assumed that value learning is governed by time, such that a reward prediction error arising at a specific time triggers temporally-discounted learning for all preceding actions. However, in natural behavior, goals must be acquired through multiple actions, and each action can have different significance for the final outcome. As is recognized in computational research, carrying out multi-step actions requires the use of credit assignment mechanisms that focus learning on specific steps, but little is known about the neural correlates of these mechanisms. To investigate this question we recorded neurons in the monkey lateral intraparietal area (LIP) during a serial decision task where two consecutive eye movement decisions led to a final reward. The underlying decision trees were structured such that the two decisions had different relationships with the final reward, and the optimal strategy was to learn based on the final reward at one of the steps (the “F” step) but ignore changes in this reward at the remaining step (the “I” step). In two distinct contexts, the F step was either the first or the second in the sequence, controlling for effects of temporal discounting. We show that LIP neurons had the strongest value learning and strongest post-decision responses during the transition after the F step regardless of the serial position of this step. Thus, the neurons encode correlates of temporal credit assignment mechanisms that allocate learning to specific steps independently of temporal discounting. PMID:24523935

  7. Accelerated x-ray scatter projection imaging using multiple continuously moving pencil beams

    NASA Astrophysics Data System (ADS)

    Dydula, Christopher; Belev, George; Johns, Paul C.

    2017-03-01

    Coherent x-ray scatter varies with angle and photon energy in a manner dependent on the chemical composition of the scattering material, even for amorphous materials. Therefore, images generated from scattered photons can have much higher contrast than conventional projection radiographs. We are developing a scatter projection imaging prototype at the BioMedical Imaging and Therapy (BMIT) facility of the Canadian Light Source (CLS) synchrotron in Saskatoon, Canada. The best images are obtained using step-and-shoot scanning with a single pencil beam and area detector to capture sequentially the scatter pattern for each primary beam location on the sample. Primary x-ray transmission is recorded simultaneously using photodiodes. The technological challenge is to acquire the scatter data in a reasonable time. Using multiple pencil beams producing partially-overlapping scatter patterns reduces acquisition time but increases complexity due to the need for a disentangling algorithm to extract the data. Continuous sample motion, rather than step-and-shoot, also reduces acquisition time at the expense of introducing motion blur. With a five-beam (33.2 keV, 3.5 mm² beam area) continuous sample motion configuration, a rectangular array of 12 × 100 pixels with 1 mm sampling width has been acquired in 0.4 minutes (3000 pixels per minute). The acquisition speed is 38 times the speed for single-beam step-and-shoot. A system model has been developed to calculate detected scatter patterns given the material composition of the object to be imaged. Our prototype development, image acquisition of a plastic phantom, and modelling are described.

  8. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions

    PubMed Central

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2018-01-01

    Objective Stepped wedge design (SWD) cluster randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in a SWD trial. Study Design and Setting We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster randomized trial: Concurrent, Replacement, Supplementation and Factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. Results In the Concurrent SWD, each cluster receives only one intervention, unlike the other variants. The Replacement SWD supports two interventions that will not or cannot be employed at the same time. The Supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the Factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Conclusion Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. PMID:28412466

  9. The discriminant capabilities of stability measures, trunk kinematics, and step kinematics in classifying successful and failed compensatory stepping responses by young adults.

    PubMed

    Crenshaw, Jeremy R; Rosenblatt, Noah J; Hurt, Christopher P; Grabiner, Mark D

    2012-01-03

    This study evaluated the discriminant capability of stability measures, trunk kinematics, and step kinematics to classify successful and failed compensatory stepping responses. In addition, the shared variance between stability measures, step kinematics, and trunk kinematics is reported. The stability measures included the anteroposterior distance (d) between the body center of mass and the stepping limb toe, the margin of stability (MOS), as well as time-to-boundary considering velocity (TTB(v)), velocity and acceleration (TTB(a)), and MOS (TTB(MOS)). Kinematic measures included trunk flexion angle and angular velocity, step length, and the time after disturbance onset of recovery step completion. Fourteen young adults stood on a treadmill that delivered surface accelerations necessitating multiple forward compensatory steps. Thirteen subjects fell from an initial disturbance, but recovered from a second, identical disturbance. Trunk flexion velocity at completion of the first recovery step and trunk flexion angle at completion of the second step had the greatest overall classification of all measures (92.3%). TTB(v) and TTB(a) at completion of both steps had the greatest classification accuracy of all stability measures (80.8%). The length of the first recovery step (r ≤ 0.70) and trunk flexion angle at completion of the second recovery step (r ≤ -0.54) had the largest correlations with stability measures. Although TTB(v) and TTB(a) demonstrated somewhat smaller discriminant capabilities than trunk kinematics, the small correlations between these stability measures and trunk kinematics (|r| ≤ 0.52) suggest that they reflect two important, yet different, aspects of a compensatory stepping response. Copyright © 2011 Elsevier Ltd. All rights reserved.
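
    The margin of stability is conventionally computed from Hof's extrapolated centre of mass; the sketch below uses that standard definition with illustrative values, which may differ in detail from the study's implementation:

```python
import math

def extrapolated_com(x_com, v_com, leg_length, g=9.81):
    """Hof's extrapolated centre of mass: XCoM = x + v/omega0, omega0 = sqrt(g/l),
    treating the body as an inverted pendulum of effective length l."""
    omega0 = math.sqrt(g / leg_length)
    return x_com + v_com / omega0

def margin_of_stability(x_boundary, x_com, v_com, leg_length):
    """MOS: signed distance from the XCoM to the base-of-support boundary
    (here the stepping-limb toe); positive means the XCoM is inside."""
    return x_boundary - extrapolated_com(x_com, v_com, leg_length)

# Illustrative values (m, m/s): toe 0.30 m ahead of a CoM moving forward at 0.5 m/s
print(round(margin_of_stability(0.30, 0.0, 0.5, 0.9), 3))   # 0.149
```

    Including the CoM velocity term is what separates the MOS from the simple anteroposterior distance d reported in the study: a fast-moving CoM can be dynamically unstable even while it is still well inside the base of support.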

  10. A clinical test of stepping and change of direction to identify multiple falling older adults.

    PubMed

    Dite, Wayne; Temple, Viviene A

    2002-11-01

    To establish the reliability and validity of a new clinical test of dynamic standing balance, the Four Square Step Test (FSST), to evaluate its sensitivity, specificity, and predictive value in identifying subjects who fall, and to compare it with 3 established balance and mobility tests. A 3-group comparison performed by using 3 validated tests and 1 new test. A rehabilitation center and university medical school in Australia. Eighty-one community-dwelling adults over the age of 65 years. Subjects were age- and gender-matched to form 3 groups: multiple fallers, nonmultiple fallers, and healthy comparisons. Not applicable. Time to complete the FSST and Timed Up and Go test and the number of steps to complete the Step Test and Functional Reach Test distance. High reliability was found for interrater (n=30, intraclass correlation coefficient [ICC]=.99) and retest reliability (n=20, ICC=.98). Evidence for validity was found through correlation with other existing balance tests. Validity was supported, with the FSST showing significantly better performance scores (P<.01) for each of the healthier and less impaired groups. The FSST also revealed a sensitivity of 85%, a specificity of 88% to 100%, and a positive predictive value of 86%. As a clinical test, the FSST is reliable, valid, easy to score, quick to administer, requires little space, and needs no special equipment. It is unique in that it involves stepping over low objects (2.5cm) and movement in 4 directions. The FSST had higher combined sensitivity and specificity for identifying differences between groups in the selected sample population of older adults than the 3 tests with which it was compared. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  11. Cluster observations of ion dispersion discontinuities in the polar cusp

    NASA Astrophysics Data System (ADS)

    Escoubet, C. P.; Berchem, J.; Pitout, F.; Richard, R. L.; Trattner, K. J.; Grison, B.; Taylor, M. G.; Masson, A.; Dunlop, M. W.; Dandouras, I. S.; Reme, H.; Fazakerley, A. N.

    2009-12-01

    Reconnection between the interplanetary magnetic field (IMF) and the Earth’s magnetic field takes place at the magnetopause on magnetic field lines threading through the polar cusp. When the IMF is southward, reconnection occurs near the subsolar point, which is magnetically connected to the equatorward boundary of the polar cusp. The ions injected through the reconnection point subsequently precipitate in the cusp and are dispersed poleward. If reconnection is continuous and operates at a constant rate, the ion dispersion is smooth and continuous. If, on the other hand, the reconnection rate varies, we expect interruptions in the dispersion, forming energy steps or a staircase. Similarly, multiple entries near the magnetopause could also produce steps at low or mid altitude when a spacecraft successively crosses the field lines originating from these multiple sources. In addition, motion of the magnetopause induced by solar wind pressure changes, or by erosion due to reconnection, can also induce a motion of the polar cusp and a disruption of the ion dispersion observed by a spacecraft. Cluster, with four spacecraft following each other in the mid-altitude cusp, can be used to distinguish between these “temporal” and “spatial” effects. We will present a cusp crossing with two spacecraft separated by around two minutes. The two spacecraft observed a very similar dispersion, with a step in energy at its centre and two other dispersions poleward. We will show that the steps could be temporal (assuming that the time between two reconnection bursts corresponds to the time delay between the two spacecraft), but this would be a fortuitous coincidence. On the other hand, the steps and the two poleward dispersions can be explained by spatial effects if we take into account the motion of the open-closed boundary between the two spacecraft crossings.

  12. Using time-delay to improve social play skills with peers for children with autism.

    PubMed

    Liber, Daniella B; Frea, William D; Symon, Jennifer B G

    2008-02-01

Interventions that teach social communication and play skills are crucial for the development of children with autism. The time delay procedure is effective in teaching language acquisition, social use of language, discrete behaviors, and chained activities to individuals with autism and developmental delays. In this study, three boys with autism attending a non-public school were taught play activities that combined a play sequence with requesting peer assistance, using a graduated time delay procedure. A multiple-baseline across subjects design demonstrated the success of this procedure in teaching multiple-step social play sequences. Results also indicated an increase in pretend play by one of the participants. Two participants demonstrated generalization of the skills learned through the time delay procedure.

  13. Pilates exercise training vs. physical therapy for improving walking and balance in people with multiple sclerosis: a randomized controlled trial.

    PubMed

    Kalron, Alon; Rosenblum, Uri; Frid, Lior; Achiron, Anat

    2017-03-01

Evaluate the effects of a Pilates exercise programme on walking and balance in people with multiple sclerosis and compare this exercise approach to conventional physical therapy sessions. Randomized controlled trial. Multiple Sclerosis Center, Sheba Medical Center, Tel-Hashomer, Israel. Forty-five people with multiple sclerosis, 29 females; mean age (SD) was 43.2 (11.6) years; mean Expanded Disability Status Scale score (SD) was 4.3 (1.3). Participants received 12 weekly training sessions of either Pilates (n=22) or standardized physical therapy (n=23) on an outpatient basis. Spatio-temporal parameters of walking and posturography parameters during static stance. Functional tests included the Timed Up and Go Test, 2- and 6-minute walk tests, Functional Reach Test, Berg Balance Scale and the Four Square Step Test. In addition, self-report measures included the Multiple Sclerosis Walking Scale and the Modified Fatigue Impact Scale. At the end of the intervention, both groups had significantly increased their walking speed (P=0.021) and mean step length (P=0.023). According to the 2-minute and 6-minute walking tests, both groups had increased their walking distance by the end of the intervention programme. Mean (SD) increases in the Pilates and physical therapy groups were 39.1 (78.3) and 25.3 (67.2) meters, respectively. There was no group × time effect in any of the instrumented and clinical balance and gait measures. Pilates is a possible treatment option for people with multiple sclerosis in order to improve their walking and balance capabilities; however, this approach does not have any significant advantage over standardized physical therapy.

  14. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
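The two-step idea can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it uses PCA whitening as a stand-in for ICA (both yield decorrelated univariate signals) and simple quantile mapping as the univariate correction; all data are synthetic.

```python
import numpy as np

def quantile_map(model, obs):
    """Univariate quantile mapping: map each model value to the observed
    value of the same rank."""
    ranks = np.argsort(np.argsort(model))
    return np.sort(obs)[ranks]

def two_step_correction(model, obs):
    """model, obs: (time, sites) arrays.
    Step 1: decorrelate, correct each signal, back-transform (spatial fix).
    Step 2: grid-by-grid quantile mapping (grid-scale fix)."""
    mu = model.mean(axis=0)
    cov = np.cov(model - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)                     # whitening matrix (ICA stand-in)
    signals_m = (model - mu) @ W                   # statistically decorrelated series
    signals_o = (obs - obs.mean(axis=0)) @ W
    corrected = np.column_stack([quantile_map(signals_m[:, j], signals_o[:, j])
                                 for j in range(model.shape[1])])
    spatial = corrected @ np.linalg.inv(W) + mu    # back-transform to grid space
    return np.column_stack([quantile_map(spatial[:, j], obs[:, j])
                            for j in range(model.shape[1])])
```

After the second step each grid point reproduces the observed marginal distribution, while the first step has already adjusted the cross-site dependence.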

  15. Round-robin comparison of methods for the detection of human enteric viruses in lettuce.

    PubMed

    Le Guyader, Françoise S; Schultz, Anna-Charlotte; Haugarreau, Larissa; Croci, Luciana; Maunula, Leena; Duizer, Erwin; Lodder-Verschoor, Froukje; von Bonsdorff, Carl-Henrik; Suffredini, Elizabetha; van der Poel, Wim M M; Reymundo, Rosanna; Koopmans, Marion

    2004-10-01

Five methods for detecting human enteric virus contamination in lettuce were compared. To mimic the multiple contaminations observed after sewage contamination, samples were artificially contaminated with human calicivirus, poliovirus, and animal calicivirus strains at different concentrations. Nucleic acid extractions were done at the same time in the same laboratory to reduce assay-to-assay variability. Results showed that the two critical steps are the washing step and removal of inhibitors. The more reliable methods (sensitivity, simplicity, low cost) included an elution/concentration step and a commercial kit. The development of such sensitive methods for viral detection in foods other than shellfish is important for improving food safety.

  16. Fast-tracking determination of homozygous transgenic lines and transgene stacking using a reliable quantitative real-time PCR assay.

    PubMed

    Wang, Xianghong; Jiang, Daiming; Yang, Daichang

    2015-01-01

The selection of homozygous lines is a crucial step in the characterization of newly generated transgenic plants. This is particularly time- and labor-consuming when transgene stacking is required. Here, we report a fast and accurate method based on quantitative real-time PCR, with the rice gene RBE4 as a reference gene, for the selection of homozygous lines when stacking multiple transgenes in rice. This method can be used to determine the stacking of up to three transgenes within four generations. Selection accuracy reached 100 % for a single locus and 92.3 % for two loci. This method confers distinct advantages over current transgenic research methodologies, as it is more accurate, rapid, and reliable. Therefore, this protocol could be used to efficiently select homozygous plants and to expedite the time- and labor-consuming processes normally required for multiple transgene stacking. This protocol was standardized for determination of multiple gene stacking in molecular breeding via marker-assisted selection.
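The core of such a qPCR zygosity call is relative quantification: a homozygote carries twice the transgene copies of a hemizygous calibrator. A minimal sketch using the standard 2^-ΔΔCt formula (the Ct values and tolerance below are invented; the paper's exact protocol may differ):

```python
def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ΔΔCt relative quantification: transgene copy number relative to a
    calibrator (e.g. a known hemizygous plant), normalised to a reference
    gene such as RBE4."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

def call_zygosity(ratio, tol=0.3):
    """A homozygote shows roughly twice the calibrator's copy number."""
    if abs(ratio - 2.0) < tol * 2.0:
        return "homozygous"
    if abs(ratio - 1.0) < tol:
        return "hemizygous"
    return "undetermined"

# hypothetical Ct values: candidate plant vs hemizygous calibrator
ratio = relative_copy_number(22.0, 20.0, 23.0, 20.0)
print(ratio, call_zygosity(ratio))  # 2.0 homozygous
```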

  17. Monitoring gait in multiple sclerosis with novel wearable motion sensors.

    PubMed

    Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J

    2017-01-01

Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). 
BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.
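Deriving temporal gait parameters from shank angular velocity typically reduces to detecting one event per gait cycle and differencing event times. A toy sketch of that idea on a synthetic signal (the event definition here, negative-to-positive zero crossings, is an illustrative choice, not the published pipeline):

```python
import numpy as np

def stride_times(gyro, fs):
    """Estimate stride times (s) from shank angular velocity (rad/s):
    negative-to-positive zero crossings mark one event per gait cycle,
    so successive events are one stride apart."""
    neg = np.signbit(gyro)
    crossings = np.where(neg[:-1] & ~neg[1:])[0] + 1   # - to + transitions
    return np.diff(crossings) / fs

# synthetic shank signal: 1 Hz gait cycle sampled at 100 Hz
fs = 100.0
t = np.arange(0, 5, 1 / fs)
gyro = np.sin(2 * np.pi * 1.0 * t + 0.3)
print(stride_times(gyro, fs))   # four strides of ~1.0 s each
```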

  18. QUICR-learning for Multi-Agent Coordination

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2006-01-01

Coordinating multiple agents that need to perform a sequence of actions to maximize a system level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
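The per-agent counterfactual reward at the heart of this family of methods can be sketched very compactly: score an agent by the system reward minus the reward obtained when that agent's action is replaced (here, removed). Everything below, including the toy congestion reward, is an illustrative reconstruction of the difference-reward idea, not the paper's code:

```python
def quicr_reward(global_reward, joint_action, agent, counterfactual=None):
    """Immediate counterfactual reward for one agent: G(z) - G(z with the
    agent's action replaced by `counterfactual`, None meaning 'absent')."""
    actual = global_reward(joint_action)
    modified = dict(joint_action)
    modified[agent] = counterfactual
    return actual - global_reward(modified)

def congestion(joint, capacity=1):
    """Toy traffic reward: each lane contributes at most `capacity` cars."""
    counts = {}
    for lane in joint.values():
        if lane is not None:
            counts[lane] = counts.get(lane, 0) + 1
    return sum(min(c, capacity) for c in counts.values())

joint = {"a1": "lane0", "a2": "lane0", "a3": "lane1"}
print(quicr_reward(congestion, joint, "a2"))  # 0: lane0 is already congested
print(quicr_reward(congestion, joint, "a3"))  # 1: a3 adds useful throughput
```

Feeding this per-agent signal into an ordinary Q-update, instead of the raw global reward, is what gives each learner a gradient aligned with its own contribution.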

  19. Development and application of a hybrid transport methodology for active interrogation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, K.; Walters, W.; Haghighat, A.

A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of gamma source distribution from (n, γ) interactions; iii) determination of gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values and taking significantly less time than a reference MCNP5 calculation. (authors)
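Steps ii and iii reduce, in the simplest one-group picture, to a multiply and a fold: the gamma source is flux times the (n, γ) cross section, and the detector current is the source folded with the pre-calculated adjoint (importance) function. All numbers below are invented single-group values on a four-cell mesh, purely to show the structure of the coupling:

```python
import numpy as np

# step ii: gamma source from neutron flux and macroscopic (n, gamma) cross section
flux = np.array([4.0, 2.5, 1.0, 0.2])        # hypothetical cell-averaged flux
sigma_ng = np.array([0.02, 0.02, 0.05, 0.05])  # hypothetical cross sections
gamma_source = sigma_ng * flux

# step iii: fold the source with a pre-calculated adjoint ("importance") function
adjoint = np.array([0.1, 0.3, 0.6, 0.9])     # importance to the detector window
current = float(gamma_source @ adjoint)
print(current)
```

Because the adjoint is pre-calculated once per detector position, re-evaluating this inner product as the source-detector assembly scans the container is cheap, which is the speed advantage the abstract claims over a full MCNP5 run.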

  20. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. 
The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that approximate each other sufficiently closely to constitute a majority for the purpose of quantifying reliability. This criterion is, simply, that if there do not exist at least three sensors having weights greater than a prescribed minimum acceptable value, then the array as a whole is deemed to have failed.
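The weighting scheme described above can be sketched as follows. The abstract does not give exact formulas, so this is one plausible reading: sensors whose reading-to-reading ratios stay near 1 (i.e. agree with the others) get high weight, the output is the weight-averaged reading, and the array is declared failed unless at least three sensors exceed a minimum weight. Readings are assumed positive.

```python
import numpy as np

def msa_output(readings, min_weight=0.5):
    """One fused reading from an array of nominally identical sensors."""
    r = np.asarray(readings, dtype=float)
    ratios = r[:, None] / r[None, :]               # every reading vs every other
    # score each sensor by how close its ratios to the others are to 1
    weights = 1.0 / (1e-9 + np.abs(ratios - 1.0).sum(axis=1))
    weights = weights / weights.max()
    if np.sum(weights > min_weight) < 3:           # majority criterion
        raise RuntimeError("sensor array failed: no trustworthy majority")
    return float(np.average(r, weights=weights))

print(msa_output([10.0, 10.1, 9.9, 25.0]))  # outlier at 25.0 is down-weighted
```

The running-sum ("learning") variant in the text would simply accumulate each sensor's weight over time steps and use the accumulated value in the average, so a chronically failing sensor fades out.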

  1. PLE in the analysis of plant compounds. Part II: One-cycle PLE in determining total amount of analyte in plant material.

    PubMed

    Dawidowicz, Andrzej L; Wianowska, Dorota

    2005-04-29

    Pressurised liquid extraction (PLE) is recognised as one of the most effective sample preparation methods. Despite the enhanced extraction power of PLE, the full recovery of an analyte from plant material may require multiple extractions of the same sample. The presented investigations show the possibility of estimating the true concentration value of an analyte in plant material employing one-cycle PLE in which plant samples of different weight are used. The performed experiments show a linear dependence between the reciprocal value of the analyte amount (E*), extracted in single-step PLE from a plant matrix, and the ratio of plant material mass to extrahent volume (m(p)/V(s)). Hence, time-consuming multi-step PLE can be replaced by a few single-step PLEs performed at different (m(p)/V(s)) ratios. The concentrations of rutin in Sambucus nigra L. and caffeine in tea and coffee estimated by means of the tested procedure are almost the same as their concentrations estimated by multiple PLE.
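The reported linear law makes the extrapolation a one-line fit: regress 1/E* against m(p)/V(s) and read the total analyte off the intercept, which corresponds to the limit of vanishing mass-to-volume ratio (exhaustive extraction). The data below are synthetic values generated from that law, purely to show the procedure:

```python
import numpy as np

# Hypothetical one-cycle PLE results at several plant-mass/extrahent-volume
# ratios, generated to follow the reported law 1/E* = a*(mp/Vs) + b.
ratio = np.array([0.02, 0.05, 0.10, 0.20])   # mp/Vs in g/mL (invented)
e_star = 1.0 / (2.0 * ratio + 0.2)           # single-step yields E* (invented)

a, b = np.polyfit(ratio, 1.0 / e_star, 1)    # fit the linear dependence
total = 1.0 / b                              # extrapolate to mp/Vs -> 0
print(round(total, 3))                       # → 5.0 (the true content here)
```

A few single-step runs at different ratios thus replace a time-consuming multi-step extraction of one sample.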

  2. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming as the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) Image spatial registration onto a common pixel grid, 2) Image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame rate, high-resolution imagery, and then finally used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using that from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base. The sensors included a high-speed visible camera and a coarser-resolution LWIR camera.
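The temporal-upsampling step amounts to warping a slow sensor's frame along the flow field by a fraction of the inter-frame interval. A deliberately tiny sketch with a constant integer flow and nearest-neighbour shifting (real flow fields are dense, sub-pixel, and estimated per pixel):

```python
import numpy as np

def warp(frame, flow, alpha):
    """Shift a frame along a constant (dy, dx) flow by fraction `alpha` of
    the inter-frame interval; nearest-neighbour toy version of flow warping."""
    dy = int(round(alpha * flow[0]))
    dx = int(round(alpha * flow[1]))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# slow sensor: frames at t=0 and t=1; the fast sensor supplies the flow estimate
frame0 = np.zeros((5, 5))
frame0[1, 1] = 1.0
flow = (2, 2)                  # object moves 2 px down and 2 px right per frame
mid = warp(frame0, flow, 0.5)  # synthesized frame at t = 0.5
print(np.argwhere(mid == 1.0))  # object now at (2, 2)
```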

  3. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. 
As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
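The basic mechanism, projecting the new right-hand side onto a subspace retained from earlier solves and letting conjugate gradients finish the job, can be shown on a dense toy system. This is a sketch of the idea, not the paper's K-orthogonal formulation: the "stored subspace" here is just the previous load case's solution, and the matrix is an invented SPD stand-in:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, max_iter=500):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

def projected_start(A, b, W):
    """Minimise the A-norm error over span(W), where the columns of W hold
    directions kept from earlier right-hand sides."""
    return W @ np.linalg.solve(W.T @ A @ W, W.T @ b)

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)           # SPD stand-in for a stiffness matrix
b1 = rng.standard_normal(50)
x1, _ = cg(A, b1, np.zeros(50))           # first load case, cold start
b2 = b1 + 0.01 * rng.standard_normal(50)  # a nearby repeated load case
_, cold_iters = cg(A, b2, np.zeros(50))
_, warm_iters = cg(A, b2, projected_start(A, b2, x1[:, None]))
print(cold_iters, warm_iters)             # the warm start needs fewer iterations
```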

  4. Multiple-input single-output closed-loop isometric force control using asynchronous intrafascicular multi-electrode stimulation.

    PubMed

    Frankel, Mitchell A; Dowden, Brett R; Mathews, V John; Normann, Richard A; Clark, Gregory A; Meek, Sanford G

    2011-06-01

    Although asynchronous intrafascicular multi-electrode stimulation (IFMS) can evoke fatigue-resistant muscle force, a priori determination of the necessary stimulation parameters for precise force production is not possible. This paper presents a proportionally-modulated, multiple-input single-output (MISO) controller that was designed and experimentally validated for real-time, closed-loop force-feedback control of asynchronous IFMS. Experiments were conducted on anesthetized felines with a Utah Slanted Electrode Array implanted in the sciatic nerve, either acutely or chronically ( n = 1 for each). Isometric forces were evoked in plantar-flexor muscles, and target forces consisted of up to 7 min of step, sinusoidal, and more complex time-varying trajectories. The controller was successful in evoking steps in force with time-to-peak of less than 0.45 s, steady-state ripple of less than 7% of the mean steady-state force, and near-zero steady-state error even in the presence of muscle fatigue, but with transient overshoot of near 20%. The controller was also successful in evoking target sinusoidal and complex time-varying force trajectories with amplitude error of less than 0.5 N and time delay of approximately 300 ms. This MISO control strategy can potentially be used to develop closed-loop asynchronous IFMS controllers for a wide variety of multi-electrode stimulation applications to restore lost motor function.

  5. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    NASA Astrophysics Data System (ADS)

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend the prior work with a focus on the examination of vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both time and time-frequency domains. Minimal order non-linear models of the experiment are successfully constructed, using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double-sided impacts, and time-varying periods suggest softening trends under the step-down torque. Non-linear models are experimentally validated by comparing results with new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak to peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
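The piecewise linear stiffness element at the core of such clearance models is simple to state: no restoring torque while the relative angle sits inside the gap, linear stiffness once contact is made. A minimal sketch (parameter values in the demo are arbitrary):

```python
def clearance_torque(theta, k, gap):
    """Restoring torque across a torsional clearance (backlash) element:
    zero inside the gap of total width `gap`, stiffness k after contact."""
    half = gap / 2.0
    if theta > half:
        return k * (theta - half)
    if theta < -half:
        return k * (theta + half)
    return 0.0

print(clearance_torque(0.05, k=100.0, gap=0.2))   # inside the gap: 0 torque
print(clearance_torque(0.30, k=100.0, gap=0.2))   # contact: 100*(0.3-0.1) = 20
```

Chaining several such elements in series is what produces the multi-clearance vibro-impact behavior the experiment probes.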

  6. DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.

    PubMed

    Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard

    2004-09-09

Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
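Strategy (a) is embarrassingly parallel: every sequence pair is an independent job, so a worker pool can process them in any order without changing the output. A toy sketch (the scoring function is a trivial stand-in for a real DIALIGN pairwise alignment, and a thread pool stands in for separate processors):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def pair_score(pair):
    """Stand-in for one pair-wise alignment job: here just the length of the
    longest common prefix, so the sketch stays self-contained."""
    a, b = pair
    score = 0
    for x, y in zip(a, b):
        if x != y:
            break
        score += 1
    return a, b, score

seqs = ["GATTACA", "GATTTCA", "GACCACA"]
pairs = list(combinations(seqs, 2))        # n*(n-1)/2 independent jobs
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(pair_score, pairs))
for a, b, s in results:
    print(a, b, s)
```

Because the jobs share no state, the same pattern scales to a process pool or a cluster scheduler unchanged.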

  7. Method for network analyzation and apparatus

    DOEpatents

    Bracht, Roger B.; Pasquale, Regina V.

    2001-01-01

    A portable network analyzer and method having multiple channel transmit and receive capability for real-time monitoring of processes which maintains phase integrity, requires low power, is adapted to provide full vector analysis, provides output frequencies of up to 62.5 MHz and provides fine sensitivity frequency resolution. The present invention includes a multi-channel means for transmitting and a multi-channel means for receiving, both in electrical communication with a software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation which steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
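The final frequency-transform step has a compact numerical analogue: the magnitude and phase collected at consecutive frequency steps form a complex response whose inverse FFT is the time-domain response. The sweep below is synthetic (a single invented 192 ns echo), purely to show the transform:

```python
import numpy as np

# Hypothetical stepped-frequency sweep: complex response H (magnitude and
# phase) at n consecutive frequency steps spanning up to 62.5 MHz.
n = 256
df = 62.5e6 / n                                 # frequency step size
f = np.arange(n) * df
H = 0.8 * np.exp(-2j * np.pi * f * 192e-9)      # one echo, 192 ns delay

h = np.fft.ifft(H)                              # complete time-domain response
t = np.arange(n) / (n * df)                     # time axis, 16 ns resolution
print(t[np.argmax(np.abs(h))])                  # → 1.92e-07 (the echo delay)
```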

  8. Parallel optoelectronic trinary signed-digit division

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.

  9. Fractal analysis of lateral movement in biomembranes.

    PubMed

    Gmachowski, Lech

    2018-04-01

Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early time dependence of the mean square displacement deviates from linearity due to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop an asymptotic relation and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule step parameters, namely the step length and time, from data concerning confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the molecule step parameters. The experimental literature data on molecular trajectories recorded at a high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper values of diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by lateral diffusion of other molecules in different membranes. The molecule step parameters are then utilized to analyze the problem of multiple visits in small compartments. The modeling of the diffusion exponent results in a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.
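The ballistic-to-diffusive crossover mentioned above is captured by a standard two-regime mean-square-displacement model tied to the step parameters. A sketch with invented step length and time (this is the textbook crossover form, not the paper's fractal model; the two branches meet continuously at the step time):

```python
import numpy as np

def msd_model(t, step_len, step_time, dim=2):
    """Mean square displacement of a walker with mean free path `step_len`
    covered in `step_time`: ballistic (v^2 t^2) well below the step time,
    diffusive (2*dim*D*t) well above it, with D = step_len**2/(2*dim*step_time).
    Both branches equal step_len**2 at t = step_time, so the model is continuous."""
    v = step_len / step_time
    D = step_len ** 2 / (2 * dim * step_time)
    return np.where(t < step_time, (v * t) ** 2, 2 * dim * D * t)

# hypothetical membrane values: 2 nm free path covered in 1 microsecond
t = np.array([1e-8, 1e-3])
print(msd_model(t, step_len=2e-9, step_time=1e-6))
```

Fitting such a curve to high-time-resolution trajectories is what lets the step length and step time be read off from confined versus unrestricted diffusion data.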

  10. Contact formation and gettering of precipitated impurities by multiple firing during semiconductor device fabrication

    DOEpatents

    Sopori, Bhushan

    2014-05-27

    Methods for contact formation and gettering of precipitated impurities by multiple firing during semiconductor device fabrication are provided. In one embodiment, a method for fabricating an electrical semiconductor device comprises: a first step that includes gettering of impurities from a semiconductor wafer and forming a backsurface field; and a second step that includes forming a front contact for the semiconductor wafer, wherein the second step is performed after completion of the first step.

  11. Characteristics of return stroke electric fields produced by lightning flashes at distances of 1 to 15 kilometers

    NASA Technical Reports Server (NTRS)

    Hopf, CH.

    1991-01-01

Electric field derivative signals from single and multiple lightning strokes are presented. For about 25 pct. of all acquired waveforms, produced by return strokes, stepped leaders or intracloud discharges, type and distance of the signal source are known from the observations by an all sky video camera system. The analysis of the electric field derivative waveforms in the time domain shows a significant difference in the impulse width between return stroke signals and those of stepped leaders and intracloud discharges. In addition, the computed amplitude density spectrum of return stroke waveforms lies a factor of 10 above that of stepped leaders and intracloud discharges in the frequency range from 50 to 500 kHz.

  12. So you think you've designed an effective recruitment protocol?

    PubMed

    Green, Cara; Vandall-Walker, Virginia

    2017-03-22

    Background Recruiting acutely ill patients to participate in research can be challenging. This paper outlines the difficulties the first author encountered in a study and the steps she took to overcome problems with research ethics, gain access to participants and implement a recruitment protocol in multiple hospitals. It also compares these steps with literature related to recruitment. Aim To inform and inspire neophyte researchers about the need for planning and resilience when dealing with recruitment challenges in multiple hospitals. Discussion The multiple enablers and barriers to the successful implementation of a hospital-based study recruitment protocol are explored based on a neophyte researcher's optimistic assumptions about this stage of the study. Conclusions Perseverance, adequately planning for contingencies, and accepting the barriers and challenges to recruitment are essential for completing one's research study and ensuring fulfilment as a researcher. Implications for practice Healthcare students carrying out research require adequate knowledge about conducting hospital-based, patient research to inform their recruitment plan. Maximising control over recruitment, allowing for adequate time to conduct data collection, and maintaining a good work ethic will help to ensure success.

  13. Can Reduced-Step Polishers Be as Effective as Multiple-Step Polishers in Enhancing Surface Smoothness?

    PubMed

    Kemaloglu, Hande; Karacolak, Gamze; Turkun, L Sebnem

    2017-02-01

    The aim of this study was to evaluate the effects of various finishing and polishing (F/P) systems on the final surface roughness of a resin composite. The hypotheses tested were: (1) reduced-step polishing systems are as effective as multiple-step systems at reducing the surface roughness of a resin composite and (2) the number of application steps in an F/P system has no effect on reducing surface roughness. Ninety discs of a nano-hybrid resin composite were fabricated and divided into nine groups (n = 10). Except for the control, all of the specimens were roughened prior to being polished with: Enamel Plus Shiny, Venus Supra, One-gloss, Sof-Lex Wheels, Super-Snap, Enhance/PoGo, Clearfil Twist Dia, and rubber cups. The surface roughness was measured and the surfaces were examined under a scanning electron microscope. Results were analyzed with analysis of variance and the Holm-Sidak multiple comparisons test (p < 0.05). Significant differences were found among the surface roughness values of all groups (p < 0.05). The smoothest surfaces were obtained under Mylar strips, and the results were not different from those of Super-Snap, Enhance/PoGo, and Sof-Lex Spiral Wheels. The roughest surfaces were produced in the rubber cup group, with results similar to those of the One-gloss, Enamel Plus Shiny, and Venus Supra groups. (1) The number of application steps has no effect on the performance of F/P systems. (2) Reduced-step polishers used after a finisher can be preferable to multiple-step systems when used on nanohybrid resin composites. (3) The effect of F/P systems on surface roughness seems to be material-dependent rather than instrument- or system-dependent. Reduced-step systems used after a prepolisher can be an acceptable alternative to multiple-step systems for enhancing the surface smoothness of a nanohybrid composite; however, their effectiveness depends on the material's properties. (J Esthet Restor Dent 29:31-40, 2017). © 2016 Wiley Periodicals, Inc.

  14. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment

    ERIC Educational Resources Information Center

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…

  15. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
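
    The transpose strategy can be sketched in NumPy. The following is an illustrative model only, not the CM-5 CMF implementation: constant-coefficient tridiagonal systems solved with the Thomas algorithm stand in for the factored flow operators, and `swapaxes` plays the role of the distributed-array transpose.

```python
import numpy as np

def thomas_const(a, b, c, d):
    """Thomas algorithm for tridiagonal systems with constant
    sub-/main-/super-diagonal coefficients a, b, c, solved along
    the last axis of the right-hand-side array d."""
    n = d.shape[-1]
    cp = np.empty(n)
    dp = np.empty_like(d)
    cp[0] = c / b
    dp[..., 0] = d[..., 0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[..., i] = (d[..., i] - a * dp[..., i - 1]) / m
    x = np.empty_like(d)
    x[..., -1] = dp[..., -1]
    for i in range(n - 2, -1, -1):
        x[..., i] = dp[..., i] - cp[i] * x[..., i + 1]
    return x

def factored_sweep(rhs, a, b, c):
    """One approximately factored implicit sweep over an (nx, ny, nz)
    grid: solve along z, transpose so y-lines are contiguous, solve,
    transpose so x-lines are contiguous, solve, and transpose back."""
    u = thomas_const(a, b, c, rhs)                              # z-solves
    u = thomas_const(a, b, c, u.swapaxes(1, 2)).swapaxes(1, 2)  # y-solves
    u = thomas_const(a, b, c, u.swapaxes(0, 2)).swapaxes(0, 2)  # x-solves
    return u
```

    On a distributed machine each `swapaxes` would be an all-to-all communication; that communication cost is what the abstract weighs against solving banded systems with data spread over the whole machine.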

  16. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.; Alam, Mohammed S.

    1998-07-01

    Highly efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use can be made of all available adders.

  17. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations.

    PubMed

    Cherri, A K; Alam, M S

    1998-07-10

    Highly efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use can be made of all available adders.

  18. Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments

    NASA Astrophysics Data System (ADS)

    Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon

    2013-04-01

    A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical configuration with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative, because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code with multiple source and receiver dipoles of finite length has been written to investigate transient EM problems. With this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has a much smaller signal amplitude than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between the step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. The maximum contrast occurs at more than 4 s, and is delayed with increasing depth of the HC layer.

  19. Filtering for networked control systems with single/multiple measurement packets subject to multiple-step measurement delays and multiple packet dropouts

    NASA Astrophysics Data System (ADS)

    Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng

    2011-03-01

    The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem by using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both single-measurement and multiple measurement packets are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally be dependent on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.

  20. Real-Time Visualization of Network Behaviors for Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Bohn, Shawn J.; Love, Douglas V.

    Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for which the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.

  1. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody-based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 and -3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective at eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to confirm directly that there was no cross-contamination with virus between column runs. To further ensure the virus safety of the product, two specific virus elimination steps were also included in the process. A solvent/detergent step based on 1% Triton X-100 rapidly inactivated a range of enveloped viruses, achieving >6 log inactivation within 1 min of a 60 min treatment time. Virus removal by the virus filtration step was also confirmed to be effective for viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.

  2. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
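
    The cancellation idea can be demonstrated on synthetic data: the variance of a displacement-based velocity estimate is inflated by roughly 2σ²/Δt² for position-noise level σ, so computing it at several time steps Δt and extrapolating the linear trend in 1/Δt² to zero recovers the noise-free variance. Below is a minimal sketch with assumed, illustrative parameters, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic particle track: a smooth "true" velocity plus Gaussian
# position noise (illustrative parameters only).
dt = 1e-3                                   # frame interval [s]
t = np.arange(20_000) * dt
v_true = np.sin(2 * np.pi * 3.0 * t)        # true velocity signal
x_meas = np.cumsum(v_true) * dt + rng.normal(0.0, 5e-4, t.size)

# Velocity from displacements over n frames: position noise inflates
# the measured velocity variance by 2*sigma^2 / (n*dt)^2.
ns = np.arange(1, 8)
var_v = np.array([np.var((x_meas[n:] - x_meas[:-n]) / (n * dt)) for n in ns])

# Fit variance against 1/(n*dt)^2 and extrapolate to zero: the
# intercept is the noise-free velocity variance.
slope, intercept = np.polyfit(1.0 / (ns * dt) ** 2, var_v, 1)
print(f"naive (n=1) variance: {var_v[0]:.3f}")
print(f"extrapolated variance: {intercept:.3f}, true: {np.var(v_true):.3f}")
```

    The single-time-step estimate is badly inflated by the noise term, while the extrapolated intercept lands close to the true velocity variance; the slope itself estimates 2σ², so the same fit also characterizes the measurement noise.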

  3. [Predicting the outcome in severe injuries: an analysis of 2069 patients from the trauma register of the German Society of Traumatology (DGU)].

    PubMed

    Rixen, D; Raum, M; Bouillon, B; Schlosser, L E; Neugebauer, E

    2001-03-01

    On hospital admission, numerous variables are documented for multiple-trauma patients. The value of these variables for predicting outcome is discussed controversially. The aim was to determine, at admission, the probability of death of multiple-trauma patients. Thus, a multivariate probability model was developed based on data obtained from the trauma registry of the Deutsche Gesellschaft für Unfallchirurgie (DGU). On hospital admission, the DGU trauma registry prospectively collects more than 30 variables. In the first step of the analysis, those variables were selected that, from the literature, were assumed to be clinical predictors of outcome. In a second step, a univariate analysis of these variables was performed. For all primary variables with univariate significance in outcome prediction, a multivariate logistic regression was performed in the third step, and a multivariate prognostic model was developed. 2069 patients from 20 hospitals were prospectively included in the trauma registry from 01.01.1993-31.12.1997 (age 39 +/- 19 years; 70.0% males; ISS 22 +/- 13; 18.6% lethality). Of the more than 30 initially documented variables, age, GCS, ISS, base excess (BE) and prothrombin time were the most important prognostic factors for predicting the probability of death, P(death). The following prognostic model was developed: P(death) = 1/(1 + e^-[k + beta1(age) + beta2(GCS) + beta3(ISS) + beta4(BE) + beta5(prothrombin time)]), where k = -0.1551, beta1 = 0.0438 with p < 0.0001, beta2 = -0.2067 with p < 0.0001, beta3 = 0.0252 with p = 0.0071, beta4 = -0.0840 with p < 0.0001 and beta5 = -0.0359 with p < 0.0001. Each of the five variables contributed significantly to the multifactorial model. These data show that age, GCS, ISS, base excess and prothrombin time are potentially important predictors for the initial identification of multiple-trauma patients at high risk of death.
Because the base excess and prothrombin time are the only variables in this multifactorial model that can be therapeutically influenced, it might be possible to use them to better guide early and aggressive therapy.
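
    Using the reported coefficients, the model can be evaluated directly. The sketch below assumes the registry's usual units (BE in mmol/l, prothrombin time as a percentage), and the example patient values are hypothetical:

```python
import math

# Coefficients as reported for the DGU trauma-registry model.
K = -0.1551
B_AGE, B_GCS, B_ISS, B_BE, B_PT = 0.0438, -0.2067, 0.0252, -0.0840, -0.0359

def p_death(age, gcs, iss, be, pt):
    """P(death) = 1 / (1 + exp(-(k + b1*age + b2*GCS + b3*ISS + b4*BE + b5*PT)))."""
    z = K + B_AGE * age + B_GCS * gcs + B_ISS * iss + B_BE * be + B_PT * pt
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: age 40, GCS 12, ISS 22, BE -4 mmol/l,
# prothrombin time 80%.
print(f"{p_death(40, 12, 22, -4.0, 80.0):.3f}")
```

    The coefficient signs match the clinical reading of the abstract: risk rises with age and ISS, and falls with higher GCS, BE and prothrombin time.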

  4. Optimizing Aircraft Trajectories with Multiple Cruise Altitudes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Sridhar, Banavar; Grabbe, Shon

    2014-01-01

    This study develops a trajectory optimization algorithm for approximately minimizing aircraft travel time and fuel burn by combining a method for computing minimum-time routes in winds on multiple horizontal planes, and an aircraft fuel burn model for generating fuel-optimal vertical profiles. It is applied to assess the potential benefits of flying user-preferred routes for commercial cargo flights operating between Anchorage, Alaska and major airports in Asia and the contiguous United States. Flying wind optimal trajectories with a fuel-optimal vertical profile reduces average fuel burn of international flights cruising at a single altitude by 1-3 percent. The potential fuel savings of performing en-route step climbs are not significant for many shorter domestic cargo flights that have only one step climb. Wind-optimal trajectories reduce fuel burn and travel time relative to the flight plan route by up to 3 percent for the domestic cargo flights. However, for trans-oceanic traffic, the fuel burn savings could be as much as 10 percent. The actual savings in operations will vary from the simulation results due to differences in the aircraft models and user defined cost indices. In general, the savings are proportional to trip length, and depend on the en-route wind conditions and aircraft types.

  5. Predicting Retention Times of Naturally Occurring Phenolic Compounds in Reversed-Phase Liquid Chromatography: A Quantitative Structure-Retention Relationship (QSRR) Approach

    PubMed Central

    Akbar, Jamshed; Iqbal, Shahid; Batool, Fozia; Karim, Abdul; Chan, Kim Wei

    2012-01-01

    Quantitative structure-retention relationships (QSRRs) have been developed successfully for naturally occurring phenolic compounds in a reversed-phase liquid chromatographic (RPLC) system. A total of 1519 descriptors were calculated from the optimized structures of the molecules using the MOPAC2009 and DRAGON software packages. The data set of 39 molecules was divided into training and external validation sets. For feature selection and mapping we used step-wise multiple linear regression (SMLR), unsupervised forward selection followed by step-wise multiple linear regression (UFS-SMLR) and artificial neural networks (ANN). Stable and robust models with significant predictive ability in terms of validation statistics were obtained, with no evidence of chance correlation. ANN models were found to be better than those of the remaining two approaches. The HNar, IDM, Mp, GATS2v, DISP and 3D-MoRSE (signals 22, 28 and 32) descriptors, based on van der Waals volume, electronegativity, mass and polarizability at the atomic level, were found to have significant effects on the retention times. The possible implications of these descriptors in RPLC are discussed. All the models proved able to predict the retention times of phenolic compounds and showed remarkable validation statistics, robustness, stability and predictive performance. PMID:23203132
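
    The step-wise selection idea behind SMLR and UFS-SMLR can be sketched as a greedy search that adds one descriptor at a time. This is a generic illustration using an ordinary least-squares residual-sum-of-squares criterion on synthetic data; the study's actual descriptor matrix and selection statistics differ.

```python
import numpy as np

def forward_stepwise(X, y, max_features=3):
    """Greedy forward selection: at each step, add the descriptor whose
    inclusion most reduces the residual sum of squares of an OLS fit."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(max_features):
        best_j, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic demo: 10 hypothetical descriptors, retention driven by two.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 1] + 2.0 * X[:, 4] + rng.normal(scale=0.1, size=200)
print(forward_stepwise(X, y, max_features=2))
```

    In practice the stopping rule uses a significance test or adjusted R² rather than a fixed feature count, and UFS first removes near-collinear descriptors before the step-wise stage.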

  6. Determining heterogeneous slip activity on multiple slip systems from single crystal orientation pole figures

    DOE PAGES

    Pagan, Darren C.; Miller, Matthew P.

    2016-09-01

    A new experimental method to determine the heterogeneity of shear strains associated with crystallographic slip in the bulk of ductile, crystalline materials is outlined. The method quantifies the time-resolved evolution of misorientation within plastically deforming crystals using single crystal orientation pole figures (SCPFs) measured in-situ with X-ray diffraction. A multiplicative decomposition of the crystal kinematics is used to interpret the distributions of lattice plane orientation observed on the SCPFs in terms of heterogeneous slip activity (shear strains) on multiple slip systems. Here, to show the method's utility, the evolution of heterogeneous slip is quantified in a silicon single crystal plastically deformed at high temperature at multiple load steps, with slip activity in sub-volumes of the crystal analyzed simultaneously.

  7. The effects of pulsed auditory stimulation on various gait measurements in persons with Parkinson's Disease.

    PubMed

    Freedland, Robert L; Festa, Carmel; Sealy, Marita; McBean, Andrew; Elghazaly, Paul; Capan, Ariel; Brozycki, Lori; Nelson, Arthur J; Rothman, Jeffrey

    2002-01-01

    The purpose of this study was to examine the Functional Ambulation Performance Score (FAP; a quantitative gait measure) in persons with Parkinson's Disease (PD) using the auditory stimulation of a metronome (ASM). Participants (n = 16; 5F/11M; range 60-84 yrs.) had a primary diagnosis of PD and were all independent ambulators. Footfall data were collected while participants walked multiple times on an electronic walkway under the following conditions: 1) PRETEST: establishing baseline cadence, 2) ASM: metronome set to baseline cadence, 3) 10ASM: metronome set to 10% above baseline cadence. FAP scores increased between PRETEST and POSTTEST. PRE/POSTTEST comparisons also indicated decreases in cycle time and double support and increases in step length and step-extremity ratio (step length/leg length). The results confirm prior findings that auditory stimulation can be used to positively influence the gait of persons with PD and suggest beneficial effects of ASM as an adjunct to dopaminergic therapy to treat gait dysfunctions in PD.

  8. Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs

    DOE PAGES

    Archibald, R.; Evans, K. J.; Salinger, A.

    2015-06-01

    The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable time-stepping methods capable of accelerating throughput on high performance computing. This study details recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitate the performance improvements.

  9. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    PubMed

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against the various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. The specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, and showed no false-positive reactions. The analytical sensitivity of the PCR was found to be 10^2 CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option, and requires clinical validation at multiple centers.

  10. Physical and social contextual influences on children's leisure-time physical activity: an ecological momentary assessment study.

    PubMed

    Dunton, Genevieve F; Liao, Yue; Intille, Stephen; Wolch, Jennifer; Pentz, Mary Ann

    2011-01-01

    This study used real-time electronic surveys delivered through mobile phones, known as Ecological Momentary Assessment (EMA), to determine whether level and experience of leisure-time physical activity differ across children's physical and social contexts. Children (N = 121; ages 9 to 13 years; 52% male, 32% Hispanic/Latino) participated in 4 days (Fri.-Mon.) of EMA during nonschool time. Electronic surveys (20 total) assessed primary activity (eg, active play/sports/exercise), physical location (eg, home, outdoors), social context (eg, friends, alone), current mood (positive and negative affect), and enjoyment. Responses were time-matched to the number of steps and minutes of moderate-to-vigorous physical activity (MVPA; measured by accelerometer) in the 30 minutes before each survey. Mean steps and MVPA were greater outdoors than at home or at someone else's house (all P < .05). Steps were greater with multiple categories of company (eg, friends and family together) than with family members only or alone (all P < .05). Enjoyment was greater outdoors than at home or someone else's house (all P < .05). Negative affect was greater when alone and with family only than friends only (all P < .05). Results describing the value of outdoor and social settings could inform context-specific interventions in this age group.

  11. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimates as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR (Sequential Importance Resampling) particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
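
    The SIR building block referred to above can be sketched as follows. This is a generic, textbook-style filter step on a toy scalar state, not the authors' dual state-parameter smoother; the smoother variant would revisit these weights over a whole time window before resampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def systematic_resample(weights):
    """Systematic resampling: map normalized weights to particle indices."""
    n = weights.size
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, n - 1)   # guard against round-off at the top

def sir_step(particles, weights, propagate, likelihood, obs):
    """One SIR step: propagate the particles, reweight them by the
    observation likelihood, and resample when the effective sample
    size drops below half the ensemble size."""
    particles = propagate(particles)
    weights = weights * likelihood(obs, particles)
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * weights.size:  # effective sample size
        idx = systematic_resample(weights)
        particles = particles[idx]
        weights = np.full(weights.size, 1.0 / weights.size)
    return particles, weights
```

    For dual state-parameter estimation, each particle would carry both a model state and a hydraulic parameter set, so resampling selects parameter sets jointly with the states they produced.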

  12. Monitoring gait in multiple sclerosis with novel wearable motion sensors

    PubMed Central

    McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.

    2017-01-01

    Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). 
Conclusion BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288
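The accuracy/precision metrics reported above can be illustrated with a toy computation. The step counts below are hypothetical, and reading accuracy as the mean error and precision as the sample variance of the error (as the abstract defines it) is one plausible interpretation:

```python
import statistics

# Hypothetical step counts (not the study's raw data):
reference = [100, 120, 90, 110]   # visually counted steps (reference standard)
sensor    = [98, 121, 93, 108]    # sensor-estimated steps

errors = [abs(s - r) for s, r in zip(sensor, reference)]
accuracy = statistics.mean(errors)      # mean absolute error: 2.0 steps
precision = statistics.variance(errors) # spread of the errors (sample variance)
```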

  13. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    PubMed

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART requires first a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying it to simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
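The CE-selection step can be caricatured very simply: for each ion pair, take the stepped-CE scan that gives the largest extracted-ion intensity. This is a sketch of the idea only (not the authors' code), and the intensities below are hypothetical:

```python
# Pick the optimal collision energy (CE) for one ion pair by comparing
# extracted-ion intensities across the serially stepped CE scans.
def optimal_ce(intensities_by_ce):
    """intensities_by_ce: dict mapping CE value -> extracted-ion intensity."""
    return max(intensities_by_ce, key=intensities_by_ce.get)

# Hypothetical stepped-CE intensities for a single ion pair:
scan = {10: 1.2e4, 20: 5.6e4, 30: 9.8e4, 40: 7.1e4, 50: 2.3e4}
best = optimal_ce(scan)  # CE = 30 gives the largest intensity here
```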

  14. A Study on Segmented Multiple-Step Forming of Doubly Curved Thick Plate by Reconfigurable Multi-Punch Dies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Young Ho; Han, Myoung Soo; Han, Jong Man

    2007-05-17

    Doubly curved thick plate forming in shipbuilding is currently performed by a thermal forming process called Line Heating, which uses gas flame torches. Because this process relies on empirical manual work, the industry is eager for an alternative way to manufacture curved thick plates for ships. This study investigated manufacturing doubly curved thick plates by multi-punch die forming. Experiments and finite element analyses were conducted to evaluate the feasibility of reconfigurable discrete die forming for thick plates. Single and segmented multiple-step forming procedures were considered from the standpoints of both forming efficiency and accuracy. A configuration of the multi-punch dies suitable for segmented multiple-step forming was also explored. As a result, segmented multiple-step forming with matched dies had limited formability when the objective shapes became complicated, while an unmatched die configuration provided a better possibility of manufacturing large curved plates for ships.

  15. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities given to cost and associated risks, such as environmental risk, health risk, or risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).

  16. Lower limb muscle moments and power during recovery from forward loss of balance in male and female single and multiple steppers.

    PubMed

    Carty, Christopher P; Cronin, Neil J; Lichtwark, Glen A; Mills, Peter M; Barrett, Rod S

    2012-12-01

    Studying recovery responses to loss of balance may help to explain why older adults are susceptible to falls. The purpose of the present study was to assess whether male and female older adults who use a single or multiple step recovery strategy differ in the proportion of lower limb strength used and power produced during the stepping phase of balance recovery. Eighty-four community-dwelling older adults (47 men, 37 women) participated in the study. Isometric strength of the ankle, knee and hip joint flexors and extensors was assessed using a dynamometer. Loss of balance was induced by releasing participants from a static forward lean (4 trials at each of 3 forward lean angles). Participants were instructed to recover with a single step and were subsequently classified as using a single or multiple step recovery strategy for each trial. (1) Females were weaker than males, and the proportion of females that were able to recover with a single step was lower than for males at each lean magnitude. (2) Multiple compared to single steppers used a significantly higher proportion of their hip extension strength and produced less knee and ankle joint peak power during stepping at the intermediate lean angle. Strength deficits in female compared to male participants may explain why a lower proportion of female participants were able to recover with a single step. The inability to generate sufficient power in the stepping limb appears to be a limiting factor in single step recovery from forward loss of balance. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  17. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on multiple regression. In experimental results, the efficiency and robustness of the proposed method are demonstrated using the PETS 2009 dataset.
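Step (3), the multiple regression from crowd features to a people count, can be sketched as an ordinary least-squares fit. The feature choices (interest-point count, crowd-region area) and all numbers below are synthetic, not from the paper:

```python
import numpy as np

# Synthetic frames: [number of space-time interest points, crowd-region area]
X = np.array([[120.0, 3400.0],
              [80.0, 2100.0],
              [200.0, 5600.0],
              [150.0, 4100.0]])
# Synthetic people counts, generated exactly as 0.2*points + 0.001*area + 1
y = np.array([28.4, 19.1, 46.6, 35.1])

A = np.hstack([X, np.ones((len(X), 1))])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # multiple linear regression
estimate = A @ coef  # reproduces y here, since the synthetic data are exactly linear
```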

  18. A Multiple Time-Step Finite State Projection Algorithm for the Solution to the Chemical Master Equation

    DTIC Science & Technology

    2006-11-30

    except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC)... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers... subintervals [9-14]. Both approximation types, system partitioning and τ leaping, have been very successful in increasing the scope of problems to which KMC

  19. Impact of favorite stimuli automatically delivered on step responses of persons with multiple disabilities during their use of walker devices.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Campodonico, Francesca; Piazzolla, Giorgia; Scalini, Lorenza; Oliva, Doretta

    2005-01-01

    Favorite stimuli were automatically delivered contingent on the performance of steps by two persons (a boy and a woman) with multiple disabilities during their use of support walker devices. The study lasted about 4 months and was carried out according to a multiple baseline design across participants. Recording concerned the participants' frequencies of steps and their indices of happiness during baseline and intervention sessions. Data showed that both participants had a significant increase in each of these two measures during the intervention phase. Implications of the findings and new research issues are discussed.

  20. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

    The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).

  1. Improving the Specificity of Plasmodium falciparum Malaria Diagnosis in High-Transmission Settings with a Two-Step Rapid Diagnostic Test and Microscopy Algorithm.

    PubMed

    Murungi, Moses; Fulton, Travis; Reyes, Raquel; Matte, Michael; Ntaro, Moses; Mulogo, Edgar; Nyehangane, Dan; Juliano, Jonathan J; Siedner, Mark J; Boum, Yap; Boyce, Ross M

    2017-05-01

    Poor specificity may negatively impact rapid diagnostic test (RDT)-based diagnostic strategies for malaria. We performed real-time PCR on a subset of subjects who had undergone diagnostic testing with a multiple-antigen (histidine-rich protein 2 and pan-lactate dehydrogenase [HRP2/pLDH]) RDT and microscopy. We determined the sensitivity and specificity of the RDT in comparison to results of PCR for the detection of Plasmodium falciparum malaria. We developed and evaluated a two-step algorithm utilizing the multiple-antigen RDT to screen patients, followed by confirmatory microscopy for those individuals with HRP2-positive (HRP2+)/pLDH-negative (pLDH-) results. In total, dried blood spots (DBS) were collected from 276 individuals. There were 124 (44.9%) individuals with an HRP2+/pLDH+ result, 94 (34.1%) with an HRP2+/pLDH- result, and 58 (21%) with a negative RDT result. The sensitivity and specificity of the RDT compared to results with real-time PCR were 99.4% (95% confidence interval [CI], 95.9 to 100.0%) and 46.7% (95% CI, 37.7 to 55.9%), respectively. Of the 94 HRP2+/pLDH- results, only 32 (34.0%) and 35 (37.2%) were positive by microscopy and PCR, respectively. The sensitivity and specificity of the two-step algorithm compared to results with real-time PCR were 95.5% (95% CI, 90.5 to 98.0%) and 91.0% (95% CI, 84.1 to 95.2%), respectively. HRP2 antigen bands demonstrated poor specificity for the diagnosis of malaria compared to that of real-time PCR in a high-transmission setting. The most likely explanation for this finding is the persistence of HRP2 antigenemia following treatment of an acute infection. The two-step diagnostic algorithm utilizing microscopy as a confirmatory test for indeterminate HRP2+/pLDH- results showed significantly improved specificity with little loss of sensitivity in a high-transmission setting. Copyright © 2017 American Society for Microbiology.
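The sensitivity and specificity figures above follow from the standard 2x2 definitions against the PCR reference. A minimal sketch with hypothetical counts (not the study's raw data):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# with real-time PCR taken as the reference standard.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
sens, spec = sens_spec(tp=90, fn=10, tn=80, fp=20)  # -> (0.9, 0.8)
```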

  2. Effective image differencing with convolutional neural networks for real-time transient hunting

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.

  3. Depicting Changes in Multiple Symptoms Over Time.

    PubMed

    Muehrer, Rebecca J; Brown, Roger L; Lanuza, Dorothy M

    2015-09-01

    Ridit analysis, an acronym for Relative to an Identified Distribution, is a method for assessing change in ordinal data and can be used to show how individual symptoms change or remain the same over time. The purposes of this article are to (a) describe how to use ridit analysis to assess change in a symptom measure using data from a longitudinal study, (b) give a step-by-step example of ridit analysis, (c) show the clinical relevance of applying ridit analysis, and (d) display results in an innovative graphic. Mean ridit effect sizes were calculated for the frequency and distress of 64 symptoms in lung transplant patients before and after transplant. Results were displayed in a bubble graph. Ridit analysis allowed us to maintain the specificity of individual symptoms and to show how each symptom changed or remained the same over time. The bubble graph provides an efficient way for clinicians to identify changes in symptom frequency and distress over time. © The Author(s) 2014.
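The ridit construction described above can be sketched directly: each ordered category's ridit is the cumulative proportion below it in the reference distribution plus half its own proportion, and a group's mean ridit summarizes where that group sits relative to the reference (0.5 means "no shift"). This is a generic textbook ridit computation, not the authors' code, and the symptom-category counts are hypothetical:

```python
def ridits(reference_counts):
    """Ridit of each ordered category relative to the reference distribution."""
    total = sum(reference_counts)
    props = [c / total for c in reference_counts]
    out, below = [], 0.0
    for p in props:
        out.append(below + p / 2)  # proportion below + half the category itself
        below += p
    return out

def mean_ridit(group_counts, reference_counts):
    """Mean ridit of a comparison group; 0.5 indicates no shift vs. reference."""
    r = ridits(reference_counts)
    total = sum(group_counts)
    return sum(c / total * ri for c, ri in zip(group_counts, r))

# Hypothetical distress counts over ordered categories (none, mild, moderate, severe):
ref = [40, 30, 20, 10]
r = ridits(ref)  # the reference's own mean ridit is 0.5 by construction
```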

  4. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfven time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chac'on, M. Pernice, J. Comput. Phys., in press (2008)

  5. Kinematic and behavioral analyses of protective stepping strategies and risk for falls among community living older adults.

    PubMed

    Bair, Woei-Nan; Prettyman, Michelle G; Beamer, Brock A; Rogers, Mark W

    2016-07-01

    Protective stepping evoked by externally applied lateral perturbations reveals balance deficits underlying falls. However, a lack of comprehensive information about the control of different stepping strategies in relation to the magnitude of perturbation limits understanding of balance control in relation to age and fall status. The aim of this study was to investigate different protective stepping strategies and their kinematic and behavioral control characteristics in response to different magnitudes of lateral waist-pulls between older fallers and non-fallers. Fifty-two community-dwelling older adults (16 fallers) reacted naturally to maintain balance in response to five magnitudes of lateral waist-pulls. The balance tolerance limit (BTL, waist-pull magnitude where protective steps transitioned from single to multiple steps), first step control characteristics (stepping frequency and counts, spatial-temporal kinematic, and trunk position at landing) of four naturally selected protective step types were compared between fallers and non-fallers at- and above-BTL. Fallers took medial-steps most frequently while non-fallers most often took crossover-back-steps. Only non-fallers varied their step count and first step control parameters by step type at the instants of step initiation (onset time) and termination (trunk position), while both groups modulated step execution parameters (single stance duration and step length) by step type. Group differences were generally better demonstrated above-BTL. Fallers primarily used a biomechanically less effective medial-stepping strategy that may be partially explained by reduced somato-sensation. Fallers did not modulate their step parameters by step type at first step initiation and termination, instances particularly vulnerable to instability, reflecting their limitations in balance control during protective stepping. Copyright © 2016. Published by Elsevier Ltd.

  6. Preparing to take the USMLE Step 1: a survey on medical students' self-reported study habits.

    PubMed

    Kumar, Andre D; Shah, Monisha K; Maley, Jason H; Evron, Joshua; Gyftopoulos, Alex; Miller, Chad

    2015-05-01

    The United States Medical Licensing Examination (USMLE) Step 1 is a computerised multiple-choice examination that tests the basic biomedical sciences. It is administered after the second year in a traditional four-year MD programme. Most Step 1 scores fall between 140 and 260, with a mean (SD) of 227 (22). Step 1 scores are an important selection criterion for residency choice. Little is known about which study habits are associated with a higher score. To identify which self-reported study habits correlate with a higher Step 1 score, a survey regarding Step 1 study habits was sent to third year medical students at Tulane University School of Medicine every year between 2009 and 2011. The survey was sent approximately 3 months after the examination. 256 out of 475 students (54%) responded. The mean (SD) Step 1 score was 229.5 (22.1). Students who estimated studying more than 8-11 h per day had higher scores (p<0.05), but there was no added benefit with additional study time. Those who reported studying <40 days achieved higher scores (p<0.05). Those who estimated completing >2000 practice questions also obtained higher scores (p<0.01). Students who reported studying in a group, spending the majority of study time on practice questions or taking >40 preparation days did not achieve higher scores. Certain self-reported study habits may correlate with a higher Step 1 score compared with others. Given the importance of achieving a high Step 1 score on residency choice, it is important to further identify which characteristics may lead to a higher score. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. Multiple-modality exercise and mind-motor training to improve mobility in older adults: A randomized controlled trial.

    PubMed

    Boa Sorte Silva, Narlon C; Gill, Dawn P; Gregory, Michael A; Bocti, John; Petrella, Robert J

    2018-03-01

    To investigate the effects of multiple-modality exercise with or without additional mind-motor training on mobility outcomes in older adults with subjective cognitive complaints. This was a 24-week randomized controlled trial with a 28-week no-contact follow-up. Community-dwelling older adults underwent a thrice -weekly, Multiple-Modality exercise and Mind-Motor (M4) training or Multiple-Modality (M2) exercise with an active control intervention (balance, range of motion and breathing exercises). Study outcomes included differences between groups at 24weeks and after the no-contact follow-up (i.e., 52weeks) in usual and dual-task (DT, i.e., serial sevens [S7] and phonemic verbal fluency [VF] tasks) gait velocity, step length and cycle time variability, as well as DT cognitive accuracy. 127 participants (mean age 67.5 [7.3] years, 71% women) were randomized to either M2 (n=64) or M4 (n=63) groups. Participants were assessed at baseline, intervention endpoint (24weeks), and study endpoint (52weeks). At 24weeks, the M2 group demonstrated greater improvements in usual gait velocity, usual step length, and DT gait velocity (VF) compared to the M4 group, and no between- or within-group changes in DT accuracy were observed. At 52weeks, the M2 group retained the gains in gait velocity and step length, whereas the M4 group demonstrated trends for improvement (p=0.052) in DT cognitive accuracy (VF). Our results suggest that additional mind-motor training was not effective to improve mobility outcomes. In fact, participants in the active control group experienced greater benefits as a result of the intervention. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Number of Nanoparticles per Cell through a Spectrophotometric Method - A key parameter to Assess Nanoparticle-based Cellular Assays.

    PubMed

    Unciti-Broceta, Juan D; Cano-Cortés, Victoria; Altea-Manzano, Patricia; Pernagallo, Salvatore; Díaz-Mochón, Juan J; Sánchez-Martín, Rosario M

    2015-05-15

    Engineered nanoparticles (eNPs) for biological and biomedical applications are produced from functionalised nanoparticles (NPs) after undergoing multiple handling steps, giving rise to an inevitable loss of NPs. Herein we present a practical method to quantify nanoparticle number per volume in an aqueous suspension using standard spectrophotometers and minute amounts of the suspensions (up to 1 μL). This method allows, for the first time, analysis of cellular uptake by reporting the number of NPs added per cell, as opposed to current methods, which are related to the solid content (w/V) of NPs. In analogy to the parameter used in viral infective assays (multiplicity of infection), we propose naming this novel parameter multiplicity of nanofection.

  9. Dynamics of multiple-goal pursuit.

    PubMed

    Louro, Maria J; Pieters, Rik; Zeelenberg, Marcel

    2007-08-01

    The authors propose and test a model of multiple-goal pursuit that specifies how individuals allocate effort among multiple goals over time. The model predicts that whether individuals decide to step up effort, coast, abandon the current goal, or switch to pursue another goal is determined jointly by the emotions that flow from prior goal progress and the proximity to future goal attainment, and proximally determined by changes in expectancies about goal attainment. Results from a longitudinal diary study and 2 experiments show that positive and negative goal-related emotions can have diametrically opposing effects on goal-directed behavior, depending on the individual's proximity to goal attainment. The findings resolve contrasting predictions about the influence of positive and negative emotions in volitional behavior, critically amend the goal gradient hypothesis, and provide new insights into the dynamics and determinants of multiple-goal pursuit.

  10. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
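The preconditioned conjugate gradient iteration at the heart of such a solver can be sketched generically. The following is a textbook PCG with a simple Jacobi (diagonal) preconditioner on a toy symmetric positive-definite system, not the three-step iteration-on-data scheme described above:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (dense, list-of-lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x, with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi (diagonal) preconditioner
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:  # converged on residual norm
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Toy 2x2 SPD system; CG converges in at most two iterations here.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)
```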

  11. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dispalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  12. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dispalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
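As an illustration of where half-step velocities come from, here is a minimal velocity Verlet integrator for a 1-D harmonic oscillator that forms a kinetic energy from v(t - Δt/2) and v(t + Δt/2). Taking their product as the kinetic energy is an assumption of this sketch, and the oscillator is a toy system, not the DPPC/water setups of the paper:

```python
def simulate(steps, dt, m=1.0, k=1.0, x0=1.0, v0=0.0):
    """Velocity Verlet for a 1-D harmonic oscillator (force f = -k x)."""
    x, v = x0, v0
    f = -k * x
    prev_half = None
    kes = []
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / m        # half-step velocity v(t + dt/2)
        if prev_half is not None:
            # Kinetic energy at the full step, built from the two surrounding
            # half-step velocities (assumed form: (m/2) * v_minus * v_plus).
            kes.append(0.5 * m * prev_half * v_half)
        x = x + dt * v_half                  # position update
        f = -k * x                           # force at the new position
        v = v_half + 0.5 * dt * f / m        # full-step velocity v(t + dt)
        prev_half = v_half
    return x, v, kes

x_f, v_f, kes = simulate(steps=1000, dt=0.01)
```

Velocity Verlet conserves the oscillator's total energy to O(dt^2), so with dt = 0.01 the final energy stays very close to the initial 0.5.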

  13. Development of the Modified Four Square Step Test and its reliability and validity in people with stroke.

    PubMed

    Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S

    2016-01-01

    Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stopwatch).

  14. Specific arithmetic calculation deficits in children with Turner syndrome.

    PubMed

    Rovet, J; Szekely, C; Hockenberry, M N

    1994-12-01

    Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of arithmetic, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.

  15. Step On It! - Workplace cardiovascular risk assessment of New York City yellow taxi drivers

    PubMed Central

    Gany, Francesca; Bari, Sehrish; Gill, Pavan; Ramirez, Julia; Ayash, Claudia; Loeb, Rebecca; Aragones, Abraham; Leng, Jennifer

    2015-01-01

    Background Multiple factors associated with taxi driving can increase the risk of cardiovascular disease (CVD) in taxi drivers. Methods This paper describes the results of Step On It!, which assessed CVD risk factors among New York City taxi drivers at John F. Kennedy International Airport. Drivers completed an intake questionnaire and free screenings for blood pressure, glucose and body mass index (BMI). Results 466 drivers participated. 9% had random plasma glucose values >200 mg/dl. 77% had elevated BMIs. Immigrants who lived in the U.S. for >10 years had 2.5 times the odds (CI: 1.1–5.9) of having high blood pressure compared to newer immigrants. Discussion Abnormalities documented in this study were significant, especially for immigrants with greater duration of residence in the U.S., and underscore the potential for elevated CVD risk in this vulnerable population, and the need to address this risk through frameworks that utilize multiple levels of intervention. PMID:25680879

  16. Porous polycarbene-bearing membrane actuator for ultrasensitive weak-acid detection and real-time chemical reaction monitoring.

    PubMed

    Sun, Jian-Ke; Zhang, Weiyi; Guterman, Ryan; Lin, Hui-Juan; Yuan, Jiayin

    2018-04-30

    Soft actuators integrating ultrasensitivity with the capability of simultaneous interaction with multiple stimuli throughout an entire event require a high level of structural complexity, adaptability, and/or multi-responsiveness, which is a great challenge. Here, we develop a porous polycarbene-bearing membrane actuator built up from ionic complexation between a poly(ionic liquid) and trimesic acid (TA). The actuator features two concurrent structure gradients, i.e., an electrostatic complexation (EC) degree and a density distribution of a carbene-NH3 adduct (CNA) along the membrane cross-section. The membrane actuator shows the highest sensitivity among state-of-the-art soft proton actuators toward acetic acid, down to the 10^-6 mol L^-1 (M) level in aqueous media. Through competing actuation of the two gradients, it is capable of monitoring an entire process of proton-involved chemical reactions comprising multiple stimuli and operational steps. The present achievement constitutes a significant step toward the real-life application of soft actuators in chemical sensing and reaction technology.

  17. Tests of stepping as indicators of mobility, balance, and fall risk in balance-impaired older adults.

    PubMed

    Cho, Be-long; Scarpace, Diane; Alexander, Neil B

    2004-07-01

    To determine the relationships between two tests of stepping ability (the maximal step length (MSL) and rapid step test (RST)) and standard tests of standing balance, gait, mobility, and functional impairment in a group of at-risk older adults. Cross-sectional study. University-based laboratory. One hundred sixty-seven mildly balance-impaired older adults recruited for a balance-training and fall-reduction program (mean age 78, range 65-90). Measures of stepping maximally (MSL, the ability to maximally step out and return to the initial position) and rapidly (RST, the time taken to step out and return in multiple directions as fast as possible); standard measures of balance, gait, and mobility including timed tandem stance (TS), tandem walk (TW, both timing and errors), timed unipedal stance (US), timed up and go (TUG), performance oriented mobility assessment (POMA), and 6-minute walk (SMW); measures of leg strength (peak knee and ankle torque and power at slow and fast speeds); self-report measures of frequent falls (>2 per 12 months), disability (Established Population for Epidemiologic Studies of the Elderly (EPESE) physical function), and confidence to avoid falls (Activity-specific Balance Confidence (ABC) Scale). Spearman and Pearson correlation, intraclass correlation coefficient, logistic regression, and linear regression were used for data analysis. MSL consistently predicted a number of self-report and performance measures at least as well as other standard balance measures. MSL correlations with EPESE physical function, ABC, TUG, and POMA scores; SMW; and peak maximum knee and ankle torque and power were at least as high as those correlations seen with TS, TW, or US. MSL score was associated with the risk of being a frequent faller. In addition, the six MSL directions were highly correlated (up to 0.96), and any one of the leg directions yielded similar relationships with functional measures and a history of falls. Relationships between RST and these measures were relatively modest. MSL is as good a predictor of mobility performance, frequent falls, self-reported function, and balance confidence as standard stance tests such as US. MSL simplified to one direction may be a useful clinical indicator of mobility, balance, and fall risk in older adults.

  18. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    PubMed

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
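
The decaying between-period correlation described above can be sketched as a block-structured matrix for one cluster. The exponential decay form and the function below are illustrative assumptions, not the authors' exact parameterisation:

```python
# Sketch of a within-cluster correlation matrix with decaying
# between-period intra-cluster correlation, for a repeated
# cross-sectional design. Names and the decay form are illustrative.

def cluster_correlation(n_periods, m_per_period, icc, decay):
    """Correlation matrix for one cluster, individuals ordered by period.

    Diagonal entries are 1; two individuals in the same period correlate
    at icc; individuals in periods j and k correlate at icc * decay**|j-k|,
    so decay = 1 recovers the constant between-period structure.
    """
    n = n_periods * m_per_period
    matrix = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if a == b:
                matrix[a][b] = 1.0
            else:
                pa, pb = a // m_per_period, b // m_per_period
                matrix[a][b] = icc * decay ** abs(pa - pb)
    return matrix

R = cluster_correlation(n_periods=3, m_per_period=2, icc=0.05, decay=0.8)
```

Plugging such a matrix into the variance of the treatment effect estimator is what drives the sample-size differences the authors report.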

  19. Method of upgrading oils containing hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline

    DOEpatents

    Baker, Eddie G.; Elliott, Douglas C.

    1993-01-01

    The present invention is a multi-stepped method of converting an oil which is produced by various biomass and coal conversion processes and contains primarily single and multiple ring hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline. The single and multiple ring hydroxyaromatic hydrocarbon compounds in a raw oil material are first deoxygenated to produce a deoxygenated oil material containing single and multiple ring aromatic compounds. Then, water is removed from the deoxygenated oil material. The next step is distillation to remove the single ring aromatic compounds as gasoline. In the third step, the multiple ring aromatics remaining in the deoxygenated oil material are cracked in the presence of hydrogen to produce a cracked oil material containing single ring aromatic compounds. Finally, the cracked oil material is then distilled to remove the single ring aromatics as gasoline.

  20. Method of upgrading oils containing hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline

    DOEpatents

    Baker, E.G.; Elliott, D.C.

    1993-01-19

    The present invention is a multi-stepped method of converting an oil which is produced by various biomass and coal conversion processes and contains primarily single and multiple ring hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline. The single and multiple ring hydroxyaromatic hydrocarbon compounds in a raw oil material are first deoxygenated to produce a deoxygenated oil material containing single and multiple ring aromatic compounds. Then, water is removed from the deoxygenated oil material. The next step is distillation to remove the single ring aromatic compounds as gasoline. In the third step, the multiple ring aromatics remaining in the deoxygenated oil material are cracked in the presence of hydrogen to produce a cracked oil material containing single ring aromatic compounds. Finally, the cracked oil material is then distilled to remove the single ring aromatics as gasoline.

  1. Demonstration of a Near and Mid-Infrared Detector Using Multiple Step Quantum Wells

    DTIC Science & Technology

    2003-09-01

    [Front-matter residue from the DTIC report documentation page; the abstract itself is not recoverable.] Thesis, Monterey, California. Approved for public release; distribution is unlimited.

  2. Alcohol and drug treatment involvement, 12-step attendance and abstinence: 9-year cross-lagged analysis of adults in an integrated health plan.

    PubMed

    Witbrodt, Jane; Ye, Yu; Bond, Jason; Chi, Felicia; Weisner, Constance; Mertens, Jennifer

    2014-04-01

    This study explored causal relationships between post-treatment 12-step attendance and abstinence at multiple data waves and examined indirect paths leading from treatment initiation to abstinence 9 years later. Adults (N = 1945) seeking help for alcohol or drug use disorders from integrated healthcare organization outpatient treatment programs were followed at 1, 5, 7 and 9 years. Path modeling with cross-lagged partial regression coefficients was used to test causal relationships. Cross-lagged paths indicated that greater 12-step attendance during years 1 and 5 was causally related to past-30-day abstinence at years 5 and 7, respectively, suggesting that 12-step attendance leads to abstinence (but not vice versa) well into the post-treatment period. Some gender differences were found in these relationships. Three significant time-lagged indirect paths emerged linking treatment duration to year-9 abstinence. Conclusions are discussed in the context of other studies using longitudinal designs. For outpatient clients, the results reinforce the value of lengthier treatment duration and 12-step attendance in year 1. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Synthesis of phenanthridinones from N-methoxybenzamides and arenes by multiple palladium-catalyzed C-H activation steps at room temperature.

    PubMed

    Karthikeyan, Jaganathan; Cheng, Chien-Hong

    2011-10-10

    Many steps make light work: substituted phenanthridinones can be obtained with high regioselectivity and in very good yields by palladium-catalyzed cyclization reactions of N-methoxybenzamides with arenes. The reaction proceeds through multiple oxidative C-H activation and C-C/C-N formation steps in one pot at room temperature, and thus provides a simple method for generating bioactive phenanthridinones. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Using Multiple-Stimulus without Replacement Preference Assessments to Increase Student Engagement and Performance

    ERIC Educational Resources Information Center

    Weaver, Adam D.; McKevitt, Brian C.; Farris, Allie M.

    2017-01-01

    Multiple-stimulus without replacement preference assessment is a research-based method for identifying appropriate rewards for students with emotional and behavioral disorders. This article presents a brief history of how this technology evolved and describes a step-by-step approach for conducting the procedure. A discussion of necessary materials…

  5. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage of discovering relevant features in a nonlinear relation among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees built over multiple predictors, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performances of the two models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
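
The evaluation scheme described (one-step-ahead versus multi-step-ahead forecasting, compared by RMSE and MAE) can be sketched with a toy AR(1) model on synthetic data; the series, the fitting method, and all names below are illustrative stand-ins, not the study's SARIMA/Random Forest comparison.

```python
# Toy forecast evaluation: fit AR(1) by least squares, then compare
# one-step-ahead forecasts (conditioning on each observed value) with
# recursive multi-step-ahead forecasts, scored by RMSE and MAE.
import math
import random

def fit_ar1(series):
    """Least-squares AR(1) coefficient (zero-mean series assumed)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

random.seed(0)
series = [0.0]
for _ in range(199):                      # synthetic AR(1) with phi = 0.7
    series.append(0.7 * series[-1] + random.gauss(0.0, 1.0))
train, test = series[:150], series[150:]
phi = fit_ar1(train)

# One-step-ahead: each forecast conditions on the observed previous value.
one_step = [phi * prev for prev in series[149:199]]
# Multi-step-ahead: recurse from the last training value only.
multi_step, last = [], train[-1]
for _ in test:
    last = phi * last
    multi_step.append(last)
```

One would expect the one-step forecasts to track the series closely while the recursive multi-step forecasts decay toward the series mean, which is why the two horizons are scored separately.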

  6. TEMPO-Assisted Free Radical-Initiated Peptide Sequencing Mass Spectrometry (FRIPS MS) in Q-TOF and Orbitrap Mass Spectrometers: Single-Step Peptide Backbone Dissociations in Positive Ion Mode

    NASA Astrophysics Data System (ADS)

    Jang, Inae; Lee, Sun Young; Hwangbo, Song; Kang, Dukjin; Lee, Hookeun; Kim, Hugh I.; Moon, Bongjin; Oh, Han Bin

    2017-01-01

    The present study demonstrates that one-step peptide backbone fragmentations can be achieved using the TEMPO [2-(2,2,6,6-tetramethyl piperidine-1-oxyl)]-assisted free radical-initiated peptide sequencing (FRIPS) mass spectrometry in a hybrid quadrupole time-of-flight (Q-TOF) mass spectrometer and a Q-Exactive Orbitrap instrument in positive ion mode, in contrast to two-step peptide fragmentation in an ion-trap mass spectrometer (reference Anal. Chem. 85, 7044-7051 (30)). In the hybrid Q-TOF and Q-Exactive instruments, higher collisional energies can be applied to the target peptides, compared with the low collisional energies applied by the ion-trap instrument. The higher energy deposition and the additional multiple collisions in the collision cell in both instruments appear to result in one-step peptide backbone dissociations in positive ion mode. This new finding clearly demonstrates that the TEMPO-assisted FRIPS approach is a very useful tool in peptide mass spectrometry research.

  7. Impact of heat treatment on the physical properties of noncrystalline multisolute systems concentrated in frozen aqueous solutions.

    PubMed

    Izutsu, Ken-ichi; Yomota, Chikako; Kawanishi, Toru

    2011-12-01

    The purpose of this study was to elucidate the effect of heat treatment on the miscibility of multiple concentrated solutes that mimic biopharmaceutical formulations in frozen solutions. The first heating thermal analysis of frozen solutions containing either a low-molecular-weight saccharide (e.g., sucrose, trehalose, and glucose) or a polymer (e.g., polyvinylpyrrolidone and dextran) and their mixtures from -70°C showed a single transition at the glass transition temperature of the maximally freeze-concentrated solution (Tg') that indicated mixing of the freeze-concentrated multiple solutes. The heat treatment of single-solute and various polymer-rich mixture frozen solutions at temperatures far above their Tg' induced additional ice crystallization that shifted the transitions upward in the following scan. Contrarily, the heat treatment of frozen disaccharide-rich solutions induced two-step heat flow changes (Tg' splitting) that suggested separation of the solutes into multiple concentrated noncrystalline phases differing in solute composition. The extent of the Tg' splitting depended on the heat treatment temperature and time. Two-step glass transition was observed in some sucrose and dextran mixture solids lyophilized after the heat treatment. Increasing mobility of solute molecules during the heat treatment should allow spatial reordering of some concentrated solute mixtures into thermodynamically favorable multiple phases. Copyright © 2011 Wiley-Liss, Inc.

  8. Magnetic timing valves for fluid control in paper-based microfluidics.

    PubMed

    Li, Xiao; Zwanenburg, Philip; Liu, Xinyu

    2013-07-07

    Multi-step analytical tests, such as an enzyme-linked immunosorbent assay (ELISA), require delivery of multiple fluids into a reaction zone and counting the incubation time at different steps. This paper presents a new type of paper-based magnetic valves that can count the time and turn on or off a fluidic flow accordingly, enabling timed fluid control in paper-based microfluidics. The timing capability of these valves is realized using a paper timing channel with an ionic resistor, which can detect the event of a solution flowing through the resistor and trigger an electromagnet (through a simple circuit) to open or close a paper cantilever valve. Based on this principle, we developed normally-open and normally-closed valves with a timing period up to 30.3 ± 2.1 min (sufficient for an ELISA on paper-based platforms). Using the normally-open valve, we performed an enzyme-based colorimetric reaction commonly used for signal readout of ELISAs, which requires a timed delivery of an enzyme substrate to a reaction zone. This design adds a new fluid-control component to the tool set for developing paper-based microfluidic devices, and has the potential to improve the user-friendliness of these devices.

  9. Brief International Cognitive Assessment for MS (BICAMS): international standards for validation.

    PubMed

    Benedict, Ralph H B; Amato, Maria Pia; Boringa, Jan; Brochet, Bruno; Foley, Fred; Fredrikson, Stan; Hamalainen, Paivi; Hartung, Hans; Krupp, Lauren; Penner, Iris; Reder, Anthony T; Langdon, Dawn

    2012-07-16

    An international expert consensus committee recently recommended a brief battery of tests for cognitive evaluation in multiple sclerosis. The Brief International Cognitive Assessment for MS (BICAMS) battery includes tests of mental processing speed and memory. Recognizing that resources for validation will vary internationally, the committee identified validation priorities, to facilitate international acceptance of BICAMS. Practical matters pertaining to implementation across different languages and countries were discussed. Five steps to achieve optimal psychometric validation were proposed. In Step 1, test stimuli should be standardized for the target culture or language under consideration. In Step 2, examiner instructions must be standardized and translated, including all information from manuals necessary for administration and interpretation. In Step 3, samples of at least 65 healthy persons should be studied for normalization, matched to patients on demographics such as age, gender and education. The objective of Step 4 is test-retest reliability, which can be investigated in a small sample of MS and/or healthy volunteers over 1-3 weeks. Finally, in Step 5, criterion validity should be established by comparing MS and healthy controls. At this time, preliminary studies are underway in a number of countries as we move forward with this international assessment tool for cognition in MS.

  10. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
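
The third step above rests on Pareto optimality. As a minimal sketch of that concept (not the authors' parallel multiobjective genetic algorithm), the following filters a set of candidate designs down to the non-dominated set, assuming all objectives are minimised; maximising profit would be handled by negating it first.

```python
# Pareto-front filtering for multiobjective design selection.
# All objective vectors are assumed to be minimized.

def dominates(a, b):
    """True if design a is no worse than b in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (environmental impact, negated profit) pairs for five designs.
designs = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
front = pareto_front(designs)
```

The decision-maker then chooses among the surviving trade-off designs according to his or her preferences, exactly as the procedure's visualization step intends.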

  11. Reactive stepping behaviour in response to forward loss of balance predicts future falls in community-dwelling older adults.

    PubMed

    Carty, Christopher P; Cronin, Neil J; Nicholson, Deanne; Lichtwark, Glen A; Mills, Peter M; Kerr, Graham; Cresswell, Andrew G; Barrett, Rod S

    2015-01-01

    A fall occurs when an individual experiences a loss of balance from which they are unable to recover. Assessment of balance recovery ability in older adults may therefore help to identify individuals at risk of falls. The purpose of this 12-month prospective study was to assess whether the ability to recover from a forward loss of balance with a single step, across a range of lean magnitudes, was predictive of falls. Two hundred and one community-dwelling older adults, aged 65-90 years, underwent baseline testing of sensori-motor function and balance recovery ability followed by 12-month prospective falls evaluation. Balance recovery ability was defined by whether participants required single or multiple steps to recover from forward loss of balance at three lean magnitudes, as well as by the maximum lean magnitude participants could recover from with a single step. Forty-four (22%) participants experienced one or more falls during the follow-up period. Maximal recoverable lean magnitude and the use of multiple steps to recover at the 15% body weight (BW) and 25%BW lean magnitudes significantly predicted a future fall (odds ratios 1.08-1.26). The Physiological Profile Assessment, an established tool that assesses a variety of sensori-motor aspects of falls risk, was also predictive of falls (odds ratios 1.22 and 1.27, respectively), whereas age, sex, postural sway and timed up and go were not predictive. Reactive stepping behaviour in response to forward loss of balance and the Physiological Profile Assessment are independent predictors of a future fall in community-dwelling older adults. Exercise interventions designed to improve reactive stepping behaviour may protect against future falls. © The Author 2014. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
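
The odds ratios above come from logistic regression models. Purely as a reminder of what an odds ratio measures, the sketch below computes one from a hypothetical 2x2 table; the counts are invented for illustration and are not the study's data.

```python
# Odds ratio from a hypothetical 2x2 table (exposure vs event).
# The study itself used logistic regression; this is only the
# underlying quantity, computed from invented counts.

def odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Odds of the event among the exposed divided by odds among the unexposed."""
    a = exposed_events
    b = exposed_total - exposed_events        # exposed non-events
    c = unexposed_events
    d = unexposed_total - unexposed_events    # unexposed non-events
    return (a / b) / (c / d)

# e.g. 10 of 30 multiple-steppers fell vs 5 of 30 single-steppers (invented)
example_or = odds_ratio(10, 30, 5, 30)
```

An odds ratio above 1, as reported for multiple-step recovery, means the exposure is associated with higher odds of a future fall.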

  12. Pooling Data from Multiple Longitudinal Studies: The Role of Item Response Theory in Integrative Data Analysis

    PubMed Central

    Curran, Patrick J.; Hussong, Andrea M.; Cai, Li; Huang, Wenjing; Chassin, Laurie; Sher, Kenneth J.; Zucker, Robert A.

    2010-01-01

    There are a number of significant challenges encountered when studying development over an extended period of time including subject attrition, changing measurement structures across group and developmental period, and the need to invest substantial time and money. Integrative data analysis is an emerging set of methodologies that overcomes many of the challenges of single sample designs through the pooling of data drawn from multiple existing developmental studies. This approach is characterized by a host of advantages, but this also introduces several new complexities that must be addressed prior to broad adoption by developmental researchers. In this paper we focus on methods for fitting measurement models and creating scale scores using data drawn from multiple longitudinal studies. We present findings from the analysis of repeated measures of internalizing symptomatology that were pooled from three existing developmental studies. We describe and demonstrate each step in the analysis and we conclude with a discussion of potential limitations and directions for future research. PMID:18331129

  13. A robust approach towards unknown transformations, region adjacency graphs, multigraph matching, and segmentation of video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision, multiple object detection and tracking in real time is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Detection comes before tracking: objects are first detected in the video, and their representation is the step that enables tracking. Recognizing multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. Registration alone, however, cannot handle events that result in potentially missed objects. To address such problems, this paper proposes a novel approach: video frames are segmented using region adjacency graphs of visual appearance and geometric properties; multigraph matching is then performed between the graph sequences; and a matched-region labeling is obtained by a proposed graph-coloring algorithm that assigns foreground labels to the respective regions. The proposed design is robust to unknown transformations and offers a significant improvement over existing work on real-time detection of multiple moving objects.

  14. Fast matrix multiplication and its algebraic neighbourhood

    NASA Astrophysics Data System (ADS)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
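
Strassen's 1969 result mentioned above replaces the eight block products of the classical 2x2 scheme with seven, giving exponent log2(7) ≈ 2.807. A minimal pure-Python sketch for power-of-two sizes follows; real implementations cut over to the classical algorithm below a size threshold, so this is illustrative rather than practical.

```python
# Strassen's recursive matrix multiplication for n x n matrices,
# n a power of 2, using plain lists of lists.

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def split(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # The seven Strassen products replacing the classical eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Each level of recursion trades one multiplication for extra additions, which is exactly the "curse of recursion" the survey discusses: the asymptotic gain only pays off at very large sizes.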

  15. Method and apparatus for determining nutrient stimulation of biological processes

    DOEpatents

    Colwell, F.S.; Geesey, G.G.; Gillis, R.J.; Lehman, R.M.

    1997-11-11

    A method and apparatus is described for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for said microorganisms in said sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses. 5 figs.

  16. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  17. Method and apparatus for determining nutrient stimulation of biological processes

    DOEpatents

    Colwell, Frederick S.; Geesey, Gill G.; Gillis, Richard J.; Lehman, R. Michael

    1999-01-01

    A method and apparatus for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for said microorganisms in said sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses.

  18. Method and apparatus for determining nutrient stimulation of biological processes

    DOEpatents

    Colwell, F.S.; Geesey, G.G.; Gillis, R.J.; Lehman, R.M.

    1999-07-13

    A method and apparatus are disclosed for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for microorganisms in the sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses. 5 figs.

  19. Method and apparatus for determining nutrient stimulation of biological processes

    DOEpatents

    Colwell, Frederick S.; Geesey, Gill G.; Gillis, Richard J.; Lehman, R. Michael

    1997-01-01

    A method and apparatus for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for said microorganisms in said sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses.

  20. Gemi: PCR Primers Prediction from Multiple Alignments

    PubMed Central

    Sobhy, Haitham; Colson, Philippe

    2012-01-01

    Designing primers and probes for polymerase chain reaction (PCR) is a preliminary and critical step that requires the identification of highly conserved regions in a given set of sequences. This task can be challenging if the targeted sequences display a high level of diversity, as frequently encountered in microbiologic studies. We developed Gemi, an automated, fast, and easy-to-use bioinformatics tool with a user-friendly interface to design primers and probes based on multiple aligned sequences. This tool can be used for real-time and conventional PCR and can efficiently handle large sets of long sequences. PMID:23316117
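    Gemi's internal algorithm is not reproduced here, but the core subtask it addresses, locating fully conserved, gap-free windows in a multiple alignment as candidate primer-binding sites, can be sketched as follows (the function name and length threshold are illustrative assumptions, not Gemi's API):

```python
def conserved_windows(alignment, min_len=18):
    """Return (start, end) spans where every aligned sequence agrees
    and no gaps occur: candidate primer-binding regions."""
    length = len(alignment[0])
    spans, start = [], None
    for i in range(length):
        column = {seq[i] for seq in alignment}
        ok = len(column) == 1 and '-' not in column
        if ok and start is None:
            start = i                      # a conserved run begins
        elif not ok and start is not None:
            if i - start >= min_len:
                spans.append((start, i))   # run long enough to keep
            start = None
    if start is not None and length - start >= min_len:
        spans.append((start, length))      # run reaching the end
    return spans
```

    A real primer-design tool would additionally score windows by melting temperature, GC content, and self-complementarity; this sketch shows only the conservation scan.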

  1. Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Amity; Govind, Niranjan; Subramanian, Lalitha

    Periodic density functional theory calculations were performed to investigate the origin of the favorable para-selective nitration of toluene exhibited by zeolite H-beta with acetyl nitrate nitration agent. Energy calculations were performed for each of the 32 crystallographically unique Bronsted acid sites of a beta polymorph B zeolite unit cell with multiple Bronsted acid sites of comparable stability. However, one particular aluminum T-site with three favorable Bronsted site oxygens embedded in a straight 12-T channel wall provides multiple favorable proton transfer sites. Transition state searches around this aluminum site were performed to determine the barrier to reaction for both para and ortho nitration of toluene. A three-step process was assumed for the nitration of toluene with two organic intermediates: the pi- and sigma-complexes. The rate limiting step is the proton transfer from the sigma-complex to a zeolite Bronsted site. The barrier for this step in ortho nitration is shown to be nearly 2.5 times that in para nitration. This discrepancy appears to be due to steric constraints imposed by the curvature of the large 12-T pore channels of beta and the toluene methyl group in the ortho approach that are not present in the para approach.

  2. Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models

    PubMed Central

    Miller, Craig R.; Joyce, Paul; Wichman, Holly A.

    2011-01-01

    Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (104 and 106), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559

  3. The Lunar Laser Communication Demonstration: NASA's First Step Toward Very High Data Rate Support of Science and Exploration Missions

    NASA Astrophysics Data System (ADS)

    Boroson, Don M.; Robinson, Bryan S.

    2014-12-01

    Future NASA missions for both Science and Exploration will have needs for much higher data rates than are presently available, even with NASA's highly-capable Space- and Deep-Space Networks. As a first step towards this end, for one month in late 2013, NASA's Lunar Laser Communication Demonstration (LLCD) successfully demonstrated for the first time high-rate duplex laser communications between a satellite in lunar orbit, the Lunar Atmosphere and Dust Environment Explorer (LADEE), and multiple ground stations on the Earth. It constituted the longest-range laser communication link ever built and demonstrated the highest communication data rates ever achieved to or from the Moon.

  4. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
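    The iterated-freeze idea can be illustrated with a toy spring layout: each batch of nodes is relaxed against the already-frozen ones, so force computations involve only the active batch. All names and parameters below are illustrative assumptions, not the authors' system:

```python
import random

def spring_step(active, frozen, pos, k=0.1, rest=1.0):
    """One force iteration: only 'active' nodes move; 'frozen' nodes
    exert forces but stay fixed, which is the freeze technique's saving."""
    for a in active:
        fx = fy = 0.0
        for b in active | frozen:
            if b == a:
                continue
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = max((dx * dx + dy * dy) ** 0.5, 1e-9)
            f = k * (rest - d) / d       # spring toward the rest length
            fx += f * dx
            fy += f * dy
        pos[a] = (pos[a][0] + fx, pos[a][1] + fy)

def freeze_layout(batches, iters=50):
    """Lay out batches of nodes in turn, freezing each finished batch
    before the next one is placed."""
    random.seed(0)
    pos, frozen = {}, set()
    for batch in batches:
        for n in batch:
            pos[n] = (random.random(), random.random())
        for _ in range(iters):
            spring_step(set(batch), frozen, pos)
        frozen |= set(batch)
    return pos
```

    With classification deciding the batches, the per-iteration force cost drops from all-pairs over the whole graph to active-versus-(active plus frozen), which is the reduction the paper exploits.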

  5. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications are presented of a Lanczos algorithm on high performance computers. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as: variable band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
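    The matrix-vector multiply singled out above is the hot kernel inside the Lanczos three-term recurrence itself, which can be sketched as follows (a textbook version without reorthogonalization, not the paper's optimized implementation):

```python
def lanczos(matvec, v0, m):
    """Build an m-step Lanczos tridiagonalization of a symmetric operator:
    returns the diagonal (alphas) and off-diagonal (betas) of T, using
    only matrix-vector products with the operator."""
    norm = sum(x * x for x in v0) ** 0.5
    v = [x / norm for x in v0]
    v_prev = [0.0] * len(v0)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(v)                      # the expensive kernel
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        # Orthogonalize against the two previous Lanczos vectors.
        w = [wi - alpha * vi - beta * pi
             for wi, vi, pi in zip(w, v, v_prev)]
        alphas.append(alpha)
        beta = sum(x * x for x in w) ** 0.5
        if beta < 1e-12:                   # invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas
```

    The eigenvalues of the small tridiagonal T approximate extreme eigenvalues of the operator; in a shifted-and-inverted structural eigensolver the matvec is replaced by the factor/solve steps the abstract lists.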

  6. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER class rover is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
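    The Markov-chain propagation of expected counts through a transport matrix can be illustrated with a toy two-component example (the matrix values and component names are invented for illustration, not mission data):

```python
def propagate(counts, transport, steps):
    """Propagate expected viable-microorganism counts through a
    component-to-component transport matrix over discrete time steps.
    transport[i][j] is the expected fraction moving from component i
    to component j per step (rows may sum to < 1; the deficit is
    organisms lost or inactivated)."""
    for _ in range(steps):
        counts = [
            sum(counts[i] * transport[i][j] for i in range(len(counts)))
            for j in range(len(counts))
        ]
    return counts

# Hypothetical two-component chain: [coring tool, sample]; 10% of the
# tool's burden transfers to the sample per step, and the sample
# retains everything it receives.
T = [[0.9, 0.1],
     [0.0, 1.0]]
```

    Because expectation is linear, the expected burden on each component at step t is just the initial vector multiplied by t powers of the transport matrix, which is what makes the analysis tractable step by step.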

  7. A time-driven, activity-based costing methodology for determining the costs of red blood cell transfusion in patients with beta thalassaemia major.

    PubMed

    Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M

    2018-04-10

    To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.
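    The arithmetic at the heart of time-driven activity-based costing is simple: each observed activity's duration is multiplied by the capacity cost rate of the resource performing it, plus consumables. A minimal sketch in which all activity names, times, and rates are invented placeholders, not the study's data:

```python
def transfusion_cost(activities, rates):
    """Time-driven ABC: the cost of one transfusion episode is the sum,
    over process-map activities, of duration (minutes) times the
    per-minute capacity cost rate of the resource performing it,
    plus any consumables attached to that activity."""
    total = 0.0
    for name, minutes, resource, consumables in activities:
        total += minutes * rates[resource] + consumables
    return total

# Hypothetical rates ($ per minute of capacity) and activities.
rates = {"nurse": 1.2, "lab_scientist": 1.5}
activities = [
    ("crossmatch", 30, "lab_scientist", 15.0),   # reagents included
    ("bedside_check", 10, "nurse", 0.0),
    ("administer_unit", 120, "nurse", 250.0),    # RBC unit cost included
]
```

    The study's sensitivity analyses correspond to re-running such a calculation while varying the timings, staffing mixes, and branch probabilities recorded in the 31 process maps.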

  8. Sensitivity Equation Derivation for Transient Heat Transfer Problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene; Chien, Ta-Cheng; Sheen, Jeenson

    2004-01-01

    The focus of the paper is on the derivation of sensitivity equations for transient heat transfer problems modeled by different discretization processes. Two examples will be used in this study to facilitate the discussion. The first example is a coupled, transient heat transfer problem that simulates the press molding process in fabrication of composite laminates. These state equations are discretized into standard h-version finite elements and solved by a multiple step, predictor-corrector scheme. The sensitivity analysis results based upon the direct and adjoint variable approaches will be presented. The second example is a nonlinear transient heat transfer problem solved by a p-version time-discontinuous Galerkin's Method. The resulting matrix equation of the state equation is simply in the form of Ax = b, representing a single step, time marching scheme. A direct differentiation approach will be used to compute the thermal sensitivities of a sample 2D problem.
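    For the single-step matrix form mentioned in the second example, the direct differentiation approach amounts to a one-line derivation (standard sensitivity analysis, stated here generically rather than for the paper's specific discretization):

```latex
% Differentiate A(p)\,x(p) = b(p) with respect to a design parameter p:
\frac{\partial A}{\partial p}\,x + A\,\frac{\partial x}{\partial p}
   = \frac{\partial b}{\partial p}
\quad\Longrightarrow\quad
A\,\frac{\partial x}{\partial p}
   = \frac{\partial b}{\partial p} - \frac{\partial A}{\partial p}\,x .
```

    Because the factorization of A from the state solve can be reused, each additional sensitivity costs only one extra right-hand-side solve.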

  9. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.
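    The multiple time-step idea underlying MTS-MD can be illustrated with a textbook reversible RESPA step, where cheap fast forces are integrated with a small inner step and an expensive slow force with a large outer step. This sketch omits the paper's isokinetic OIN thermostat and GSFE extrapolation entirely and shows only the time-step splitting:

```python
def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One reversible RESPA outer step for a single degree of freedom:
    half-kick with the slow force, n_inner velocity-Verlet substeps
    driven by the fast force, then the closing slow half-kick."""
    v = v + 0.5 * dt_outer * f_slow(x) / m
    dt = dt_outer / n_inner
    for _ in range(n_inner):
        v = v + 0.5 * dt * f_fast(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m
    return x, v
```

    In plain RESPA the outer step is capped by resonance with the fast motions; the isokinetic-thermostat machinery in the paper is precisely what removes that cap and permits the 1-4 ps outer steps quoted above.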

  10. Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.

    PubMed

    Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro

    2018-06-13

    The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.

  11. The CanOE strategy: integrating genomic and metabolic contexts across multiple prokaryote genomes to find candidate genes for orphan enzymes.

    PubMed

    Smith, Adam Alexander Thil; Belda, Eugeni; Viari, Alain; Medigue, Claudine; Vallenet, David

    2012-05-01

    Of all biochemically characterized metabolic reactions formalized by the IUBMB, over one out of four have yet to be associated with a nucleic or protein sequence, i.e. are sequence-orphan enzymatic activities. Few bioinformatics annotation tools are able to propose candidate genes for such activities by exploiting context-dependent rather than sequence-dependent data, and none are readily accessible and propose result integration across multiple genomes. Here, we present CanOE (Candidate genes for Orphan Enzymes), a four-step bioinformatics strategy that proposes ranked candidate genes for sequence-orphan enzymatic activities (or orphan enzymes for short). The first step locates "genomic metabolons", i.e. groups of co-localized genes coding proteins catalyzing reactions linked by shared metabolites, in one genome at a time. These metabolons can be particularly helpful for aiding bioanalysts to visualize relevant metabolic data. In the second step, they are used to generate candidate associations between un-annotated genes and gene-less reactions. The third step integrates these gene-reaction associations over several genomes using gene families, and summarizes the strength of family-reaction associations by several scores. In the final step, these scores are used to rank members of gene families which are proposed for metabolic reactions. These associations are of particular interest when the metabolic reaction is a sequence-orphan enzymatic activity. Our strategy found over 60,000 genomic metabolons in more than 1,000 prokaryote organisms from the MicroScope platform, generating candidate genes for many metabolic reactions, more than 70 of which are distinct orphan reactions. A computational validation of the approach is discussed. Finally, we present a case study on the anaerobic allantoin degradation pathway in Escherichia coli K-12.

  12. Validation of a One-Step Method for Extracting Fatty Acids from Salmon, Chicken and Beef Samples.

    PubMed

    Zhang, Zhichao; Richardson, Christine E; Hennebelle, Marie; Taha, Ameer Y

    2017-10-01

    Fatty acid extraction methods are time-consuming and expensive because they involve multiple steps and copious amounts of extraction solvents. In an effort to streamline the fatty acid extraction process, this study compared the standard Folch lipid extraction method to a one-step method involving a column that selectively elutes the lipid phase. The methods were tested on raw beef, salmon, and chicken. Compared to the standard Folch method, the one-step extraction process generally yielded statistically insignificant differences in chicken and salmon fatty acid concentrations, percent composition and weight percent. Initial testing showed that beef stearic, oleic and total fatty acid concentrations were significantly lower by 9-11% with the one-step method as compared to the Folch method, but retesting on a different batch of samples showed a significant 4-8% increase in several omega-3 and omega-6 fatty acid concentrations with the one-step method relative to the Folch. Overall, the findings reflect the utility of a one-step extraction method for routine and rapid monitoring of fatty acids in chicken and salmon. Inconsistencies in beef concentrations, although minor (within 11%), may be due to matrix effects. A one-step fatty acid extraction method has broad applications for rapidly and routinely monitoring fatty acids in the food supply and formulating controlled dietary interventions. © 2017 Institute of Food Technologists®.

  13. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces.

    PubMed

    Pantanowitz, Liron; Labranche, Wayne; Lareau, William

    2010-05-26

    Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SAAS), and alternate solutions for outreach connectivity are discussed.

  14. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces

    PubMed Central

    Pantanowitz, Liron; LaBranche, Wayne; Lareau, William

    2010-01-01

    Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS–EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SAAS), and alternate solutions for outreach connectivity are discussed. PMID:20805958

  15. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variations of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization, and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first order derivatives from the functional form were found to improve the computational speed up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
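    A greatly simplified stand-in for the fine-tuning step above: fit non-negative fluence weights so that a weighted sum of per-energy depth-dose basis curves matches a measured PDD. Projected gradient descent replaces the paper's analytic-derivative line search, and all data below are synthetic:

```python
def fit_spectrum(basis, target, iters=2000, lr=0.01):
    """Fit non-negative spectral weights w so that sum_k w[k] * basis[k]
    matches the target depth-dose curve, by projected gradient descent
    on the squared error.  basis[k][d] is the depth-dose value of
    energy bin k at depth index d."""
    nb, nd = len(basis), len(target)
    w = [1.0 / nb] * nb
    for _ in range(iters):
        model = [sum(w[k] * basis[k][d] for k in range(nb))
                 for d in range(nd)]
        resid = [model[d] - target[d] for d in range(nd)]
        for k in range(nb):
            grad = 2.0 * sum(resid[d] * basis[k][d] for d in range(nd))
            w[k] = max(0.0, w[k] - lr * grad)   # project onto w >= 0
    return w
```

    In the paper the "basis" contributions come out of the CCC dose engine and the derivatives are obtained analytically from it, which is where the 20-fold speedup over derivative-free tuning comes from.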

  16. An Examination of Four Traditional School Physical Activity Models on Children's Step Counts and MVPA.

    PubMed

    Brusseau, Timothy A; Kulinna, Pamela H

    2015-03-01

    Schools have been identified as primary societal institutions for promoting children's physical activity (PA); however, limited evidence exists demonstrating which traditional school-based PA models maximize children's PA. The purpose of this study was to compare step counts and moderate-to-vigorous physical activity (MVPA) across 4 traditional school PA models. Step count and MVPA data were collected on 5 consecutive school days from 298 children (mean age = 10.0 ± 0.6 years; 55% female) in Grade 5. PA was measured using the NL-1000 piezoelectric pedometer. The 4 models included (a) recess only, (b) multiple recesses, (c) recess and physical education (PE), and (d) multiple recesses and PE. Children accumulated the greatest PA on days that they had PE and multiple recess opportunities (5,242 ± 1,690 steps; 15.3 ± 8.8 min of MVPA). Children accumulated the least amount of PA on days with only 1 recess opportunity (3,312 ± 445 steps; 7.1 ± 2.3 min of MVPA). Across all models, children accumulated an additional 1,140 steps and 4.1 min of MVPA on PE days. It appears that PE is the most important school PA opportunity for maximizing children's PA. However, on days without PE, a 2nd recess can increase school PA by 20% (Δ = 850 steps; 3.8 min of MVPA).

  17. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
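    A 1D analog conveys the non-iterative "forward tracing" idea: locate interface crossings directly from the current field, then assign each node its signed distance to the nearest crossing in a single pass, with no pseudo-time iteration. This is an illustrative reduction only; the paper's method works on 3D multi-resolution grids:

```python
def reinitialize(phi, dx=1.0):
    """Single-pass 1D reinitialization: find interface crossings by
    linear interpolation of the sign change, then reset each node to
    the signed distance to the nearest crossing."""
    n = len(phi)
    crossings = []
    for i in range(n - 1):
        if phi[i] == 0.0:
            crossings.append(i * dx)
        elif phi[i] * phi[i + 1] < 0.0:
            # Linear interpolation of the zero-level location.
            t = phi[i] / (phi[i] - phi[i + 1])
            crossings.append((i + t) * dx)
    if phi[-1] == 0.0:
        crossings.append((n - 1) * dx)
    if not crossings:
        return list(phi)
    out = []
    for i in range(n):
        d = min(abs(i * dx - c) for c in crossings)
        out.append(d if phi[i] > 0 else -d if phi[i] < 0 else 0.0)
    return out
```

    Because only the interface cells feed the reconstruction, reapplying the function leaves the zero level set where it was, mirroring the invariance property reported above.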

  18. Growth and adhesion properties of monosodium urate monohydrate (MSU) crystals

    NASA Astrophysics Data System (ADS)

    Perrin, Clare M.

    The presence of monosodium urate monohydrate (MSU) crystals in the synovial fluid has long been associated with the joint disease gout. To elucidate the molecular level growth mechanism and adhesive properties of MSU crystals, atomic force microscopy (AFM), scanning electron microscopy, and dynamic light scattering (DLS) techniques were employed in the characterization of the (010) and (1-10) faces of MSU, as well as physiologically relevant solutions supersaturated with urate. Topographical AFM imaging of both MSU (010) and (1-10) revealed the presence of crystalline layers of urate arranged into v-shaped features of varying height. Growth rates were measured for both monolayers (elementary steps) and multiple layers (macrosteps) on both crystal faces under a wide range of urate supersaturation in physiologically relevant solutions. Step velocities for monolayers and multiple layers displayed a second order polynomial dependence on urate supersaturation on MSU (010) and (1-10), with step velocities on (1-10) generally half of those measured on MSU (010) in corresponding growth conditions. Perpendicular step velocities on MSU (010) were obtained and also showed a second order polynomial dependence of step velocity with respect to urate supersaturation, which implies a 2D-island nucleation growth mechanism for MSU (010). Extensive topographical imaging of MSU (010) showed island adsorption from urate growth solutions under all urate solution concentrations investigated, lending further support for the determined growth mechanism. Island sizes derived from DLS experiments on growth solutions were in agreement with those measured on MSU (010) topographical images. Chemical force microscopy (CFM) was utilized to characterize the adhesive properties of MSU (010) and (1-10). 
AFM probes functionalized with amino acid derivatives and bio-macromolecules found in the synovial fluid were brought into contact with both crystal faces and adhesion forces were tabulated into histograms for comparison. AFM probes functionalized with -COO-, -CH3, and -OH functionalities displayed similar adhesion force with both crystal surfaces of MSU, while adhesion force on (1-10) was three times greater than (010) for -NH2+ probes. For AFM probes functionalized with bovine serum albumin, adhesion force was three times greater on MSU (1-10) than (010), most likely due to the more ionic nature of (1-10).

  19. Multiple applications of the U.S. EPA 1312 leach procedure to mine waste from the Animas watershed, SW Colorado

    USGS Publications Warehouse

    Fey, David L.; Church, Stan E.; Driscoll, Rhonda L.; Adams, Monique G.

    2011-01-01

    Eleven acid-sulphate and quartz-sericite-pyrite altered mine waste samples from the Animas River watershed in SW Colorado were subjected to a series of 5 to 6 successive leaches using the US EPA 1312 leach protocol to evaluate the transport of metals and loss of acidity from mine wastes as a function of time. Multi-acid digestion ICP-AES analyses, X-ray diffraction (XRD) mineral identification, total sulphur, and net acid potential (NAP) determinations were performed on the initial starting materials. Multiple leaching steps generally showed a 'flushing' effect, whereby elements loosely bound, presumably as water-soluble salts, were removed. Aluminum, Cd, Fe, Mg, Mn, Sr, Zn, and S showed decreasing concentration trends, whereas Cu concentrations showed initially decreasing trends, followed by increasing trends in later steps. Concentrations of Zn in the first leach step were independent of whole-sample Zn content. Lead and Ba concentrations consistently increased with each step, indicating that anglesite (PbSO4) and barite (BaSO4), respectively, were dissolving in successive leach steps. Comparison of Fe content with NAP resulted in a modest correlation. However, using the S analyses and XRD identification of sulphide minerals to apportion S amongst enargite, barite, anglesite/galena, and sphalerite, and assigning the remaining S to pyrite, provided a useful correlation between estimated pyrite content and NAP. Whole-sample mass loss correlated well with NAP, but individual elements' behaviors varied between positive correlation (e.g. Al, Fe, Mg), no apparent correlation (Ca, Cd, Pb, Zn), and negative correlation (Cu). Comparison of the summed titrated acidities of the leachates with the whole-sample NAP values yielded an estimate of the fraction of NAP consumed, and led to an estimate of the time it would take to consume the sample acidity by weathering. 
We estimate, on the basis of these experiments, the acidity in the upper 30 cm would be consumed in 200–1000 years. In addition, calculations suggest that the acidity would be depleted before the complete store of the metals Cu-Cd-Zn in these mine wastes would be released to the environment.


  1. Estimating VO2max Using a Personalized Step Test

    ERIC Educational Resources Information Center

    Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald

    2014-01-01

    The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…
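
    A prediction equation of this kind can be sketched with ordinary least squares; the predictor names and all numeric values below are made-up placeholders for illustration, not the study's data or its fitted model.

```python
import numpy as np

# Hypothetical predictors: perceived functional ability score,
# step-test heart rate (bpm), body mass (kg); response: VO2max.
X = np.array([
    [20, 150, 70],
    [24, 140, 65],
    [15, 170, 85],
    [27, 135, 60],
    [18, 160, 78],
], dtype=float)
y = np.array([45.0, 50.0, 36.0, 54.0, 41.0])

# least-squares fit with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# multiple correlation R, as reported for such models
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
R = np.sqrt(1.0 - ss_res / ss_tot)
```

The standard error of estimate (SEE) reported alongside R would be computed from `ss_res` divided by the residual degrees of freedom.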

  2. Engineering metabolic pathways in plants by multigene transformation.

    PubMed

    Zorrilla-López, Uxue; Masip, Gemma; Arjó, Gemma; Bai, Chao; Banakar, Raviraj; Bassie, Ludovic; Berman, Judit; Farré, Gemma; Miralpeix, Bruna; Pérez-Massot, Eduard; Sabalza, Maite; Sanahuja, Georgina; Vamvaka, Evangelia; Twyman, Richard M; Christou, Paul; Zhu, Changfu; Capell, Teresa

    2013-01-01

    Metabolic engineering in plants can be used to increase the abundance of specific valuable metabolites, but single-point interventions generally do not improve the yields of target metabolites unless that product is immediately downstream of the intervention point and there is a plentiful supply of precursors. In many cases, an intervention is necessary at an early bottleneck, sometimes the first committed step in the pathway, but is often only successful in shifting the bottleneck downstream, sometimes also causing the accumulation of an undesirable metabolic intermediate. Occasionally it has been possible to induce multiple genes in a pathway by controlling the expression of a key regulator, such as a transcription factor, but this strategy is only possible if such master regulators exist and can be identified. A more robust approach is the simultaneous expression of multiple genes in the pathway, preferably representing every critical enzymatic step, therefore removing all bottlenecks and ensuring completely unrestricted metabolic flux. This approach requires the transfer of multiple enzyme-encoding genes to the recipient plant, which is achieved most efficiently if all genes are transferred at the same time. Here we review the state of the art in multigene transformation as applied to metabolic engineering in plants, highlighting some of the most significant recent advances in the field.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Robert; McConnell, Elizabeth

    Machining methods across many industries generally require multiple operations to machine and process advanced materials, features with micron precision, and complex shapes. The resulting multiple machining platforms can significantly affect manufacturing cycle time and the precision of the final parts, with a resultant increase in cost and energy consumption. Ultrafast lasers represent a transformative and disruptive technology that removes material with micron precision in a single-step manufacturing process. Such precision results from athermal ablation without modification or damage to the remaining material, which is the key differentiator between ultrafast laser technologies and traditional laser technologies or mechanical processes. Athermal ablation without modification or damage to the material eliminates post-processing or multiple manufacturing steps. Combined with the appropriate technology to control the motion of the work piece, ultrafast lasers are excellent candidates to provide breakthrough machining capability for difficult-to-machine materials. At the project onset in early 2012, the project team recognized that substantial effort was necessary to improve the application of ultrafast laser and precise motion control technologies (for micromachining difficult-to-machine materials) to further the aggregate throughput and yield improvements over conventional machining methods. The project described in this report advanced these leading-edge technologies through the development and verification of two platforms: a hybrid enhanced laser chassis and a multi-application testbed.

  4. Clinical Importance of Steps Taken per Day among Persons with Multiple Sclerosis

    PubMed Central

    Motl, Robert W.; Pilutti, Lara A.; Learmonth, Yvonne C.; Goldman, Myla D.; Brown, Ted

    2013-01-01

    Background The number of steps taken per day (steps/day) provides a reliable and valid outcome of free-living walking behavior in persons with multiple sclerosis (MS). Objective This study examined the clinical meaningfulness of steps/day using the minimal clinically important difference (MCID) value across stages representing the developing impact of MS. Methods This study was a secondary analysis of de-identified data from 15 investigations totaling 786 persons with MS and 157 healthy controls. All participants provided demographic information and wore an accelerometer or pedometer during the waking hours of a 7-day period. Those with MS further provided real-life, health, and clinical information and completed the Multiple Sclerosis Walking Scale-12 (MSWS-12) and Patient Determined Disease Steps (PDDS) scale. MCID estimates were based on regression analyses and analysis of variance for between-group differences. Results The mean MCID from self-report scales that capture subtle changes in ambulation (1-point change in PDDS scores and 10-point change in MSWS-12 scores) was 779 steps/day (14% of mean score for MS sample); the mean MCID for clinical/health outcomes (MS type, duration, weight status) was 1,455 steps/day (26% of mean score for MS sample); real-life anchors (unemployment, divorce, assistive device use) resulted in a mean MCID of 2,580 steps/day (45% of mean score for MS sample); and the MCID for the cumulative impact of MS (MS vs. control) was 2,747 steps/day (48% of mean score for MS sample). Conclusion A change in motion sensor output of ∼800 steps/day appears to represent a lower-bound estimate of clinically meaningful change in free-living walking behavior in MS interventions. PMID:24023843

  5. The impact of different scenarios for intermittent bladder catheterization on health state utilities: results from an internet-based time trade-off survey.

    PubMed

    Averbeck, Márcio Augusto; Krassioukov, Andrei; Thiruchelvam, Nikesh; Madersbacher, Helmut; Bøgelund, Mette; Igawa, Yasuhiko

    2018-06-08

    Intermittent catheterization (IC) is the gold standard for bladder management in patients with chronic urinary retention. Despite its medical benefits, IC-users experience a negative impact on their quality of life (QoL). For health economics based decision making, this impact is normally measured using generic QoL measures (such as EQ-5D) that estimate a single utility score which can be used to calculate Quality-Adjusted Life Years (QALYs). But these generic measures may not be sensitive to all relevant aspects of QoL affected by intermittent catheters. This study used alternative methods to estimate the health state utilities associated with different scenarios: using a multiple-use catheter, one-time-use catheter, pre-lubricated one-time-use catheter, and pre-lubricated one-time-use catheter with one less urinary tract infection (UTI) per year. Health state utilities were elicited through an internet-based Time Trade-Off (TTO) survey in adult volunteers representing the general population in Canada and the UK. Health states were developed to represent the catheters based on the following four attributes: steps and time needed for IC process, pain and the frequency of UTIs. The survey was completed by 956 respondents. One-time-use catheters, pre-lubricated one-time-use catheters and ready-to-use catheters were preferred to multiple-use catheters. The utility gains were associated with the following features: one-time-use (Canada: +0.013, UK: +0.021), ready-to-use (all: +0.017), and one less UTI/year (all: +0.011). Internet-based survey responders may have valued health states differently than the rest of the population: this might be a source of bias. Steps and time needed for the IC process, pain related to IC, and the frequency of UTIs have a significant impact on IC related utilities. These values could be incorporated into a cost utility analysis.

  6. Global finite-time attitude consensus tracking control for a group of rigid spacecraft

    NASA Astrophysics Data System (ADS)

    Li, Penghua

    2017-10-01

    The problem of finite-time attitude consensus for multiple rigid spacecraft with a leader-follower architecture is investigated in this paper. To achieve the finite-time attitude consensus, at the first step, a distributed finite-time convergent observer is proposed for each follower to estimate the leader's attitude in a finite time. Then based on the terminal sliding mode control method, a new finite-time attitude tracking controller is designed such that the leader's attitude can be tracked in a finite time. Finally, a finite-time observer-based distributed control strategy is proposed. It is shown that the attitude consensus can be achieved in a finite time under the proposed controller. Simulation results are given to show the effectiveness of the proposed method.

  7. CFD simulation of mechanical draft tube mixing in anaerobic digester tanks.

    PubMed

    Meroney, Robert N; Colorado, P E

    2009-03-01

    Computational Fluid Dynamics (CFD) was used to simulate the mixing characteristics of four different circular anaerobic digester tanks (diameters of 13.7, 21.3, 30.5, and 33.5 m) equipped with single and multiple draft tube impeller mixers. Rates of mixing of step and slug injections of tracers were simulated, from which digester volume turnover time (DVTT), mixture diffusion time (MDT), and hydraulic retention time (HRT) could be calculated. Washout characteristics were compared to analytic formulae to detect any presence of partial mixing, dead volume, short-circuiting, or piston flow. CFD satisfactorily predicted the performance of both model and full-scale circular tank configurations.
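
    The analytic reference against which washout curves are usually compared is the ideal continuously stirred tank (CSTR) response; a minimal sketch of that textbook formula (illustrative only, not the paper's CFD model):

```python
import math

def ideal_washout(c0, t, hrt):
    """Tracer concentration after a slug injection in an ideally mixed
    tank (CSTR): C(t) = C0 * exp(-t / HRT).  Deviations of measured
    washout from this curve indicate dead volume, short-circuiting,
    or piston flow."""
    return c0 * math.exp(-t / hrt)

# after one hydraulic retention time, ~36.8% of the tracer remains
remaining = ideal_washout(1.0, t=24.0, hrt=24.0)
```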

  8. Retrospective forecasting of the 2010-2014 Melbourne influenza seasons using multiple surveillance systems.

    PubMed

    Moss, R; Zarebski, A; Dawson, P; McCaw, J M

    2017-01-01

    Accurate forecasting of seasonal influenza epidemics is of great concern to healthcare providers in temperate climates, since these epidemics vary substantially in their size, timing and duration from year to year, making it a challenge to deliver timely and proportionate responses. Previous studies have shown that Bayesian estimation techniques can accurately predict when an influenza epidemic will peak many weeks in advance, and we have previously tailored these methods for metropolitan Melbourne (Australia) and Google Flu Trends data. Here we extend these methods to clinical observation and laboratory-confirmation data for Melbourne, on the grounds that these data sources provide more accurate characterizations of influenza activity. We show that from each of these data sources we can accurately predict the timing of the epidemic peak 4-6 weeks in advance. We also show that making simultaneous use of multiple surveillance systems to improve forecast skill remains a fundamental challenge. Disparate systems provide complementary characterizations of disease activity, which may or may not be comparable, and it is unclear how a 'ground truth' for evaluating forecasts against these multiple characterizations might be defined. These findings are a significant step towards making optimal use of routine surveillance data for outbreak forecasting.

  9. Combining stepped-care approaches with behavioral reinforcement to motivate employment in opioid-dependent outpatients.

    PubMed

    Kidorf, Michael; Neufeld, Karin; Brooner, Robert K

    2004-01-01

    Employment is associated with improved treatment outcome for opioid-dependent outpatients receiving methadone (e.g., Platt, 1995). Opioid-dependent individuals typically enter treatment unemployed, and many remain unemployed despite reductions in heroin use. Additional interventions are needed to motivate employment-seeking behaviors and outcomes. This article reports on a promising approach to reducing the chronic unemployment commonplace in treatment-seeking, opioid-dependent patients--a "stepped care" service delivery intervention that incorporates multiple behavioral reinforcements to motivate patient participation in and adherence to the treatment plan. This therapeutic approach (Motivated Stepped Care, MSC; Brooner and Kidorf, 2002) was refined and modified to motivate and support a range of positive treatment behaviors and outcomes in patients with opioid dependence (Kidorf et al., 1999), including job-seeking and acquisition. Patients who are unemployed after one year of treatment are systematically advanced to more intensive steps of weekly counseling and remain there until employment is attained. Those who remain unemployed despite exposure to at least 4 weeks of counseling at the highest step of care (Step 3, which is 9 h weekly of counseling) are started on a methadone taper in preparation for discharge, which is reversible upon attaining a job. This article describes the MSC approach and presents rates of employment for patients who were judged capable of working (n = 228). A review of medical and billing records during August-September 2002 revealed that the great majority of these patients were employed (93%), usually in full-time positions. Employment was associated with less frequent advancement to higher intensities of weekly counseling because of drug use. Further, multiple indices of improved employment stability and functioning, including months of work, hours of work, and annualized salary, were associated with better drug use outcomes.
These data suggest that the MSC intervention is an effective platform for motivating and supporting both job seeking and employment in patients with chronic and severe substance use disorder.

  10. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators.

    PubMed

    Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba

    2017-01-01

    The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensing Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in residency directors' decision-making because of increasing undergraduate medical enrollment and the limited number of available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators in order to identify at-risk students with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examination scores, and USMLE Step 1 and Step 2 CK scores between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK scores within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
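
    A screening model of this shape reduces to a linear equation plus a cutoff. The sketch below is purely illustrative; every coefficient, score, and threshold is a hypothetical placeholder, not the study's fitted values.

```python
def predict_step2_ck(mean_course, step1,
                     intercept=-40.0, b_course=0.9, b_step1=0.8):
    """Model-1-style linear predictor of a Step 2 CK score from the
    preclinical mean course exam score and the Step 1 score.
    All coefficients here are hypothetical placeholders."""
    return intercept + b_course * mean_course + b_step1 * step1

def at_risk(pred, passing=209.0, margin=8.0):
    """Flag a student whose predicted score falls within a safety
    margin of the passing standard (thresholds hypothetical)."""
    return pred < passing + margin

p = predict_step2_ck(mean_course=80.0, step1=220.0)
flag = at_risk(p)
```

In practice the coefficients would come from regressing observed Step 2 CK scores on the indicators for prior cohorts, and the margin would be tuned to the model's prediction error.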

  11. Lightning channel current persists between strokes

    NASA Astrophysics Data System (ADS)

    Wendel, JoAnna

    2014-09-01

    The usual cloud-to-ground lightning occurs when a large negative charge contained in a "stepped leader" travels down toward the Earth's surface. It then meets a positive charge that comes up tens of meters from the ground, resulting in a powerful neutralizing explosion that begins the first return stroke of the lightning flash. The entire flash lasts only a few hundred milliseconds, but during that time, multiple subsequent stroke-return stroke sequences usually occur.

  12. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems that perform 3D motion capture in real time. The integrated technique is a two-step process. Many researchers have previously addressed the camera calibration problem by taking images of a 2D calibration pattern.

  13. Modeling Heterogeneous Carbon Nanotube Networks for Photovoltaic Applications Using Silvaco Atlas Software

    DTIC Science & Technology

    2012-06-01

    Indexed excerpts from the thesis include its abbreviation list (MWCNT: multi-walled carbon nanotube; PET: polyethylene terephthalate; 4H-SiC: 4H silicon carbide; AlGaAs: aluminum gallium arsenide). SWCNTs are structured with one layer of graphene rolled into a carbon nanotube, whereas MWCNTs are composed of multiple layers. An excerpt of the Silvaco Atlas input deck loops the simulation 19 times to extract cell parameters at varying cell widths.

  14. On applications of chimera grid schemes to store separation

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Benek, J. A.; Steger, J. L.

    1985-01-01

    A finite difference scheme which uses multiple overset meshes to simulate the aerodynamics of aircraft/store interaction and store separation is described. In this chimera, or multiple mesh, scheme, a complex configuration is mapped using a major grid about the main component of the configuration, and minor overset meshes are used to map each additional component such as a store. As a first step in modeling the aerodynamics of store separation, two dimensional inviscid flow calculations were carried out in which one of the minor meshes is allowed to move with respect to the major grid. Solutions of calibrated two dimensional problems indicate that allowing one mesh to move with respect to another does not adversely affect the time accuracy of an unsteady solution. Steady, inviscid three dimensional computations demonstrate the capability to simulate complex configurations, including closely packed multiple bodies.

  15. Overlapping MALDI-Mass Spectrometry Imaging for In-Parallel MS and MS/MS Data Acquisition without Sacrificing Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Hansen, Rebecca L.; Lee, Young Jin

    2017-09-01

    Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.

  16. Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods

    PubMed Central

    Saad Toman, Majed; Ali Al-Shahrani, Abdullah; Ali Al-Qarni, Abdullah

    2016-01-01

    An accurate duplicate cast obtained from a single impression reduces clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, one with a 5 mm spacer and one with a 2 mm spacer. The impression methods evaluated during the study were addition silicone putty reline (two-step), heavy-light body (one-step), monophase (one-step), and polyether (one-step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an analysis of variance test at a significance level of <0.05. The dies obtained from the two-step putty reline impression technique had a percentage variation in height of −0.36 to −0.97%, while the diameter increased by 0.40–0.90%. The corresponding values for one-step heavy-light body impression dies, addition silicone monophase impressions, and polyether were −0.73 to −1.21%, −1.34%, and −1.46% for the height and 0.50–0.80%, 1.20%, and −1.30% for the width, respectively. PMID:28096815

  17. High resolution crustal image of South California Continental Borderland: Reverse time imaging including multiples

    NASA Astrophysics Data System (ADS)

    Bian, A.; Gantela, C.

    2014-12-01

    Strong multiples were observed in marine seismic data of the Los Angeles Regional Seismic Experiment (LARSE). It is crucial to eliminate these multiples in conventional ray-based or one-way wave-equation based depth imaging methods. Because multiples carry information about the target zone along their travel path, it is possible to use them as signal to improve the illumination coverage and thus enhance the image quality of structural boundaries. Reverse time migration including multiples is a two-way wave-equation based prestack depth imaging method that uses both primaries and multiples to map structural boundaries. Several factors, including the source wavelet, velocity model, background noise, data acquisition geometry, and preprocessing workflow, may influence the quality of the image. The source wavelet is estimated from the direct arrival of the marine seismic data. The migration velocity model is derived from an integrated model-building workflow, and the sharp velocity interfaces near the sea bottom must be preserved in order to generate multiples in the forward and backward propagation steps. The strong-amplitude, low-frequency marine background noise must be removed before the final imaging step. High-resolution reverse time image sections of LARSE Line 1 and Line 2 show five interfaces: the sea bottom, the base of the sedimentary basins, the top of the Catalina Schist, a deep layer, and a possible pluton boundary. The Catalina Schist shows highs at the San Clemente Ridge, Emery Knoll, and Catalina Ridge, under the Catalina Basin on both lines, and a minor high under Avalon Knoll. The high of the anticlinal fold in Line 1 is under the north edge of Emery Knoll and under the San Clemente fault zone. An area devoid of reflection features is interpreted as the side of an igneous pluton.

  18. AlInAsSb/GaSb staircase avalanche photodiode

    NASA Astrophysics Data System (ADS)

    Ren, Min; Maddox, Scott; Chen, Yaojia; Woodson, Madison; Campbell, Joe C.; Bank, Seth

    2016-02-01

    Over 30 years ago, Capasso and co-workers [IEEE Trans. Electron Devices 30, 381 (1982)] proposed the staircase avalanche photodetector (APD) as a solid-state analog of the photomultiplier tube. In this structure, electron multiplication occurs deterministically at steps in the conduction band profile, which function as the dynodes of a photomultiplier tube, leading to low excess multiplication noise. Unlike traditional APDs, the origin of staircase gain is band engineering rather than large applied electric fields. Unfortunately, the materials available at the time, principally AlxGa1-xAs/GaAs, did not offer sufficiently large conduction band offsets and energy separations between the direct and indirect valleys to realize the full potential of the staircase gain mechanism. Here, we report a true staircase APD operation using alloys of a rather underexplored material, AlxIn1-xAsySb1-y, lattice-matched to GaSb. Single step "staircase" devices exhibited a constant gain of ˜2×, over a broad range of applied bias, operating temperature, and excitation wavelengths/intensities, consistent with Monte Carlo calculations.
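
    The deterministic step-multiplication picture admits a one-line gain model (an idealized textbook relation, not a fit to the reported device): each conduction-band step roughly doubles the electron population, so an N-step staircase has a gain of about 2^N, consistent with the ~2x gain reported here for the single-step device.

```python
def staircase_gain(steps, per_step=2.0):
    """Ideal staircase APD gain: each conduction-band step multiplies
    the electron population by roughly per_step (ideally 2), so gain
    grows as per_step ** steps.  Real steps multiply by slightly
    less than 2; this sketch assumes the ideal case."""
    return per_step ** steps

g1 = staircase_gain(1)  # single-step device: gain ~2, as reported
```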

  19. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step in gaining the initiative in fierce market competition is to improve their R&D ability so as to meet customers' varied demands in a more timely and less costly manner. This paper discusses the features of multiple R&D project environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given planning period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. The experiment demonstrates the feasibility of the model and the validity of the algorithm.
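    The abstract does not reproduce the improved algorithm itself, but the core particle swarm update it builds on can be sketched as follows, with a toy sphere objective standing in for the scheduling cost (all parameter values are conventional PSO defaults, not the paper's):

```python
import random

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: velocity/position update with
    inertia (w), cognitive (c1) and social (c2) terms."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, whose known minimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In a scheduling application the particle position would encode activity priority values and the objective would be a resource-feasible makespan; the update rule above is unchanged.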

  20. Stability of gas atomized reactive powders through multiple step in-situ passivation

    DOEpatents

    Anderson, Iver E.; Steinmetz, Andrew D.; Byrd, David J.

    2017-05-16

    A method for gas atomization of oxygen-reactive reactive metals and alloys wherein the atomized particles are exposed as they solidify and cool in a very short time to multiple gaseous reactive agents for the in-situ formation of a protective reaction film on the atomized particles. The present invention is especially useful for making highly pyrophoric reactive metal or alloy atomized powders, such as atomized magnesium and magnesium alloy powders. The gaseous reactive species (agents) are introduced into the atomization spray chamber at locations downstream of a gas atomizing nozzle as determined by the desired powder or particle temperature for the reactions and the desired thickness of the reaction film.

  1. Computational overlay metrology with adaptive data analytics

    NASA Astrophysics Data System (ADS)

    Schmitt-Weaver, Emil; Subramony, Venky; Ullah, Zakir; Matsunobu, Masazumi; Somasundaram, Ravin; Thomas, Joel; Zhang, Linmiao; Thul, Klaus; Bhattacharyya, Kaustuve; Goossens, Ronald; Lambregts, Cees; Tel, Wim; de Ruiter, Chris

    2017-03-01

    With photolithography as the fundamental patterning step in the modern nanofabrication process, every wafer within a semiconductor fab passes through a lithographic apparatus multiple times. With more than 20,000 sensors producing more than 700 GB of data per day across multiple subsystems, the combination of a light source and lithographic apparatus provides a massive amount of information for data analytics. This paper outlines how data analysis tools and techniques that extend insight into data traditionally considered unmanageably large, known as adaptive analytics, can be used to show how data collected before the wafer is exposed can detect small process-dependent wafer-to-wafer changes in overlay.

  2. Preparation of next-generation sequencing libraries using Nextera™ technology: simultaneous DNA fragmentation and adaptor tagging by in vitro transposition.

    PubMed

    Caruccio, Nicholas

    2011-01-01

    DNA library preparation is a common entry point and bottleneck for next-generation sequencing. Current methods generally consist of distinct steps that often involve significant sample loss and hands-on time: DNA fragmentation, end-polishing, and adaptor-ligation. In vitro transposition with Nextera™ Transposomes simultaneously fragments and covalently tags the target DNA, thereby combining these three distinct steps into a single reaction. Platform-specific sequencing adaptors can be added, and the sample can be enriched and bar-coded using limited-cycle PCR to prepare di-tagged DNA fragment libraries. Nextera technology offers a streamlined, efficient, and high-throughput method for generating bar-coded libraries compatible with multiple next-generation sequencing platforms.

  3. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography.

    PubMed

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf

    2013-08-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  4. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    PubMed Central

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484

  5. Single-step methods for predicting orbital motion considering its periodic components

    NASA Astrophysics Data System (ADS)

    Lavrov, K. N.

    1989-01-01

    Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart can be combined with multiple-step computational schemes that use a priori information on periodic components, yielding implicit single-sequence algorithms that combine the advantages of both. The construction and analysis of the properties of such algorithms are studied, using trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms but are twice as fast, and they yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms based on other methods.

  6. Protocol for Detection of Yersinia pestis in Environmental ...

    EPA Pesticide Factsheets

    Methods Report. This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, for detecting Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a step-by-step sample processing procedure for each sample type. The protocol covers real-time PCR, traditional microbiological culture, and Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze large numbers of samples during a wide-area plague incident.

  7. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
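    The two agreement measures named above can be sketched in a few lines; the paired step-length values below are invented for illustration, not data from the study:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def icc_2_1(a, b):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
    for k = 2 measurement systems, via the standard mean-square decomposition."""
    n, k = len(a), 2
    rows = list(zip(a, b))
    grand = mean(a + b)
    row_means = [mean(r) for r in rows]
    col_means = [mean(a), mean(b)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between-subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between-systems
    sst = sum((v - grand) ** 2 for r in rows for v in r)
    sse = sst - ssr - ssc                                # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical step lengths (m) measured by the two systems on 8 strides.
kinect   = [0.52, 0.61, 0.58, 0.49, 0.66, 0.55, 0.63, 0.57]
optotrak = [0.53, 0.60, 0.59, 0.50, 0.65, 0.56, 0.62, 0.58]
bias, loa = bland_altman(kinect, optotrak)
icc = icc_2_1(kinect, optotrak)
```

Small within-pair differences relative to between-subject variability give a near-zero bias, narrow limits of agreement, and an ICC close to 1, mirroring the kind of agreement reported in the abstract.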

  8. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments

    PubMed Central

    Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498

  9. Effect of film-based versus filmless operation on the productivity of CT technologists.

    PubMed

    Reiner, B I; Siegel, E L; Hooper, F J; Glasser, D

    1998-05-01

    To determine the relative time required for a technologist to perform a computed tomographic (CT) examination in a "filmless" versus a film-based environment, time-motion studies were performed in 204 consecutive CT examinations. Images from 96 examinations were electronically transferred to a picture archiving and communication system (PACS) without being printed to film, and 108 were printed to film. The time required to obtain and electronically transfer the images or print the images to film and make the current and previous studies available to the radiologists for interpretation was recorded. The time required for a technologist to complete a CT examination was reduced by 45% with direct image transfer to the PACS compared with the time required in the film-based mode. This reduction was due to the elimination of a number of steps in the filming process, such as printing at multiple window or level settings. The use of a PACS can eliminate multiple time-intensive tasks for the CT technologist, resulting in a marked reduction in examination time, increased productivity, and hence greater cost-effectiveness with filmless operation.

  10. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure due to plasticity and very large deflections are accounted for. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time-dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
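    Of the integrators listed, the fixed-step central difference scheme is the simplest to illustrate. Below is a minimal sketch on an undamped single-degree-of-freedom oscillator (values chosen for illustration, not taken from DYCAST):

```python
def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central difference: x_{n+1} = 2*x_n - x_{n-1} + dt^2 * a_n,
    with a_n = -k*x_n/m for a linear spring of stiffness k and mass m."""
    a0 = -k * x0 / m
    x_prev = x0 - dt * v0 + 0.5 * dt ** 2 * a0   # second-order start-up step
    x = x0
    out = [x0]
    for _ in range(steps):
        a = -k * x / m                           # acceleration at current step
        x_next = 2 * x - x_prev + dt ** 2 * a
        x_prev, x = x, x_next
        out.append(x)
    return out

# Undamped oscillator with natural period 2*pi: after ~one period (628 steps
# of dt = 0.01) the displacement should return close to its initial value.
xs = central_difference(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=0.01, steps=628)
```

The scheme is conditionally stable (dt must be below 2/omega for the linear case), which is why DYCAST pairs it with implicit variable-step alternatives for stiffer problems.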

  11. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observation points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
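    The two-step estimation at a single grid point can be sketched schematically as below; the logistic and amount-model coefficients and the predictors are invented for illustration, not fitted values from the study:

```python
import math

def occurrence_prob(elev_km, mean_neighbor_precip, b0=-2.0, b1=0.8, b2=1.5):
    """Step 1: logistic regression for wet/dry occurrence (hypothetical fit)."""
    z = b0 + b1 * elev_km + b2 * mean_neighbor_precip
    return 1.0 / (1.0 + math.exp(-z))

def wet_day_amount(elev_km, mean_neighbor_precip, a0=1.0, a1=2.0, a2=3.0):
    """Step 2: regression for the precipitation amount (mm), fit on wet days only."""
    return a0 + a1 * elev_km + a2 * mean_neighbor_precip

def estimate(elev_km, mean_neighbor_precip, wet_threshold=0.5):
    """Combine the two steps: amount is assigned only when classified wet."""
    p = occurrence_prob(elev_km, mean_neighbor_precip)
    if p < wet_threshold:          # classified dry: no precipitation
        return 0.0
    return wet_day_amount(elev_km, mean_neighbor_precip)

dry = estimate(elev_km=0.2, mean_neighbor_precip=0.0)   # weak predictors -> dry
wet = estimate(elev_km=2.0, mean_neighbor_precip=4.0)   # strong signal -> wet
```

Separating occurrence from amount is what lets the two-step scheme reproduce the intermittent, highly skewed character of daily precipitation that a single regression smooths away.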

  12. Predictors of longitudinal substance use and mental health outcomes for patients in two integrated service delivery systems.

    PubMed

    Grella, Christine E; Stein, Judith A; Weisner, Constance; Chi, Felicia; Moos, Rudolf

    2010-07-01

    Individuals who have both substance use disorders and mental health problems have poorer treatment outcomes. This study examines the relationship of service utilization and 12-step participation to outcomes at 1 and 5 years for patients treated in one of two integrated service delivery systems: the Department of Veterans Affairs (VA) system and a health maintenance organization (HMO). Sub-samples from each system were selected using multiple criteria indicating severity of mental health problems at admission to substance use disorder treatment (VA=401; HMO=331). Separate and multiple-group structural equation model analyses used baseline characteristics, service use, and 12-step participation as predictors of substance use and mental health outcomes at 1 and 5 years following admission. Substance use and related problems showed stability across time; these relationships were stronger among VA patients. More continuing-care substance use outpatient visits were associated with reductions in mental health symptoms in both groups, whereas receipt of outpatient mental health services was associated with more severe psychological symptoms. Participation in 12-step groups had a stronger effect on reducing cocaine use among VA patients, whereas it had a stronger effect on reducing alcohol use among HMO patients. More outpatient psychological services had a stronger effect on reducing alcohol use among HMO patients. Common findings across these two systems demonstrate the persistence of substance use and related psychological problems, but also show that continuing-care services and participation in 12-step groups are associated with better outcomes in both systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Models for microtubule cargo transport coupling the Langevin equation to stochastic stepping motor dynamics: Caring about fluctuations.

    PubMed

    Bouzat, Sebastián

    2016-01-01

    One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run time match previously specified functions of the external load, which are set on the basis of experimental results. We show that, due to the influence of thermal fluctuations, this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics which incorporates memory of the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force in the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
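    A minimal one-dimensional instance of this model class couples an overdamped Langevin equation for the cargo, integrated with the Euler-Maruyama method, to a motor that advances in discrete 8-nm steps at a load-dependent rate. All parameter values below are rough illustrative choices, not those calibrated in the paper:

```python
import math
import random

def simulate(T=1.0, dt=1e-5, gamma=6e-6, kBT=4.1, kappa=0.3,
             step=8.0, k0=100.0, f_stall=6.0, seed=1):
    """Units: nm, s, pN. gamma = drag coefficient (pN*s/nm), kBT = thermal
    energy (pN*nm), kappa = motor-cargo linker stiffness (pN/nm), k0 = unloaded
    stepping rate (1/s). Returns final cargo and motor positions."""
    rng = random.Random(seed)
    x_cargo, x_motor = 0.0, 0.0
    noise = math.sqrt(2 * kBT * dt / gamma)         # thermal kick per step
    t = 0.0
    while t < T:
        f_spring = kappa * (x_motor - x_cargo)      # force on cargo (pN)
        # Euler-Maruyama update of the overdamped cargo Langevin equation
        x_cargo += (f_spring / gamma) * dt + noise * rng.gauss(0.0, 1.0)
        # Motor stepping: rate decreases linearly with the opposing load
        load = max(0.0, f_spring)                   # force resisting the motor
        rate = max(0.0, k0 * (1.0 - load / f_stall))
        if rng.random() < rate * dt:                # one stochastic 8-nm step
            x_motor += step
        t += dt
    return x_cargo, x_motor

xc, xm = simulate()
```

Even in this stripped-down sketch the thermal term makes the instantaneous spring force fluctuate by roughly sqrt(kBT*kappa) around its drag-set mean, which is exactly why defining force-dependent rates consistently is nontrivial.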

  14. Real-Time Adaptive Control of Flow-Induced Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.

    2004-01-01

    An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed, and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.
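    The paper's controller is a generalized predictive control law; as a much simpler stand-in for its per-time-step gradient-descent coefficient update, the sketch below uses an LMS adaptive notch, descending two weights on quadrature references of the tone frequency to cancel a single synthetic "cavity tone":

```python
import math

def lms_notch(signal, f0, fs, mu=0.02):
    """LMS adaptive notch: two weights on cos/sin references at f0 are updated
    by gradient descent on the squared residual at every time step."""
    w1 = w2 = 0.0
    out = []
    for n, d in enumerate(signal):
        x1 = math.cos(2 * math.pi * f0 * n / fs)
        x2 = math.sin(2 * math.pi * f0 * n / fs)
        y = w1 * x1 + w2 * x2          # current estimate of the tone
        e = d - y                      # residual after cancellation
        w1 += 2 * mu * e * x1          # gradient-descent weight updates
        w2 += 2 * mu * e * x2
        out.append(e)
    return out

# Synthetic 120 Hz "tone" sampled at 1 kHz; the residual should decay to ~0.
fs, f0 = 1000.0, 120.0
tone = [0.8 * math.sin(2 * math.pi * f0 * n / fs + 0.3) for n in range(4000)]
residual = lms_notch(tone, f0, fs)
```

Like the GPC result in the abstract, this kind of adaptive scheme can only null components correlated with its reference, which is the same reason only tonal, not broadband, pressure fluctuations are suppressible.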

  15. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  16. Multiple stage miniature stepping motor

    DOEpatents

    Niven, William A.; Shikany, S. David; Shira, Michael L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.

  17. Lead Selenide Colloidal Quantum Dot Solar Cells Achieving High Open-Circuit Voltage with One-Step Deposition Strategy.

    PubMed

    Zhang, Yaohong; Wu, Guohua; Ding, Chao; Liu, Feng; Yao, Yingfang; Zhou, Yong; Wu, Congping; Nakazawa, Naoki; Huang, Qingxun; Toyoda, Taro; Wang, Ruixiang; Hayase, Shuzi; Zou, Zhigang; Shen, Qing

    2018-06-18

    Lead selenide (PbSe) colloidal quantum dots (CQDs) are considered a strong candidate for high-efficiency colloidal quantum dot solar cells (CQDSCs) due to their efficient multiple exciton generation. However, currently even the best PbSe CQDSCs display an open-circuit voltage (Voc) of only about 0.530 V. Here, we introduce a solution-phase ligand exchange method to prepare PbI2-capped PbSe (PbSe-PbI2) CQD inks, and for the first time the absorber layer of PbSe CQDSCs was deposited in one step using these PbSe-PbI2 CQD inks. The one-step-deposited PbSe CQD absorber layer exhibits a fast charge transfer rate, reduced energy funneling, and low trap-assisted recombination. The champion large-area (active area 0.35 cm²) PbSe CQDSCs fabricated with one-step PbSe CQDs achieve a power conversion efficiency (PCE) of 6.0% and a Voc of 0.616 V, which is the highest Voc among PbSe CQDSCs reported to date.

  18. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets.

    PubMed

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as a platform for point-of-care (POC) testing, owing to their simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as far as other assays, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and suffer inhibition arising from the technical peculiarities of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. The proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10²-10⁵ copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices into a simple and affordable paper-based system with great potential for realizing a paper-based NAT system for POC testing.

  19. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets

    PubMed Central

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as a platform for point-of-care (POC) testing, owing to their simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as far as other assays, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and suffer inhibition arising from the technical peculiarities of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. The proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10²-10⁵ copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices into a simple and affordable paper-based system with great potential for realizing a paper-based NAT system for POC testing. PMID:28740546

  20. An integrative formal model of motivation and decision making: The MGPM*.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew

    2016-09-01

    We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. A two-step hierarchical hypothesis set testing framework, with applications to gene expression data on ordered categories

    PubMed Central

    2014-01-01

    Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries on the hypothesis set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. 
The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
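The two-step hierarchical idea in this record can be sketched as follows. This is an illustrative simplification (plain Benjamini-Hochberg screening at the set level, then unadjusted within-set tests), not the paper's exact OFDR-controlling procedure; all names are made up for the example.

```python
# Sketch of a generic two-step hierarchical testing procedure
# (illustrative only; not the exact OFDR-controlling procedure of the paper).
# Step 1: screen hypothesis sets via Benjamini-Hochberg on set-level p-values.
# Step 2: test individual hypotheses only within the selected sets.

def benjamini_hochberg(pvals, alpha):
    """Return the set of indices rejected under BH at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    return set(order[:k])

def two_step_hierarchical(set_pvals, within_pvals, alpha=0.05):
    """set_pvals: one p-value per hypothesis set (e.g. per gene).
    within_pvals: list of p-value lists, one per set (e.g. per category)."""
    selected = benjamini_hochberg(set_pvals, alpha)
    rejections = {}
    for s in selected:
        # Within each selected set, test at the same nominal level; under
        # independence across sets this keeps the overall error rate near alpha.
        rejections[s] = [j for j, p in enumerate(within_pvals[s]) if p <= alpha]
    return rejections
```

For example, with three gene-level p-values and two categories per gene, only the genes surviving the set-level screen have their per-category hypotheses tested.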

  2. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is critical to both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm. During this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  3. Feasibility of Focused Stepping Practice During Inpatient Rehabilitation Poststroke and Potential Contributions to Mobility Outcomes.

    PubMed

    Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot

    2015-01-01

    Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.

  4. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.

  5. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  6. Walk Ratio (Step Length/Cadence) as a Summary Index of Neuromotor Control of Gait: Application to Multiple Sclerosis

    ERIC Educational Resources Information Center

    Rota, Viviana; Perucca, Laura; Simone, Anna; Tesio, Luigi

    2011-01-01

    In healthy adults, the step length/cadence ratio [walk ratio (WR), in mm/(steps/min) and normalized for height] is known to be constant around 6.5 mm/(step/min). It is a speed-independent index of overall neuromotor gait control, inasmuch as it reflects energy expenditure, balance, between-step variability, and attentional demand. The speed…
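The walk ratio itself is a one-line computation; the sketch below is illustrative, and the height normalization shown (scaling to a reference height) is an assumption, since the abstract only states that WR is "normalized for height".

```python
# Illustrative walk-ratio computation (WR = step length / cadence).
# The reference-height scaling used here is an assumption, not the
# paper's exact normalization.

def walk_ratio(step_length_mm, cadence_steps_per_min,
               height_m, reference_height_m=1.70):
    """Return WR in mm/(steps/min), scaled to a reference height."""
    wr = step_length_mm / cadence_steps_per_min
    return wr * (reference_height_m / height_m)
```

For an adult of the reference height taking 650 mm steps at 100 steps/min, this gives the ~6.5 mm/(step/min) value quoted above.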

  7. Weakly-coupled 4-mode step-index FMF and demonstration of IM/DD MDM transmission.

    PubMed

    Hu, Tao; Li, Juhao; Ge, Dawei; Wu, Zhongying; Tian, Yu; Shen, Lei; Liu, Yaping; Chen, Su; Li, Zhengbin; He, Yongqi; Chen, Zhangyuan

    2018-04-02

    Weakly-coupled mode-division multiplexing (MDM) over few-mode fiber (FMF) has attracted great interest for short-reach transmission, since it can avoid multiple-input multiple-output digital signal processing (MIMO-DSP) by greatly suppressing modal crosstalk. In this paper, a step-index FMF supporting 4 linearly polarized (LP) modes for MIMO-free transmission is designed and fabricated for the first time, to our knowledge. Modal crosstalk of the fiber is suppressed by increasing the mode effective refractive index differences. The same fabrication method as for standard single-mode fiber is adopted, so the fiber is practical and cost-effective. The mode multiplexer/demultiplexer (MUX/DEMUX) consists of cascaded mode-selective couplers (MSCs), which are designed and fabricated by tapering the proposed FMF with single-mode fiber (SMF). The mode MUX and DEMUX achieve very low modal crosstalk not only for the multiplexing/demultiplexing but also for the coupling to/from the FMF. Based on the fabricated FMF and mode MUX/DEMUX, we successfully demonstrate the first simultaneous 4-mode (LP01, LP11, LP21, and LP31) 10-km FMF transmission with 10-Gb/s intensity modulation and MIMO-free direct detection (IM/DD). The modal crosstalk of the whole transmission link is successfully suppressed to less than -16.5 dB. The experimental results indicate that FMF with a simple step-index structure supporting 4 weakly-coupled modes is feasible.

  8. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered to be a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations using non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution method used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink supported by the Pascal architecture. Performance of the present method is evaluated on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
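The ADI sweeps mentioned above reduce each implicit direction to independent tridiagonal solves; the serial kernel (the Thomas algorithm) can be sketched as below. This is an illustrative sketch, not the paper's GPU implementation, which batches many such solves in parallel.

```python
# Thomas algorithm for a tridiagonal system, the kernel at the heart of
# ADI sweeps in semi-implicit fractional-step solvers (illustrative sketch).

def thomas_solve(a, b, c, d):
    """Solve Ax = d where A is tridiagonal with sub-diagonal a (a[0] unused),
    diagonal b, and super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each ADI half-step solves one such system per grid line, which is why a batched GPU formulation pays off.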

  9. Fast and slow responses of Southern Ocean sea surface temperature to SAM in coupled climate models

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Marshall, John; Hausmann, Ute; Armour, Kyle C.; Ferreira, David; Holland, Marika M.

    2017-03-01

    We investigate how sea surface temperatures (SSTs) around Antarctica respond to the Southern Annular Mode (SAM) on multiple timescales. To that end we examine the relationship between SAM and SST within unperturbed preindustrial control simulations of coupled general circulation models (GCMs) included in the Coupled Model Intercomparison Project phase 5 (CMIP5). We develop a technique to extract the response of the Southern Ocean SST (55°S-70°S) to a hypothetical step increase in the SAM index. We demonstrate that in many GCMs, the expected SST step response function is nonmonotonic in time. Following a shift to a positive SAM anomaly, an initial cooling regime can transition into surface warming around Antarctica. However, there are large differences across the CMIP5 ensemble. In some models the step response function never changes sign and cooling persists, while in other GCMs the SST anomaly crosses over from negative to positive values only 3 years after a step increase in the SAM. This intermodel diversity can be related to differences in the models' climatological thermal ocean stratification in the region of seasonal sea ice around Antarctica. Exploiting this relationship, we use observational data for the time-mean meridional and vertical temperature gradients to constrain the real Southern Ocean response to SAM on fast and slow timescales.

  10. W/O/W multiple emulsions with diclofenac sodium.

    PubMed

    Lindenstruth, Kai; Müller, Bernd W

    2004-11-01

    The disperse oil droplets of W/O/W multiple emulsions contain small water droplets in which drugs can be incorporated, but the structure of these emulsions is also the reason for their possible instability. Because the middle oil phase acts as a 'semipermeable' membrane, passage of water across the oil phase can take place. Moreover, since the emulsions are produced in a two-step production process, not only the leakage of encapsulated drug molecules out of the inner water phase during storage but also a production-induced reduction of the encapsulation rate should be considered. The aim of this study was to ascertain how far the production-induced reduction of the encapsulation rate relates to the size of the inner water droplets, and to evaluate the relevance of multiple emulsions as a drug carrier for diclofenac sodium. Therefore, multiple emulsions were produced according to a central composite design. During the second production step it was observed that the parameters pressure and temperature have an influence on the size of the oil droplets in the W/O/W multiple emulsions. Further experiments with different W/O emulsions resulted in W/O/W multiple emulsions with different encapsulation rates of diclofenac sodium, due to the different sizes of the inner water droplets obtained in the first production step.

  11. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

  12. Do You Know What I Feel? A First Step towards a Physiological Measure of the Subjective Well-Being of Persons with Profound Intellectual and Multiple Disabilities

    ERIC Educational Resources Information Center

    Vos, Pieter; De Cock, Paul; Petry, Katja; Van Den Noortgate, Wim; Maes, Bea

    2010-01-01

    Background: Because of limited communicative skills, it is not self-evident to measure subjective well-being in people with profound intellectual and multiple disabilities. As a first step towards a non-interpretive measure of subjective well-being, we explored how the respiratory, cardiovascular and electro dermal response systems were associated…

  13. Two-step emulsification process for water-in-oil-in-water multiple emulsions stabilized by lamellar liquid crystals.

    PubMed

    Ito, Toshifumi; Tsuji, Yukitaka; Aramaki, Kenji; Tonooka, Noriaki

    2012-01-01

    Multiple emulsions, also called complex emulsions or multiphase emulsions, include water-in-oil-in-water (W/O/W)-type and oil-in-water-in-oil (O/W/O)-type emulsions. W/O/W-type multiple emulsions, obtained by utilizing lamellar liquid crystal with a layer structure showing optical anisotropy at the periphery of emulsion droplets, are superior in stability to O/W/O-type emulsions. In this study, we investigated a two-step emulsification process for a W/O/W-type multiple emulsion utilizing liquid crystal emulsification. We found that a W/O/W-type multiple emulsion containing lamellar liquid crystal can be prepared by mixing a W/O-type emulsion (prepared by primary emulsification) with a lamellar liquid crystal obtained from poly(oxyethylene) stearyl ether, cetyl alcohol, and water, and by dispersing and emulsifying the mixture in an outer aqueous phase. When poly(oxyethylene) stearyl ether and cetyl alcohol are each used in a given amount and the amount of water added is varied from 0 to 15 g (total amount of emulsion, 100 g), a W/O/W-type multiple emulsion is efficiently prepared. When the W/O/W-type multiple emulsion was held in a thermostatic bath at 25°C, the droplet size distribution showed no change 0, 30, or 60 days after preparation. Moreover, the W/O/W-type multiple emulsion strongly encapsulated Uranine in the inner aqueous phase as compared with emulsions prepared by one-step emulsification.

  14. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so that comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
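The Bonferroni and (Hochberg-type) step-up procedures compared in this record can be sketched as follows; the paper's adjusted step-up variant, which plugs in the estimated correlations, is not reproduced here.

```python
# Sketch of two FWER-controlling corrections for comparing sensitivities at
# multiple operating points (the correlation-adjusted step-up variant from
# the paper is omitted).

def bonferroni_reject(pvals, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m. Simple but conservative."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def hochberg_step_up_reject(pvals, alpha=0.05):
    """Hochberg's step-up: find the largest k with p_(k) <= alpha/(m-k+1),
    then reject the k hypotheses with the smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank in range(m, 0, -1):          # step up from the largest p-value
        if pvals[order[rank - 1]] <= alpha / (m - rank + 1):
            k = rank
            break
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]
```

On the same p-values, the step-up method can reject hypotheses the Bonferroni correction misses, which is the power difference the study quantifies.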

  15. Fostering Autonomy through Syllabus Design: A Step-by-Step Guide for Success

    ERIC Educational Resources Information Center

    Ramírez Espinosa, Alexánder

    2016-01-01

    Promoting learner autonomy is relevant in the field of applied linguistics due to the multiple benefits it brings to the process of learning a new language. However, despite the vast array of research on how to foster autonomy in the language classroom, it is difficult to find step-by-step processes to design syllabi and curricula focused on the…

  16. Flowfield predictions for multiple body launch vehicles

    NASA Technical Reports Server (NTRS)

    Deese, Jerry E.; Pavish, D. L.; Johnson, Jerry G.; Agarwal, Ramesh K.; Soni, Bharat K.

    1992-01-01

    A method is developed for simulating inviscid and viscous flow around multicomponent launch vehicles. Grids are generated by the GENIE general-purpose grid-generation code, and the flow solver is a finite-volume Runge-Kutta time-stepping method. Turbulence effects are simulated using the Baldwin-Lomax (1978) turbulence model. Calculations are presented for three multibody launch vehicle configurations: one with two small-diameter solid motors, one with nine small-diameter solid motors, and one with three large-diameter solid motors.

  17. N-Doped carbon spheres with hierarchical micropore-nanosheet networks for high performance supercapacitors.

    PubMed

    Wang, Shoupei; Zhang, Jianan; Shang, Pei; Li, Yuanyuan; Chen, Zhimin; Xu, Qun

    2014-10-18

    N-doped carbon spheres with hierarchical micropore-nanosheet networks (HPSCSs) were facilely fabricated by one-step carbonization and activation of N-containing polymer spheres with KOH. Owing to the synergistic effect of the multiple structures, HPSCSs exhibit a very high specific capacitance of 407.9 F g(-1) at 1 mV s(-1) (1.2 times higher than that of porous carbon spheres) and robust cycling stability for supercapacitors.

  18. Recommendations for Evaluating Multiple Filters in Ballast Water Management Systems for US Type Approval

    DTIC Science & Technology

    2016-01-01

    is extremely unlikely to be practicable. A second approach is to conduct a full suite of TA testing on a BWMS with a “base filter” configuration...that of full TA testing. Here, three land-based tests would be conducted, and O&M and component testing would also occur. If time or practicality ... Practical salinity units SAE Society of Automotive Engineers SDI Silt density index SOP Standard operating procedure STEP Shipboard Technology

  19. Real-time Integration of Biological, Optical and Physical Oceanographic Data from Multiple Vessels and Nearshore Sites using a Wireless Network

    DTIC Science & Technology

    1997-09-30

    field experiments in Puget Sound. Each research vessel will use multi-sensor profiling instrument packages which obtain high-resolution physical...field deployment of the wireless network is planned for May-July, 1998, at Orcas Island, WA. IMPACT We expect that wireless communication systems will...East Sound project to be a first step toward continental shelf and open ocean deployments with the next generation of wireless and satellite

  20. The design of a real-time formative evaluation of the implementation process of lifestyle interventions at two worksites using a 7-step strategy (BRAVO@Work).

    PubMed

    Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem

    2012-08-07

    Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or a department, to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires) applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and after 6, 12 and 18 months.
This is one of the few studies to extensively and continuously monitor the natural course of the implementation process of a WHPP by a formative evaluation using a mix of quantitative and qualitative methods on different organizational levels (i.e. management, project group, employees) with an embedded scientist on site. NTR2861.

  1. Antigen Masking During Fixation and Embedding, Dissected

    PubMed Central

    Scalia, Carla Rossana; Boi, Giovanna; Bolognesi, Maddalena Maria; Riva, Lorella; Manzoni, Marco; DeSmedt, Linde; Bosisio, Francesca Maria; Ronchi, Susanna; Leone, Biagio Eugenio; Cattoretti, Giorgio

    2016-01-01

    Antigen masking in routinely processed tissue is a poorly understood process caused by multiple factors. We sought to dissect the effect on antigenicity of each step of processing by using frozen sections as proxies of the whole tissue. An equivalent extent of antigen masking occurs across variable fixation times at room temperature. Most antigens benefit from longer fixation times (>24 hr) for optimal detection after antigen retrieval (AR; for example, Ki-67, bcl-2, ER). The transfer to a graded alcohol series results in an enhanced staining effect, reproduced by treating the sections with detergents, possibly because of a better access of the polymeric immunohistochemical detection system to tissue structures. A second round of masking occurs upon entering the clearing agent, mostly at the paraffin embedding step. This may depend on the removal of non-freezable water. AR fully reverses the masking due both to the fixation time and the paraffin embedding. AR itself destroys some epitopes which do not survive routine processing. Processed frozen sections are a tool to investigate fixation and processing requirements for antigens in routine specimens. PMID:27798289

  2. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization (each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized), is about 16 times faster than a sequential simulation on a CPU. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
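A single serial realization of the SSA (Gillespie's direct method) for a one-channel toy model A → B can be sketched as below; the model and rate constant are made up for illustration, and the GPU method in the record runs many such realizations concurrently.

```python
import random

# Minimal serial Gillespie SSA (direct method) for an isomerization A -> B.
# Illustrative sketch only: the toy model and rate constant are invented;
# the paper's GPU implementation runs many such realizations in parallel.

def ssa_isomerization(n_a0=100, k=1.0, t_end=5.0, seed=42):
    """Return the number of A molecules remaining at time t_end."""
    rng = random.Random(seed)
    t, n_a = 0.0, n_a0
    while t < t_end and n_a > 0:
        propensity = k * n_a                 # a(x) for the single channel A -> B
        t += rng.expovariate(propensity)     # exponential time to next reaction
        if t < t_end:
            n_a -= 1                         # fire the reaction
    return n_a
```

Because each realization is an independent stream of random draws, batching thousands of them (one per GPU thread) is the natural parallelization the record describes.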

  3. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization (each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized), is about 16 times faster than a sequential simulation on a CPU. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  4. Efficient hybrid metrology for focus, CD, and overlay

    NASA Astrophysics Data System (ADS)

    Tel, W. T.; Segers, B.; Anunciado, R.; Zhang, Y.; Wong, P.; Hasan, T.; Prentice, C.

    2017-03-01

    With the advent of multiple patterning techniques in the semiconductor industry, metrology has progressively become a burden. With multiple patterning techniques such as Litho-Etch-Litho-Etch and Sidewall-Assisted Double Patterning, the number of processing steps has increased significantly, and so has the amount of metrology needed for both control and yield monitoring. The amount of metrology increases with each node, as more layers need multiple patterning and more patterning steps are needed per layer. In addition, there is the need for guided defect inspection, which in itself requires substantially denser focus, overlay, and CD metrology than before. Metrology efficiency will therefore be crucial for the next semiconductor nodes. ASML's emulated wafer concept offers a highly efficient method of hybrid metrology for focus, CD, and overlay. In this concept, metrology is combined with the scanner's sensor data in order to predict the on-product performance. The principle underlying the method is to isolate and estimate individual root causes, which are then combined to compute the on-product performance. The goal is to use all the information available to avoid ever-increasing amounts of metrology.

  5. Electronic measurement apparatus movable in a cased borehole and compensating for casing resistance differences

    DOEpatents

    Vail, W.B. III.

    1991-12-24

    Methods of operation are described for an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well. 6 figures.

  6. Electronic measurement apparatus movable in a cased borehole and compensating for casing resistance differences

    DOEpatents

    Vail, III, William B.

    1991-01-01

    Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well.

  7. Use of commercial video games to improve postural balance in patients with multiple sclerosis: A systematic review and meta-analysis of randomised controlled clinical trials.

    PubMed

    Parra-Moreno, M; Rodríguez-Juan, J J; Ruiz-Cárdenas, J D

    2018-03-07

    Commercial video games are considered an effective tool to improve postural balance in different populations. However, the effectiveness of these video games for patients with multiple sclerosis (MS) is unclear. To analyse existing evidence on the effects of commercial video games on postural balance in patients with MS. We conducted a systematic literature search on 11 databases (Academic-Search Complete, AMED, CENTRAL, CINAHL, WoS, IBECS, LILACS, Pubmed/Medline, Scielo, SPORTDiscus, and Science Direct) using the following terms: "multiple sclerosis", videogames, "video games", exergam*, "postural balance", posturography, "postural control", balance. Risk of bias was analysed by 2 independent reviewers. We conducted 3 fixed-effect meta-analyses and calculated the difference of means (DM) and the 95% confidence interval (95% CI) for the Four Step Square Test, Timed 25-Foot Walk, and Berg Balance Scale (BBS). Five randomized controlled trials were included in the qualitative systematic review and 4 in the meta-analysis. We found no significant differences between the video game therapy group and the control group in Four Step Square Test (DM: -.74; 95% CI, -2.79 to 1.32; P=.48; I²=0%) and Timed 25-Foot Walk scores (DM: .15; 95% CI, -1.06 to .76; P=.75; I²=0%). We did observe intergroup differences in BBS scores in favour of video game therapy (DM: 5.30; 95% CI, 3.39-7.21; P<.001; I²=0%), but these were not greater than the minimum detectable change reported in the literature. The effectiveness of commercial video game therapy for improving postural balance in patients with MS is limited. Copyright © 2018 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
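The fixed-effect pooling used in meta-analyses like this one can be sketched in a few lines: each trial's mean difference is weighted by its inverse variance, and the weighted average with its 95% CI is reported. The trial numbers below are hypothetical illustration values, not the review's data.

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooling of per-trial mean differences.

    effects: list of mean differences; ses: their standard errors.
    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical mean differences and standard errors from three trials:
pooled, (lo, hi) = fixed_effect_pool([5.0, 5.8, 4.9], [1.5, 2.0, 1.8])
```

Precise trials (small standard errors) dominate the pooled estimate, which is why a single large trial can outweigh several small ones.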

  8. Neuropsychological, balance, and mobility risk factors for falls in people with multiple sclerosis: a prospective cohort study.

    PubMed

    Hoang, Phu D; Cameron, Michelle H; Gandevia, Simon C; Lord, Stephen R

    2014-03-01

    To determine whether impaired performance in a range of vision, proprioception, neuropsychological, balance, and mobility tests and pain and fatigue are associated with falls in people with multiple sclerosis (PwMS). Prospective cohort study with 6-month follow-up. A multiple sclerosis (MS) physiotherapy clinic. Community-dwelling people (N=210; age range, 21-74y) with MS (Disease Steps 0-5). Not applicable. Incidence of falls during 6 months' follow-up. In the 6-month follow-up period, 83 participants (39.7%) experienced no falls, 57 (27.3%) fell once or twice, and 69 (33.0%) fell 3 or more times. Frequent falling (≥3) was associated with increased postural sway (eyes open and closed), poor leaning balance (as assessed with the coordinated stability task), slow choice stepping reaction time, reduced walking speed, reduced executive functioning (as assessed with the difference between Trail Making Test Part B and Trail Making Test Part A), reduced fine motor control (performance on the 9-Hole Peg Test [9-HPT]), and reported leg pain. Increased sway with the eyes closed, poor coordinated stability, and reduced performance in the 9-HPT were identified as variables that significantly and independently discriminated between frequent fallers and nonfrequent fallers (model χ²(3)=30.1, P<.001). The area under the receiver operating characteristic curve for this model was .712 (95% confidence interval, .638-.785). The study reveals important balance, coordination, and cognitive determinants of falls in PwMS. These should assist the development of effective strategies for prevention of falls in this high-risk group. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
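The stiffness problem described here, and the general idea of damping fast process rates, can be illustrated with a toy rejection-free KMC step. This is a minimal sketch of rate throttling in general, not the SQERTSS algorithm itself (which ranks processes by observed event frequency and staggers the scaling across quasi-equilibrated ranks); the rates and threshold are made up.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (Gillespie) KMC step: pick a process with
    probability proportional to its rate and draw an exponential
    waiting time from the total rate."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1          # fallback guards float round-off
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

def throttle(rates, threshold):
    """Toy throttling: cap any rate above the threshold so fast
    frivolous processes stop dominating event selection
    (illustrative only, not the full SQERTSS scheme)."""
    return [min(k, threshold) for k in rates]

rng = random.Random(0)
rates = [1e6, 1e6, 1.0]        # two fast frivolous processes, one slow
throttled = throttle(rates, 1e2)
chosen, dt = kmc_step(throttled, rng)
```

Without throttling, the slow process would be selected roughly once per two million events; capping the fast rates makes it likely within a few hundred, at the cost of distorted time steps, which is exactly the trade-off the abstract notes for transient periods.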

  10. Optimization of airport security lanes

    NASA Astrophysics Data System (ADS)

    Chen, Lin

    2018-05-01

    The current airport security management system is widely implemented around the world to ensure the safety of passengers, but it might not be an optimal one. This paper aims to seek a better security system, one that maximizes security while minimizing inconvenience to passengers. Firstly, we apply a Petri net model to analyze the steps where the main bottlenecks lie. Based on average tokens and time transitions, the most time-consuming steps of the security process can be found, including inspection of passengers' identification and documents, preparing belongings to be scanned, and the process of retrieving belongings. Then, we develop a queuing model to identify the factors affecting those time-consuming steps. As for future improvements, effective measures include converting the current system to a single-queue, multi-server layout, intelligently predicting the number of security checkpoints that should be opened, and building green biological convenience lanes. Furthermore, to test the theoretical results, we apply some data to simulate the model, and the simulation results are consistent with what we obtained through modeling. Finally, we apply our queuing model to a multi-cultural background. The result suggests that by quantifying and modifying the variance in wait time, the model can be applied to individuals with various customs and habits. Generally speaking, our paper considers multiple affecting factors, employs several models, and performs extensive calculations, making it practical and reliable for real-world use. In addition, with more precise data available, we can further test and improve our models.
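The single-queue, multi-server layout recommended above can be compared against alternatives with the standard M/M/c waiting-time result (Erlang C). The arrival and service rates below are hypothetical illustration values, not data from the paper.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue for an M/M/c system (one shared queue, c servers).

    lam: arrival rate, mu: per-server service rate, c: number of servers.
    Returns W_q = P(wait) / (c*mu - lam), using the Erlang C formula.
    """
    a = lam / mu                       # offered load in erlangs
    rho = a / c                        # server utilisation
    if rho >= 1:
        raise ValueError("unstable queue: utilisation must be below 1")
    s = sum(a ** k / math.factorial(k) for k in range(c))
    tail = a ** c / math.factorial(c)
    p_wait = tail / ((1 - rho) * s + tail)
    return p_wait / (c * mu - lam)

# Hypothetical rates: 3 passengers/min arriving, each lane clears 1/min.
w4 = erlang_c_wait(3.0, 1.0, 4)   # 4 open checkpoints
w6 = erlang_c_wait(3.0, 1.0, 6)   # 6 open checkpoints
```

Evaluating this formula over candidate checkpoint counts is one simple way to "predict the number of security checkpoints that should be opened" for a forecast arrival rate.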

  11. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events-allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.

  12. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at the population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs, as well as revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region.
This multi-step approach reinforces the differing relationship species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  13. A basket two-part model to analyze medical expenditure on interdependent multiple sectors.

    PubMed

    Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji

    2018-05-01

    This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze the complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.

  14. Better dual-task processing in simultaneous interpreters

    PubMed Central

    Strobach, Tilo; Becker, Maxi; Schubert, Torsten; Kühn, Simone

    2015-01-01

    Simultaneous interpreting (SI) is a highly complex activity and requires the performance and coordination of multiple, simultaneous tasks: analysis and understanding of the discourse in a first language, reformulating linguistic material, storing of intermediate processing steps, and language production in a second language, among others. It is, however, an open issue whether persons with experience in SI possess superior skills in coordination of multiple tasks and whether they are able to transfer these skills to lab-based dual-task situations. Within the present study, we set out to explore whether interpreting experience is associated with related higher-order executive functioning in the context of dual-task situations of the Psychological Refractory Period (PRP) type. In this PRP situation, we found faster reaction times in participants with experience in simultaneous interpretation in contrast to control participants without such experience. Thus, simultaneous interpreters possess superior skills in coordination of multiple tasks in lab-based dual-task situations. PMID:26528232

  15. Trehalose glycopolymer resists allow direct writing of protein patterns by electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Bat, Erhan; Lee, Juneyoung; Lau, Uland Y.; Maynard, Heather D.

    2015-03-01

    Direct-write patterning of multiple proteins on surfaces is of tremendous interest for a myriad of applications. Precise arrangement of different proteins at increasingly smaller dimensions is a fundamental challenge to apply the materials in tissue engineering, diagnostics, proteomics and biosensors. Herein, we present a new resist that protects proteins during electron-beam exposure and its application in direct-write patterning of multiple proteins. Polymers with pendant trehalose units are shown to effectively crosslink to surfaces as negative resists, while at the same time providing stabilization to proteins during the vacuum and electron-beam irradiation steps. In this manner, arbitrary patterns of several different classes of proteins such as enzymes, growth factors and immunoglobulins are realized. Utilizing the high-precision alignment capability of electron-beam lithography, surfaces with complex patterns of multiple proteins are successfully generated at the micrometre and nanometre scale without requiring cleanroom conditions.

  16. Realization of quantum gates with multiple control qubits or multiple target qubits in a cavity

    NASA Astrophysics Data System (ADS)

    Waseem, Muhammad; Irfan, Muhammad; Qamar, Shahid

    2015-06-01

    We propose a scheme to realize a three-qubit controlled phase gate and a multi-qubit controlled NOT gate of one qubit simultaneously controlling n target qubits with a four-level quantum system in a cavity. The implementation time for the multi-qubit controlled NOT gate is independent of the number of qubits. The three-qubit phase gate is generalized to an n-qubit phase gate with multiple control qubits. The number of steps is reduced linearly compared to the conventional gate decomposition method. Our scheme can be applied to various types of physical systems, such as superconducting qubits coupled to a resonator and trapped atoms in a cavity. Our scheme does not require adjustment of level spacing during the gate implementation. We also show the implementation of the Deutsch-Jozsa algorithm. Finally, we discuss the imperfections due to cavity decay and the possibility of physical implementation of our scheme.

  17. Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.

    PubMed

    Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong

    2015-07-01

    To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images was proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized by a following edge-tracing step. To verify the proposed method, we compared it with several other edge detection methods with good robustness to noise, in experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm is fast enough for real-time processing and achieves an edge detection accuracy of 75% or more. Thus, the proposed method is well suited for fast and accurate edge detection of medical ultrasound images. © The Author(s) 2014.

  18. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
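Method M1, building a reduced basis from solution snapshots alone, amounts to a truncated SVD of the snapshot matrix; M2 would simply append time-derivative snapshots as extra columns. A minimal sketch with a made-up two-mode toy system:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Rank-r POD basis from a snapshot matrix whose columns are solution
    states at successive time steps. Left singular vectors of the SVD give
    the energy-optimal basis; the singular values rank mode importance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Toy snapshot set: a 3-dimensional state driven by two temporal modes.
t = np.linspace(0.0, 1.0, 20)
X = (np.outer(np.sin(np.pi * t), [1.0, 0.0, 0.0]).T
     + 0.1 * np.outer(np.cos(np.pi * t), [0.0, 1.0, 0.0]).T)

Phi, s = pod_basis(X, 2)                       # basis is 3 x 2
err = np.linalg.norm(X - Phi @ (Phi.T @ X))    # projection error
```

Because the toy data are exactly rank two, the rank-2 basis reconstructs the snapshots to machine precision; for real dynamics, the neglected singular values control the error, as the bounds in the paper formalize.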

  19. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  20. Speckle evolution with multiple steps of least-squares phase removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Mingzhou; Dainty, Chris; Roux, Filippus S.

    2011-08-15

    We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.

  1. Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties

    NASA Astrophysics Data System (ADS)

    Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.

    2017-12-01

    Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making, which constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with the uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model, together with measures of the risk of decision-making errors and the rank uncertainty degree, to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information at each link of the FODM chain and enable risk-informed decisions with higher reliability.
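The TOPSIS step at the core of the decision-making stage ranks alternatives by closeness to an ideal point. The sketch below shows only the deterministic version (the paper's SMAA-TOPSIS model adds stochastic criteria on top of this), with hypothetical flood-control alternatives and weights:

```python
import math

def topsis(matrix, weights, benefit):
    """Deterministic TOPSIS: vector-normalize and weight the decision
    matrix (alternatives x criteria), find the ideal and anti-ideal
    points, and score each alternative by relative closeness in [0, 1]."""
    n_alt, n_crit = len(matrix), len(weights)
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in V:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical alternatives scored on (peak-flow reduction, spill volume):
# higher reduction is better, lower spill is better.
scores = topsis([[0.9, 0.2], [0.6, 0.1], [0.3, 0.4]],
                weights=[0.6, 0.4], benefit=[True, False])
```

An alternative sitting at the ideal point scores 1, at the anti-ideal point 0; SMAA then asks how often each alternative wins when the weights and criteria values are drawn from distributions.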

  2. Multispectra CWT-based algorithm (MCWT) in mass spectra for peak extraction.

    PubMed

    Hsueh, Huey-Miin; Kuo, Hsun-Chih; Tsai, Chen-An

    2008-01-01

    An important objective in mass spectrometry (MS) is to identify a set of biomarkers that can be used to distinguish patients between distinct treatments (or conditions) from tens or hundreds of spectra. A common two-step approach involving peak extraction and quantification is employed to identify the features of scientific interest. The selected features are then used for further investigation to understand the underlying biological mechanism of individual proteins or to develop genomic biomarkers for early diagnosis. However, the use of inadequate or ineffective peak detection and peak alignment algorithms in the peak extraction step may lead to a high rate of false positives, so it is crucial to reduce the false positive rate when detecting biomarkers from tens or hundreds of spectra. Here a new procedure is introduced for feature extraction in mass spectrometry data that extends the continuous wavelet transform-based (CWT-based) algorithm to multiple spectra. The proposed multispectra CWT-based algorithm (MCWT) can not only perform peak detection for multiple spectra but also carry out peak alignment at the same time. The authors' MCWT algorithm constructs a reference, which integrates information from multiple raw spectra, for feature extraction. The algorithm is applied to a SELDI-TOF mass spectra data set provided by CAMDA 2006 with known polypeptide m/z positions. This new approach is easy to implement and outperforms the existing peak extraction method from the Bioconductor PROcess package.
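For a single spectrum, the CWT-based peak detection that MCWT generalizes is available off the shelf: a candidate peak must persist as a ridge across several wavelet widths, which suppresses narrow noise spikes. A sketch on a synthetic two-peak spectrum (the MCWT extension would first build a reference from many spectra at once):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": two Gaussian peaks plus noise, a hypothetical
# stand-in for a SELDI-TOF trace (not the CAMDA 2006 data).
mz = np.arange(1000)
rng = np.random.default_rng(0)
spectrum = (np.exp(-0.5 * ((mz - 300) / 8.0) ** 2)
            + 0.7 * np.exp(-0.5 * ((mz - 620) / 10.0) ** 2)
            + 0.02 * rng.standard_normal(mz.size))

# CWT-based detection: ridge lines must persist across these widths.
peaks = find_peaks_cwt(spectrum, widths=np.arange(4, 20))
```

The `widths` range plays the role of the expected peak scales; choosing it per instrument is part of what a multi-spectrum reference can stabilize.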

  3. Terminal-Area Aircraft Intent Inference Approach Based on Online Trajectory Clustering.

    PubMed

    Yang, Yang; Zhang, Jun; Cai, Kai-quan

    2015-01-01

    Terminal-area aircraft intent inference (T-AII) is a prerequisite to detect and avoid potential aircraft conflict in the terminal airspace. T-AII challenges the state-of-the-art AII approaches due to the uncertainties of the air traffic situation, in particular the undefined flight routes and frequent maneuvers. In this paper, a novel T-AII approach is introduced to address these limitations by solving the problem in two steps: intent modeling and intent inference. In the modeling step, an online trajectory clustering procedure is designed for recognizing the real-time available routes in place of missing planned routes. In the inference step, we then present a probabilistic T-AII approach based on multiple flight attributes to improve the inference performance in maneuvering scenarios. The proposed approach is validated with real radar trajectory and flight attribute data of 34 days collected from the Chengdu terminal area in China. Preliminary results show the efficacy of the presented approach.

  4. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
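The basic Arrhenius fit that isoconversional methods repeat at each conversion level can be sketched directly. With synthetic rate constants generated from a single known activation energy, the fit recovers it exactly; for a real multi-step process, repeating this fit at successive conversions would yield the drifting E_a the review discusses. The rate constants below are fabricated for illustration.

```python
import math

def activation_energy(temps_K, rates):
    """Arrhenius estimate of E_a (J/mol) from rate constants at several
    temperatures: the slope of ln k versus 1/T equals -E_a/R."""
    R = 8.314  # gas constant, J/(mol K)
    x = [1.0 / T for T in temps_K]
    y = [math.log(k) for k in rates]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R

# Hypothetical single-step data with E_a = 80 kJ/mol exactly:
Ea_true = 80e3
temps = [300.0, 320.0, 340.0, 360.0]
rates = [1e13 * math.exp(-Ea_true / (8.314 * T)) for T in temps]
Ea = activation_energy(temps, rates)
```

A curved ln k versus 1/T plot, i.e. an E_a that depends on where you fit, is the signature of the variable activation energy the review interprets as a collective parameter over individual steps.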

  5. Motofit - integrating neutron reflectometry acquisition, reduction and analysis into one, easy to use, package

    NASA Astrophysics Data System (ADS)

    Nelson, Andrew

    2010-11-01

    The efficient use of complex neutron scattering instruments is often hindered by the complex nature of their operating software. This complexity exists at each experimental step: data acquisition, reduction and analysis, with each step being as important as the previous. For example, whilst command line interfaces are powerful at automated acquisition they often reduce accessibility by novice users and sometimes reduce the efficiency for advanced users. One solution to this is the development of a graphical user interface which allows the user to operate the instrument by a simple and intuitive "push button" approach. This approach was taken by the Motofit software package for analysis of multiple contrast reflectometry data. Here we describe the extension of this package to cover the data acquisition and reduction steps for the Platypus time-of-flight neutron reflectometer. Consequently, the complete operation of an instrument is integrated into a single, easy to use, program, leading to efficient instrument usage.

  6. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

    2016-07-01

    This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root-zone soil moisture at different soil layers down to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data, either at each time step or at flexible time steps. The EnKF technique reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was also observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
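The EnKF analysis step that corrects a model forecast with observations can be sketched as a stochastic ensemble update. The three-layer soil column, observation operator, and error variances below are made-up illustration values, not the paper's configuration:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step: nudge each ensemble member toward a
    perturbed observation using the sample Kalman gain.

    ensemble: (n_state, n_members) forecast states (e.g. soil moisture
    by layer); obs_op: observation operator H; obs_var: obs error variance.
    """
    _, n_mem = ensemble.shape
    Hx = obs_op @ ensemble                            # predicted observations
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = Hx - Hx.mean(axis=1, keepdims=True)
    Pxy = X @ Y.T / (n_mem - 1)                       # state-obs covariance
    Pyy = Y @ Y.T / (n_mem - 1) + obs_var * np.eye(Hx.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var),
                                          size=Hx.shape)
    return ensemble + K @ (perturbed - Hx)

rng = np.random.default_rng(1)
# Toy 3-layer column; only the surface layer (row 0) is observed.
ens = rng.normal(0.25, 0.05, size=(3, 200))
H = np.array([[1.0, 0.0, 0.0]])
updated = enkf_update(ens, np.array([0.35]), H, 0.001 ** 2, rng)
```

Because the observation error here is tiny relative to the ensemble spread, the observed surface layer is pulled almost all the way to 0.35, while the unobserved deeper layers move only through their sampled cross-covariances, which is how the dual scheme propagates surface information downward.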

  7. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation procedures, based on either a uniform distribution or the nonparametric maximum likelihood estimate (NPMLE). Clin Cancer Res; 22(23); 5629-35. ©2016 AACR.
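
    The three single-point assignment rules of the sensitivity analysis can be sketched as follows. This is a hypothetical Python translation of the idea only; the procedures the guide describes are implemented in SAS and R, and the interval endpoints below are made up.

```python
def assign_progression_time(lower, upper, rule="midpoint"):
    """Single-point assignment inside a censoring interval (lower, upper].

    'upper' mirrors the standard analysis (progression dated at the first
    radiologic exam showing progressive disease); 'midpoint' and 'lower'
    are the two sensitivity assignments."""
    if rule == "midpoint":
        return (lower + upper) / 2.0
    if rule == "upper":
        return upper
    if rule == "lower":
        return lower
    raise ValueError(rule)

# hypothetical interval: last progression-free exam at month 6,
# first exam showing progression at month 9
print(assign_progression_time(6.0, 9.0, "midpoint"))  # → 7.5
```

    Re-running the survival analysis under each rule shows how sensitive the progression-free survival estimate is to where events are placed within their censoring intervals.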

  8. UHPLC-Q-TOF-MS/MS Method Based on Four-Step Strategy for Metabolism Study of Fisetin in Vitro and in Vivo.

    PubMed

    Zhang, Xia; Yin, Jintuo; Liang, Caijuan; Sun, Yupeng; Zhang, Lantong

    2017-12-20

    Fisetin has been identified as an anticancer agent with antiangiogenic properties in mice. However, its metabolism in vitro (rat liver microsomes) and in vivo (rats) has not previously been characterized. In this study, ultra-high-performance liquid chromatography coupled with hybrid triple quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS) was employed for data acquisition, and a four-step analytical strategy was developed to screen and identify metabolites. First, full-scan data were acquired using a multiple mass defect filter (MMDF) combined with dynamic background subtraction (DBS). Then, PeakView 1.2 and MetabolitePilot 1.5 software were used to process the data and screen for possible metabolites. Finally, metabolites were identified according to mass measurement and retention time, and isomers were distinguished based on the Clog P parameter. Using the proposed method, 53 metabolites in vivo and 14 metabolites in vitro were characterized. The metabolic pathways mainly included oxidation, reduction, hydrogenation, methylation, sulfation, and glucuronidation.
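
    The mass-defect-filtering idea behind the first step can be sketched as follows. The peak list, defect templates, and tolerance are hypothetical; real MMDF implementations define defect windows per expected metabolite class around the parent drug.

```python
def mass_defect(mz):
    """Fractional part of a measured m/z value."""
    return mz - int(mz)

def mmdf(peaks, templates, tol=0.01):
    """Keep only peaks whose mass defect falls within `tol` of one of
    the template defects expected for likely metabolites."""
    return [mz for mz in peaks
            if any(abs(mass_defect(mz) - d) <= tol for d in templates)]

# fisetin (C15H10O6) has [M+H]+ near m/z 287.055; a hypothetical peak list
peaks = [287.055, 301.071, 350.500, 463.087]
print(mmdf(peaks, templates=[0.055, 0.071, 0.087]))
```

    Peaks with implausible mass defects (here m/z 350.500) are discarded before the software-based metabolite screening, which is what makes the full-scan data tractable.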

  9. From the Boltzmann to the Lattice-Boltzmann Equation:. Beyond BGK Collision Models

    NASA Astrophysics Data System (ADS)

    Philippi, Paulo Cesar; Hegele, Luiz Adolfo; Surmas, Rodrigo; Siebert, Diogo Nardelli; Dos Santos, Luís Orlando Emerich

    In this work, we present a derivation of the lattice-Boltzmann equation directly from the linearized Boltzmann equation, combining the following main features: multiple relaxation times and thermodynamic consistency in the description of non-isothermal compressible flows. The method presented here is based on the discretization of increasing-order kinetic models of the Boltzmann equation. Following a Gross-Jackson procedure, the linearized collision term is expanded in Hermite polynomial tensors and the resulting infinite series is diagonalized after a chosen integer N, establishing the order of approximation of the collision term. The velocity space is discretized in accordance with a quadrature method based on prescribed abscissas (Philippi et al., Phys. Rev. E 73, 056702, 2006). The problem of describing the energy transfer is discussed in relation to the order of approximation of a two-relaxation-times lattice Boltzmann model. The velocity-step, temperature-step, and shock tube problems are investigated, adopting lattices with 37, 53, and 81 velocities.

  10. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
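
    The core of wavelet-based compression can be illustrated with a one-level Haar transform on a toy 1-D signal. This is a sketch under simplifying assumptions: the paper's framework operates on large 3-D turbulence fields with multi-level decompositions, whereas the example below only shows the threshold-small-detail-coefficients idea.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and differences (detail), each half the input length."""
    avg = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avg, det

def compress(signal, threshold):
    """Zero out small detail coefficients -- the essence of
    wavelet-based compression."""
    avg, det = haar_step(signal)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return avg, det

def reconstruct(avg, det):
    """Invert the Haar step from (possibly thresholded) coefficients."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

sig = [4.0, 4.1, 8.0, 2.0]
avg, det = compress(sig, threshold=0.5)
# the small 4.0/4.1 fluctuation is smoothed away; the 8.0/2.0 jump survives
print(reconstruct(avg, det))
```

    Storing only the surviving coefficients gives the compression, while keeping coarse approximation levels enables the multi-resolution analysis the paper describes.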

  11. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  12. Multiple geographic origins of commensalism and complex dispersal history of Black Rats.

    PubMed

    Aplin, Ken P; Suzuki, Hitoshi; Chinen, Alejandro A; Chesser, R Terry; Ten Have, José; Donnellan, Stephen C; Austin, Jeremy; Frost, Angela; Gonzalez, Jean Paul; Herbreteau, Vincent; Catzeflis, Francois; Soubrier, Julien; Fang, Yin-Ping; Robins, Judith; Matisoo-Smith, Elizabeth; Bastos, Amanda D S; Maryanto, Ibnu; Sinaga, Martua H; Denys, Christiane; Van Den Bussche, Ronald A; Conroy, Chris; Rowe, Kevin; Cooper, Alan

    2011-01-01

    The Black Rat (Rattus rattus) spread out of Asia to become one of the world's worst agricultural and urban pests, and a reservoir or vector of numerous zoonotic diseases, including the devastating plague. Despite the global scale and inestimable cost of their impacts on both human livelihoods and natural ecosystems, little is known of the global genetic diversity of Black Rats, the timing and directions of their historical dispersals, and the risks associated with contemporary movements. We surveyed mitochondrial DNA of Black Rats collected across their global range as a first step towards obtaining an historical genetic perspective on this socioeconomically important group of rodents. We found a strong phylogeographic pattern with well-differentiated lineages of Black Rats native to South Asia, the Himalayan region, southern Indochina, and northern Indochina to East Asia, and a diversification that probably commenced in the early Middle Pleistocene. We also identified two other currently recognised species of Rattus as potential derivatives of a paraphyletic R. rattus. Three of the four phylogenetic lineage units within R. rattus show clear genetic signatures of major population expansion in prehistoric times, and the distribution of particular haplogroups mirrors archaeologically and historically documented patterns of human dispersal and trade. Commensalism clearly arose multiple times in R. rattus and in widely separated geographic regions, and this may account for apparent regionalism in their associated pathogens. Our findings represent an important step towards a deeper understanding of the complex and influential relationship that has developed between Black Rats and humans, and invite a thorough re-examination of host-pathogen associations among Black Rats.

  13. Multiple Geographic Origins of Commensalism and Complex Dispersal History of Black Rats

    PubMed Central

    Aplin, Ken P.; Suzuki, Hitoshi; Chinen, Alejandro A.; Chesser, R. Terry; ten Have, José; Donnellan, Stephen C.; Austin, Jeremy; Frost, Angela; Gonzalez, Jean Paul; Herbreteau, Vincent; Catzeflis, Francois; Soubrier, Julien; Fang, Yin-Ping; Robins, Judith; Matisoo-Smith, Elizabeth; Bastos, Amanda D. S.; Maryanto, Ibnu; Sinaga, Martua H.; Denys, Christiane; Van Den Bussche, Ronald A.; Conroy, Chris; Rowe, Kevin; Cooper, Alan

    2011-01-01

    The Black Rat (Rattus rattus) spread out of Asia to become one of the world's worst agricultural and urban pests, and a reservoir or vector of numerous zoonotic diseases, including the devastating plague. Despite the global scale and inestimable cost of their impacts on both human livelihoods and natural ecosystems, little is known of the global genetic diversity of Black Rats, the timing and directions of their historical dispersals, and the risks associated with contemporary movements. We surveyed mitochondrial DNA of Black Rats collected across their global range as a first step towards obtaining an historical genetic perspective on this socioeconomically important group of rodents. We found a strong phylogeographic pattern with well-differentiated lineages of Black Rats native to South Asia, the Himalayan region, southern Indochina, and northern Indochina to East Asia, and a diversification that probably commenced in the early Middle Pleistocene. We also identified two other currently recognised species of Rattus as potential derivatives of a paraphyletic R. rattus. Three of the four phylogenetic lineage units within R. rattus show clear genetic signatures of major population expansion in prehistoric times, and the distribution of particular haplogroups mirrors archaeologically and historically documented patterns of human dispersal and trade. Commensalism clearly arose multiple times in R. rattus and in widely separated geographic regions, and this may account for apparent regionalism in their associated pathogens. Our findings represent an important step towards a deeper understanding of the complex and influential relationship that has developed between Black Rats and humans, and invite a thorough re-examination of host-pathogen associations among Black Rats. PMID:22073158

  14. Multi-Species Fluxes for the Parallel Quiet Direct Simulation (QDS) Method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Lim, C.-W.; Jermy, M. C.; Krumdieck, S. P.; Smith, M. R.; Lin, Y.-J.; Wu, J.-S.

    2011-05-01

    Fluxes of multiple species are implemented in the Quiet Direct Simulation (QDS) scheme for gas flows. Each molecular species streams independently, and all species are brought to local equilibrium at the end of each time step. The multi-species scheme is compared with a DSMC simulation on a test case of Mach 20 flow of a xenon/helium mixture over a forward-facing step. Depletion of the heavier species in the bow shock and the near-wall layer is seen. The multi-species QDS code is then used to model the flow in a pulsed-pressure chemical vapour deposition reactor set up for carbon film deposition. The injected gas is a mixture of methane and hydrogen, and the temporal development of the spatial distribution of methane over the substrate is tracked.

  15. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Often, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low-energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.

  16. On cat's eyes and multiple disjoint cells natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Báez, Elsa; Nicolás, Alfredo

    2014-10-01

    Natural convection fluid flow in air-filled tall tilted cavities is studied numerically with a direct projection method applied to the unsteady Boussinesq approximation in primitive variables. The study focuses on the so-called cat's eyes and multiple disjoint cells as the aspect ratio A and the angle of inclination ϕ of the cavity vary. Results have already been reported with primitive and stream function-vorticity variables; the former are validated against the latter, which in turn were validated through mesh-size and time-step independence studies. The new results, complemented with the previous ones, reveal invariant properties of the fluid motion and heat transfer in this thermal phenomenon, which is the novelty here.

  17. 2D-DIGE in Proteomics.

    PubMed

    Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny

    2017-01-01

    The two-dimensional difference gel electrophoresis (2D-DIGE) method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. Compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), 2D-DIGE provides faster and more reliable gel matching, limiting the impact of gel-to-gel variation, and also allows a good dynamic range for quantitative comparisons. By the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure, which we then apply to multiple biological questions.

  18. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulate the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points.
These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. 
The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations over time. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions and especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interactions. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in an environment with CANoe/MATLAB co-simulation. Eleven faults are injected with the failures originating in actuator hardware, sensor, controller hardware and software components. 
    A diagnostic matrix is established via simulations to represent the relationship between the faults and the test outcomes (also known as fault signatures). The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.
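
    The per-column Viterbi decoding used in the Lagrangian-relaxed subproblems can be sketched for a single two-state fault. All probabilities below are hypothetical toy values, and the real subproblems add Lagrange-multiplier terms from the coupling constraints to the per-step scores.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence (log-domain Viterbi) for one fault,
    given its observed test outcomes over time."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s at this time step
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# hypothetical fault model: 'ok'/'faulty' states, observed test pass/fail
states = ("ok", "faulty")
start = {"ok": 0.9, "faulty": 0.1}
trans = {"ok": {"ok": 0.95, "faulty": 0.05},
         "faulty": {"ok": 0.10, "faulty": 0.90}}
emit = {"ok": {"pass": 0.9, "fail": 0.1},
        "faulty": {"pass": 0.2, "fail": 0.8}}
print(viterbi(["pass", "fail", "fail", "fail"], states, start, trans, emit))
# → ['ok', 'faulty', 'faulty', 'faulty']
```

    Because each fault's subproblem decouples under the relaxation, one such Viterbi pass is run per fault, and the heuristic then repairs the combined solution into a primal feasible one.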

  19. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  20. Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface

    NASA Astrophysics Data System (ADS)

    Brocks, S.; Bareth, G.

    2016-06-01

    Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps, crop surface monitoring is enabled. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at the Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2, and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.

  1. Minimum number of days required for a reliable estimate of daily step count and energy expenditure, in people with MS who walk unaided.

    PubMed

    Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan

    2017-03-01

    The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE) in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5 ± 11.9 years; time since diagnosis = 6.5 ± 6.2 years; Patient Determined Disease Steps ≤ 3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations) and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory, and Bland-Altman analyses. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate that a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate that a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For the Bland-Altman analyses, all combinations of days, bar single-day combinations, resulted in a mean bias within ±10% when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
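
    The combinations-of-days comparison is straightforward to reproduce. The week of step counts below is hypothetical (the study used 7-day accelerometer data per participant), and the bias here mirrors the study's Bland-Altman criterion of a mean bias within ±10% of the 7-day mean.

```python
from itertools import combinations

# hypothetical one week of daily step counts for a single participant
week = [5200, 6100, 4800, 7000, 5600, 3900, 6400]
full_mean = sum(week) / 7.0

biases = {}  # number of days -> worst absolute % bias over all combinations
for k in range(1, 8):
    worst = 0.0
    for combo in combinations(week, k):
        m = sum(combo) / k
        worst = max(worst, abs(m - full_mean) / full_mean * 100.0)
    biases[k] = round(worst, 1)

print(biases)  # 127 combinations in total across k = 1..7
```

    The worst-case bias shrinks as more days enter the average, which is why single-day combinations fail the ±10% criterion while multi-day combinations pass.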

  2. Factors Associated With Ambulatory Activity in De Novo Parkinson Disease.

    PubMed

    Christiansen, Cory; Moore, Charity; Schenkman, Margaret; Kluger, Benzi; Kohrt, Wendy; Delitto, Anthony; Berman, Brian; Hall, Deborah; Josbeno, Deborah; Poon, Cynthia; Robichaud, Julie; Wellington, Toby; Jain, Samay; Comella, Cynthia; Corcos, Daniel; Melanson, Ed

    2017-04-01

    Objective ambulatory activity during daily living has not been characterized for people with Parkinson disease prior to initiation of dopaminergic medication. Our goal was to characterize ambulatory activity based on average daily step count and examine determinants of step count in nonexercising people with de novo Parkinson disease. We analyzed baseline data from a randomized controlled trial, which excluded people performing regular endurance exercise. Of 128 eligible participants (mean ± SD = 64.3 ± 8.6 years), 113 had complete accelerometer data, which were used to determine daily step count. Multiple linear regression was used to identify factors associated with average daily step count over 10 days. Candidate explanatory variable categories were (1) demographics/anthropometrics, (2) Parkinson disease characteristics, (3) motor symptom severity, (4) nonmotor and behavioral characteristics, (5) comorbidities, and (6) cardiorespiratory fitness. Average daily step count was 5362 ± 2890 steps per day. Five factors explained 24% of daily step count variability, with higher step count associated with higher cardiorespiratory fitness (10%), no fear/worry of falling (5%), lower motor severity examination score (4%), more recent time since Parkinson disease diagnosis (3%), and the presence of a cardiovascular condition (2%). Daily step count in nonexercising people recruited for this intervention trial with de novo Parkinson disease approached sedentary lifestyle levels. Further study is warranted for elucidating factors explaining ambulatory activity, particularly cardiorespiratory fitness, and fear/worry of falling. Clinicians should consider the costs and benefits of exercise and activity behavior interventions immediately after diagnosis of Parkinson disease to attenuate the health consequences of low daily step count. Video Abstract available for more insights from the authors (see Video, Supplemental Digital Content 1, http://links.lww.com/JNPT/A170).

  3. Executive Functions Underlying Multiplicative Reasoning: Problem Type Matters

    ERIC Educational Resources Information Center

    Agostino, Alba; Johnson, Janice; Pascual-Leone, Juan

    2010-01-01

    We investigated the extent to which inhibition, updating, shifting, and mental-attentional capacity ("M"-capacity) contribute to children's ability to solve multiplication word problems. A total of 155 children in Grades 3-6 (8- to 13-year-olds) completed a set of multiplication word problems at two levels of difficulty: one-step and multiple-step…

  4. Application of a nonrandomized stepped wedge design to evaluate an evidence-based quality improvement intervention: a proof of concept using simulated data on patient-centered medical homes.

    PubMed

    Huynh, Alexis K; Lee, Martin L; Farmer, Melissa M; Rubenstein, Lisa V

    2016-10-21

    Stepped wedge designs have gained recognition as a method for rigorously assessing implementation of evidence-based quality improvement interventions (QIIs) across multiple healthcare sites. In theory, this design uses random assignment of sites to successive QII implementation start dates based on a timeline determined by evaluators. However, in practice, QII timing is often controlled more by site readiness. We propose an alternate version of the stepped wedge design that does not assume the randomized timing of implementation while retaining the method's analytic advantages and applying to a broader set of evaluations. To test the feasibility of a nonrandomized stepped wedge design, we developed simulated data on patient care experiences and on QII implementation that had the structures and features of the expected data from a planned QII. We then applied the design in anticipation of performing an actual QII evaluation. We used simulated data on 108,000 patients to model nonrandomized stepped wedge results from QII implementation across nine primary care sites over 12 quarters. The outcome we simulated was change in a single self-administered question on access to care used by Veterans Health Administration (VA), based in the United States, as part of its quarterly patient ratings of quality of care. Our main predictors were QII exposure and time. Based on study hypotheses, we assigned values of 4 to 11 % for improvement in access when sites were first exposed to implementation and 1 to 3 % improvement in each ensuing time period thereafter when sites continued with implementation. We included site-level (practice size) and respondent-level (gender, race/ethnicity) characteristics that might account for nonrandomized timing in site implementation of the QII. We analyzed the resulting data as a repeated cross-sectional model using HLM 7 with a three-level hierarchical data structure and an ordinal outcome. 
Levels in the data structure included patient ratings, timing of adoption of the QII, and primary care site. We were able to demonstrate a statistically significant improvement in adoption of the QII, as postulated in our simulation. The linear time trend while sites were in the control state was not significant, also as expected in the real life scenario of the example QII. We concluded that the nonrandomized stepped wedge design was feasible within the parameters of our planned QII with its data structure and content. Our statistical approach may be applicable to similar evaluations.
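
    A minimal sketch of the simulated nonrandomized exposure pattern might look like this. The site readiness draws, effect sizes, and per-site assignment rules below are stand-ins for the study's actual simulation parameters (which assigned 4-11% improvement at first exposure and 1-3% per ensuing quarter across nine sites and twelve quarters).

```python
import random

random.seed(1)
sites, quarters = 9, 12
# nonrandomized design: each site's QII start quarter reflects readiness,
# not an evaluator-set timeline
start = {s: random.randint(2, 10) for s in range(sites)}

def access_improvement(site, q):
    """Simulated % improvement in the access rating for one site-quarter:
    zero before exposure, an initial jump at first exposure, then a
    smaller gain for each ensuing exposed quarter."""
    if q < start[site]:
        return 0.0                      # control periods: no QII effect
    initial = 4 + (site % 8)            # stand-in for the 4-11% range
    ensuing = 1 + (site % 3)            # stand-in for the 1-3% range
    return initial + ensuing * (q - start[site])

grid = [[access_improvement(s, q) for q in range(quarters)] for s in range(sites)]
print(grid[0])  # one site's trajectory across the 12 quarters
```

    The staggered zeros-then-ramps pattern across rows is the stepped wedge; the analysis then asks whether exposure, net of the secular time trend, explains the improvement.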

  5. Parsing the roles of neck-linker docking and tethered head diffusion in the stepping dynamics of kinesin.

    PubMed

    Zhang, Zhechun; Goldtzvik, Yonathan; Thirumalai, D

    2017-11-14

    Kinesin walks processively on microtubules (MTs) in an asymmetric hand-over-hand manner consuming one ATP molecule per 16-nm step. The individual contributions due to docking of the approximately 13-residue neck linker to the leading head (deemed to be the power stroke) and diffusion of the trailing head (TH) that contributes in propelling the motor by 16 nm have not been quantified. We use molecular simulations by creating a coarse-grained model of the MT-kinesin complex, which reproduces the measured stall force as well as the force required to dislodge the motor head from the MT, to show that nearly three-quarters of the step occurs by bidirectional stochastic motion of the TH. However, docking of the neck linker to the leading head constrains the extent of diffusion and minimizes the probability that kinesin takes side steps, implying that both events are necessary in the motility of kinesin and for the maintenance of processivity. Surprisingly, we find that during a single step, the TH stochastically hops multiple times between the geometrically accessible neighboring sites on the MT before forming a stable interaction with the target binding site with correct orientation between the motor head and the αβ-tubulin dimer.

  6. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation with the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to show that an iterative image reconstruction algorithm of this kind can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system with not only the Euler method but also lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
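    To make the link between the multiplicative update and the time step concrete, here is a minimal sketch of a simultaneous MART-style iteration on an invented 4x4 consistent, nonnegative system. It illustrates the general multiplicative-update family with the relaxation parameter `lam` read as a geometric Euler time step; it is not the paper's block-iterative scheme or its switched vector field.

```python
import numpy as np

# Hypothetical small consistent, strictly positive system A @ x_true = b
A = np.eye(4) + 0.1                      # well-conditioned demo matrix
x_true = np.array([1.0, 2.0, 0.5, 1.5])
b = A @ x_true

def smart_step(x, lam=0.5):
    """One simultaneous MART-style multiplicative update.  Read as a geometric
    (multiplicative) Euler step of a continuous-time system, lam plays the
    role of the discretization time step."""
    ratios = b / (A @ x)                                  # measured / predicted
    return x * np.exp(lam * (A.T @ np.log(ratios)) / A.sum(axis=0))

x = np.ones(4)                            # positive initial image
for _ in range(500):
    x = smart_step(x)
print(np.allclose(A @ x, b, atol=1e-6), np.all(x > 0))
```

    The multiplicative form keeps every iterate strictly positive, which is the non-negativity constraint the continuous-time analysis guarantees for consistent problems.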

  7. Real-time inextensible surgical thread simulation.

    PubMed

    Xu, Lang; Liu, Qian

    2018-03-27

    This paper discusses a real-time simulation method of inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. Comparisons with existing methods show that the surgical thread maintains constant length under large deformation once the direct distance constraint is applied in our method. The twining and knotting of multiple threads yield stable solutions for the contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
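    The distance-constraint projection at the heart of PBD can be illustrated with a minimal sketch. The paper solves the chain's coupled constraints directly with the tridiagonal matrix algorithm; the version below is the simpler iterative Gauss-Seidel projection that such a direct solve replaces, applied to an invented three-particle chain in 2D.

```python
import math

def project_distance_constraints(positions, rest_len, iterations=50):
    """Gauss-Seidel PBD projection of equal-length distance constraints along a
    particle chain (the paper's direct tridiagonal solve replaces this loop)."""
    for _ in range(iterations):
        for i in range(len(positions) - 1):
            (x0, y0), (x1, y1) = positions[i], positions[i + 1]
            dx, dy = x1 - x0, y1 - y0
            d = math.hypot(dx, dy)
            corr = 0.5 * (d - rest_len) / d   # split the correction between both particles
            positions[i]     = (x0 + corr * dx, y0 + corr * dy)
            positions[i + 1] = (x1 - corr * dx, y1 - corr * dy)
    return positions

# A stretched 3-particle chain relaxes back toward rest length 1.0
pts = project_distance_constraints([(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)], 1.0)
print(all(abs(math.hypot(q[0] - p[0], q[1] - p[1]) - 1.0) < 1e-6
          for p, q in zip(pts, pts[1:])))
```

    The iterative loop converges only after many sweeps; a direct tridiagonal solve enforces all constraints in one pass, which is why it improves stretching resistance per projection iteration.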

  8. Balance and postural skills in normal-weight and overweight prepubertal boys.

    PubMed

    Deforche, Benedicte I; Hills, Andrew P; Worringham, Charles J; Davies, Peter S W; Murphy, Alexia J; Bouckaert, Jacques J; De Bourdeaudhuij, Ilse M

    2009-01-01

    This study investigated differences in balance and postural skills in normal-weight versus overweight prepubertal boys. Fifty-seven 8-10-year-old boys were categorized overweight (N = 25) or normal-weight (N = 32) according to the International Obesity Task Force cut-off points for overweight in children. The Balance Master, a computerized pressure plate system, was used to objectively measure six balance skills: sit-to-stand, walk, step up/over, tandem walk (walking on a line), unilateral stance and limits of stability. In addition, three standardized field tests were employed: standing on one leg on a balance beam, walking heel-to-toe along the beam and the multiple sit-to-stand test. Overweight boys showed poorer performances on several items assessed on the Balance Master. Overweight boys had slower weight transfer (p < 0.05), lower rising index (p < 0.05) and greater sway velocity (p < 0.001) in the sit-to-stand test, greater step width while walking (p < 0.05) and lower speed when walking on a line (p < 0.01) compared with normal-weight counterparts. Performance on the step up/over test, the unilateral stance and the limits of stability were comparable between both groups. On the balance beam, overweight boys could not hold their balance on one leg as long (p < 0.001) and had fewer correct steps in the heel-to-toe test (p < 0.001) than normal-weight boys. Finally, overweight boys were slower in standing up and sitting down five times in the multiple sit-to-stand task (p < 0.01). This study demonstrates that when categorised by body mass index (BMI) level, overweight prepubertal boys displayed lower capacity on several static and dynamic balance and postural skills.

  9. Quantitative Analysis of Bioactive Compounds from Aromatic Plants by Means of Dynamic Headspace Extraction and Multiple Headspace Extraction-Gas Chromatography-Mass Spectrometry.

    PubMed

    Omar, Jone; Olivares, Maitane; Alonso, Ibone; Vallejo, Asier; Aizpurua-Olaizola, Oier; Etxebarria, Nestor

    2016-04-01

    Seven monoterpenes in 4 aromatic plants (sage, cardamom, lavender, and rosemary) were quantified in liquid extracts and directly in solid samples by means of dynamic headspace-gas chromatography-mass spectrometry (DHS-GC-MS) and multiple headspace extraction-gas chromatography-mass spectrometry (MHSE), respectively. The monoterpenes were first extracted by means of supercritical fluid extraction (SFE) and analyzed by an optimized DHS-GC-MS. The optimization of the dynamic extraction step and the desorption/cryo-focusing step were tackled independently by experimental design assays. The best working conditions were set at 30 °C for the incubation temperature, 5 min of incubation time, and 40 mL of purge volume for the dynamic extraction step of these bioactive molecules. The conditions of the desorption/cryo-trapping step from the Tenax TA trap were set as follows: the temperature was increased from 30 to 300 °C at 150 °C/min, while the cryo-trapping was maintained at -70 °C. In order to estimate the efficiency of the SFE process, the analysis of monoterpenes in the 4 aromatic plants was directly carried out by means of MHSE because it did not require any sample preparation. Good linearity (r² > 0.99) and reproducibility (relative standard deviation < 12%) were obtained for the solid and liquid quantification approaches, in the ranges of 0.5 to 200 ng and 10 to 500 ng/mL, respectively. The developed methods were applied to analyze the concentration of 7 monoterpenes in aromatic plants, obtaining concentrations in the ranges of 2 to 6000 ng/g and 0.25 to 110 μg/mg, respectively. © 2016 Institute of Food Technologists®

  10. Geometrical correction of the e-beam proximity effect for raster scan systems

    NASA Astrophysics Data System (ADS)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-06-01

    Increasing demands on pattern fidelity and CD accuracy in e-beam lithography require a correction of the e-beam proximity effect. The new needs are mainly coming from OPC at mask level and x-ray lithography. The e-beam proximity effect limits the achievable resolution and affects neighboring structures, causing under- or over-exposure depending on the local pattern densities and process settings. Methods to compensate for this unequilibrated dose distribution usually use a dose modulation or multiple passes. In general, raster scan systems are not able to apply variable doses in order to compensate for the proximity effect. For systems of this kind, a geometrical modulation of the original pattern offers a solution for compensation of line edge deviations due to the proximity effect. In this paper a new method for the fast correction of the e-beam proximity effect via geometrical pattern optimization is described. The method consists of two steps. In a first step the pattern-dependent dose distribution caused by back scattering is calculated by convolution of the pattern with the long range part of the proximity function. The restriction to the long range part results in a quadratic speed gain in computing time for the transformation. The influence of the short range part coming from forward scattering is not pattern dependent and can therefore be determined separately in a second step. The second calculation yields the dose curve at the border of a written structure. The finite gradient of this curve leads to an edge displacement depending on the amount of underground dosage at the observed position, which was previously determined in the pattern-dependent step. This unintended edge displacement is corrected by splitting the line into segments and shifting them by multiples of the writer's address grid in the opposite direction.
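    The first, pattern-dependent step can be sketched as a convolution of the written pattern with the long range part of the proximity function. The two-dimensional Gaussian kernel, its range, the grid size, and the test pattern below are assumed placeholder values for illustration, not a calibrated proximity function.

```python
import numpy as np

# Hypothetical long-range (backscattering) part of the proximity function:
# a normalized 2D Gaussian with an assumed range BETA, on an assumed grid.
GRID, BETA = 128, 12.0

y, x = np.mgrid[-GRID // 2:GRID // 2, -GRID // 2:GRID // 2]
long_range = np.exp(-(x**2 + y**2) / BETA**2)
long_range /= long_range.sum()                 # kernel integrates to 1

pattern = np.zeros((GRID, GRID))
pattern[40:88, 60:68] = 1.0                    # a single written line

# Background dose from backscattering: pattern convolved with the kernel, via FFT
backscatter = np.real(np.fft.ifft2(np.fft.fft2(pattern) *
                                   np.fft.fft2(np.fft.ifftshift(long_range))))

# Dense regions accumulate more background dose than empty regions
print(bool(backscatter[64, 64] > backscatter[5, 5]))
```

    The resulting background-dose map is what determines, per edge position, how far each line segment must be shifted on the writer's address grid in the second step.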

  11. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.

  12. Rapid construction of capsid-modified adenoviral vectors through bacteriophage lambda Red recombination.

    PubMed

    Campos, Samuel K; Barry, Michael A

    2004-11-01

    There are extensive efforts to develop cell-targeting adenoviral vectors for gene therapy wherein endogenous cell-binding ligands are ablated and exogenous ligands are introduced by genetic means. Although current approaches can genetically manipulate the capsid genes of adenoviral vectors, these approaches can be time-consuming and require multiple steps to produce a modified viral genome. We present here the use of the bacteriophage lambda Red recombination system as a valuable tool for the easy and rapid construction of capsid-modified adenoviral genomes.

  13. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  14. A natural language based search engine for ICD10 diagnosis encoding.

    PubMed

    Baud, Robert

    2004-01-01

    We have developed a multiple step process for implementing an ICD10 search engine. The complexity of the task has been shown and we recommend collecting adequate expertise before starting any implementation. Underestimation of the expert time and inadequate data resources are probable reasons for failure. We also claim that when all conditions are met in terms of resources and availability of expertise, the benefits of a responsive ICD10 search engine will be realized and the investment will be successful.

  15. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    PubMed

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. 
Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real-time analysis.
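    The frame-correlation step described above can be sketched as a greedy nearest-neighbor match between droplet centroids in consecutive frames. The centroids, the 20-pixel matching threshold, and the function name below are invented for illustration; DMV's actual matching criteria are user-defined.

```python
import math

def link_droplets(prev, curr, max_dist=20.0):
    """Greedy nearest-neighbor frame correlation: match each droplet centroid in
    the previous frame to its closest unclaimed centroid in the current frame,
    rejecting matches farther than max_dist pixels."""
    matches, claimed = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in claimed:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            claimed.add(best)
    return matches

# Two droplets drifting right by ~5 px between frames
prev = [(10.0, 10.0), (50.0, 12.0)]
curr = [(54.8, 12.3), (15.1, 9.7)]
print(link_droplets(prev, curr))   # droplet 0 -> curr index 1, droplet 1 -> curr index 0
```

    Linking centroids across frames in this way is what turns per-frame detections into the per-droplet time histories (trajectory, velocity, deformation) that the software reports.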

  16. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  17. Simulation of Synthetic Jets in Quiescent Air Using Unsteady Reynolds Averaged Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Turkel, Eli

    2006-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for the simulation of a synthetic jet created by a single diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center, using multiple measurement techniques. Computational results for this case using the one-equation Spalart-Allmaras and two-equation Menter turbulence models are presented along with the experimental data. The effects of grid refinement, preconditioning, and time-step variation are also examined in this paper.

  18. Evaluation of accuracy in implant site preparation performed in single- or multi-step drilling procedures.

    PubMed

    Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus

    2018-06-01

    Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgery process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. Accuracy of drilling holes was precisely analyzed and the influence of different levels of expertise of the handlers and additional use of drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehanded drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase the drilling accuracy using a guiding template, especially when multi-step preparations are performed. Improved accuracy without template guidance was observed when experienced operators were executing the single-step versus the multi-step technique. Single-step drilling protocols have been shown to produce more accurate results than multi-step procedures. The outcome of any protocol can be further improved by the use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.

  19. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    PubMed

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
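    As a concrete example of the family of splitting schemes under discussion, here is one widely studied member (commonly called BAOAB) applied to a harmonic well. It is shown only to illustrate what a discrete-time Langevin splitting step looks like; it is not claimed to be the particular splitting or time step rescaling the paper identifies, and all parameters are arbitrary.

```python
import math, random

random.seed(0)

def baoab_step(x, v, dt, force, mass=1.0, gamma=1.0, kT=1.0):
    """One BAOAB splitting step for Langevin dynamics: half kick (B), half
    drift (A), exact Ornstein-Uhlenbeck bath exchange (O), half drift (A),
    half kick (B)."""
    v += 0.5 * dt * force(x) / mass                        # B
    x += 0.5 * dt * v                                      # A
    c = math.exp(-gamma * dt)                              # O: exact OU update
    v = c * v + math.sqrt((1 - c * c) * kT / mass) * random.gauss(0.0, 1.0)
    x += 0.5 * dt * v                                      # A
    v += 0.5 * dt * force(x) / mass                        # B
    return x, v

# Harmonic well F(x) = -x: at equilibrium the mean-squared position is kT/k = 1
x, v, msq, n = 0.0, 0.0, 0.0, 200_000
force = lambda q: -q
for _ in range(n):
    x, v = baoab_step(x, v, 0.1, force)
    msq += x * x
print(abs(msq / n - 1.0) < 0.1)
```

    Checking a long-run equilibrium average against its analytic value, as done here, is one of the desiderata-style tests by which competing splittings are compared.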

  20. Multiplication Fact Fluency Using Doubles

    ERIC Educational Resources Information Center

    Flowers, Judith M.; Rubenstein, Rheta N.

    2010-01-01

    Not knowing multiplication facts creates a gap in a student's mathematics development and undermines confidence and disposition toward further mathematical learning. Learning multiplication facts is a first step in proportional reasoning, "the capstone of elementary arithmetic and the gateway to higher mathematics" (NRC 2001, p. 242). Proportional…

  1. A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall

    NASA Astrophysics Data System (ADS)

    Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian

    2018-02-01

    Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observations. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
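    The core idea of a multiplicative random cascade can be illustrated in a few lines: a coarse rainfall value is repeatedly split into finer cells, each multiplied by a random weight with unit mean. The one-dimensional dyadic splitting, the lognormal weights, and all parameters below are generic placeholders, not the calibrated Short-Term Ensemble Prediction System.

```python
import random

random.seed(42)

def cascade(mean_rain, levels):
    """Dyadic multiplicative random cascade: start from one coarse-scale value
    and repeatedly split each cell in two, multiplying by random weights drawn
    with mean 1 so the expected intensity is preserved across scales."""
    field = [mean_rain]
    for _ in range(levels):
        nxt = []
        for v in field:
            w = random.lognormvariate(mu=-0.125, sigma=0.5)  # E[w] = 1 here
            # pair the weight with (2 - w) to roughly conserve the cell mean
            nxt.extend([v * w, v * (2.0 - w) if w < 2.0 else 0.0])
        field = nxt
    return field

field = cascade(10.0, levels=8)            # 2**8 = 256 fine-scale cells
print(len(field), all(v >= 0 for v in field))
```

    Repeating such a simulation many times from the same coarse value yields an ensemble of equally plausible fine-scale realizations, which is how the 100-member ensembles above are generated.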

  2. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  3. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE PAGES

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si; ...

    2015-02-23

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  4. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    2015-01-01

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  5. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging.

    PubMed

    Deng, Junjing; Nashed, Youssef S G; Chen, Si; Phillips, Nicholas W; Peterka, Tom; Ross, Rob; Vogt, Stefan; Jacobsen, Chris; Vine, David J

    2015-03-09

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  6. Load and Pi control flux through the branched kinetic cycle of myosin V.

    PubMed

    Kad, Neil M; Trybus, Kathleen M; Warshaw, David M

    2008-06-20

    Myosin V is a processive actin-based motor protein that takes multiple 36-nm steps to deliver intracellular cargo to its destination. In the laser trap, applied load slows myosin V heavy meromyosin stepping and increases the probability of backsteps. In the presence of 40 mM phosphate (P(i)), both forward and backward steps become less load-dependent. From these data, we infer that P(i) release commits myosin V to undergo a highly load-dependent transition from a state in which ADP is bound to both heads and its lead head trapped in a pre-powerstroke conformation. Increasing the residence time in this state by applying load increases the probability of backstepping or detachment. The kinetics of detachment indicate that myosin V can detach from actin at two distinct points in the cycle, one of which is turned off by the presence of P(i). We propose a branched kinetic model to explain these data. Our model includes P(i) release prior to the most load-dependent step in the cycle, implying that P(i) release and load both act as checkpoints that control the flux through two parallel pathways.
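    The load dependence of forward versus backward stepping can be caricatured with a Bell-type two-branch kinetic sketch. All rates, the characteristic distance, and the thermal energy below are hypothetical round numbers chosen for illustration, not fitted values from the laser-trap data or the paper's branched model.

```python
import math, random

random.seed(7)

# Illustrative branch point: from the load-dependent state the motor either
# steps forward or backsteps; load F tilts the two rates (Bell model).
def step_outcome(load_pN, k_fwd0=100.0, k_back0=2.0, d_nm=3.0, kT=4.1):
    """Pick the branch taken at one step, with assumed zero-load rates (1/s),
    an assumed characteristic distance (nm), and kT in pN*nm."""
    k_fwd = k_fwd0 * math.exp(-load_pN * d_nm / kT)    # forward slowed by load
    k_back = k_back0 * math.exp(+load_pN * d_nm / kT)  # backstep favored by load
    return "forward" if random.random() < k_fwd / (k_fwd + k_back) else "back"

def backstep_fraction(load_pN, n=20_000):
    """Monte Carlo estimate of the backstep probability at a given load."""
    return sum(step_outcome(load_pN) == "back" for _ in range(n)) / n

print(backstep_fraction(0.0) < backstep_fraction(2.0))  # load raises backstepping
```

    In such a scheme, anything that shortens the residence time in the load-sensitive state (as P(i) is inferred to do) weakens the load dependence of both branches.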

  7. Management of primary and metastasized melanoma in Germany in the time period 1976-2005: an analysis of the Central Malignant Melanoma Registry of the German Dermatological Society.

    PubMed

    Schwager, Silke S; Leiter, Ulrike; Buettner, Petra G; Voit, Christiane; Marsch, Wolfgang; Gutzmer, Ralf; Näher, Helmut; Gollnick, Harald; Bröcker, Eva Bettina; Garbe, Claus

    2008-04-01

    This study analysed changes in excision margins in correlation with tumour thickness as recorded over the last three decades in Germany. It also evaluated surgical management in different geographical regions and treatment options for metastasized melanoma. A total of 42 625 patients with invasive primary cutaneous melanoma, recorded by the German Central Malignant Melanoma Registry between 1976 and 2005, were included. Multiple linear regression analysis was used to investigate time trends of excision margins adjusted for tumour thickness. Excision margins of 5.0 cm were widely used in the late 1970s but have since been replaced by smaller margins that depend on tumour thickness. For primary melanoma, one-step surgery dominated until 1985 and was largely replaced by two-step excisions from the early 1990s. In eastern Germany, one-step management remained common until the late 1990s. During the last three decades loco-regional metastases were predominantly treated by surgery (up to 80%), whereas systemic therapy decreased. The primary treatment of distant metastases has consistently been systemic chemotherapy. This descriptive retrospective study revealed a significant decrease in excision margins to a maximum of 2.00 cm. A significant trend towards two-step excisions in primary cutaneous melanoma was observed throughout Germany. Management of metastasized melanoma showed a tendency towards surgical procedures in limited disease and an ongoing trend to systemic treatment in advanced disease.

  8. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid VSTWPP method based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into a power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. The proposed method supports two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are compared against an auto-regressive moving average (ARMA) model in terms of both predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF) of the errors, the mean absolute percent error (MAPE) and the mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving WPP problems.
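
    The MAPE and MSE metrics named in the abstract are standard. As a minimal sketch (our own illustration, not the authors' code), they can be computed over a window of actual and predicted wind power values as follows:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percent error (%), skipping zero actuals."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mask = actual != 0  # avoid division by zero
    return 100.0 * np.mean(np.abs((actual[mask] - predicted[mask]) / actual[mask]))

def mse(actual, predicted):
    """Mean square error."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((actual - predicted) ** 2))
```

    Both metrics would be evaluated separately for the SSP, MSP and ARMA forecasts to reproduce the paper's comparison.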

  9. The Relationship Between Non-Symbolic Multiplication and Division in Childhood

    PubMed Central

    McCrink, Koleen; Shafto, Patrick; Barth, Hilary

    2016-01-01

    Children without formal education in addition and subtraction are able to perform multi-step operations over an approximate number of objects. Further, their performance improves when solving approximate (but not exact) addition and subtraction problems that allow for inversion as a shortcut (e.g., a + b − b = a). The current study examines children’s ability to perform multi-step operations, and the potential for an inversion benefit, for the operations of approximate, non-symbolic multiplication and division. Children were trained to compute a multiplication and division scaling factor (*2 or /2, *4 or /4), and then tested on problems that combined two of these factors in a way that either allowed for an inversion shortcut (e.g., 8 * 4 / 4) or did not (e.g., 8 * 4 / 2). Children’s performance was significantly better than chance for all scaling factors during training, and they successfully computed the outcomes of the multi-step testing problems. They did not exhibit a performance benefit for problems with the a * b / b structure, suggesting they did not draw upon inversion reasoning as a logical shortcut to help them solve the multi-step test problems. PMID:26880261

  10. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  11. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity and leads to a step-by-step procedure. © 1988.

  12. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
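
    The POD step described above is conventionally computed from a singular value decomposition of a snapshot matrix. The toy example below (our own illustration with a synthetic travelling front, not the authors' PSA model) extracts a basis capturing 99.9% of the snapshot "energy" (squared singular values) and reconstructs the snapshots in the reduced space:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix (n_space x n_time).

    Returns the leading left singular vectors capturing the requested
    fraction of snapshot energy (sum of squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1  # number of modes to keep
    return U[:, :r]

# Toy snapshot matrix: a travelling tanh front sampled at 5 times on 50 points
x = np.linspace(0.0, 1.0, 50)
snaps = np.column_stack([np.tanh(20 * (x - 0.2 * t)) for t in range(5)])

Phi = pod_basis(snaps)          # spatial basis (n_space x r)
a = Phi.T @ snaps               # reduced coordinates (r x n_time)
recon = Phi @ a                 # reconstruction in full space
```

    In the paper's setting, the reduced coordinates `a` would become the states of the low-order DAE system, with a separate basis `Phi` per operating step.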

  13. Key Steps in the Special Review Process

    EPA Pesticide Factsheets

    EPA uses this process when it has reason to believe that the use of a pesticide may result in unreasonable adverse effects on people or the environment. Steps include comprehensive risk and benefit analyses and multiple Position Documents.

  14. Scanning sequences after Gibbs sampling to find multiple occurrences of functional elements

    PubMed Central

    Tharakaraman, Kannan; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L; Landsman, David; Spouge, John L

    2006-01-01

    Background Many DNA regulatory elements occur as multiple instances within a target promoter. Gibbs sampling programs for finding DNA regulatory elements de novo can be prohibitively slow in locating all instances of such an element in a sequence set. Results We describe an improvement to the A-GLAM computer program, which predicts regulatory elements within DNA sequences with Gibbs sampling. The improvement adds an optional "scanning step" after Gibbs sampling. Gibbs sampling produces a position specific scoring matrix (PSSM). The new scanning step resembles an iterative PSI-BLAST search based on the PSSM. First, it assigns an "individual score" to each subsequence of appropriate length within the input sequences using the initial PSSM. Second, it computes an E-value from each individual score, to assess the agreement between the corresponding subsequence and the PSSM. Third, it permits subsequences with E-values falling below a threshold to contribute to the underlying PSSM, which is then updated using the Bayesian calculus. A-GLAM iterates its scanning step to convergence, at which point no new subsequences contribute to the PSSM. After convergence, A-GLAM reports predicted regulatory elements within each sequence in order of increasing E-values, so users have a statistical evaluation of the predicted elements in a convenient presentation. Thus, although the Gibbs sampling step in A-GLAM finds at most one regulatory element per input sequence, the scanning step can now rapidly locate further instances of the element in each sequence. Conclusion Datasets from experiments determining the binding sites of transcription factors were used to evaluate the improvement to A-GLAM. Typically, the datasets included several sequences containing multiple instances of a regulatory motif. The improvements to A-GLAM permitted it to predict the multiple instances. PMID:16961919
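
    The core operation of the scanning step, scoring every subsequence of motif length against the PSSM and keeping high-scoring hits, can be sketched as below. The PSSM values and score threshold here are made up for illustration; A-GLAM itself derives the PSSM from the Gibbs-sampling alignment and assesses hits with E-values rather than a raw score cutoff.

```python
# Hypothetical log-odds PSSM for a 4-long motif (one dict per position,
# keyed by base). A real PSSM would come from the Gibbs sampling step.
PSSM = [
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": -0.5},
    {"A": -1.0, "C": 1.1, "G": -0.8, "T": -1.0},
    {"A": -0.9, "C": -1.0, "G": 1.3, "T": -1.0},
    {"A": -0.5, "C": -1.0, "G": -1.0, "T": 1.2},
]

def scan(seq, pssm, threshold=2.0):
    """Score every window of len(pssm) in seq against the PSSM and
    return (offset, window, score) for windows at or above threshold."""
    w = len(pssm)
    hits = []
    for i in range(len(seq) - w + 1):
        window = seq[i : i + w]
        score = sum(col[base] for col, base in zip(pssm, window))
        if score >= threshold:
            hits.append((i, window, score))
    return hits

hits = scan("TTACGTGGACGT", PSSM)  # finds both ACGT instances
```

    Iterating this scan, letting new hits update the PSSM, and stopping when no new subsequences are admitted mirrors the convergence loop described in the abstract.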

  15. Simplified filtered Smith predictor for MIMO processes with multiple time delays.

    PubMed

    Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E

    2016-11-01

    This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved with step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies the tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies are used to illustrate the advantages of the proposed strategy compared with the standard approach, which is based on explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. GRAPEVINE: Grids about anything by Poisson's equation in a visually interactive networking environment

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.; Mccann, Karen

    1992-01-01

    A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.

  17. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
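
    The sum-to-one combination described above is an equality-constrained least-squares problem with a closed-form solution. A minimal sketch (our own illustration, not the authors' implementation) solves its KKT system directly:

```python
import numpy as np

def combine_weights(preds, y):
    """Minimum-MSE combination weights under a sum-to-one constraint.

    preds: (n, M) matrix of sub-model predictions over the recent data
    window; y: (n,) observed outputs. Solves the KKT system of
        min ||y - preds @ w||^2  subject to  sum(w) == 1.
    """
    n, M = preds.shape
    A = preds.T @ preds
    b = preds.T @ y
    ones = np.ones((M, 1))
    # KKT system: [A 1; 1^T 0] [w; lam] = [b; 1]
    K = np.block([[A, ones], [ones.T, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.append(b, 1.0))
    return sol[:M]

# Toy check: y is an exact sum-to-one mixture of two sub-model outputs
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 2))
y = 0.3 * P[:, 0] + 0.7 * P[:, 1]
w = combine_weights(P, y)  # recovers weights close to [0.3, 0.7]
```

    Recomputing these weights at each time step over a sliding window gives the online behaviour the abstract describes.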

  18. Sub-daily Statistical Downscaling of Meteorological Variables Using Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Brooks, Bjørn-Gustaf J.; Thornton, Peter E

    2012-01-01

    A new open-source neural network temporal downscaling model is described and tested using CRU-NCEP reanalysis and CCSM3 climate model output. We downscaled multiple meteorological variables in tandem from monthly to sub-daily time steps while also retaining consistent correlations between variables. We found that our feed-forward, error-backpropagation approach produced synthetic 6-hourly meteorology with biases no greater than 0.6% across all variables and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected (original) monthly means exceeded 0.99 for all variables, which indicates that this approach would work well for generating atmospheric forcing data consistent with mass- and energy-conserved GCM output. Our neural network approach performed well for variables that had correlations to other variables of about 0.3 and better, and its skill was increased by downscaling multiple correlated variables together. Poor replication of precipitation intensity, however, required further post-processing in order to obtain the expected probability distribution. The concurrence of precipitation events with expected changes in subordinate variables (e.g., less incident shortwave radiation during precipitation events) was nearly as consistent in the downscaled data as in the training data, with probabilities that differed by no more than 6%. Our downscaling approach requires training data at the target time step and relies on a weak assumption that climate variability in the extrapolated data is similar to variability in the training data.

  19. Reliability of spatial-temporal gait parameters during dual-task interference in people with multiple sclerosis. A cross-sectional study.

    PubMed

    Monticone, Marco; Ambrosini, Emilia; Fiorentini, Roberta; Rocca, Barbara; Liquori, Valentina; Pedrocchi, Alessandra; Ferrante, Simona

    2014-09-01

    To evaluate the reliability and minimum detectable change (MDC) of spatial-temporal gait parameters in subjects with multiple sclerosis (MS) during dual tasking. This cross-sectional study involved 25 healthy subjects (mean age 49.9 ± 15.8 years) and 25 people with MS (mean age 49.2 ± 11.5 years). Gait under motor-cognitive and motor-motor dual tasking conditions was evaluated in two sessions separated by a one-day interval using the GAITRite Walkway System. Test-retest reliability was assessed using intraclass correlation coefficients (ICCs), standard errors of measurement (SEM), and coefficients of variation (CV). MDC scores were computed for the velocity, cadence, step and stride length, step and stride time, double support time, the % of gait cycle for single support and stance phase, and base of support. All of the gait parameters reported good to excellent ICCs under both conditions, with healthy subject values of >0.69 and MS subject values of >0.84. SEM values were always below 18% for both groups of subjects. The gait patterns of the people with MS were slightly more variable than those of the normal controls (CVs: 5.88-41.53% vs 2.84-30.48%). The assessment of quantitative gait parameters in healthy subjects and people with MS is highly reliable under both of the investigated dual tasking conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
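
    The SEM and MDC reported above are related to the ICC by standard test-retest formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal sketch with hypothetical numbers (not values taken from the study):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimum detectable change at 95% confidence (two test occasions)."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical illustration: gait velocity SD = 20 cm/s, test-retest ICC = 0.91
s = sem(20.0, 0.91)
change_needed = mdc95(s)  # smallest change exceeding measurement noise
```

    A measured change smaller than `change_needed` cannot be distinguished from test-retest measurement error at the 95% level.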

  20. 326 Lung Age/Chronological Age Index as Indicator of Clinical Improvement or Severity in Asthma Patients

    PubMed Central

    Castrejon-Vázquez, Isabel; Vargas, Maria Eugenia; Sabido, Raúl Cicero; Tapía, Jorge Galicia

    2012-01-01

    Background Spirometry is a very useful clinical test to evaluate pulmonary function in asthma. However, pulmonary function can be affected by sex, time of clinical evolution, lung age (LA) and chronological age (CA). The aim of this study was to evaluate the LA/CA index as an indicator of clinical improvement or severity in asthma patients. Methods The tenets of the Declaration of Helsinki were followed, and all patients gave their informed consent to participate in this study. Asthma severity was evaluated according to the GINA classification. Spirometry was performed at the beginning of the study, at 46 days, 96 days, 192 days and after 8 months. Statistical analysis was performed using the t test, two-way ANOVA, correlation and multiple regression models, as well as ROC curves; P < 0.05 was considered significant. Results 70 asthma patients were included (22 male and 48 female); mean CA was 35 years, mean LA was 48 years (LA/CA index = 1.4), and mean time of clinical evolution was 13 years. An LA/CA index near 1 (range 0.5 to 0.9) was observed in asymptomatic patients. An LA/CA index over 1 was related to airway inflammation, and an LA/CA index of more than 2 correlated with GINA step 3. Interestingly, when we analyzed CA and LA, we observed in the female group a difference of more than 10 years between CA and LA (GINA steps 2 and 3), while in the male group this occurred across GINA steps 1, 2 and 3. An LA/CA index ≤ 1 was considered normal. Conclusions The LA/CA index is a good clinical indicator of improvement or severity in asthma patients, with excellent correlation with pulmonary function and age.

  1. Evaluation of an early step-down strategy from intravenous anidulafungin to oral azole therapy for the treatment of candidemia and other forms of invasive candidiasis: results from an open-label trial.

    PubMed

    Vazquez, Jose; Reboli, Annette C; Pappas, Peter G; Patterson, Thomas F; Reinhardt, John; Chin-Hong, Peter; Tobin, Ellis; Kett, Daniel H; Biswas, Pinaki; Swanson, Robert

    2014-02-21

    Hospitalized patients are at increased risk for candidemia and invasive candidiasis (C/IC). Improved therapeutic regimens with enhanced clinical and pharmacoeconomic outcomes utilizing existing antifungal agents are still needed. An open-label, non-comparative study evaluated an intravenous (i.v.) to oral step-down strategy. Patients with C/IC were treated with i.v. anidulafungin and after 5 days of i.v. therapy had the option to step-down to oral azole therapy (fluconazole or voriconazole) if they met prespecified criteria. The primary endpoint was the global response rate (clinical + microbiological) at end of treatment (EOT) in the modified intent-to-treat (MITT) population (at least one dose of anidulafungin plus positive Candida within 96 hours of study entry). Secondary endpoints included efficacy at other time points and in predefined patient subpopulations. Patients who stepped down early (≤ 7 days' anidulafungin) were identified as the "early switch" subpopulation. In total, 282 patients were enrolled, of whom 250 were included in the MITT population. The MITT global response rate at EOT was 83.7% (95% confidence interval, 78.7-88.8). Global response rates at all time points were generally similar in the early switch subpopulation compared with the MITT population. Global response rates were also similar across multiple Candida species, including C. albicans, C. glabrata, and C. parapsilosis. The most common treatment-related adverse events were nausea and vomiting (four patients each). A short course of i.v. anidulafungin, followed by early step-down to oral azole therapy, is an effective and well-tolerated approach for the treatment of C/IC. ClinicalTrials.gov: NCT00496197.

  2. A microfluidic device for preparing next generation DNA sequencing libraries and for automating other laboratory protocols that require one or more column chromatography steps.

    PubMed

    Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C; Quake, Stephen R; Burkholder, William F

    2013-01-01

    Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation.

  3. Evaluation of an early step-down strategy from intravenous anidulafungin to oral azole therapy for the treatment of candidemia and other forms of invasive candidiasis: results from an open-label trial

    PubMed Central

    2014-01-01

    Background Hospitalized patients are at increased risk for candidemia and invasive candidiasis (C/IC). Improved therapeutic regimens with enhanced clinical and pharmacoeconomic outcomes utilizing existing antifungal agents are still needed. Methods An open-label, non-comparative study evaluated an intravenous (IV) to oral step-down strategy. Patients with C/IC were treated with IV anidulafungin and after 5 days of IV therapy had the option to step-down to oral azole therapy (fluconazole or voriconazole) if they met prespecified criteria. The primary endpoint was the global response rate (clinical + microbiological) at end of treatment (EOT) in the modified intent-to-treat (MITT) population (at least one dose of anidulafungin plus positive Candida within 96 hours of study entry). Secondary endpoints included efficacy at other time points and in predefined patient subpopulations. Patients who stepped down early (≤ 7 days’ anidulafungin) were identified as the "early switch" subpopulation. Results In total, 282 patients were enrolled, of whom 250 were included in the MITT population. The MITT global response rate at EOT was 83.7% (95% confidence interval, 78.7–88.8). Global response rates at all time points were generally similar in the early switch subpopulation compared with the MITT population. Global response rates were also similar across multiple Candida species, including C. albicans, C. glabrata, and C. parapsilosis. The most common treatment-related adverse events were nausea and vomiting (four patients each). Conclusions A short course of IV anidulafungin, followed by early step-down to oral azole therapy, is an effective and well-tolerated approach for the treatment of C/IC. Trial registration ClinicalTrials.gov: NCT00496197 PMID:24559321

  4. A Microwell-Printing Fabrication Strategy for the On-Chip Templated Biosynthesis of Protein Microarrays for Surface Plasmon Resonance Imaging

    PubMed Central

    Manuel, Gerald; Lupták, Andrej; Corn, Robert M.

    2017-01-01

    A two-step templated, ribosomal biosynthesis/printing method for the fabrication of protein microarrays for surface plasmon resonance imaging (SPRI) measurements is demonstrated. In the first step, a sixteen-component microarray of proteins is created in microwells by cell-free, on-chip protein synthesis; each microwell contains both an in vitro transcription and translation (IVTT) solution and 350 femtomoles of a specific DNA template sequence that together are used to create approximately 40 picomoles of a specific hexahistidine-tagged protein. In the second step, the protein microwell array is used to contact print one or more protein microarrays onto nitrilotriacetic acid (NTA)-functionalized gold thin film SPRI chips for real-time SPRI surface bioaffinity adsorption measurements. Even though each microwell array element only contains approximately 40 picomoles of protein, the concentration is sufficiently high for the efficient bioaffinity adsorption and capture of the approximately 100 femtomoles of hexahistidine-tagged protein required to create each SPRI microarray element. As a first example, the protein biosynthesis process is verified with fluorescence imaging measurements of a microwell array containing His-tagged green fluorescent protein (GFP), yellow fluorescent protein (YFP) and mCherry (RFP), and then the fidelity of SPRI chips printed from this protein microwell array is ascertained by measuring the real-time adsorption of various antibodies specific to these three structurally related proteins. This greatly simplified two-step synthesis/printing fabrication methodology eliminates most of the handling, purification and processing steps normally required in the synthesis of multiple protein probes, and enables the rapid fabrication of SPRI protein microarrays from DNA templates for the study of protein-protein bioaffinity interactions. PMID:28706572

  5. A Microfluidic Device for Preparing Next Generation DNA Sequencing Libraries and for Automating Other Laboratory Protocols That Require One or More Column Chromatography Steps

    PubMed Central

    Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C.; Quake, Stephen R.; Burkholder, William F.

    2013-01-01

    Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation. PMID:23894273

  6. A small step in VLC systems - a big step in Li-Fi implementation

    NASA Astrophysics Data System (ADS)

    Rîurean, S. M.; Nagy, A. A.; Leba, M.; Ionica, A. C.

    2018-01-01

    Light is part of our everyday environment, so using it would be among the handiest and cheapest ways to achieve wireless communication. Light has always been used to send messages in different ways, and now, thanks to major technological improvements, bits carried by light at high speed over multiple paths allow humans to communicate. Using the lighting system both for illumination and communication has lately become one of the main research topics worldwide, with several implementations offering real benefits. This paper presents a viable VLC system that proves suitable for sending information by light not just a few millimetres but meters away. The system has multiple potential applications in areas where other communication systems are bottlenecked, too expensive, unavailable or even forbidden. Since a fully developed Li-Fi system requires bidirectional, multiple-access communication, there are still challenges on the way to a functional Li-Fi wireless network. Although important steps have been made, Li-Fi is still at the experimental stage.

  7. Multiple cis-acting signals, some weak by necessity, collectively direct robust transport of oskar mRNA to the oocyte.

    PubMed

    Ryu, Young Hee; Kenny, Andrew; Gim, Youme; Snee, Mark; Macdonald, Paul M

    2017-09-15

    Localization of mRNAs can involve multiple steps, each with its own cis-acting localization signals and transport factors. How is the transition between different steps orchestrated? We show that the initial step in localization of Drosophila oskar mRNA - transport from nurse cells to the oocyte - relies on multiple cis-acting signals. Some of these are binding sites for the translational control factor Bruno, suggesting that Bruno plays an additional role in mRNA transport. Although transport of oskar mRNA is essential and robust, the localization activity of individual transport signals is weak. Notably, increasing the strength of individual transport signals, or adding a strong transport signal, disrupts the later stages of oskar mRNA localization. We propose that the oskar transport signals are weak by necessity; their weakness facilitates transfer of the oskar mRNA from the oocyte transport machinery to the machinery for posterior localization. © 2017. Published by The Company of Biologists Ltd.

  8. Improvement of surgical margin with a coupled saline-radio-frequency device for multiple colorectal liver metastases.

    PubMed

    Ogata, Satoshi; Kianmanesh, Reza; Varma, Deepak; Belghiti, Jacques

    2005-01-01

    Complete resection of colorectal liver metastases (LM) has been the only curative treatment. However, when LM are multiple and bilobar, only a few patients are candidates for curative surgery. We report on a 53-year-old woman with synchronous multiple and bilobar LM from sigmoidal cancer who became resectable after a multimodal strategy including preoperative systemic chemotherapy and two-step surgery. The spectacular decrease in tumor size after systemic chemotherapy led us to perform two-step surgery, including right portal-vein ligation and left liver metastasectomies, with a coupled saline-radiofrequency device, in order to improve the surgical margin. An extended right hepatectomy was performed later to remove the remaining right liver lesions. The patient was discharged after 28 days without major complication and was recurrence-free 14 months later. We conclude that improving the surgical margin with a coupled saline-radiofrequency device is feasible and effective, avoiding small remnant liver even after multiple tumorectomies. The multimodal strategy, including preoperative chemotherapy, two-step surgery, and tumorectomies, using a coupled saline-radiofrequency device, could increase the number of patients with diffuse bilobar liver metastases who can benefit from liver resection.

  9. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    PubMed

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to overcome limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.
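
    The multiple time-step integration mentioned above can be illustrated with a minimal reversible RESPA-style scheme (a generic Python sketch, not GENESIS code; the split into a stiff "fast" and a soft "slow" harmonic force is an assumption chosen for illustration):

```python
import numpy as np

def fast_force(x):
    return -100.0 * x      # stiff harmonic term (fast motion)

def slow_force(x):
    return -1.0 * x        # soft harmonic term (slow motion)

def respa_step(x, v, dt_outer, n_inner, m=1.0):
    """One reversible RESPA step: the slow force is applied as half-kicks
    around n_inner velocity-Verlet sub-steps that integrate the fast force."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / m          # outer half-kick (slow)
    for _ in range(n_inner):                         # inner loop: fast force only
        v += 0.5 * dt_inner * fast_force(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / m
    v += 0.5 * dt_outer * slow_force(x) / m          # outer half-kick (slow)
    return x, v

# Integrate and monitor total energy (potential = 50*x^2 + 0.5*x^2)
x, v = 1.0, 0.0
energies = []
for _ in range(2000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)
    energies.append(0.5 * v**2 + 50.0 * x**2 + 0.5 * x**2)
print(max(energies) - min(energies))  # small drift indicates stability
```

The saving comes from evaluating the (in real applications expensive) slow force only once per n_inner fast sub-steps; the resonance-limited maximum of dt_outer is what the isokinetic methods in the head record are designed to extend.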

  10. Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.

    PubMed

    Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D

    2014-05-01

    We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models which quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2 during baseline and sustained euoxic hypercapnia were obtained in 8 female subjects. The multiple-input, single-output models used were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e. a decrease in the ABP phase lead, and suggest that the incorporation of PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation than those obtained from single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
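
    The recursive estimation scheme described above can be sketched with a standard recursive-least-squares update using a single forgetting factor (the paper uses multiple forgetting factors and a Laguerre basis; the two-parameter model and synthetic data below are hypothetical, for illustration only):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam:
    samples older than roughly 1/(1-lam) steps are down-weighted,
    so the estimate can track time-varying dynamics."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.T @ theta).item()                 # one-step prediction error
    theta = theta + k * err
    P = (P - k @ phi.T @ P) / lam                    # covariance update
    return theta, P

rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
true_w = np.array([1.0, -0.5])
for t in range(400):
    if t == 200:                                     # simulated parameter change
        true_w = np.array([0.2, 0.8])
    phi = rng.standard_normal(2)
    y = phi @ true_w + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta.ravel())                                 # tracks the new parameters
```

With lam = 1 this reduces to ordinary recursive least squares (no forgetting), which cannot follow the mid-run parameter jump; forgetting is what makes the estimates time-varying.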

  11. Langevin dynamics for vector variables driven by multiplicative white noise: A functional formalism

    NASA Astrophysics Data System (ADS)

    Moreno, Miguel Vera; Arenas, Zochil González; Barci, Daniel G.

    2015-04-01

    We discuss general multidimensional stochastic processes driven by a system of Langevin equations with multiplicative white noise. In particular, we address the problem of how the time reversal of diffusion processes is affected by the variety of conventions available to deal with stochastic integrals. We present a functional formalism to build up the generating functional of correlation functions without any type of discretization of the Langevin equations at any intermediate step. The generating functional is characterized by a functional integration over two sets of commuting variables, as well as Grassmann variables. In this representation, the time reversal transformation becomes a linear transformation in the extended variables, simplifying in this way the complexity introduced by the mixture of prescriptions and the associated calculus rules. The stochastic calculus is codified in our formalism in the structure of the Grassmann algebra. We study some examples such as higher-order derivative Langevin equations and the functional representation of the micromagnetic stochastic Landau-Lifshitz-Gilbert equation.
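
    The prescription dependence discussed above can be demonstrated numerically: for multiplicative noise, an Itô (Euler-Maruyama) and a Stratonovich (stochastic Heun) discretization of the same Langevin equation yield different statistics. A minimal sketch (a generic one-dimensional illustration, not the paper's functional formalism):

```python
import numpy as np

rng = np.random.default_rng(1)

def drift(x):
    return 0.0 * x

def diffusion(x):
    return 0.5 * x          # multiplicative noise b(x) = sigma * x, sigma = 0.5

def simulate(n_paths=20000, n_steps=200, T=1.0, scheme="ito"):
    dt = T / n_steps
    x = np.ones(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        if scheme == "ito":                       # Euler-Maruyama: b at the left point
            x = x + drift(x) * dt + diffusion(x) * dw
        else:                                     # Heun predictor-corrector: Stratonovich
            xp = x + drift(x) * dt + diffusion(x) * dw
            x = x + drift(x) * dt + 0.5 * (diffusion(x) + diffusion(xp)) * dw
    return x

ito = simulate(scheme="ito").mean()
strat = simulate(scheme="strat").mean()
print(ito, strat)
```

For b(x) = sigma*x with zero drift, the Itô mean stays at 1 while the Stratonovich mean grows like exp(sigma^2 T / 2); this is exactly the discretization ambiguity that a consistent stochastic calculus must keep track of.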

  12. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
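
    The cross-view correspondence step above boils down to mapping points through a planar homography. A minimal sketch (the 3x3 matrix H below is a hypothetical transform chosen for illustration; in practice H would be estimated from matched points between the two views):

```python
import numpy as np

def map_points(H, pts):
    """Map 2D points between two overlapping camera views with a
    3x3 homography H (a projective transform of the common plane)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# A hypothetical homography: scale by 2 plus a translation (illustration only)
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])
feet = np.array([[100.0, 200.0], [50.0, 80.0]])        # object footprints in view 1
print(map_points(H, feet))                             # positions in view 2
```

A real homography has a nontrivial bottom row, so the perspective division by the third homogeneous coordinate is essential; it is a no-op only for affine transforms like this illustrative H.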

  13. BrachyView: multiple seed position reconstruction and comparison with CT post-implant dosimetry

    NASA Astrophysics Data System (ADS)

    Alnaghy, S.; Loo, K. J.; Cutajar, D. L.; Jalayer, M.; Tenconi, C.; Favoino, M.; Rietti, R.; Tartaglia, M.; Carriero, F.; Safavi-Naeini, M.; Bucci, J.; Jakubek, J.; Pospisil, S.; Zaider, M.; Lerch, M. L. F.; Rosenfeld, A. B.; Petasecca, M.

    2016-05-01

    BrachyView is a novel in-body imaging system utilising high-resolution pixelated silicon detectors (Timepix) and a pinhole collimator for brachytherapy source localisation. Recent studies have investigated various options for real-time intraoperative dynamic dose treatment planning to increase the quality of implants. In a previous proof-of-concept study, the justification of the pinhole concept was shown, allowing for the next step whereby multiple active seeds are implanted into a PMMA phantom to simulate a more realistic clinical scenario. In this study, 20 seeds were implanted and imaged using a lead pinhole of 400 μm diameter. BrachyView was able to resolve the seed positions within 1-2 mm of expected positions, which was verified by co-registering with a full clinical post-implant CT scan.

  14. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.

  15. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
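
    A core ingredient of the metric-construction and adaptation-mechanics steps above is measuring edge lengths in a prescribed anisotropic metric. A minimal sketch (generic, with an illustrative diagonal metric; real metric fields vary in space and are non-diagonal):

```python
import numpy as np

def metric_edge_length(p, q, M):
    """Length of edge pq measured in a symmetric positive-definite
    metric tensor M: L_M = sqrt((q - p)^T M (q - p)).
    Adaptation drives every edge toward unit length in the metric."""
    e = q - p
    return float(np.sqrt(e @ M @ e))

# Anisotropic metric requesting spacing 0.1 in x and 1.0 in y:
# M = diag(1/h_x^2, 1/h_y^2) for target spacings h_x, h_y
M = np.diag([1.0 / 0.1**2, 1.0 / 1.0**2])
p, q = np.array([0.0, 0.0]), np.array([0.1, 0.0])
print(metric_edge_length(p, q, M))   # a unit edge in the metric
```

An adaptation-mechanics implementation then refines edges with L_M well above 1 and collapses edges with L_M well below 1, which is one way the independent implementations compared in the paper can be evaluated against analytic metric fields.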

  16. Changes in dual-task performance after 5 months of karate and fitness training for older adults to enhance fall prevention.

    PubMed

    Pliske, Gerald; Emmermacher, Peter; Weinbeer, Veronika; Witte, Kerstin

    2016-12-01

    Demographic changes resulting in an aging population are major factors for an increase in fall-related injuries. Especially in situations where dual tasks, such as walking whilst talking, have to be performed simultaneously, the risk of a fall-related injury increases. It is well known that some types of martial art (e.g. Tai Chi) can reduce the risk of a fall. It is unknown if the same is true for karate. In this randomized, controlled study 68 people with a mean age of 69 years underwent 5-month karate training, 5-month fitness training or were part of a control group. Before and after the time of intervention, a gait analysis with normal walk, a cognitive dual task and a motor dual task was performed. The gait parameters step frequency, walking speed, single-step time and single-step length were investigated. All groups improved their gait parameters after the 5-month period, even the control group. A sports-based intervention seems to mainly affect the temporal gait parameters positively. This effect was especially demonstrated for normal walk and the cognitive dual task. An improvement of human walking seems to be possible through karate and fitness training, even under dual-task conditions. A prolonged intervention time with multiple repetitions of the gait analysis could give better evidence as to whether karate is a useful tool to enhance fall prevention.

  17. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
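
    As a simplified illustration of the r-power concept, the probability of rejecting at least r of m false null hypotheses reduces to a binomial tail when rejections are independent with a common marginal power p (an assumption the paper's general formulas do not require, since they handle correlated test statistics and step-wise procedures):

```python
from math import comb

def r_power_independent(m, r, p):
    """Probability of rejecting at least r of m false null hypotheses
    when each rejection occurs independently with marginal power p
    (binomial tail P(X >= r) with X ~ Binomial(m, p))."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(r, m + 1))

# Power to reject at least 2 of 4 endpoints, each tested at 80% marginal power
print(round(r_power_independent(4, 2, 0.8), 4))
```

Sample size determination then amounts to finding the smallest n for which this r-power (with p depending on n through each test's power function) exceeds the target, which is what the rPowerSampleSize formulas do in the general, correlated case.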

  18. Assessing the Effectiveness of First Step to Success: Are Short-Term Results the First Step to Long-Term Behavioral Improvements?

    ERIC Educational Resources Information Center

    Sumi, W. Carl; Woodbridge, Michelle W.; Javitz, Harold S.; Thornton, S. Patrick; Wagner, Mary; Rouspil, Kristen; Yu, Jennifer W.; Seeley, John R.; Walker, Hill M.; Golly, Annemieke; Small, Jason W.; Feil, Edward G.; Severson, Herbert H.

    2013-01-01

    This article reports on the effectiveness of First Step to Success, a secondary-level intervention appropriate for students in early elementary school who experience moderate to severe behavior problems and are at risk for academic failure. The authors demonstrate the intervention's short-term effects on multiple behavioral and academic outcomes…

  19. Multiple magnetization steps and plateaus across the antiferromagnetic to ferromagnetic transition in La1-xCexFe12B6: Time delay of the metamagnetic transitions

    NASA Astrophysics Data System (ADS)

    Diop, L. V. B.; Isnard, O.

    2018-01-01

    The effects of cerium substitution on the structural and magnetic properties of the La1-xCexFe12B6 (0 ≤ x ≤ 0.175) series of compounds have been studied. All of the compounds exhibit an antiferromagnetic ground state below the Néel temperature TN ≈ 36 K. Both the antiferromagnetic and paramagnetic states can be transformed into the ferromagnetic state, irreversibly and reversibly, depending on the magnitude of the applied magnetic field, the temperature, and the direction of their changes. Of particular interest is the low-temperature magnetization process. This process is discontinuous and involves unexpected, huge metamagnetic transitions consisting of a succession of sharp magnetization steps separated by plateaus, giving rise to an unusual avalanche-like behavior. At constant temperature and magnetic field, the evolution with time of the magnetization displays a spectacular spontaneous jump after a long incubation time. The La1-xCexFe12B6 compounds exhibit a unique combination of exceptional features: large thermal hysteresis, giant magnetization jumps, and remarkably huge magnetic hysteresis for the field-induced first-order metamagnetic transition.

  20. Rearrangement of competing U2 RNA helices within the spliceosome promotes multiple steps in splicing

    PubMed Central

    Perriman, Rhonda J.; Ares, Manuel

    2007-01-01

    Nuclear pre-messenger RNA (pre-mRNA) splicing requires multiple spliceosomal small nuclear RNA (snRNA) and pre-mRNA rearrangements. Here we reveal a new snRNA conformational switch in which successive roles for two competing U2 helices, stem IIa and stem IIc, promote distinct splicing steps. When stem IIa is stabilized by loss of stem IIc, rapid ATP-independent and Cus2p-insensitive prespliceosome formation occurs. In contrast, hyperstabilized stem IIc improves the first splicing step on aberrant branchpoint pre-mRNAs and rescues temperature-sensitive U6–U57C, a U6 mutation that also suppresses first-step splicing defects of branchpoint mutations. A second, later role for stem IIa is revealed by its suppression of a cold-sensitive allele of the second-step splicing factor PRP16. Our data expose a spliceosomal progression cycle of U2 stem IIa formation, disruption by stem IIc, and then reformation of stem IIa before the second catalytic step. We propose that the competing stem IIa and stem IIc helices are key spliceosomal RNA elements that optimize juxtaposition of the proper reactive sites during splicing. PMID:17403781

  1. Mobility assessment: Sensitivity and specificity of measurement sets in older adults

    PubMed Central

    Panzer, Victoria P.; Wakefield, Dorothy B.; Hall, Charles B.; Wolfson, Leslie I.

    2011-01-01

    Objective: To identify quantitative measurement variables that characterize mobility in older adults, meet reliability and validity criteria, distinguish fall risk, and predict future falls. Design: Observational study with 1-year weekly falls follow-up. Setting: Mobility laboratory. Participants: Community-dwelling volunteers (n=74; 65-94 years old) categorized at entry as 27 'Non-fallers' or 47 'Fallers' by Medicare criteria (1 injury fall or >1 non-injury falls in the previous year). Interventions: None. Outcome Measures: Test-retest and within-subject reliability; criterion and concurrent validity; predictive ability indicated by observed sensitivity and specificity to entry fall-risk group (Falls-status), Tinetti Performance Oriented Mobility Assessment (POMA), Computerized Dynamic Posturography Sensory Organization Test (SOT), and subsequent falls reported weekly. Results: Measurement variables were selected that met reliability (ICC > 0.6) and/or discrimination (p < .01) criteria (clinical variables: turn steps, turn time, gait velocity, step-in-tub time, and downstairs time; force-plate variables: quiet-standing Romberg ratio sway area, maximal-lean anterior-posterior excursion, sit-to-stand medial-lateral excursion and sway area). Sets were created (3 clinical, 2 force plate) utilizing combinations of variables appropriate for older adults with different functional activity levels, and composite scores were calculated. Scores identified entry Falls-status and concurred with POMA and SOT. The full clinical set (5 measurement variables) produced sensitivity/specificity of .80/.74 for Falls-status. Composite scores were sensitive and specific in predicting subsequent injury falls and multiple falls compared to Falls-status, POMA, or SOT. Conclusions: Sets of quantitative measurement variables obtained with this mobility battery provided sensitive prediction of future injury falls and screening for multiple subsequent falls using tasks that should be appropriate to diverse participants. PMID:21621667

  2. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from the change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have shown that the proposed system is useful for internet telepresence.

  3. Step-rate cut-points for physical activity intensity in patients with multiple sclerosis: The effect of disability status.

    PubMed

    Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W

    2016-02-15

    Evaluating the relationship between step rate and the rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. The aims were to examine whether the VO2 to step-rate relationship during over-ground walking differs across disability levels among patients with MS and to develop step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at a comfortable speed, one at 0.22 m/s slower, and one at 0.22 m/s faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps by hand tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for the development of step-rate thresholds that vary only by disability status. The VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.
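
    Once a VO2-versus-step-rate regression is available, intensity cut-points follow by inverting the fitted line at a target MET level. A sketch with purely hypothetical coefficients (the study's models additionally include disability status and height as predictors):

```python
def step_rate_threshold(b0, b1, met_target=3.0, met_ml=3.5):
    """Invert a linear model VO2 = b0 + b1 * step_rate to get the step
    rate (steps/min) at a target intensity in METs, with 1 MET taken
    as 3.5 ml/kg/min. The coefficients b0, b1 are placeholders, not
    the study's fitted values."""
    vo2_target = met_target * met_ml          # target VO2 in ml/kg/min
    return (vo2_target - b0) / b1

# Hypothetical fit: VO2 = 2.5 + 0.08 * step_rate; moderate intensity = 3 METs
print(round(step_rate_threshold(b0=2.5, b1=0.08), 1))  # steps/min cut-point
```

In the study's approach, separate coefficient sets per disability level (and height) would yield the different cut-points reported for mild, moderate, and severe MS.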

  4. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speedup for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.

  5. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. 
A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.

  6. High-quality and interactive animations of 3D time-varying vector fields.

    PubMed

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.

  7. Quantifying Surface Water Dynamics at 30 Meter Spatial Resolution in the North American High Northern Latitudes 1991-2011

    NASA Technical Reports Server (NTRS)

    Carroll, Mark; Wooten, Margaret; DiMiceli, Charlene; Sohlberg, Robert; Kelly, Maureen

    2016-01-01

    The availability of a dense time series of satellite observations at moderate (30 m) spatial resolution is enabling unprecedented opportunities for understanding ecosystems around the world. A time series of data from Landsat was used to generate a series of three maps at a decadal time step to show how surface water has changed from 1991 to 2011 in the high northern latitudes of North America. Previous attempts to characterize the change in surface water in this region have been limited in either spatial or temporal resolution, or both. This series of maps was generated for the NASA Arctic and Boreal Vulnerability Experiment (ABoVE), which began in fall 2015. These maps show a nominal extent of surface water by using multiple observations to make a single map for each time step. This increases the confidence that any detected changes are related to climate or ecosystem changes and are not simply caused by short-duration weather events such as flood or drought. The methods and a comparison to other contemporary maps of the region are presented here. Initial verification results indicate 96% producer accuracy and 54% user accuracy when compared to 2-m resolution WorldView-2 data. All water bodies that were omitted were one Landsat pixel or smaller, hence below the detection limits of the instrument.
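
    The reported producer and user accuracies are standard confusion-matrix summaries. A minimal sketch (the pixel counts below are hypothetical, chosen only to produce rates of the same magnitude as those reported, not the paper's validation data):

```python
def producer_user_accuracy(tp, fp, fn):
    """Producer accuracy = TP / (TP + FN): the fraction of reference
    water pixels the map found (1 - omission error).
    User accuracy = TP / (TP + FP): the fraction of mapped water
    pixels that are truly water (1 - commission error)."""
    return tp / (tp + fn), tp / (tp + fp)

# Illustrative counts only: 100 reference water pixels, 178 mapped water pixels
prod, user = producer_user_accuracy(tp=96, fp=82, fn=4)
print(round(prod, 2), round(user, 2))
```

High producer accuracy with much lower user accuracy, as in the paper, indicates the map misses little true water but commits false positives, a common pattern when sub-pixel water bodies sit near the sensor's detection limit.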

  8. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    PubMed

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan (generated with the Pinnacle Treatment Planning System, Philips, for one of the 3DCTs of the 4DCT), and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
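The dose-accumulation idea above, summing per-state dose as the measured lung volume moves through discrete motion states, can be sketched as follows. This is a minimal illustration: the function name, the per-state dose-rate table, and the linear volume-to-state quantization are assumptions, not the paper's GPU implementation (the 20 states mirror the real-time test described).

```python
def accumulated_dose(volume_samples, dose_rate_per_state, v_min, v_max, dt):
    """Quantize spirometry lung-volume samples into discrete tumor-motion
    states and sum the per-state dose rate over the beam-on period.

    volume_samples: lung volume measured every dt seconds during beam-on.
    dose_rate_per_state: dose rate (Gy/s) for each motion state, e.g. 20
    states per breath, extracted from the treatment plan (assumed here).
    """
    n_states = len(dose_rate_per_state)
    total = 0.0
    for v in volume_samples:
        frac = (v - v_min) / (v_max - v_min)            # normalized volume in [0, 1]
        state = min(n_states - 1, max(0, int(frac * n_states)))
        total += dose_rate_per_state[state] * dt        # dose deposited this sample
    return total
```

With a uniform dose-rate table the result reduces to rate × beam-on time, which is a quick sanity check on the quantization.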

  9. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  10. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  11. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes"), and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64(sup 3) particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code as a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.
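A cost model of the kind described, combining code parameters with machine-dependent parameters, might look like the sketch below. The parameter names and the exact functional form are illustrative assumptions, not the paper's actual timing expressions:

```python
def pic_step_time(n_particles, n_grid, flops_per_push, flop_rate,
                  frac_moved, bytes_per_particle, particle_bw,
                  frac_guard, bytes_per_gridpoint, grid_bw):
    """Rough per-time-step cost for a domain-decomposed PIC push:
    compute time plus particle-exchange and guard-cell grid communication.
    Returns (total time, communication-to-computation ratio)."""
    t_push = n_particles * flops_per_push / flop_rate            # compute
    t_pcomm = frac_moved * n_particles * bytes_per_particle / particle_bw
    t_gcomm = frac_guard * n_grid * bytes_per_gridpoint / grid_bw
    t_comm = t_pcomm + t_gcomm
    return t_push + t_comm, t_comm / t_push
```

Plugging in measured machine parameters gives a quick estimate of whether a scaled problem will stay in the low communication-to-computation regime the paper reports.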

  12. Measurements of gluconeogenesis and glycogenolysis: A methodological review

    USDA-ARS?s Scientific Manuscript database

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to deter...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion but no future information. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) the training component, containing the primary data from the first frame to the beginning of the input subsequence; (b) the input subsequence component of the surrogate signal, used as input to the prediction algorithm; (c) the output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component that best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow these subsequences and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
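The three-step prediction scheme can be sketched as follows, assuming Euclidean distance as the matching criterion and inverse-distance relative weighting; the function name, parameter defaults, and these two specific choices are illustrative, not the abstract's exact formulation:

```python
import math

def predict_ahead(signal, input_len=100, output_len=50, k=3):
    """Predict output_len future samples of a 1-D surrogate signal:
    (1) match the most recent input_len samples against subsequences of
    the training portion, (2) weight the k best matches by inverse
    distance, (3) combine the continuations of those matches."""
    query = signal[-input_len:]
    train = signal[:-input_len]
    # Step 1: score every candidate whose continuation lies inside train.
    candidates = []
    for s in range(len(train) - input_len - output_len + 1):
        window = train[s:s + input_len]
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(window, query)))
        candidates.append((dist, s))
    candidates.sort()
    best = candidates[:k]
    # Step 2: relative weighting -- closer matches contribute more.
    weights = [1.0 / (d + 1e-9) for d, _ in best]
    total = sum(weights)
    # Step 3: weighted average of the parts following each matched subsequence.
    return [sum(w * train[s + input_len + i] for w, (_, s) in zip(weights, best)) / total
            for i in range(output_len)]
```

On a quasi-periodic signal (like respiration) the matched subsequences repeat in phase, so their continuations closely track the true future samples.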

  14. The importance of Active Transportation to and from school for daily physical activity among children.

    PubMed

    Pabayo, Roman; Maximova, Katerina; Spence, John C; Vander Ploeg, Kerry; Wu, Biao; Veugelers, Paul J

    2012-09-01

To investigate whether urban and rural Canadian children who use Active Transportation (AT) to and from school are more likely to meet physical activity recommendations. The Raising healthy Eating and Active Living in Alberta (REAL Kids Alberta) study is a population-based health survey among Grade 5 students. In 2009, physical activity levels were measured using time-stamped pedometers (number of steps/hour) among 688 children. Parents reported mode of transportation to and from school (AT/non-AT). Multilevel multiple linear regression analyses with corresponding β coefficients were conducted to quantify the relationship between mode of transportation to and from school and (1) overall step count, and (2) the likelihood of achieving the at least 13,500 steps per day recommended for optimal growth and development. Among urban children, those who used AT to and from school accumulated more steps [β=1124 (95% CI=170, 2077)] and, although the difference was not significant, were more likely to achieve the recommended 13,500 steps/day than those not using AT [OR=1.61 (95% CI=0.93, 2.81)]. Using AT to and from school appears to benefit children by supplementing their physical activity, particularly those living in urban regions. Strategies to promote physical activity are needed, particularly for children residing in rural regions and smaller towns. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Effectiveness of a walking programme to support adults with intellectual disabilities to increase physical activity: walk well cluster-randomised controlled trial.

    PubMed

    Melville, Craig A; Mitchell, Fiona; Stalker, Kirsten; Matthews, Lynsay; McConnachie, Alex; Murray, Heather M; Melling, Chris; Mutrie, Nanette

    2015-09-29

Programs to change health behaviours have been identified as one way to reduce health inequalities experienced by disadvantaged groups. The objective of this study was to examine the effectiveness of a behaviour change programme to increase walking and reduce sedentary behaviour of adults with intellectual disabilities. We used a cluster randomised controlled design and recruited participants over 18 years old and not regularly involved in physical activity from intellectual disabilities community-based organisations. Assessments were carried out blind to allocation. Clusters of participants were randomly allocated to the Walk Well programme or a 12-week waiting list control. Walk Well consisted of three face-to-face physical activity consultations incorporating behaviour change techniques, written resources for participants and carers, and an individualised, structured walking programme. The primary outcome, measured with accelerometers, was change in mean step count per day between baseline and 12 weeks. Secondary outcomes included percentage time per day sedentary and in moderate-vigorous physical activity (MVPA), body mass index (BMI), and subjective well-being. One hundred and two participants in 50 clusters were randomised. 82 (80.4%) participants completed the primary outcome. 66.7% of participants lived in the most deprived quintile on the Scottish Index of Multiple Deprivation. At baseline, participants walked 4780 (standard deviation 2432) steps per day, spent 65.5% (standard deviation 10.9) of time sedentary, and 59% had a body mass index in the obesity range. After the walking programme, the difference between mean step counts of the Walk Well and control groups was 69.5 steps per day [95% confidence interval (CI) -1054 to 1193.3].
There were no significant between group differences in percentage time sedentary 1.6% (95% CI -2.984 to 6.102), percentage time in MVPA 0.3% (95% CI -0.7 to 1.3), BMI -0.2 kg/m(2) (95% CI -0.8 to 0.4) or subjective well-being 0.3 (95% CI -0.9 to 1.5). This is the first published trial of a walking program for adults with intellectual disabilities. Positively changing physical activity and sedentary behaviours may require more intensive programmes or upstream approaches to address the multiple social disadvantages experienced by adults with intellectual disabilities. Since participants spent the majority of their time sedentary, home-based programmes to reduce sitting time may be a viable health improvement approach. Current Controlled Trials ISRCTN50494254.

  16. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models: the first uses a sequential estimation of risk factors originally obtained 8 years prior, improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, both types of data provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The multiple resource inventory decision-making process

    Treesearch

    Victor A. Rudis

    1993-01-01

A model of the multiple resource inventory decision-making process is presented that identifies steps in conducting inventories, describes the infrastructure, and points out knowledge gaps that are common to many interdisciplinary studies. Successful efforts to date suggest the need to bridge the gaps by sharing elements and maintaining dialogue among stakeholders in multiple...

  18. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.

We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.

  19. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE PAGES

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.; ...

    2017-02-17

We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.
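A toy version of the identification idea, recovering a steady-state gain and a response timescale from an input-output record, is sketched below for an assumed first-order system. The paper characterizes many input-output relationships simultaneously, so this single-channel least-squares fit is only an illustration of the underlying principle:

```python
def simulate_first_order(gain, tau, u, dt=1.0):
    """Discrete first-order response y <- y + dt*(gain*u - y)/tau."""
    y, out = 0.0, []
    for ut in u:
        y += dt * (gain * ut - y) / tau
        out.append(y)
    return out

def identify(u, y):
    """Least-squares fit of y[t+1] = a*y[t] + b*u[t+1] via the 2x2 normal
    equations, then recover steady-state gain g = b/(1-a) and timescale
    tau = dt/(1-a) (dt = 1 here). Returns (gain, tau)."""
    Sxx = Sxv = Svv = SxY = SvY = 0.0
    for t in range(len(y) - 1):
        x, v, Y = y[t], u[t + 1], y[t + 1]
        Sxx += x * x; Sxv += x * v; Svv += v * v
        SxY += x * Y; SvY += v * Y
    det = Sxx * Svv - Sxv * Sxv
    a = (SxY * Svv - SvY * Sxv) / det
    b = (SvY * Sxx - SxY * Sxv) / det
    return b / (1.0 - a), 1.0 / (1.0 - a)
```

Because a step perturbation excites the full transient, the fit recovers both the steady-state response and the timescale over which it emerges, which is the quantity the technique estimates per region.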

  20. Plasmonic and SERS performances of compound nanohole arrays fabricated by shadow sphere lithography

    NASA Astrophysics Data System (ADS)

    Skehan, Connor; Ai, Bin; Larson, Steven R.; Stone, Keenan M.; Dennis, William M.; Zhao, Yiping

    2018-03-01

Several plasmonic compound nanohole arrays (CNAs), such as triangular nanoholes and fan-like nanoholes with multiple nanotips and nanogaps, are designed by a simple and efficient shadow sphere lithography technique, tuning the sphere mask size, the deposition and azimuthal angles, the substrate temperature TS, and the number of deposition steps N. Compared with conventional circular nanohole arrays, the CNAs show more hot spots and exhibit new transmission peaks. Systematic finite-difference time-domain calculations indicate that the different resonance modes excited by the variously shaped and sized nanoholes are responsible for the enhanced plasmonic performance of the CNAs. Compared to CNA samples with only one circular hole in the unit cell, the Raman scattering intensity of a CNA with multiple triangular nanoholes, nanogaps, and nanotips can be enhanced up to 5-fold. Owing to the strong resonances arising from their multiple structural features, these CNAs are promising for applications as optical filters, plasmonic sensors, and surface-enhanced spectroscopies.

  1. Brunn: an open source laboratory information system for microplates with a graphical plate layout design process.

    PubMed

    Alvarsson, Jonathan; Andersson, Claes; Spjuth, Ola; Larsson, Rolf; Wikberg, Jarl E S

    2011-05-20

Compound profiling and drug screening generate large amounts of data and are generally based on microplate assays. Current information systems used for handling this are mainly commercial, closed source, expensive, and heavyweight; there is a need for a flexible, lightweight, open system for handling plate design, validation, and preparation of data. A Bioclipse plugin consisting of a client part and a relational database was constructed. A multiple-step plate layout point-and-click interface was implemented inside Bioclipse. The system contains a data validation step, where outliers can be removed, and finally a plate report with all relevant calculated data, including dose-response curves. Brunn is capable of handling the data from microplate assays. It can create dose-response curves and calculate IC50 values. Using a system of this sort facilitates work in the laboratory. Being able to reuse already constructed plates and plate layouts, by starting from an earlier step in the plate layout design process, saves time and cuts down on error sources.
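The kind of IC50 calculation described can be illustrated with a minimal log-linear interpolation between the two measured points that bracket 50% response. Real dose-response analysis (as in Brunn) typically fits a four-parameter logistic curve, so this helper is an assumption for illustration, not Brunn's code:

```python
import math

def ic50_from_curve(concentrations, responses):
    """Estimate IC50 by log-linear interpolation between the two adjacent
    measured points that bracket 50% response (responses assumed to
    decrease with increasing concentration, as % of control)."""
    for i in range(len(responses) - 1):
        hi, lo = responses[i], responses[i + 1]
        if hi >= 50.0 >= lo:
            f = (hi - 50.0) / (hi - lo)          # fraction of the bracket
            logc = math.log10(concentrations[i]) + f * (
                math.log10(concentrations[i + 1]) - math.log10(concentrations[i]))
            return 10 ** logc
    raise ValueError("50% response not bracketed by the data")
```

For a plate with responses 95%, 80%, 20%, 5% at 1, 10, 100, 1000 units, the bracket lies between 10 and 100 and the interpolated IC50 falls mid-way on the log axis.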

  2. A method for real-time generation of augmented reality work instructions via expert movements

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Bhaskar; Winer, Eliot

    2015-03-01

Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty of authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR: 3D models must be created, textured, oriented, and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented that uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions are turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.

  3. Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip

    PubMed Central

    Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.

    2017-01-01

Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10^5 copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028

  4. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
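The multiple time step integration mentioned above can be illustrated with a minimal reversible RESPA-style update for a single 1-D particle, with the force split into a fast (inner) and a slow (outer) component. This generic sketch is an illustration of the technique, not the paper's integrator:

```python
def respa_step(x, v, fast_force, slow_force, dt_slow, n_inner, mass=1.0):
    """One reversible RESPA multiple-time-step update: the slow force is
    applied as half-kicks at the outer step size dt_slow, while the fast
    force is integrated with n_inner velocity-Verlet substeps."""
    dt_fast = dt_slow / n_inner
    v += 0.5 * dt_slow * slow_force(x) / mass      # outer half-kick (slow)
    for _ in range(n_inner):
        v += 0.5 * dt_fast * fast_force(x) / mass  # inner velocity Verlet
        x += dt_fast * v
        v += 0.5 * dt_fast * fast_force(x) / mass
    v += 0.5 * dt_slow * slow_force(x) / mass      # outer half-kick (slow)
    return x, v
```

Evaluating the expensive slow force once per outer step, rather than once per inner substep, is the source of the speedup; resonance limits on how large dt_slow can grow are exactly what the resonance-free schemes cited in this collection address.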

  5. Synchrotron X-ray micro-tomography at the Advanced Light Source: Developments in high-temperature in-situ mechanical testing

    NASA Astrophysics Data System (ADS)

    Barnard, Harold S.; MacDowell, A. A.; Parkinson, D. Y.; Mandal, P.; Czabaj, M.; Gao, Y.; Maillet, E.; Blank, B.; Larson, N. M.; Ritchie, R. O.; Gludovatz, B.; Acevedo, C.; Liu, D.

    2017-06-01

    At the Advanced Light Source (ALS), Beamline 8.3.2 performs hard X-ray micro-tomography under conditions of high temperature, pressure, mechanical loading, and other realistic conditions using environmental test cells. With scan times of 10s-100s of seconds, the microstructural evolution of materials can be directly observed over multiple time steps spanning prescribed changes in the sample environment. This capability enables in-situ quasi-static mechanical testing of materials. We present an overview of our in-situ mechanical testing capabilities and recent hardware developments that enable flexural testing at high temperature and in combination with acoustic emission analysis.

  6. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  7. Investigation of the Evolution of Crystal Size and Shape during Temperature Cycling and in the Presence of a Polymeric Additive Using Combined Process Analytical Technologies

    PubMed Central

    2017-01-01

Crystal size and shape can be manipulated to enhance the qualities of the final product. In this work the steady-state shape and size of succinic acid crystals, with and without a polymeric additive (Pluronic P123), at 350 mL scale is reported. The effect of the amplitude of cycles as well as the heating/cooling rates is described, and convergent cycling (direct nucleation control) is compared to static cycling. The results show that the shape of succinic acid crystals changes from plate- to diamond-like after multiple cycling steps, and that the time required for this morphology change to occur is strongly related to the type of cycling. Addition of the polymer is shown to affect both the final shape of the crystals and the time needed to reach size and shape steady-state conditions. It is shown how this phenomenon can be used to improve the design of the crystallization step in order to achieve more efficient downstream operations and, in general, to help optimize the whole manufacturing process. PMID:28867966

  8. Evaluating the efficiency of a zakat institution over a period of time using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin

    2017-08-01

It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past, so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process, as it involves multiple inputs and/or outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from 2007 to 2015, treating each year as a decision making unit. Two inputs (i.e. number of staff and number of branches) and two outputs (i.e. total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement are discussed based on the results.

  9. Fair Play Game: A Group Contingency Strategy to Increase Students' Active Behaviours in Physical Education

    ERIC Educational Resources Information Center

    Vidoni, Carla; Lee, Chang-Hung; Azevedo, L. B.

    2014-01-01

    A dependent group contingency strategy called Fair Play Game was applied to promote increase in number of steps during physical education classes for sixth-grade students. Results from a multiple baseline design across three classes showed that the mean number of steps for baseline vs. intervention were: Class 1: 43 vs. 64 steps/minute; Class 2:…

  10. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
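The iteration described above can be sketched as follows, assuming a secant-style nonlinear projection built from the two most recent (area, pressure) pairs; the projection formula, function names, and the test network below are illustrative assumptions rather than the patented method:

```python
def regulate(pressure_of_area, target_p, area_prev, area_curr,
             rate=0.7, tol=1e-4, max_steps=200):
    """March the regulator's flow-through area toward the area that yields
    the target downstream pressure. Each step: (1) project an area from
    the current and previous (area, pressure) pairs (secant-style),
    (2) move a user-defined fraction `rate` of the way toward it."""
    p_prev = pressure_of_area(area_prev)
    p_curr = pressure_of_area(area_curr)
    for _ in range(max_steps):
        if abs(p_curr - target_p) < tol:
            break
        slope = (p_curr - p_prev) / (area_curr - area_prev)
        area_proj = area_curr + (target_p - p_curr) / slope   # projected area
        area_next = area_curr + rate * (area_proj - area_curr)  # damped step
        area_prev, p_prev = area_curr, p_curr
        area_curr, p_curr = area_next, pressure_of_area(area_next)
    return area_curr, p_curr
```

The rate-control parameter damps the update, trading convergence speed for stability when the surrounding flow-network solution makes the area-to-pressure map stiff or noisy.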

  11. Metabolic pathways as possible therapeutic targets for progressive multiple sclerosis.

    PubMed

    Heidker, Rebecca M; Emerson, Mitchell R; LeVine, Steven M

    2017-08-01

    Unlike relapsing remitting multiple sclerosis, there are very few therapeutic options for patients with progressive forms of multiple sclerosis. While immune mechanisms are key participants in the pathogenesis of relapsing remitting multiple sclerosis, the mechanisms underlying the development of progressive multiple sclerosis are less well understood. Putative mechanisms behind progressive multiple sclerosis have been put forth: insufficient energy production via mitochondrial dysfunction, activated microglia, iron accumulation, oxidative stress, activated astrocytes, Wallerian degeneration, apoptosis, etc. Furthermore, repair processes such as remyelination are incomplete. Experimental therapies that strive to improve metabolism within neurons and glia, e.g., oligodendrocytes, could act to counter inadequate energy supplies and/or support remyelination. Most experimental approaches have been examined as standalone interventions; however, it is apparent that the biochemical steps being targeted are part of larger pathways, which are further intertwined with other metabolic pathways. Thus, the potential benefits of a tested intervention, or of an established therapy, e.g., ocrelizumab, could be undermined by constraints on upstream and/or downstream steps. If correct, then this argues for a more comprehensive, multifaceted approach to therapy. Here we review experimental approaches to support neuronal and glial metabolism, and/or promote remyelination, which may have potential to lessen or delay progressive multiple sclerosis.

  12. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method involve multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA) which are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate on a grid of over 134 million points. For the same problem, a speedup of 48 times is reached using a Tesla P100 GPU.
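
    The per-line tridiagonal solves at the core of the ADI scheme use the Thomas algorithm; a serial reference version is sketched below (the paper's contribution, batching many such solves through CUDA libraries, is not shown, and the example system is made up):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a, b, c are the sub-, main,
    and super-diagonals (a[0] and c[-1] unused). Serial reference for the
    per-line solves that the paper batches on the GPU."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 example with exact solution x = [1, 1, 1]
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```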

  13. Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial.

    PubMed

    Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid

    2016-06-01

    Data science or "data-driven research" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.

  14. Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial

    PubMed Central

    Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid

    2016-01-01

    Data science or “data-driven research” is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves. PMID:27258286

  15. Differences Between Gait on Stairs and Flat Surfaces in Relation to Fall Risk and Future Falls.

    PubMed

    Wang, Kejia; Delbaere, Kim; Brodie, Matthew A D; Lovell, Nigel H; Kark, Lauren; Lord, Stephen R; Redmond, Stephen J

    2017-11-01

    We used body-worn inertial sensors to quantify differences in semi-free-living gait between stairs and normal flat ground in older adults, and investigated the utility of assessing gait on these terrains for predicting the occurrence of multiple falls. Eighty-two community-dwelling older adults wore two inertial sensors, on the lower back and the right ankle, during several bouts of walking on flat surfaces and up and down stairs, in between rests and activities of daily living. Step rate was calculated from the fundamental frequency of the vertical acceleration at the lower back; step rate variability was the width of this fundamental-frequency peak in the signal's power spectral density. Movement vigor was calculated at both body locations from the signal variance. Partial Spearman correlations between gait parameters and physiological fall risk factors (components from the Physiological Profile Assessment) were calculated while controlling for age and gender. Overall, anteroposterior vigor at the lower back in stair descent was lower in subjects with longer reaction times. Older adults walked more slowly on stairs, but they were not significantly slower on flat surfaces. Using logistic regression, faster step rate in stair descent was associated with multiple prospective falls over 12 months. No significant associations were shown for gait parameters derived during walking upstairs or on flat surfaces. These results suggest that stair descent gait may provide more insight into fall risk than regular walking and stair ascent, and that further sensor-based investigation into unsupervised gait on different terrains would be valuable.
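
    Extracting step rate as the fundamental frequency of the vertical acceleration can be sketched with a brute-force DFT peak search (a minimal illustration on a synthetic signal; the paper's pipeline, including the PSD-based variability measure, is more involved):

```python
import math

def step_rate(acc, fs):
    """Estimate step rate (Hz) as the frequency of the largest DFT magnitude
    of a vertical-acceleration signal sampled at fs Hz."""
    n = len(acc)
    mean = sum(acc) / n
    x = [v - mean for v in acc]          # remove DC so the peak is the gait frequency
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):           # brute-force DFT magnitude per frequency bin
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# synthetic gait signal: 2 steps per second, sampled at 50 Hz
fs = 50.0
rate = step_rate([math.sin(2 * math.pi * 2.0 * t / fs) for t in range(128)], fs)
```

    The frequency resolution is fs/n, so longer windows sharpen the estimate; an FFT would replace the O(n²) loop in practice.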

  16. Transient-state kinetic approach to mechanisms of enzymatic catalysis.

    PubMed

    Fisher, Harvey F

    2005-03-01

    Transient-state kinetics by its inherent nature can potentially provide more directly observed detailed resolution of discrete events in the mechanistic time courses of enzyme-catalyzed reactions than its more widely used steady-state counterpart. The use of the transient-state approach, however, has been severely limited by the lack of any theoretically sound and applicable basis of interpreting the virtual cornucopia of time and signal-dependent phenomena that it provides. This Account describes the basic kinetic behavior of the transient state, critically examines some currently used analytic methods, discusses the application of a new and more soundly based "resolved component transient-state time-course method" to the L-glutamate-dehydrogenase reaction, and establishes new approaches for the analysis of both single- and multiple-step substituted transient-state kinetic isotope effects.

  17. Real Time Radiation Exposure And Health Risks

    NASA Technical Reports Server (NTRS)

    Hu, Shaowen; Barzilla, Janet E.; Semones, Edward J.

    2015-01-01

    Radiation from solar particle events (SPEs) poses a serious threat to future manned missions outside of low Earth orbit (LEO). Accurate characterization of the radiation environment in the inner heliosphere and timely monitoring of the health risks to the crew are essential steps to ensure the safety of future Mars missions. In this project we plan to develop an approach that uses particle data from multiple satellites to perform near real-time simulations of radiation exposure and health risks for various exposure scenarios. Time-course profiles of dose rates will be calculated with HZETRN and PDOSE from the energy spectra and compositions of the particles archived from satellites, and will be validated against recent radiation exposure measurements in space. Real-time estimation of radiation risks will be investigated using ARRBOD. This cross-discipline integrated approach can improve risk mitigation by providing critical information for risk assessment and medical guidance to the crew during SPEs.

  18. The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.

    PubMed

    Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou

    2017-11-01

    The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals for a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis. The novel approach consisted of four steps applied in sequence: percutaneous drainage, negative-pressure irrigation, endoscopic necrosectomy, and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, and duration of organ failure and sepsis. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). Although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group had a significantly lower incidence of new-onset organ failure and other major complications among patients with the most severe type of acute pancreatitis. Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the need for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.

  19. Prevention of Osmotic Injury to Human Umbilical Vein Endothelial Cells for Biopreservation: A First Step Toward Biobanking of Endothelial Cells for Vascular Tissue Engineering.

    PubMed

    Niu, Dan; Zhao, Gang; Liu, Xiaoli; Zhou, Ping; Cao, Yunxia

    2016-03-01

    High-survival-rate cryopreservation of endothelial cells plays a critical role in vascular tissue engineering, and minimizing osmotic injury is the first step toward successful cryopreservation. We designed a low-cost, easy-to-use, microfluidics-based microperfusion chamber to investigate the osmotic responses of human umbilical vein endothelial cells (HUVECs) at different temperatures, and then optimized the protocols for using cryoprotective agents (CPAs) to minimize osmotic injuries and improve processes before freezing and after thawing. The fundamental cryobiological parameters were measured using the microperfusion chamber, and the optimized protocols using these parameters were confirmed by survival evaluation and cell proliferation experiments. It was revealed for the first time that HUVECs have an unusually small permeability coefficient for Me2SO. Even at the concentration well established for slow freezing of cells (1.5 M), one-step removal of CPAs from HUVECs might result in inevitable osmotic injuries, indicating that multiple-step removal is essential. Further experiments revealed that multistep removal of 1.5 M Me2SO at 25°C was the best protocol investigated, in good agreement with theory. These results should prove invaluable for optimization of cryopreservation protocols for HUVECs.

  20. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
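
    The peak-and-ratio step of the second method is straightforward to illustrate (toy gradient samples; the patented system operates on real magnetometer data):

```python
def peak_ratio(gradient, dt):
    """Identify the peak gradient value detected during a sampled period and
    express the time at which it occurs as a ratio of the whole period."""
    peak = max(gradient, key=abs)        # peak value of the gradient
    idx = gradient.index(peak)           # portion of time where the peak occurs
    return peak, (idx * dt) / (len(gradient) * dt)

peak, ratio = peak_ratio([0.1, 0.5, 2.0, 0.3], dt=0.25)
```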

  1. "Silicon millefeuille": From a silicon wafer to multiple thin crystalline films in a single step

    NASA Astrophysics Data System (ADS)

    Hernández, David; Trifonov, Trifon; Garín, Moisés; Alcubilla, Ramon

    2013-04-01

    During the last years, many techniques have been developed to obtain thin crystalline films from commercial silicon ingots. Large market applications are foreseen in the photovoltaic field, where important cost reductions are predicted, and also in advanced microelectronics technologies as three-dimensional integration, system on foil, or silicon interposers [Dross et al., Prog. Photovoltaics 20, 770-784 (2012); R. Brendel, Thin Film Crystalline Silicon Solar Cells (Wiley-VCH, Weinheim, Germany 2003); J. N. Burghartz, Ultra-Thin Chip Technology and Applications (Springer Science + Business Media, NY, USA, 2010)]. Existing methods produce "one at a time" silicon layers, once one thin film is obtained, the complete process is repeated to obtain the next layer. Here, we describe a technology that, from a single crystalline silicon wafer, produces a large number of crystalline films with controlled thickness in a single technological step.

  2. Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules

    PubMed Central

    Panzeri, Francesco

    2017-01-01

    We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser excitation design, as well as analysis challenges and their solutions. PMID:28419142

  3. Risk as a Resource - A New Paradigm

    NASA Technical Reports Server (NTRS)

    Gindorf, Thomas E.

    1996-01-01

    NASA must change dramatically because of the current United States federal budget climate. The American people and their elected officials have mandated a smaller, more efficient and effective government. For the past decade, NASA's budget had grown at or slightly above the rate of inflation. In that era, taking all steps to avoid the risk of failure was the rule. Spacecraft development was characterized by extensive analyses, numerous reviews, and multiple conservative tests. This methodology was consistent with the long available schedules for developing hardware and software for very large, billion dollar spacecraft. Those days are over. The time when every identifiable step was taken to avoid risk is being replaced by a new paradigm which manages risk in much the same way as other resources (schedule, performance, or dollars) are managed. While success is paramount to survival, it can no longer be bought with a large growing NASA budget.

  4. Evidence integration in model-based tree search

    PubMed Central

    Solway, Alec; Botvinick, Matthew M.

    2015-01-01

    Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
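
    The evidence-integration account of simple choice can be sketched as a noisy race to a bound; extending it to multiple steps of action, as the paper does, would chain such races over a decision tree. The drift values, bound, and noise level below are illustrative:

```python
import math
import random

def race_to_bound(drifts, bound=1.0, noise=0.1, dt=0.01, seed=1):
    """Each option accumulates noisy evidence until one crosses the bound;
    returns (index of winning option, decision time)."""
    rng = random.Random(seed)
    acc = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, drift in enumerate(drifts):
            acc[i] += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
            if acc[i] >= bound:
                return i, t

# noiseless check: the higher-drift option wins at roughly t = bound / drift
winner, t = race_to_bound([0.5, 2.0], noise=0.0)
```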

  5. Inkjet Deposition of Layer by Layer Assembled Films

    PubMed Central

    Andres, Christine M.; Kotov, Nicholas A.

    2010-01-01

    Layer-by-layer assembly (LBL) can create advanced composites with exceptional properties unavailable by other means, but the laborious deposition process and multiple dipping cycles hamper their utilization in microtechnologies and electronics. Multiple rinse steps provide both structural control and thermodynamic stability to LBL multilayers, but they severely limit practical applications and add substantially to processing time and waste. Here we demonstrate that by employing inkjet technology one can deliver the necessary quantities of LBL components required for film build-up without excess, eliminating the need for repetitive rinsing steps. This feature differentiates this approach from all other recognized LBL modalities. Using a model system of negatively charged gold nanoparticles and positively charged poly(diallyldimethylammonium) chloride, the material stability, nanoscale control over thickness, and particle coverage offered by the inkjet LBL technique are shown to be equal or better than those of multilayers made with traditional dipping cycles. The opportunity for fast deposition of complex metallic patterns using a simple inkjet printer was also shown. The additive nature of LBL deposition based on the formation of insoluble nanoparticle-polyelectrolyte complexes of various compositions provides an excellent opportunity for versatile, multi-component, and non-contact patterning for the simple production of stratified patterns that are much needed in advanced devices. PMID:20863114

  6. Coupling of a 3D Finite Element Model of Cardiac Ventricular Mechanics to Lumped Systems Models of the Systemic and Pulmonic Circulation

    PubMed Central

    Kerckhoffs, Roy C. P.; Neal, Maxwell L.; Gu, Quan; Bassingthwaighte, James B.; Omens, Jeff H.; McCulloch, Andrew D.

    2010-01-01

    In this study we present a novel, robust method to couple finite element (FE) models of cardiac mechanics to systems models of the circulation (CIRC), independent of cardiac phase. For each time step through a cardiac cycle, left and right ventricular pressures were calculated using ventricular compliances from the FE and CIRC models. These pressures served as boundary conditions in the FE and CIRC models. In succeeding steps, pressures were updated to minimize cavity volume error (FE minus CIRC volume) using Newton iterations. Coupling was achieved when a predefined criterion for the volume error was satisfied. Initial conditions for the multi-scale model were obtained by replacing the FE model with a varying elastance model, which takes into account direct ventricular interactions. Applying the coupling, a novel multi-scale model of the canine cardiovascular system was developed. Global hemodynamics and regional mechanics were calculated for multiple beats in two separate simulations with a left ventricular ischemic region and pulmonary artery constriction, respectively. After the interventions, global hemodynamics changed due to direct and indirect ventricular interactions, in agreement with previously published experimental results. The coupling method allows for simulations of multiple cardiac cycles for normal and pathophysiology, encompassing levels from cell to system. PMID:17111210
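
    The pressure update can be sketched as a Newton iteration that drives the volume error (FE minus CIRC volume) to zero; the linear pressure-volume relations below are toy stand-ins for the actual finite element and circulation models:

```python
def couple_pressure(fe_volume, circ_volume, p0=10.0, tol=1e-8, max_iter=50):
    """Newton iteration on cavity pressure until the FE cavity volume matches
    the circulation-model volume at that pressure."""
    p = p0
    for _ in range(max_iter):
        err = fe_volume(p) - circ_volume(p)      # cavity volume error
        if abs(err) < tol:
            break
        h = 1e-6 * max(abs(p), 1.0)              # finite-difference derivative
        derr = (fe_volume(p + h) - circ_volume(p + h) - err) / h
        p -= err / derr                          # Newton update
    return p

# toy compliances: FE volume falls with pressure, CIRC volume rises with it
p = couple_pressure(lambda p: 100.0 - 2.0 * p, lambda p: 20.0 + 1.0 * p)
```

    For these linear toy relations Newton converges in one step; in the paper the same idea runs once per time step, with compliances evaluated from the FE and CIRC models.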

  7. Identifying Structure-Property Relationships Through DREAM.3D Representative Volume Elements and DAMASK Crystal Plasticity Simulations: An Integrated Computational Materials Engineering Approach

    NASA Astrophysics Data System (ADS)

    Diehl, Martin; Groeber, Michael; Haase, Christian; Molodov, Dmitri A.; Roters, Franz; Raabe, Dierk

    2017-05-01

    Predicting, understanding, and controlling mechanical behavior is the most important task when designing structural materials. Modern alloy systems—in which multiple deformation mechanisms, phases, and defects are introduced to overcome the inverse strength-ductility relationship—give rise to multiple possibilities for modifying the deformation behavior, rendering traditional, exclusively experimentally-based alloy development workflows inappropriate. For fast and efficient alloy design, it is therefore desirable to predict the mechanical performance of candidate alloys by simulation studies to replace time- and resource-consuming mechanical tests. Simulation tools suitable for this task need to correctly predict the mechanical behavior in dependence of alloy composition, microstructure, texture, phase fractions, and processing history. Here, an integrated computational materials engineering approach based on the open source software packages DREAM.3D and DAMASK (Düsseldorf Advanced Materials Simulation Kit) that enables such virtual material development is presented. More specifically, our approach consists of the following three steps: (1) acquire statistical quantities that describe a microstructure, (2) build a representative volume element based on these quantities employing DREAM.3D, and (3) evaluate the representative volume element using a predictive crystal plasticity material model provided by DAMASK. As an example, these steps are conducted here for a high-manganese steel.

  8. Rapid and Sensitive Isothermal Detection of Nucleic-acid Sequence by Multiple Cross Displacement Amplification.

    PubMed

    Wang, Yi; Wang, Yan; Ma, Ai-Jing; Li, Dong-Xun; Luo, Li-Juan; Liu, Dong-Xin; Jin, Dong; Liu, Kai; Ye, Chang-Yun

    2015-07-08

    We have devised a novel amplification strategy based on an isothermal strand-displacement polymerization reaction, termed multiple cross displacement amplification (MCDA). The approach employed a set of ten specially designed primers spanning ten distinct regions of the target sequence and proceeded at a constant temperature (61-65 °C). At the assay temperature, the double-stranded DNAs were in a dynamic reaction environment of primer-template hybridization, so the high concentration of primers annealed to the template strands without a denaturing step to initiate the synthesis. In the subsequent isothermal amplification step, a series of primer binding and extension events yielded several single-stranded DNAs and single-stranded, single stem-loop DNA structures. These DNA products then enabled the strand-displacement reaction to enter exponential amplification. Three mainstream methods, including colorimetric indicators, agarose gel electrophoresis and real-time turbidity, were selected for monitoring the MCDA reaction. Moreover, the practical application of the MCDA assay was successfully evaluated by detecting the target pathogen nucleic acid in pork samples; the assay offered the advantages of quick results, modest equipment requirements, ease of operation, and high specificity and sensitivity. Here we expound the basic MCDA mechanism and also provide details on an alternative to the MCDA technique, the single-MCDA (S-MCDA) assay.

  9. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. The approach taken in this work is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance by allowing step-by-step updates of the Jacobian matrix. The theoretical basis of the work is followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
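
    A single-channel sketch of a gradient-based update over the complex space: the drive w is adapted so that the measured response H*w reaches a complex target in both magnitude and phase. The scalar plant H, step size, and target are illustrative assumptions; the paper works with full response matrices and step-by-step Jacobian updates:

```python
def adapt_drive(target, H, mu=0.2, steps=100):
    """Steepest-descent (CR-calculus style) update of the complex drive w
    so that the response H * w converges to the complex target."""
    w = 0 + 0j
    for _ in range(steps):
        e = target - H * w               # complex control error
        w += mu * H.conjugate() * e      # gradient step over the complex space
    return w

# converge to the drive that reproduces the target magnitude and phase
w = adapt_drive(1 + 1j, H=0.5 - 0.5j)
```

    The error shrinks by a factor |1 - mu*|H|^2| per step, so the step size must be chosen against the plant gain for stability.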

  10. Time-jittered marine seismic data acquisition via compressed sensing and sparsity-promoting wavefield reconstruction

    NASA Astrophysics Data System (ADS)

    Wason, H.; Herrmann, F. J.; Kumar, R.

    2016-12-01

    Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition cost and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--one that is sparse or compressible in some transform domain--from far fewer measurements than the Nyquist sampling criterion requires. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--in which multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme in which single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We reconstruct conventional seismic data from the jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps control the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
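
The jittered-(sub)sampling idea described above, which bounds the maximum gap between consecutive samples unlike purely random subsampling, can be sketched in a few lines. This is a generic illustration, not the authors' acquisition code; the grid size and subsampling factor are arbitrary:

```python
import random

def jittered_subsample(n, factor, seed=0):
    """Choose one sample position inside each window of `factor` grid
    points. Unlike uniform random subsampling, the gap between
    consecutive picks is bounded by 2*factor - 1."""
    rng = random.Random(seed)
    picks = []
    for start in range(0, n, factor):
        stop = min(start + factor, n)
        picks.append(rng.randrange(start, stop))
    return picks

positions = jittered_subsample(100, 4)
gaps = [b - a for a, b in zip(positions, positions[1:])]
```

With 100 grid points and a factor of 4 this keeps 25 positions (a fourfold subsampling) while guaranteeing that no gap exceeds 7 points; a purely random choice of 25 positions offers no such bound, which matters for wavefield reconstruction with localized sparsifying transforms.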

  11. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
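
The two-step model-selection logic described above (fit a simple turbidity-only regression first, and fall back to a turbidity-streamflow multiple regression only if it is a clear improvement) can be sketched generically. The synthetic data, the improvement threshold, and the use of residual standard error as a stand-in for the report's MSPE criterion are all placeholder assumptions:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares: return coefficients and residual std. error."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    dof = len(y) - X1.shape[1]
    return beta, np.sqrt(resid @ resid / dof)

# Synthetic calibration samples (placeholder data, not USGS values).
rng = np.random.default_rng(1)
turbidity = rng.uniform(10, 200, 80)
streamflow = rng.uniform(1, 50, 80)
ssc = 2.5 * turbidity + 0.8 * streamflow + rng.normal(0, 5, 80)

beta_s, se_simple = fit_ols(turbidity[:, None], ssc)
beta_m, se_multi = fit_ols(np.column_stack([turbidity, streamflow]), ssc)

# Keep the simple model unless adding streamflow clearly reduces the
# standard error (an illustrative threshold, not the published criterion).
model = "multiple" if se_multi < 0.9 * se_simple else "simple"
```

Here streamflow carries real signal, so the turbidity-streamflow model is selected; with weak streamflow influence the simple model would be retained, mirroring the decision path in the guidelines.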

  12. Improved Prediction of Falls in Community-Dwelling Older Adults Through Phase-Dependent Entropy of Daily-Life Walking

    PubMed Central

    Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; van Dieën, Jaap H.; Vereijken, Beatrix; Helbostad, Jorunn L.; Pijnappels, Mirjam

    2018-01-01

    Age and age-related diseases have been suggested to decrease entropy of human gait kinematics, which is thought to make older adults more susceptible to falls. In this study we introduce a new entropy measure, called phase-dependent generalized multiscale entropy (PGME), and test whether this measure improves fall-risk prediction in community-dwelling older adults. PGME can assess phase-dependent changes in the stability of gait dynamics that result from kinematic changes in events such as heel strike and toe-off. PGME was assessed for trunk acceleration of 30 s walking epochs in a re-analysis of 1 week of daily-life activity data from the FARAO study, originally described by van Schooten et al. (2016). The re-analyzed data set contained inertial sensor data from 52 single- and 46 multiple-time prospective fallers in a 6-month follow-up period, and an equal number of non-falling controls matched by age, weight, height, gender, and the use of walking aids. The predictive ability of PGME for falls was assessed using a partial least squares regression. PGME predicted falls among single-time prospective fallers better than the other gait features did. The single-time fallers had a higher PGME (p < 0.0001) of their trunk acceleration at 60% of their step cycle when compared with non-fallers. No significant differences were found between PGME of multiple-time fallers and non-fallers, but PGME was found to improve the prediction model of multiple-time fallers when combined with other gait features. These findings suggest that taking into account phase-dependent changes in the stability of the gait dynamics has additional value for predicting falls in older people, especially for single-time prospective fallers. PMID:29556188

  13. Improved Prediction of Falls in Community-Dwelling Older Adults Through Phase-Dependent Entropy of Daily-Life Walking.

    PubMed

    Ihlen, Espen A F; van Schooten, Kimberley S; Bruijn, Sjoerd M; van Dieën, Jaap H; Vereijken, Beatrix; Helbostad, Jorunn L; Pijnappels, Mirjam

    2018-01-01

    Age and age-related diseases have been suggested to decrease entropy of human gait kinematics, which is thought to make older adults more susceptible to falls. In this study we introduce a new entropy measure, called phase-dependent generalized multiscale entropy (PGME), and test whether this measure improves fall-risk prediction in community-dwelling older adults. PGME can assess phase-dependent changes in the stability of gait dynamics that result from kinematic changes in events such as heel strike and toe-off. PGME was assessed for trunk acceleration of 30 s walking epochs in a re-analysis of 1 week of daily-life activity data from the FARAO study, originally described by van Schooten et al. (2016). The re-analyzed data set contained inertial sensor data from 52 single- and 46 multiple-time prospective fallers in a 6-month follow-up period, and an equal number of non-falling controls matched by age, weight, height, gender, and the use of walking aids. The predictive ability of PGME for falls was assessed using a partial least squares regression. PGME predicted falls among single-time prospective fallers better than the other gait features did. The single-time fallers had a higher PGME (p < 0.0001) of their trunk acceleration at 60% of their step cycle when compared with non-fallers. No significant differences were found between PGME of multiple-time fallers and non-fallers, but PGME was found to improve the prediction model of multiple-time fallers when combined with other gait features. These findings suggest that taking into account phase-dependent changes in the stability of the gait dynamics has additional value for predicting falls in older people, especially for single-time prospective fallers.
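
The exact PGME definition is given in the paper; as a rough illustration of the kind of entropy estimate it generalizes, here is a plain sample-entropy computation for short signals. The template length m and tolerance factor are conventional defaults, not the study's parameters:

```python
import math
import random

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (within tolerance r) still match
    at m+1 points. Lower values mean more regular dynamics."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    r = r_factor * sd

    def matches(mm):
        t = [x[i:i + mm] for i in range(n - mm)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# A regular (periodic) signal scores lower than an irregular one.
rng = random.Random(3)
regular = [math.sin(0.5 * i) for i in range(200)]
irregular = [rng.random() for _ in range(200)]
se_reg, se_irr = sample_entropy(regular), sample_entropy(irregular)
```

PGME additionally conditions the estimate on the phase of the step cycle (e.g., around heel strike), which is what lets it pick up instability localized at 60% of the cycle as reported above.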

  14. An efficient, reliable and inexpensive device for the rapid homogenization of multiple tissue samples by centrifugation.

    PubMed

    Ilyin, S E; Plata-Salamán, C R

    2000-02-15

    Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.

  15. Precision medicine in chronic disease management: The multiple sclerosis BioScreen.

    PubMed

    Gourraud, Pierre-Antoine; Henry, Roland G; Cree, Bruce A C; Crane, Jason C; Lizee, Antoine; Olson, Marram P; Santaniello, Adam V; Datta, Esha; Zhu, Alyssa H; Bevan, Carolyn J; Gelfand, Jeffrey M; Graves, Jennifer S; Goodin, Douglas S; Green, Ari J; von Büdingen, H-Christian; Waubant, Emmanuelle; Zamvil, Scott S; Crabtree-Hartman, Elizabeth; Nelson, Sarah; Baranzini, Sergio E; Hauser, Stephen L

    2014-11-01

    We present a precision medicine application developed for multiple sclerosis (MS): the MS BioScreen. This new tool addresses the challenges of dynamic management of a complex chronic disease; the interaction of clinicians and patients with such a tool illustrates the extent to which translational digital medicine-that is, the application of information technology to medicine-has the potential to radically transform medical practice. We introduce 3 key evolutionary phases in displaying data to health care providers, patients, and researchers: visualization (accessing data), contextualization (understanding the data), and actionable interpretation (real-time use of the data to assist decision making). Together, these form the stepping stones that are expected to accelerate standardization of data across platforms, promote evidence-based medicine, support shared decision making, and ultimately lead to improved outcomes. © 2014 American Neurological Association.

  16. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step...Coupling of Substructures for Dynamic Analyses,” AIAA Journal , Vol. 6, No. 7, 1968, pp. 1313-1319. 2“Using the State-Dependent Modal Force (MFORCE),” AFL...an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration; and

  17. Algorithms for Determining Physical Responses of Structures Under Load

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance; Ko, William L.

    2012-01-01

    Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to accurately measure local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves the multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
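
The calibration step above can be sketched for a cantilever under a tip load: with surface strain e(x) measured at stations x and a known load P, the bending moment is M(x) = P(l - x) = EI(x) e(x)/c, where c is the half-depth. A minimal illustration with a uniform beam and synthetic strains (not the flight algorithm; all numbers are made up):

```python
# Cantilever of length l, half-depth c, tip load P during calibration.
l, c, P = 2.0, 0.05, 100.0             # m, m, N
EI_true = 5.0e4                        # N*m^2, used only to fabricate strains

stations = [0.1 * i for i in range(10)]                  # measurement locations
strain = [P * (l - x) * c / EI_true for x in stations]   # e = M*c / (EI)

# Calibration: recover flexural rigidity EI at each station from the known load.
EI = [P * (l - x) * c / e for x, e in zip(stations, strain)]

# In service: any measured strain now yields the local bending moment.
moment = [ei * e / c for ei, e in zip(EI, strain)]       # equals P*(l - x) here
```

In practice each station has its own calibrated EI, so the same arithmetic works for non-uniform structures; here the recovered values simply reproduce the uniform EI used to generate the strains.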

  18. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    ERIC Educational Resources Information Center

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  19. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, making multi-time-scale methods suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems in which the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce much lower-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: it generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale, fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  20. SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.

    PubMed

    Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile

    2015-01-01

    In recent years we have witnessed growth in sequencing yield and in the number of samples sequenced, and as a result, growth of publicly maintained sequence databases. This pervasive increase in data places high demands on protein similarity search algorithms, with two opposing goals: keeping running times acceptable while maintaining a high enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, aligning a query to the whole database is usually too slow. Therefore, most protein similarity search methods apply heuristics before the exact local alignment to reduce the number of candidate sequences in the database. However, there is still a need to align a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase, in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW, using multiple queries on the Swiss-Prot and UniRef90 databases.
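
The exact alignment step such tools accelerate is the classic Smith-Waterman recurrence. A plain, unoptimized scoring version for intuition; the match/mismatch/gap scores are simple placeholders rather than a biological scoring matrix:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between strings a and b.

    H[i][j] is the best score of any alignment ending at a[i-1], b[j-1];
    clamping at 0 is what makes the alignment local."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```

The quadratic table is exactly why whole-database scans are slow and why GPU parallelization of the anti-diagonals (as in SW#db and CUDASW++) pays off.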

  1. Simulation of Unique Pressure Changing Steps and Situations in Psa Processes

    NASA Technical Reports Server (NTRS)

    Ebner, Armin D.; Mehrotra, Amal; Knox, James C.; LeVan, Douglas; Ritter, James A.

    2007-01-01

    A more rigorous cyclic adsorption process simulator is being developed for use in the development and understanding of new and existing PSA processes. Unique features of this new version of the simulator, which Ritter and co-workers have been developing for the past decade or so, include: multiple adsorbent layers in each bed; pressure drop in the column; valves for entering and exiting flows and for predicting real-time pressurization and depressurization rates; the ability to account for choked flow conditions; the ability to pressurize and depressurize simultaneously from both ends of the columns; the ability to equalize between multiple pairs of columns; the ability to equalize simultaneously from both ends of pairs of columns; and the ability to handle very large pressure ratios, and hence the velocities associated with deep vacuum systems. These changes to the simulator now provide unique opportunities to study the effects of novel pressure-changing steps and extreme process conditions on the performance of virtually any commercial or developmental PSA process. This presentation will provide an overview of the cyclic adsorption process simulator equations and algorithms used in the new adaptation. It will focus primarily on the novel pressure-changing steps and their effects on the performance of a PSA system that epitomizes the extremes of PSA process design and operation. This PSA process is a sorbent-based atmosphere revitalization (SBAR) system that NASA is developing for new manned exploration vehicles. This SBAR system consists of a 2-bed, 3-step, 3-layer system that operates between atmospheric pressure and the vacuum of space, evacuates from both ends of the column simultaneously, experiences choked flow conditions during pressure-changing steps, and experiences a continuously changing feed composition as it removes metabolic CO2 and H2O from a closed, fixed volume, i.e., the spacecraft cabin. Important process performance indicators of this SBAR system are size, the corresponding CO2 and H2O removal efficiencies, and N2 and O2 loss rates. Results of the fundamental behavior of this PSA process under extreme operating conditions will be presented and discussed.

  2. First spatial separation of a heavy ion isomeric beam with a multiple-reflection time-of-flight mass spectrometer

    NASA Astrophysics Data System (ADS)

    Dickel, T.; Plaß, W. R.; Ayet San Andres, S.; Ebert, J.; Geissel, H.; Haettner, E.; Hornung, C.; Miskun, I.; Pietri, S.; Purushothaman, S.; Reiter, M. P.; Rink, A.-K.; Scheidenberger, C.; Weick, H.; Dendooven, P.; Diwisch, M.; Greiner, F.; Heiße, F.; Knöbel, R.; Lippert, W.; Moore, I. D.; Pohjalainen, I.; Prochazka, A.; Ranjan, M.; Takechi, M.; Winfield, J. S.; Xu, X.

    2015-05-01

    211Po ions in the ground and isomeric states were produced via 238U projectile fragmentation at 1000 MeV/u. The 211Po ions were spatially separated in flight from the primary beam and other reaction products by the fragment separator FRS. The ions were energy-bunched, slowed-down and thermalized in a gas-filled cryogenic stopping cell (CSC). They were then extracted from the CSC and injected into a high-resolution multiple-reflection time-of-flight mass spectrometer (MR-TOF-MS). The excitation energy of the isomer and, for the first time, the isomeric-to-ground state ratio were determined from the measured mass spectrum. In the subsequent experimental step, the isomers were spatially separated from the ions in the ground state by an ion deflector and finally collected with a silicon detector for decay spectroscopy. This pioneering experimental result opens up unique perspectives for isomer-resolved studies. With this versatile experimental method new isomers with half-lives longer than a few milliseconds can be discovered and their decay properties can be measured with highest sensitivity and selectivity. These experiments can be extended to studies with isomeric beams in nuclear reactions.

  3. Treating electron transport in MCNP{sup trademark}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum, slowing down from 0.5 MeV to 0.0625 MeV, will have about 30 collisions; a photon will have fewer than ten. An electron undergoing the same energy loss will experience 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.

  4. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more than a single JPEG macroblock in memory, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
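
The fusion idea (boost the short exposure, then blend while favoring well-exposed pixels) can be sketched per pixel. The sigmoid gain and the Gaussian well-exposedness weight below are illustrative choices, not the paper's exact functions:

```python
import math

def sigmoid_boost(p, gain=8.0, mid=0.35):
    """Brighten a dark, short-exposure pixel value p in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-gain * (p - mid)))

def well_exposedness(p):
    """Weight pixels near mid-gray most; clipped pixels least."""
    return math.exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2))

def fuse(short_px, long_px):
    """Blend a boosted short-exposure pixel with a long-exposure pixel."""
    s = sigmoid_boost(short_px)
    w_s, w_l = well_exposedness(s), well_exposedness(long_px)
    return (w_s * s + w_l * long_px) / (w_s + w_l)

# A nearly blown-out long-exposure pixel is pulled back toward
# the boosted short-exposure value.
fused = fuse(0.25, 0.98)
```

Because each pixel is processed independently, the same arithmetic can run block-by-block on decoded JPEG macroblocks, consistent with the single-macroblock memory budget claimed above.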

  5. A MODFLOW Infiltration Device Package for Simulating Storm Water Infiltration.

    PubMed

    Jeppesen, Jan; Christensen, Steen

    2015-01-01

    This article describes a MODFLOW Infiltration Device (INFD) Package that can simulate infiltration devices and their two-way interaction with groundwater. The INFD Package relies on a water balance including inflow of storm water, leakage-like seepage through the device faces, overflow, and change in storage. The water balance for the device can be simulated in multiple INFD time steps within a single MODFLOW time step, and infiltration from the device can be routed through the unsaturated zone to the groundwater table. A benchmark test shows that the INFD Package's analytical solution for stage computes exact results for transient behavior. To achieve similar accuracy by the numerical solution of the MODFLOW Surface-Water Routing (SWR1) Process requires many small time steps. Furthermore, the INFD Package includes an improved representation of flow through the INFD sides that results in lower infiltration rates than simulated by SWR1. The INFD Package is also demonstrated in a transient simulation of a hypothetical catchment where two devices interact differently with groundwater. This simulation demonstrates that device and groundwater interaction depends on the thickness of the unsaturated zone because a shallow groundwater table (a likely result from storm water infiltration itself) may occupy retention volume, whereas a thick unsaturated zone may cause a phase shift and a change of amplitude in groundwater table response to a change of infiltration. We thus find that the INFD Package accommodates the simulation of infiltration devices and groundwater in an integrated manner on small as well as large spatial and temporal scales. © 2014, National Ground Water Association.
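
The device water balance described (inflow, leakage-like seepage proportional to stage, storage change) behaves like a linear reservoir, whose stage has a closed-form update per time step; this is the kind of analytical solution the INFD Package exploits, in contrast to the many small numerical steps a scheme like SWR1 needs. A schematic comparison with made-up device parameters (overflow omitted):

```python
import math

def stage_analytic(h0, q_in, C, A, dt):
    """Exact stage after dt for the linear balance A*dh/dt = q_in - C*h."""
    h_inf = q_in / C                     # steady-state stage
    return h_inf + (h0 - h_inf) * math.exp(-C * dt / A)

def stage_euler(h0, q_in, C, A, dt, n):
    """Explicit Euler with n sub-steps, for comparison."""
    h, sub = h0, dt / n
    for _ in range(n):
        h += sub * (q_in - C * h) / A
    return h

# One hour of inflow into an empty device (illustrative numbers).
h_exact = stage_analytic(0.0, q_in=0.02, C=0.01, A=10.0, dt=3600.0)
h_few = stage_euler(0.0, 0.02, 0.01, 10.0, 3600.0, 4)
h_many = stage_euler(0.0, 0.02, 0.01, 10.0, 3600.0, 4000)
```

The numerical solution only approaches the analytical one as the sub-step count grows, which mirrors the benchmark result in the abstract: matching the INFD Package's exact transient behavior with SWR1 requires many small time steps.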

  6. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool for investigating turbulent flow, which features a wide range of temporal and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created for simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming no-slip on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, and an iteration method was applied to overcome this restriction in the pressure-correction step by solving the Helmholtz equation. The numerical solver is written in the object-oriented C++ programming language, utilizing the Armadillo linear algebra library for matrix computation. Several benchmark cases in laminar and turbulent flow were carried out to verify and validate the numerical model, and very good agreement was achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
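
A third-order low-storage Runge-Kutta scheme keeps only two registers per unknown, which is why it is popular for large DNS grids. Williamson's 2N-storage coefficients are used below as an assumption, since the abstract does not name the variant:

```python
import math

# Williamson's 2N-storage RK3 coefficients (assumed; the abstract does
# not specify which low-storage scheme the solver uses).
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]

def lsrk3_step(f, y, t, dt):
    """Advance y by one step of dy/dt = f(t, y), keeping only y and dy."""
    dy = 0.0
    for a, b, c in zip(A, B, C):
        dy = a * dy + dt * f(t + c * dt, y)
        y = y + b * dy
    return y

# Check accuracy on dy/dt = -y, y(0) = 1, integrated to t = 1.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = lsrk3_step(lambda t, y: -y, y, t, dt)
    t += dt
err = abs(y - math.exp(-1.0))
```

For this linear test problem one step expands to 1 - dt + dt^2/2 - dt^3/6, matching exp(-dt) through third order, so the global error at t = 1 is tiny; in a DNS code the same update is applied to every velocity coefficient with f being the discretized right-hand side.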

  7. Multiple-step preparation and physicochemical characterization of crystalline α-germanium hydrogenphosphate

    NASA Astrophysics Data System (ADS)

    Romano, Ricardo; Ruiz, Ana I.; Alves, Oswaldo L.

    2004-04-01

    The reaction between germanium oxide and phosphoric acid has previously been described, but it led to impure germanium hydrogenphosphate samples with low crystallinity. A new multiple-step route involving the same reaction under refluxing and soft hydrothermal conditions is described for the preparation of pure and crystalline α-GeP. Physicochemical characterization of the samples makes it possible to follow the course of the reaction as well as to determine short- and long-range structural organization. The phase purity of the α-GeP sample was confirmed by Rietveld profile analysis, which also determined the cell parameters of its crystals.

  8. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  9. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see, the decision which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.
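
The core bookkeeping problem described above (module outputs indexed by both time step and alternative scenario, computed on demand) can be illustrated with a tiny memoized evaluator. The module names, the world parameters, and the recurrence are invented for the example:

```python
from functools import lru_cache

# Each "world" overrides a simulation parameter; world 0 is the base run.
WORLDS = {0: {"inflow": 1.0}, 1: {"inflow": 2.5}}

@lru_cache(maxsize=None)
def water_level(world, t):
    """Temporal module: depends on the previous step in the same world."""
    if t == 0:
        return 0.0
    return 0.9 * water_level(world, t - 1) + WORLDS[world]["inflow"]

@lru_cache(maxsize=None)
def peak_level(world, t):
    """Aggregation module: a statistic over many time steps of one world."""
    return max(water_level(world, s) for s in range(t + 1))

# Requesting a view of (world 1, step 50) pulls in exactly the upstream
# data that view needs; nothing is computed for worlds nobody asked about.
view = peak_level(1, 50)
```

Caching on (world, t) keys is a minimal stand-in for the generic data-flow algorithms in the paper: the user asks for what to see, and the dependency structure decides which module outputs, in which worlds and at which time steps, must actually be produced.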

  10. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory: an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices into the look-up table that gives the optimal trajectory.
For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.
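The receding-horizon loop described above can be sketched in miniature. Below is a one-dimensional toy, not the flight algorithm: all names, units, and limits are hypothetical, and the look-up table is replaced by a direct search over the bang-off-bang parameter (burn duration and direction). At every control step the host finds the cheapest burn whose predicted separation stays above a keep-out radius, applies only its first step, and re-solves.

```python
# Toy 1-D receding-horizon collision avoidance (all units/limits hypothetical).
A_MAX, DT, HORIZON, R_SAFE = 1.0, 0.1, 40, 5.0

def min_predicted_separation(x, v, xi, vi, burn_steps, direction):
    """Propagate host (x, v) under a bang-off-bang profile (full thrust for
    burn_steps, then coast) against a ballistic intruder (xi, vi); return
    the minimum separation over the planning horizon."""
    sep = abs(x - xi)
    for k in range(HORIZON):
        a = direction * A_MAX if k < burn_steps else 0.0
        v += a * DT
        x += v * DT
        xi += vi * DT
        sep = min(sep, abs(x - xi))
    return sep

def rca_step(x, v, xi, vi):
    """One receding-horizon update: search the parameterized trajectory class
    for the cheapest (shortest) burn that clears the keep-out radius, and
    return only that plan's first-step acceleration."""
    for burn in range(HORIZON + 1):            # shortest burn ~ least fuel
        for direction in (+1.0, -1.0):
            if min_predicted_separation(x, v, xi, vi, burn, direction) > R_SAFE:
                return direction * A_MAX if burn > 0 else 0.0
    return -A_MAX                              # fallback: thrust away anyway

# Closed loop: intruder drifts toward the host; control is re-solved each step.
x, v, xi, vi = 0.0, 0.0, 20.0, -2.0
min_sep = abs(x - xi)
for _ in range(100):
    a = rca_step(x, v, xi, vi)
    v += a * DT
    x += v * DT
    xi += vi * DT
    min_sep = min(min_sep, abs(x - xi))
```

Because each accepted plan keeps every predicted separation above `R_SAFE` and only its first step is executed before re-solving, the realized separation never enters the keep-out radius in this deterministic toy.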

  11. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  12. The use of poly-cation oxides to lower the temperature of two-step thermochemical water splitting

    DOE PAGES

    Zhai, Shang; Rojas, Jimmy; Ahlborg, Nadia; ...

    2018-01-01

We report the discovery of a new class of oxides – poly-cation oxides (PCOs) – that consist of multiple cations and can thermochemically split water in a two-step cycle to produce hydrogen (H2) and oxygen (O2).
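Schematically, two-step thermochemical water splitting over a nonstoichiometric oxide MO_x follows the generic redox cycle below (a textbook scheme given for orientation; the PCOs of this work replace the single cation M with multiple cations):

```latex
% Generic two-step nonstoichiometric redox cycle for water splitting
% (schematic; delta is the oxygen nonstoichiometry swing):
\begin{align*}
\mathrm{MO_{x}} &\xrightarrow{\;\text{high }T\;} \mathrm{MO_{x-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_{2}} && \text{(thermal reduction)}\\
\mathrm{MO_{x-\delta}} + \delta\,\mathrm{H_{2}O} &\xrightarrow{\;\text{lower }T\;} \mathrm{MO_{x}} + \delta\,\mathrm{H_{2}} && \text{(steam re-oxidation)}
\end{align*}
```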

  13. Applied Missing Data Analysis. Methodology in the Social Sciences Series

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2010-01-01

    Walking readers step by step through complex concepts, this book translates missing data techniques into something that applied researchers and graduate students can understand and utilize in their own research. Enders explains the rationale and procedural details for maximum likelihood estimation, Bayesian estimation, multiple imputation, and…

  14. Gait characteristics under different walking conditions: Association with the presence of cognitive impairment in community-dwelling older people

    PubMed Central

    Fransen, Erik; Perkisas, Stany; Verhoeven, Veronique; Beauchet, Olivier; Remmen, Roy

    2017-01-01

Background Gait characteristics measured at usual pace may allow profiling in patients with cognitive problems. The influence of age, gender, leg length, modified speed or dual tasking is unclear. Methods Cross-sectional analysis was performed on a data registry containing demographic, physical and spatial-temporal gait parameters recorded in five walking conditions with a GAITRite® electronic carpet in community-dwelling older persons with memory complaints. Four cognitive stages were studied: cognitively healthy individuals, patients with mild cognitive impairment, patients with mild dementia and patients with advanced dementia. Results The association between spatial-temporal gait characteristics and cognitive stages was most prominent: in the entire study population, using gait speed, steps per meter (a proxy for mean step length), swing time variability, normalised gait speed (corrected for leg length) and normalised steps per meter in all five walking conditions; in participants aged 50 to 70, using step width at fast pace and steps per meter at usual pace; in persons aged 70 to 80, using gait speed and normalised gait speed at usual pace, fast pace, animal walk and counting walk, or steps per meter and normalised steps per meter in all five walking conditions; and in participants over 80, using gait speed, normalised gait speed, steps per meter and normalised steps per meter at fast pace and animal dual-task walking. Multivariable logistic regression analysis adjusted for gender predicted, in two compiled models, the presence of dementia or cognitive impairment with acceptable accuracy in persons with memory complaints. Conclusion Gait parameters in multiple walking conditions adjusted for age, gender and leg length showed a significant association with cognitive impairment. This study suggested that multifactorial gait analysis could be more informative than gait analysis with only one test or one variable. 
Using this type of gait analysis in clinical practice could facilitate screening for cognitive impairment. PMID:28570662
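The multivariable logistic modelling step can be illustrated with a minimal sketch on synthetic data. The gait values and group effects below are invented for illustration, not drawn from the registry: a logistic model is fit by gradient descent to classify impaired vs. healthy walkers from two gait features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the registry (all values invented): gait speed (m/s)
# and steps per meter for cognitively healthy (0) vs. impaired (1) walkers.
n = 200
healthy = np.column_stack([rng.normal(1.2, 0.15, n), rng.normal(1.6, 0.2, n)])
impaired = np.column_stack([rng.normal(0.9, 0.15, n), rng.normal(1.9, 0.2, n)])
X = np.vstack([healthy, impaired])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Standardize, add an intercept, and fit the logistic model by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted probability of impairment
    w -= 0.1 * Xb.T @ (p - y) / len(y)       # gradient of the log-loss

pred = 1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5
accuracy = float(np.mean(pred == y))
```

In practice one would also adjust for age, gender and leg length as covariates, as the study did.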

  15. Age-related changes in compensatory stepping in response to unpredictable perturbations.

    PubMed

    McIlroy, W E; Maki, B E

    1996-11-01

    Recent studies highlight the importance of compensatory stepping to preserve stability, and the spatial and temporal demands placed on the control of this reaction. Age-related changes in the control of stepping could greatly influence the risk of falling. The present study compares, in healthy elderly and young adults, the characteristics of compensatory stepping responses to unpredictable postural perturbations. A moving platform was used to unpredictably perturb the upright stance of 14 naive, active and mobile subjects (5 aged 22 to 28 and 9 aged 65 to 81). The first 10 randomized trials (5 forward and 5 backward) were evaluated to allow a focus on reactions to relatively novel perturbations. The behavior of the subjects was not constrained. Forceplate and kinematic measures were used to evaluate the responses evoked by the brief (600 msec) platform translation. Subjects stepped in 98% of the trials. Although the elderly were less likely to execute a lateral anticipatory postural adjustment prior to foot-lift, the onset of swing-leg unloading tended to begin at the same time in the two age groups. There was remarkable similarity between the young and elderly in many other characteristics of the first step of the response. In spite of this similarity, the elderly subjects were twice as likely to take additional steps to regain stability (63% of trials for elderly). Moreover, in elderly subjects, the additional steps were often directed so as to preserve lateral stability, whereas the young rarely showed this tendency. 
Given the functional significance of base-of-support changes as a strategy for preserving stability and the age-related differences presently revealed, assessment of the capacity to preserve stability against unpredictable perturbation, and specific measures such as the occurrence or placement of multiple steps, may prove to be a significant predictor of falling risk and an important outcome in evaluating or developing intervention strategies to prevent falls.

  16. Evaluation of a shared-work program for reducing assistance provided to supported workers with severe multiple disabilities.

    PubMed

    Parsons, Marsha B; Reid, Dennis H; Green, Carolyn W; Browning, Leah B; Hensley, Mary B

    2002-01-01

Concern has been expressed recently regarding the need to enhance the performance of individuals with highly significant disabilities in community-based, supported jobs. We evaluated a shared-work program for reducing job coach assistance provided to three workers with severe multiple disabilities in a publishing company. Following systematic observations of the assistance provided as each worker worked on entire job tasks, the steps comprising the tasks were re-assigned across workers. The re-assignment involved assigning each worker only those task steps for which he or she received the least amount of assistance (e.g., re-assigning steps that a worker could not complete due to physical disabilities), while ensuring the entire tasks were still completed by combining the steps performed by all three workers. The shared-work program was accompanied by reductions in the job coach assistance provided to each worker. Work productivity of the supported workers initially decreased but then increased to a level equivalent to the higher ranges of baseline productivity. These results suggest that the shared-work program represents a viable means of enhancing the supported work performance of people with severe multiple disabilities in some types of community jobs. The future research needs discussed focus on evaluating shared-work approaches with other jobs and on developing additional community work models specifically for people with highly significant disabilities.
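The re-assignment rule described above reduces to a per-step argmin over observed assistance, with every step still covered so the whole task is completed. A sketch with hypothetical assistance scores:

```python
# Hypothetical observed assistance (e.g., prompt counts) per worker per task step.
assistance = {
    "worker_a": [1, 4, 5, 2, 5],
    "worker_b": [3, 1, 4, 4, 2],
    "worker_c": [5, 3, 1, 5, 1],
}
n_steps = 5

# Assign each step to the worker who needed the least assistance on it.
plan = {w: [] for w in assistance}
for step in range(n_steps):
    best = min(assistance, key=lambda w: assistance[w][step])
    plan[best].append(step)

# Every step is assigned exactly once, so the entire task is still completed.
covered = sorted(s for steps in plan.values() for s in steps)
```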

  17. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina

    2017-05-01

Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into forward step training (FT), lateral plus forward step training (FLT), or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  19. Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.

    2016-09-01

    Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.
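The five-step separation can be sketched on a toy mixture of two linear-FM pulses (all parameters invented): demodulate with the phase law of the signal of interest so it collapses to DC, low-pass filter to suppress the other, still-wideband component, and re-modulate. Here the phase law is taken as known; in PTFA it would be estimated by the parameterized transform.

```python
import numpy as np

# Two overlapping complex linear-FM "radar" pulses (parameters invented).
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
phase1 = 2 * np.pi * (50 * t + 0.5 * 100 * t**2)   # chirp 1: 50 -> 150 Hz
phase2 = 2 * np.pi * (200 * t + 0.5 * 150 * t**2)  # chirp 2: 200 -> 350 Hz
s1, s2 = np.exp(1j * phase1), np.exp(1j * phase2)
mixture = s1 + s2                                  # intercepted overlap

# Step 3: demodulate with the phase law of the signal of interest
# (assumed known here): s1 becomes DC, s2 remains wideband.
demod = mixture * np.exp(-1j * phase1)

# Step 4: low-pass filter (moving average) to suppress the other pulse.
win = 50
baseband = np.convolve(demod, np.ones(win) / win, mode="same")

# Step 5: re-modulate to recover the signal of interest.
recovered = baseband * np.exp(1j * phase1)
rel_err = float(np.mean(np.abs(recovered - s1) ** 2) / np.mean(np.abs(s1) ** 2))
```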

  20. Describing the dynamics of processes consisting simultaneously of Poissonian and non-Poissonian kinetics

    NASA Astrophysics Data System (ADS)

    Eule, S.; Friedrich, R.

    2013-03-01

    Dynamical processes exhibiting non-Poissonian kinetics with nonexponential waiting times are frequently encountered in nature. Examples are biochemical processes like gene transcription which are known to involve multiple intermediate steps. However, often a second process, obeying Poissonian statistics, affects the first one simultaneously, such as the degradation of mRNA in the above example. The aim of the present article is to provide a concise treatment of such random systems which are affected by regular and non-Poissonian kinetics at the same time. We derive the governing master equation and provide a controlled approximation scheme for this equation. The simplest approximation leads to generalized reaction rate equations. For a simple model of gene transcription we solve the resulting equation and show how the time evolution is influenced significantly by the type of waiting time distribution assumed for the non-Poissonian process.
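A minimal simulation of this kind of mixed process, under assumed parameters: transcript production with a gamma-distributed (multi-step, hence non-Poissonian) waiting time, and first-order Poissonian degradation of each transcript. By Little's law the stationary mean copy number is the production rate times the mean lifetime, independent of the waiting-time shape.

```python
import numpy as np

rng = np.random.default_rng(1)

# Production: non-Poissonian initiation modelled as a gamma waiting time
# (sum of k_steps exponential sub-steps, mean k_steps / rate_per_step).
# Degradation: first-order Poissonian decay at rate gamma_d. All values assumed.
k_steps, rate_per_step, gamma_d, T = 4, 2.0, 0.5, 200.0

def copies_at_T():
    """Simulate one realization and return the transcript count at time T."""
    t, births = 0.0, []
    while True:
        t += rng.gamma(k_steps, 1.0 / rate_per_step)   # non-exponential wait
        if t >= T:
            break
        births.append(t)
    lifetimes = rng.exponential(1.0 / gamma_d, size=len(births))
    return int(sum(b + life > T for b, life in zip(births, lifetimes)))

samples = [copies_at_T() for _ in range(500)]
mean_copies = float(np.mean(samples))

# Little's law: stationary mean = (arrival rate) * (mean lifetime).
predicted = (rate_per_step / k_steps) * (1.0 / gamma_d)
```

The mean matches the rate-equation prediction, but the copy-number fluctuations differ from the Poissonian case, which is what the article's master-equation treatment captures.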

  1. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  2. An Advice Mechanism for Heterogeneous Robot Teams

    NASA Astrophysics Data System (ADS)

    Daniluk, Steven

The use of reinforcement learning for robot teams has enabled complex tasks to be performed, but at the cost of requiring a large amount of exploration. Exchanging information between robots in the form of advice is one method to accelerate performance improvements. This thesis presents an advice mechanism for robot teams that utilizes advice from heterogeneous advisers via a method guaranteeing convergence to an optimal policy. The presented mechanism has the capability to use multiple advisers at each time step and to decide when advice should be requested and accepted, such that the use of advice decreases over time. Additionally, collective, collaborative, and cooperative behavioural algorithms are integrated into a robot team architecture to create a new framework that provides fault tolerance and modularity for robot teams.
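A toy sketch of the advice idea (not the thesis's actual mechanism or its convergence guarantee): a bandit learner that requests advice from heterogeneous advisers with a probability that decays over time, so reliance on advice vanishes as the learner's own value estimates improve.

```python
import random

random.seed(0)

# Hypothetical 2-armed bandit standing in for the team learning task.
TRUE_VALUES = {"a": 0.2, "b": 0.8}
advisers = [lambda: "b", lambda: random.choice("ab")]   # expert and noisy adviser

q = {"a": 0.0, "b": 0.0}
rewards = []
for t in range(1, 2001):
    advice_prob = 1.0 / (1.0 + 0.01 * t)      # use of advice decays over time
    if random.random() < advice_prob:
        action = random.choice(advisers)()     # request + accept advice this step
    elif random.random() < 0.1:
        action = random.choice("ab")           # epsilon-greedy exploration
    else:
        action = max(q, key=q.get)             # act on own learned values
    reward = 1.0 if random.random() < TRUE_VALUES[action] else 0.0
    q[action] += 0.1 * (reward - q[action])    # incremental value update
    rewards.append(reward)

late_average = sum(rewards[-500:]) / 500
```

Early on, advice (even from the noisy adviser) seeds the value estimates; late in training the learner acts almost entirely on its own policy.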

  3. A Parallel Spectroscopic Method for Examining Dynamic Phenomena on the Millisecond Time Scale

    PubMed Central

    Snively, Christopher M.; Chase, D. Bruce; Rabolt, John F.

    2009-01-01

An infrared spectroscopic technique based on planar array infrared (PAIR) spectroscopy has been developed that allows the acquisition of spectra from multiple samples simultaneously. Using this technique, it is possible to acquire spectra over a spectral range of 950–1900 cm−1 with a temporal resolution of 2.2 ms. The performance of this system was demonstrated by determining the shear-induced orientational response of several low molecular weight liquid crystals. Five different liquid crystals were examined in combination with five different alignment layers, and both primary and secondary screens were demonstrated. Implementation of this high throughput PAIR technique resulted in a reduction in acquisition time as compared to both step-scan and ultra-rapid-scanning FTIR spectroscopy. PMID:19239197

  4. Command and Control Software Development

    NASA Technical Reports Server (NTRS)

    Wallace, Michael

    2018-01-01

    The future of the National Aeronautics and Space Administration (NASA) depends on its innovation and efficiency in the coming years. With ambitious goals to reach Mars and explore the vast universe, correct steps must be taken to ensure our space program reaches its destination safely. The interns in the Exploration Systems and Operations Division at the Kennedy Space Center (KSC) have been tasked with building command line tools to ease the process of managing and testing the data being produced by the ground control systems while its recording system is not in use. While working alongside full-time engineers, we were able to create multiple programs that reduce the cost and time it takes to test the subsystems that launch rockets to outer space.

  5. Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability

    NASA Astrophysics Data System (ADS)

    Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib

A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual knowledge of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising building blocks for a large-scale quantum computer because of their long coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. Using atomistic tight-binding studies, we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that both the sign and magnitude of the Stark shift change depending on the location of a monoatomic step at the interface relative to the dot center. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift based on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability was recently observed experimentally in three qubits fabricated at a Si/SiO2 interface, which validates the issues discussed.

  6. Lifetime Prediction for Degradation of Solar Mirrors using Step-Stress Accelerated Testing (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.; Elmore, R.; Kennedy, C.

This research illustrates the use of statistical inference techniques to quantify the uncertainty surrounding reliability estimates in a step-stress accelerated degradation testing (SSADT) scenario. SSADT can be used when a researcher is faced with a resource-constrained environment, e.g., limits on chamber time or on the number of units to test. We apply the SSADT methodology to a degradation experiment involving concentrated solar power (CSP) mirrors and compare the results to a more traditional multiple accelerated testing paradigm. Specifically, our work includes: (1) designing a durability testing plan for solar mirrors (3M's new improved silvered acrylic "Solar Reflector Film (SFM) 1100") through the ultra-accelerated weathering system (UAWS), (2) defining degradation paths of optical performance based on the SSADT model which is accelerated by high UV-radiant exposure, and (3) developing service lifetime prediction models for solar mirrors using advanced statistical inference. We use the method of least squares to estimate the model parameters, and this serves as the basis for the statistical inference in SSADT. Several quantities of interest can be estimated from this procedure, e.g., mean-time-to-failure (MTTF) and warranty time. The methods allow for the estimation of quantities that may be of interest to the domain scientists.
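The least-squares step can be illustrated with made-up degradation data: fit a linear degradation path in cumulative UV dose, invert it at a failure threshold, and convert dose to service time under an assumed field dose rate. Every number below is hypothetical; the actual SSADT analysis also propagates the fit uncertainty into the lifetime estimate.

```python
import numpy as np

# Hypothetical SSADT-style data: reflectance loss grows with cumulative UV
# dose; the stress steps only change how fast dose accrues, so the path can
# be expressed in dose rather than clock time.
dose = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])  # MJ/m^2 (made up)
loss = np.array([0.0, 0.9, 2.1, 4.2, 7.9, 16.1])          # % reflectance loss

# Least-squares fit of the degradation path (the basis for inference in SSADT).
slope, intercept = np.polyfit(dose, loss, 1)

# Define "failure" as a 10% loss and invert the fitted path for the dose.
dose_to_failure = (10.0 - intercept) / slope

# Translate dose to service time under a nominal field dose rate (assumed).
FIELD_DOSE_RATE = 5.0  # MJ/m^2 per year, hypothetical
years_to_failure = dose_to_failure / FIELD_DOSE_RATE
```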

  7. Direct extraction of genomic DNA from maize with aqueous ionic liquid buffer systems for applications in genetically modified organisms analysis.

    PubMed

    Gonzalez García, Eric; Ressmann, Anna K; Gaertner, Peter; Zirbs, Ronald; Mach, Robert L; Krska, Rudolf; Bica, Katharina; Brunner, Kurt

    2014-12-01

    To date, the extraction of genomic DNA is considered a bottleneck in the process of genetically modified organisms (GMOs) detection. Conventional DNA isolation methods are associated with long extraction times and multiple pipetting and centrifugation steps, which makes the entire procedure not only tedious and complicated but also prone to sample cross-contamination. In recent times, ionic liquids have emerged as innovative solvents for biomass processing, due to their outstanding properties for dissolution of biomass and biopolymers. In this study, a novel, easily applicable, and time-efficient method for the direct extraction of genomic DNA from biomass based on aqueous-ionic liquid solutions was developed. The straightforward protocol relies on extraction of maize in a 10 % solution of ionic liquids in aqueous phosphate buffer for 5 min at room temperature, followed by a denaturation step at 95 °C for 10 min and a simple filtration to remove residual biopolymers. A set of 22 ionic liquids was tested in a buffer system and 1-ethyl-3-methylimidazolium dimethylphosphate, as well as the environmentally benign choline formate, were identified as ideal candidates. With this strategy, the quality of the genomic DNA extracted was significantly improved and the extraction protocol was notably simplified compared with a well-established method.

  8. Characterization of multi-dye pressure-sensitive microbeads

    NASA Astrophysics Data System (ADS)

    Lacroix, Daniel; Viraye-Chevalier, Teddy; Seiter, Guillaume; Howard, Jonathan; Dabiri, Dana; Khalil, Gamal E.; Xia, Younan; Zhu, Cun

    2013-11-01

    The response times of pressure-sensitive particles to passing shockwaves were measured to investigate their ability to accurately determine pressure changes in unsteady flows. The particles tested were loaded with novel pressure-sensitive dyes such as Pt (II) meso-tetra(pentafluorophenyl)porphine, Pt(II) octaethylporphine, bis(3,5-difluoro-2-(2-pyridyl)phenyl-(2-carboxypyridyl))iridium III, and iridium(III) bis(4-phenylthieno[3,2-c] pyridinato-N,C2')acetylacetonate. For this work, porous silicon dioxide pressure-sensitive beads (PSBeads) were used. Two synthetic procedures were used to fabricate the particles. In the first, a one-step method loaded dyes during the synthesis of microbeads, in the second a two-step method synthesized the microbeads first, then loaded the dyes. The shock tube facility was used to measure the response times of microbeads to fast pressure jumps. The study involved testing multiple luminophors loaded in microbeads with various size distributions. Response times for the silica-based microbeads ranged between 26 μs and 462 μs (at 90% of the amplitude response), which are much faster than previously reported polystyrene-based microbead response times, which range from 507 μs to 1582 μs (at 90% of the amplitude response) [F. Kimura, M. Rodriguez, J. McCann, B. Carlson, D. Dabiri, G. Khalil, J. B. Callis, Y. Xia, and M. Gouterman, "Development and characterization of fast responding pressure sensitive microspheres," Rev. Sci. Instrum. 79, 074102 (2008)].
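The 90%-of-amplitude response-time criterion used above is straightforward to compute from a recorded trace. The sketch below applies it to a simulated first-order step response with an assumed 120 µs time constant (a value in the range reported, not a measured one).

```python
import numpy as np

# Simulated luminescence trace: first-order response to a pressure step,
# with an assumed 120 us time constant, sampled at 1 us over 2 ms.
tau = 120e-6
t = np.arange(0, 2e-3, 1e-6)
signal = 1.0 - np.exp(-t / tau)             # normalized step response

def response_time(t, sig, fraction=0.9):
    """Time to first reach `fraction` of the final amplitude (90% criterion)."""
    idx = np.argmax(sig >= fraction * sig[-1])
    return t[idx]

t90 = response_time(t, signal)              # analytically tau * ln(10) ~ 276 us
```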

  9. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
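The recursive subcycling structure (one coarse step triggers several fine substeps, then a synchronization) can be sketched on a trivial scalar problem. Real AMR advances grid data and synchronizes by averaging and refluxing; the toy below only reproduces the shape of the recursion, using a forward-Euler step of du/dt = -u as the per-level "solve".

```python
# Sketch of recursive subcycled time stepping on an AMR-style level hierarchy.
REFINE = 2        # refinement ratio between adjacent levels (in time)
N_LEVELS = 3

def advance(level, u, dt):
    u[level] *= (1.0 - dt)                  # this level's update (Euler, du/dt=-u)
    if level + 1 < N_LEVELS:
        for _ in range(REFINE):             # fine level reaches the same time
            advance(level + 1, u, dt / REFINE)
        u[level] = u[level + 1]             # synchronize (averaging/refluxing in real AMR)

u = [1.0] * N_LEVELS                        # initial condition on every level
for _ in range(10):                         # ten coarse steps of dt = 0.1
    advance(0, u, 0.1)
```

After synchronization, the coarse solution carries the finest level's (more accurate) result, which is the point of advancing fine grids with multiple smaller steps.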

  10. Time-resolved, single-cell analysis of induced and programmed cell death via non-invasive propidium iodide and counterstain perfusion.

    PubMed

    Krämer, Christina E M; Wiechert, Wolfgang; Kohlheyer, Dietrich

    2016-09-01

    Conventional propidium iodide (PI) staining requires the execution of multiple steps prior to analysis, potentially affecting assay results as well as cell vitality. In this study, this multistep analysis method has been transformed into a single-step, non-toxic, real-time method via live-cell imaging during perfusion with 0.1 μM PI inside a microfluidic cultivation device. Dynamic PI staining was an effective live/dead analytical tool and demonstrated consistent results for single-cell death initiated by direct or indirect triggers. Application of this method for the first time revealed the apparent antibiotic tolerance of wild-type Corynebacterium glutamicum cells, as indicated by the conversion of violet fluorogenic calcein acetoxymethyl ester (CvAM). Additional implementation of this method provided insight into the induced cell lysis of Escherichia coli cells expressing a lytic toxin-antitoxin module, providing evidence for non-lytic cell death and cell resistance to toxin production. Finally, our dynamic PI staining method distinguished necrotic-like and apoptotic-like cell death phenotypes in Saccharomyces cerevisiae among predisposed descendants of nutrient-deprived ancestor cells using PO-PRO-1 or green fluorogenic calcein acetoxymethyl ester (CgAM) as counterstains. The combination of single-cell cultivation, fluorescent time-lapse imaging, and PI perfusion facilitates spatiotemporally resolved observations that deliver new insights into the dynamics of cellular behaviour.

  11. Improvisation and the self-organization of multiple musical bodies.

    PubMed

    Walton, Ashley E; Richardson, Michael J; Langland-Hassan, Peter; Chemero, Anthony

    2015-01-01

    Understanding everyday behavior relies heavily upon understanding our ability to improvise, how we are able to continuously anticipate and adapt in order to coordinate with our environment and others. Here we consider the ability of musicians to improvise, where they must spontaneously coordinate their actions with co-performers in order to produce novel musical expressions. Investigations of this behavior have traditionally focused on describing the organization of cognitive structures. The focus, here, however, is on the ability of the time-evolving patterns of inter-musician movement coordination as revealed by the mathematical tools of complex dynamical systems to provide a new understanding of what potentiates the novelty of spontaneous musical action. We demonstrate this approach through the application of cross wavelet spectral analysis, which isolates the strength and patterning of the behavioral coordination that occurs between improvising musicians across a range of nested time-scales. Revealing the sophistication of the previously unexplored dynamics of movement coordination between improvising musicians is an important step toward understanding how creative musical expressions emerge from the spontaneous coordination of multiple musical bodies.
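A greatly simplified stand-in for the cross wavelet analysis, on synthetic data: windowed Pearson correlation between two noisy signals sharing a slow rhythm, computed at several window lengths (time-scales). Coordination that is invisible at short scales emerges at the longer ones; the actual cross wavelet spectrum resolves this continuously in scale and also yields relative phase.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "movement" signals sharing a slow 0.2 Hz rhythm plus
# independent noise (a stand-in for two improvising musicians).
t = np.arange(0, 60, 0.01)                       # 60 s sampled at 100 Hz
slow = np.sin(2 * np.pi * 0.2 * t)
m1 = slow + 0.5 * rng.standard_normal(len(t))
m2 = slow + 0.5 * rng.standard_normal(len(t))

def windowed_corr(a, b, win):
    """Mean Pearson correlation over non-overlapping windows of win samples."""
    rs = [np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
          for i in range(0, len(a) - win + 1, win)]
    return float(np.mean(rs))

# Coordination strength at three nested time-scales: 0.5 s, 5 s, 20 s windows.
scales = {win: windowed_corr(m1, m2, win) for win in (50, 500, 2000)}
```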

  12. Microgravity Foam Structure and Rheology

    NASA Technical Reports Server (NTRS)

    Durian, Douglas J.

    1997-01-01

    To exploit rheological and multiple-light scattering techniques, and ultimately microgravity conditions, in order to quantify and elucidate the unusual elastic character of foams in terms of their underlying microscopic structure and dynamics. Special interest is in determining how this elastic character vanishes, i.e. how the foam melts into a simple viscous liquid, as a function of both increasing liquid content and shear strain rate. The unusual elastic character of foams will be quantified macroscopically by measurement of the shear stress as a function of static shear strain, shear strain rate, and time following a step strain; such data will be analyzed in terms of a yield stress, a static shear modulus, and dynamical time scales. Microscopic information about bubble packing and rearrangement dynamics, from which these macroscopic non-Newtonian properties presumably arise, will be obtained non-invasively by novel multiple-light scattering diagnostics such as Diffusing-Wave Spectroscopy (DWS). Quantitative trends with materials parameters, such as average bubble size, and liquid content, will be sought in order to elucidate the fundamental connection between the microscopic structure and dynamics and the macroscopic rheology.

  13. Study of hydroxymethylfurfural and furfural formation in cakes during baking in different ovens, using a validated multiple-stage extraction-based analytical method.

    PubMed

    Petisca, Catarina; Henriques, Ana Rita; Pérez-Palacios, Trinidad; Pinho, Olívia; Ferreira, Isabel M P L V O

    2013-12-15

A procedure for extraction of hydroxymethylfurfural (HMF) and furfural from cakes was validated. A higher yield was achieved by multiple-step extraction with water/methanol (70/30) and clarification with Carrez I and II reagents. Oven type and baking time strongly influenced the HMF content, moisture and volatile profile of model cakes, whereas furfural content was not significantly affected. No correlation was found between these parameters. Baking time influenced moisture and HMF formation in cakes from traditional and microwave ovens but not in steam oven cakes. A significant moisture decrease and HMF increase (3.63, 9.32, and 41.9 mg kg(-1) dw at 20, 40 and 60 min, respectively) were observed during traditional baking. Cakes baked by microwave also presented a significant increase of HMF (up to 16.84 mg kg(-1) dw at 2.5 min). Steam oven cakes had the highest moisture content and no significant differences in HMF and furfural. The steam oven is thus likely to form little HMF and furfural while maintaining cake moisture and aroma compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Improvisation and the self-organization of multiple musical bodies

    PubMed Central

    Walton, Ashley E.; Richardson, Michael J.; Langland-Hassan, Peter; Chemero, Anthony

    2015-01-01

    Understanding everyday behavior relies heavily upon understanding our ability to improvise: how we are able to continuously anticipate and adapt in order to coordinate with our environment and others. Here we consider the ability of musicians to improvise, where they must spontaneously coordinate their actions with co-performers in order to produce novel musical expressions. Investigations of this behavior have traditionally focused on describing the organization of cognitive structures. Here, however, the focus is on how the time-evolving patterns of inter-musician movement coordination, as revealed by the mathematical tools of complex dynamical systems, can provide a new understanding of what potentiates the novelty of spontaneous musical action. We demonstrate this approach through the application of cross wavelet spectral analysis, which isolates the strength and patterning of the behavioral coordination that occurs between improvising musicians across a range of nested time-scales. Revealing the sophistication of the previously unexplored dynamics of movement coordination between improvising musicians is an important step toward understanding how creative musical expressions emerge from the spontaneous coordination of multiple musical bodies. PMID:25941499

  15. Differences in Lower Extremity and Trunk Kinematics between Single Leg Squat and Step Down Tasks

    PubMed Central

    Lewis, Cara L.; Foch, Eric; Luko, Marc M.; Loverro, Kari L.; Khuu, Anne

    2015-01-01

    The single leg squat and single leg step down are two commonly used functional tasks to assess movement patterns. It is unknown how kinematics compare between these tasks. The purpose of this study was to identify kinematic differences in the lower extremity, pelvis and trunk between the single leg squat and the step down. Fourteen healthy individuals participated in this research and performed the functional tasks while kinematic data were collected for the trunk, pelvis, and lower extremities using a motion capture system. For the single leg squat task, the participant was instructed to squat as low as possible. For the step down task, the participant was instructed to stand on top of a box, slowly lower him/herself until the non-stance heel touched the ground, and return to standing. This was done from two different heights (16 cm and 24 cm). The kinematics were evaluated at peak knee flexion as well as at 60° of knee flexion. Pearson correlation coefficients (r) between the angles at those two time points were also calculated to better understand the relationship between each task. The tasks resulted in kinematic differences at the knee, hip, pelvis, and trunk at both time points. The single leg squat was performed with less hip adduction (p ≤ 0.003), but more hip external rotation and knee abduction (p ≤ 0.030), than the step down tasks at 60° of knee flexion. These differences were maintained at peak knee flexion, except that hip external rotation was only significant in the 24 cm step down task (p ≤ 0.029). While there were multiple differences between the two step heights at peak knee flexion, the only difference at 60° of knee flexion was in trunk flexion (p < 0.001). Angles at the knee and hip had a moderate to excellent correlation (r = 0.51–0.98), but less consistently so at the pelvis and trunk (r = 0.21–0.96). The differences in movement patterns between the single leg squat and the step down should be considered when selecting a single leg task for evaluation or treatment. The high correlation of knee and hip angles between the three tasks indicates that similar information about knee and hip kinematics was gained from each of these tasks, while pelvis and trunk angles were less well predicted. PMID:25955321
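    The between-task correlations reported above can be illustrated with a short sketch. This is not the study's analysis code; the angle values below are hypothetical stand-ins for paired joint-angle measurements from two tasks:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Hypothetical hip angles (degrees) for 14 participants, measured at
# 60 degrees of knee flexion in the squat and step-down tasks.
squat_angles = [12.1, 8.4, 15.0, 10.2, 9.8, 13.5, 11.0,
                7.9, 14.2, 10.8, 12.9, 9.1, 13.0, 8.8]
stepdown_angles = [14.0, 9.1, 16.2, 11.5, 10.0, 15.1, 12.2,
                   8.5, 15.8, 12.0, 14.1, 10.2, 14.6, 9.9]

r = pearson_r(squat_angles, stepdown_angles)
print(f"r = {r:.2f}")
```

A value of r near 1 would indicate that one task largely predicts the joint angle observed in the other, as the study found for the knee and hip.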

  16. Validity of the Timed Up and Go Test as a Measure of Functional Mobility in Persons With Multiple Sclerosis.

    PubMed

    Sebastião, Emerson; Sandroff, Brian M; Learmonth, Yvonne C; Motl, Robert W

    2016-07-01

    To examine the validity of the timed Up and Go (TUG) test as a measure of functional mobility in persons with multiple sclerosis (MS) by using a comprehensive framework based on construct validity (ie, convergent and divergent validity). Cross-sectional study. Hospital setting. Community-residing persons with MS (N=47). Not applicable. Main outcome measures included the TUG test, timed 25-foot walk test, 6-minute walk test, Multiple Sclerosis Walking Scale-12, Late-Life Function and Disability Instrument, posturography evaluation, Activities-specific Balance Confidence scale, Symbol Digits Modalities Test, Expanded Disability Status Scale, and the number of steps taken per day. The TUG test was strongly associated with other valid outcome measures of ambulatory mobility (Spearman rank correlation, rs=.71-.90) and disability status (rs=.80), moderately to strongly associated with balance confidence (rs=.66), and weakly associated with postural control (ie, balance) (rs=.31). The TUG test was moderately associated with cognitive processing speed (rs=.59), but not associated with other nonambulatory measures (ie, Late-Life Function and Disability Instrument-upper extremity function). Our findings support the validity of the TUG test as a measure of functional mobility. This warrants its inclusion in patients' assessment alongside other valid measures of functional mobility in both clinical and research practice in persons with MS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. A new method for registration of heterogeneous sensors in a dimensional measurement system

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Zhong; Fu, Luhua; Qu, Xinghua; Zhang, Heng; Liu, Changjie

    2017-10-01

    Registration of multiple sensors is a basic step in multi-sensor dimensional or coordinate measuring systems before any measurement. In most cases, a common standard is measured by all sensors, and this may work well for the general registration of multiple homogeneous sensors. However, when inhomogeneous sensors detect a common standard, it is usually very difficult to obtain the same information, because of the different working principles of the sensors. In this paper, a new method called multiple-step registration is proposed to register two sensors: a video camera sensor (VCS) and a tactile probe sensor (TPS). In this method, the two sensors measure two separate standards: a chrome circle on a reticle and a reference sphere, with a constant distance between them, fixed on a steel plate. The VCS captures only the circle and the TPS touches only the sphere. Both simulations and real experiments demonstrate that the proposed method is robust and accurate for the registration of multiple inhomogeneous sensors in a dimensional measurement system.

  18. Performance evaluation of the multiple root node approach to the Rete pattern matcher for production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Gaudiot, J.-L.

    1991-12-31

    Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern-matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm are identified, based on which the authors introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple-root-node-based production system interpreter is presented to investigate its relative algorithmic behavior against the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential Sun 4/490 machine using both interpreters, and various experimental results are presented. The investigation indicates that the multiple-root-node-based production system interpreter gives up to a 6-fold improvement over the Lisp implementation of Rete-based Ops5 for the match step.
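    As a hedged sketch of the idea (not the authors' interpreter, which was built against Ops5), a multiple-root-node matcher can be pictured as one root per working-memory-element class, so that asserting an element only visits the tests registered for its class rather than every test in the network:

```python
from collections import defaultdict

class MultiRootMatcher:
    """Toy dispatcher: one root node per WME class, so a change to
    working memory only touches the test chains for its own class."""
    def __init__(self):
        self.roots = defaultdict(list)  # class name -> [(test, action), ...]

    def add_rule(self, wme_class, test, action):
        self.roots[wme_class].append((test, action))

    def assert_wme(self, wme_class, attrs):
        fired = []
        for test, action in self.roots[wme_class]:  # other classes are skipped
            if test(attrs):
                fired.append(action)
        return fired

m = MultiRootMatcher()
m.add_rule("goal", lambda a: a.get("status") == "active", "pursue-goal")
m.add_rule("block", lambda a: a.get("on") == "table", "stack-block")
print(m.assert_wme("goal", {"status": "active"}))  # only goal rules are tested
```

The rule names and attribute layout here are hypothetical; the point is only the dispatch structure that avoids a single shared root.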

  19. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
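    The random input delay governed by a discrete-time finite-state Markov chain can be sketched as follows; the three delay states and the transition matrix are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain: input delay of 0, 1, or 2 sampling periods.
# Row i holds the transition probabilities out of delay state i.
P = np.array([[0.8, 0.15, 0.05],
              [0.2, 0.70, 0.10],
              [0.1, 0.30, 0.60]])

def simulate_delays(P, steps, start=0):
    """Sample a delay trajectory from the Markov chain."""
    state, path = start, [start]
    for _ in range(steps):
        state = int(rng.choice(len(P), p=P[state]))
        path.append(state)
    return path

delays = simulate_delays(P, 20)
print(delays)
```

In the article's setting, the sampled delay state at each step selects which past control input actually reaches the plant; augmenting the state with the delayed inputs then yields the Markov jump linear system.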

  20. Prediction of health levels by remote sensing

    NASA Technical Reports Server (NTRS)

    Rush, M.; Vernon, S.

    1975-01-01

    Measures of the environment derived from remote sensing were compared to census population/housing measures in their ability to discriminate among health status areas in two urban communities. Three hypotheses were developed to explore the relationships between environmental and health data. Univariate and multiple step-wise linear regression analyses were performed on data from two sample areas in Houston and Galveston, Texas. Environmental data gathered by remote sensing were found to equal or surpass census data in predicting rates of health outcomes. Remote sensing offers the advantages of data collection for any chosen area or time interval, flexibilities not allowed by the decennial census.

  1. COMPACT CASCADE IMPACTOR

    DOEpatents

    Lippmann, M.

    1964-04-01

    A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition, the device is capable of collecting a series of different samples on a pair of slides, so that less time is required for changing slides. Other features of the device are its compactness and its ruggedness, making it useful under field conditions. Essentially, the unit consists of a main body with a series of transverse jets discharging onto a pair of parallel, spaced glass plates. The plates can be moved incrementally in steps to obtain the multiple samples. (AEC)

  2. In line monitoring of the preparation of water-in-oil-in-water (W/O/W) type multiple emulsions via dielectric spectroscopy.

    PubMed

    Beer, Sebastian; Dobler, Dorota; Gross, Alexander; Ost, Martin; Elseberg, Christiane; Maeder, Ulf; Schmidts, Thomas Michael; Keusgen, Michael; Fiebich, Martin; Runkel, Frank

    2013-01-30

    Multiple emulsions offer various applications in a wide range of fields such as pharmaceutical, cosmetics and food technology. Two features are known to strongly influence multiple emulsion quality and utility: encapsulation efficiency and prolonged stability. To achieve prolonged stability, the production of the emulsions has to be observed and controlled, preferably in line. In line measurements provide the relevant parameters in a short time frame without the need for the sample to be removed from the process stream, thereby enabling continuous process control. In this study, information about the physical state of multiple emulsions obtained from dielectric spectroscopy (DS) is evaluated for this purpose. Results from dielectric measurements performed in line during the production cycle are compared to theoretically expected results and to well-established off line measurements. Thus, a first step toward including the production of multiple emulsions in the process analytical technology (PAT) guidelines of the Food and Drug Administration (FDA) is achieved. DS proved to be beneficial in determining the crucial stopping criterion, which is essential in the production of multiple emulsions. Stopping the process at a less-than-ideal point can severely lower the encapsulation efficiency and the stability, thereby lowering the quality of the emulsion. DS is also expected to provide further information about the multiple emulsion, such as encapsulation efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance for direct sequence code division multiple access (DS-CDMA) than conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot-assisted MMSE-CE is confirmed by computer simulation.
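    The pilot-assisted first step can be illustrated with a minimal frequency-domain sketch, simplified here to a least-squares estimate on a known pilot block (the paper's MMSE weighting and the decision-feedback second step are omitted); the subcarrier count, channel length, and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
Nc = 64                                    # number of frequency bins (assumption)
pilot = np.exp(1j * np.pi * rng.integers(0, 4, Nc) / 2)  # unit-modulus pilot block

# Random 4-tap channel impulse response and its frequency response.
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H = np.fft.fft(h, Nc)

noise = 0.05 * (rng.normal(size=Nc) + 1j * rng.normal(size=Nc))
R = H * pilot + noise                      # received pilot block, frequency domain

H_est = R / pilot                          # least-squares channel estimate
mse = float(np.mean(np.abs(H_est - H) ** 2))
print(f"pilot-based CE mean-square error: {mse:.4f}")
```

In the actual scheme, this initial estimate would seed the MMSE-FDE, and tentatively detected data symbols would then be fed back to refine the estimate via maximum likelihood.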

  4. Using a divided-attention stepping accuracy task to improve balance and functional outcomes in an individual with incomplete spinal cord injury: A case report.

    PubMed

    Leach, Susan J; Magill, Richard A; Maring, Joyce R

    2017-01-01

    A spinal cord injury (SCI) frequently results in impaired balance, endurance, and strength with subsequent limitations in functional mobility and community participation. The purpose of this case report was to implement a training program for an individual with a chronic incomplete SCI using a novel divided-attention stepping accuracy task (DASAT) to determine if improvements could be made in impairments, activities, and participation. The client was a 51-year-old male with a motor incomplete C4 SCI sustained 4 years prior. He presented with decreased quality of life (QOL) and functional independence, and deficits in balance, endurance, and strength consistent with central cord syndrome. The client completed the DASAT intervention 3 times per week for 6 weeks. Each session incorporated 96 multi-directional steps to randomly-assigned targets in response to 3-step verbal commands. QOL, measured using the SF-36, was generally enhanced but fluctuated. Community mobility progressed from close supervision to independence. Significant improvement was achieved in all balance scores: Berg Balance Scale by 9 points [Minimal Detectable Change (MDC) = 4.9 in elderly]; Functional Reach Test by 7.62 cm (MDC = 5.16 in C5/C6 SCI); and Timed Up-and-Go by 0.53 s (MDC not established). Endurance increased on the 6-Minute Walk Test, with the client achieving an additional 47 m (MDC = 45.8 m). Lower extremity isokinetic peak torque strength measures were mostly unchanged. Six minutes of DASAT training per session provided an efficient, low-cost intervention utilizing multiple trials of variable practice, and resulted in better performance in activities, balance, and endurance in this client.

  5. Effective learning strategies for real-time image-guided adaptive control of multiple-source hyperthermia applicators.

    PubMed

    Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva

    2010-03-01

    This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system with and without model reduction to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace to reduce the time for system reconstruction and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was retained for this purpose. The impacts of the convective cooling from blood flow and the presence of sudden increase of perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction directly conducted in the full space of the physical variables such as phases and magnitudes of the heat sources cannot guarantee reconstructing the optimal system to determine the global optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. 
When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary within each learning step of a nonlinear algorithm. Thus, the actual computational workload of a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique global optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is to include a method of model reduction.
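    The virtual-source subspace described above, spanned by the first few maximum eigenvectors of the system matrix, can be sketched with a generic symmetric matrix; the matrix below is random and illustrative, not patient data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical symmetric positive semidefinite "system matrix" for a
# 10-antenna applicator (illustrative stand-in only).
A = rng.normal(size=(10, 10))
A = A @ A.T

w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
k = 3
Vk = V[:, -k:]                      # first few maximum eigenvectors
A_k = Vk @ np.diag(w[-k:]) @ Vk.T   # rank-k (virtual-source) approximation

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"rank-{k} relative approximation error: {rel_err:.3f}")
```

Reconstructing the reduced k-by-k system instead of the full one is what shortens the iterative learning, at the cost of the approximation error quantified above.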

  6. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
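    The aggregation of fine-time-step model outputs to larger reference time scales can be sketched as follows; the series and aggregation factors are illustrative (e.g., ten 6-min values per hour):

```python
import numpy as np

def aggregate(series, factor):
    """Aggregate a series by summing non-overlapping windows, e.g.
    factor=10 turns a 6-min rainfall series into an hourly one."""
    series = np.asarray(series, float)
    n = len(series) // factor * factor     # drop any incomplete trailing window
    return series[:n].reshape(-1, factor).sum(axis=1)

six_min = np.ones(240)          # hypothetical 24 h of 6-min rainfall, 1 mm each
hourly = aggregate(six_min, 10)
daily = aggregate(six_min, 240)
print(hourly[:3], daily)        # 10 mm per hour, 240 mm for the day
```

Comparing simulations driven by inputs at different steps but evaluated at the same aggregated target scale, as done here across eight input time steps and seven target scales, requires exactly this kind of consistent windowed aggregation.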

  7. The Polar Cusp Observed by Cluster Under Constant Imf-Bz Southward

    NASA Astrophysics Data System (ADS)

    Escoubet, C. P.; Berchem, J.; Pitout, F.; Trattner, K. J.; Richard, R. L.; Taylor, M. G.; Soucek, J.; Grison, B.; Laakso, H. E.; Masson, A.; Dunlop, M. W.; Dandouras, I. S.; Reme, H.; Fazakerley, A. N.; Daly, P. W.

    2011-12-01

    The Earth's magnetic field is influenced by the interplanetary magnetic field (IMF), especially at the magnetopause, where the two magnetic fields come into direct contact and magnetic reconnection can be initiated. In the polar regions, the polar cusp, which extends from the magnetopause down to the ionosphere, is also directly influenced. Reconnection not only allows ions and electrons from the solar wind to enter the polar cusp but also gives an impulse, through the reconnection electric field, to the magnetic field lines threading the polar cusp. A dispersion in ion energy is subsequently produced by the motion of field lines and the time-of-flight effect on down-going ions. If reconnection is continuous and operates at a constant rate, the ion dispersion is smooth and continuous. If, on the other hand, the reconnection rate varies, we expect interruptions in the dispersion, forming energy steps or a staircase. Similarly, multiple entries near the magnetopause could also produce steps at low or mid-altitude when a spacecraft successively crosses the field lines originating from these multiple sources. Cluster, with four spacecraft following each other in the mid-altitude cusp, can be used to distinguish between these "temporal" and "spatial" effects. We will show two Cluster cusp crossings where the spacecraft were separated by a few minutes. The energy dispersions observed in the first crossing were the same during the few minutes that separated the spacecraft. In the second crossing, two ion dispersions were observed on the first spacecraft and only one on the following spacecraft, about 10 min later. The detailed analysis indicates that these steps result from spatial structures.

  8. The Strong Lensing Time Delay Challenge (2014)

    NASA Astrophysics Data System (ADS)

    Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.

    2014-01-01

    Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020s. In order to exploit this bounty of lenses, we need to make sure the time-delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement, we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of light curves, a number sufficient to test precision and accuracy at the subpercent level necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision, and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.

  9. Dynamic changes in cortical tensions in multiple cell types during germband retraction

    NASA Astrophysics Data System (ADS)

    Hutson, M. Shane; Lacy, Monica E.; McCleery, W. Tyler

    The process of germband retraction in Drosophila embryogenesis involves the coordinated mechanics of both germband and amnioserosa cells. These two tissues simultaneously and coordinately uncurl from their interlocking U-like shapes. As tissue-level retraction proceeds, individual cells change shape in stereotypical ways. Using time-lapse confocal images, analysis of dynamic cellular triple-junction angles, and whole-embryo finite-element models, we have quantified dynamic changes in cortical tensions - including their anisotropy - in both germband and amnioserosa cells. We find a strong transition midway through the two-hour course of retraction at which point tensions and anisotropies undergo a near step change. These changes take place among amnioserosa cells, in multiple segments of the germband, and at the interface between these two tissues. Research was supported by NIH Grant Numbers 1R01GM099107 and 1R21AR068933.
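    A standard way to recover relative cortical tensions from triple-junction angles is a force balance at the junction, with each tension proportional to the sine of the angle opposite its edge; this is a generic sketch under that assumption, not the authors' finite-element analysis:

```python
import numpy as np

def tension_ratios(angles_deg):
    """Relative cortical tensions at a cell triple junction from force
    balance: gamma_i is proportional to sin(theta_i), where theta_i is
    the angle opposite edge i. The three angles must sum to 360 degrees."""
    assert abs(sum(angles_deg) - 360.0) < 1e-6
    g = np.sin(np.radians(angles_deg))
    return g / g[0]                 # normalise to the first edge

print(tension_ratios([120.0, 120.0, 120.0]))  # symmetric junction: equal tensions
print(tension_ratios([150.0, 105.0, 105.0]))  # unequal angles: anisotropic tensions
```

Tracking how such ratios (and their orientation dependence) evolve over time-lapse frames is one way the step-like mid-retraction transition in tension and anisotropy can be quantified.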

  10. Analyzing multiple data sets by interconnecting RSAT programs via SOAP Web services: an example with ChIP-chip data.

    PubMed

    Sand, Olivier; Thomas-Chollier, Morgane; Vervisch, Eric; van Helden, Jacques

    2008-01-01

    This protocol shows how to access the Regulatory Sequence Analysis Tools (RSAT) via a programmatic interface in order to automate the analysis of multiple data sets. We describe the steps for writing a Perl client that connects to the RSAT Web services and implements a workflow to discover putative cis-acting elements in promoters of gene clusters. In the presented example, we apply this workflow to lists of transcription factor target genes resulting from ChIP-chip experiments. For each factor, the protocol predicts the binding motifs by detecting significantly overrepresented hexanucleotides in the target promoters and generates a feature map that displays the positions of putative binding sites along the promoter sequences. This protocol is addressed to bioinformaticians and biologists with programming skills (notions of Perl). Running time is approximately 6 min on the example data set.
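    The hexanucleotide-counting step of such a workflow can be sketched in a few lines (shown here in Python rather than Perl, and without the statistical overrepresentation test that RSAT applies); the promoter fragments and planted motif are hypothetical:

```python
from collections import Counter

def kmer_counts(sequences, k=6):
    """Count overlapping k-mers across a set of promoter sequences."""
    counts = Counter()
    for seq in sequences:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

# Hypothetical promoter fragments with a planted motif "GATTAG".
promoters = ["ACGTGATTAGCCGT", "TTGATTAGAACG", "CGGATTAGTTTA"]
counts = kmer_counts(promoters)
top, n = counts.most_common(1)[0]
print(top, n)  # the planted hexamer is the most frequent
```

In the full protocol, the observed counts would be compared against a background expectation to keep only significantly overrepresented hexanucleotides, which are then mapped back onto the promoters to build the feature map.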

  11. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.

  12. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. This method requires relatively low reference angles compared with those in the conventional method, which uses a phase shift between three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm over a correspondingly larger number of pixels, equal to the period of an interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications such as color holographic interferometry.

  13. Urinary tract infection as a single presenting sign of multiple vaginal foreign bodies: case report and review of the literature.

    PubMed

    Neulander, Endre Z; Tiktinsky, Alex; Romanowsky, Igor; Kaneti, Jacob

    2010-02-01

    Vaginal foreign bodies in children usually present with foul-smelling discharge and/or vaginal bleeding. Rarely, these basic clinical diagnostic signs are not present. We report on a 5½-year-old girl with recurrent lower urinary tract infection as the sole presentation of multiple vaginal foreign bodies. Ultrasound of the lower urinary tract was inconclusive, and cystography, indicated for the recurrent urinary tract infections, was declined by the patient in an outpatient setting. Cystography under general anesthesia raised the suspicion of foreign vaginal objects, and the definitive diagnosis was made by vaginoscopy. The relevant literature covering this subject is reviewed. A high level of suspicion and a strict basic diagnostic protocol are the most important steps for a timely diagnosis of this condition. Copyright 2010 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.

  14. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required when using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  15. Quick and Easy Adaptations and Accommodations for Early Childhood Students

    ERIC Educational Resources Information Center

    Breitfelder, Leisa M.

    2008-01-01

    Research-based information is used to support the idea of the use of adaptations and accommodations for early childhood students who have varying disabilities. Multiple adaptations and accommodations are outlined. A step-by-step plan is provided on how to make specific adaptations and accommodations to fit the specific needs of early childhood…

  16. Stutter-Step Models of Performance in School

    ERIC Educational Resources Information Center

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  17. Discussion of "Simple design criterion for residual energy on embankment dam stepped spillways" by Stefan Felder and Hubert Chanson

    USDA-ARS?s Scientific Manuscript database

    Researchers from the University of Queensland and the University of New South Wales provided guidance to designers regarding the hydraulic performance of embankment dam stepped spillways. Their research compares a number of high-quality physical model data sets from multiple laboratories, emphasizing the variability ...

  18. Evidence-Based Assessment of Attention-Deficit/Hyperactivity Disorder: Using Multiple Sources of Information

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.

    2006-01-01

    In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…

  19. Productivity improvement through cycle time analysis

    NASA Astrophysics Data System (ADS)

    Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio

    1996-09-01

    A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (formerly AT&T) in Madrid, Spain. It is based on comparing the contribution of each process step in each technology with a target generated by a cycle time model. These target cycle times are obtained from capacity data for the machines processing those steps, queuing theory, and theory of constraints (TOC) principles (buffers to protect the bottleneck, and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-style analysis is performed on the machine groups with the largest differences between their target and actual cycle times. The current values of the parameters that govern their capacity (process times, availability, idle times, reworks, etc.) are compared against the engineering standards to detect the causes of excess contribution to the cycle time. Several user-friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interaction problems among machines, procedures, and direct labor. Performer is designed for detailed, daily analysis of an isolated machine. Extensive use of this tool by the whole labor force has produced impressive results in eliminating multiple small inefficiencies, with direct positive implications for OEE. ASAP shows the lots in process or in queue for different machines at the same time, making it a powerful tool for analyzing product flow management and assigned capacity for interdependent operations such as cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze, and improve process times and availability.
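The abstract leans on OEE as its efficiency metric without defining it. A minimal sketch, assuming the conventional three-factor definition (availability × performance × quality); the exact model used at the Madrid facility is not given in the abstract:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = availability * performance * quality, each factor in [0, 1]."""
    for name, v in (("availability", availability),
                    ("performance", performance),
                    ("quality", quality)):
        if not 0.0 <= v <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {v}")
    return availability * performance * quality

# Example: a tool up 90% of the time, running at 95% of its standard
# process speed, with 2% of output lost to rework/scrap.
print(round(oee(0.90, 0.95, 0.98), 4))  # → 0.8379
```

Comparing each factor against its engineering standard, as the abstract describes, localizes whether a gap between target and actual cycle time comes from downtime, slow processing, or rework.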

  20. SRSF3 represses the expression of PDCD4 protein by coordinated regulation of alternative splicing, export and translation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seung Kuk; Jeong, Sunjoo, E-mail: sjsj@dankook.ac.kr

    2016-02-05

    Gene expression is regulated at multiple steps, such as transcription, splicing, export, degradation, and translation. Considering the diverse roles of SR proteins, we determined whether the tumor-related splicing factor SRSF3 regulates the expression of the tumor-suppressor protein PDCD4 at multiple steps. As we have reported previously, knockdown of SRSF3 increased the PDCD4 protein level in SW480 colon cancer cells. More interestingly, here we showed that the alternative splicing and the nuclear export of minor isoforms of pdcd4 mRNA were repressed by SRSF3, but the translation step was unaffected. In contrast, only the translation step of the major isoform of pdcd4 mRNA was repressed by SRSF3. Therefore, overexpression of SRSF3 might be relevant to the repression of all isoforms of PDCD4 protein levels in most types of cancer cell. We propose that SRSF3 could act as a coordinator of the expression of PDCD4 protein via two mechanisms on two alternatively spliced mRNA isoforms.
